Experience Required
6-8 Years
Roles & Responsibilities
Build data systems and pipelines on cloud providers (GCP preferred); experience with GCP-based Big Data deployments (batch and real-time) leveraging BigQuery, Bigtable, Google Cloud Storage, Pub/Sub, Data Fusion, Dataflow, and Dataproc.
In-depth understanding of Hive and Spark architecture, including Spark Core, Spark SQL, and DataFrames.
Good knowledge of Hadoop architecture; strong knowledge of NoSQL databases such as the column-oriented HBase and Cassandra and the document-oriented MongoDB.
Good knowledge of graph database architecture and techniques is a must. Knowledge of Java frameworks such as Spring and Spring Boot.
Broader responsibilities include, but are not limited to, analysis, design, development, and testing as needed to enhance the product.
Should be a self-starter and a collaborative team player, thorough and focused on the quality of deliverables and time to market.
Seniority level
Entry level
Employment type
Contract
Job function
Information Technology
Industries
Information Technology & Services