Job Details

ID #19632811
State California
City San Jose
Job type Contract
Salary USD Depends on Experience
Source Vertogic
Showed 2021-09-15
Date 2021-09-14
Deadline 2021-11-12
Category Et cetera

Sr Big Data Engineer

San Jose, CA 95101, USA

Vacancy expired!

Position: Sr Big Data Engineer

Location: Costa Mesa, CA // San Jose, CA

Duration: 12+ Months

Requirements
  • BS degree in computer science, computer engineering or equivalent
  • Proficient in Java, Spark, Kafka, Python, AWS Cloud technologies
  • Must have current, hands-on experience with Scala, Java, Python, Oracle, Cassandra, HBase, and Hive
  • 3+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, Cassandra, HBase, Hive, Flume, Sqoop, Spark, Kafka, and Scala
  • Familiarity with AWS scripting and automation
  • Flair for data, schemas, and data modeling, and for bringing efficiency to the big-data life cycle
  • Must be able to quickly understand technical and business requirements and translate them into technical implementations
  • Experience with Agile Development methodologies
  • Experience with data ingestion and transformation
  • Solid understanding of secure application development methodologies
  • Experience developing microservices using the Spring Framework is a plus
  • Understanding of automated QA needs related to Big data
  • Strong object-oriented design and analysis skills
  • Excellent written and verbal communication skills

Responsibilities
  • Utilize your software engineering skills, including Java, Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
  • Integrate new data sources and tools
  • Implement scalable and reliable distributed data replication strategies
  • Mentor and provide direction in architecture and design to onsite/offshore developers
  • Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
  • Perform analysis of large data sets using components of the Hadoop ecosystem
  • Own product features from development and testing through to production deployment
  • Evaluate big data technologies and prototype solutions to improve our data processing architecture
  • Automate everything
