Vacancy expired!
- Must have extensive hands-on experience in designing, developing, and maintaining software solutions on Big Data platforms such as the Hadoop ecosystem.
- Must have strong UNIX shell scripting experience.
- Must have experience with an IDE such as Eclipse.
- Must have working experience with Spark and Scala/Python.
- Preferred experience developing Pig scripts, HiveQL, HBase, Sqoop, and UDFs for analyzing structured, semi-structured, and unstructured data flows.
- Preferred experience developing MapReduce programs that run on a Hadoop cluster using Java/Python.
- Preferred experience developing in a cloud environment such as Azure.
- Prior experience using Talend with Hadoop technologies is not mandatory but a big advantage.
- 4-6 years of experience in Big Data development using Hadoop, with a good understanding of all phases of the software development life cycle.
- Experience using Hadoop technologies such as Hive, Spark, Pig, or Kafka.
- Participate in sprint planning, design, coding, unit testing, and sprint reviews.
- Provide basic design documents and translate them into component-level designs to accelerate development. Design, develop, and distribute reusable technical components.
- Assist in developing technical documentation; participate in test-plan development, integration, and deployment.