Vacancy expired!
- Hadoop
- PySpark
- Hive / Impala / Spark
- Cloudera
- ETL / SQL
- Bachelor's degree in a technical or business-related field, or equivalent work experience.
- 3+ years of experience with data warehousing architectural approaches.
- Minimum of 3 years of experience in big data.
- Experience with the Hadoop ecosystem (Cloudera).
- Able to understand and explore the constantly evolving tools within the Hadoop ecosystem and apply them appropriately to the problems at hand.
- Experience working with a Big Data implementation in a production environment.
- Big Data technologies such as Hadoop, Hive, Spark, Python, Scala, etc.
- Experience in Python and Unix shell scripting.
- Experience with a scheduling tool such as AutoSys.
- Understanding of Agile methodologies and technologies.
- Sound knowledge of relational databases (SQL) and experience with large SQL-based systems.
- Strong working knowledge of distributed systems.
- Excellent understanding of client-service models and customer orientation in service delivery.
- Ability to grasp the 'big picture' of a solution by considering all potential options in the impacted area.
- Aptitude to understand and adapt to newer technologies.
- Ability to work collaboratively with teammates to achieve a mission.
- Experience in query optimization and performance tuning of complex SQL queries.
- Ability to benchmark and debug critical issues with algorithms and software as they arise.