Vacancy expired!
- Experience with Hadoop
- Knowledge and implementation of cloud-based Big Data solutions
- Experience with statistics, modeling, and numerical analysis
- Must have experience in managing the full life-cycle of a Hadoop solution
- Hands-on experience with open source software platforms and languages
- Experience building a comprehensive Big Data platform for data science and engineering that can reliably run batch processes and machine learning algorithms
- Implementation and tuning experience with various frameworks in the Big Data Ecosystem such as Hadoop, Presto, Apache Spark, Hive, and Impala
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Knowledge of Scala and Python would be a plus
- Explore and validate new technology platforms to solve business needs
- Hadoop ecosystem (MapReduce, Spark, HBase)
- Responsible for developing and maintaining the data warehouse, ensuring the data architecture aligns with the business-centric roadmap and analytics capabilities
- Establish data management and development standards for the data warehouse, data integration, and Big Data technologies
- Support ongoing data modeling and integration efforts for all development and production environments, including Big Data design patterns