Vacancy expired!
- Expertise in Data Engineering and delivering multiple end-to-end data warehouse (DW) projects in a Big Data environment
- Experience building data pipelines with Spark (Scala/Python/Java) on Hadoop or object storage
- Experience building Apache NiFi pipelines
- Strong experience with the Spark processing engine
- Experience with Kafka and PCF (Pivotal Cloud Foundry)
- Experience analyzing data to discover opportunities and address gaps.
- Experience working with databases such as Oracle, with strong SQL knowledge
- Strong analytical skills for debugging production issues, identifying root causes, and implementing mitigation plans
- Strong communication skills - both verbal and written
- Ability to multi-task across multiple projects and interface with external and internal resources
- High-energy, detail-oriented, and proactive, with the ability to function under pressure in an independent environment and a high degree of initiative and self-motivation to drive results
- Ability to quickly learn and implement new technologies, and to perform proofs of concept (POCs) to explore the best solution for a given problem statement
- Flexibility to work as a member of diverse, geographically distributed, matrix-based project teams