Vacancy expired!
- BA/BS required, preferably in Computer Science, Data Analytics, or Data Architecture
- 4+ years of experience in the Hadoop ecosystem (e.g., Hive, Spark, Kafka, HBase, Oozie, Sqoop)
- 3+ years of hands-on experience architecting, designing, and implementing data ingestion pipelines for batch, real-time, and streaming workloads on the Azure cloud platform at scale
- 1+ year of experience using ETL tools such as Infoworks, NiFi, or similar
- 1+ year of hands-on experience with Databricks
- 1+ year of experience in evaluating emerging technologies is required
- 3+ years of hands-on experience with the Azure cloud, especially Spark, ADLS, and Blob Storage
- 1+ year of experience with distributed query engines such as Presto or similar tools
- 1+ year of experience in Java, Python, or Scala is preferred
- Knowledge of other databases such as SAS, Teradata, or Oracle is a plus
- Exposure to R and ML technologies is a plus
- Experience extracting, querying, and joining large data sets at scale is essential
- Ability to understand statistical solutions and execute similar analyses