Vacancy expired!
1. Minimum 8-10 years' experience in Java / Spark technologies; proficient in Spark DataFrames, Spark SQL, and RDDs
2. Well versed in the Hadoop ecosystem, including Hive, HiveQL, and Parquet-to-text and text-to-Parquet conversions
3. End-to-end experience implementing data pipelines using Java and Spark in the Hadoop ecosystem (minimum 2 projects)