Vacancy expired!
- 7-10 years of total experience
- Strong experience with the Hadoop distributed framework and the Spark data-processing framework, including handling large volumes of data with Apache Spark and the Hadoop ecosystem
- Good understanding of relational and non-relational database concepts (MySQL, Hadoop, MongoDB)
- Extensive knowledge of designing jobs on the Hadoop system with Spark and Scala; able to prepare the technical design document
- Extensive knowledge of data processing using Spark, Kafka, and the ELK stack on the Hadoop system
- Experience optimizing the performance of Spark jobs
- Experience using or contributing to SQL systems, such as SQL Server, PostgreSQL, or others.