Vacancy expired!
- Overall 7 to 12 years of IT experience, with extensive experience in Big Data, Analytics, and ETL technologies
- Application development background, along with knowledge of analytics, statistical, and big data computing libraries
- At least 4 years of experience in Spark and Python/Scala/Java programming
- Hands-on experience in coding, designing, and developing complex data pipelines using big data technologies
- Experience developing applications on Big Data; design and build highly scalable data pipelines
- Expertise in Python, SQL, Spark, and non-relational databases
- Knowledge of Palantir would be an added advantage
- Ingest data from files, streams, and databases, and process it using Spark and Python
- Develop PySpark and Python programs for data cleaning and processing
- Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems
- Develop efficient software code leveraging Python and Big Data technologies for the various use cases built on the platform
- Maintain high operational excellence, guaranteeing high availability and platform stability
- Implement scalable solutions to meet ever-increasing data volumes, using big data/Palantir technologies such as PySpark and cloud computing
- Work under one's own direction towards agreed targets/goals, with a creative approach to work
- Manage change with intuition and proven time management skills
- Demonstrate strong interpersonal skills, contributing to team effort by accomplishing related results as needed