Vacancy expired!
- Design the architecture for migrating the on-premises data warehouse and data marts to a data lake on AWS Cloud, and for subsequent data vending.
- Architect solutions for the design and implementation of Big & Fast Data infrastructure on AWS Cloud using Kafka, Kinesis, Glue, Athena, Redshift, DynamoDB, and QuickSight.
- Define information models supporting data assets for complex data structures represented in various data management systems, such as graph, relational, and hierarchical databases.
- Guide other teams in designing, developing, and deploying data sets and tools that support product use cases.
- 8+ years of relevant experience.
- Must have both AWS data and application experience.
- PySpark/Spark
- PQL skills
- Big Data
- Strong Python or Java skills
- AWS experience
- Database systems (SQL and NoSQL)
- Data warehousing solutions
- ETL tools
- Data APIs.
- Understanding of the basics of distributed systems.
- Knowledge of algorithms and data structures.
- Knowledge of Machine Learning concepts such as KNN, Random Forest, Naïve Bayes, and Neural Networks, and experience deploying them on SageMaker, is a plus.