Vacancy expired!
- 7 or more years of experience working directly with enterprise data solutions
- Hands-on experience working in a public cloud environment and on-prem infrastructure
- Specialty in columnar databases such as Redshift Spectrum, time-series data stores such as Apache Pinot, and AWS cloud infrastructure
- Experience with in-memory, serverless, and streaming technologies and orchestration tools such as Spark, Kafka, Airflow, and Kubernetes
- Current hands-on implementation experience required, with 7 or more years of IT platform implementation experience
- AWS Certified Big Data - Specialty desirable
- Experience designing and implementing AWS big data and analytics solutions in large digital and retail environments is desirable
- Advanced knowledge and experience with online transaction processing (OLTP) and online analytical processing (OLAP) databases, data lakes, and schemas
- Experience with AWS Cloud Data Lake technologies and operational experience with Kinesis/Kafka, S3, Glue, and Athena
- Experience with any of the message/file formats: Parquet, Avro, ORC
- Design and development experience with streaming services and with EMS, MQ, Java, XSD, File Adapter, and ESB-based applications
- Experience in distributed architectures such as Microservices, SOA, RESTful APIs, and data integration architectures
- Experience with a wide variety of modern data processing technologies, including:
  - Big Data Stack (Spark, Spectrum, Flume, Kafka, Kinesis, etc.)
  - Data streaming (Kafka, SQS/SNS queuing, etc.)
  - Columnar databases (Redshift, Snowflake, Firebolt, etc.)
  - Commonly used AWS services (S3, Lambda, Redshift, Glue, EC2, etc.)
- Expertise in Python, PySpark, or similar programming languages
- BI tools (Tableau, Domo, MicroStrategy)
- Understanding of Continuous Integration/Continuous Delivery (CI/CD)
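
Several of the requirements above (S3, Glue, Athena, data lakes, Python) assume familiarity with Hive-style partition layouts, the `key=value` S3 prefix convention that Glue crawlers and Athena recognize by default. A minimal stdlib sketch, with hypothetical bucket and table names used purely for illustration:

```python
from datetime import date

def partition_key(prefix: str, table: str, dt: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=),
    the layout Glue crawlers and Athena detect as partitions."""
    return (
        f"{prefix}/{table}/"
        f"year={dt.year:04d}/month={dt.month:02d}/day={dt.day:02d}/"
        f"{filename}"
    )

# Hypothetical bucket, table, and file names, for illustration only.
key = partition_key("s3://data-lake", "events", date(2023, 5, 7), "part-0000.parquet")
print(key)  # s3://data-lake/events/year=2023/month=05/day=07/part-0000.parquet
```

Writing objects under such prefixes lets Athena prune partitions in queries that filter on `year`, `month`, or `day`, instead of scanning the whole table.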