Vacancy expired!
Job Description
Data Platform Engineer - Kafka
Thousand Oaks, CA | Direct Hire | Up to $170,000

The Data Platform Engineer will be an integral part of the team whose mission is to move the enterprise's infrastructure to a more forward-thinking, modern technology stack. This role will be responsible for building real-time, resilient, and scalable event-handling processes for message transport across the enterprise. They will work on an AWS stack with Kafka (Confluent), MQ, Kubernetes, and Lambdas, and will interface with analytical data stores such as the data lake, Redshift, and Pinot. They will also help us go serverless and bring AI/ML infrastructure online with SageMaker (Python). We are looking for a professional with strong operational experience running Kafka clusters at scale, with knowledge of both on-prem systems and public cloud infrastructure.

Required Skills
- Extensive hands-on experience working in an on-prem infrastructure as well as a public cloud environment.
- Strong background designing and developing clusters and producers/consumers, and enabling cloud/hybrid-cloud data streaming with Confluent Kafka, SQS/SNS queuing, and more.
- Strong experience with AWS cloud data lake technologies and operational experience with Kinesis/Kafka, S3, Glue, and Athena.
- 7+ years working with data pipelines and application integration.
- Familiarity with Ansible, Puppet, Terraform, OpenShift, Kubernetes, AWS, AWS Lambda, and event streaming is a must.
- Strong knowledge of containers, including Docker.
- Streaming service design and development experience: EMS, MQ, Java, XSD, File Adapter, and ESB-based applications.
- Experience with distributed architectures: microservices, SOA, RESTful APIs, and data integration architectures.
- Experience with one or more of these message/file formats: ORC, Parquet, Avro
- Strong experience working with metadata repositories, data modeling and business analytics tools.
- Advanced knowledge of and experience with OLTP and OLAP databases, data lakes, and schemas.
- Experience with RabbitMQ and/or TIBCO messaging tools.
- Experience with Spark, PySpark, Datadog, KSQL, and Splunk.