Job Details

ID #17358775
State California
City Sunnyvale
Job type Contract
Salary USD Depends on Experience
Source Intelliswift Software Inc
Shown 2021-07-27
Date 2021-07-26
Deadline 2021-09-24
Category Et cetera

Python Cloud Data Pipeline Engineer - Sunnyvale & Seattle

Sunnyvale, California 94085, USA

Vacancy expired!

Title: Python Cloud Data Pipeline Engineer

Position Type: Contract

Location: SCV (Cupertino), CA, or Seattle, United States (remote until September 1st).

Description:
• 7+ years of overall software development experience, with at least 3+ years on large-scale data platforms.
• Excellent programming expertise in object-oriented programming using Java.
• Excellent problem-solving and debugging skills to provide quick fixes for critical show-stopping problems.
• Hands-on experience developing streaming (data ingestion) pipelines from Kafka to S3/Hive using Spark/Flink (see the sketch after this list).
• Answer complex questions using data and analysis, and clearly communicate findings to engineering teams for direction.
• Hands-on experience with AWS cloud services such as S3, IAM, EC2, EKS, and VPN.
• Proficient in designing scheduling workflows in Apache Airflow.
• Improve the data pipeline monitoring system and add more relevant metrics to monitor the health of the system.
• Understanding of Docker containers and Kubernetes.
• Understanding of CI/CD automation and willingness to learn new technologies.
• Excellent communication and interpersonal skills.
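As a rough illustration of the Kafka-to-S3 ingestion work described above, here is a minimal PySpark Structured Streaming sketch; the broker address, topic, and bucket paths are hypothetical placeholders, not details from this posting:

    from pyspark.sql import SparkSession

    # Requires the spark-sql-kafka-0-10 package on the Spark classpath.
    spark = (
        SparkSession.builder
        .appName("kafka-to-s3-ingestion")
        .getOrCreate()
    )

    # Read a stream of events from Kafka (broker and topic are hypothetical).
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "events")
        .option("startingOffsets", "latest")
        .load()
        .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
    )

    # Land raw events in S3 as Parquet; the checkpoint location makes the
    # pipeline restartable with the file sink's exactly-once semantics.
    query = (
        events.writeStream
        .format("parquet")
        .option("path", "s3a://my-data-lake/raw/events/")
        .option("checkpointLocation", "s3a://my-data-lake/checkpoints/events/")
        .trigger(processingTime="1 minute")
        .start()
    )

    query.awaitTermination()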

Mandatory
1. Excellent expertise in Python.
2. Basic understanding of data modeling and good SQL skills.
3. Experience building data pipelines to migrate data from on-prem to the AWS cloud.
4. Experience with Hadoop (YARN, HDFS, Hive, distcp) and the AWS cloud (S3, IAM, Kubernetes).
5. Experience with containerization (Docker, EKS).
6. Experience collecting metrics and building dashboards in Datadog.
7. Expert at scheduling complex DAGs in Airflow (see the sketch after this list).
8. Understanding of version control (e.g., Git) and CI/CD workflows.
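Tying together points 3, 4, and 7 above, here is a minimal Airflow 2.x sketch of a scheduled DAG that distcp-copies a daily partition from on-prem HDFS to S3; the DAG id, paths, bucket, and schedule are illustrative assumptions, not requirements from this posting:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily migration DAG: copy one day's partition from
    # on-prem HDFS to S3 with distcp, then verify that the files landed.
    default_args = {
        "owner": "data-platform",
        "retries": 2,
        "retry_delay": timedelta(minutes=10),
    }

    with DAG(
        dag_id="onprem_to_s3_daily_migration",
        start_date=datetime(2021, 7, 1),
        schedule_interval="@daily",
        default_args=default_args,
        catchup=False,
    ) as dag:
        copy_partition = BashOperator(
            task_id="distcp_partition",
            bash_command=(
                "hadoop distcp "
                "hdfs:///warehouse/events/dt={{ ds }} "
                "s3a://my-data-lake/events/dt={{ ds }}"
            ),
        )

        validate = BashOperator(
            task_id="validate_landed_files",
            bash_command="aws s3 ls s3://my-data-lake/events/dt={{ ds }}/",
        )

        # Validation runs only after the distcp task succeeds.
        copy_partition >> validate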

Optional
1. Experience with log search engines such as Splunk.
2. Experience with object-oriented programming languages such as Java or Scala.
3. Experience with Spark and Iceberg.

Bonus
• Expertise in Datadog, Splunk, Apache Iceberg, Docker, Kubernetes.

Day-to-Day Life
• Feature implementation in Python using Hadoop client APIs.
• Production support and on-call rotation.
• Monitor the ingestion pipeline and solve problems rapidly if something breaks (see the monitoring sketch after this list).
• Collect and add metrics to the existing Datadog dashboard.
• Monitor the Slack channel and answer customer/user questions about data and latency.
• Agile and highly collaborative: meetings over WebEx, with effective communication over Slack, email, and in person.
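As a sketch of the monitoring and metrics work above, the snippet below reads size and file counts for a landing directory over WebHDFS (the "hdfs" PyPI package) and forwards them to a local DogStatsD agent with the "datadog" package; the NameNode URL, path, and metric names are hypothetical placeholders:

    from datadog import initialize, statsd
    from hdfs import InsecureClient  # WebHDFS client ("hdfs" PyPI package)

    # Hypothetical health check: metrics land on the local DogStatsD agent
    # and surface on the existing Datadog dashboard.
    initialize(statsd_host="localhost", statsd_port=8125)
    hdfs_client = InsecureClient("http://namenode:9870", user="etl")

    def report_landing_dir(path: str) -> None:
        # WebHDFS GETCONTENTSUMMARY: byte size and file count under the path.
        summary = hdfs_client.content(path)
        tags = [f"path:{path}", "pipeline:kafka_to_s3"]
        statsd.gauge("ingestion.landing.bytes", summary["length"], tags=tags)
        statsd.gauge("ingestion.landing.files", summary["fileCount"], tags=tags)

    report_landing_dir("/data/raw/events")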
