Job Details

ID #19900037
State California
City San Francisco
Job type Permanent
Salary USD TBD
Source Uber
Showed 2021-09-19
Date 2021-09-10
Deadline 2021-11-08
Category Internet engineering

Software Engineer, Global Data Warehouse

San Francisco, California 94103, USA

Vacancy expired!

About the Role

Data underpins our products, enabling intelligent decision-making and improved user experiences. Leveraging state-of-the-art data processing technologies, the Global Data Warehouse (GDW) team builds and maintains high-quality, analytics-optimized canonical data models derived from petabytes of raw data and delivers actionable insights through various reporting methods. GDW also builds data transformation frameworks and tools that enable data processing at Uber's scale.

What the Candidate Will Do

Engineers in GDW focus on data architecture, design, source data instrumentation, ETL pipeline optimization, and data model implementation. They work extensively with HDFS, Hive, Presto, and Spark to build efficient and scalable solutions for end users (an illustrative PySpark sketch follows this section). They also work on the data transformation framework, implementing and extending its functionality in Python so that all users of the framework benefit (a hypothetical framework-extension sketch follows as well). They identify limitations and missing features in data tools and partner with peer teams to design and implement them. They automate manual work by developing scripts and utilities for repeated tasks. They own big data problems end to end, understand the underlying infrastructure systems and platforms, and work toward improving resource requirements and customer SLAs.

Basic Qualifications

- BS or MS in Computer Science or a related technical field, or equivalent experience.
- 2+ years of hands-on experience analyzing business metrics, investigating data problems, and improving data quality.
- 2+ years of experience writing and deploying code in one of the following programming languages: Python, Scala, or Java.
- 1+ years of hands-on experience using Hadoop, Hive, Presto, Spark, or a comparable big data stack on AWS or GCP.
- Proficiency in writing and analyzing SQL queries.
- A track record of successful partnerships with product and engineering teams resulting in on-time delivery of impactful data products.

Preferred Qualifications

- Familiarity with Kimball's data warehouse lifecycle.
- Experience with real-time data ingestion and stream processing.
- 1+ years of experience using Spark or Presto for data transformations.
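For a concrete flavor of the ETL pipeline work described above, here is a minimal PySpark sketch of a batch job that derives an analytics-optimized daily aggregate (a canonical model) from raw event data in Hive. Every table and column name here (raw.trip_events, gdw.trips_daily, event_ts, fare_usd, city_id) is a hypothetical stand-in, not Uber's actual schema.

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    # Hive support lets Spark read and write warehouse tables stored on HDFS.
    spark = (
        SparkSession.builder
        .appName("trips_daily_etl")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read raw, instrumented event data (hypothetical table).
    raw = spark.table("raw.trip_events")

    # Derive a daily, city-level aggregate optimized for analytics queries.
    daily = (
        raw
        .filter(F.col("status") == "completed")
        .withColumn("trip_date", F.to_date("event_ts"))
        .groupBy("trip_date", "city_id")
        .agg(
            F.count("*").alias("trip_count"),
            F.sum("fare_usd").alias("gross_fare_usd"),
        )
    )

    # Partitioning by date lets Hive and Presto prune partitions at query time.
    (
        daily.write
        .mode("overwrite")
        .partitionBy("trip_date")
        .saveAsTable("gdw.trips_daily")
    )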
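Uber's internal data transformation framework is not public, so the following is only a hypothetical sketch of what extending such a framework in Python might look like: a small, reusable transform step that every pipeline built on the framework could share. The Transform base class and the DeduplicateLatest step are invented for illustration.

    from abc import ABC, abstractmethod
    from pyspark.sql import DataFrame, Window
    import pyspark.sql.functions as F

    class Transform(ABC):
        """Hypothetical extension point a transformation framework might expose."""

        @abstractmethod
        def apply(self, df: DataFrame) -> DataFrame:
            """Take a DataFrame, return the transformed DataFrame."""

    class DeduplicateLatest(Transform):
        """Keep only the most recent row per key, a common reusable ETL step."""

        def __init__(self, key_cols, order_col):
            self.key_cols = key_cols
            self.order_col = order_col

        def apply(self, df: DataFrame) -> DataFrame:
            # Rank rows within each key by recency, then keep the first.
            w = Window.partitionBy(*self.key_cols).orderBy(F.col(self.order_col).desc())
            return (
                df.withColumn("_rn", F.row_number().over(w))
                  .filter(F.col("_rn") == 1)
                  .drop("_rn")
            )

    # Example usage: keep each trip's latest record (hypothetical input).
    # deduped = DeduplicateLatest(["trip_id"], "event_ts").apply(raw_events)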
