Job Details

ID #45242226
State Texas
City Austin
Job type Contract
Salary USD Depends on Experience
Source Donato Technologies Inc
Showed 2022-08-28
Date 2022-08-26
Deadline 2022-10-25
Category Et cetera

Data Engineer

Austin, Texas 73301, USA

Vacancy expired!

Data Engineer
Location: Bay Area, CA / Austin, TX preferred; onsite (3–4 weeks remote allowed)
Duration: 6+ months
Experience level: 10+ years

Description
Seeking an experienced Data Engineer with a proven record of developing advanced data models and SQL queries on complex, disparate systems, preferably in a big data warehouse environment like Snowflake. The candidate will join a team developing and evolving internal business intelligence reporting, building a data model that consolidates data from various source systems into a single-source infrastructure known as the Common Data Foundation (CDF). The data models and objects developed by this team will supply information to multiple applications and teams across the organization, which consume the data through standard reporting or for ad-hoc needs.

Responsibilities:
The specific responsibilities of the Data Engineer position include, but are not limited to, the development and deployment of data models and reports that provide business intelligence to engineering and operations groups.
• Development of data models using Teradata/Snowflake that summarize complex data into usable, digestible datasets, dashboards, and reports.
• Development of algorithms, routines, and jobs to extract, transform, normalize, and load operational data into the databases in preparation for reporting and analysis.
• Development, testing, and implementation of scheduled and ad-hoc reports that enable data-driven decisions.
• Investigation, profiling, and resolution of performance bottlenecks related to database usage, whether loading or extracting. This will likely involve working with key users to ensure that performance suits their particular needs.
• Implementation of extracts and reports from the production databases, supporting repeated, periodic, automated requests as well as ad-hoc requests from key stakeholders.
• Working with multiple data-source owners to understand their processes and align with CDF requirements.
• Basic documentation of the data dictionary and of processes around data attributes and measures.
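The extract-transform-load flow described above can be sketched in miniature. This is only an illustration using Python's built-in sqlite3; the table names (`raw_orders`, `orders_reporting`) and columns are hypothetical and not part of the posting or the CDF schema:

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Minimal hypothetical ETL pass: extract raw rows, normalize, load."""
    cur = conn.cursor()
    # Extract: pull raw operational rows from a source table.
    rows = cur.execute(
        "SELECT order_id, customer, amount_cents FROM raw_orders"
    ).fetchall()
    # Transform/normalize: trim and uppercase names, convert cents to dollars.
    normalized = [
        (order_id, customer.strip().upper(), amount_cents / 100.0)
        for order_id, customer, amount_cents in rows
    ]
    # Load: write the cleaned rows into the reporting table.
    cur.executemany(
        "INSERT INTO orders_reporting (order_id, customer, amount_usd) "
        "VALUES (?, ?, ?)",
        normalized,
    )
    conn.commit()
    return len(normalized)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE raw_orders (order_id INTEGER, customer TEXT, amount_cents INTEGER);
        CREATE TABLE orders_reporting (order_id INTEGER, customer TEXT, amount_usd REAL);
        INSERT INTO raw_orders VALUES (1, ' alice ', 1250), (2, 'bob', 999);
    """)
    print(run_etl(conn))  # 2
```

In practice a job like this would be scheduled (e.g. via Airflow, as the skills section mentions) rather than run inline, but the extract/transform/load stages are the same.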

Required Skills:
• Strong experience writing SQL on Teradata/Snowflake/SingleStore (MemSQL).
• Strong experience writing stored procedures and performance tuning.
• Experience reverse-engineering Tableau dashboards to extract custom calculations and replicate them in SQL queries.
• Skilled in advanced SQL performance tuning, particularly on complex data models; capable of rewriting SQL to reduce execution time.
• Experience building ETL pipelines using Keystone/Airflow, and a strong understanding of AWS/big data architecture.
• Experience with Python ETL/scripting is a plus.
• Ability to document and communicate project status, issues, and risks in a timely manner to the team and senior management.
• Minimum of 5 years developing SQL queries.
• Experience with Tableau for building prototype dashboards.
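The "rewriting SQL to reduce execution time" skill typically means replacing a row-by-row construct with a set-based one. A minimal sketch of one such rewrite, using a hypothetical customers/orders schema in sqlite3 (the real work would target Teradata/Snowflake, where the planner differences are far larger):

```python
import sqlite3

# A correlated subquery runs once per customer row; the equivalent
# LEFT JOIN + GROUP BY lets the engine satisfy the query in one pass.
SLOW_QUERY = """
SELECT c.name,
       (SELECT COUNT(*) FROM orders o WHERE o.customer_id = c.id) AS n_orders
FROM customers c
"""

FAST_QUERY = """
SELECT c.name, COUNT(o.customer_id) AS n_orders
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name
"""

def compare(conn: sqlite3.Connection):
    """Return both result sets, sorted, so equivalence can be checked."""
    slow = sorted(conn.execute(SLOW_QUERY).fetchall())
    fast = sorted(conn.execute(FAST_QUERY).fetchall())
    return slow, fast

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
        INSERT INTO customers VALUES (1, 'acme'), (2, 'globex');
        INSERT INTO orders (customer_id) VALUES (1), (1), (2);
    """)
    slow, fast = compare(conn)
    assert slow == fast  # same results from both formulations
    print(fast)  # [('acme', 2), ('globex', 1)]
```

Verifying that the rewritten query returns identical results, as done here, is the essential step before swapping it in.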

Must-Have:
• Advanced SQL programming (MUST)
• Experience with Tableau (MUST)
• Working knowledge of OLAP concepts (MUST)
• Experience/knowledge of ETL using Keystone preferred (ETL itself is a MUST; any tool is acceptable if not Keystone)

Preferred:
• Experience with Teradata/SingleStore
• Experience with Python, Airflow, and JSON

