Vacancy expired!
Title: Senior Data Engineer Location: Remote Type: 6 months project with a probable extension Job Description:Senior Data Engineer to join Data Architecture Services team to designing and engineering an Enterprise healthcare Data Warehouse that serves the reporting and analytical needs for business and IT consumers.This position will be responsible for SDLC of Data Integration projects that involves, but not limited to, preparing the detailed technical design documents from functional requirement documents, designing and engineering ELT framework, setting up the DB application schemas and developing database code, unit and system testing of end-to-end integration, release management and production support as needed. Responsibilities:- Creates and maintains optimal data pipeline architecture integrating large, complex data sets that meet functional and non-functional business requirements.
- Identifies, designs, and implements internal process improvements such as automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
- Builds the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS cloud native technologies.
- Builds analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Works with project stakeholders to assist with data-related technical issues and supports data infrastructure needs.
- Works with the data and analytics team to strive for greater functionality in our data systems.
- Participates in special projects and performs other duties, as required.
Requirements:
- 3+ years of data replication, data engineering, and data integration and transformation experience
- Experience with Python as it relates to and supports data workloads and applications (PySpark, Pandas, DataFrames, etc.)
- Experience with relational SQL databases such as Oracle, SQL Server, MySQL, Postgres, and Snowflake required (data integration, transformation, manipulation)
- Strong proficiency in writing complex SQL to perform common types of data extraction and manipulation
- Experience with AWS cloud services such as EC2, RDS, S3, Athena, Lambda, and DynamoDB required
- Experience with the data pipeline and workflow management tool Airflow required
- Working knowledge of data modeling, Change Data Capture, and Slowly Changing Dimensions; understanding of the logical, conceptual, and physical modeling aspects of data warehouse projects
- Working experience with version control using GitHub
- Hands-on work experience in Unix shell scripting
- Experience with Data Profiling and Data Quality concepts and techniques
- Hands-on experience with additional AWS data pipeline development and AWS-based technologies
- Hands-on experience with Snowflake scripting & Snowpipe