Job Details

ID #12366505
State Washington
City Seattle-tacoma
Job type Permanent
Salary USD $133,000.00 per annum
Source Experis
Showed 2021-04-18
Date 2021-04-17
Deadline 2021-06-16
Category Et cetera

Sr Data Engineer

Washington, Seattle-Tacoma, 98141, USA

Vacancy expired!

Experis is seeking a Data Engineer on behalf of a company in Seattle, WA! This is a full-time, remote position; candidates must be based in Seattle, WA or Boise, ID. This data technology expert will be instrumental in implementing a new data strategy and architecture, including design standards and patterns, leveraging appropriate data platforms and technologies to develop the company's data environments into a highly valued business asset that supports dynamic data and analytics needs.

Responsibilities:

  • Build data pipelines: Manage data pipelines consisting of a series of stages through which data flows (for example, from data sources or endpoints of acquisition to integration to consumption for specific use cases). These data pipelines have to be created, maintained and optimized as workloads move from development to production for specific use cases. Architecting, creating and maintaining data pipelines will be the primary responsibility of the data engineer.
  • Drive automation through effective metadata management: The data engineer will be responsible for using innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks in order to minimize manual, error-prone processes and improve productivity. The data engineer will also need to assist with renovating the data management infrastructure to drive automation in data integration and management. Supporting activities include:
    • Learning and using modern data preparation, integration and AI-enabled metadata management tools and techniques.
    • Tracking data consumption patterns.
    • Performing intelligent sampling and caching.
    • Monitoring schema changes.
    • Recommending improvements to, or in some cases automating, existing and future integration flows.
  • Collaborate across departments: The newly hired data engineer will need strong collaboration skills in order to work with varied stakeholders within the organization. In particular, the data engineer will work in close relationship with the data architect and with business (data) analysts in refining their data requirements for various data and analytics initiatives and their data consumption requirements.
  • Educate and train: The data engineer should be curious and knowledgeable about new data initiatives and how to address them. This includes applying their data and/or domain understanding in addressing new data requirements. They will also be responsible for proposing appropriate and innovative data ingestion, preparation, integration and operationalization techniques in optimally addressing these data requirements. The data engineer will be required to train counterparts such as data analysts, business users or any data consumers in these data pipelining and preparation techniques, which make it easier for them to integrate and consume the data they need for their own use cases.
  • Participate in ensuring compliance and governance during data use: Data engineers should work with data governance team and data stewards within this team and participate in vetting and promoting content created in the business and by data architects to the curated data catalog for governed reuse.
  • Become a data and analytics evangelist: The data engineer will be considered a blend of data and analytics "evangelist," "data guru" and "fixer." This role will promote the available data and analytics capabilities and expertise to business unit leaders and educate them on leveraging these capabilities to achieve their business goals.
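The staged pipeline described in the first responsibility (acquisition, integration, consumption) can be sketched, purely for illustration, as a chain of functions; all stage names and data here are hypothetical and not part of this posting's requirements:

```python
# Illustrative only: a toy three-stage pipeline (acquire -> integrate -> consume).
# Stage names and sample records are hypothetical.

def acquire():
    # Stand-in for reading raw records from a source system or endpoint.
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "4.0"}]

def integrate(records):
    # Cleanse and type-cast records before they reach consumers.
    return [{"id": r["id"], "amount": float(r["amount"])} for r in records]

def consume(records):
    # Aggregate for a specific use case (here, a simple total).
    return sum(r["amount"] for r in records)

def run_pipeline():
    # Data flows through the stages in sequence.
    return consume(integrate(acquire()))

if __name__ == "__main__":
    print(run_pipeline())  # 14.5
```

In practice each stage would be a task in an orchestrator rather than a bare function, so it can be monitored, retried and promoted from development to production independently.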

Education and Training

  • A bachelor's or master's degree in computer science, statistics, applied mathematics, data management, information systems, information science or a related quantitative field or equivalent work experience is required.

Technical and Business Knowledge/Skills

  • Strong experience with advanced analytics tools and object-oriented/functional scripting languages such as R, Python, Java and others.
  • Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows.
  • Strong experience with relational database programming languages, including SQL and PL/SQL; related certifications are desirable.
  • Experience with NoSQL/Hadoop-oriented databases such as MongoDB and Cassandra is a plus.
  • Experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, and data virtualization.
  • Strong experience in working with Oracle packages and Data Pump; ability to troubleshoot and reverse-engineer existing processes and to fix and improve data logic and query performance.
  • Strong experience in creating, documenting and improving data models from business specifications.
  • Strong experience in profiling source data with little or no documentation and in documenting data quality.
  • Basic experience working with popular data discovery, analytics and BI software tools such as Tableau, Qlik and Power BI for semantic-layer-based data discovery.
  • Strong experience in working with data science teams in refining and optimizing data science and machine learning models and algorithms.
  • Demonstrated success in working with large, heterogeneous datasets to extract business value using tools such as AWS Data Pipeline, Airflow or Segment to reduce or even automate parts of the tedious data preparation tasks.
  • Experience in working with data governance/data quality and data security teams and specifically data stewards and security officers in moving data pipelines into production with appropriate data quality, governance and security standards and certification.
  • Adept in agile methodologies and capable of applying DevOps and, increasingly, DataOps principles to data pipelines to improve the communication, integration, reuse and automation of data flows between data managers and consumers across an organization.

