Job Details

ID #45242546
State Ohio
City Westfield Center
Job type Contract
Salary USD Depends on Experience
Source Strategic Systems Inc
Showed 2022-08-28
Date 2022-08-09
Deadline 2022-10-08
Category Et cetera

Data Engineer Level 1, 2, or 3

Westfield Center, OH 44251, USA

Vacancy expired!

Must be authorized to work in the USA without any sponsorship, now or in the future.

Must have experience with Python as an ETL tool and with Snowflake.

Title: Data Engineer Level 1, 2, or 3

Location: Westfield Center, OH 44251

Duration: 6 months, right-to-hire

Job Summary

Team/Goal: The company is building a separate Data Engineering practice to deliver data better and faster; the current team consists of 20 DataStage developers.

As a member of the Enterprise Information Management and Analytics (EIMA) team within the IT department, the Data Engineer will have a direct impact on important initiatives that enable our business by architecting, designing, and implementing advanced analytics capabilities. Working in an agile team environment, the Data Engineer partners with other engineers, members of business units, and IT to bring their vision to the table and implement positive change, taking the company's data analytics to the next level.

Primary responsibilities include designing, building, and managing custom data pipelines and ETL processes in support of data initiatives. The Data Engineer will have broad skills in database design, be comfortable dealing with large and complex data sets, have experience building self-service dashboards, be comfortable using visualization tools, and be able to apply these skills to generate insights that help solve business challenges.

The Data Engineer is expected to be an excellent communicator who challenges themselves and has a strong desire to continually improve their knowledge and skills. The Level 2 Data Engineer works under general supervision and is a peer mentor to less experienced Data Engineers.

Essential Functions (primary functions and/or reasons the job exists in order of importance)
  • Populate data warehouses and data marts, and build data pipelines to meet demand for data across the organization using traditional data integration technologies, including ETL, data replication/CDC, message-oriented data movement, and API development (a minimal sketch of message-oriented data movement follows this list).
  • Optimize data availability with intention to refine and enhance AI/ML models and algorithms in partnership with data analysts, data scientists, and other data consumers across IT and the business.
  • Drive continuous improvement in performance of data warehouses and partner with others across IT to ensure both timely availability and security of data.
  • Support daily operations via analyzing and correcting incidents and defects in a timely and accurate fashion.
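
For illustration only: a minimal sketch of the message-oriented data movement named in the first bullet. It assumes the kafka-python package and a hypothetical topic name, broker address, and event fields; none of these come from the posting.

    import json

    from kafka import KafkaConsumer

    # Subscribe to a hypothetical change-event topic and route each record
    # toward a warehouse staging step.
    consumer = KafkaConsumer(
        "policy-events",                      # hypothetical topic name
        bootstrap_servers="localhost:9092",   # hypothetical broker address
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        # A real pipeline would upsert this record into a staging table;
        # the sketch only prints the assumed fields.
        print(event["policy_id"], event["change_type"])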

Desired Qualifications/Experience/Certification/Education (in order of importance)
  • 3-6+ years of work experience in data management disciplines including data integration, modeling, data security, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks.
  • Experience working in agile teams.
  • Highly skilled at SQL programming.
  • Highly skilled with ETL tools and methods.

    Experience with DataStage is a BONUS, but not a must-have; experience with Python as an ETL tool is MANDATORY.
  • Highly skilled at data analysis with the goal of discovering useful information, informing conclusions, and supporting decision-making.
  • Skilled at performing complex operations against large, heterogeneous datasets, and at consuming and working with large volumes of unstructured data in formats such as XML and JSON.
  • Familiar with navigating relational databases (Netezza, DB2, SQL Server) and non-relational databases (NoSQL/Hadoop-oriented databases such as MongoDB and Azure) in multiple deployment environments, including cloud, on-premises, and hybrid. Development experience, NOT administrative experience.
  • Skilled at building conceptual, logical, and physical data models. Experience with ERwin or other modeling tools is important.
  • Familiarity with API development using various protocols (e.g., SOAP, REST). Experience with the MuleSoft platform is a BONUS.
  • Knowledgeable about both open-source and commercial message-queuing technologies (e.g., MQ, RabbitMQ, Kafka) and stream-analytics technologies (e.g., KSQL for Apache Kafka, Apache Spark Streaming). NICE TO HAVE; the team is MOVING INTO SNOWFLAKE.
  • Knowledgeable in writing simple scripts to automate manual, repeatable tasks. Familiarity with scripting languages (e.g., Python, PowerShell, Bash) is a plus.
  • Familiarity with data visualization using tools such as Cognos, Tableau, and OutSystems is a plus; hands-on authoring in these tools is NOT necessarily required.
  • Knowledge of DevOps and containerization tools (e.g., Kubernetes, Docker, Git, Azure DevOps, Jenkins, UCD, Puppet, Chef, Ansible) is a plus. DIFFERENTIATOR.
  • Bachelor’s degree in Information Technology, Computer Science, Engineering, Mathematics, or a related field, or commensurate experience.

    Key Tech Skills
    • Ability to partner directly with Data Scientists and create new data pipelines that pull data from existing Enterprise Data Marts or 3rd-party data to help build new AI/ML models.
    • Strong data analysis and investigation skills; must be able to review existing data models to find data without being led.
    • Strong Python skills MANDATORY. Python will be used to extract, massage, and present data via pipelines (see the sketch after this list).
    • Experience working in Agile environments is a plus; otherwise, we will train.
    • Python experience with a slant toward Snowflake.
    • Snowflake experience is needed.
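
    For illustration only: a minimal sketch of the Python-as-ETL-into-Snowflake pattern named above, assuming the snowflake-connector-python package and hypothetical credentials, source file, and table names; none of these come from the posting.

        import csv

        import snowflake.connector

        def run_pipeline():
            # Extract: read rows from a hypothetical source file.
            with open("policies.csv", newline="") as f:
                rows = list(csv.DictReader(f))

            # Transform: light massaging -- normalize a key, coerce a type,
            # and drop records with no policy id.
            cleaned = [
                {"policy_id": r["policy_id"].strip(), "premium": float(r["premium"])}
                for r in rows
                if r.get("policy_id")
            ]

            # Load: insert the cleaned rows into a Snowflake staging table
            # (all connection parameters below are hypothetical).
            conn = snowflake.connector.connect(
                user="ETL_USER",
                password="...",
                account="myorg-myaccount",
                warehouse="ETL_WH",
                database="EIMA_DB",
                schema="STAGING",
            )
            try:
                conn.cursor().executemany(
                    "INSERT INTO POLICIES (POLICY_ID, PREMIUM)"
                    " VALUES (%(policy_id)s, %(premium)s)",
                    cleaned,
                )
                conn.commit()
            finally:
                conn.close()

        if __name__ == "__main__":
            run_pipeline()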

    Key Soft Skills
    • Written and verbal communication – MUST HAVE
    • Inquisitive, with a thirst for learning
    • Self-starter – does not rely on someone else to describe the data model; able to take a data model and run with it through strong data analysis and technical skills.
