Job Details

ID: #17213307
State: Arkansas
City: Bentonville
Job type: Contract
Salary: USD, Depends on Experience
Source: Spar Information Systems
Showed: 2021-07-24
Date: 2021-07-15
Deadline: 2021-09-13
Category: Et cetera

Data Engineer (Airflow and Python experience required)

Bentonville, Arkansas 72712, USA

Vacancy expired!

Data Engineer

Location: Bentonville, AR

Long Term

The mandatory skills are Google Cloud Platform, Dataproc, Airflow, Python, Spark, and Hive.

Description:

The client is looking for a highly energetic and collaborative Data Engineer with experience building enterprise data intelligence on cloud platforms. The Data Engineer will be responsible for delivering quality reports and data intelligence solutions to the organization and for helping client teams draw insights to make informed, data-driven decisions for a leading retailer in the United States. The ideal candidate is experienced in all phases of the data management lifecycle: gathering and analyzing requirements; collecting, processing, storing, and securing data; using, sharing, and communicating it; and archiving, reusing, and repurposing it. The candidate will identify and manage existing and emerging risks that stem from business activities, ensuring those risks are effectively identified and escalated so they can be measured, monitored, and controlled. The candidate should be a proven self-starter with a demonstrated ability to make decisions and accept responsibility and risk. Excellent written and verbal communication skills and the ability to collaborate effectively with domain experts and the IT leadership team are key to success in this role.

Responsibilities:

As a Data Engineer, you will:
  • Provide hands-on software development for a large data project hosted in a cloud environment.
  • Develop and refine the technical architecture used by the Teradata, Python, Spark, and Hadoop development teams.
  • Provide expertise in the development of estimates for EPICs and User Stories for planning and execution.
  • Be able to help others break down large team goals into specific and manageable tasks.
  • Be involved in and supportive of the agile sprint model of development, helping to enforce the practice and the discipline.
  • Coach and mentor team members on Teradata, Python, Spark and Hadoop development best practices.
  • Define and enforce application coding standards and best practices.
  • Identify and resolve technical and process impediments preventing delivery teams from meeting delivery commitments.
  • Align and collaborate with architects, other team leads, and IT leadership to develop technical architectural runways supporting upcoming features and capabilities.
  • Diagnose and troubleshoot performance and other issues.
  • Collaborate with peers, reviewing complex change and enhancement requests.
  • Evaluate potential changes and enhancements for objectives, scope and impact.
  • Take a proactive approach to development work, leading peers and partners to strategic technical solutions in a complex IT environment.
  • Document functional/technical requirements and design based on requirements or objectives.
  • Mentor peers on coding standards, patterns and strategy.
  • Guide the team on best practices in Teradata, Python, Spark and Hadoop as well as perform code reviews.
  • Build and maintain active relationships with customers to determine business requirements.
  • Partner with other IT teams during integration activities to facilitate successful implementations.
  • Participate in on-call application support and respond to application issues when identified.
  • Communicate effectively and clearly with technical peers, while also articulating complex solutions in ways nontechnical business partners can understand.
  • Understand where your project fits into the larger goals for engineering, and adapt your work so that the priorities of the systems you are creating match those of the organization.

Requirements:
  • BA/BS degree, technical institute training, or equivalent work experience
  • 4+ years of hands-on Teradata, Python, Spark, and Hadoop development experience
  • 1+ years combined of hands-on Google Cloud Platform (GCP) development experience
  • Expertise working with the GCS connector, Dataproc, and BigQuery
  • Experience working with ADF and Python will be an added advantage
  • Experience with Big Data processing frameworks (Spark, Hadoop) is required.
  • Experience with DevOps tools and techniques (Continuous Integration, Jenkins, Puppet) is required.
  • Experience with one or more software version control systems (e.g. Git, Subversion)
  • Experience overseeing team members.
  • Excellent communication and presentation skills.
  • Experience in an agile environment
  • Experience with Spring Boot, Maven, and Bamboo, and great debugging skills.
  • Solid understanding of builds, software development, and Git.
  • Strong effective communication skills, both written and verbal
The most successful candidates will also have experience in the following:
  • Gitflow
  • Atlassian products – Bitbucket, Jira, Confluence, etc.
  • Continuous Integration tools such as Bamboo, Jenkins, or TFS
