Job Details

ID #12094814
State Pennsylvania
City Pittsburgh
Job type Contract
Salary Depends on Experience (USD)
Source VLink Inc
Showed 2021-04-10
Date 2021-04-02
Deadline 2021-06-01
Category Et cetera

AWS Data Engineer

Pittsburgh, Pennsylvania 15201, USA

Vacancy expired!

Job Title: AWS Data Engineer

Location: Knoxville, Tennessee

Employment Type: Contract

Duration: 12 Months

About VLink: Founded in 2006 and headquartered in Connecticut, VLink is one of the fastest-growing digital technology services and consulting companies. Since its inception, our innovative team members have been solving our global clients' most complex business and IT challenges.

Job Description: The client is looking for an AWS Data Engineer to join our emerging technology group in the digital insight practice. The AWS Data Engineer, with 2-5 years of experience, will be responsible for designing and developing the data ingestion and data transformation framework for modern data lake solutions. The role involves designing and building production data pipelines, from ingestion to consumption, within a big data architecture using Python, Scala, and AWS Glue; designing and implementing data engineering, ingestion, and curation functions on the AWS cloud using AWS-native services or custom programming; and coding, unit testing, and setting up continuous integration/deployment.
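The ingestion-to-consumption flow described above can be sketched in miniature. This is a plain-Python illustration of the three stages (ingest, transform, curate), not an actual Glue/PySpark job; all function and field names here are hypothetical.

```python
# Illustrative sketch of an ingestion -> transformation -> curation pipeline.
# In production these stages would run as AWS Glue / PySpark jobs; here they
# are plain functions so the example is self-contained.
import json
from datetime import date

def ingest(raw_lines):
    """Ingestion: parse raw JSON lines into records, skipping malformed input."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # a real pipeline would route these to a dead-letter location
    return records

def transform(records):
    """Transformation: normalize fields and drop records missing the key column."""
    out = []
    for rec in records:
        if "id" not in rec:
            continue
        out.append({
            "id": int(rec["id"]),
            "name": str(rec.get("name", "")).strip().title(),
            "ingest_date": date.today().isoformat(),
        })
    return out

def curate(records):
    """Curation: deduplicate by id, keeping the last record seen."""
    return list({rec["id"]: rec for rec in records}.values())

raw = [
    '{"id": 1, "name": " alice "}',
    'not json',
    '{"name": "no id"}',
    '{"id": 1, "name": "alice b"}',
    '{"id": 2, "name": "bob"}',
]
curated = curate(transform(ingest(raw)))
print([r["id"] for r in curated])  # -> [1, 2]
```

In a real Glue job the same three stages map onto reading from an S3 landing zone, applying PySpark transformations, and writing curated Parquet back to the lake.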

Responsibilities:
  • Candidates should have some (not necessarily all) of the following skills: AWS Step Functions, S3, Glue, EMR, Redshift, DynamoDB, Aurora, Athena, and big data on AWS.
  • AWS data services, Hadoop, and PySpark.
  • Secondary skills: Glue, AWS Data Pipeline, Databricks, and virtual machines.
  • Role description: design and develop solutions on the AWS cloud for data lake/integration use cases, using PaaS services such as Glue and AWS Data Pipeline on the Hadoop/PySpark platform.

Qualifications:
  • 7+ years of work experience with ETL, business intelligence, and AWS data architectures.
  • 3+ years of hands-on Spark/Scala/Python development experience.
  • Experience developing and managing data warehouses on a terabyte or petabyte scale.
  • Experience with core competencies in Data Structures, Rest/SOAP APIs, JSON, etc.
  • Strong experience with massively parallel processing (MPP) and columnar databases.
  • Expert in writing SQL.
  • Deep understanding of advanced data warehousing concepts and track record of applying these concepts on the job.
  • Experience with common software engineering tools (e.g., Git, JIRA, Confluence, or similar).
  • Ability to manage numerous requests concurrently and strategically, prioritizing when necessary.
  • Good communication and presentation skills.
  • Dynamic team player.
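Qualifications such as "expert in writing SQL" and "advanced data warehousing concepts" typically refer to analytical SQL, e.g. window functions. A minimal, self-contained illustration, using in-memory SQLite as a stand-in for a columnar MPP warehouse like Redshift (the table and columns are invented for the example):

```python
# A per-partition running total via a SQL window function, run against an
# in-memory SQLite database so the snippet needs no external warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, day INTEGER, amount INTEGER);
    INSERT INTO sales VALUES
        ('east', 1, 100), ('east', 2, 50), ('west', 1, 70), ('west', 2, 30);
""")
rows = conn.execute("""
    SELECT region, day, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
    FROM sales
    ORDER BY region, day
""").fetchall()
print(rows)
# -> [('east', 1, 100, 100), ('east', 2, 50, 150),
#     ('west', 1, 70, 70), ('west', 2, 30, 100)]
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern works unchanged in Redshift and Athena.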

