Job Details

ID #19593339
State New York
City New York City
Job type Contract
Salary USD Depends on experience
Source Super Technology Solutions, Inc.
Showed 2021-09-14
Date 2021-09-07
Deadline 2021-11-06
Category Et cetera

Senior Data Engineer in NYC, NY (Contract to Hire)

New York City, NY 10001, USA

Vacancy expired!

One of our premier clients is looking for a Senior Data Engineer in NYC, NY for a Contract to Hire position. If interested, please submit your resume ASAP, indicating (1) your current location, (2) your desired hourly rate, and (3) your email address.

Job Description: As a Senior Data Engineer, you will be part of the Data Engineering team responsible for developing and deploying engineering and integration solutions. Your primary responsibility will be to work closely with the Data Architects and the Machine Learning team to implement data solutions for the organization using Python, Java, Kafka, and other big data technologies, and to create technical specification documents and test plans. You will also provide support for data solutions across the enterprise.

Responsibilities:
  • Work with business users, technology teams, and executives to understand their data needs and create innovative solutions to fulfill them
  • Design, organize, and implement data structures, workflows, and integrations between enterprise platforms to ensure the accurate and timely execution of business processes.
  • Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
  • Guide decisions and establish best practices on data integration/engineering, as well as the future of our data infrastructure
  • Manage and improve the performance of our database, queries, tools, and solutions
  • Create and maintain data warehouses, databases, tables, SQL queries, and ingestion pipelines to power reports (Tableau), dashboards, predictive models, and downstream analysis
  • Write complex and efficient queries to transform raw data sources into easily accessible models for our teams and reporting platforms
  • Prepare data for predictive and prescriptive modeling
  • Identify and analyze data patterns
  • Identify ways to improve data reliability, efficiency, and quality
  • Work with analytics, data science, and wider engineering teams to help with automating data analysis and visualization needs, advise on transformation processes to populate data models, and explore ways to design and develop data infrastructure
  • Other duties as assigned

REQUIREMENTS
  • Minimum Bachelor’s degree in computer science, information technology, or a similarly relevant field.
  • Minimum of 5 years of data engineering experience preferred.
  • Minimum of 3 years of experience with OOP, SQL, schema designing, data modeling, designing, building, and maintaining data processing systems
  • Strong experience with advanced analytics tools and object-oriented/functional scripting languages such as R, Python, Java, and others.
  • Strong ability to design, build, and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management.
  • Database development experience using SQL, Spark, or BigQuery, and experience with a variety of relational and NoSQL-oriented data stores such as Hadoop, MongoDB, and Cassandra
  • Big Data Development experience using Hive, Impala, Spark, and familiarity with Kafka (preferred)
  • Exposure to machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics
  • Extensive experience in triaging data issues, analyzing end-to-end data pipelines and working with business users in resolving issues.
  • Experience working with data governance, data quality, and data security teams (specifically data stewards and security officers) to move data pipelines into production with appropriate data quality, governance, and security standards and certification.
  • Adept in agile methodologies and capable of applying DevOps and, increasingly, DataOps principles to data pipelines.
  • Exposure to containerization using Docker, Kubernetes, etc.
  • Understanding of business processes and how they are modeled in various systems
  • Ability to work with both IT and business in integrating analytics and data science output into business processes and workflows.

