Vacancy expired!
Lead AWS Big Data Engineer 322474/73 Contract to Hire - $60-$65/hr Fully Remote
W2 ONLY; NO C2C, OPT, CPT, or Sponsorship available.

We are looking for an experienced AWS/Big Data Lead to join the Data & Analytics team, a team of data and technology professionals who define, lead, and execute the data strategy for the organization. The successful candidate will build data solutions using state-of-the-art technologies to acquire, ingest, and transform big datasets.

As an AWS/Big Data Lead, you will partner with our users and other data product teams to understand their needs and architect and build impactful data and analytics solutions. You will lead the design and build of data pipelines to support applications and data solutions, following software engineering best practices.

Responsibilities:
- Work with development teams and other project leaders/stakeholders to provide technical solutions that enable business capabilities
- Execute strategies that inform data design and architecture, in partnership with enterprise standards
- Architect, design and develop data applications using big data technologies (Hadoop, AWS) to ingest, process, and analyze large disparate datasets
- Build robust data pipelines in the cloud using Airflow, Spark/EMR, Kinesis, Kafka, Lambda, or other technologies
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using SQL and AWS big data technologies
- Work with data and analytics experts to strive for greater functionality in our data systems
- Implement architectures to handle and organize large-scale data
- Work across teams to deliver meaningful reference architectures that outline architecture principles and best practices for technology advancement
- Understand our data ecosystem and current data ingestion/distribution patterns
- Build an understanding of our processes, strategic plays, and key outcomes
- Build partnerships with other data product and functional teams and align with the data and analytics roadmap
- Set up a fully functional environment for development activities
Requirements:
- BS in Computer Science, Applied Mathematics, Physics, Statistics, or an area of study related to data science and data mining, or relevant experience
- 5+ years of solid development experience in the big data stack and programming experience in Python and Scala
- 5+ years of experience in data modeling, data warehousing, and big data architectures using Hadoop/EMR, Spark, Redshift, Snowflake, NoSQL, or similar large-scale distributed systems
- 5+ years of experience in a data engineering role building highly scalable data pipelines
- Experience using JIRA and Agile Project Management software
- Experience with microservices development and Docker/Kubernetes
- Hands-on experience with the creation and automation of enterprise data pipelines using Spark and Lambda across cloud, hybrid, or on-prem deployment architectures
- Experience with orchestration frameworks such as Airflow
- Deep expertise in at least one programming language (Python or Scala)
- Strong experience with writing complex programs, implementing architectures, and enabling automation in these environments
- Experience with CI/CD pipelines and code repositories such as GitHub or Bitbucket
- Proficient in application/software architecture (Definition, Business Process Modeling, etc.)
- DevOps experience with Terraform; data and ML pipeline experience
- Strong analytics and reporting skills, with hands-on experience in BI tools such as Tableau and Power BI