Vacancy expired!
TECHNOGEN, Inc. has been a proven leader in providing full IT services, software development, and solutions for 15 years. TECHNOGEN is a Small & Woman-Owned Minority Business with GSA Advantage certification. We have offices in VA and MD and offshore development centers in India. We have successfully executed 100+ projects for clients ranging from small businesses and non-profits to Fortune 50 companies and federal, state, and local agencies.

Hi, I am Swarna from TechnoGen. I am currently looking for a Data Engineer / Big Data Developer. Please let me know if you would be interested, and share your updated resume. You can reach me at swarnalatha.t@technogeninc.com or call me at 571 350 0499.

Position: Data Engineer / Big Data Developer
Location: Denver; hybrid (50% remote, 50% DTC)
Note: Candidate must be your W2 employee.

Job Description: The Data Engineer works within the ETL and Operations team to build high-quality data pipelines that drive analytic solutions from diverse, disparate data sources. This role requires a comprehensive understanding of data architecture, data engineering, and data analysis. The ideal candidate is a skilled data engineer with experience creating data products that support analytic solutions. They are able to identify and implement solutions in a highly technical environment and work as part of a technical, cross-functional team. Strong problem-solving and troubleshooting skills are a must. This role focuses on designing, developing, and maintaining datasets within AWS.

Day-to-day responsibilities include:
- Coordinating, building, and managing new data ingests
- Architecting updates, fixes, and optimizations across a suite of production jobs
- Providing feedback on and enacting changes for improvements across the team including both technical and process updates
- Learning and assessing new technologies for implementation through proof-of-concept projects and testing
- Mentoring junior developers
- Partnering with data analysts and data scientists to design, build, and deploy aggregation processes
Required skills and experience:
- Hadoop/Hive experience
- Very strong SQL skills
- AWS experience (EMR, Lambda, Glue, Step Functions)
- Experience working with large data sets (billions of records per day)
- Writing and reviewing code
- Python scripting (PySpark, control scripts)
- Architecting/designing data pipelines
- Query and data pipeline optimization
- Strong communication and collaboration skills
- Ability to help others and mentor junior developers
- Spark
- Scala
- Git
- Shell scripting
- Linux