Job Details

ID #15239159
State California
City Santa Clara
Job type Contract
Salary USD $0 - $0
Source IDC Technologies
Showed 2021-06-08
Date 2021-05-19
Deadline 2021-07-18
Category Et cetera

AWS Data Engineer

Santa Clara, California 95050, USA

Vacancy expired!

Job Description:
  • In-depth knowledge of AWS, especially the data storage and processing technology stack: Athena, Glue, S3, EMR, Lambda, CloudFront, CloudFormation, Kinesis, Data Pipeline, AWS Batch, DynamoDB, etc.
  • Knowledge of tools like Apache Airflow
  • Expert in writing SQL, with some experience in Python and shell scripting
  • Clear, professional written and verbal communication skills; ability to easily communicate complex ideas
  • Leadership skills and project management understanding for smooth functioning
  • It would be exceptional if you also have: knowledge of Big Data, experience in Spark and real-time processing, or any AWS certification related to data
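The SQL-plus-Python scripting combination asked for above can be sketched with Python's built-in sqlite3 module. This is a stand-in only (Athena is queried through its own API); the table and column names are hypothetical:

```python
import sqlite3

# Illustrative only: a small SQL aggregation driven from Python, standing in
# for the "SQL + Python scripting" skills the posting asks for.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, bytes INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", 100), ("a", 50), ("b", 20)],
)
rows = conn.execute(
    "SELECT user_id, SUM(bytes) FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [('a', 150), ('b', 20)]
```

The same GROUP BY query pattern is what a service like Athena runs over files in S3; only the connection layer differs.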

AWS Data Specialist

Job Description:
  • Demonstrated experience in building data pipelines in data analytics implementations such as Data Lake and Data Warehouse
  • At least 2 instances of end to end implementation of data processing pipeline
  • Experience configuring or developing custom code components for data ingestion, data processing and data provisioning, using Big Data and distributed computing platforms such as Hadoop and PySpark, and Cloud platforms such as AWS
  • Hands-on experience developing enterprise solutions (designing and building frameworks, enterprise patterns, database design and development) in 2 or more of the following areas:
    • End-to-end implementation of a Cloud data engineering solution on AWS (EC2, S3, EMR, Spectrum, DynamoDB, RDS, Lambda, Redshift, Glue, Kinesis)
    • End-to-end implementation of a Big Data solution on the Cloudera/Hortonworks/MapR ecosystem
    • Languages (Python, Scala, Spark)
    • Proficiency in data modelling, for both structured and unstructured data, for various layers of storage
    • Ability to collaborate closely with business analysts, architects and client stakeholders to create technical specifications
    • Ensure quality of code components delivered by employing unit testing and test automation techniques including CI in DevOps environments.
    • Ability to profile data, assess data quality in the context of business rules, and incorporate validation and certification mechanism to ensure data quality
    • Ability to review technical deliverables, and to mentor and drive technical teams to deliver quality work
    • Understand system architecture and provide component-level design specifications, both high-level and low-level
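The data-quality bullet above (profiling data and validating it against business rules) can be illustrated with a minimal, plain-Python sketch; the rule names, records, and thresholds here are hypothetical, not part of the job description:

```python
# Minimal sketch of rule-based data-quality validation: apply each named
# business rule to every record and collect the indices that fail.

def validate(records, rules):
    """Return, per rule name, the indices of records that fail it."""
    failures = {name: [] for name in rules}
    for i, record in enumerate(records):
        for name, rule in rules.items():
            if not rule(record):
                failures[name].append(i)
    return failures

# Hypothetical business rules for an orders feed.
rules = {
    "amount_positive": lambda r: r.get("amount", 0) > 0,
    "currency_known": lambda r: r.get("currency") in {"USD", "EUR"},
}

records = [
    {"amount": 10.0, "currency": "USD"},
    {"amount": -5.0, "currency": "USD"},  # fails amount_positive
    {"amount": 7.5, "currency": "GBP"},   # fails currency_known
]

report = validate(records, rules)
print(report)  # {'amount_positive': [1], 'currency_known': [2]}
```

In a real pipeline the same check would run as a validation step before provisioning, with failing records quarantined or reported rather than loaded.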

Role: AWS Data Architect (Lead)

Additional notes from HM: needs a Lead Architect / Principal Architect / Enterprise Architect who will lead Technical Architects, and who is strong in PySpark and Spark coding and in Python

Job Description:
  • Strong experience in the following AWS tech stack (Glue, S3, Redshift, Lambda, Step Functions)
  • Comprehensive experience in PySpark and Spark
  • Experience in NoSQL databases like DynamoDB and AWS Database Migration Service; AWS Airflow and CloudWatch (good to have)
  • Proficiency in data modelling, for both structured and unstructured data, for various layers of storage
  • Evaluate and recommend alternate architectural patterns, technology stacks, and component and tool options
  • Mentor and lead data engineering teams to design, develop, test and deploy high-performance data analytics solutions
  • Collaborate with the Scrum Master and PMO to organize deliverables into phases and sprints; drive estimation and sizing; contribute actively to planning and scheduling the solution implementation
  • Define best practices, standards and processes for cloud and data engineering implementation
