Vacancy expired!
- Design and implement Data Engineering Solutions and ETL Processes with the Azure stack including Azure Data Factory, Azure Data Lake, Azure SQL Server, Databricks, etc.
- Independently solve complex technical problems.
- Ability to work under pressure and maintain composure and professionalism in a dynamic environment with multiple changing priorities and tasks.
- Participate in the definition and revision of development best practices and standards.
- Competently perform advanced technical tasks with minimal supervision, including design and implementation of Data Engineering Solution components (Data Ingestion, Curation, Process Orchestration, etc.).
- Strong understanding of Enterprise ETL tools, Data Engineering technology stacks, and solutions.
- Strong understanding of Software Engineering principles and best practices.
- Strong understanding of ETL design patterns and architectures (ETL vs. ELT).
- Strong understanding of cloud data topologies (e.g., data lakes).
- Strong SQL querying and programming skills.
- Good understanding of data analytics architectural approaches and data models (e.g., Kimball, Data Vault).
- Understanding of all aspects of development including, but not limited to, gathering requirements, developing technical components within the process scope, and supporting testing and post-implementation support.
- Ability to work and partner with users and stakeholders to gather solution requirements.
- Ability to adapt to new technical innovations and business processes.
- 5 or more years of relevant experience.
- Bachelor's degree or certification required.
- Experience with one or more Enterprise ETL tools (Azure Data Factory, Informatica).
- Experience working with RDBMS platforms (SQL Server) and MPP platforms (Azure Synapse, Snowflake, Redshift).
- Experience in the development and implementation of complex, high volume Data Engineering solutions.
- Experience ingesting data from a variety of sources and mediums including relational systems, APIs, webhooks, event streams, data lakes, etc.
- Experience working with cloud technology stacks (Azure, AWS, Google Cloud).
- Experience applying Software Development best practices and following SDLC processes.
- Experience working with major source control systems (Git) and applying source control best practices and methodologies (e.g., branching).
- Exposure to one or more object-oriented or procedural programming languages (C#, Java, Python).
- Exposure to Big Data processing technologies such as Spark (e.g., Databricks) and data streaming technologies such as Kafka and Kinesis.
- Exposure to Data Science environments and operationalizing Data Science/Machine Learning solutions.
- Exposure to agile development methodologies (Scrum, Kanban).
- Comprehensive Medical Benefits
- Competitive Pay
- 401(k) Retirement Plan
- And Much More