Vacancy expired!
The Data Consumption team within Data Security & Infrastructure (DSI) is seeking a highly motivated Spark / Big Data Developer to start or continue an IT career on the Big Data Platform project. Teaming up with architects, scrum masters, leads, managers, and directors, you will work in an Agile environment to make the data on the Big Data Platform accessible to the organization through Atomic Models and Analytical Data Stores. In this role, you will coordinate with a variety of IT departments to elicit high-level business requirements and translate them into detailed system requirements. You will participate in the entire software development lifecycle, writing and executing test plans and resolving issues both during development and after deployment. You should be intellectually curious, have a solutions-oriented attitude, and enjoy learning new tools and techniques.
Basic Qualifications:
- 5+ years' experience with Relational Database Systems and SQL
- 5+ years' experience designing, developing, and implementing ETL
- 5+ years' experience with Spark, Scala, and NoSQL databases
- 5+ years' experience with UNIX including basic commands and shell scripting
- 5+ years' experience providing technical leadership on relevant applications
- 3+ years' experience in at least one scripting language (Python, JavaScript, Shell)
- 3+ years' experience with Agile engineering practices
- Strong critical thinking, decision making, and problem-solving skills
- Excellent verbal/written communication skills, including communicating technical issues to non-technical audiences
- Experience developing new and enhancing existing data processing components (Data Ingest, Data Transformation, Data Store, Data Management, Data Quality)
- Strong working knowledge of SQL and the ability to write, debug, and optimize SQL queries
- Bachelor's degree in a computer related field or equivalent professional experience required
- 5+ years' professional data engineering experience focused on batch and real-time data pipeline development using Spark, Python, or Java; data processing and transformation using ETL tools and the Azure Databricks platform (preferred); a minimal batch ETL sketch appears after this list
- 3+ years' experience with Cloud Data Warehouse solutions (Snowflake, Azure DW, or Redshift)
- 3+ years' experience with a DevOps model utilizing a CI/CD tool
- 3+ years' experience with the Azure Cloud Platform (ADLS, Blob Storage)
- 3+ years' exposure to cloud and distributed data storage (HDFS, S3, ADLS, Cassandra, or other NoSQL storage systems)
- 3+ years' experience with data integration technologies (Kafka, eventing/streaming, NiFi, Azure Data Factory); a minimal streaming ingest sketch appears after this list
- Complete software development lifecycle experience including design, documentation, implementation, testing, and deployment
- Familiarity with Data Vault, Databricks, the dbt tool (Fishtown Analytics), and graph databases
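
For context on the batch pipeline work described above, the sketch below shows a minimal Spark batch ETL job in Scala: read raw files, apply basic cleansing, and write a partitioned curated dataset. The ADLS paths, column names, and schema are hypothetical placeholders, not the project's actual sources.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object DailyOrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-orders-etl")
      .getOrCreate()

    // Hypothetical raw zone on ADLS; real paths and schemas depend on the platform.
    val raw = spark.read
      .option("header", "true")
      .csv("abfss://raw@example.dfs.core.windows.net/orders/")

    // Basic data-quality and transformation step: drop rows without a key,
    // normalize the timestamp and amount, and derive a partition column.
    val cleaned = raw
      .filter(F.col("order_id").isNotNull)
      .withColumn("order_ts", F.to_timestamp(F.col("order_ts")))
      .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
      .withColumn("order_date", F.to_date(F.col("order_ts")))

    // Write the curated zone partitioned by date (Parquet shown; Delta is typical on Databricks).
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("abfss://curated@example.dfs.core.windows.net/orders/")

    spark.stop()
  }
}
```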
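
Similarly, the streaming ingest side (Kafka, eventing/streaming) can be sketched with Spark Structured Streaming, assuming a hypothetical broker address, topic, and storage paths; the spark-sql-kafka connector package must be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object OrdersStreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-stream-ingest")
      .getOrCreate()

    // Hypothetical Kafka broker and topic.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders")
      .load()

    // Kafka delivers key/value as binary; cast the value to a string for downstream parsing.
    val parsed = events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Land raw events with checkpointing so the file sink can recover cleanly on restart.
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "abfss://raw@example.dfs.core.windows.net/orders_stream/")
      .option("checkpointLocation", "abfss://raw@example.dfs.core.windows.net/_chk/orders_stream/")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```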