Vacancy expired!
- Design and build data services that deliver Strategic Enterprise Risk Management data
- Design high-performing data models on big-data architecture, exposed as data services
- Design and build a high-performing, scalable data pipeline platform using Hadoop, Apache Spark, MongoDB, and object storage architecture
- Design and build data services on container-based architecture such as Kubernetes and Docker
- Partner with enterprise data teams, such as Data Management & Insights and the Enterprise Data Environment (Data Lake), to identify the best place to source the data
- Work with business analysts, development teams, and project managers to gather requirements and business rules
- Collaborate with source system and approved provisioning point (APP) teams, architects, data analysts, and modelers to build scalable, performant data solutions
- Work effectively in a hybrid environment where legacy ETL and data warehouse applications co-exist with new big-data applications
- Work with Infrastructure Engineers and System Administrators as appropriate in designing the big-data infrastructure.
- Work with DBAs in Enterprise Database Management group to troubleshoot problems and optimize performance
- Support ongoing data management efforts for Development, QA and Production environments
- Utilize a thorough understanding of available technology, tools, and existing designs
- Leverage knowledge of industry trends to build best-in-class technology that provides competitive advantage
- Act as an expert technical resource to programming staff during program development, testing, and implementation
- 5+ years of application development and implementation experience
- 5+ years of experience delivering complex enterprise-wide information technology solutions
- 5+ years of ETL (Extract, Transform, Load) programming experience
- 3+ years of reporting experience, analytics experience or a combination of both
- 4+ years of Hadoop development/programming experience
- 5+ years of operational risk, credit risk, or compliance domain experience
- 5+ years of experience delivering ETL, data warehouse and data analytics capabilities on big-data architecture such as Hadoop
- 6+ years of Java or Python experience
- 5+ years of Agile experience
- 5+ years of design and development experience with columnar file formats such as Parquet or ORC on Hadoop
- 5+ years of Apache Spark design and development experience using Scala, Java, or Python, including DataFrames and Resilient Distributed Datasets (RDDs)
- 2+ years of experience integrating with RESTful APIs
- Excellent verbal, written, and interpersonal communication skills
- Experience designing and developing data analytics solutions using object data stores such as S3
- Experience with Hadoop ecosystem tools for real-time and batch data ingestion, processing, and provisioning, such as Apache Spark and Apache Sqoop
- Ability to work effectively in a virtual environment where key team members and partners are in various time zones and locations
- Ability to interact effectively and confidently with senior management
- Knowledge and understanding of project management methodologies used in waterfall or Agile development projects
- Knowledge and understanding of DevOps principles
- A BS/BA degree or higher in information technology; Master's degree required