Job Details

ID #43586317
State North Carolina
City Charlotte
Job type Permanent
Salary USD TBD
Source Bank Of America
Showed 2022-06-24
Date 2022-06-23
Deadline 2022-08-22
Category Et cetera

Distributed Computing Infrastructure Architect, Risk Technology

Charlotte, North Carolina 28201, USA

Vacancy expired!

Job Description:

Responsible for designing and developing solutions to complex requirements in support of business goals. Ensures that software is developed to meet functional, non-functional, and compliance requirements, and that solutions are well designed, with maintainability, ease of integration, and testing built in from the outset. Possesses strong proficiency in development and testing practices common to the industry and has extensive experience using design and architectural patterns. At this level, specializations start to form in Architecture, Test Engineering, or DevOps.

Contributes to story refinement and defining requirements. Participates in, and guides the team through, estimating the work necessary to realize a story/requirement through the delivery lifecycle. Performs spikes/proofs of concept as necessary to mitigate risk or implement new ideas. Codes solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements. Utilizes multiple architectural components (across data, application, and business) in the design and development of client requirements. Assists the team with resolving technical complexities involved in realizing story work. Designs, develops, and modifies architecture components, application interfaces, and solution enablers while ensuring the integrity of the principal architecture is maintained. Designs, develops, and maintains automated test suites (integration, regression, performance). Sets up and develops continuous integration/continuous delivery (CI/CD) pipelines and automates manual release activities. Mentors other software engineers and coaches the team on CI/CD practices and the automation tool stack. Individual contributor role.

Job Summary:

The Global Risk Analytics technology team is seeking a distributed computing/big data infrastructure architect with a wide breadth of experience in grid computing, data storage clusters, high-performance computing (HPC), and application development.

The successful candidate will be responsible for grid infrastructure architecture, infrastructure monitoring, infrastructure configuration, optimization, and management, and will provide expertise to teams of software developers and model developers (quants). The architect will also be responsible for evaluating emerging technologies, creating strategic roadmaps and blueprints, and collaborating with infrastructure partners.
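For a flavor of the infrastructure-monitoring side of the role, below is a minimal sketch that polls Spark's monitoring REST API for executor health, assuming Python with the requests library; the history-server URL and the choice of metrics are illustrative assumptions, not details from this posting.

```python
# Minimal sketch: poll the Spark monitoring REST API for executor health.
# The server URL and thresholds are illustrative assumptions, not values
# from this posting.
import requests

HISTORY_SERVER = "http://spark-history.example.com:18080"  # hypothetical host


def failed_task_report(app_id: str) -> list[str]:
    """Return one line per executor that has recorded failed tasks."""
    url = f"{HISTORY_SERVER}/api/v1/applications/{app_id}/executors"
    executors = requests.get(url, timeout=10).json()
    return [
        f"{e['id']}: {e['failedTasks']} failed tasks, "
        f"{e['memoryUsed']}/{e['maxMemory']} bytes of storage memory used"
        for e in executors
        if e.get("failedTasks", 0) > 0
    ]


if __name__ == "__main__":
    # List recent applications, then report any executors with failures.
    apps = requests.get(f"{HISTORY_SERVER}/api/v1/applications", timeout=10).json()
    for app in apps[:5]:
        for line in failed_task_report(app["id"]):
            print(app["name"], "->", line)
```

A lightweight monitoring web application of the kind described below could serve such reports on a schedule rather than printing them.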

Job Responsibilities:
  • Architect, optimize, and manage large distributed computing grids based on Apache Spark.
  • Evaluate emerging technologies against business and IT strategic needs; plan and lead proofs of concept (POCs) in support of new technology (e.g., streaming databases).
  • Create and maintain the target application architecture and blueprints.
  • Create technology roadmaps for distributed computing and big data solutions, governed by strategic direction and business requirements as well as resourcing and financial constraints.
  • Develop lightweight web applications to monitor cluster performance.
  • Partner with core development teams to remediate cluster performance issues by providing expertise on application job tuning and optimization (a brief tuning sketch follows this list).
  • Collaborate with internal partners (Spark Grid & Hadoop clusters) and multiple vendors (IBM, Cloudera) to produce technical solutions that are aligned to business needs and direction.
  • In partnership with solution architects and development managers, foster innovative and efficient design to enable technical capabilities supporting the business strategy.
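To make the job-tuning responsibility above concrete, here is a minimal sketch of the kind of Spark configuration review this involves, assuming PySpark; the property values are illustrative guesses for a hypothetical shuffle-heavy workload, not prescriptions from this posting.

```python
# Minimal Spark job-tuning sketch: the values below are illustrative
# assumptions for a hypothetical shuffle-heavy workload, not
# recommendations from this posting.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("risk-job-tuning-sketch")
    # Right-size executors instead of accepting cluster defaults.
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    # Match shuffle parallelism to data volume to avoid tiny or giant tasks.
    .config("spark.sql.shuffle.partitions", "400")
    # Let the grid reclaim idle executors between stages.
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.maxExecutors", "50")
    .getOrCreate()
)

# Skewed joins are a common tuning target: adaptive query execution
# (Spark 3.x) can split oversized shuffle partitions automatically.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
```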

Required Skills:
  • 8+ years' experience as a software engineer, infrastructure engineer, or infrastructure architect
  • Architecture/development lead experience with distributed computing/big data platforms, working with technologies such as Apache Spark, Hadoop, Hive, Impala, and Kafka
  • Proven track record of execution and delivery
  • Experience in dealing with multi-level organizations
  • Deep understanding of big data, distributed parallel processing and High-Performance Computing (HPC)
  • Experience with Apache Spark, Hadoop, SQL, and Python
  • Understanding of network technologies (e.g., SSL, load balancing)
  • Strong problem solving and troubleshooting skills
  • Executive presentation skills
  • Curious self-starter
  • Ability to work autonomously with minimal supervision
  • Comfortable in a matrix organization, collaborating with developers and technical business partners (Model developers/quants)
  • Excellent oral and written communication skills, with the ability to articulate complex subject matter to non-technical audiences
  • Highly organized; leads by example
  • 4-year degree (Computer Science or Data Science preferred)

Risk Technology:
  • Believes diversity makes us stronger so we can reflect, connect, and meet the diverse needs of our clients and employees around the world.
  • Is committed to building a workplace where every employee is welcomed and given the support and resources to perform their jobs successfully.
  • Wants to be a great place for people to work and strives to create an environment where all employees have the opportunity to achieve their goals.
  • Provides continuous training and development opportunities to help employees achieve their career goals, whatever their background or experience.
  • Is committed to advancing our tools, technology, and ways of working to better serve our clients and their evolving business needs.
  • Believes in responsible growth and is dedicated to supporting our communities by connecting them to the lending, investing, and giving they need to remain vibrant and vital.

Shift: 1st shift (United States of America)

Hours Per Week: 40
