Job Details

ID #15430550
State New York
City New York City
Job type Permanent
Salary USD TBD
Source BlackRock
Showed 2021-06-14
Date 2021-06-08
Deadline 2021-08-07
Category Architect/engineer/CAD

Senior Data Engineer (Spark/Hadoop) - New York, NY

New York City, NY 10055, USA

Vacancy expired!

Description

About this role

Elevate your career by joining the world's largest asset manager! Thrive in an environment that fosters positive relationships and recognizes outstanding performance! We know you want to feel valued every single day and be recognized for your contribution. At BlackRock, we strive to empower our employees and engage them fully in our success.

Financial Modeling Group (FMG)

BlackRock is a global investment management corporation based in New York City. Founded in 1988, BlackRock is the world's largest asset manager, with over $8tn in assets under management as of Q4 2020. BlackRock operates globally with 70 offices in 30 countries and clients in over 100 countries around the world.

FMG is a diverse and global team with a keen interest and expertise in all things related to technology and financial analytics. The group is responsible for the research and development of quantitative financial models and tools across many different areas - single-security pricing, prepayment models, risk, return attribution, optimization and portfolio construction, scenario analysis and simulations, etc. - and covering all asset classes. The group is also responsible for the technology platform that delivers those models to our internal partners and external clients, and their integration with Aladdin. FMG conducts leading research on the areas above, delivering state-of-the-art models. FMG publishes applied scientific research frequently, and our members present regularly at leading industry conferences. FMG engages constantly with the sales team in client visits and meetings.

Job Purpose/ Background

FMG is looking for a self-starter Data Engineer to contribute to our big data platform. The platform is a compelling analytics offering backed by high-quality historical data, covering individual securities as well as index constituents and portfolio-level context. The historical data includes indicative information, prices, analytics, exposures, index-related fields and more; it is fully quality-controlled and can be integrated with client custom data. The data set is used by modelers for research and by clients for analytics. To design, implement and maintain this platform, we primarily need Scala and distributed-computing skills with Hadoop and Spark, along with knowledge of data curation and analytical jobs. DevOps experience will be handy as we move to the cloud.
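To make the day-to-day work concrete, below is a minimal sketch of the kind of Spark/Scala curation job described above. The dataset paths, column names and curation rules are illustrative assumptions, not the platform's actual schema.

  import org.apache.spark.sql.{SparkSession, functions => F}

  // Minimal sketch of a daily curation job. Paths, columns and rules are
  // hypothetical placeholders, not an actual BlackRock schema.
  object SecurityHistoryCuration {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("security-history-curation")
        .getOrCreate()

      // Raw daily security-level analytics dropped by upstream feeds (assumed layout).
      val raw = spark.read.parquet("/data/raw/security_analytics/")

      // Basic curation: drop rows with missing prices, normalize identifiers,
      // and keep one record per security and as-of date.
      val curated = raw
        .filter(F.col("price").isNotNull)
        .withColumn("security_id", F.upper(F.trim(F.col("security_id"))))
        .dropDuplicates("security_id", "as_of_date")

      // Persist partitioned by date so research and client queries stay cheap.
      curated.write
        .mode("overwrite")
        .partitionBy("as_of_date")
        .parquet("/data/curated/security_analytics/")

      spark.stop()
    }
  }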

Key Role Responsibilities

Specifically, we expect the role to involve the following core responsibilities, and a successful candidate should be able to demonstrate them (not in order of priority):
  • Design, maintain and take ownership of the data infrastructure
  • Work with modelers to understand the business and their requirements, and help determine the optimal data sets and structures to deliver on those requirements
  • Act as a domain expert on the products over time
  • Understand the data, and set up and monitor QC/surveillance processes (see the sketch after this list)
  • Implement and maintain a standard data/technology deployment workflow so that all deliverables and enhancements are released in a disciplined and robust manner
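As referenced in the QC/surveillance item above, here is a hedged sketch of one possible surveillance check: flagging business dates where curated security coverage drops sharply versus the prior date. The table location, column names and the 5% threshold are illustrative assumptions.

  import org.apache.spark.sql.{SparkSession, functions => F}
  import org.apache.spark.sql.expressions.Window

  // Illustrative QC/surveillance step over the curated data set.
  object CoverageSurveillance {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("coverage-surveillance")
        .getOrCreate()

      val curated = spark.read.parquet("/data/curated/security_analytics/")

      // Count distinct securities per as-of date.
      val counts = curated
        .groupBy("as_of_date")
        .agg(F.countDistinct("security_id").as("n_securities"))

      // Compare each date with the previous one and flag drops greater than 5%.
      // The unpartitioned window is fine here: the per-date summary is tiny.
      val w = Window.orderBy("as_of_date")
      val flagged = counts
        .withColumn("prev_n", F.lag("n_securities", 1).over(w))
        .withColumn("drop_pct",
          (F.col("prev_n") - F.col("n_securities")) / F.col("prev_n"))
        .filter(F.col("drop_pct") > 0.05)

      // In practice this would feed an alerting/monitoring system; here we just print.
      flagged.show(truncate = false)

      spark.stop()
    }
  }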

Qualifications
  • Bachelor's or Master's degree in Computer Science or a related field
  • 7-10 years of programming experience, with a minimum of 5 years of relevant experience
  • Experience with Scala
  • Experience with big data technologies such as Hadoop, Pig, Cassandra and Spark
  • Aptitude for designing and building tools for data due diligence and data extraction pipelines
  • Knowledge of ETL, data curation and analytical jobs using distributed computing frameworks such as Spark and Hadoop
  • Knowledge of and experience working with large enterprise-wide data warehouses
  • Java/Python knowledge is a plus
  • DevOps and cloud experience is a plus

Our benefits

To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

About BlackRock

At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.

This mission would not be possible without our smartest investment - the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.

For additional information on BlackRock, please visit careers.blackrock.com | www.blackrock.com/corporate | Instagram: @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity and Affirmative Action Employer. We evaluate qualified applicants without regard to race, color, national origin, religion, sex, sexual orientation, gender identity, disability, protected veteran status, and other statuses protected by law.

BlackRock will consider for employment qualified applicants with arrest or conviction records in a manner consistent with the requirements of the law, including any applicable fair chance law.

