Vacancy expired!
- Improve BlackRock's products and services suite by creating, expanding, and optimizing our data and data-pipeline architecture.
- Lead architecture on a multi-discipline, multi-region team of data scientists, engineers, and investment professionals on a corporate-wide set of client, investor, and operational problems.
- Lead and/or mentor other data engineers
- You will create and operationalize data pipelines that enable squads to deliver high-quality, data-driven products.
- You will be accountable for managing high-quality datasets exposed for internal and external consumption by downstream users and applications. Lead the creation and maintenance of optimized data pipeline architectures on large, complex data sets.
- Assemble large, complex data sets that meet BlackRock business requirements.
- Act as lead to identify, design, and implement internal process improvements, relaying them to the relevant technology organization.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Automate manual ingest processes and optimize data delivery subject to service level agreements; work with infrastructure on re-design for greater scalability.
- Keep data separated and segregated according to relevant data policies.
- Work with data scientists to develop data-ready tools that support their work.
- Assist in developing business recommendations, presenting findings effectively to stakeholders at multiple levels using visual analytic displays of quantitative information. Communicate findings to stakeholders as necessary.
- 10+ years of experience in Data Engineering or Software Engineering
- Experience leading or managing other data engineers
- Experience building and optimizing 'big data' pipelines, architectures, and data sets. Familiarity with data pipeline and workflow management tools such as Luigi and Airflow.
- Advanced working SQL knowledge and experience with relational databases.
- Experience with Hadoop, Spark, and Kafka
- Experience with Amazon Web Services (AWS) and Google Cloud Platform (GCP)
- Experience with stream-processing systems such as Storm and Spark Streaming
- Experience with object-oriented or scripting languages such as Python, Scala, and Java