Vacancy expired!
- Strengthen BlackRock's retail distribution capabilities and services suite by creating, expanding, and optimizing the architecture of our next-generation data platform.
- Collaborate with various partners to understand cross-functional requirements and convert them into reusable service components.
- Develop multi-threaded server-based solutions for a wide variety of web and mobile-based applications.
- Top technical/programming skills: Java (Scala is a plus), RDBMS (MS SQL, MySQL, or similar), messaging systems, and the Hadoop ecosystem and/or distributed NoSQL databases such as Cassandra.
- Define and drive the data-modeling solution for OLTP and distributed data stores.
- You will be a builder and an owner of your work product!
- Participate in the design and implementation of a scalable computation and data-distribution platform.
- You will create and operationalize data pipelines to enable the squad to deliver high-quality, data-driven products.
- Act as a lead to identify, design, and implement internal process improvements, and relay them to the relevant technology organization.
- Develop and mentor other team members in design and development, and provide development estimates on projects and tasks.
- Work with data scientists to develop data-ready tools that support their work.
- Identify, investigate, and resolve data discrepancies by finding the root cause of issues; work with partners across various cross-functional teams to prevent future occurrences.
- Understand existing systems and resolve operations issues while working with other support staff located across the globe.
- Automate manual ingest processes and optimize data delivery subject to service level agreements; work with infrastructure on re-design for greater scalability.
- Design, maintain, and take ownership of the data infrastructure.
- Stay up to date with the latest trends in the big-data space and recommend their adoption as needed.
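As a rough, hypothetical sketch of the pipeline work described above (ingesting raw records, flagging data discrepancies, and producing a data-ready output), the following stdlib-Python example is purely illustrative; all names and the toy schema are invented for this sketch and are not BlackRock code:

```python
import csv
import io
from datetime import datetime

def ingest(raw_csv: str) -> list[dict]:
    """Parse raw CSV text into row dicts (stand-in for an automated ingest step)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def validate(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split rows into clean and rejected, surfacing data discrepancies at the source."""
    clean, rejected = [], []
    for row in rows:
        try:
            row["amount"] = float(row["amount"])
            row["trade_date"] = datetime.strptime(row["trade_date"], "%Y-%m-%d").date()
            clean.append(row)
        except (KeyError, ValueError):
            rejected.append(row)
    return clean, rejected

def transform(rows: list[dict]) -> dict[str, float]:
    """Aggregate amounts per account -- a toy 'data-ready' output for downstream consumers."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["account"]] = totals.get(row["account"], 0.0) + row["amount"]
    return totals

raw = "account,trade_date,amount\nA1,2024-01-02,100.5\nA1,2024-01-03,-20.5\nA2,bad-date,10\n"
clean, rejected = validate(ingest(raw))
totals = transform(clean)
# totals == {"A1": 80.0}; the row with the malformed date is rejected
```

In a production setting the same ingest/validate/transform stages would typically run under a workflow manager and write rejects to a quarantine location for root-cause analysis.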
- 5+ years of overall hands-on experience in computer engineering/software development.
- 3-5+ years of hands-on experience with:
  - Java/J2EE/Spring architecture design and development.
  - Python for data transformation and server-side implementation (core Python, Pandas, and PySpark).
- 3-5+ years of experience using SQL (e.g., MS SQL Server, MySQL), including stored procedures and complex queries.
- 3+ years of experience using Hive (on Spark), with proficiency in bucketing, partitioning, tuning, and the different file formats (ORC, Parquet, and Avro).
- Strong analytical, architectural and programming skills.
- Working experience with any NoSQL database (Cassandra, MongoDB).
- Committed to code quality, software testing, and CI/CD.
- Excellent analytical and problem-solving skills.
- Ability to work in a team-oriented and distributed environment.
- B.S. / M.S. degree in Computer Science, Engineering or a related discipline.
- Experience with Hadoop or any distributed system.
- Experience with stream-processing systems such as Storm or Spark Streaming.
- Experience with building and optimizing 'big data' pipelines, architectures, and data sets.
- Familiarity with data-pipeline and workflow-management tools (e.g., Luigi, Airflow, NiFi, Kylo).
- Experience with containerization architecture: Docker and Kubernetes.
- Experience with web development (JavaScript, Angular/React, Node, etc.).
- Knowledge of any graph database.
- Any experience with a cloud platform is a huge plus!
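To illustrate the bucketing and partitioning concepts named in the Hive requirement above, here is a conceptual stdlib-Python sketch (not Hive itself) that routes rows to partition directories and hash buckets the way a partitioned, clustered Hive table lays out its files; the schema and path layout are invented for this example:

```python
import zlib
from collections import defaultdict

NUM_BUCKETS = 4

def bucket_of(key: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Deterministic hash bucket, analogous to Hive's CLUSTERED BY hashing."""
    return zlib.crc32(key.encode()) % num_buckets

def route(rows: list[dict]) -> dict[str, list[dict]]:
    """Group rows by partition column and bucket, mimicking a Hive-style table layout."""
    layout: dict[str, list[dict]] = defaultdict(list)
    for row in rows:
        path = f"trade_date={row['trade_date']}/bucket_{bucket_of(row['account'])}"
        layout[path].append(row)
    return dict(layout)

rows = [
    {"account": "A1", "trade_date": "2024-01-02", "amount": 100.5},
    {"account": "A2", "trade_date": "2024-01-02", "amount": 10.0},
    {"account": "A1", "trade_date": "2024-01-03", "amount": -20.5},
]
layout = route(rows)
# Each key is a partition/bucket path; a query filtered on trade_date would
# read only the matching 'trade_date=...' directories (partition pruning),
# and equal account keys always land in the same bucket (useful for joins).
```

The payoff in a real warehouse is the same as in the sketch: partition pruning skips irrelevant directories, and consistent bucketing lets the engine join or sample on the clustered column without a full shuffle.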