Job Details

ID #20979052
State California
City Irvine
Job type Permanent
Salary USD $185,000 - $220,000
Source Task Management Inc
Showed 2021-10-11
Date 2021-10-06
Deadline 2021-12-04
Category Et cetera

Engineering Manager Data Platforms

California, Irvine, 92602 Irvine USA

Vacancy expired!

Position Overview

Seeking a highly technical engineering manager to direct the evolution of the data platform, which ingests over ten thousand datasets, stores over a petabyte of spatial, relational and raster data, and serves over five billion API calls per month in support of over two trillion dollars of real estate transactions annually.

The Manager, Engineering (Data Platform) has three key functions:
  • Execution of the data platform strategy, transitioning from traditional RDBMS clusters and API monoliths to modern data lakes, distributed data processing, real-time data streams, automated ingest pipelines and microservices on top of horizontally scalable cloud infrastructure.
  • Creation of new ingestion, storage, compute and service capabilities, leveraging big data technologies and working alongside the data ingestion services team.
  • Supporting delivery of new data engineering processes to improve velocity, quality and efficiency, and to expand the number and types of datasets within the ecosystem.

An ideal candidate is a hands-on technology manager with experience building distributed data storage and processing systems, running large-scale data ingestion operations, and creating high-performance engineering organizations. This individual will lead the transformation and evolution of the data platform and play a key role in the convergence of industry-leading but independent brands into a single informatics platform at the center of the commercial real estate ecosystem.

What you will do and achieve:
  • Lead the creation of a next generation data platform to ingest tens of thousands of datasets, support petabyte-scale storage and compute, and deliver billions of real-time queries per month, while maintaining cost effectiveness and implementing appropriate data safeguards.
  • Manage a growing organization of software engineers building the next generation of services supporting data storage and compute strategy.
  • Identify opportunities to improve the performance and scale of APIs, the velocity and efficiency of data ingestion, the connectivity and linking of datasets, and the extraction of data from natural language and imagery sources.
  • Work with senior leadership to translate platform vision and strategy into an actionable roadmap, maintain KPIs to track progress, and deliver on time and on budget.
  • Collaborate with the application engineering, product management, project management, data science and market-facing teams to align the data platform with business needs.
  • Define standards and practices around automation, system reliability, data architecture, process management, containerization, infrastructure-as-code, auto-scaling, data security, etc.
  • Serve as a mentor for team members, an evangelist of the data platform for other engineering teams, and a translator between engineering and the business. This will include facilitating and participating in design sessions, code reviews and sprint ceremonies, as well as giving presentations on the data platform for technical and non-technical audiences.
  • Investigate and resolve technical and non-technical issues, including leading and participating within incident management processes and root cause analyses.
  • Contribute to technology strategy as a member of the architectural leadership team.

Who you are:

Education
  • B.S. in Computer Science (or equivalent)

Experience
  • 3 or more years of experience managing software engineering teams
  • 3 or more years of experience with big data systems and cloud architecture
  • 7 or more years of experience in software engineering

Knowledge & Skills
  • Big data architecture and systems, including distributed data processing systems (such as Spark or Dask), distributed data storage systems (such as Parquet or HDFS), low-latency data lake query architectures (such as Alluxio) and real-time streaming systems (such as Kafka)
  • Data lake design strategies for metadata, ontology, governance, authorization, etc.
  • Test automation for data quality, data flow, and API endpoints
  • Data engineering techniques for big data, including data automation frameworks (such as Airflow or Prefect), metadata management (such as Amundsen) and process management strategies
  • Infrastructure management and automation, such as Kubernetes, Terraform and Chef
  • Cloud infrastructure management, ideally with experience in AWS, including both technical aspects, such as solutions architecture, and non-technical aspects, such as financial planning
  • Modern practices around agile development, release management, continuous integration, system reliability, cloud architecture, authN/Z and data security
  • Fundamentals of computer science and software engineering

