Job Details

ID #52817300
State California
City Sunnyvale
Full-time
Salary USD TBD
Source Meta
Posted 2024-11-03
Date 2024-11-03
Deadline 2025-01-01
Category Et cetera

Software Engineer, SystemML - Scaling / Performance

Sunnyvale, California 94085, USA

Summary:

In this role, you will be a member of the Network.AI Software team, part of the larger DC networking organization. The team develops and owns the software stack around NCCL (NVIDIA Collective Communications Library), which enables multi-GPU and multi-node data communication through HPC-style collectives. NCCL is integrated into PyTorch and sits on the critical path of multi-GPU distributed training; in other words, nearly every distributed GPU-based ML workload in Meta production goes through the software stack the team owns.

At a high level, the team aims to enable Meta-wide ML products and innovations to leverage our large-scale GPU training and inference fleet through an observable, reliable, and high-performance distributed AI/GPU communication stack. One of the team's current focuses is building customized features, software benchmarks, performance tuners, and software stacks around NCCL and PyTorch to improve full-stack distributed ML reliability and performance (e.g., large-scale GenAI/LLM training) from the trainer down to the inter-GPU and network communication layers. We are seeking engineers to work on GenAI/LLM scaling reliability and performance.

Responsibilities:

Enabling reliable and highly scalable distributed ML training on Meta's large-scale GPU training infra with a focus on GenAI/LLM scaling
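
To make the stack described in the summary concrete, here is a minimal, illustrative sketch (not Meta code) of PyTorch's torch.distributed running an allreduce collective over the NCCL backend; the script name and tensor shape are arbitrary:

```python
import os
import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor; NCCL performs the allreduce
    # collective across all participating GPUs.
    t = torch.ones(4, device="cuda") * dist.get_rank()
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: {t}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, e.g., `torchrun --nproc_per_node=8 allreduce_demo.py`, each process drives one GPU while NCCL handles the intra- and inter-node communication underneath.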

Minimum Qualifications:

Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.

Specialized experience in one or more of the following machine learning/deep learning domains: Distributed ML Training, GPU architecture, ML systems, AI infrastructure, high performance computing, performance optimizations, or Machine Learning frameworks (e.g. PyTorch).

Preferred Qualifications:

PhD in Computer Science, Computer Engineering, or relevant technical field

Experience with NCCL and distributed GPU reliability/performance improvement on RoCE/InfiniBand

Experience working with DL frameworks like PyTorch, Caffe2 or TensorFlow

Experience with both data-parallel and model-parallel training, such as Distributed Data Parallel, Fully Sharded Data Parallel (FSDP), Tensor Parallel, and Pipeline Parallel (a minimal FSDP sketch follows this list)

Experience in AI framework and trainer development for accelerating large-scale distributed deep learning models

Experience in HPC and parallel computing

Knowledge of GPU architectures and CUDA programming

Knowledge of ML, deep learning, and LLMs
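
Purely as an illustration of the parallelism families named above: a minimal FSDP sketch in PyTorch. Layer sizes and hyperparameters are arbitrary placeholders, and the launch assumptions match the earlier allreduce example.

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Assumes launch via torchrun with the NCCL backend, as in the sketch above.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Toy model; layer sizes are arbitrary.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

# FSDP shards parameters, gradients, and optimizer state across ranks,
# all-gathering shards just in time for forward/backward compute.
model = FSDP(model, device_id=local_rank)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")
loss = model(x).sum()
loss.backward()   # gradients are reduce-scattered back to shards
optimizer.step()
```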

Public Compensation: $70.67/hour to $208,000/year + bonus + equity + benefits

Industry: Internet

Equal Opportunity: Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.

Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
