x.ai
Hardcore Engineer - Pretraining Infrastructure, Bay Area (San Francisco and Palo Alto)
x.ai, Palo Alto, California, United States, 94306
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.
Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. Engineers are encouraged to work across multiple areas of the company, and as a result, all engineers and researchers share the title "Member of Technical Staff."
We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important.
All engineers and researchers are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.
xAI does not have recruiters. Every application is reviewed directly by a technical member of the team.
Tech Stack
Python / Rust / C++
JAX and XLA
NCCL
CUDA (C++ and Triton)
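As a rough illustration of how this stack fits together (a minimal sketch, not xAI's code; the model, shapes, and learning rate are placeholder toys), here is a data-parallel training step in JAX: the step is compiled by XLA, and the gradient all-reduce is the kind of collective that NCCL executes on GPU clusters.

import functools
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model standing in for a real forward pass.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

# pmap compiles the step via XLA and runs one copy per device.
@functools.partial(jax.pmap, axis_name="data")
def train_step(params, x, y):
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    # Average gradients across devices; on GPUs this all-reduce is
    # typically executed by NCCL.
    grads = jax.lax.pmean(grads, axis_name="data")
    # Plain SGD with a placeholder learning rate, for brevity.
    params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
    return params, loss

n = jax.local_device_count()
params = jax.device_put_replicated(
    {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}, jax.local_devices()
)
x = jnp.ones((n, 16, 8))  # leading axis maps one batch shard per device
y = jnp.ones((n, 16, 1))
params, loss = train_step(params, x, y)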
Location
The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to live near the Bay Area or be open to relocation.
Focus
Design and implement large-scale distributed training systems.
Profile, debug, and optimize multi-host GPU utilization (a short profiling sketch follows this list).
Co-design hardware, software, and algorithms.
Maintain and innovate on the codebase.
Build tools to boost the productivity of the team.
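For the profiling bullet above, here is a minimal single-host sketch (illustrative only; the output path is a placeholder) that captures a JAX/XLA device trace viewable in TensorBoard or Perfetto. Real multi-host GPU profiling adds per-host trace collection and communication-level timing, but the entry point is the same.

import jax
import jax.numpy as jnp

# Capture a device trace for later inspection in TensorBoard/Perfetto.
with jax.profiler.trace("/tmp/jax-trace"):
    x = jnp.ones((4096, 4096))
    y = (x @ x).block_until_ready()  # block so the matmul lands inside the trace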
Ideal Experiences
Experience in configuring and troubleshooting operating systems for maximum performance.
Built scalable training frameworks for AI models on HPC clusters, including but not limited to:
Scalable orchestration frameworks and tools
Machine learning compilers and runtimes such as XLA, MLIR, and Triton
Distributed training strategies such as FSDP, Megatron, and pipeline parallelism
NCCL or custom communication libraries for performant communication collectives
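To illustrate the last two bullets, here is a minimal sketch (an assumption-laden toy, not a production implementation) of the core communication pattern behind FSDP/ZeRO-style training: each device holds a shard of a weight matrix and all-gathers it just before use, expressed in JAX with shard_map. On GPU clusters, that all-gather is the kind of collective NCCL provides.

import functools
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental.shard_map import shard_map

mesh = Mesh(np.array(jax.devices()), axis_names=("fsdp",))

@functools.partial(
    shard_map,
    mesh=mesh,
    in_specs=(P("fsdp"), P()),  # weights sharded across devices, input replicated
    out_specs=P(),
)
def sharded_matmul(w_shard, x):
    # Gather the full weight matrix from all shards just before use:
    # the defining collective of FSDP-style parameter sharding.
    w = jax.lax.all_gather(w_shard, "fsdp", tiled=True)
    return x @ w

w = jnp.ones((8, 4))   # first dim must divide evenly across devices
x = jnp.ones((2, 8))
out = sharded_matmul(w, x)  # shape (2, 4), identical on every device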
Interview Process
After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:
Coding assessment in a language of your choice.
Systems hands-on: Demonstrate practical skills in a live problem-solving session.
Project deep-dive: Present your past exceptional work to a small audience.
Meet and greet with the wider team.
Our goal is to finish the main process within one week. We don’t rely on recruiters for assessments. Every application is reviewed by a member of our technical team. All interviews will be conducted via Google Meet.
Annual Salary Range
$180,000 - $440,000 USD