x.ai
AI Engineer & Researcher - Inference, Bay Area (San Francisco and Palo Alto)
x.ai, Palo Alto, California, United States, 94306
About xAI
xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.
Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. Engineers are encouraged to work across multiple areas of the company, and as a result, all engineers and researchers share the title "Member of Technical Staff."
We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important.
All engineers and researchers are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.
xAI does not have recruiters. Every application is reviewed directly by a technical member of the team.
Tech Stack
Python / Rust
PyTorch / JAX
NCCL
CUDA (C++ and Triton)
Location
The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to be located near the Bay Area or open to relocation.
Focus
Optimizing the latency and throughput of model inference.
Building reliable production serving systems for millions of users.
Accelerating research on scaling test-time compute.
Innovating new ideas that bring us closer to our goal: developing AI systems that can accurately understand the universe and generate new knowledge.
Ideal Experiences
Worked on system optimizations for model serving, such as batching, caching, load balancing, and model parallelism.
Worked on low-level optimizations for inference, such as CUDA kernels and code generation.
Worked on algorithmic optimizations for inference, such as quantization, distillation, and speculative decoding.
Interview Process
After submitting your application, the team reviews your CV and statement of exceptional work. If your application passes this stage, you will be invited to a 15-minute interview (“phone interview”) during which a member of our team will ask some basic questions. If you clear the initial phone interview, you will enter the main process, which consists of four technical interviews:
Coding assessment in a language of your choice.
Systems hands-on: Demonstrate practical skills in a live problem-solving session.
Project deep-dive: Present your past exceptional work to a small audience.
Meet and greet with the wider team.
Our goal is to finish the main process within one week. We don’t rely on recruiters for assessments. Every application is reviewed by a member of our technical team. All interviews will be conducted via Google Meet.
Annual Salary Range
$180,000 - $440,000 USD