Essential
Member of Technical Staff: Machine Learning Infrastructure Engineer
Essential, San Francisco, California, United States, 94199
About Us
Essential AI’s mission is to deepen the partnership between humans and computers, unlocking collaborative capabilities that far exceed what can be achieved today. We believe that building delightful end-user experiences requires innovating across the stack, from the UX all the way down to models that achieve the best user value per FLOP. We believe that a small, focused team of motivated individuals can create outsized breakthroughs. We are building a world-class, multi-disciplinary team excited to solve hard real-world AI problems. We are well-capitalized and supported by March Capital and Thrive Capital, with participation from AMD, Franklin Venture Partners, Google, KB Investment, and NVIDIA.

The Role
The Machine Learning Infrastructure Engineer will architect and build the compute infrastructure that powers the training and serving of our models. This requires a full understanding of the complete backend stack, from frameworks to compilers to runtimes to kernels. The role also requires familiarity with tools and services common in cloud-based infrastructure, such as Kubernetes and Docker.

What you’ll be working on
- Design, build, and maintain scalable machine learning infrastructure to support our model training, inference, and applications.
- Design and implement scalable machine learning and distributed systems that enable training and scaling of LLMs, including parallelism methods that make training fast and reliable.
- Develop tools and frameworks to automate and streamline ML experimentation and management.
- Collaborate with researchers and product engineers to deliver magical product experiences through large language models.
- Work at lower levels of the stack to build high-performing training and serving infrastructure, including researching new techniques and writing custom kernels as needed.
- Optimize performance and efficiency across different accelerators.

What we are looking for
- A strong understanding of the architectures of new AI accelerators (TPU, IPU, HPU, etc.) and their tradeoffs.
- Knowledge of parallel computing concepts and distributed systems.
- Prior experience in performance tuning of LLM training and/or inference workloads; experience with MLPerf or internal production workloads is valued.
- 6+ years of relevant industry experience leading the design of large-scale, production ML infrastructure systems.
- Experience training and building large language models with frameworks such as Megatron and DeepSpeed, and deploying them with frameworks like vLLM, TGI, or TensorRT-LLM.
- Comfort working under the hood with kernel languages like OpenAI Triton and Pallas, and compilers like XLA.
- Experience with INT8/FP8 training and inference, quantization, and/or distillation.
- Knowledge of container technologies like Docker and Kubernetes, and cloud platforms like AWS and GCP.
- Intermediate fluency with networking fundamentals such as VPCs, subnets, routing tables, and firewalls.

We encourage you to apply for this position even if you don’t check all of the above requirements but want to spend time pushing on these techniques.

We are based in San Francisco and work fully onsite five days a week. We offer relocation assistance to new employees.

The base pay target for the role seniority described in this job description is up to $225,000 in San Francisco, CA. Final offer amounts depend on various job-related factors, including where you place on our internal performance ladders, which is based on past work experience, relevant education, performance in our interviews, and our benchmarks against market compensation data.
In addition to cash pay, full-time regular positions are eligible for equity, 401(k), health benefits, and other benefits such as daily onsite lunches and snacks; some of these benefits may be available for part-time or temporary positions.

Essential AI is committed to providing a work environment free of discrimination and harassment, as well as equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. You may view all of Essential AI’s recruiting notices here, including our EEO policy, recruitment scam notice, and recruitment agency policy.