Amazon
Machine Learning Engineer III, FAR (Frontier AI & Robotics)
Amazon, Seattle, Washington, US, 98127
DESCRIPTION
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers like Pieter Abbeel, Rocky Duan, and Peter Chen to make breakthrough foundation models run at production scale. As a Senior Machine Learning Engineer embedded in our science team, you'll be instrumental in transforming cutting-edge research into high-performance production systems. You'll collaborate directly with scientists to optimize large-scale transformer architectures for robotics applications, leveraging your expertise in CUDA and TensorRT to achieve unprecedented inference efficiency at Amazon scale. In this role, you'll balance deep technical optimization work with strategic input on model architecture decisions, ensuring our innovative robotics models are designed with performance in mind from the ground up. You'll leverage NVIDIA's acceleration stack and other compilation techniques to tackle ambitious performance targets, working at the intersection of large language models and real-world robotics applications.
Key job responsibilities
- Drive inference optimization strategies for large-scale foundation models using TensorRT, CUDA, and other NVIDIA tools
- Collaborate closely with scientists to influence model architectures for optimal hardware utilization
- Design and implement efficient compilation pipelines for complex transformer architectures
- Develop comprehensive benchmarking frameworks to measure and optimize model performance
- Build robust monitoring solutions to ensure reliable model serving at scale
- Explore and evaluate emerging optimization techniques, including ONNX Runtime and other ML compilers
- Maintain high engineering standards through proper testing, documentation, and code review practices
A day in the life
- Optimize transformer blocks using custom CUDA kernels and TensorRT optimization techniques
- Partner with scientists to analyze model architectures and propose efficiency improvements
- Implement and benchmark various optimization strategies for large-scale models
- Debug performance bottlenecks using NVIDIA profiling tools
- Participate in technical discussions about new model architectures with the science team
- Design and maintain performance monitoring systems for production deployment
- Prototype new acceleration approaches using emerging compilation frameworks
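As a rough illustration of the benchmarking work described above, the sketch below is a minimal, framework-agnostic latency harness: warmup iterations first (to absorb one-time costs such as kernel compilation or caching), then timed runs summarized as percentiles. The `benchmark` helper and its parameters are hypothetical, not Amazon tooling; in practice the timed callable would wrap a TensorRT or CUDA inference call.

```python
import time
import statistics

def benchmark(fn, *, warmup: int = 10, iters: int = 100) -> dict:
    """Time `fn` after `warmup` untimed calls; report latency stats in ms."""
    # Warmup: let one-time effects (JIT, caches, lazy init) settle first.
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)
    samples.sort()
    # Tail latency (p99) often matters more than the mean for serving SLOs.
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[min(len(samples) - 1, int(len(samples) * 0.99))],
        "mean_ms": statistics.fmean(samples),
    }

# Usage: substitute a real inference call for this stand-in workload.
stats = benchmark(lambda: sum(i * i for i in range(10_000)), iters=50)
print(stats)
```

Reporting p50 alongside p99 is a deliberate choice: production serving targets are usually expressed as tail-latency bounds, and the two diverging is itself a useful bottleneck signal.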
BASIC QUALIFICATIONS
- Bachelor's degree in computer science or equivalent
- 5+ years of non-internship professional software development experience
- 5+ years of professional experience with at least one programming language
- 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience as a mentor, tech lead, or leader of an engineering team
- Strong expertise in Python, C++, and CUDA programming
- Experience with TensorRT or similar ML optimization frameworks
- Track record of optimizing ML models for production
PREFERRED QUALIFICATIONS
- Expertise in NVIDIA's ML stack (cuDNN, CUDA Graph, etc.)
- Experience with ML compilers (ONNX Runtime, TVM, etc.)
- Experience with transformer model optimization
- Background in performance profiling and optimization
- Experience working directly with research teams
- Track record of building robust monitoring systems
- Experience with large-scale ML serving systems
Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.