Advanced Micro Devices, Inc.
Principal Machine Learning Software Engineer
Advanced Micro Devices, Inc., Bellevue, Washington, 98009
WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences - the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.

AMD together we advance_

Job Description

As a Machine Learning Engineer specializing in low-level performance optimization, you will play a critical role in helping our customers advance AMD-based machine learning infrastructure and in ensuring the efficient deployment of state-of-the-art large models. You will be part of a dynamic team working on groundbreaking projects and will be responsible for optimizing model execution, including GPU kernels, for both inference and training in multi-GPU and multi-node environments. Your contributions will directly impact our ability to deliver cutting-edge AI solutions efficiently and at scale.

Key Responsibilities

- GPU Kernel Optimization: Develop and optimize low-level GPU kernels to accelerate inference and training of large machine learning models. Maximize computational efficiency and reduce execution time while preserving model accuracy.
- Multi-GPU and Multi-Node Optimization: Design and implement strategies for distributed model training and inference across multiple GPUs and nodes. Address data-parallelism and model-parallelism challenges to fully utilize available resources (a minimal data-parallel sketch follows the qualifications list below).
- Performance Profiling: Profile and analyze system and application performance to identify bottlenecks and areas for improvement. Use profiling tools to understand and optimize hardware resource utilization.
- Parallel Computing: Leverage parallel computing techniques to improve the scalability and performance of machine learning workloads. Implement multi-threading and GPU synchronization techniques.
- Model Quantization: Explore and apply model quantization techniques to reduce memory and computation overhead, especially for edge and cloud deployment (a quantization sketch also follows the qualifications list).
- Benchmarking and Testing: Develop benchmarks and testing procedures to assess the performance and stability of optimized models and frameworks. Ensure that solutions meet or exceed the defined performance criteria.
- Collaboration: Work closely with machine learning researchers, software engineers, and infrastructure teams to integrate optimized kernels and solutions into production systems.
- Documentation: Create detailed documentation of optimizations, best practices, and implementation guidelines to support knowledge sharing and maintainable code.

Qualifications

- A Bachelor's, Master's, or Ph.D. in Computer Science, Electrical Engineering, or a related field, or equivalent practical experience.
- Solid understanding of ML acceleration frameworks and runtimes such as ONNX Runtime, DeepSpeed, and vLLM.
- Strong experience in low-level GPU kernel optimization.
- Proficiency in CUDA and GPU programming.
- Experience with distributed computing and multi-GPU environments.
- Proficiency in performance profiling and optimization tools.
- Solid programming skills in Python and/or C++.
- Experience with deep learning frameworks (e.g., JAX, PyTorch, TensorFlow).
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork skills.
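By way of illustration, here is a minimal sketch of the data-parallel pattern referenced in the responsibilities above, using PyTorch DistributedDataParallel. The model, batch size, and hyperparameters are hypothetical stand-ins; on ROCm builds of PyTorch, the "nccl" backend name is served by RCCL, so the same script runs on AMD GPUs.

    # Minimal data-parallel training sketch (hypothetical model and sizes).
    # Launch with: torchrun --nproc_per_node=<num_gpus> train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
        dist.init_process_group(backend="nccl")  # RCCL on ROCm builds
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # A toy model standing in for a large network.
        model = torch.nn.Sequential(
            torch.nn.Linear(4096, 4096),
            torch.nn.GELU(),
            torch.nn.Linear(4096, 4096),
        ).cuda(local_rank)

        # DDP replicates the model on each GPU and all-reduces gradients.
        ddp_model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

        for step in range(10):
            x = torch.randn(8, 4096, device=f"cuda:{local_rank}")
            loss = ddp_model(x).square().mean()  # dummy objective
            optimizer.zero_grad(set_to_none=True)
            loss.backward()  # gradient all-reduce overlaps with backward
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()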
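Similarly, the quantization responsibility covers a family of techniques; the sketch below shows just one of them, PyTorch's eager-mode post-training dynamic quantization, which stores Linear weights as int8 and quantizes activations on the fly. The model is hypothetical, and this particular API targets CPU inference.

    # Post-training dynamic quantization sketch (hypothetical model).
    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 1024),
        torch.nn.ReLU(),
        torch.nn.Linear(1024, 8),
    ).eval()

    # Replace Linear layers with int8 dynamically quantized equivalents,
    # reducing memory footprint and compute cost at inference time.
    qmodel = torch.ao.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 1024)
    with torch.no_grad():
        print(qmodel(x).shape)  # same interface as the float model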
At AMD, your base pay is one part of your total rewards package. Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for role-based incentives, such as an annual bonus or a sales incentive. Many AMD employe