NVIDIA Corporation
System Software Engineer, LLM Inference and Performance Optimization
NVIDIA Corporation, Santa Clara, California, US, 95053
As a System Software Engineer (LLM Inference & Performance Optimization), you will be at the heart of our AI advancements. Our team is dedicated to pushing the boundaries of machine learning and optimizing large language models (LLMs) for flawless, real-time performance across diverse hardware platforms. This is your chance to contribute to world-class solutions that impact the future of technology.
What you'll be doing:
Design, implement, and optimize inference logic for fine-tuned LLMs, working closely with Machine Learning Engineers.
Develop efficient, low-latency glue logic and inference pipelines scalable across various hardware platforms, ensuring outstanding performance and minimal resource usage.
Apply hardware accelerators such as GPUs and other specialized hardware to improve inference speed and enable deployment in real-world applications.
Collaborate with cross-functional teams to integrate models seamlessly into diverse environments, meeting strict functional and performance requirements.
Conduct detailed performance analysis and optimization for specific hardware platforms, focusing on efficiency, latency, and power consumption.
What we need to see:
8+ years of expert-level C++ proficiency, with a deep understanding of memory management, concurrency, and low-level optimization.
M.S. or higher degree (or equivalent experience) in Computer Science, Computer Engineering, or a related field.
Strong experience in system-level software engineering, including multi-threading, data parallelism, and performance tuning.
Validated expertise in LLM inference, with experience in model-serving frameworks such as ONNX Runtime or TensorRT.
Familiarity with real-time systems and performance-tuning techniques, especially for machine learning inference pipelines.
Ability to work collaboratively with Machine Learning Engineers and cross-functional teams to align system-level optimizations with model goals.
Extensive understanding of hardware architectures and the ability to apply specialized hardware for optimized ML model inference.
Ways to stand out from the crowd:
Experience with deep learning hardware accelerators, such as NVIDIA GPUs.
Familiarity with ONNX, TensorRT, or cuDNN for LLM inference on GPU.
Experience with low-latency optimizations and real-time system constraints for ML inference.
The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits.
NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.