NVIDIA
Python Software Engineer, GPU-Accelerated LLM Data Applications
NVIDIA, Santa Clara, California, us, 95053
NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and outstanding people! Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work.
Come join the team and see how you can make a lasting impact on the world! NVIDIA is seeking a Python Software Engineer to further our efforts to GPU-accelerate data engineering for Large Language Model (LLM) tools and libraries. This role is pivotal in accelerating pre-processing pipelines for high-quality multi-modal dataset curation. The day-to-day focus is on developing efficient, scalable systems for de-duplicating, filtering, and classifying training corpora for foundation model LLMs, as well as ingesting and prepping datasets for use in Retrieval Augmented Generation (RAG) pipelines. Fundamental to these efforts are iterative testing and improvement of system cost, speed, & accuracy through micro-optimization, prompt engineering, fine-tuning, and applying new research. The ideal candidate is happiest releasing early and often, actively seeks user feedback, and listens for the intent behind feature requests. You are comfortable objectively evaluating the latest AI models and frameworks with an eye on acceleration potential. Would you like to run your training & test experiments on our supercomputers on thousands of GPUs? Come work with us!
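To give a flavor of this kind of work, here is a minimal, illustrative sketch of exact de-duplication of a text corpus on GPU with RAPIDS cuDF. The file paths and column names ("id", "text") are hypothetical placeholders, and real curation pipelines also involve fuzzy de-duplication, filtering, and classification at much larger, multi-node scale.

```python
# Illustrative sketch: exact de-duplication of a text corpus on GPU with RAPIDS cuDF.
# Paths and column names below are hypothetical placeholders.
import cudf

# Load one shard of the corpus directly onto the GPU.
df = cudf.read_parquet("corpus_shard.parquet")

# Normalize whitespace and case before hashing so trivially different copies collide.
normalized = df["text"].str.lower().str.normalize_spaces()

# Hash each document and keep only the first occurrence of every hash.
df["doc_hash"] = normalized.hash_values()
deduped = df.drop_duplicates(subset="doc_hash", keep="first")

print(f"kept {len(deduped)} of {len(df)} documents")
deduped.to_parquet("corpus_shard_deduped.parquet")
```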
What you'll be doing:
Develop and optimize Python-based data processing frameworks, ensuring efficient handling of large datasets on GPU-accelerated environments, vital for LLM training.
Contribute to the design and implementation of RAPIDS and other GPU-accelerated libraries, focusing on seamless integration and performance enhancement in the context of LLM training data preparation and RAG pipelines.
Lead development and iterative optimization of components for RAG pipelines, ensuring they showcase GPU acceleration & the best-performing models for improved TCO.
Collaborate with teams of LLM & ML researchers to develop full-stack, GPU-accelerated data preparation pipelines for multimodal models.
Implement benchmarking, profiling, and optimization of innovative algorithms in Python across various system architectures, specifically targeting LLM applications.
Work closely with diverse teams to understand requirements, build & evaluate POCs, and develop roadmaps for production-level tools and library features within the growing LLM ecosystem.
What we need to see:
Advanced degree in Computer Science, Computer Engineering, or a related field (or equivalent experience).
5+ years of Python library development experience, including CI systems (GitHub Actions), integration testing, benchmarking, & profiling
Proficiency with LLMs and RAG pipelines: prompt engineering, LangChain, llama-index
Deep understanding of the PyData & ML/DL ecosystems, including RAPIDS, Pandas, NumPy, scikit-learn, XGBoost, Numba, PyTorch
Familiarity with distributed programming frameworks like Dask, Apache Spark, or Ray
Visible contributions to open-source projects on GitHub
Ways to stand out from the crowd:
Active engagement (published papers, conference talks, blogs) in the data science community
Experience with production-level data pipelines, especially SQL-based
Experience with software packaging technologies: pip, conda, Docker images
Familiarity with Docker Compose, Kubernetes, and cloud deployment frameworks
Knowledge of parallel programming approaches, especially in CUDA C++
With a competitive salary package and benefits, NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. Are you a creative and autonomous Python Software Engineer who loves the challenge of developing GPU-accelerated LLM data applications? Do you have a genuine passion for advancing the state of AI & machine learning across a variety of industries? If so, we want to hear from you. Come join us in these exciting times and make a sizable difference in the exploding world of Deep Learning!
The base salary range is 148,000 USD - 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits (https://www.nvidia.com/en-us/benefits/). NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.