Texas Instruments
Machine Learning Research Engineer Intern
Texas Instruments, Dallas, Texas, United States, 75215
Change the world. Love your job.
We are seeking a highly motivated PhD student to join our Generative AI team within Kilby Labs this summer to work on cutting-edge Large Language Model (LLM) research and development. As a key member of our team, you will focus on advancing the state-of-the-art in LLMs and Agentic LLMs, with applications in test software generation and optimization. Your work will involve exploring innovative approaches to integrate LLMs with other AI techniques to achieve human-like decision-making capabilities.
In this machine learning research engineer intern role, you'll have the chance to:
- Research and develop novel LLM architectures and training methods to improve performance and efficiency
- Explore the application of LLMs in test software generation, optimization, and other areas of interest
- Collaborate with internal business teams to define and implement AI strategies for our products
- Develop and maintain large-scale deep learning systems, incorporating LLMs and other AI techniques
- Participate in the design and implementation of advanced Agentic LLM systems

Kilby Labs, TI's central technology organization, is a global organization that brings together research engineers from multiple technology disciplines to deliver semiconductor technologies and solutions for future high-volume products at a fast pace. The engineer is expected to perform system modeling and simulation for solution feasibility studies, and to define and build system prototypes to demonstrate functionality and understand application needs and limitations.
Our Generative AI team is at the forefront of AI research, developing and applying AI algorithms, training models, and optimizing hardware and software for real-world problem-solving. We are passionate about harnessing the power of AI to create intelligent systems that can interact, learn, and adapt in complex environments.
Put your talent to work with us as a Machine Learning Research Engineer intern!
Minimum Requirements:
- Currently enrolled in a PhD program in Computer Science, Computer Engineering, Electrical Engineering, or a related field
- Cumulative 3.0/4.0 GPA or higher

Preferred Qualifications:
- Strong background in Natural Language Processing, Large Language Models, and deep learning frameworks
- Proficiency in Python, C/C++, and software design, including debugging, performance analysis, and optimization
- Excellent understanding of LLM architectures, including transformer-based models, and their applications
- Experience with popular deep learning frameworks (e.g., PyTorch, JAX, ONNX) and LLM-specific libraries (e.g., Hugging Face Transformers)
- Strong foundation in text processing, tokenization, and embedding techniques
- Experience with Agentic LLMs, multi-agent systems, and human-AI collaboration
- Knowledge of language model fine-tuning, few-shot learning, and transfer learning techniques
- Familiarity with LLM evaluation metrics, such as perplexity, BLEU, and ROUGE
- Experience with large-scale LLM training and optimization, including distributed training and model parallelism
- Excellent communication and interpersonal skills, with the ability to work in a dynamic and distributed team
#LI-KJ1