Google DeepMind
Research Scientist, Multimodal LLMs
Google DeepMind, Mountain View, California, US 94039
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Snapshot
The VIVID team at Google DeepMind focuses on cutting-edge research to advance the capabilities of foundation models, enabling personalized, multimodal, agentic experiences. Our work spans new modeling approaches, problem definitions, and data, with a strong emphasis on bridging perceptual (audio, image, video) and semantic (language, code) modalities. In addition to producing highly cited research published at top academic venues, our innovations land in flagship models like Gemini and in Google products used by people every day.
About us
Artificial intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The role
We are seeking a highly motivated and talented Research Scientist to join our team at the forefront of multimodal large language model (LLM) research. You will work alongside a world-class team of researchers and engineers to develop and advance the next generation of AI models that can seamlessly integrate and reason across different modalities such as text, images, audio, and video.
Key responsibilities
Develop novel multimodal LLM architectures with a focus on memory: Design and implement new model architectures that effectively integrate diverse modalities like text, images, audio, and video. This includes researching innovative approaches to memory representation and retrieval, enabling these models to learn and reason from long-range dependencies and complex multimodal information.
Investigate post-training techniques for planning and reasoning: Explore and develop methods to enhance the planning and reasoning capabilities of multimodal LLMs after initial training. This may include fine-tuning strategies, reinforcement learning techniques, or novel approaches to improve the agent's ability to solve complex tasks, make decisions, and interact with dynamic environments.
Develop and evaluate applications of multimodal LLMs for personalized and agentic experiences: Translate research findings into practical applications by building and evaluating prototypes that showcase the potential of multimodal LLMs. Focus on creating personalized and agentic experiences, such as interactive assistants, creative tools, or educational platforms, that leverage the model's ability to understand and generate content across multiple modalities.
About you
To set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:
PhD in Computer Science, Statistics, or a related field
Strong publication record in top machine learning conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, ECCV)
Expertise in one or more of the following areas: natural language processing, computer vision, reinforcement learning
In addition, the following would be an advantage:
Experience with training, evaluating, or interpreting large language models
Proven ability to design and execute independent research projects
Excellent communication and collaboration skills
Extensive experience with deep learning frameworks (e.g., PyTorch, JAX) and large-scale model training
The US base salary range for this full-time position is between $136,000 and $210,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.