Scale AI, Inc.
Senior/Staff Machine Learning Research Scientist, Generative AI
Scale AI, Inc., San Francisco, California, United States, 94199
Scale's Generative AI ML team conducts research on models, supervision, and algorithms that advance frontier models for Scale's applied-ML teams and the broader AI community. Scale is uniquely positioned at the heart of the field of AI as an indispensable provider of training and evaluation data and end-to-end solutions for the ML lifecycle. You will work closely with Scale's Generative AI product team focused on accelerating AI adoption for some of the largest companies in the world.
At Scale, our research is driven by product needs. Your focus will be on developing new foundational models, algorithms, and forms of supervision for Generative AI. You will lead the writing, publishing, and internal adoption of your work with applied teams. You will be involved end-to-end, from the inception and planning of new research agendas to creating high-quality datasets, implementing models and the associated training and evaluation stacks, and producing high-caliber publications in the form of peer-reviewed journal articles, blogs, white papers, and internal presentations and documentation. If you are excited about shaping the future of AI via fundamental innovations, we would love to hear from you!
You will:
- Publish new methods that advance frontier models/LLMs via human-in-the-loop supervision
- Release papers, datasets, and open-source code that improve state-of-the-art open-source models
- Evaluate, adapt, and develop new state-of-the-art language and/or multimodal foundation models
Ideally you'd have:
- A track record of high-caliber publications in peer-reviewed machine learning venues (e.g., NeurIPS, ICLR, ICML, EMNLP, CVPR, AAAI)
- Interest in capability and alignment research
- At least 3 to 5 years of model training and evaluation experience
- Strong skills in NLP, LLMs, and deep learning
- Solid background in algorithms, data structures, and object-oriented programming
- Experience working with a cloud technology stack (e.g., AWS or GCP) and developing machine learning models in a cloud environment
- Strong high-level programming skills (e.g., Python) and experience with frameworks and tools such as PyTorch Lightning, Kubeflow, TensorFlow, Transformers, etc.
- Strong written and verbal communication skills to operate in a cross-functional team environment and to communicate your work effectively and with impact
- A PhD in AI, Machine Learning, Computer Science, or a related field
Nice to haves:
- Experience dealing with large-scale AI problems, ideally in the generative AI field
- Demonstrated research expertise in post-training methods and/or next-generation use cases for large language models, including instruction tuning, RLHF, tool use, reasoning, agents, and multimodality