OpenAI
Research Scientist - CoT, Science of Deep Learning
OpenAI, San Francisco, California, United States, 94199
Reinforcement Learning - San Francisco
Chain-of-thought Interpretability Team - Research Science and Engineering
Monitoring models for misaligned or dangerous behavior is a crucial mitigation in our mission of bringing safe artificial general intelligence to the world. We have long been excited by the prospect of monitoring our models’ latent thinking in addition to the outputs that users see. With the advent of models that rely heavily on chain-of-thought (CoT) reasoning to solve complex tasks, we now have access to some of the models’ internal thinking in a far more legible form, which could allow us to monitor their latent thinking for more complex behavior.
The Chain-of-thought Interpretability Team is working on technical approaches to determine whether model CoTs are monitorable, i.e., faithful and legible, and what interventions may improve or degrade monitorability.
About the Role
In this role, you will develop innovative machine learning techniques and collaborate with peers across the organization to advance this critical pillar of OpenAI’s mission. We are looking for individuals with solid engineering skills who can write bug-free ML code and work in the complex codebases behind our state-of-the-art AI systems.
You will thrive in this role if you:
Are excited about OpenAI's mission and eager to move the needle on a critical component of building safe, beneficial AGI.
Are eager to study AI safety through a scientific lens.
Have a background in statistical machine learning, physics, mathematics, or another theoretically and empirically rigorous field.
Are passionate about building, running, and studying AI systems at the largest scales and at the forefront of the field.
Enjoy fast-paced and collaborative research environments focused on achieving the impossible.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them through our products. To achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
Compensation
$295K – $440K + equity