Anthropic Limited
Research Engineer, Societal Impacts
Anthropic Limited, San Francisco, California, United States, 94199
About the Role
As a Research Engineer on the Societal Impacts team, you'll design and build critical infrastructure that enables and accelerates foundational research into how our AI systems impact people and society. Your work will directly contribute to our research publications, policy campaigns, safety systems, and products.
Read more about our team in our recruiting blog post.
Strong candidates will have a track record of designing and running experiments on machine learning systems, building data processing pipelines, architecting and implementing high-quality internal infrastructure, and working in a fast-paced startup environment, along with an eagerness to develop their own research and technical skills. The ideal candidate will enjoy a mixture of running experiments, developing new tools and evaluation suites, working cross-functionally with multiple research and product teams, and striving for beneficial and safe uses of AI.
Responsibilities:
Design and implement scalable technical infrastructure that enables researchers to efficiently run experiments and evaluate AI systems
Architect systems that can handle uncertain and changing requirements while maintaining high standards of reliability
Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions
Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic’s safety mission
Interface with and improve our internal technical infrastructure and tools
Generate net-new insights about the potential societal impact of systems being developed by Anthropic
Translate insights to inform Anthropic strategy, research, and public policy
You may be a good fit if you:
Have experience building and maintaining production-grade internal tools or research infrastructure
Take pride in writing clean, well-documented code in Python that others can build upon
Are comfortable making technical decisions with incomplete information while maintaining high engineering standards
Have experience with distributed systems and can design for scale and reliability
Have a track record of using technical infrastructure to interface effectively with machine learning models
Have experience deriving insights from imperfect data streams
Strong candidates may also have experience with:
Maintaining large, foundational infrastructure
Building simple interfaces that allow non-technical collaborators to evaluate AI systems
Working with and prioritizing requests from a wide variety of stakeholders, including research and product teams
Scaling and optimizing the performance of tools
Representative Projects:
Design and implement scalable infrastructure for running large-scale experiments on how people interact with our AI systems
Build robust monitoring systems that help us detect and understand potential misuse or unexpected behaviors
Create internal tools that help researchers, policy experts, and product teams quickly analyze dynamically changing AI system characteristics
Deadline to apply: None. Applications will be reviewed on a rolling basis.