Anthropic Limited

Research Scientist / Research Engineer, Pretraining

Anthropic Limited, San Francisco, California, United States, 94199


Anthropic is at the forefront of AI research, dedicated to developing safe, ethical, and powerful artificial intelligence. Our mission is to ensure that transformative AI systems are aligned with human interests. We are seeking a Research Scientist / Research Engineer to join our Pretraining team, which is responsible for developing the next generation of large language models. In this role, you will work at the intersection of cutting-edge research and practical engineering, contributing to the development of safe, steerable, and trustworthy AI systems.

Key Responsibilities:

Conduct research and implement solutions in areas such as model architecture, algorithms, data processing, and optimizer development

Independently lead small research projects while collaborating with team members on larger initiatives

Design, run, and analyze scientific experiments to advance our understanding of large language models

Optimize and scale our training infrastructure to improve efficiency and reliability

Develop and improve developer tooling to enhance team productivity

Contribute to the entire stack, from low-level optimizations to high-level model design

Qualifications:

Advanced degree (MS or PhD) in Computer Science, Machine Learning, or a related field

Strong software engineering skills with a proven track record of building complex systems

Expertise in Python and experience with deep learning frameworks (PyTorch preferred)

Familiarity with large-scale machine learning, particularly in the context of language models

Ability to balance research goals with practical engineering constraints

Strong problem-solving skills and a results-oriented mindset

Excellent communication skills and ability to work in a collaborative environment

A genuine concern for the societal impacts of your work

Preferred Experience:

Work on high-performance, large-scale ML systems

Familiarity with GPUs, Kubernetes, and OS internals

Experience with language modeling using transformer architectures

Knowledge of reinforcement learning techniques

Background in large-scale ETL processes

You'll thrive in this role if you:

Have significant software engineering experience

Are results-oriented with a bias towards flexibility and impact

Willingly take on tasks outside your job description to support the team

Enjoy pair programming and collaborative work

Are eager to learn more about machine learning research

Are enthusiastic to work at an organization that functions as a single, cohesive team pursuing large-scale AI research projects

Are working to align state-of-the-art models with human values and preferences, to understand and interpret deep neural networks, or to develop new models that support these areas of research

View research and engineering as two sides of the same coin, and seek to understand all aspects of our research program as well as possible, to maximize the impact of your insights

Have ambitious goals for AI safety and general progress in the next few years, and are working to create the best outcomes over the long term

Sample Projects:

Optimizing the throughput of novel attention mechanisms

Comparing compute efficiency of different Transformer variants

Preparing large-scale datasets for efficient model consumption

Scaling distributed training jobs to thousands of GPUs

Designing fault tolerance strategies for our training infrastructure

Creating interactive visualizations of model internals, such as attention patterns

At Anthropic, we are committed to fostering a diverse and inclusive workplace. We strongly encourage applications from candidates of all backgrounds, including those from underrepresented groups in tech.

If you're excited about pushing the boundaries of AI while prioritizing safety and ethics, we want to hear from you!
