
Founding Research Scientist
San Francisco, CA



Goodfire’s mission is to advance humanity's understanding of AI by examining the inner workings of advanced AI models (or “AI Interpretability”). As an applied research lab, we bridge the gap between theoretical science and practical applications of interpretability.

We’re looking for agentic, mission-driven, kind, and thoughtful people to help us build the future of interpretability. If you believe understanding AI systems is critical for our future, join us!

Goodfire is a public benefit corporation based in San Francisco, and all roles are full-time and in person.

The role:

We are looking for a Founding Research Scientist to join our team and help develop robust, scalable systems for deploying interpretability techniques on large AI models. You will collaborate closely with our research team to translate novel interpretability methods into production-ready tools and work on scaling our infrastructure to handle increasingly large models and complex use cases.

Core responsibilities:

  • Conduct impactful research in the fields of mechanistic interpretability and model editing.
  • Develop novel techniques and algorithms for extracting, analyzing, visualizing, and manipulating the internal representations and decision-making processes of large AI models.
  • Collaborate with our engineering team to design and implement scalable, robust systems for applying interpretability and model editing techniques at scale.
  • Publish research findings in top-tier AI/ML conferences and journals, and present work at industry events and workshops.
  • Mentor and guide junior research team members, fostering a culture of innovation, rigor, and collaboration.
  • Stay up to date with the latest developments in AI interpretability and model editing research, and contribute to the broader scientific community through open-source projects and community initiatives.

Who you are:

Goodfire is looking for experienced individuals who embody our values and share our deep commitment to making interpretability accessible. We care deeply about building a team that shares our values:

  • High agency: You are self-directed, proactive, and take ownership of your work, setting and accomplishing ambitious goals independently while collaborating effectively with others.
  • Constant improvement: You have deep intellectual curiosity and are always seeking to expand your knowledge and reflect on what you could be doing better.
  • Strong opinions, loosely held: You foster an environment where well-intentioned disagreement leads to reaching the best solutions. You argue strongly for what you believe and are not afraid to change your mind when you are wrong.
  • Deeply mission driven: You understand that the path to building a game-changing interpretability product will not be easy and are prepared to put in the hard work every day. You put the team before yourself and are fully committed to advancing our understanding of AI.
  • Thoughtful and pragmatic: You approach your work and interactions with others with nuance and humility. You think deeply about all angles of a problem, not just the one you advocate for. You operate with the understanding that not everything can be perfect.

If you share our values and have at least five years of relevant experience, we encourage you to apply and join us in shaping the future of how we design AI systems.

What we are looking for:

  • PhD in Computer Science, Machine Learning, or a related field, or equivalent experience.
  • Demonstrated research intuition for interpretability and model editing research.
  • Solid engineering skills, with proficiency in Python and experience with PyTorch or similar deep learning frameworks.
  • Demonstrated ability to collaborate with cross-functional teams, including product and engineering.
  • Demonstrated ability to communicate complex research ideas to diverse audiences.
  • Passion for AI interpretability and a commitment to responsible AI development.

Preferred qualifications:

  • Postdoctoral experience or industry research experience in interpretability.
  • Experience working in a fast-paced, early-stage startup environment.
  • Experience leading research projects and mentoring junior researchers.
  • Contributions to open-source AI/ML projects or research codebases.

This role offers a market-competitive salary, equity, and benefits. More importantly, you'll have the opportunity to work on groundbreaking technology with a world-class team dedicated to ensuring a safe and beneficial future for humanity.

This role reports to our Chief Scientist.
