
Threat Investigator, Trust & Safety

Anthropic Limited, San Francisco, California, United States, 94199


Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

As a Threat Investigator on the Trust & Safety team, you will be an individual contributor working with the team to build out Anthropic's threat intelligence program, with a focus on developing novel detection techniques to identify and mitigate abuse of our products and services. You will create and implement processes, tools, and strategies to proactively detect adversarial actors, investigate incidents, and work cross-functionally to strengthen our defenses against emerging risks in the rapidly evolving landscape of AI technology.

Your work will be essential in maintaining Anthropic's commitment to safe and beneficial AI as we continue to expand our product capabilities.

IMPORTANT CONTEXT ON THIS ROLE: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.

Responsibilities

Work with the team to build out our threat intelligence program, establishing processes, tools, and best practices

Analyze the deployment of our products and services to identify how these systems are being misused or abused, with a particular focus on influence operations

Develop abuse signals and tracking strategies to proactively detect adversarial actors

Study internal and ecosystem-wide trends to anticipate how systems could be misused or manipulated for harm in the future, and generate and publish reports on these findings

Use the results of deep-dive investigations to drive systematic changes to our safety approach and mitigate harm

Keep abreast of the latest industry risks, vulnerabilities, and issues related to the use of language models and generative AI; identify opportunities for improvement to our policies, controls, and enforcement mechanisms

You might thrive in this role if you:

Have experience in technical analysis and investigations, including proficiency in SQL and Python

Have experience building out threat intelligence programs

Have subject matter expertise in abusive user behavior detection, particularly in the context of influence operations

Can derive insights from large amounts of data to make key decisions and recommendations

Have experience on a trust and safety team and/or have worked closely with policy or content moderation

Have strong project management skills and the ability to build processes from the ground up

Possess excellent communication skills to collaborate with cross-functional teams

Deadline to apply: None. Applications will be reviewed on a rolling basis.
