Anthropic Limited

Product Policy Manager, Cyber Threats

Anthropic Limited, San Francisco, California, United States, 94199


As a Trust and Safety policy manager focused on cyber security risks, you will help develop and manage policies for our products and services that address potential misuse of AI for cyber threats. Safety is core to our mission, and as a member of the team you'll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way, while mitigating the risk that our AI technology is misused for cyber attacks.

Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.

In this role, you will:

Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases, with a specific focus on preventing the misuse of our technology for cyber threats

Conduct regular reviews of existing policies to identify and address gaps and ambiguities related to cyber security risks

Iterate on and help build out our comprehensive harm framework, incorporating potential cyber threats

Develop deep subject matter expertise in cyber security risks and the potential role of AI in such threats

Update our policies based on feedback from our enforcement team and edge cases that you will review, particularly those related to cyber security

Educate and align internal stakeholders around our policies and our overall approach to product policy, emphasizing the importance of addressing cyber security risks

Partner with internal and external researchers to better understand our product's limitations and risks related to cyber threats, and adapt our policies accordingly

Work closely with enforcement and detection teams, Security, and the Frontier Red Team to identify policy gaps based on violations and edge cases related to cyber security

Keep up to date with new and existing AI policy norms and standards, particularly those related to cyber security, and use these to inform our decision-making on policy areas

This role will require strong communication, analytical, and problem-solving skills to balance safety and innovation through well-crafted and clearly articulated policies. If you are passionate about developing policies to guide new technology and have expertise in cyber security risks, we want to hear from you!

You might thrive in this role if you:

Have a passion for or interest in artificial intelligence and ensuring it is developed and deployed safely

Have awareness of and an interest in Trust and Safety user policies

Have expertise in cyber security risks and an understanding of how AI technology could potentially contribute to such threats

Have demonstrated expertise in stakeholder management, including identifying key stakeholders, building and maintaining strong relationships, and effectively communicating project goals and progress

Understand the challenges that exist in developing and implementing policies at scale

Love to think creatively about how to use technology in a way that is safe and beneficial, and that ultimately advances safe AI systems while mitigating cyber security risks
