Lionheart Ventures

Product Policy Risk Manager, Trust & Safety

Lionheart Ventures, San Francisco, California, United States, 94199


As the Trust and Safety Product Policy Risk Manager, you will play a crucial role in assessing the implications of new product features and developing policies to ensure their safe and responsible deployment. You'll work closely with product and engineering teams to understand upcoming features, anticipate potential misuses or unintended consequences, and craft policies that balance innovation with responsibility. Your work will be essential in maintaining Lionheart Ventures' commitment to safe and beneficial AI as we continue to expand our product capabilities.

IMPORTANT CONTEXT ON THIS ROLE:

In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.

Responsibilities

Develop and maintain risk assessment frameworks to identify and evaluate potential safety risks associated with new product features and functionality

Create comprehensive policies and guidelines to address and mitigate identified risks related to new product launches and enhanced model capabilities

Collaborate closely with a variety of stakeholders including product and engineering teams and the broader T&S team to leverage deep policy, enforcement, and engineering expertise

Analyze the potential for misuse, unintended consequences, and harmful outputs of new model capabilities

Craft policy recommendations that strike a balance between enabling innovation and ensuring responsible AI deployment

Work with the T&S enforcement team to develop clear guidelines for implementing new policies related to product features

Stay current on industry trends and emerging risks in AI development to proactively address potential issues

Contribute to regular reports on product policy risks and mitigations for senior leadership

Educate and align internal stakeholders on our product policies and overall approach to responsible AI development

You might thrive in this role if you:

Understand Trust and Safety policies and safety considerations associated with a wide range of product surfaces

Have crisp written and verbal communication skills, with the ability to explain technical concepts to non-technical stakeholders

Have conducted risk evaluations of novel products in fast-moving organizations

Have demonstrated expertise collaborating with product and engineering teams to integrate safety considerations into product development

Have familiarity with AI ethics, responsible AI principles, and current debates surrounding AI safety and governance

Have the ability to think creatively about potential misuses of technology and develop innovative solutions to mitigate risks

Have shown strong project management skills with the ability to drive policy development processes from ideation to implementation

Deadline to apply: None. Applications will be reviewed on a rolling basis.
