Adobe
Public Policy Manager
Adobe, San Jose, California, United States, 95199
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.
We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Do you have a passion for public policy? At Adobe, we are committed to the responsible development of AI, and we believe in the important role that industry and government play in ensuring that AI public policy fosters innovation while supporting safety.
In this role, you will craft and advance Adobe’s public policy work on AI trust and safety to address consumer harm. As part of the public policy team, you will work closely with our trust and safety team and ethical innovation team to develop public policy and drive program development. This position is responsible for understanding the consumer AI trust and safety landscape in order to advance technical, law enforcement, and government solutions, and for partnering with the broader ecosystem to build a foundation for responsible innovation public policy.
What you’ll do
- Lead public policy development for AI trust and safety.
- Partner with Adobe’s Ethical Innovation, Trust and Safety, and engineering and research teams to drive public policy initiatives.
- Coordinate closely with the Global Public Policy and Government Relations team to align with company objectives and advance policy objectives around the globe.
- Proactively identify and stay informed of global public policy developments related to consumer harm and AI safety; coordinate with team members to assess impact and provide updates.
- Develop messaging and content to support advocacy engagements and policymaking meetings.
- Build coalitions with key policy stakeholders, including companies, trade associations, and civil society groups.
What you need to succeed
- Background in law enforcement and public policy to address consumer harms.
- Experience working with policymakers, law enforcement, and civil society groups on technology policy.
- New and good ideas!
- Knowledge of the AI trust and safety landscape and the technical aspects of risk mitigation practices, and the ability to translate those practices into policy positions.
- A distinguished track record of success.
- Political competence, with an understanding of the political, legislative, and decision-making processes of governments.
- Familiarity with current political, legal, regulatory, and market trends impacting AI trust and safety and image abuse policy.
- Passion for developing and advancing public policy!
- A strong analytical approach to problem solving, the ability to develop solutions, and the ability to deliver on time and work efficiently under pressure.
- Suitable qualifications include a bachelor’s degree in public relations, communications, political science, public affairs, law, business, economics, engineering, or a similar field.