Salesforce

Sr. Director, Responsible AI Testing, Evaluation, and Alignment

Salesforce, San Francisco, California, United States, 94199


About Salesforce

We’re Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good — you’ve come to the right place.

Salesforce's Office of Ethical and Humane Use seeks a Senior Director to lead our technical work in AI Testing, Evaluation, and Alignment. Working together with the VP of Responsible AI & Tech, the Chief Ethical & Humane Use Officer, and the entire Office of Ethical & Humane Use, you will lead a multi-functional team of Responsible AI engineers and research scientists. You will apply your technical skills to help uphold our principles for the responsible development and deployment of AI. The ideal candidate will have experience leading technical and/or applied research teams (with a special emphasis on AI), as well as direct experience with AI ethics, alignment, and/or safety.

Job Responsibilities:

Develop and run the Responsible AI team’s roadmap of contributions toward Salesforce’s overall Trusted AI/ML lifecycle, ensuring we are safely training, aligning, evaluating, implementing mitigations, deploying, and monitoring models.

With AI ethicists and AI researchers, ensure the responsible and safe development, evaluation, and maintenance of generative, foundation, and predictive models.

Drive the workstream to identify, set, and evaluate standards of ML/AI models in collaboration with AI ethicists and AI research scientists; support monitoring them post-deployment for continual improvement.

Run ongoing ethical evaluation and testing (including adversarial testing and red teaming) of ML/AI models and products:

With the team, implement automation strategies for AI red teaming and testing, and develop the tools, data, and pipelines that support that work.

With the team, build tools to enable testing to take place at scale. Enable product/engineering teams to perform their own tests through tooling and playbooks.

Lead the process of scoping, documenting, and performing testing with partner teams, including the implementation of mitigations identified during testing.

Lead development and testing of content safety strategy, including culturally critical localization and globalization efforts.

In collaboration with others, help establish mechanisms for ongoing monitoring, evaluation, and reporting of AI systems to ensure alignment with ethical standards and regulatory requirements.

Lead the creation of evaluation datasets for potential risks and harms of language and multimodal models.

As a player-coach, provide guidance, mentorship, and technical leadership to a multidisciplinary team of technical and socio-technical authorities.

As part of the Responsible AI leadership team, create the technical vision and implement the strategic roadmap to advance AI trust and safety at Salesforce.

Minimum Requirements:

A related technical degree is required.

8+ years of relevant technical experience in Software Engineering Management, AI ethics, AI research, Data Science Management, or similar roles, of which at least 2 years are in a relevant leadership/people management role.

Proven leadership, organizational, and execution skills. Passion for developing pioneering AI ethics technology and deploying it through a multi-stakeholder approach.

Experience applying ML and AI technologies responsibly and ethically in a generative AI context; solid ML-focused engineering and research skills, particularly in using and training models and deploying them in production systems.

Proficiency in SQL, Python, and data analysis/data mining tools.

Experience working in a technical environment with a broad, cross-functional team to get results, define requirements, coordinate resources from other groups, and guide the team through key milestones.

Excellent written and oral communication skills, as well as interpersonal skills, including the ability to articulate technical concepts to both technical and non-technical audiences.

Preferred Requirements:

10+ years of relevant experience in AI ethics, AI research, Software Engineering Management, Data Science Management, Trust & Safety, or similar roles.

Advanced degree in Computer Science, Engineering, Information Systems, Data Science, Statistics, or a related field.

Proven track record of influential projects and publications in relevant fields and top-tier conferences and journals (such as NeurIPS, ICML, AAAI, CHI, FAccT).
