McKinsey & Company

Responsible AI Governance Lead, Tech and Data Risk

McKinsey & Company, Boston, Massachusetts, US, 02298


Risk & Compliance


Job ID: 93767

Do you want to do work that matters, alongside supportive leaders who will help you grow faster than you ever thought possible? Are you a creative problem-solver who is energized by challenges?

You've come to the right place.

Who You'll Work With

You will be based in one of our North America offices and will work within the firm's Technology & Data Risk function as the Responsible AI (RAI) Governance Lead. You will be responsible for the design, implementation, and oversight of RAI practices across our firm, ensuring alignment with global regulatory frameworks and firm values. This role reports directly to the Associate Director of Strategic Programs, Technology & Data Risk, who leads strategic programs within the risk function. You will partner closely with Legal, Risk, Technology, and Product teams to develop the RAI Governance program.

Your impact within our firm

As the RAI Governance Lead, you will be responsible for the firm's RAI Governance Program, creating and maintaining policies and frameworks to ensure the responsible and compliant use of AI. You'll ensure that AI systems meet global regulatory standards as defined by our legal team and adapt policies to keep pace with evolving legal requirements. You will lead the implementation and integration of tools to support RAI governance processes, including intake, risk assessment, monitoring, testing, and compliance of AI use cases. You'll work closely with product, engineering, and legal teams to embed responsible AI practices into product development and daily operations.

You will manage the team responsible for performing risk assessments on AI systems and will set the strategy for AI risk management, including MLOps and continuous testing, so the firm can reliably evaluate and validate AI models for reliability, ethical compliance, and regulatory adherence. You will also play a key role in training teams across the firm, building awareness of the implications of AI use, and promoting accountability. Staying informed about global trends and regulatory changes, you'll translate new developments into practical strategies and actionable recommendations.

You'll define and monitor key performance metrics to ensure AI systems operate responsibly and adhere to our standards, and you'll conduct regular audits of models, algorithms, and datasets to confirm they meet those standards. Additionally, you'll contribute to designing and implementing controls that mitigate potential AI risks and enhance trust.

Your qualifications and skills

Bachelor's degree required

7+ years of experience in a risk or governance role

Expertise in AI ethics principles, privacy regulations (e.g., GDPR, CCPA), or industry standards for responsible AI

Deep understanding of the end-to-end AI/ML development lifecycle, including data collection, model development, deployment, and monitoring

Strong analytical skills, with the ability to interpret data and perform bias and fairness analysis on complex AI systems

Experience in responsible AI, particularly AI testing and model validation

Demonstrated experience leading teams of technical experts, driving successful project outcomes, and fostering team collaboration

Exceptional communication skills, with the ability to translate technical concepts to non-technical stakeholders and lead cross-functional teams

Proven ability to work effectively with cross-functional teams, including product, legal, compliance, and data science teams, to align on responsible AI objectives
