McKinsey & Company
Responsible AI Risk Assessment Manager, Tech and Data Risk
McKinsey & Company, Atlanta, GA
Who You'll Work With
You will be based in one of our North America offices and will work within the firm's Technology & Data Risk function. As the Responsible AI (RAI) Risk Assessment Manager, you will evaluate and validate AI models for reliability, ethical compliance, and regulatory adherence. This requires not only technical skill in model testing and validation but also the capacity to communicate complex findings effectively to both technical and non-technical stakeholders.
This role reports directly to the RAI Governance Lead, Technology & Data Risk, who is responsible for the firm's global RAI Governance program as part of the Risk function. You will partner closely with Legal, Risk, Technology, and Product teams to develop consistent and scalable RAI controls and testing methodologies.
Your impact within our firm
As the lead for risk assessment and testing of AI models, you will ensure control implementations meet external regulations, internal standards, and best practices. Your role will involve defining the technical vision and strategic roadmap for AI controls and testing, including continuous monitoring, evaluation, and reporting of AI systems. To stay ahead of the regulatory landscape, you will work closely with the legal team to quickly adapt approaches to reflect new requirements.
You will play a critical role in prioritizing areas for risk assessment and mitigation, guiding the responsible development and deployment of AI systems. You will conduct testing of AI models as part of the governance process, validate testing that internal teams have conducted, and, in response to market changes or new regulatory demands, provide actionable insights and recommendations for improvement. In collaboration with cross-functional teams, you will spearhead the development of tools, automation strategies, and data pipelines that support scalable AI risk management and empower product and engineering teams to use those tools and playbooks for their own independent risk mitigation.
You will assist in developing standardized reporting templates tailored to meet the needs of both technical data scientists and senior leadership to facilitate clear communication of results.
Your collaboration will extend to model owners and senior management, where you will present findings, assess their implications for risk management, and propose enhancements to AI models.
Your qualifications and skills
- Excellent written and oral communication skills, as well as interpersonal skills, including the ability to articulate technical concepts to both technical and non-technical audiences
- Experience in responsible AI
- Experience with modeling, experimentation, and causal inference
- Experience working with engineering and product teams to create tools, solutions, or automation
- Deep understanding of AI and ML models, including their potential ethical implications, biases, and regulatory considerations
- Proficiency in SQL, Python, and data analysis/data mining tools
- 7+ years of experience in data analytics, data science, or trust and safety
- Bachelor's degree or equivalent practical experience