Salesforce
Lead Applied Research Scientist - Responsible AI
Salesforce, San Francisco, California, United States, 94199
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.
Job Category: Software Engineering
Job Details
About Salesforce
We’re Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And we empower you to be a Trailblazer, too, driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good, you’ve come to the right place.

Salesforce’s Office of Ethical and Humane Use is hiring an Applied Responsible AI Research Scientist to play a pivotal role in guiding the responsible development of cutting-edge artificial intelligence products. Working with the Responsible AI & Tech team and in close partnership with both the Salesforce AI Research and Frontier AI teams, they will deliver guidance, guardrails, and features that ensure the next generation of AI is designed, developed, and delivered in alignment with Salesforce’s ethical use and responsible AI principles.

The ideal candidate will have experience in artificial intelligence, specifically in the field of responsible/ethical AI, including generative AI. They will work across teams to implement responsible AI processes, including but not limited to bias assessments, accuracy measurements, harms modeling, privacy (memorization, unlearning), security, and AI model trust and safety.

Job Responsibilities
Develop strategy alongside Salesforce AI Research, engineering, data science, and product management to create, develop, and ship cutting-edge generative AI capabilities for Salesforce customers while mitigating ethical risks and capturing ethical opportunities.
Identify potential negative consequences, determine how they might be mitigated, and drive prioritization of those mitigations into a team’s roadmap. Conversely, identify positive ethical impacts in a roadmap, specification, or design, and ways to amplify them in the product.
Conduct trust and safety evaluations, as well as CRM benchmarking against other models and against different versions of the same model.
Develop solutions for real-world, large-scale problems.
As needed, lead teams to deliver on more complex pure and applied research projects.
Minimum Requirements:
Master's degree (or foreign degree equivalent) in Computer Science, Engineering, Information Systems, Data Science, Social or Applied Sciences, or a related field.
5-8 years of relevant experience in AI ethics, AI research, security, Trust & Safety, or similar roles, including additional experience researching responsible generative AI challenges and risk mitigations.
Expertise in one of the following areas: alignment, adversarial robustness, interpretability/explainability, or fairness in generative AI.
Proven leadership, organizational, and execution skills. Passion for developing cutting-edge AI ethics technology and deploying it through a multi-stakeholder approach.
Experience working in a technical environment with a broad, cross-functional team to drive results, define product requirements, coordinate resources from other groups (design, legal, etc.), and guide the team through key milestones.
Proven ability to implement, operate, and deliver results via innovation at a large scale.
Excellent written and oral communication skills, as well as interpersonal skills, including the ability to articulate technical concepts to both technical and non-technical audiences.
Preferred Requirements:
8-10 years of relevant experience in AI ethics, AI research, security, Trust & Safety, or similar roles.
Advanced degree in Computer Science, Human-Computer Interaction, Engineering, Data Science, or quantitative Social Sciences.
Published research on algorithmic fairness, accountability, and transparency, especially around detecting and mitigating bias or AI safety.
Full-time industry experience in deep learning research/product.
Strong experience building and applying machine learning models for business applications.
Strong programming skills.
Experience in implementing high-performance and large-scale deep learning systems.
Thoughtful about AI impacts and ethics.
Fantastic problem solver; ability to solve problems the world has not solved before.
Presented a paper at NeurIPS, FAccT, AIES, or similar conferences.
Works well under pressure and is comfortable working in a fast-paced, ever-changing environment.