Scale AI, Inc.

Machine Learning Engineer, Fraud

Scale AI, Inc., San Francisco, CA, United States


About Scale


At Scale AI, our mission is to accelerate the development of AI applications. For 8 years, Scale has been the leading AI data foundry, helping fuel the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles. With our recent Series F round, we're accelerating the abundance of frontier data to pave the road to Artificial General Intelligence (AGI), and building on our prior model evaluation work with enterprise customers and governments to deepen our capabilities and offerings for both public and private evaluations.


About This Role


This role will lead the development of trust & safety models to detect fraud and violations on our platform at scale. The ideal candidate will have industry experience in trust & safety, detecting misuse via account and behavioral signals. Successful candidates will be impact-oriented, have strong foundations in machine learning, and have experience deploying ML services to production. This position requires not only expertise in classical machine learning but also familiarity with neural networks and large language models, along with strong intuition for testing detection systems in the presence of extreme class imbalance. You will contribute to the future of AI by ensuring that the contributors on our platform are trustworthy and high quality, so that we deliver high-quality data to leading foundation model builders.
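
To illustrate the class-imbalance point above, here is a minimal sketch of evaluating a fraud detector when positives are rare. It uses scikit-learn on synthetic data; the class ratio, model, and metric choices are assumptions for illustration, not a description of Scale's systems.

```python
# Minimal sketch: evaluating a fraud detector under extreme class imbalance.
# The data, model, and threshold here are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic dataset with ~0.5% positive (fraud) examples.
X, y = make_classification(
    n_samples=50_000, n_features=20, weights=[0.995, 0.005], random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Accuracy is misleading: always predicting "not fraud" already scores ~99.5%.
print("accuracy:", accuracy_score(y_te, scores > 0.5))
# Average precision (area under the precision-recall curve) tracks quality
# on the rare positive class and is a more honest headline metric.
print("average precision:", average_precision_score(y_te, scores))
```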


Ideally you'd have:

  1. Practical experience deploying machine learning models to production in a microservices cloud environment.
  2. Familiarity with LLMs and proficiency in frameworks like scikit-learn, PyTorch, JAX, or TensorFlow. You should also be adept at interpreting research literature and quickly turning new ideas into prototypes.
  3. At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development.
  4. Strong written and verbal communication skills and the ability to operate cross-functionally.
  5. Experience working with a cloud technology stack (e.g., AWS or GCP) and developing machine learning models in a cloud environment.

Nice to have:

  1. Hands-on production experience developing models for detecting trust & safety violations.
  2. A track record of published research in top ML venues (e.g., ACL, EMNLP, NAACL, NeurIPS, ICML, ICLR, COLM).
  3. Hands-on experience with open-source LLM fine-tuning or involvement in bespoke LLM fine-tuning projects using PyTorch/JAX.