TikTok
Lead Researcher, Large Language Models, TikTok Trust and Safety
TikTok, San Jose, CA
Responsibilities
TikTok is the leading destination for short-form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok's global headquarters are in Los Angeles and Singapore, and its offices include New York, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Why Join Us
Creation is the core of TikTok's purpose. Our platform is built to help imaginations thrive. This is doubly true of the teams that make TikTok possible.
Together, we inspire creativity and bring joy - a mission we all believe in and aim towards achieving every day.
To us, every challenge, no matter how difficult, is an opportunity: to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At TikTok, we create together and grow together. That's how we drive impact - for ourselves, our company, and the communities we serve.
Join us.
The Trust and Safety R&D team is fast-growing and responsible for building machine learning models and systems that identify and defend against internet abuse and fraud on our platform. Our mission is to protect billions of users and publishers across the globe every day. We embrace state-of-the-art machine learning technologies and scale them to analyze the tremendous amount of data generated on the platform and continuously improve detection. Through the team's continuous efforts, TikTok is able to provide the best user experience and bring joy to everyone in the world.
We are looking for researchers in the Large Language Model (LLM) domain who will conduct research on single-modality and multi-modality LLM pretraining and applications, including in-context learning (ICL), supervised fine-tuning (SFT), and reinforcement learning-based alignment. We aim to apply LLMs to trust and safety business scenarios so that we can protect our users and creators with the best moderation quality and cost efficiency. There are undoubtedly many unsolved problems in the LLM domain that could have a huge impact on industry and academia. In the Trust & Safety team, we have real applications, resources, and patience for technology incubation.
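To give a flavor of the in-context learning (ICL) direction described above, below is a minimal sketch of a few-shot moderation classifier. The policy labels, example comments, and the `generate_fn` callable (standing in for whatever LLM completion API is used) are illustrative assumptions, not details from this posting.

```python
from typing import Callable, List, Tuple

# Hypothetical policy labels for illustration only.
LABELS = ["SAFE", "SPAM", "HARASSMENT"]

# A couple of few-shot demonstrations for in-context learning.
FEW_SHOT: List[Tuple[str, str]] = [
    ("Win a free phone!!! Click this link now", "SPAM"),
    ("Loved this recipe, thanks for sharing", "SAFE"),
]

def build_prompt(text: str) -> str:
    """Assemble a few-shot in-context-learning prompt for moderation."""
    lines = ["Classify each comment as one of: " + ", ".join(LABELS) + "."]
    for example, label in FEW_SHOT:
        lines.append(f"Comment: {example}\nLabel: {label}")
    lines.append(f"Comment: {text}\nLabel:")
    return "\n\n".join(lines)

def classify(text: str, generate_fn: Callable[[str], str]) -> str:
    """generate_fn stands in for any LLM completion call."""
    reply = generate_fn(build_prompt(text)).strip().upper()
    # Fall back to SAFE if the model answers outside the label set.
    return next((label for label in LABELS if reply.startswith(label)), "SAFE")

if __name__ == "__main__":
    # Stub generator so the sketch runs without a real model.
    print(classify("Buy followers cheap, DM me", lambda prompt: "SPAM"))
```

In practice the stub generator would be replaced by a call to the serving stack in use, and the label taxonomy would come from the moderation policy rather than this toy list.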
Your main responsibilities will be:
- Lead the incubation of next-generation, high-capacity LLM solutions for the Trust & Safety business
- Identify research problems and dive deep for innovative solutions
- Work closely with cross-functional teams to plan and implement projects harnessing LLMs for diverse purposes and vertical domains
- Extend the insights and impact from industry to academia
Qualifications
Minimum Qualifications
- Ph.D. in Computer Science, Data Science, Artificial Intelligence, or a related field
- Strong understanding of cutting-edge LLM research (e.g., long context, multi-modality, alignment, agent ecosystems); practical expertise in implementing these advanced systems is a plus
- Proficiency in programming languages such as Python, Rust, or C++, and a track record of working with deep learning frameworks and toolkits (e.g., PyTorch, DeepSpeed, Megatron, vLLM)
- Strong understanding of distributed computing frameworks, including performance tuning and verification for training, fine-tuning, and inference; familiarity with PEFT, RL, MoE, CoT, or LangChain is a plus (a brief illustrative PEFT sketch appears after this section)
Preferred Qualifications
- Excellent problem-solving skills and a creative mindset to address complex AI challenges. Demonstrated ability to drive research projects from idea to implementation, producing tangible outcomes.
- Published research papers or contributions to the LLM community would be a significant plus.
- Experience with inference tuning and inference acceleration; a deep understanding of GPUs and/or other AI accelerators, and experience with large-scale AI networks, PyTorch 2.0, and similar technologies
- Experience with evaluation of AI systems and with LLM application and agent development is desirable
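As a purely illustrative example of the parameter-efficient fine-tuning (PEFT) experience mentioned in the minimum qualifications, the sketch below attaches a LoRA adapter to a small open-source classifier backbone using the Hugging Face peft library. The base model, label count, and LoRA hyperparameters are assumptions chosen for the sketch and do not describe TikTok's systems.

```python
# Minimal LoRA (PEFT) setup for a moderation-style text classifier.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE_MODEL = "distilbert-base-uncased"  # placeholder backbone for illustration

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)  # used to prepare input text
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=3)

# Inject low-rank adapters into the attention projections and train only those,
# leaving the frozen backbone untouched.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projection names
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically around 1% of the full model
```

The adapted model can then be trained with a standard fine-tuning loop or Trainer; only the LoRA parameters receive gradient updates, which is what makes the approach cost-efficient at scale.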
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
TikTok is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://shorturl.at/cdpT2.