AIML - Sr Machine Learning Performance Engineer, Siri and Information Intelligence
Seattle, Washington, United States
Machine Learning and AI
The Siri team in the AIML group at Apple is seeking an exceptional Machine Learning Engineer to lead efforts in identifying bottlenecks and optimizing our model inference stack. In this highly collaborative role, you will be at the center of multiple initiatives to accelerate and optimize LLMs and other ML models used by Siri. The position involves consulting with multiple product teams to determine the appropriate foundation model (on-device vs. server) for their use cases and to help them achieve their accuracy and performance targets. Your work will directly impact Siri's performance and efficiency, enhancing the overall user experience. You will be expected to bring innovative ideas and a problem-solving mindset to the unique challenges of optimizing complex ML models.
Description
As a Machine Learning Performance Engineer, you will play a critical role in ensuring the efficiency and scalability of Siri's machine learning models. You will work closely with diverse teams to diagnose performance issues and develop innovative solutions that enhance model performance:
- Analyze and optimize the performance of machine learning models and systems used by Siri.
- Develop and implement strategies for model tuning, parameter optimization, and efficient resource usage.
- Conduct performance benchmarking and develop tooling and metrics to measure model performance in terms of compute, memory, and latency.
- Collaborate with feature and product teams to consult on modeling decisions to achieve Siri performance objectives.
- Collaborate with hardware and software teams to integrate research findings into product implementation.
Minimum Qualifications
- Strong understanding of Transformer and LLM architectures.
- Strong understanding of operating system, compiler, and computer architecture fundamentals, with expertise in optimizing software to take advantage of the underlying hardware architecture.
- Experience in analyzing, identifying, and optimizing performance bottlenecks.
Preferred Qualifications
- Expertise in optimizing model architectures for on-device inference is a strong plus.
- Experience working with modeling pipeline teams on model deployment and promotion pipelines is a strong plus.
- Creative, collaborative, and product-focused.
- Excellent communication skills.