Machine Learning Research Engineer, Natural Language Generation (NLG)
Apple, Cupertino, California, United States, 95014
Summary
On the Input Experience NLP team, we build the language models that underpin intelligent text input across Apple platforms, from keyboard autocorrection to the Writing Tools and Smart Reply features announced at WWDC 2024. We believe that generative AI is an incredibly promising technology that can help people communicate effectively and express themselves clearly, and we have only just begun to incorporate this technology into our products. On our team, you will help build the future and have a voice in what shape it takes.
We are looking for a Machine Learning Research Engineer to help deliver scalable, multilingual NLP solutions that empower our users to use intelligent text input in their language of choice. You will build and refine the training and evaluation pipelines that define our slice of Apple Intelligence, driving the focused iteration that makes the user experience magical. You will join an ambitious, organized, and collaborative team in a unique position to integrate the latest innovations from the ML community and work on features that reach everyday users, including your family and friends. You'll work closely with teams across Apple, collaborating on human interfaces, user studies, internationalization, ML technologies, system integration, and more.
Description
As a Machine Learning Research Engineer on our team, you will build and iteratively refine model pipelines that enable multilingual text input experiences on Apple products. You will conduct experiments and create prototypes for new approaches that improve the quality of our models and add new dimensions to their intelligence, taking into account language-specific requirements and design constraints. Finally, you will implement the building blocks and infrastructure that bring these innovations into our production pipelines, and contribute to the evaluation metrics that measure forward progress.
KEY RESPONSIBILITIES:
- Development and maintenance of modeling pipelines that scale to multiple languages and production deployment
- Definition of robust automated evaluation metrics to facilitate hill-climbing on model quality
- Failure analysis to understand the shortcomings of our models
- Research into techniques for improving model behavior
- Curation and synthesis of representative training and evaluation data
- Implementation of experiments and simulations to assess the value of model changes
- Collaboration with language experts and QA to refine the modeling approach in consideration of language-specific requirements