Apple Inc.

Multimodal and Generative ML Engineer - Health Sensing

Apple Inc., Cupertino, California, United States, 95014

The Health Sensing team builds outstanding technologies to support our users in living their healthiest, happiest lives by providing them with objective, accurate, and timely information about their health and well-being. As part of the larger Sensor SW & Prototyping team, we take a multimodal approach, using a variety of sensors across HW platforms, such as camera, PPG, and natural language.

Description

In this role, you will be at the forefront of developing, evaluating, and improving multimodal and generative models for real-world health and well-being applications, assessing both their objective quality and their alignment with human intent and perception, such as truthfulness, adaptability, and model generalizability. You will build data and evaluation pipelines using both human and synthetic data for model evaluation, and leverage ML technologies such as reinforcement learning with human feedback and adversarial models.

Minimum Qualifications

- Enhance multimodal capabilities and adapt pre-trained models for new tasks
- Work across the entire ML development cycle, such as developing and managing data from various endpoints, managing ML training jobs with large datasets, and building efficient and scalable model evaluation pipelines
- Analyze model behavior, identify weaknesses, and drive design decisions with failure analysis. Examples include, but are not limited to: model experimentation, adversarial testing, and creating insight/interpretability tools to understand and predict failure modes
- Collaborate with algorithm engineers to build reliable end-to-end pipelines for long-term projects
- Work cross-functionally to apply algorithms to real-world applications with designers, clinical experts, and engineering teams across HW and SW
- Ability to independently run and analyze ML experiments for real improvements

Preferred Qualifications

- Expertise in deep learning (e.g., hands-on training and evaluation experience with transformers)
- Experience incorporating multiple modalities into large language models (LLMs)
- Proficiency in Python and ML frameworks (e.g., PyTorch, TensorFlow)
- Ability to write clean, performant code and collaborate using standard software development practices

Education & Experience

BS and a minimum of 3 years of relevant industry experience

Additional Requirements

At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $143,100 and $264,200, and your base pay will depend on your skills, qualifications, experience, and location.

Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including: comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments, as well as relocation. Learn more about Apple Benefits.

Apple is an equal opportunity employer that is committed to inclusion and diversity. We take affirmative action to ensure equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.