AIML - Machine Learning Researcher, Post-Training for Foundation Models
at Apple
Location
Cupertino, United States of America
Compensation
$181k–$318k USD
Type
Full time
Posted
3 months ago
Job description
We build frontier foundation models that power intelligent experiences at Apple. Our team works across the full training lifecycle, from pre-training foundation models to developing mid-training approaches that bridge general capability and task-specific performance. What makes our work distinct is that we engineer models specifically for Apple silicon, optimized for experiences that are private, personal, and deeply integrated into the OS. We're solving frontier problems in reward modeling to resist reward hacking, handling sparse and delayed rewards in agentic settings, and aligning models reliably across the spectrum from open-ended creative tasks to precise, action-taking workflows. If you're drawn to hard problems where the research and the product are inseparable, this is the team.
Recipe Development: Design and iterate on end-to-end post-training recipes, combining SFT, Reinforcement Learning and reasoning regimes to achieve specific model behaviors and capabilities.
Algorithm Research: Develop and implement novel algorithms for preference optimization, model steering, and safety.
Data Strategy: Research methods for high-quality human and synthetic data generation, automated data filtering, and curriculum learning to improve instruction following and reasoning capabilities.
Evaluation: Design robust evaluation frameworks to measure model helpfulness, factuality, and utility, moving beyond static benchmarks to capture real-world performance.
Collaboration: Work closely with pre-training teams to inform architecture choices and with product teams to understand user requirements.
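For context on the kind of preference-optimization work the responsibilities above describe, here is a minimal sketch of one well-known post-training objective, the Direct Preference Optimization (DPO) loss. This is an illustrative example only, assuming a pairwise preference setup with a frozen reference model; the function name, arguments, and values are hypothetical and not drawn from the posting or from Apple's actual methods.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair (illustrative sketch).

    Each argument is the summed log-probability of a full response
    under either the trainable policy or the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the reference.
    margin = (policy_chosen_logp - ref_chosen_logp) \
           - (policy_rejected_logp - ref_rejected_logp)
    # -log sigmoid(beta * margin): small when the policy already ranks
    # the chosen response higher than the reference does, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With zero margin the loss is log(2); it shrinks as the margin grows.
```

In a real recipe this scalar would be computed batch-wise in PyTorch or JAX from token-level log-probabilities and backpropagated through the policy, but the pure-Python form above shows the core objective.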
Demonstrated expertise in deep learning with a focus on LLMs, post-training, or reinforcement learning, backed by a strong publication record or real-world experience and accomplishments in these or closely related domains.
Proficient programming skills in Python and at least one deep learning framework, such as JAX or PyTorch.
PhD, or equivalent practical experience, in Computer Science, Machine Learning, or a related technical field.
Proven track record in post-training: Specialization in post-training algorithms, techniques, and best practices for large foundation models.
Post-training data: Deep experience with human data labeling, synthetic data generation, and data quality assessment for foundation models.
Evaluation methodologies: Deep experience evaluating data and training recipes, with a thorough understanding of the iterative model-building process and lifecycle.
Reasoning Research: Experience in improving model performance on reasoning tasks (math, coding, logic).
Scale & Systems: Experience training SOTA large models at scale, with familiarity with distributed training challenges and their trade-offs.
Strong communication and collaboration: Strong communication skills and a passion for collaboration within and across teams.
We are a group of engineers and researchers responsible for building foundation models at Apple. Within this group, the Post-Training work streams focus on transforming powerful pre-trained checkpoints into helpful, high-quality models that power billions of Apple products. We are looking for researchers who are passionate about foundation model post-training, including Supervised Fine-Tuning (SFT) and Reinforcement Learning, with experience in core capabilities such as instruction following, tool use, deep thinking, and reasoning.
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $181,100 and $318,400, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. You’ll also receive benefits including: comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant
At Apple, we believe accessibility is a fundamental human right. You’ll find that idea reflected in everything here — in our culture, our benefits and our digital tools. By welcoming as many perspectives as possible, we help you build a career where you feel like you belong.
Learn about accessibility in Apple’s workplace
Learn about reasonable accommodations for job applicants
Apple accepts applications to this posting on an ongoing basis.