applinity

AI Research Scientist, VLM (Vision-Language Models)

at Meta

Location

Bellevue, WA; Menlo Park, CA

Type

Full-time

Posted

10/22/2024

Job description

  • Lead, collaborate on, and execute research that pushes forward the state of the art in multimodal reasoning and generation.
  • Work toward long-term, ambitious research goals while identifying intermediate milestones.
  • Directly contribute to experiments, including designing experimental details, writing reusable code, running evaluations, and organizing results.
  • Work with a large team.
  • Contribute to publications and open-sourcing efforts.
  • Mentor other team members, and play a significant role in healthy cross-functional collaboration.
  • Prioritize research that can be applied to Meta's product development.

Responsibilities

  • Push state of the art in multimodal generative AI
  • Explore new techniques for advanced reasoning and multimodal understanding for AI Assistants
  • Mentor and work with AI/ML engineers to find a path from research to production

Minimum Qualifications

  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
  • A PhD in AI, computer science, or related technical fields
  • Publications in machine learning, computer vision, NLP, or speech
  • Experience writing software and executing complex experiments involving large AI models and datasets
  • Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment
  • First (joint) author publication experience at peer-reviewed AI conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, and ACL)
  • Direct experience in generative AI and LLM research
  • Fluent in Python and PyTorch (or an equivalent framework)