
Research Engineer, Autonomy

Menlo Research · Singapore

Posted 05 Nov 2025

Quick Summary

  • Train and adapt large-scale VLAs and VLMs that predict multi-modal futures.
  • Deploy models for real-time control of humanoid and mobile robots.
  • Build and scale evaluation pipelines to measure generalization and safety.

Full Description

About Us

We are working on embodied intelligence. Our mission is to scale general-purpose autonomy for real-world problems (the 3Ds: dull, dirty, and dangerous) through large-scale learning, multi-modal data, and robust control.


We are looking for passionate engineers and scientists who thrive at the intersection of machine learning, robotics, and systems engineering, and want to see their research come alive in real robots.



Role Overview

You will lead the development of the algorithms and architectures that enable our robots to interact with and reason about the physical world. This role bridges foundation model research and real-time robotics. You will design learning systems that power whole-body locomotion, dexterous manipulation, and embodied understanding.


Responsibilities

  • Train and adapt large-scale vision-language-action (VLA) and vision-language models (VLMs) that predict multi-modal futures (video, proprioception, audio, actions)
  • Deploy models for real-time control of humanoid and mobile robots
  • Build and scale evaluation pipelines to measure generalization and safety
  • Collaborate with locomotion, simulation, and hardware teams to bridge the sim-to-real gap
  • Publish papers and open-source datasets and models in parallel


Preferred Qualifications

  • BS/MS/PhD in Robotics, AI/Computer Science, or a related field
  • Proficiency in Python and C++, and in deep learning frameworks (PyTorch / JAX)
  • Deep experience in generative AI, reinforcement/imitation learning (RL/IL), control, or multimodal learning
  • Understanding of scaling laws, evaluation metrics, and training large models at scale
  • Familiarity with real-robot systems, sensing, and embedded control integration
  • Familiarity with the industry state of the art and the latest research, e.g. Gr00t, Pi0


Bonus Skills

  • Experience with transformer-based control policies or diffusion policy learning
  • Work on humanoid locomotion, manipulation, or whole-body coordination
  • Prior open-source or research contributions in robotics, control, or deep learning
