Part of the advantage of humanoid robots lies in their ability to interact with the environment: to climb stairs, to sit down, to navigate rough terrain, and so on. In VideoMimic, the authors built a pipeline (sketched in code after the list below) that lets them:
Scan a scene into simulation using an iPhone
Train a whole-body control policy for locomotion and other contextual skills (climbing stairs, sitting down) in that simulated scene
Use this policy to control a humanoid robot operating in diverse, real-world environments.
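To make the three stages concrete, here is a minimal sketch of how such a real-to-sim-to-real pipeline could be organized. This is not the VideoMimic codebase: every class, function, and file name below is a hypothetical placeholder standing in for the actual reconstruction, simulation-training, and deployment components.

```python
# Illustrative pipeline sketch; all names are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class ReconstructedScene:
    terrain_mesh: object   # environment geometry recovered from the video/scan
    human_motion: object   # global human trajectory in the same world frame


def reconstruct_scene(video_path: str) -> ReconstructedScene:
    """Stage 1: reconstruct the human and the surrounding environment from a casual video."""
    return ReconstructedScene(terrain_mesh=None, human_motion=None)  # stub


def train_policy(scene: ReconstructedScene):
    """Stage 2: train a whole-body control policy in simulation on the reconstructed scene."""
    def policy(observation):
        # Stub: in practice this would return joint targets for the humanoid.
        return []
    return policy


def deploy(policy, robot) -> None:
    """Stage 3: run the trained policy on the real robot in the real environment."""
    observation = robot  # stub: read proprioception, terrain, and commands
    policy(observation)


if __name__ == "__main__":
    scene = reconstruct_scene("stair_climbing.mp4")  # hypothetical input video
    policy = train_policy(scene)
    deploy(policy, robot=None)
```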
Abstract:
How can we teach humanoids to climb staircases and sit on chairs using the surrounding environment context? Arguably, the simplest way is to just show them—casually capture a human motion video and feed it to humanoids. We introduce VideoMimic, a real-to-sim-to-real pipeline that mines everyday videos, jointly reconstructs the humans and the environment, and produces whole-body control policies for humanoid robots that perform the corresponding skills. We demonstrate the results of our pipeline on real humanoid robots, showing robust, repeatable contextual control such as staircase ascents and descents, sitting and standing from chairs and benches, as well as other dynamic whole-body skills—all from a single policy, conditioned on the environment and global root commands. VideoMimic offers a scalable path towards teaching humanoids to operate in diverse real-world environments.
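The abstract emphasizes that a single policy is conditioned on the environment and on global root commands. One common way to realize that kind of conditioning in legged-robot control is to concatenate proprioception with a local terrain heightmap sampled around the robot and a root command, and feed the result to the policy. The sketch below shows one plausible observation layout; the dimensions and field names are illustrative assumptions, not the exact interface used in the paper.

```python
import numpy as np

# Hypothetical observation layout for a terrain- and command-conditioned policy.
rng = np.random.default_rng(0)

proprioception = rng.standard_normal(45)    # e.g. joint positions/velocities, base orientation (assumed size)
heightmap = rng.standard_normal((11, 11))   # local terrain heights sampled around the robot (assumed grid)
root_command = np.array([0.5, 0.0, 0.0])    # desired root direction/velocity in the base frame (assumed)

# Flatten and concatenate into a single policy input; the same policy can then
# produce actions whether the terrain is a staircase, a chair, or flat ground.
observation = np.concatenate([proprioception, heightmap.ravel(), root_command])
print(observation.shape)  # (45 + 121 + 3,) -> (169,)
```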