Vision-language-action models are powerful tools for general robot skills and show impressive generalization. On their own, however, they fall short of the reliability needed for valuable manipulation tasks.
Wenli Xiao teaches us one way to achieve this reliability: Probe, Learn, Distill (PLD). The idea is to freeze the VLA and learn residual actors, specialized policies that predict corrective actions on top of what the underlying VLA predicts. Rollouts from these residual actors can then be distilled back into the generalist VLA.
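To make the residual-actor idea concrete, here is a minimal sketch of how a small corrective policy can be composed with a frozen base VLA. This is purely illustrative: the class, the MLP architecture, and the residual scale are assumptions for the example, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ResidualActor(nn.Module):
    """Small MLP that predicts a corrective action on top of a frozen base VLA."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor, base_action: torch.Tensor) -> torch.Tensor:
        # Condition the correction on both the observation and the base action.
        return self.net(torch.cat([obs, base_action], dim=-1))

def act(vla, residual: ResidualActor, obs: torch.Tensor, scale: float = 0.1) -> torch.Tensor:
    """Compose the frozen generalist's action with a small residual correction."""
    with torch.no_grad():
        base_action = vla(obs)  # frozen VLA backbone: no gradients, no updates
    correction = residual(obs, base_action)
    return base_action + scale * correction  # residual stays a small perturbation
```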
Watch Episode #52 of RoboPapers with Michael Cho and Chris Paxton to learn more!
Abstract:
Supervised fine-tuning (SFT) has become the de facto post-training strategy for large vision-language-action (VLA) models, but its reliance on costly human demonstrations limits scalability and generalization. We propose Probe, Learn, Distill (PLD), a three-stage plug-and-play framework that improves VLAs through residual reinforcement learning (RL) and distribution-aware data collection. In Stage 1 (specialist acquisition), we freeze the VLA backbone and train lightweight residual actors via off-policy RL. These specialists take over in states where the base policy fails, thereby probing failure regions of the VLA generalist. In Stage 2 (data collection), we employ a hybrid rollout scheme that biases residual interventions toward states frequently visited by the base policy, aligning collected trajectories with the generalist’s deployment distribution while capturing recovery behaviors. In Stage 3 (fine-tuning), these curated trajectories are distilled back into the generalist with standard SFT, applicable to both flow-matching and autoregressive heads. We evaluate PLD across diverse settings: it achieves a near-saturated 99% task success rate on the LIBERO benchmark, delivers over 50% performance gains in SimplerEnv, and demonstrates a 100% success rate on real-world Franka arm and YAM arm dexterous manipulation tasks. We further provide ablations showing that residual policy probing and distribution-aware replay are key to collecting deployment-aligned data that improves VLAs’ capabilities on both seen and unseen tasks. Our results demonstrate that RL-generated, policy-aligned data can surpass teleoperation-only demonstrations, offering a scalable path toward self-improving VLA models.
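The abstract describes a three-stage loop. The outline below sketches how those stages fit together; the callables it takes (train_residual_rl, hybrid_rollout, sft_finetune) are hypothetical placeholders standing in for the paper's components, not the authors' code.

```python
def pld(vla, tasks, train_residual_rl, hybrid_rollout, sft_finetune, num_rollouts=1000):
    """Illustrative outline of Probe, Learn, Distill under the assumptions above."""
    dataset = []
    for task in tasks:
        # Stage 1 (probe): freeze the VLA backbone and train a lightweight
        # residual actor with off-policy RL; it takes over in states where
        # the base policy fails, exposing the generalist's failure regions.
        residual = train_residual_rl(vla, task)

        # Stage 2 (collect): hybrid rollouts mostly follow the base policy and
        # hand control to the residual specialist near failure states, keeping
        # the collected trajectories close to the generalist's deployment
        # distribution while still capturing recovery behavior.
        for _ in range(num_rollouts):
            traj = hybrid_rollout(vla, residual, task)
            if traj.success:
                dataset.append(traj)

    # Stage 3 (distill): standard SFT on the curated trajectories, applicable
    # to both flow-matching and autoregressive action heads.
    return sft_finetune(vla, dataset)
```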
Project Page: https://www.wenlixiao.com/self-improve-VLA-PLD
arXiv: https://arxiv.org/abs/2511.00091
Thread on X: https://x.com/_wenlixiao/status/1984307913247375428