RESample: A Robust Data Augmentation Framework via Exploratory Sampling for Robotic Manipulation

📰 ArXiv cs.AI

arXiv:2510.17640v3 Announce Type: replace-cross Abstract: Vision-Language-Action (VLA) models have demonstrated remarkable performance on complex tasks through imitation learning in recent robotic manipulation work. Given large-scale, high-quality demonstration datasets, existing imitation learning methods equip VLA models with strong capabilities. However, these datasets, which predominantly consist of successful trajectories, are costly to collect and often limited in distribution, …

Published 13 Apr 2026