Libra-VLA: Achieving Learning Equilibrium via Asynchronous Coarse-to-Fine Dual-System
📰 ArXiv cs.AI
arXiv:2604.24921v1 Announce Type: cross
Abstract: Vision-Language-Action (VLA) models are a promising paradigm for generalist robotic manipulation, grounding high-level semantic instructions in executable physical actions. However, prevailing approaches typically adopt a monolithic generation paradigm, directly mapping visual-linguistic features to high-frequency motor commands in a flat, non-hierarchical fashion. This strategy overlooks the inherent hierarchy of robotic manipulation, where …
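To make the architectural contrast concrete, here is a minimal PyTorch sketch of the two paradigms the abstract juxtaposes: a flat policy that maps fused vision-language features straight to motor commands, and a dual-system decomposition suggested by the title, in which a slow "coarse" planner is refreshed asynchronously while a fast "fine" decoder emits actions at every control step. All class names, dimensions, and update rates here are illustrative assumptions; the truncated abstract does not specify Libra-VLA's actual design.

```python
import torch
import torch.nn as nn

class FlatVLAPolicy(nn.Module):
    """Monolithic baseline the abstract critiques: one head maps fused
    vision-language features directly to high-frequency motor commands."""
    def __init__(self, feat_dim=512, action_dim=7):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, vl_features):
        return self.head(vl_features)


class CoarseToFineVLAPolicy(nn.Module):
    """Hypothetical coarse-to-fine dual system (an assumption, not
    Libra-VLA's confirmed method): a slow coarse module produces a
    subgoal latent at low rate; a fast fine decoder conditions on the
    cached subgoal to emit actions at the full control rate."""
    def __init__(self, feat_dim=512, subgoal_dim=64, action_dim=7):
        super().__init__()
        self.coarse = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, subgoal_dim),
        )
        self.fine = nn.Sequential(
            nn.Linear(feat_dim + subgoal_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )
        self._subgoal = None  # cached between slow coarse updates

    def forward(self, vl_features, refresh_subgoal: bool):
        # Asynchrony: the coarse planner runs only when requested,
        # while the fine decoder runs on every forward call.
        if refresh_subgoal or self._subgoal is None:
            self._subgoal = self.coarse(vl_features)
        return self.fine(torch.cat([vl_features, self._subgoal], dim=-1))


if __name__ == "__main__":
    feats = torch.randn(1, 512)
    policy = CoarseToFineVLAPolicy()
    for t in range(10):
        # Refresh the coarse subgoal at 1/5 the fine control rate
        # (the ratio is an arbitrary illustrative choice).
        action = policy(feats, refresh_subgoal=(t % 5 == 0))
    print(action.shape)  # torch.Size([1, 7])
```

The key design point the sketch isolates is that only the small `fine` module sits in the high-frequency control loop, while the heavier `coarse` computation is amortized across many steps; how Libra-VLA actually balances the two systems is precisely what the paper's "learning equilibrium" framing addresses.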