AffordGen: Generating Diverse Demonstrations for Generalizable Object Manipulation with Affordance Correspondence

📰 ArXiv cs.AI

arXiv:2604.10579v1 Announce Type: cross Abstract: Despite the recent success of modern imitation learning methods in robot manipulation, their performance often degrades under geometric variation because of limited data diversity. Leveraging powerful 3D generative models and vision foundation models (VFMs), the proposed AffordGen framework overcomes this limitation by exploiting semantic correspondences of meaningful keypoints across large-scale 3D meshes to generate new robot manipulation trajectories.
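
The abstract only sketches the pipeline, but the core idea, matching demonstration keypoints between object meshes via feature correspondence and retargeting the trajectory to the new geometry, is easy to illustrate. Below is a minimal sketch assuming per-vertex VFM features have already been extracted for both meshes; the function names (`match_keypoints`, `kabsch`, `retarget_trajectory`) and the rigid-transform retargeting are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def match_keypoints(src_feats, tgt_feats, src_kp_idx):
    """For each annotated source keypoint, pick the target-mesh vertex whose
    (unit-normalized) per-vertex feature is most cosine-similar."""
    src = src_feats[src_kp_idx]                                   # (K, D)
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    sim = src @ tgt.T                                             # (K, V_tgt)
    return sim.argmax(axis=1)                                     # matched target vertex indices

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def retarget_trajectory(traj, src_kp_xyz, tgt_kp_xyz):
    """Warp an end-effector position trajectory (T, 3) recorded on the source
    object into the target object's frame via matched keypoints."""
    R, t = kabsch(src_kp_xyz, tgt_kp_xyz)
    return traj @ R.T + t

# Toy usage with synthetic data (stand-ins for real VFM features and meshes).
rng = np.random.default_rng(0)
src_kp_xyz = rng.normal(size=(4, 3))                              # annotated source keypoints
theta = np.pi / 6                                                 # target = rotated + shifted source
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
tgt_kp_xyz = src_kp_xyz @ R_true.T + np.array([0.1, 0.0, 0.3])
traj = np.linspace([0.0, 0.0, 0.5], src_kp_xyz[0], num=20)        # approach toward keypoint 0
new_traj = retarget_trajectory(traj, src_kp_xyz, tgt_kp_xyz)
print(np.allclose(new_traj[-1], tgt_kp_xyz[0]))                   # True: endpoint lands on target keypoint
```

A single rigid transform is of course the crudest possible retargeting; handling the intra-category shape variation the paper targets would require a non-rigid or keypoint-relative warp, but the correspondence-then-retarget structure stays the same.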

Published 14 Apr 2026