PhyCo: Learning Controllable Physical Priors for Generative Motion

📰 ArXiv cs.AI

arXiv:2604.28169v1 (announce type: cross)

Abstract: Modern video diffusion models excel at appearance synthesis but still struggle with physical consistency: objects drift, collisions lack realistic rebound, and material responses seldom match their underlying properties. We present PhyCo, a framework that introduces continuous, interpretable, and physically grounded control into video generation. Our approach integrates three key components: (i) a large-scale dataset of over 100K photorealistic s[…]

Published 1 May 2026