What Users Leave Unsaid: Under-Specified Queries Limit Vision-Language Models
📰 ArXiv cs.AI
arXiv:2601.06165v2 Announce Type: replace-cross Abstract: Current vision-language benchmarks predominantly feature well-structured questions with clear, explicit prompts. However, real user queries are often informal and underspecified. Users naturally leave much unsaid, relying on images to convey context. We introduce HAERAE-Vision, a benchmark of 653 real-world visual questions from Korean online communities (0.76% survival from 86K candidates), each paired with an explicit rewrite, yielding