Aligned Vector Quantization for Edge-Cloud Collaborative Vision-Language Models
📰 ArXiv cs.AI
arXiv:2411.05961v2 Announce Type: replace-cross Abstract: Vision-Language Models (VLMs) are central to Visual Question Answering (VQA) systems and are typically deployed in the cloud due to their high computational demands. However, this cloud-only approach underutilizes edge computational resources and requires significant bandwidth for transmitting raw images. In this paper, we introduce an edge-cloud collaborative VQA system, called LLaVA-AlignedVQ, which features a novel Aligned Vector Quantization…
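The bandwidth saving comes from sending codebook indices instead of raw features. As a minimal sketch of plain vector quantization (not the paper's Aligned VQ variant; the codebook and dimensions here are illustrative assumptions), an edge device could encode each feature vector as the index of its nearest codebook entry:

```python
import numpy as np

def vq_encode(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each feature vector (N, D) to the index of its nearest
    codebook entry (K, D); only the (N,) index array is transmitted."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Cloud-side reconstruction: look up the quantized vectors."""
    return codebook[indices]

# Toy demo with a random codebook (hypothetical sizes, not from the paper).
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))                      # K=16 codes, D=8
feats = codebook[[3, 7, 3]] + 0.01 * rng.normal(size=(3, 8))
idx = vq_encode(feats, codebook)
recon = vq_decode(idx, codebook)
print(idx.tolist())   # indices of the nearest codes
```

Transmitting one small integer per vector in place of D floats is what cuts the edge-to-cloud traffic; the "aligned" variant in the paper additionally trains the quantizer so the reconstructed features stay compatible with the downstream VLM.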