MobileLLM-Flash: Latency-Guided On-Device LLM Design for Industry Scale Deployment

📰 ArXiv cs.AI

arXiv:2603.15954v2 Announce Type: replace-cross Abstract: Real-time AI experiences call for on-device large language models (OD-LLMs) optimized for efficient deployment on resource-constrained hardware. The most useful OD-LLMs produce near-real-time responses and exhibit broad hardware compatibility, maximizing user reach. We present a methodology for designing such models using hardware-in-the-loop architecture search under mobile latency constraints. This system is amenable to industry-scale deployment.
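The latency-guided search the abstract describes can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's method: it uses a stand-in analytic latency model where the paper measures hardware-in-the-loop, and the config fields (`layers`, `hidden`) and the capacity proxy are illustrative assumptions.

```python
import random

def estimate_latency_ms(cfg):
    """Stand-in latency model; a real system would measure on the target device."""
    return 0.002 * cfg["layers"] * cfg["hidden"] + 0.5 * cfg["layers"]

def search(budget_ms, n_samples=200, seed=0):
    """Random search: keep the largest candidate that meets the latency budget."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_samples):
        cfg = {"layers": rng.choice([8, 12, 16, 24]),
               "hidden": rng.choice([512, 768, 1024, 2048])}
        if estimate_latency_ms(cfg) > budget_ms:
            continue  # violates the mobile latency constraint
        size = cfg["layers"] * cfg["hidden"]  # crude proxy for model capacity
        if best is None or size > best[0]:
            best = (size, cfg)
    return best[1] if best else None

print(search(budget_ms=30.0))
```

Under these made-up numbers, the search settles on the highest-capacity configuration whose estimated latency stays under the 30 ms budget; swapping the analytic estimate for on-device measurement is what makes the approach hardware-in-the-loop.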

Published 29 Apr 2026