WebLLM: A High-Performance In-Browser LLM Inference Engine
📰 ArXiv cs.AI
arXiv:2412.15803v2 Announce Type: replace-cross Abstract: Advancements in large language models (LLMs) have unlocked remarkable capabilities. While deploying these models typically requires server-grade GPUs and cloud-based inference, the recent emergence of smaller open-source models and increasingly powerful consumer devices has made on-device deployment practical. The web browser as a platform for on-device deployment is universally accessible, provides a natural agentic environment, and con