Fast LLM Inference From Scratch (using CUDA)

📰 Hacker News · homarp

344 points · 57 comments on Hacker News.

Published 14 Dec 2024