This Is How Reasoning LLMs Really Work

Tales Of Tensors · Advanced · 🧠 Large Language Models · 2mo ago
Modern language models don't just predict the next token anymore; they reason. In this video, we visualize what actually happens inside reasoning LLMs like OpenAI's o1 and o3. Instead of producing an answer immediately, the model explores multiple internal paths, checks its own reasoning, backtracks from dead ends, and only then commits to a final answer. You'll see how this internal "thought tree" works, why some branches fail, how self-verification improves accuracy, and how techniques like test-time compute and reinforcement learning shape these behaviors. We also discuss why reasonin…
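The explore, verify, backtrack, commit loop described above can be sketched as a small search over a thought tree. This is an illustrative toy, not how these models are implemented: `propose_steps` and `verify` are hypothetical stand-ins for the model's step generation and self-checking, which in real reasoning models are learned behaviors shaped by reinforcement learning, not explicit search code.

```python
import random

def propose_steps(path, n=3):
    """Propose n candidate next reasoning steps (stubbed with random scores)."""
    return [(path + [f"step{len(path)}-{i}"], random.random()) for i in range(n)]

def verify(path):
    """Self-verification stub: probabilistically accept a completed path."""
    return random.random() > 0.3

def reason(max_depth=4):
    """Depth-first search over a thought tree with backtracking."""
    stack = [[]]  # start from an empty chain of thought
    while stack:
        path = stack.pop()
        if len(path) == max_depth:
            if verify(path):      # final self-check before committing
                return path       # commit to this chain of thought
            continue              # dead end: backtrack to another branch
        # push candidates so the highest-scoring one is explored first
        for candidate, score in sorted(propose_steps(path), key=lambda c: c[1]):
            stack.append(candidate)
    return None                   # every branch failed verification
```

Spending more of this search at inference time is, loosely, what "test-time compute" refers to: more branches explored and verified before the model commits to an answer.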
Watch on YouTube ↗

Chapters (4)

Comparing Standard vs. Reasoning Models
0:52 Emergent Strategies and Backtracking
1:18 System 1 vs. System 2 Thinking in LLMs
2:23 Internal Tokens an
Next Up
5 Levels of AI Agents - From Simple LLM Calls to Multi-Agent Systems
Dave Ebbelaar (LLM Eng)