Prompt Learning: How We Made Claude Code 20% Better Without Changing the Model
Your coding agent forgets everything between sessions. What if you could systematically figure out what to tell it?
Laurie Voss (Head of DevRel at Arize) walks through Prompt Learning, a technique that mines your own git history and failure data to generate better instructions for Claude Code's CLAUDE.md file. No fine-tuning, no new tools, no architecture changes: just better prompts, derived from evidence.
Results on SWE-Bench Lite:
Cross-repo: 40% → 45% (+5 percentage points)
Django-specific: +10.87 percentage points (~20% relative improvement)
GPT-4.1 with optimized prompts nearly matche…
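The core loop described above can be sketched in a few lines: collect records of past agent failures, look for categories that recur, and turn them into candidate instructions appended to CLAUDE.md. This is a minimal illustrative sketch, not Arize's actual implementation; the failure records, category names, and rule templates are all assumptions (in the real technique, an LLM proposes the rule text from the failure evidence).

```python
# Minimal sketch of the prompt-learning idea: mine failure records for
# recurring patterns and turn them into CLAUDE.md instructions.
# Categories and templates below are illustrative assumptions.
from collections import Counter

def derive_rules(failures, min_count=2):
    """Turn failure categories that recur at least min_count times
    into candidate instruction strings."""
    counts = Counter(f["category"] for f in failures)
    # In the real technique an LLM would draft the rule text from the
    # failure evidence; here we use fixed templates per category.
    templates = {
        "missing-tests": "Always add or update tests for any code you change.",
        "wrong-import-style": "Use absolute imports, matching the repo convention.",
    }
    return [templates[c] for c, n in counts.items()
            if n >= min_count and c in templates]

def render_claude_md(rules):
    """Render derived rules as a CLAUDE.md instruction block."""
    return "# Project instructions\n" + "\n".join(f"- {r}" for r in rules)

failures = [
    {"category": "missing-tests"},
    {"category": "missing-tests"},
    {"category": "wrong-import-style"},  # only once, so filtered out
]
print(render_claude_md(derive_rules(failures)))
```

The evidence threshold (`min_count`) is the key design choice: a rule is only promoted into the prompt when the same failure keeps happening, which is what keeps the generated instructions grounded in data rather than guesswork.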
Watch on YouTube ↗
DeepCamp AI