GEPA Explained!

Weaviate vector database · Beginner · 📄 Research Papers Explained · 9mo ago
GEPA is a super exciting advancement for DSPy and a new generation of optimization algorithms re-imagined with LLMs! Starting with the title of the paper, the authors find that Reflective Prompt Evolution can outperform Reinforcement Learning: using LLMs to write and refine prompts (for another LLM to complete a task) outperforms highly targeted gradient-descent updates from cutting-edge RL algorithms such as GRPO! GEPA makes three key innovations in how exactly we use LLMs to propose prompts for LLMs: (1) Pareto-Optimal Candidate Selection, (2) Reflective Prompt Mutation, and (3) System-Aware Merging for optimizing Compound AI Systems. The authors further present how GEPA can be used for training at test-time, one of the most exciting directions AI is evolving in!

I hope you enjoy this review of the paper! Please let us know if you have any questions or inspired insights, and we would be more than happy to discuss them with you!

Links:
GEPA: https://arxiv.org/abs/2507.19457
Announcement thread from Lakshya A. Agrawal on Twitter: https://x.com/LakshyAAAgrawal/status/1949867947867984322
DSPy 3.0 -- and DSPy at Databricks by Omar Khattab: https://www.youtube.com/watch?v=grIuzesOwwU
DSPy on GitHub: https://github.com/stanfordnlp/dspy
The Unreasonable Effectiveness of Eccentric Automatic Prompts: https://arxiv.org/abs/2402.10949
Large Language Models as Optimizers: https://arxiv.org/abs/2309.03409
MIPRO: https://arxiv.org/abs/2406.11695
Compound AI Systems: https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/

Chapters:
0:00 Prompts vs. RL
2:38 LLMs as Optimizers
6:05 Pareto-Optimal Candidate Selection
11:15 Reflective Prompt Evolution
16:13 GEPA Algorithm
18:46 Experimental Results
26:50 Inference-Time Search
29:18 DSPy 3.0
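To give a feel for the first innovation, here is a minimal, illustrative sketch of instance-wise Pareto-optimal candidate selection: instead of keeping only the candidate prompt with the best average score (which collapses diversity), keep every candidate that is best on at least one task instance. This is a simplified toy, not the paper's implementation; the candidate names and scores are made up for illustration.

```python
def pareto_candidates(scores):
    """Keep every candidate that achieves the best score on at
    least one task instance (an instance-wise Pareto front).

    scores: dict mapping candidate name -> list of per-task scores.
    Returns the set of surviving candidate names.
    """
    n_tasks = len(next(iter(scores.values())))
    survivors = set()
    for t in range(n_tasks):
        best = max(s[t] for s in scores.values())
        for name, s in scores.items():
            if s[t] == best:
                survivors.add(name)
    return survivors

# Three prompt candidates scored on four tasks (made-up numbers).
scores = {
    "prompt_a": [0.9, 0.2, 0.5, 0.4],  # best on task 0
    "prompt_b": [0.1, 0.8, 0.5, 0.4],  # best on task 1
    "prompt_c": [0.3, 0.3, 0.4, 0.3],  # never best on any task
}
print(sorted(pareto_candidates(scores)))  # ['prompt_a', 'prompt_b']
```

Note that "prompt_c" is filtered out because it is dominated on every instance, while both specialists survive even though their averages are similar — this is what keeps complementary prompt strategies alive during the search.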

