Before LLMs, There Was Evolution

Martin Andrews · Advanced · 📄 Research Papers Explained · 10mo ago
Are we thinking about AI all wrong? While the world focuses on *training* models, a powerful, older paradigm of *evolving* them holds the key to the next wave of innovation. In this video, we're diving into the foundational principles of Genetic Algorithms (GAs) and Genetic Programming (GP). This isn't just a history lesson: I'm taking you back to the topic of my PhD research to uncover the timeless mechanics that are now being combined with LLMs to create truly novel solutions. We'll go under the hood to understand the "digital DNA" that allows code to rewrite itself, and how the explosive power of "crossover" enables a search for solutions that's exponentially better than random guessing. We'll break down the core concepts every AI builder should know, and debunk some common myths along the way.

### WANT TO GO DEEPER?

This is the first video in my series on Evolutionary AI. Subscribe so you don't miss the next episodes, where we connect these ideas to cutting-edge papers like AlphaEvolve!

### SOCIALS

* https://github.com/mdda
* https://sg.linkedin.com/in/martinandrews
* https://x.com/mdda123

### Previous Video in Series

* https://youtu.be/98mkDuE4Q10

#AI #GeneticAlgorithms #EvolutionaryAI #AIExplained #TechExplained
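The core GA loop the video describes (selection, crossover, mutation over a population of "digital DNA") can be sketched in a few lines of Python. This is a minimal illustrative toy, not code from the video: the OneMax bitstring fitness, tournament selection, and the population/mutation parameters are all assumptions chosen for a small, runnable example.

```python
import random

random.seed(0)  # reproducible toy run

GENOME_LEN = 20            # bits of "digital DNA" per individual
POP_SIZE = 40
GENERATIONS = 60
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    # Toy "OneMax" objective: count the 1-bits (a stand-in for any scoring function)
    return sum(genome)

def tournament(pop, k=3):
    # Selection: the fittest of k randomly sampled individuals becomes a parent
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover: splice two parents' genomes at a random cut point
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with a small probability
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    # Start from a random population, then iterate select -> crossover -> mutate
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    best = max(pop, key=fitness)
    for _ in range(GENERATIONS):
        children = [mutate(crossover(tournament(pop), tournament(pop)))
                    for _ in range(POP_SIZE - 1)]
        pop = [best] + children          # elitism: keep the best seen so far
        best = max(pop, key=fitness)
    return best

best = evolve()
```

Because crossover recombines building blocks from two fit parents rather than sampling fresh candidates, this loop climbs toward the all-ones genome far faster than random guessing would.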
