Your Agent Can Now Train Models — Merve Noyan, Hugging Face
Open-weight models have caught up: GLM 5.1 now tops closed models on the Artificial Analysis intelligence index, and each release cycle narrows the gap further. The practical upside goes beyond benchmarks: full weight access means you can quantize, fine-tune, and deploy to edge devices or browsers without data ever leaving your infrastructure.
@MerveNoyan walks through the Hugging Face ecosystem built around this: inference providers that route each request to the fastest or cheapest option per model, benchmark datasets for filtering by SWE-bench or AIME scores directly on the Hub, a traces repository type for storing and exploring agent sessions, and skills that plug into coding agents. The closer is a live demo where she asks Claude Code to fine-tune a vision-language model on a dataset by name: the agent calculates VRAM requirements, selects an instance, and kicks off the job. What used to be a day of napkin math is now a prompt.
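The VRAM napkin math the agent automates can be sketched with a common rule of thumb: full fine-tuning with Adam needs memory for weights, gradients, and two fp32 optimizer moments, plus headroom for activations. The function below is an illustrative estimate, not the formula the agent actually runs:

```python
def estimate_finetune_vram_gb(num_params_billions: float,
                              bytes_per_param: int = 2,
                              optimizer_bytes_per_param: int = 8,
                              overhead: float = 1.2) -> float:
    """Rough VRAM estimate (in GB) for full fine-tuning.

    Counts weights + gradients (same dtype as weights, hence the 2x)
    plus Adam's two fp32 moment tensors (~8 bytes/param), then applies
    an overhead factor for activations and CUDA workspace.
    Illustrative napkin math only.
    """
    params = num_params_billions * 1e9
    total_bytes = params * (2 * bytes_per_param + optimizer_bytes_per_param)
    return total_bytes * overhead / 1e9

# A 7B model in bf16 with Adam lands around ~100 GB,
# i.e. multiple GPUs or a memory-efficient method like LoRA.
print(round(estimate_finetune_vram_gb(7), 1))
```

This is why "fine-tune this model" is not a single-GPU default: the optimizer states alone can dwarf the weights, which is exactly the arithmetic the agent does before picking an instance.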
Speaker info:
- https://x.com/mervenoyann
- https://www.linkedin.com/in/merve-noyan-28b1a113a/
- https://github.com/merveenoyan