It's 2026, and We're Still Talking Evals

MLOps.community · Intermediate · 📋 Product Management · 2d ago
Maggie Konstanty is an AI Product Manager at Prosus, one of the world's largest consumer internet companies, where she builds and evaluates AI agents for food ordering and e-commerce at scale. She's been inside the messy reality of LLM evaluation longer than most, and her take is unfiltered.

It's 2026, and We're Still Talking Evals // MLOps Podcast #372 with Maggie Konstanty, AI Product Manager at Prosus

🧪 Why accuracy metrics lie — Maggie breaks down why "95% accurate" tells you almost nothing about whether your agent is actually working in the real world, and what to measure instead.

🏗️ Pre-ship vs. production evals — Your eval suite before launch will not survive first contact with real users. Maggie explains the structural disconnect and how to close the gap.

👻 The silent failure: user drop-off — Users who are unhappy don't complain; they just leave. Discover why drop-off analytics are one of the most underutilized eval signals in production.

🎯 Instructed to fail: the 20-evaluator trap — Setting up 20 types of evaluators disconnected from your product goal is a fast path to wasted time. How to design evals tied to real outcomes.

🍽️ The "surprise me" edge case — A real example from Prosus's food ordering agent, and what it reveals about how users actually behave versus how PMs imagine they do.

🤖 LLM-as-a-judge: the limits — Why Maggie doesn't lean on LLM-as-a-judge for accuracy measurement, and what approaches she uses instead for production-grade evaluation.

🛠️ Arize/Phoenix & eval tooling critique — A candid take on the current state of eval platforms, why she spent a whole day fighting the UI, and why mature teams often go back to custom code.

🧬 Eval as team DNA — Evals aren't a launch checklist. Maggie makes the case that they need to be a constant practice embedded in team culture, and why alignment on "what good looks like" is harder than any technical implementation.

🔢 When to stop optimizing — What happens when your eval score ap
Watch on YouTube ↗
