Evals, error analysis, and better prompts: A systematic approach to improving your AI products | Hamel Husain (ML engineer)

🎙️ How to build better AI products through data-driven error analysis, evaluation frameworks, and systematic quality improvement
Why is this in your inbox? Because How I AI, hosted by Claire Vo, is part of the Lenny’s Podcast Network. Every Monday, we share a 30- to 45-minute episode with a new guest demoing a practical, impactful way they’ve learned to use AI in their work or life. No pontificating, just specific and actionable advice.

Brought to you by:
GoFundMe Giving Funds: One account. Zero hassle.
Persona: Trusted identity verification for any use case.

Hamel Husain, an AI consultant and educator, shares his systematic approach to improving AI product quality through error analysis, evaluation frameworks, and prompt engineering. In this episode, he demonstrates how product teams can move beyond “vibe checking” their AI systems to implement data-driven quality improvement processes that identify and fix the most common errors. Using real examples from client work with Nurture Boss (an AI assistant for property managers), Hamel walks through practical techniques that product managers can implement immediately to dramatically improve their AI products.

What you’ll learn:
Where to find Hamel Husain:
Website: https://hamel.dev/
Twitter: https://twitter.com/HamelHusain
Course: https://maven.com/parlance-labs/evals
GitHub: https://github.com/hamelsmu

Where to find Claire Vo:
ChatPRD: https://www.chatprd.ai/
Website: https://clairevo.com/
LinkedIn: https://www.linkedin.com/in/clairevo/

In this episode, we cover:
(00:00) Introduction to Hamel Husain |