Evaluating Large Language Model Outputs: A Practical Guide
This course covers evaluating Large Language Models (LLMs): it begins with foundational evaluation methods, explores advanced techniques using Vertex AI tools such as Automatic Metrics and AutoSxS, and looks ahead to how generative AI evaluation is likely to evolve.
This course is ideal for AI Product Managers looking to optimize LLM applications, Data Scientists interested in advanced model evaluation techniques, AI Ethicists and Policy Makers focused on responsible AI deployment, and Academic Researchers studying the impact of generative AI across domains.
A basic understanding of artifi…
Watch on Coursera ↗