Decision Making and Reinforcement Learning
This course is an introduction to sequential decision making and reinforcement learning. We start with a discussion of utility theory to learn how preferences can be represented and modeled for decision making. We first model simple decision problems as multi-armed bandit problems and discuss several approaches to evaluative feedback. We then model decision problems as finite Markov decision processes (MDPs) and discuss their solutions via dynamic programming algorithms. We touch on the notion of partial observability in real problems, modeled by POMDPs and solved by online planning methods. Finally, we introduce the reinforcement learning problem and discuss two paradigms: Monte Carlo methods and temporal-difference learning. We conclude by noting how the two paradigms lie on a spectrum of n-step temporal-difference methods. Throughout, the course emphasizes algorithms and worked examples.
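The multi-armed bandit setting mentioned above can be illustrated with a minimal sketch, not taken from the course materials: an epsilon-greedy learner that balances exploration and exploitation while estimating each arm's value with an incremental sample average. The arm means and parameter values here are arbitrary choices for illustration.

```python
import random

def run_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Epsilon-greedy action selection on a stationary Gaussian bandit.

    With probability epsilon, explore a random arm; otherwise exploit
    the arm with the highest current value estimate.
    """
    rng = random.Random(seed)
    k = len(true_means)
    q = [0.0] * k  # value estimates, one per arm
    n = [0] * k    # number of pulls per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(k)                   # explore
        else:
            a = max(range(k), key=lambda i: q[i])  # exploit
        reward = rng.gauss(true_means[a], 1.0)     # noisy reward
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]             # incremental mean update
    return q, n

q, n = run_bandit([0.2, 0.5, 1.0])
```

After enough pulls, the estimates in `q` converge toward the true arm means and the best arm dominates the pull counts; raising `epsilon` trades some of that exploitation for faster discovery of the best arm.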
Watch on Coursera ↗