[MINI] Markov Decision Processes

Data Skeptic · Kyle Polich and Linh Da Tran

January 26, 2018 · 20m 24s

Show Notes

Formally, an MDP is defined as the tuple containing states, actions, the transition function, and the reward function. This podcast examines each of these and presents them in the context of simple examples. Despite MDPs suffering from the curse of dimensionality, they're a useful formalism and a basic concept we will expand on in future episodes.
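The tuple definition above can be sketched in code. This is a minimal, hypothetical example (the two-state "machine" scenario, state names, and reward values are invented for illustration, not from the episode): states, actions, a transition function mapping a state-action pair to a distribution over next states, and a reward function, plus value iteration to show how the formalism is used.

```python
# A minimal sketch of an MDP as the tuple (S, A, T, R),
# using a hypothetical two-state "machine" example.
states = ["working", "broken"]
actions = ["use", "repair"]

# Transition function T[(s, a)] -> {next_state: probability}
T = {
    ("working", "use"):    {"working": 0.9, "broken": 0.1},
    ("working", "repair"): {"working": 1.0},
    ("broken",  "use"):    {"broken": 1.0},
    ("broken",  "repair"): {"working": 0.8, "broken": 0.2},
}

# Reward function R[(s, a)] -> immediate reward
R = {
    ("working", "use"):     1.0,
    ("working", "repair"): -1.0,
    ("broken",  "use"):     0.0,
    ("broken",  "repair"): -1.0,
}

def value_iteration(gamma=0.9, iters=100):
    """Estimate the value of each state under the optimal policy."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {
            s: max(
                R[(s, a)] + gamma * sum(p * V[s2] for s2, p in T[(s, a)].items())
                for a in actions
            )
            for s in states
        }
    return V

V = value_iteration()
print(V)  # the "working" state should be valued higher than "broken"
```

Note that both T and R here are enumerated as dictionaries; the curse of dimensionality mentioned above shows up because these tables grow exponentially as states and actions gain more features.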