Markov decision process
Markov decision processes (MDPs) provide a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying a wide range of optimization problems solved via dynamic programming and reinforcement learning. MDPs were known at least as early as the 1950s (cf. Bellman 1957). A core body of research on Markov decision processes resulted from Ronald A. Howard's book published in 1960, Dynamic Programming and Markov Processes. They are used in a wide area of disciplines, including robotics, automated control, economics, and manufacturing.
More precisely, a Markov decision process is a discrete-time stochastic control process. At each time step, the process is in some state $s$, and the decision maker may choose any action $a$ that is available in state $s$. The process responds at the next time step by randomly moving into a new state $s'$, and giving the decision maker a corresponding reward $R_a(s,s')$.
The probability that the process moves into its new state $s'$ is influenced by the chosen action. Specifically, it is given by the state transition function $P_a(s,s')$. Thus, the next state $s'$ depends on the current state $s$ and the decision maker's action $a$. But given $s$ and $a$, it is conditionally independent of all previous states and actions; in other words, the state transitions of an MDP satisfy the Markov property.
Markov decision processes are an extension of Markov chains; the difference is the addition of actions (allowing choice) and rewards (giving motivation). Conversely, if only one action exists for each state and all rewards are the same (e.g., zero), a Markov decision process reduces to a Markov chain.
Definition

A Markov decision process is a 5-tuple $(S, A, P_\cdot(\cdot,\cdot), R_\cdot(\cdot,\cdot), \gamma)$, where

- $S$ is a finite set of states,
- $A$ is a finite set of actions (alternatively, $A_s$ is the finite set of actions available from state $s$),
- $P_a(s,s') = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ is the probability that action $a$ in state $s$ at time $t$ will lead to state $s'$ at time $t+1$,
- $R_a(s,s')$ is the immediate reward (or expected immediate reward) received after transition to state $s'$ from state $s$,
- $\gamma \in [0,1]$ is the discount factor, which represents the difference in importance between future rewards and present rewards.

(Note: The theory of Markov decision processes does not require that $S$ or $A$ be finite, but the basic algorithms below assume that they are finite.)
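For concreteness, a small finite MDP can be written down directly as its transition and reward tables. The following Python sketch is purely illustrative (the two states, two actions, and all numbers are made up, not taken from this article); it stores $P_a(s,s')$ and $R_a(s,s')$ as nested dictionaries:

```python
# A minimal sketch of a finite MDP as plain data structures.
# The concrete states, actions, and numbers are hypothetical.

states = ["ok", "broken"]
actions = ["wait", "repair"]
gamma = 0.9  # discount factor in [0, 1]

# P[a][s][s2] = probability of moving to s2 when taking action a in state s
P = {
    "wait":   {"ok": {"ok": 0.9, "broken": 0.1},
               "broken": {"ok": 0.0, "broken": 1.0}},
    "repair": {"ok": {"ok": 1.0, "broken": 0.0},
               "broken": {"ok": 0.8, "broken": 0.2}},
}

# R[a][s][s2] = immediate reward for the transition s -> s2 under action a
R = {
    "wait":   {"ok": {"ok": 1.0, "broken": 0.0},
               "broken": {"ok": 0.0, "broken": 0.0}},
    "repair": {"ok": {"ok": -0.5, "broken": -0.5},
               "broken": {"ok": -2.0, "broken": -2.0}},
}

# Sanity check: each (action, state) row of P must sum to 1.
for a in actions:
    for s in states:
        assert abs(sum(P[a][s].values()) - 1.0) < 1e-9
```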
Problem
The core problem of MDPs is to find a "policy" for the decision maker: a function $\pi$ that specifies the action $\pi(s)$ that the decision maker will choose when in state $s$. Note that once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like a Markov chain.

The goal is to choose a policy $\pi$ that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:

$$\sum_{t=0}^{\infty} \gamma^t R_{a_t}(s_t, s_{t+1}) \qquad \text{(where we choose } a_t = \pi(s_t)\text{)}$$

where $\gamma$ is the discount factor and satisfies $0 \le \gamma \le 1$. (For example, $\gamma = 1/(1+r)$ when the discount rate is $r$.) $\gamma$ is typically close to 1.

Because of the Markov property, the optimal policy for this particular problem can indeed be written as a function of $s$ only, as assumed above.
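As a minimal illustration of this objective, the snippet below computes the discounted sum of rewards for a single realized trajectory with an arbitrary, made-up reward sequence; the objective above is the expectation of this quantity over trajectories:

```python
# Discounted return of a single (hypothetical) trajectory of rewards.
gamma = 0.95
rewards = [1.0, 0.0, 2.0, 1.0, 0.0]  # R_{a_t}(s_t, s_{t+1}) for t = 0, 1, ...

discounted_return = sum(gamma ** t * r for t, r in enumerate(rewards))
print(discounted_return)  # 1.0 + 0 + 2*0.95**2 + 0.95**3 + 0
```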
Algorithms
MDPs can be solved by linear programming or dynamic programming. In what follows we present the latter approach.
Suppose we know the state transition function $P$ and the reward function $R$, and we wish to calculate the policy that maximizes the expected discounted reward.

The standard family of algorithms to calculate this optimal policy requires storage for two arrays indexed by state: value $V$, which contains real values, and policy $\pi$, which contains actions. At the end of the algorithm, $\pi$ will contain the solution and $V(s)$ will contain the discounted sum of the rewards to be earned (on average) by following that solution from state $s$.

The algorithm has the following two kinds of steps, which are repeated in some order for all the states until no further changes take place. They are defined recursively as follows:

$$\pi(s) := \arg\max_{a} \left\{ \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V(s') \right) \right\}$$

$$V(s) := \sum_{s'} P_{\pi(s)}(s,s') \left( R_{\pi(s)}(s,s') + \gamma V(s') \right)$$

Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.
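Written as code, the two steps might look like the following sketch, assuming the transition and reward tables are stored as nested dictionaries `P[a][s][s2]` and `R[a][s][s2]` (an assumed layout, not prescribed by the text): step 1 recomputes the policy from the current values, and step 2 recomputes the values from the current policy.

```python
# A minimal sketch of the two recursive steps, for illustration only.
# P[a][s][s2] and R[a][s][s2] are assumed nested dictionaries; V and pi are dicts.

def policy_step(s, V, states, actions, P, R, gamma):
    """Step 1: pi(s) := argmax_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma V(s'))."""
    return max(actions,
               key=lambda a: sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2])
                                 for s2 in states))

def value_step(s, V, pi, states, P, R, gamma):
    """Step 2: V(s) := sum_{s'} P_{pi(s)}(s,s') (R_{pi(s)}(s,s') + gamma V(s'))."""
    a = pi[s]
    return sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in states)
```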
Notable variants
Value iteration
In value iteration (Bellman 1957), which is also called backward induction, the $\pi$ function is not used; instead, the value of $\pi(s)$ is calculated within $V(s)$ whenever it is needed. Lloyd Shapley's 1953 paper on stochastic games included as a special case the value iteration method for MDPs, but this was recognized only later on.[1]

Substituting the calculation of $\pi(s)$ into the calculation of $V(s)$ gives the combined step:

$$V_{i+1}(s) := \max_{a} \left\{ \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V_i(s') \right) \right\}$$

where $i$ is the iteration number. Value iteration starts at $i = 0$ and $V_0$ as a guess of the value function. It then iterates, repeatedly computing $V_{i+1}$ for all states $s$, until $V$ converges with the left-hand side equal to the right-hand side (which is the "Bellman equation" for this problem).
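A compact value-iteration sketch, under the same assumed nested-dictionary layout for `P` and `R` and with an illustrative convergence tolerance, could look like this:

```python
# Value iteration sketch:
# V_{i+1}(s) = max_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma * V_i(s')).
def value_iteration(states, actions, P, R, gamma, tol=1e-8):
    V = {s: 0.0 for s in states}  # initial guess V_0
    while True:
        V_new = {
            s: max(sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in states)
                   for a in actions)
            for s in states
        }
        delta = max(abs(V_new[s] - V[s]) for s in states)
        V = V_new
        if delta < tol:  # approximate convergence to the Bellman equation
            break
    # Recover a greedy policy from the converged values.
    pi = {
        s: max(actions,
               key=lambda a: sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2])
                                 for s2 in states))
        for s in states
    }
    return V, pi
```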
Policy iteration
In policy iteration (Howard 1960), step one is performed once, and then step two is repeated until it converges. Then step one is again performed once and so on.
Instead of repeating step two to convergence, it may be formulated and solved as a set of linear equations.
This variant has the advantage that there is a definite stopping condition: when the array $\pi$ does not change in the course of applying step 1 to all states, the algorithm is completed.
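A sketch of policy iteration follows; here policy evaluation is done by solving the linear system $(I - \gamma P^{\pi})V = R^{\pi}$ with NumPy rather than by iterating step two, and the array shapes and names are assumptions made for this illustration:

```python
import numpy as np

# Policy iteration sketch with exact policy evaluation via a linear solve.
# P and R are assumed arrays of shape (n_actions, n_states, n_states).
def policy_iteration(P, R, gamma):
    n_actions, n_states, _ = P.shape
    pi = np.zeros(n_states, dtype=int)  # arbitrary initial policy
    while True:
        # Step 2 (policy evaluation), formulated as linear equations:
        # V = R_pi + gamma * P_pi V  =>  (I - gamma * P_pi) V = R_pi
        P_pi = P[pi, np.arange(n_states), :]                    # (n_states, n_states)
        R_pi = np.sum(P_pi * R[pi, np.arange(n_states), :], axis=1)
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Step 1 (policy improvement): greedy with respect to V.
        Q = np.sum(P * (R + gamma * V[None, None, :]), axis=2)  # (n_actions, n_states)
        pi_new = np.argmax(Q, axis=0)
        if np.array_equal(pi_new, pi):   # definite stopping condition
            return V, pi
        pi = pi_new
```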
Modified policy iteration
In modified policy iteration (van Nunen, 1976; Puterman and Shin 1978), step one is performed once, and then step two is repeated several times. Then step one is again performed once and so on.
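In code, the only change from policy iteration is replacing the exact evaluation with a fixed number of backup sweeps, roughly as in this hypothetical sketch (same assumed nested-dictionary layout as above):

```python
# Modified policy iteration: approximate policy evaluation by k backup sweeps
# of step two instead of solving the linear equations exactly.
def evaluate_policy_k_sweeps(V, pi, states, P, R, gamma, k=5):
    for _ in range(k):  # step two, repeated only a few times
        V = {s: sum(P[pi[s]][s][s2] * (R[pi[s]][s][s2] + gamma * V[s2])
                    for s2 in states)
             for s in states}
    return V
```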
Prioritized sweeping
In this variant, the steps are preferentially applied to states which are in some way important, whether based on the algorithm (there were large changes in $V$ or $\pi$ around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
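One common way to realize this preference, sketched below under the same assumed table layout, is to keep states in a priority queue keyed by the size of their most recent Bellman error and always update the highest-priority state next; the specific bookkeeping here is illustrative rather than prescribed by the text:

```python
import heapq

# Prioritized sweeping sketch: repeatedly update the state whose value estimate
# currently looks most "wrong", then re-prioritize its predecessors.
def prioritized_sweeping(states, actions, P, R, gamma, n_updates=1000, theta=1e-6):
    V = {s: 0.0 for s in states}

    def backup(s):  # combined value-iteration step for one state
        return max(sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in states)
                   for a in actions)

    # Predecessors: states that can reach s2 under some action with P > 0.
    pred = {s2: {s for s in states for a in actions if P[a][s][s2] > 0}
            for s2 in states}

    # Initialize priorities with each state's Bellman error (max-heap via negated keys).
    heap = [(-abs(backup(s) - V[s]), s) for s in states]
    heapq.heapify(heap)

    for _ in range(n_updates):
        if not heap:
            break
        neg_priority, s = heapq.heappop(heap)
        if -neg_priority < theta:
            break
        V[s] = backup(s)
        # A change at s may affect the values of its predecessors.
        for p in pred[s]:
            err = abs(backup(p) - V[p])
            if err > theta:
                heapq.heappush(heap, (-err, p))
    return V
```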
Extensions and generalizations
A Markov decision process is a stochastic game with only one player.
Partial observability
The solution above assumes that the state $s$ is known when action is to be taken; otherwise $\pi(s)$ cannot be calculated. When this assumption is not true, the problem is called a partially observable Markov decision process or POMDP.
A major advance in this area was provided by Burnetas and Katehakis in "Optimal adaptive policies for Markov decision processes".[2] In this work, a class of adaptive policies that possess uniformly maximum convergence rate properties for the total expected finite-horizon reward was constructed under the assumptions of finite state-action spaces and irreducibility of the transition law. These policies prescribe that the choice of actions, at each state and time period, should be based on indices that are inflations of the right-hand side of the estimated average reward optimality equations.
Reinforcement learning
If the probabilities or rewards are unknown, the problem is one of reinforcement learning (Sutton and Barto, 1998).
For this purpose it is useful to define a further function, which corresponds to taking the action $a$ and then continuing optimally (or according to whatever policy one currently has):

$$Q(s,a) = \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V(s') \right)$$

While this function is also unknown, experience during learning is based on $(s,a)$ pairs (together with the outcome $s'$); that is, "I was in state $s$ and I tried doing $a$ and $s'$ happened". Thus, one has an array $Q$ and uses experience to update it directly. This is known as Q-learning.
Reinforcement learning can solve Markov decision processes without explicit specification of the transition probabilities; the values of the transition probabilities are needed in value and policy iteration. In reinforcement learning, instead of explicit specification of the transition probabilities, the transition probabilities are accessed through a simulator that is typically restarted many times from a uniformly random initial state. Reinforcement learning can also be combined with function approximation to address problems with a very large number of states.
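The sketch below shows a tabular Q-learning update driven by such a simulator. The helper names `env_reset` and `env_step` stand in for whatever simulator is available, and the epsilon-greedy exploration and learning-rate choices are illustrative assumptions:

```python
import random

# Tabular Q-learning sketch: learn Q(s, a) from simulated experience only,
# without access to the transition probabilities P.
def q_learning(states, actions, env_reset, env_step, gamma,
               episodes=1000, alpha=0.1, epsilon=0.1):
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = env_reset()                  # e.g. a uniformly random initial state
        done = False
        while not done:
            # Epsilon-greedy action choice: mostly exploit, sometimes explore.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2, r, done = env_step(s, a)  # "I was in s, tried a, and s2 happened"
            target = r + (0.0 if done else gamma * max(Q[(s2, act)] for act in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    # Greedy policy extracted from the learned Q values.
    pi = {s: max(actions, key=lambda act: Q[(s, act)]) for s in states}
    return Q, pi
```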
Category theoretic interpretation
Other than the rewards, a Markov decision process $(S, A, P)$ can be understood in terms of category theory. Namely, let $\mathcal{A}$ denote the free monoid with generating set $A$. Let Dist denote the Kleisli category of the Giry monad. Then a functor $\mathcal{A} \to \mathbf{Dist}$ encodes both the set $S$ of states and the probability function $P$.

In this way, Markov decision processes could be generalized from monoids (categories with one object) to arbitrary categories. One can call the result $(\mathcal{C}, F : \mathcal{C} \to \mathbf{Dist})$ a context-dependent Markov decision process, because moving from one object to another in $\mathcal{C}$ changes the set of available actions and the set of possible states.
Continuous-time Markov Decision Process
In discrete-time Markov decision processes, decisions are made at discrete time intervals. In continuous-time Markov decision processes, however, decisions can be made at any time the decision maker chooses. In comparison to discrete-time Markov decision processes, continuous-time Markov decision processes can better model the decision-making process for a system that has continuous dynamics, i.e., system dynamics defined by ordinary differential equations (ODEs).
Definition
In order to discuss the continuous-time Markov decision process, we introduce two sets of notation:

If the state space and action space are finite,

- $\mathcal{S}$: state space;
- $\mathcal{A}$: action space;
- $q(i \mid j, a)$: transition rate function;
- $R(i, a)$: a reward function.

If the state space and action space are continuous,

- $\mathcal{X}$: state space;
- $\mathcal{U}$: space of possible controls;
- $f(x, u)$: a transition rate function;
- $r(x, u)$: a reward rate function such that $r(x(t), u(t)) \, dt = dR(x(t), u(t))$, where $R$ is the reward function we discussed in the previous case.
Problem
Like discrete-time Markov decision processes, in continuous-time Markov decision processes we want to find the optimal policy or control which could give us the optimal expected integrated reward:

$$\max_u \; \mathbb{E}_u\!\left[ \int_0^{\infty} \gamma^t r(x(t), u(t)) \, dt \;\Big|\; x_0 \right]$$

where $0 \le \gamma < 1$.
Linear programming formulation
If the state space and action space are finite, we could use linear programming to find the optimal policy, which was one of the earliest approaches applied. Here we only consider the ergodic model, which means our continuous-time MDP becomes an ergodic continuous-time Markov chain under a stationary policy. Under this assumption, although the decision maker can make a decision at any time in the current state, there is no benefit in taking more than one action. It is better to take an action only at the time when the system transitions from the current state to another state. Under some conditions (for details, see Corollary 3.14 of Continuous-Time Markov Decision Processes), if our optimal value function $\bar{V}^*$ is independent of state $i$, we will have the following inequality:

$$g \ge R(i,a) + \sum_{j \in S} q(j \mid i, a) \, h(j) \qquad \text{for all } i \in S \text{ and } a \in A(i)$$

If there exists a function $h$, then $\bar{V}^*$ will be the smallest $g$ satisfying the above equation. In order to find $\bar{V}^*$, we could use the following linear programming model:
- Primal linear program (P-LP)
- Dual linear program (D-LP)
$y(i,a)$ is a feasible solution to the D-LP if $y(i,a)$ is nonnegative and satisfies the constraints in the D-LP problem. A feasible solution $y^*(i,a)$ to the D-LP is said to be an optimal solution if

$$\sum_{i \in S} \sum_{a \in A(i)} R(i,a) \, y^*(i,a) \;\ge\; \sum_{i \in S} \sum_{a \in A(i)} R(i,a) \, y(i,a)$$

for all feasible solutions $y(i,a)$ to the D-LP.
Once we have found the optimal solution $y^*(i,a)$, we can use it to establish the optimal policies.
Hamilton-Jacobi-Bellman equation
In continuous-time MDPs, if the state space and action space are continuous, the optimal criterion can be found by solving the Hamilton-Jacobi-Bellman (HJB) partial differential equation. In order to discuss the HJB equation, we need to reformulate our problem:

$$\begin{align} V(x(0),0) = {} & \max_u \int_0^T r(x(t),u(t)) \, dt + D[x(T)] \\ \text{s.t.} \quad & \frac{dx(t)}{dt} = f[t, x(t), u(t)] \end{align}$$

Here $D(\cdot)$ is the terminal reward function, $x(t)$ is the system state vector, $u(t)$ is the system control vector we try to find, and $f(\cdot)$ shows how the state vector changes over time. The Hamilton-Jacobi-Bellman equation is as follows:

$$0 = \frac{\partial V(x,t)}{\partial t} + \max_u \left\{ r(x,u) + \frac{\partial V(x,t)}{\partial x} f(x,u) \right\}$$

We could solve the equation to find the optimal control $u(t)$, which could give us the optimal value $V^*$.
Application
Continuous-time Markov decision processes have applications in queueing systems, epidemic processes, and population processes.
Alternative notations
The terminology and notation for MDPs are not entirely settled. There are two main streams: one focuses on maximization problems from contexts like economics, using the terms action, reward, and value, and calling the discount factor $\beta$ or $\gamma$, while the other focuses on minimization problems from engineering and navigation, using the terms control, cost, and cost-to-go, and calling the discount factor $\alpha$. In addition, the notation for the transition probability varies.
| in this article | alternative | comment |
|---|---|---|
| action $a$ | control $u$ | |
| reward $R$ | cost $g$ | $g$ is the negative of $R$ |
| value $V$ | cost-to-go $J$ | $J$ is the negative of $V$ |
| policy $\pi$ | policy $\mu$ | |
| discounting factor $\gamma$ | discounting factor $\alpha$ | |
| transition probability $P_a(s,s')$ | transition probability $p_{ss'}(a)$ | |
In addition, transition probability is sometimes written $\Pr(s,a,s')$, $\Pr(s' \mid s,a)$ or, rarely, $p_{s's}(a)$.
Constrained Markov Decision Processes
Constrained Markov decision processes (CMDPs) are extensions to Markov decision processes (MDPs). There are three fundamental differences between MDPs and CMDPs.[3]
- There are multiple costs incurred after applying an action instead of one.
- CMDPs are solved with linear programs only; dynamic programming does not work.
- The final policy depends on the starting state.
There are a number of applications for CMDPs. Recently they have been used in motion planning scenarios in robotics.[4]
See also
- Probabilistic automata
- Quantum finite automata
- Partially observable Markov decision process
- Dynamic programming
- Bellman equation for applications to economics.
- Hamilton–Jacobi–Bellman equation
- Optimal control
- Recursive economics
- Mabinogion sheep problem
- Stochastic games
- Q-learning
Notes
- ↑ Lodewijk Kallenberg, Finite state and action MDPs, in Eugene A. Feinberg, Adam Shwartz (eds.) Handbook of Markov decision processes: methods and applications, Springer, 2002, ISBN 0-7923-7459-2
- ↑ Burnetas, A. N.; Katehakis, M. N. (1997). "Optimal Adaptive Policies for Markov Decision Processes". Mathematics of Operations Research 22: 222. doi:10.1287/moor.22.1.222.
- ↑ Altman, Eitan. Constrained Markov decision processes. Vol. 7. CRC Press, 1999.
- ↑ Feyzabadi, S.; Carpin, S., "Risk-aware path planning using hierarchical constrained Markov Decision Processes," Automation Science and Engineering (CASE), 2014 IEEE International Conference on , vol., no., pp.297,303, 18-22 Aug. 2014
References
- R. Bellman. A Markovian Decision Process. Journal of Mathematics and Mechanics 6, 1957.
- R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957. Dover paperback edition (2003), ISBN 0-486-42809-5.
- Ronald A. Howard Dynamic Programming and Markov Processes, The M.I.T. Press, 1960.
- D. Bertsekas. Dynamic Programming and Optimal Control. Volume 2, Athena, MA, 1995.
- Burnetas, A. N. and Katehakis, M. N. "Optimal Adaptive Policies for Markov Decision Processes", Mathematics of Operations Research, 22(1), 1997.
- E.A. Feinberg and A. Shwartz (eds.) Handbook of Markov Decision Processes, Kluwer, Boston, MA, 2002.
- C. Derman. Finite state Markovian decision processes, Academic Press, 1970.
- M. L. Puterman. Markov Decision Processes. Wiley, 1994.
- H.C. Tijms. A First Course in Stochastic Models. Wiley, 2003.
- Sutton, R. S. and Barto A. G. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA, 1998.
- J.A. E. E van Nunen. A set of successive approximation methods for discounted Markovian decision problems. Z. Operations Research, 20:203-208, 1976.
- S. P. Meyn, 2007. Control Techniques for Complex Networks, Cambridge University Press, 2007. ISBN 978-0-521-88441-9. Appendix contains abridged Meyn & Tweedie.
- S. M. Ross. 1983. Introduction to stochastic dynamic programming. Academic press
- X. Guo and O. Hernández-Lerma. Continuous-Time Markov Decision Processes, Springer, 2009.
- M. L. Puterman and Shin M. C. Modified Policy Iteration Algorithms for Discounted Markov Decision Problems, Management Science 24, 1978.
External links
- MDP Toolbox for MATLAB, GNU Octave, Scilab and R The Markov Decision Processes (MDP) Toolbox.
- MDP Toolbox for Matlab - An excellent tutorial and Matlab toolbox for working with MDPs.
- MDP Toolbox for Python A package for solving MDPs
- Reinforcement Learning An Introduction by Richard S. Sutton and Andrew G. Barto
- SPUDD A structured MDP solver for download by Jesse Hoey
- Learning to Solve Markovian Decision Processes by Satinder P. Singh
- Optimal Adaptive Policies for Markov Decision Processes by Burnetas and Katehakis (1997).