Markov renewal process

In probability and statistics, a Markov renewal process (MRP) is a random process that generalizes the notion of Markov jump processes. Other random processes, such as Markov chains, Poisson processes, and renewal processes, can be derived as special cases of an MRP.

Definition

Consider a state space \mathrm{S}. Consider a sequence of random variables (X_n,T_n), where T_n are the jump times and X_n are the associated states in the Markov chain. Let the inter-arrival times be \tau_n=T_n-T_{n-1}. Then the sequence (X_n,T_n) is called a Markov renewal process if

\Pr(\tau_{n+1}\le t, X_{n+1}=j|(X_0, T_0), (X_1, T_1),\ldots, (X_n=i, T_n))
 =\Pr(\tau_{n+1}\le t, X_{n+1}=j|X_n=i)\, \forall n \ge1,t\ge0, i,j \in \mathrm{S}
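
As a concrete illustration (not part of the definition), the following Python sketch simulates a Markov renewal process on a small hypothetical state space. The transition matrix, the gamma-distributed holding times and all parameter values are assumptions chosen for the example; the definition only requires that the joint law of (\tau_{n+1}, X_{n+1}) depends on the past solely through X_n.

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Hypothetical example: three states with an assumed transition matrix P and
    # state-dependent holding-time distributions (gamma here, purely for illustration).
    states = [0, 1, 2]
    P = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.4, 0.6, 0.0]])
    gamma_shape = [2.0, 1.5, 3.0]   # assumed shape parameter for each current state
    gamma_scale = [1.0, 0.5, 2.0]   # assumed scale parameter for each current state

    def simulate_mrp(n_jumps, x0=0):
        """Return the sequences X_n (states) and T_n (jump times) of an MRP."""
        X, T = [x0], [0.0]
        for _ in range(n_jumps):
            i = X[-1]
            j = rng.choice(states, p=P[i])                    # next state depends only on X_n = i
            tau = rng.gamma(gamma_shape[i], gamma_scale[i])   # inter-arrival time tau_{n+1}
            X.append(j)
            T.append(T[-1] + tau)
        return X, T

    X, T = simulate_mrp(10)
    for x, t in zip(X, T):
        print(f"state {x} entered at time {t:.3f}")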

Relation to other stochastic processes

  1. If we define a new stochastic process Y_t:=X_n for t \in [T_n,T_{n+1}), then the process Y_t is called a semi-Markov process (see the first sketch after this list). The main difference between an MRP and a semi-Markov process is that the former is defined as a two-tuple of states and times, whereas the latter is the actual random process that evolves over time, so any realisation of the process has a well-defined state at any given time. The process as a whole is not Markovian, i.e., memoryless, as it would be in a CTMC; it is Markovian only at the specified jump instants. This is the rationale behind the name semi-Markov.[1][2][3] (See also: hidden semi-Markov model.)
  2. A semi-Markov process (defined in the previous item) in which all the holding times are exponentially distributed is called a continuous-time Markov chain (CTMC). In other words, if the inter-arrival times are exponentially distributed and the waiting time in a state is independent of the next state reached, we have a CTMC; see the second sketch after this list.
    \Pr(\tau_{n+1}\le t, X_{n+1}=j|(X_0, T_0), (X_1, T_1),\ldots, (X_n=i, T_n))=\Pr(\tau_{n+1}\le t, X_{n+1}=j|X_n=i)
    =\Pr(X_{n+1}=j|X_n=i)(1-e^{-\lambda_i t}), \text{ for all } n \ge1,t\ge0, i,j \in \mathrm{S}
  3. The sequence X_n in the MRP is a discrete-time Markov chain. In other words, if the time variables are ignored in the MRP equation, we end up with a DTMC.
    \Pr(X_{n+1}=j|X_0, X_1, \ldots, X_n=i)=\Pr(X_{n+1}=j|X_n=i)\, \forall n \ge1, i,j \in \mathrm{S}
  4. If the inter-arrival times \tau_n are independent and identically distributed, and if their distribution does not depend on the state X_n, then the process is a renewal process. So, if the states are ignored and we are left with a chain of i.i.d. times, we have a renewal process (also illustrated in the second sketch after this list).
    \Pr(\tau_{n+1}\le t|T_0, T_1, \ldots, T_n)=\Pr(\tau_{n+1}\le t)\, \forall n \ge1, \forall t\ge0
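
To make item 1 concrete, the sketch below reuses the hypothetical simulate_mrp defined earlier and constructs the semi-Markov path Y_t from one MRP realisation: the state occupied at an arbitrary time t is the state entered at the most recent jump at or before t.

    import bisect

    X, T = simulate_mrp(10)

    def semi_markov_state(t):
        """Semi-Markov path: Y_t = X_n for t in [T_n, T_{n+1})."""
        n = bisect.bisect_right(T, t) - 1
        return X[n]

    print(semi_markov_state(3.7))   # state occupied at time t = 3.7
                                    # (assuming t lies before the last simulated jump)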
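
Items 2 to 4 can be illustrated in the same hypothetical setting, reusing states, P, rng and np from the first sketch: replacing the gamma holding times with state-dependent exponential holding times yields a continuous-time Markov chain, the state sequence X_n on its own is a discrete-time Markov chain, and dropping the states while making the holding times i.i.d. would leave a renewal process. The rates below are again assumptions for the example.

    lam = [1.0, 2.0, 0.5]   # assumed exponential rate lambda_i for each state i

    def simulate_ctmc(n_jumps, x0=0):
        """Same construction as simulate_mrp, but with memoryless holding times (item 2)."""
        X, T = [x0], [0.0]
        for _ in range(n_jumps):
            i = X[-1]
            j = rng.choice(states, p=P[i])        # embedded DTMC step Pr(X_{n+1}=j | X_n=i)
            tau = rng.exponential(1.0 / lam[i])   # exponential holding time in state i
            X.append(j)
            T.append(T[-1] + tau)
        return X, T

    X, T = simulate_ctmc(10)
    dtmc_sample = X                    # item 3: the embedded discrete-time Markov chain
    inter_arrival_times = np.diff(T)   # item 4: with i.i.d. holding times independent of the
                                       # state, the jump times T_n alone form a renewal process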

References and Further Reading

  1. Medhi, J. (1982). Stochastic Processes. New York: Wiley & Sons. ISBN 978-0-470-27000-4.
  2. Ross, Sheldon M. (1999). Stochastic Processes (2nd ed.). New York: Wiley. ISBN 978-0-471-12062-9.
  3. Barbu, Vlad Stefan; Limnios, Nikolaos (2008). Semi-Markov Chains and Hidden Semi-Markov Models Toward Applications: Their Use in Reliability and DNA Analysis. New York: Springer. ISBN 978-0-387-73171-1.