Renewal theory

Renewal theory is the branch of probability theory that generalizes Poisson processes for arbitrary holding times. Applications include calculating the best strategy for replacing worn-out machinery in a factory and comparing the long-term benefits of different insurance policies.

Renewal processes

Introduction

A renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the non-negative integers (usually starting at zero) which has independent, identically distributed holding times at each integer i (exponentially distributed) before advancing, with probability 1, to the next integer, i + 1. In the same informal spirit, we may define a renewal process to be the same thing, except that the holding times take on a more general distribution. (Note, however, that the independence and identical distribution (IID) property of the holding times is retained.)

Formal definition

Sample evolution of a renewal process with holding times S_i and jump times J_n.

Let S_1, S_2, S_3, \ldots be a sequence of positive, independent, identically distributed random variables such that

 0 < \mathbb{E}[S_i] < \infty.

We refer to the random variable S_i as the "ith" holding time. \mathbb{E}[S_i] is the expectation of S_i.

Define for each n > 0 :

 J_n = \sum_{i=1}^n S_i,

each J_n referred to as the "nth" jump time and the intervals

[J_n,J_{n+1}]

being called renewal intervals.

Then the stochastic process (X_t)_{t\geq0} given by

 X_t = \sum^{\infty}_{n=1} \mathbb{I}_{\{J_n \leq t\}}=\sup \left\{\, n: J_n \leq t\, \right\}

(where \mathbb{I} is the indicator function) represents the number of jumps that have occurred by time t, and is called a renewal process.
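To make the definition concrete, here is a minimal simulation sketch (in Python with NumPy; the gamma holding-time distribution and the helper name renewal_count are illustrative assumptions, not part of the theory) that evaluates X_t on one sample path by accumulating holding times until they exceed t:

    import numpy as np

    rng = np.random.default_rng(seed=42)

    def renewal_count(t, draw_holding_time, rng):
        """Return X_t: the number of jump times J_n = S_1 + ... + S_n with J_n <= t."""
        total, count = 0.0, 0
        while True:
            total += draw_holding_time(rng)  # draw the next holding time S_i
            if total > t:
                return count
            count += 1

    # Gamma-distributed holding times: a non-exponential choice, so this is a
    # genuine renewal process rather than a Poisson process.  Here E[S_i] = 1.
    gamma_holding = lambda rng: rng.gamma(shape=2.0, scale=0.5)

    print(renewal_count(10.0, gamma_holding, rng))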

Interpretation

If one considers events occurring at random times, one may choose to think of the holding times \{ S_i : i \geq 1 \} as the random times elapsed between successive events. For example, if the renewal process models the successive breakdowns of a machine, then the holding times represent the times between one breakdown and the next.

Renewal-reward processes

Sample evolution of a renewal-reward process with holding times S_i, jump times J_n and rewards W_i.

Let W_1, W_2, \ldots be a sequence of IID random variables (rewards) satisfying

\mathbb{E}|W_i| < \infty.\,

Then the random variable

Y_t = \sum_{i=1}^{X_t}W_i

is called a renewal-reward process. Note that unlike the S_i, each W_i may take negative values as well as positive values.

The random variable Y_t depends on two sequences: the holding times S_1, S_2, \ldots and the rewards W_1, W_2, \ldots. These two sequences need not be independent. In particular, W_i may be a function of S_i.
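As a sketch of how such a process might be simulated (Python with NumPy; the choices S_i ~ U[0, 2] and W_i = -100 S_i are purely illustrative, the latter chosen to show rewards depending on the holding times):

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def renewal_reward(t, rng):
        """Return one sample of (X_t, Y_t) with S_i ~ U[0, 2] and W_i = -100 * S_i."""
        elapsed, count, reward = 0.0, 0, 0.0
        while True:
            s = rng.uniform(0.0, 2.0)    # holding time S_i
            if elapsed + s > t:
                return count, reward
            elapsed += s
            count += 1
            reward += -100.0 * s         # reward W_i (negative: a repair cost)

    print(renewal_reward(50.0, rng))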

Interpretation

In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" W_1,W_2,\ldots (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions.

An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as S_i. Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" W_i are the successive (random) financial losses/gains resulting from successive eggs (i = 1,2,3,...) and Y_t records the total financial "reward" at time t.

Properties of renewal processes and renewal-reward processes

We define the renewal function as the expected value of the number of jumps observed up to some time t:

m(t) = \mathbb{E}[X_t].\,

The elementary renewal theorem

The renewal function satisfies

\lim_{t \to \infty} \frac{1}{t}m(t) = 1/\mathbb{E}[S_1].

Proof

The strong law of large numbers for renewal processes, proved below, tells us that

\lim_{t \to \infty} \frac {X_t}{t} = \frac{1}{\mathbb{E}[S_1]}.

To prove the elementary renewal theorem, it is sufficient to show that \left\{\frac{X_t}{t}; t \geq 0\right\} is uniformly integrable.

To do this, consider the truncated renewal process whose holding times are defined by \overline{S_n} = a \mathbb{I}\{S_n > a\}, where a is a point such that 0 < F(a) = p < 1; such an a exists for every non-deterministic renewal process. This new renewal process \overline{X_t} is an upper bound on X_t and its renewals can only occur on the lattice \{na; n \in \mathbb{N} \}. Furthermore, the number of renewals at each lattice point is geometric with parameter p. So we have


\begin{align}
\overline{X_t} &\leq \sum_{i=1}^{\lceil t/a \rceil} \mathrm{Geometric}(p) \\
\mathbb{E}\left[\,\overline{X_t}^2\,\right] &\leq C_1 t + C_2 t^2 \\
\mathbb{P}\left(\frac{X_t}{t} > x\right) &\leq \frac{\mathbb{E}\left[X_t^2\right]}{t^2x^2} \leq \frac{\mathbb{E}\left[\,\overline{X_t}^2\,\right]}{t^2x^2} \leq \frac{C}{x^2},
\end{align}

which gives the required uniform integrability.
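The limit can also be checked empirically. Here is a minimal Monte Carlo sketch (Python with NumPy; holding times uniform on [0, 2] are an assumed example, for which 1/\mathbb{E}[S_1] = 1):

    import numpy as np

    rng = np.random.default_rng(seed=1)

    t, n_paths = 200.0, 2000
    counts = []
    for _ in range(n_paths):
        # generate comfortably more holding times than can fit in [0, t]
        jumps = np.cumsum(rng.uniform(0.0, 2.0, size=int(2 * t) + 50))
        counts.append(np.searchsorted(jumps, t, side="right"))  # X_t
    print(np.mean(counts) / t)  # estimate of m(t)/t; close to 1.0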

The elementary renewal theorem for renewal-reward processes

We define the reward function:

g(t) = \mathbb{E}[Y_t].\,

The reward function satisfies

\lim_{t \to \infty} \frac{1}{t}g(t) = \frac{\mathbb{E}[W_1]}{\mathbb{E}[S_1]}.
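For instance, if the underlying process is a Poisson process with rate \lambda, then \mathbb{E}[S_1] = 1/\lambda and the theorem reduces to

 \lim_{t \to \infty} \frac{1}{t}g(t) = \lambda\, \mathbb{E}[W_1],

that is, rewards accrue in the long run at the event rate times the mean reward per event.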

The renewal equation

The renewal function satisfies

m(t) = F_S(t) + \int_0^t m(t-s) f_S(s)\, ds

where F_S is the cumulative distribution function of S_1 and f_S is the corresponding probability density function.

Proof of the renewal equation

We condition on the first holding time and use the tower property of expectation:
m(t) = \mathbb{E}[X_t] = \mathbb{E}[\mathbb{E}(X_t \mid S_1)]. \,
But by the Markov property
\mathbb{E}(X_t \mid S_1=s) = \mathbb{I}_{\{t \geq s\}} \left( 1 + \mathbb{E}[X_{t-s}]  \right). \,
So

\begin{align}
m(t) & {} = \mathbb{E}[X_t] \\[12pt]
& {} = \mathbb{E}[\mathbb{E}(X_t \mid S_1)] \\[12pt]
& {} =  \int_0^\infty \mathbb{E}(X_t \mid S_1=s) f_S(s)\, ds \\[12pt]
& {} = \int_0^\infty \mathbb{I}_{\{t \geq s\}} \left( 1 + \mathbb{E}[X_{t-s}] \right) f_S(s)\, ds \\[12pt]
& {} = \int_0^t \left( 1 + m(t-s) \right) f_S(s)\, ds \\[12pt]
& {} =  F_S(t) + \int_0^t  m(t-s) f_S(s)\, ds,
\end{align}
as required.
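The renewal equation can also be solved numerically by discretizing the convolution integral. The sketch below (Python with NumPy; exponential holding times with rate \lambda are an assumed test case, chosen because the exact answer m(t) = \lambda t for the Poisson process is known) uses a left Riemann sum on a uniform grid:

    import numpy as np

    lam, h, n = 2.0, 0.001, 5000        # rate, grid step, number of steps
    ts = h * np.arange(n + 1)
    F = 1.0 - np.exp(-lam * ts)         # F_S, the CDF of the holding times
    f = lam * np.exp(-lam * ts)         # f_S, the density
    m = np.zeros(n + 1)
    for k in range(1, n + 1):
        # m(t_k) = F(t_k) + h * sum_{j=1}^{k} m(t_k - t_j) f(t_j)
        m[k] = F[k] + h * np.dot(m[k - 1::-1], f[1:k + 1])

    print(m[-1], lam * ts[-1])          # numerical vs exact: both close to 10.0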

Asymptotic properties

(X_t)_{t\geq0} and (Y_t)_{t\geq0} satisfy

 \lim_{t \to \infty} \frac{1}{t} X_t = \frac{1}{\mathbb{E}S_1} (strong law of large numbers for renewal processes)
 \lim_{t \to \infty} \frac{1}{t} Y_t = \frac{1}{\mathbb{E}S_1} \mathbb{E}W_1 (strong law of large numbers for renewal-reward processes)

almost surely.

Proof

First consider (X_t)_{t\geq0}. By definition we have:
J_{X_t} \leq t \leq J_{X_t+1}
for all t \geq 0 and so

\frac{J_{X_t}}{X_t} \leq \frac{t}{X_t} \leq \frac{J_{X_t+1}}{X_t}
for all t > 0 (whenever X_t > 0).
Now since 0< \mathbb{E}S_i < \infty we have:
X_t \to \infty
as t \to \infty almost surely (with probability 1). Hence, writing n = X_t (so that n \to \infty almost surely):
\frac{J_{X_t}}{X_t} = \frac{J_n}{n} = \frac{1}{n}\sum_{i=1}^n S_i \to \mathbb{E}S_1
almost surely (using the strong law of large numbers); similarly:
\frac{J_{X_t+1}}{X_t} = \frac{J_{X_t+1}}{X_t+1}\frac{X_t+1}{X_t} = \frac{J_{n+1}}{n+1}\frac{n+1}{n}  \to \mathbb{E}S_1\cdot 1
almost surely.
Thus (since t/X_t is sandwiched between the two terms)

\frac{1}{t} X_t \to \frac{1}{\mathbb{E}S_1}
almost surely.
Next consider (Y_t)_{t\geq0}. We have
\frac{1}{t}Y_t = \frac{X_t}{t} \frac{1}{X_t} Y_t \to \frac{1}{\mathbb{E}S_1}\cdot\mathbb{E}W_1
almost surely (using the first result and applying the strong law of large numbers to \frac{1}{X_t}\sum_{i=1}^{X_t} W_i).

The inspection paradox

A curious feature of renewal processes is that if we wait some predetermined time t and then observe how large the renewal interval containing t is, we should expect it to be typically larger than a renewal interval of average size.

Mathematically the inspection paradox states: for any t > 0 the renewal interval containing t is stochastically larger than the first renewal interval. That is, for all x > 0 and for all t > 0:

 \mathbb{P}(S_{X_t+1} > x) \geq \mathbb{P}(S_1>x) = 1-F_S(x)

where F_S is the cumulative distribution function of the IID holding times S_i.

Proof of the inspection paradox

The renewal interval determined by the random point t (shown in red) is stochastically larger than the first renewal interval.

Observe that the last jump-time before t is J_{X_t}; and that the renewal interval containing t is S_{X_t+1}. Then


\begin{align}
\mathbb{P}(S_{X_t+1}>x) & {} = \int_0^\infty \mathbb{P}(S_{X_t+1}>x \mid J_{X_t} = s) f_S(s) \, ds \\[12pt]
& {} = \int_0^\infty \mathbb{P}(S_{X_t+1}>x | S_{X_t+1}>t-s) f_S(s)\, ds \\[12pt]
& {} =  \int_0^\infty \frac{\mathbb{P}(S_{X_t+1}>x \, , \, S_{X_t+1}>t-s)}{\mathbb{P}(S_{X_t+1}>t-s)} f_S(s) \, ds \\[12pt]
& {} = \int_0^\infty \frac{ 1-F(\max \{ x,t-s \})  }{1-F(t-s)} f_S(s) \, ds \\[12pt]
& {} = \int_0^\infty \min \left\{\frac{ 1-F(x)  }{1-F(t-s)},\frac{ 1-F(t-s)  }{1-F(t-s)}\right\} f_S(s) \, ds \\[12pt]
& {} = \int_0^\infty \min \left\{\frac{ 1-F(x)  }{1-F(t-s)},1\right\} f_S(s) \, ds \\[12pt]
& {} \geq 1-F(x) \\[12pt]
& {} = \mathbb{P}(S_1>x)
\end{align}

as required.
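A quick simulation also illustrates the paradox (Python with NumPy; holding times uniform on [0, 2] are an assumed example, for which the long-run mean length of the interval covering t is \mathbb{E}[S_1^2]/\mathbb{E}[S_1] = 4/3, against \mathbb{E}[S_1] = 1):

    import numpy as np

    rng = np.random.default_rng(seed=2)

    t, n_paths = 100.0, 20000
    covering = []
    for _ in range(n_paths):
        jumps = np.cumsum(rng.uniform(0.0, 2.0, size=int(2 * t) + 50))
        i = np.searchsorted(jumps, t, side="right")  # i = X_t
        left = jumps[i - 1] if i > 0 else 0.0        # J_{X_t}
        covering.append(jumps[i] - left)             # S_{X_t + 1}
    print(np.mean(covering))  # about 1.33, noticeably larger than E[S_1] = 1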

Superposition

The superposition of independent renewal processes is not generally a renewal process, but it can be described within a larger class of processes called the Markov-renewal processes.[1] However, the cumulative distribution function of the first inter-event time in the superposition process is given by[2]

R(t) = 1 - \sum_{k=1}^K \frac{\alpha_k}{\sum_{l=1}^K \alpha_l} (1-R_k(t)) \prod_{j=1,j\neq k}^{K} \alpha_j \int_t^\infty (1-R_j(u))\text{d}u

where R_k(t) and \alpha_k > 0 are, respectively, the CDF of the inter-event times and the arrival rate of process k.[3]
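As a consistency check, suppose each component process is Poisson, so that R_k(t) = 1 - e^{-\alpha_k t}. Then \int_t^\infty (1-R_j(u))\,\text{d}u = e^{-\alpha_j t}/\alpha_j, each product term reduces to e^{-t\sum_{j\neq k}\alpha_j}, and

R(t) = 1 - \sum_{k=1}^K \frac{\alpha_k}{\sum_{l=1}^K \alpha_l}\, e^{-\alpha_k t}\, e^{-t\sum_{j\neq k}\alpha_j} = 1 - e^{-t\sum_{l=1}^K \alpha_l},

recovering the familiar fact that the superposition of independent Poisson processes is again Poisson, with rate \sum_{l} \alpha_l.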

Example applications

Example 1: use of the strong law of large numbers

Eric the entrepreneur has n machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails with replacement cost €2600; alternatively he may replace a machine at any time while it is still functional at a cost of €200.

What is his optimal replacement policy?

Solution

The lifetime of the n machines can be modeled as n independent concurrent renewal-reward processes, so it is sufficient to consider the case n=1. Denote this process by (Y_t)_{t \geq 0}. The successive lifetimes S of the replacement machines are independent and identically distributed, so the optimal policy is the same for all replacement machines in the process.

If Eric decides at the start of a machine's life to replace it at time 0 < t < 2, but the machine happens to fail before that time, then the lifetime S of the machine is uniformly distributed on [0, t] and thus has expectation 0.5t. So the overall expected lifetime of the machine is:


\begin{align}
\mathbb{E}S & = \mathbb{E}[S \mid \mbox{fails before } t] \cdot \mathbb{P}[\mbox{fails before } t] + \mathbb{E}[S \mid \mbox{does not fail before } t] \cdot \mathbb{P}[\mbox{does not fail before } t] \\
& = \frac{t}{2}\left(0.5t\right) + \frac{2-t}{2}\left( t \right)
\end{align}

and the expected cost W per machine is:


\begin{align}
\mathbb{E}W & = \mathbb{E}(W \mid \text{fails before } t) \cdot \mathbb{P}(\text{fails before } t) + \mathbb{E}(W \mid \text{does not fail before } t)\cdot \mathbb{P}(\text{does not fail before } t) \\
& = \frac{t}{2}( 2600 ) + \frac{2-t}{2} ( 200 ) = 1200t + 200.
\end{align}

So by the strong law of large numbers, his long-term average cost per unit time is:


\frac{1}{t} Y_t \simeq \frac{\mathbb{E}W}{\mathbb{E}S}
= \frac{ 4(1200t + 200) }{ 4t - t^2 }

then differentiating with respect to t:


\frac{\partial}{\partial t} \frac{ 4(1200t + 200) }{ 4t - t^2 } = 4\frac{ (4t - t^2)(1200) - (4 - 2t)(1200t + 200) }{ (4t - t^2)^2 },

this implies that the turning points satisfy:


\begin{align}
0 & = (4t - t^2)(1200) - (4 - 2t)(1200t + 200) 
= 4800t - 1200t^2 -4800t - 800 + 2400t^2 + 400t \\
& = -800 + 400t + 1200t^2,
\end{align}

and thus


0 = 3t^2 + t - 2 = (3t - 2)(t + 1).

We take the only solution t in [0, 2]: t = 2/3. This is indeed a minimum (and not a maximum), since the cost per unit time tends to infinity as t tends to zero and t = 2/3 is the only turning point in (0, 2): the cost decreases on (0, 2/3] and increases thereafter.
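The conclusion is easy to verify numerically with a grid search over the average-cost function derived above (Python with NumPy):

    import numpy as np

    ts = np.linspace(0.01, 1.99, 100_000)
    cost = 4 * (1200 * ts + 200) / (4 * ts - ts ** 2)  # long-run cost per year
    t_opt = ts[np.argmin(cost)]
    print(t_opt)                                       # about 0.6667 = 2/3
    print(cost[np.argmin(cost)])                       # about 1800 (euros per year)

So the optimal policy is to replace each machine after two-thirds of a year (eight months), at a long-run average cost of about €1800 per year.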


References

  1. Çinlar, Erhan (1969). "Markov Renewal Theory". Advances in Applied Probability (Applied Probability Trust) 1 (2): 123–187. JSTOR 1426216.
  2. Lawrence, A. J. (1973). "Dependency of Intervals Between Events in Superposition Processes". Journal of the Royal Statistical Society, Series B (Methodological) 35 (2): 306–315. JSTOR 2984914. Formula 4.1. Retrieved Nov 15, 2012.
  3. Choungmo Fofack, Nicaise; Nain, Philippe; Neglia, Giovanni; Towsley, Don. "Analysis of TTL-based Cache Networks". Proceedings of 6th International Conference on Performance Evaluation Methodologies and Tools. Retrieved Nov 15, 2012.