Bühlmann model

In credibility theory, a branch of actuarial science, the Bühlmann model is a random effects model (also called a "variance components model" or hierarchical linear model) used to determine the appropriate premium for a group of insurance contracts. The model is named after Hans Bühlmann, who first published a description of it in 1967.[1]

Model description

Consider i risks which generate random losses, and for which historical data on the m most recent claims are available (indexed by j). A premium for the ith risk is to be determined based on the expected value of claims. A linear estimator which minimizes the mean square error is sought. Write

X_{ij} for the jth claim on the ith risk (claims on a given risk are assumed independent and identically distributed conditionally on its risk parameter \vartheta);

\bar{X}_i=\frac{1}{m}\sum_{j=1}^{m}X_{ij} for the average claim on the ith risk;

m(\vartheta)=\operatorname E\left[X_{ij}|\vartheta\right] for the expected claim given the risk parameter;

s^2(\vartheta)=\operatorname{Var}\left(X_{ij}|\vartheta\right) for the variance of a claim given the risk parameter;

\Pi=\operatorname E\left(m(\vartheta)|X_{i1},X_{i2},\ldots,X_{im}\right) for the premium to be estimated;

\mu=\operatorname E\left[m(\vartheta)\right] for the collective premium;

\sigma^2=\operatorname E\left[s^2(\vartheta)\right] for the expected value of the process variance;

v^2=\operatorname{Var}\left(m(\vartheta)\right) for the variance of the hypothetical means.

Note: m(\vartheta) and s^2(\vartheta) are functions of the random parameter \vartheta.

The Bühlmann model is the solution to the problem:

\underset{a_{i0},a_{i1},...,a_{im}}{\operatorname{arg\,min}} \operatorname E\left [  \left ( a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-\Pi \right)^2\right ]

where a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij} is the estimator of the premium \Pi, and arg min represents the parameter values which minimize the expression.
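To make the setting concrete, the following Python sketch simulates the hierarchical structure described above. The normal distributions and all numeric values are illustrative assumptions only; the Bühlmann model itself fixes no distributional form.

    import numpy as np

    # Simulation sketch of the Buhlmann setup (all values are assumptions).
    rng = np.random.default_rng(0)

    n_risks, m = 5, 10        # risks indexed by i, m historical claims each
    mu, v = 100.0, 20.0       # mu = E[m(theta)], v^2 = Var(m(theta))
    sigma = 30.0              # sigma^2 = E[s^2(theta)], here constant across risks

    # Draw each risk's conditional mean m(theta_i); a normal specification
    # is assumed purely for concreteness.
    m_theta = rng.normal(mu, v, size=n_risks)

    # Claims X_ij are conditionally i.i.d. given theta_i, with mean m(theta_i)
    # and variance s^2(theta_i) = sigma^2.
    X = rng.normal(m_theta[:, None], sigma, size=(n_risks, m))

    print(X.shape)  # (5, 10): the historical claims the premium is based on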

Model solution

The solution to the problem is:

Z\bar{X}_i+(1-Z)\mu

where:

Z=\frac{1}{1+\frac{\sigma^2}{v^2m}}

This result can be interpreted as follows: a share Z of the premium is based on the information we have about the specific risk, and a share (1-Z) is based on the information we have about the whole population.
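As a worked illustration of this weighting, the sketch below computes Z and the resulting premium for assumed structural parameters; all numbers are hypothetical.

    # Credibility weighting with assumed (hypothetical) structural parameters.
    mu, v2, sigma2, m = 100.0, 400.0, 900.0, 10

    Z = 1.0 / (1.0 + sigma2 / (v2 * m))    # equivalently m*v2 / (sigma2 + m*v2)
    X_bar_i = 120.0                        # observed average claim for risk i

    premium = Z * X_bar_i + (1 - Z) * mu
    print(round(Z, 4), round(premium, 2))  # 0.8163 116.33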

Proof

The following proof is slightly different from the one in the original paper. It is also more general, because it considers all linear estimators, while the original proof considers only estimators based on the average claim.[2]

Lemma: The problem can be stated alternatively as:

f=\operatorname E\left[\left(a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-m(\vartheta)\right)^2\right]\rightarrow \min

Proof:

\operatorname E\left[\left(a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-m(\vartheta)\right)^2\right] =\operatorname E\left[\left(a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-\Pi\right)^2\right]+\operatorname E\left[\left(m(\vartheta)-\Pi\right)^2\right]+2\operatorname E\left[\left(a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-\Pi\right)\left(m(\vartheta)-\Pi\right)\right]

=\operatorname E\left[\left(a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-\Pi\right)^2\right]+\operatorname E\left[\left(m(\vartheta)-\Pi\right)^2\right]

The last equality follows from the fact that

\operatorname E\left[\left(a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-\Pi\right)\left(m(\vartheta)-\Pi\right)\right] =\operatorname E\left\{\operatorname E\left[\left(a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-\Pi\right)\left(m(\vartheta)-\Pi\right)\Big|X_{i1},X_{i2},\ldots,X_{im}\right]\right\} =\operatorname E\left\{\left(a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-\Pi\right)\operatorname E\left[m(\vartheta)-\Pi\Big|X_{i1},X_{i2},\ldots,X_{im}\right]\right\}=0

Here we used the law of total expectation and the fact that \Pi=\operatorname E(m(\vartheta)|X_{i1},X_{i2},\ldots,X_{im}); given the observations, the estimator a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-\Pi is constant and can be taken out of the inner expectation.

In the equation above, we decomposed the minimized function into a sum of two expressions. The second expression does not depend on the parameters used in the minimization. Therefore, minimizing the function is the same as minimizing the first term of the sum.
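The decomposition can also be checked numerically. The sketch below assumes a normal-normal model, in which \Pi=\operatorname E(m(\vartheta)|X_{i1},\ldots,X_{im}) happens to have the closed form Z\bar{X}_i+(1-Z)\mu, and verifies that both sides of the lemma agree up to Monte Carlo error for an arbitrary linear estimator.

    import numpy as np

    # Monte Carlo check of the lemma's decomposition (normal-normal model assumed).
    rng = np.random.default_rng(1)
    mu, v, sigma, m, n = 100.0, 20.0, 30.0, 10, 200_000

    m_theta = rng.normal(mu, v, size=n)                   # m(theta) for each risk
    X = rng.normal(m_theta[:, None], sigma, size=(n, m))  # conditionally i.i.d. claims
    X_bar = X.mean(axis=1)

    Z = (m * v**2) / (sigma**2 + m * v**2)
    Pi = Z * X_bar + (1 - Z) * mu       # posterior mean = credibility formula here

    est = 10.0 + 0.9 * X_bar            # an arbitrary linear estimator
    lhs = np.mean((est - m_theta)**2)
    rhs = np.mean((est - Pi)**2) + np.mean((m_theta - Pi)**2)
    print(lhs, rhs)                     # the two values agree up to simulation noise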

Let us find the critical points of the function. For the intercept term we have:

\frac{1}{2}\frac{\partial f}{\partial a_{i0}}=\operatorname E\left[a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}-m(\vartheta)\right]=a_{i0}+\sum_{j=1}^{m}a_{ij}\operatorname E(X_{ij})-\operatorname E(m(\vartheta))=a_{i0}+\left(\sum_{j=1}^{m}a_{ij}-1\right)\mu=0

a_{i0}=\left(1-\sum_{j=1}^{m}a_{ij}\right)\mu

Here we used \operatorname E(X_{ij})=\operatorname E[\operatorname E(X_{ij}|\vartheta)]=\operatorname E[m(\vartheta)]=\mu.

For k\neq 0 we have:

\frac{1}{2}\frac{\partial f}{\partial a_{ik}}=\operatorname E\left [ X_{ik}\left ( a_{i0} +\sum_{j=1}^{m}a_{ij}X_{ij}-m(\vartheta)\right ) \right ] =\operatorname E\left [ X_{ik} \right ]a_{i0}+\sum_{j=1, j\neq k}^{m}a_{ij}\operatorname E[X_{ik}X_{ij}]+a_{ik}\operatorname E[X^2_{ik}]-\operatorname E[X_{ik}m(\vartheta)]=0

We can simplify the derivative by noting that:

\operatorname E[X_{ij}X_{ik}]=\operatorname E\left[\operatorname E(X_{ij}X_{ik}|\vartheta)\right]=\operatorname E\left[\operatorname{cov}(X_{ij},X_{ik}|\vartheta)+\operatorname E(X_{ij}|\vartheta)\operatorname E(X_{ik}|\vartheta)\right]=\operatorname E[(m(\vartheta))^2]=v^2+\mu^2

since the claims are conditionally independent given \vartheta, so \operatorname{cov}(X_{ij},X_{ik}|\vartheta)=0 for j\neq k,

and

\operatorname E[X^2_{ik}]=\operatorname E[\operatorname E[X^2_{ik}|\vartheta]]=\operatorname E[s^2(\vartheta)+(m(\vartheta))^2]=\sigma^2+v^2+\mu^2

and

\operatorname E[X_{ik}m(\vartheta)]=\operatorname E\left[\operatorname E(X_{ik}m(\vartheta)|\vartheta)\right]=\operatorname E\left[m(\vartheta)\operatorname E(X_{ik}|\vartheta)\right]=\operatorname E[(m(\vartheta))^2]=v^2+\mu^2
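Each of these three moment identities can be verified by simulation; the sketch below again assumes a normal-normal specification purely for concreteness.

    import numpy as np

    # Monte Carlo check of the three moment identities used above.
    rng = np.random.default_rng(3)
    mu, v, sigma, n = 100.0, 20.0, 30.0, 1_000_000

    m_theta = rng.normal(mu, v, size=n)
    X_j = rng.normal(m_theta, sigma)   # X_ij given theta
    X_k = rng.normal(m_theta, sigma)   # X_ik given theta, with j != k

    print(np.mean(X_j * X_k), v**2 + mu**2)          # E[X_ij X_ik]     ~ 10400
    print(np.mean(X_k**2), sigma**2 + v**2 + mu**2)  # E[X_ik^2]        ~ 11300
    print(np.mean(X_k * m_theta), v**2 + mu**2)      # E[X_ik m(theta)] ~ 10400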

Substituting the above expressions into the derivative, we obtain:

\frac{1}{2}\frac{\partial f}{\partial a_{ik}} =\left ( 1-\sum_{j=1}^{m}a_{ij} \right )\mu^2+\sum_{j=1,j\neq k}^{m}a_{ij}(v^2+\mu^2)+a_{ik}(\sigma^2+v^2+\mu^2)-(v^2+\mu^2)=a_{ik}\sigma^2-\left ( 1-\sum_{j=1}^{m}a_{ij} \right )v^2=0

\sigma^2a_{ik}=v^2\left (1-  \sum_{j=1}^{m} a_{ij}\right)

The right side does not depend on k; therefore all the a_{ik} are equal. Writing a_{ik}=a and substituting \sum_{j=1}^{m}a_{ij}=ma gives \sigma^2 a=v^2(1-ma), hence

a_{i1}=a_{i2}=...=a_{im}=\frac{v^2}{\sigma^2+mv^2}
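This value can be confirmed symbolically; the small sympy check below solves the same first-order condition (the variable names are ours):

    import sympy as sp

    # Solve the first-order condition sigma2*a = v2*(1 - m*a) for a.
    a, sigma2, v2, m = sp.symbols('a sigma2 v2 m', positive=True)
    sol = sp.solve(sp.Eq(sigma2 * a, v2 * (1 - m * a)), a)
    print(sol)  # [v2/(m*v2 + sigma2)], i.e. a_ik = v^2/(sigma^2 + m v^2)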

From the solution for a_{i0} we have

a_{i0}=(1-ma_{ik})\mu=\left ( 1-\frac{mv^2}{\sigma^2+mv^2} \right )\mu

Finally, the best estimator is

a_{i0}+\sum_{j=1}^{m}a_{ij}X_{ij}=\frac{mv^2}{\sigma^2+mv^2}\bar{X}_i+\left(1-\frac{mv^2}{\sigma^2+mv^2}\right)\mu=Z\bar{X}_i+(1-Z)\mu
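As a final sanity check, a simulation (normal-normal specification and all values assumed for illustration) shows the credibility premium attaining a lower mean square error against m(\vartheta) than either the individual average \bar{X}_i or the collective mean \mu alone.

    import numpy as np

    # Compare MSE of the credibility premium against the two extreme estimators.
    rng = np.random.default_rng(2)
    mu, v, sigma, m, n = 100.0, 20.0, 30.0, 10, 200_000

    m_theta = rng.normal(mu, v, size=n)
    X_bar = rng.normal(m_theta, sigma / np.sqrt(m))   # sampling distribution of the mean

    Z = (m * v**2) / (sigma**2 + m * v**2)
    cred = Z * X_bar + (1 - Z) * mu

    for name, est in [("credibility", cred), ("X_bar", X_bar), ("mu", np.full(n, mu))]:
        print(name, np.mean((est - m_theta)**2))      # roughly 73.5, 90, 400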

References

  1. Bühlmann, Hans (1967). "Experience rating and credibility" (PDF). ASTIN Bulletin. 4 (3): 199–207.
  2. A proof can be found in: Schmidli, Hanspeter. "Lecture notes on Risk Theory" (PDF). Institute of Mathematics, University of Cologne. Archived from the original (PDF) on August 11, 2013.