Projection pursuit regression

In statistics, projection pursuit regression (PPR) is a statistical model developed by Jerome H. Friedman and Werner Stuetzle that extends additive models. It adapts the additive model by first projecting the data matrix of explanatory variables in optimal directions before applying smoothing functions to the resulting projections.

Model overview

The model consists of linear combinations of non-linear transformations of linear combinations of explanatory variables. The basic model takes the form

Y=\beta_0 + \sum_{j=1}^r f_j (\beta_j'x) + \varepsilon ,

where x is a column vector containing a particular row of the design matrix X, which contains p explanatory variables (columns) and n observations (rows). Here Y is the corresponding observation of the variable to be predicted, {βj} is a collection of r unit vectors of length p containing the unknown parameters, and r is the number of smoothed non-parametric functions to be modelled and used as constructed explanatory variables. The value of r is found through cross-validation or a forward stage-wise strategy that stops when the model fit cannot be significantly improved. For large values of r and an appropriate set of functions fj, the PPR model is considered a universal approximator, as it can estimate any continuous function on Rp.
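The model form above can be illustrated directly: project each observation onto the directions βj, apply the ridge functions fj, and sum. The sketch below evaluates a PPR predictor with r = 2 terms; the directions, ridge functions (tanh and square), and intercept are illustrative assumptions, since in practice all of them are estimated from data.

```python
import numpy as np

# Hypothetical example: evaluate a PPR model with r = 2 ridge terms.
# The direction vectors beta_j and ridge functions f_j are assumed
# known here; in a real fit both would be estimated from the data.

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))  # design matrix: n observations, p variables

beta = np.array([[1.0, 1.0, 0.0],
                 [0.0, 1.0, -1.0]])
beta /= np.linalg.norm(beta, axis=1, keepdims=True)  # unit vectors of length p

ridge_functions = [np.tanh, np.square]  # f_1, f_2 (illustrative choices)
beta0 = 0.5

# Y_hat = beta0 + sum_j f_j(beta_j' x): project, transform, then sum
projections = X @ beta.T  # shape (n, r): the values beta_j' x_i
Y_hat = beta0 + sum(f(projections[:, j]) for j, f in enumerate(ridge_functions))
```

Note that each term depends on x only through the scalar projection βj'x, which is what makes each fj a one-dimensional (ridge) function.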

Thus this model takes the form of the basic additive model but with the additional βj component, so that it fits \beta_j 'x rather than the actual inputs x. The vector \beta_j 'X is the projection of X onto the unit vector βj, where the directions βj are chosen to optimize model fit. The functions fj are unspecified by the model and are estimated using some flexible smoothing method, preferably one with well-defined second derivatives to simplify computation. This makes PPR very general, since it fits non-linear functions fj of any class of linear combinations of X. Because of this flexibility and generality, the fitted model is difficult to interpret: each input variable enters the model in a complex and multifaceted way. The model is therefore far more useful for prediction than for understanding the data.

Model estimation

For a given set of data (y_i ,x_i ), the goal is to minimize the error function

S=\sum_{i=1}^n \left[ y_i - \sum_{j=1}^r f_j (\beta_j 'x_i) \right]^2 ,

over the functions f_j and vectors \beta_j. Estimation typically alternates between the two: after estimating the smoothing functions f_j, one generally uses Gauss–Newton iteration to solve for \beta_j, provided that the functions f_j are twice differentiable.

It has been shown that the convergence rate, the bias and the variance are affected by the estimation of \beta_j and f_j. It has also been shown that \beta_j converges at an order of n^\frac{1}{2}, while f_j converges at a slightly worse order.


PPR vs neural networks (NN)

Both projection pursuit regression and neural network models project the input vector onto one-dimensional subspaces and then apply a nonlinear transformation to each projection, combining the transformed values linearly. Thus both follow the same steps to overcome the curse of dimensionality. The main difference is that the functions f_j fitted in PPR can be different for each linear combination and are estimated one at a time, with the weights then updated, whereas in a NN the nonlinearities are all specified upfront and all weights are estimated simultaneously.

Thus, PPR estimation is more straightforward than NN estimation, and the transformations of variables in PPR are data driven, whereas in a NN these transformations are fixed.
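The contrast can be made concrete by writing out the single-hidden-layer NN form, y_hat = b0 + Σj wj·σ(βj'x) with a fixed activation σ: where PPR estimates a free function f_j for each projection, the NN uses the same fixed nonlinearity for every unit and only estimates weights. The sketch below evaluates this NN form with tanh; the weights are illustrative random values, not trained ones.

```python
import numpy as np

# Single-hidden-layer NN form, for comparison with the PPR model:
#   y_hat = b0 + sum_j w_j * tanh(beta_j' x)
# The nonlinearity (tanh) is fixed upfront; only the weights would be
# trained. In PPR, w_j * tanh(.) is replaced by a freely estimated f_j.

rng = np.random.default_rng(2)
n, p, r = 50, 3, 4
X = rng.normal(size=(n, p))

beta = rng.normal(size=(r, p))   # hidden-layer directions (untrained)
w = rng.normal(size=r)           # output-layer weights (untrained)
b0 = 0.1

hidden = np.tanh(X @ beta.T)     # fixed nonlinearity on each projection
y_hat = b0 + hidden @ w
```

Replacing each fixed `tanh` unit here with a data-driven smoother (as in the estimation sketch above) recovers the extra flexibility, and the harder interpretability, of PPR.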


    This article is issued from Wikipedia - version of Wednesday, January 27, 2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.