Luus–Jaakola

In computational engineering, Luus–Jaakola (LJ) denotes a heuristic for global optimization of a real-valued function.[1] In engineering use, LJ is not an algorithm that terminates with an optimal solution; nor is it an iterative method that generates a sequence of points that converges to an optimal solution (when one exists). However, when applied to a twice continuously differentiable function, the LJ heuristic is a proper iterative method that generates a sequence having a convergent subsequence; for this class of problems, Newton's method is recommended and enjoys a quadratic rate of convergence, while no convergence-rate analysis has been given for the LJ heuristic.[1] In practice, the LJ heuristic has been recommended for functions that need be neither convex nor differentiable nor locally Lipschitz: the LJ heuristic does not use a gradient or subgradient even when one is available, which allows its application to non-differentiable and non-convex problems.

Proposed by Luus and Jaakola,[2] LJ generates a sequence of iterates. The next iterate is selected by sampling uniformly from a neighborhood of the current position. With each iteration, the neighborhood shrinks, which forces a subsequence of the iterates to converge to a cluster point.[1]

Luus has applied LJ in optimal control,[3] transformer design,[4] metallurgical processes,[5] and chemical engineering.[6]

Motivation

[Figure: one-dimensional example. When the current position x is far from the optimum, the probability of finding an improvement through uniform random sampling is 1/2; as the optimum is approached, the probability of further improvement decreases towards zero if the sampling range d is kept fixed.]

At each step, the LJ heuristic maintains a box from which it samples points randomly, using a uniform distribution on the box. For a unimodal function, the probability of reducing the objective function decreases as the box approaches a minimum. The figure caption above describes a one-dimensional example.
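
This decay can be checked numerically. The following Python sketch (an illustration, not part of the original article; the helper name improvement_probability and the objective f(x) = x² are hypothetical choices) estimates by Monte Carlo the probability that a uniform sample from a fixed-width interval around the current position improves the objective:

    import random

    def improvement_probability(f, x, d, trials=100_000, seed=0):
        # Estimate P[f(x + a) < f(x)] for a ~ U(-d, d) by Monte Carlo.
        rng = random.Random(seed)
        fx = f(x)
        hits = sum(f(x + rng.uniform(-d, d)) < fx for _ in range(trials))
        return hits / trials

    f = lambda x: x * x  # a simple unimodal objective
    for x in (10.0, 1.0, 0.1, 0.01):
        print(x, improvement_probability(f, x, d=1.0))

Far from the minimum (|x| large relative to d) the estimate is close to 1/2; near the minimum, with d kept fixed, it decays towards zero, matching the motivation above.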

Heuristic

Let f: ℝⁿ → ℝ be the fitness or cost function which must be minimized, and let x ∈ ℝⁿ designate a position or candidate solution in the search-space. The LJ heuristic iterates the following steps, a sketch of which is given below:

  1. Initialize x ~ U(blo, bup) with a uniformly random position in the search-space, where blo and bup are the lower and upper bounds, respectively.
  2. Set the initial sampling range to cover the entire search-space: d = bup − blo.
  3. Until a termination criterion is met (e.g. a fixed number of iterations), repeat:
     - Pick a random vector a ~ U(−d, d) and form the candidate position y = x + a.
     - If f(y) < f(x), move to the new position by setting x = y; otherwise, decrease the sampling range, e.g. d = 0.95 d.
  4. Return x as the best position found.
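
The following Python sketch illustrates these steps; the iteration budget, the shrink schedule as a keyword parameter, and the name luus_jaakola are illustrative choices rather than prescriptions of the original paper:

    import random

    def luus_jaakola(f, lower, upper, iters=10_000, shrink=0.95, seed=None):
        # Minimize f over the box [lower, upper] with the LJ heuristic.
        rng = random.Random(seed)
        n = len(lower)
        # Start from a uniformly random position in the search-space.
        x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
        fx = f(x)
        # The initial sampling range covers the entire box.
        d = [upper[i] - lower[i] for i in range(n)]
        for _ in range(iters):
            # Sample a candidate uniformly from the neighborhood of x.
            y = [x[i] + rng.uniform(-d[i], d[i]) for i in range(n)]
            fy = f(y)
            if fy < fx:
                x, fx = y, fy  # accept the improvement
            else:
                d = [shrink * di for di in d]  # contract the sampling range
        return x, fx

    # Example: minimize the sphere function on [-5, 5] x [-5, 5].
    best_x, best_f = luus_jaakola(lambda v: sum(t * t for t in v),
                                  lower=[-5.0, -5.0], upper=[5.0, 5.0], seed=1)

Note that this sketch does not constrain candidates to the box; variants that clip or reflect candidates at the boundaries are possible.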

Convergence

Nair gave a convergence analysis: for twice continuously differentiable functions, the LJ heuristic generates a sequence of iterates having a convergent subsequence.[1] For this class of problems, Newton's method is the usual optimization method, and it enjoys quadratic convergence (regardless of the dimension of the space, which can be a Banach space, according to Kantorovich's analysis).
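
For contrast, the quadratic rate of Newton's method on a smooth one-dimensional problem can be observed directly. In the following sketch the objective f(x) = x² + eˣ is an arbitrary illustrative choice; the iteration is the standard Newton step x ← x − f′(x)/f″(x):

    import math

    def newton_min(fprime, fsecond, x0, steps=6):
        # Newton's method for smooth 1-D minimization: x <- x - f'(x)/f''(x).
        x = x0
        for k in range(steps):
            x = x - fprime(x) / fsecond(x)
            print(k, x)
        return x

    # Minimize f(x) = x**2 + exp(x), so f'(x) = 2x + exp(x) and f''(x) = 2 + exp(x).
    newton_min(lambda x: 2 * x + math.exp(x),
               lambda x: 2 + math.exp(x),
               x0=0.0)

The printed iterates converge to the unique minimizer near −0.3517, with the number of correct digits roughly doubling at each step.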

However, according to the analysis of Yudin and Nemirovsky, the worst-case complexity of minimization on the class of unimodal functions grows exponentially in the dimension of the problem. Their analysis implies that no method can be fast on high-dimensional problems that lack convexity:

"The catastrophic growth [in the number of iterations needed to reach an approximate solution of a given accuracy] as [the number of dimensions increases to infinity] shows that it is meaningless to pose the question of constructing universal methods of solving ... problems of any appreciable dimensionality 'generally'. It is interesting to note that the same [conclusion] holds for ... problems generated by uni-extremal [that is, unimodal] (but not convex) functions."[7]

When applied to twice continuously differentiable problems, the LJ heuristic's rate of convergence decreases as the number of dimensions increases.[8]

References

  1. Nair, G. Gopalakrishnan (1979). "On the convergence of the LJ search method". Journal of Optimization Theory and Applications 28 (3): 429–434. doi:10.1007/BF00933384. MR 543384.
  2. Luus, R.; Jaakola, T.H.I. (1973). "Optimization by direct search and systematic reduction of the size of search region". American Institute of Chemical Engineers Journal (AIChE) 19 (4): 760–766. doi:10.1002/aic.690190413.
  3. Bojkov, R.; Hansel, B.; Luus, R. (1993). "Application of direct search optimization to optimal control problems". Hungarian Journal of Industrial Chemistry 21: 177–185.
  4. Spaans, R.; Luus, R. (1992). "Importance of search-domain reduction in random optimization". Journal of Optimization Theory and Applications 75: 635–638. doi:10.1007/BF00940497. MR 1194836.
  5. Papangelakis, V.G.; Luus, R. (1993). "Reactor optimization in the pressure oxidization process". Proc. Int. Symp. on Modelling, Simulation and Control of Metallurgical Processes. pp. 159–171.
  6. Lee, Y.P.; Rangaiah, G.P.; Luus, R. (1999). "Phase and chemical equilibrium calculations by direct search optimization". Computers & Chemical Engineering 23 (9): 1183–1191. doi:10.1016/s0098-1354(99)00283-5.
  7. Nemirovsky & Yudin (1983, p. 7)

    Page 7 summarizes the later discussion of Nemirovsky & Yudin (1983, pp. 36–39): Nemirovsky, A. S.; Yudin, D. B. (1983). Problem complexity and method efficiency in optimization. Wiley-Interscience Series in Discrete Mathematics (Translated by E. R. Dawson from the (1979) Russian (Moscow: Nauka) ed.). New York: John Wiley & Sons, Inc. pp. xv+388. ISBN 0-471-10345-4. MR 702836.

  8. Nair (1979, p. 433)
