Adaptive stepsize

In numerical analysis, some methods for the numerical solution of ordinary differential equations (including the special case of numerical integration) use an adaptive stepsize in order to control the errors of the method and to ensure stability properties such as A-stability. Romberg's method is an example of a numerical integration method which uses an adaptive stepsize.

Example

For simplicity, the following example uses the simplest integration method, the Euler method; in practice, higher-order methods such as Runge–Kutta methods are preferred due to their superior convergence and stability properties.

Consider the initial value problem

 y'(t) = f(t,y(t)), \qquad y(a)=y_a

where y and f may denote vectors (in which case this equation represents a system of coupled ODEs in several variables).

We are given the function f(t,y) and the initial condition (a, y_a), and we are interested in finding the solution at t = b. Let y(b) denote the exact solution at b, and let y_b denote the solution that we compute. We write y_b + \epsilon = y(b), where \epsilon is the error in the numerical solution.
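To make the later formulas concrete, the short Python sketches below use a hypothetical test problem, y'(t) = -y with y(0) = 1 and known exact solution y(t) = e^{-t}. All names are our own, and subsequent sketches reuse these definitions.

    import math

    def f(t, y):
        # Right-hand side of the assumed test problem y' = -y.
        return -y

    a, y_a = 0.0, 1.0   # initial condition y(a) = y_a
    b = 1.0             # we want the solution at t = b

    def exact(t):
        # Known exact solution of the test problem, used only to measure errors.
        return math.exp(-t)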

For a sequence (t_n) of values of t, with t_n = a + nh, the Euler method gives approximations y_n to the corresponding values y(t_n) as

y_{n+1}^{(0)}=y_n+hf(t_n,y_n)
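In code, one Euler step is a single line; a minimal sketch (the function name is our own):

    def euler_step(f, t, y, h):
        # One explicit Euler step: y_{n+1} = y_n + h*f(t_n, y_n).
        return y + h * f(t, y)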

The local truncation error of this approximation is defined by

\tau_{n+1}^{(0)}=y(t_{n+1}) - y_{n+1}^{(0)}

and by Taylor's theorem, it can be shown that (provided f is sufficiently smooth) the local truncation error is proportional to the square of the step size:

\tau_{n+1}^{(0)}=ch^2

where c is some constant of proportionality.

We have marked this solution and its error with a (0).
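The h^2 scaling is easy to observe numerically: halving h should roughly quarter the one-step error. Continuing with the test problem and euler_step from the sketches above:

    # One Euler step from the exact initial value, for successively halved h.
    for h in (0.1, 0.05, 0.025):
        tau0 = exact(a + h) - euler_step(f, a, y_a, h)  # local error after one step
        print(f"h = {h:5.3f}   tau = {tau0:.3e}")
    # Each halving of h shrinks tau by roughly a factor of 4, i.e. tau ~ c*h^2.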

The value of c is not known to us. Let us now apply Euler's method again with a different step size to generate a second approximation to y(t_{n+1}), which we label with a (1). Take the new step size to be one half of the original step size, and apply two steps of Euler's method. This second solution is presumably more accurate; since Euler's method is applied twice, the local error is (in the worst case) twice the error of a single half step:

y_{n+\frac{1}{2}}=y_n+\frac{h}{2}f(t_n,y_n)
y_{n+1}^{(1)}=y_{n+\frac{1}{2}}+\frac{h}{2}f(t_{n+\frac{1}{2}},y_{n+\frac{1}{2}})
\tau_{n+1}^{(1)}=c\left(\frac{h}{2}\right)^2+c\left(\frac{h}{2}\right)^2=2c\left(\frac{h}{2}\right)^2=\frac{1}{2}ch^2=\frac{1}{2}\tau_{n+1}^{(0)}
y_{n+1}^{(1)} + \tau_{n+1}^{(1)}=y(t_{n+1})

Here, we assume the error factor c is constant over the interval [t_n, t_n + h]. In reality, its rate of change is proportional to y^{(3)}(t). Subtracting the two solutions gives the error estimate:

 y_{n+1}^{(1)}-y_{n+1}^{(0)} = \tau_{n+1}^{(1)}

This local error estimate is third order accurate; that is, it differs from the true local error by a term of order h^3.
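In code, the estimate is simply the difference of the two numerical answers; continuing the sketches above:

    t, y, h = a, y_a, 0.1
    y_full = euler_step(f, t, y, h)                    # y^(0): one full step
    y_half = euler_step(f, t, y, h / 2)                # first half step
    y_two = euler_step(f, t + h / 2, y_half, h / 2)    # y^(1): second half step

    tau = y_two - y_full                               # estimate of tau^(1)
    print(tau, exact(t + h) - y_two)                   # estimate vs. true error of y^(1)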

The local error estimate can be used to decide how the stepsize h should be modified to achieve the desired accuracy. For example, if a local tolerance tol is allowed, we could let h evolve like (the exponent 1/2 appears because the local error scales as h^2):

 h \rightarrow 0.9 \times h \times \left(\frac{tol}{|\tau_{n+1}^{(1)}|}\right)^{1/2}
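A sketch of this rule as a function, continuing the sketches above (the safety factor and its default value are illustrative choices):

    def new_step_size(h, tau, tol, safety=0.9):
        # Rescale h from the local error estimate tau; the square root
        # appears because the local error scales like c*h^2.
        return safety * h * math.sqrt(tol / abs(tau))

The same formula is applied whether the step is accepted or rejected; only an accepted step advances t.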

The 0.9 is a safety factor to ensure success on the next try; with this update, the next step should, in principle, give an error of about 0.81 \times tol. If |\tau_{n+1}^{(1)}| < tol, we consider the step successful, and the error estimate is used to improve the solution:

 y_{n+1}^{(2)} = y_{n+1}^{(1)} + \tau_{n+1}^{(1)}

This improved solution is actually third order accurate in the local scope (second order in the global scope), but since there is no error estimate for it, this does not help in reducing the number of steps. The technique of combining the two solutions in this way is called Richardson extrapolation.

Beginning with an initial stepsize of h = b - a, this machinery lets us integrate the ODE from point a to b in a controlled way, using close to the optimal number of steps for the given local error tolerance. A drawback is that the step size may become prohibitively small, especially when using the low-order Euler method.
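Putting the pieces together, a complete (if simplistic) adaptive Euler integrator might look like the following sketch; it reuses the definitions above, and production code would also clamp the growth of h and guard against a zero error estimate.

    def adaptive_euler(f, a, b, y_a, tol):
        t, y, h = a, y_a, b - a                      # initial step size h = b - a
        while t < b:
            h = min(h, b - t)                        # never step past b
            y_full = euler_step(f, t, y, h)          # one full step
            y_half = euler_step(f, t, y, h / 2)      # two half steps
            y_two = euler_step(f, t + h / 2, y_half, h / 2)
            tau = y_two - y_full                     # local error estimate
            if abs(tau) < tol:                       # step accepted
                t += h
                y = y_two + tau                      # Richardson extrapolation
            h = new_step_size(h, tau, tol)           # rescale h either way
        return y

    print(adaptive_euler(f, a, b, y_a, 1e-6))  # compare with exact(b)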

Similar step size control can be developed for higher-order methods, such as the fourth-order Runge–Kutta method. Also, an approximately global error tolerance can be achieved by scaling the allowed local error to the global scope, for example in proportion to the fraction of the interval covered by each step.

Embedded error estimates

Adaptive stepsize methods that use a so-called 'embedded' error estimate include the Runge–Kutta–Fehlberg, Cash–Karp and Dormand–Prince methods. They compute two solutions of different order from the same function evaluations, so the error estimate comes almost for free; this makes them more computationally efficient than step doubling, though their error estimates are less accurate.
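The simplest instance of the idea is a hypothetical Heun–Euler pair: Heun's second-order method contains an Euler solution as a by-product, so an error estimate costs nothing extra. A minimal sketch in the style of the code above:

    def heun_euler_step(f, t, y, h):
        # Embedded pair: Heun's method (2nd order) with an Euler solution
        # (1st order) obtained from the same function evaluations.
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + (h / 2) * (k1 + k2)   # Heun solution, kept as the result
        y_low = y + h * k1                 # Euler solution, reuses k1
        return y_high, y_high - y_low      # solution and embedded error estimate

The named production methods follow the same pattern with more stages and a higher-order pair.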
