Von Neumann stability analysis

In numerical analysis, von Neumann stability analysis (also known as Fourier stability analysis) is a procedure used to check the stability of finite difference schemes as applied to linear partial differential equations.[1] The analysis is based on the Fourier decomposition of numerical error and was developed at Los Alamos National Laboratory after having been briefly described in a 1947 article by British researchers Crank and Nicolson.[2] Later, the method was given a more rigorous treatment in an article[3] co-authored by John von Neumann. The scheme used for illustration below is an example of explicit time integration, in which the function that defines the governing equation is evaluated at the current time level.

Numerical stability

The stability of numerical schemes is closely associated with numerical error. A finite difference scheme is stable if the errors made at one time step of the calculation are not magnified as the computations are continued. A neutrally stable scheme is one in which errors remain constant as the computations are carried forward. If the errors decay and eventually damp out, the numerical scheme is said to be stable. If, on the contrary, the errors grow with time, the numerical scheme is said to be unstable. The stability of numerical schemes can be investigated by performing von Neumann stability analysis. For time-dependent problems, stability guarantees that the numerical method produces a bounded solution whenever the solution of the exact differential equation is bounded. Stability, in general, can be difficult to investigate, especially when the equation under consideration is nonlinear.

In certain cases, von Neumann stability is necessary and sufficient for stability in the sense of Lax–Richtmyer (as used in the Lax equivalence theorem): the PDE and the finite difference scheme are linear; the PDE has constant coefficients, periodic boundary conditions, and only two independent variables; and the scheme uses no more than two time levels.[4] Von Neumann stability is necessary in a much wider variety of cases. Because of its relative simplicity, it is often used in place of a more detailed stability analysis to provide a good guess at the restrictions (if any) on the step sizes used in the scheme.

Illustration of the method

The von Neumann method is based on the decomposition of the errors into Fourier series. To illustrate the procedure, consider the one-dimensional heat equation


  \frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}

defined on a spatial interval of length L, which can be discretized[5] as


  \quad (1) \qquad u_j^{n + 1} = u_j^{n} + r \left(u_{j + 1}^n - 2 u_j^n + u_{j - 1}^n \right)

where

r = \frac{\alpha\, \Delta t}{\Delta x^2}

and the solution u_j^{n} of the discrete equation approximates the analytical solution u(x,t) of the PDE on the grid, i.e. u_j^n \approx u(j\,\Delta x,\, n\,\Delta t), where j indexes the spatial grid points and n the time steps.
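
Update (1) translates directly into code. The following is a minimal sketch in Python; the function name, the use of NumPy slicing, and the fixed-boundary treatment are assumptions made only to keep the example self-contained:

  import numpy as np

  def ftcs_step(u, r):
      # One FTCS update of equation (1):
      #   u_j^{n+1} = u_j^n + r (u_{j+1}^n - 2 u_j^n + u_{j-1}^n)
      # u holds the grid values at time level n; r = alpha * dt / dx**2.
      # The two boundary values are held fixed (an assumed Dirichlet condition).
      u_new = u.copy()
      u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
      return u_new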

Define the round-off error \epsilon_j^n as


  \epsilon_j^n = N_j^n - u_j^n

where u_j^n is the solution of the discretized equation (1) that would be computed in the absence of round-off error, and N_j^n is the numerical solution actually obtained in finite-precision arithmetic. Since the exact solution u_j^n satisfies the discretized equation exactly, and N_j^n is assumed to satisfy it as well (which holds only up to machine precision), their difference \epsilon_j^n must, by linearity, also satisfy the discretized equation.[6] Thus


  \quad (2) \qquad \epsilon_j^{n + 1} = \epsilon_j^n + r \left(\epsilon_{j + 1}^n - 2 \epsilon_j^n + \epsilon_{j - 1}^n \right)

is a recurrence relation for the error. Equations (1) and (2) show that both the error and the numerical solution have the same growth or decay behavior with respect to time. For linear differential equations with periodic boundary conditions, the spatial variation of the error may be expanded in a finite Fourier series on the interval of length L as


  \quad (3) \qquad \epsilon(x) = \sum_{m=1}^{M} A_m e^{ik_m x}

where the wavenumber k_m = \frac{\pi m}{L} with m = 1,2,\ldots,M and M = L/\Delta x. The time dependence of the error is included by assuming that the amplitude of error A_m is a function of time. Since the error tends to grow or decay exponentially with time, it is reasonable to assume that the amplitude varies exponentially with time; hence


  \quad (4) \qquad \epsilon(x,t) = \sum_{m=1}^{M} e^{at} e^{ik_m x}

where a is a constant.
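
As a concrete illustration of expansions (3) and (4), the following Python sketch synthesizes the error on a grid from its Fourier modes; the interval length, grid resolution, and growth rate a are arbitrary assumptions chosen for the example:

  import numpy as np

  L_len = 1.0
  M = 50                        # number of grid intervals, M = L / dx
  dx = L_len / M
  x = np.arange(M + 1) * dx     # grid points x_j = j * dx
  m = np.arange(1, M + 1)
  k = np.pi * m / L_len         # wavenumbers k_m = pi * m / L
  a = -1.0                      # assumed growth rate; a < 0 means the error decays
  t = 0.1

  # equation (4): the error as a finite Fourier sum with amplitudes e^{a t}
  eps = (np.exp(a * t) * np.exp(1j * np.outer(k, x))).sum(axis=0)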

Since the difference equation for the error is linear (the behavior of each term of the series is the same as that of the series itself), it is enough to consider the growth of the error of a typical term:


  \quad (5) \qquad \epsilon_m(x,t) = e^{at} e^{ik_m x}

The stability characteristics can be studied using just this form for the error with no loss of generality. To find out how the error varies from one time step to the next, substitute equation (5) into equation (2), after noting that


\begin{align}
 \epsilon_j^n & = e^{at} e^{ik_m x} \\
 \epsilon_j^{n+1} & = e^{a(t+\Delta t)} e^{ik_m x} \\
 \epsilon_{j+1}^n & = e^{at} e^{ik_m (x+\Delta x)} \\
 \epsilon_{j-1}^n & = e^{at} e^{ik_m (x-\Delta x)},
\end{align}

to yield (after simplification)


  \quad (6) \qquad e^{a\Delta t} = 1 + \frac{\alpha \Delta t}{\Delta x^2} \left(e^{ik_m \Delta x} + e^{-ik_m \Delta x} -  2\right).
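
Writing out the suppressed algebra: substituting these expressions into equation (2) gives

  e^{a(t + \Delta t)} e^{ik_m x} = e^{at} e^{ik_m x} + r \left( e^{at} e^{ik_m (x + \Delta x)} - 2\, e^{at} e^{ik_m x} + e^{at} e^{ik_m (x - \Delta x)} \right),

and dividing through by the common (nonzero) factor e^{at} e^{ik_m x} yields equation (6), with r = \frac{\alpha\, \Delta t}{\Delta x^2}.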

Using the identities


  \qquad \cos(k_m \Delta x) = \frac{e^{ik_m \Delta x} + e^{-ik_m \Delta x}}{2} \qquad \text{and} \qquad \sin^2\frac{k_m \Delta x}{2} = \frac{1 - \cos(k_m \Delta x)}{2}

equation (6) may be written as


  \quad (7) \qquad e^{a\Delta t} = 1 - \frac{4\alpha \Delta t}{\Delta x^2} \sin^2 (k_m \Delta x/2)

Define the amplification factor


  G \equiv \frac{\epsilon_j^{n+1}}{\epsilon_j^n}

The necessary and sufficient condition for the error to remain bounded is that \vert G \vert \leq 1. However,


  \quad (8) \qquad G = \frac{e^{a(t+\Delta t)} e^{ik_m x}}{e^{at} e^{ik_m x}} = e^{a\Delta t}
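
Before deriving the stability condition analytically, the modulus of G can be inspected numerically. The sketch below evaluates G = 1 - 4 r \sin^2(\theta/2) from equations (7) and (8); the sampled angles \theta = k_m \Delta x and the chosen values of r are assumptions made for the example:

  import numpy as np

  def amplification_factor(r, theta):
      # G from equations (7) and (8), with theta = k_m * dx
      return 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2

  theta = np.linspace(0.0, np.pi, 181)   # sample the resolvable wavenumbers
  for r in (0.25, 0.5, 0.75):
      G = amplification_factor(r, theta)
      print(f"r = {r}: max |G| = {np.abs(G).max():.2f}")
  # prints 1.00, 1.00, 2.00: |G| stays within 1 only for r <= 1/2,
  # anticipating the bound derived below.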

Thus, from equations (7) and (8), the condition for stability is given by


  \quad (9) \qquad \left\vert 1 - \frac{4\alpha \Delta t}{\Delta x^2} \sin^2 (k_m \Delta x/2) \right\vert \leq 1

Note that the term \frac{4\alpha \Delta t}{\Delta x^2} \sin^2 (k_m \Delta x/2) is always non-negative. Thus, condition (9) is satisfied if and only if


  \quad (10) \qquad  \frac{4\alpha \Delta t}{\Delta x^2} \sin^2 (k_m \Delta x/2) \leq 2

For the above condition to hold for all modes k_m, it must hold in the worst case \sin^2 (k_m \Delta x/2) = 1, which gives


  \quad (11) \qquad \frac{\alpha \Delta t}{\Delta x^2} \leq \frac{1}{2}

Equation (11) gives the stability requirement for the FTCS scheme as applied to the one-dimensional heat equation: for a given \Delta x, the time step \Delta t must be chosen small enough that r = \alpha \Delta t / \Delta x^2 does not exceed 1/2.
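
A quick numerical experiment confirms the bound. The following sketch runs the FTCS update (1) just below and just above r = 1/2; the grid size, initial condition, and number of steps are arbitrary assumptions chosen for the example:

  import numpy as np

  def ftcs_max(r, nx=51, steps=500):
      # Run scheme (1) on u(x, 0) = sin(pi x) with u = 0 at both ends,
      # and return the largest magnitude on the grid after `steps` updates.
      x = np.linspace(0.0, 1.0, nx)
      u = np.sin(np.pi * x)
      for _ in range(steps):
          u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
      return np.abs(u).max()

  print(ftcs_max(0.4))   # ~0.45: the mode decays smoothly (stable)
  print(ftcs_max(0.6))   # astronomically large: round-off error in the
                         # high-wavenumber modes is amplified (unstable)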

References

  1. Isaacson, E.; Keller, H. B., Analysis of Numerical Methods
  2. Crank, J.; Nicolson, P. (1947), "A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of Heat Conduction Type", Proc. Camb. Phil. Soc. 43: 50–67, doi:10.1007/BF02127704
  3. Charney, J. G.; Fjørtoft, R.; von Neumann, J. (1950), "Numerical Integration of the Barotropic Vorticity Equation", Tellus 2: 237–254, doi:10.1111/j.2153-3490.1950.tb00336.x
  4. Smith, G. D. (1985), Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed., pp. 67–68
  5. In this case, using the FTCS discretization scheme.
  6. Anderson, J. D., Jr. (1994), Computational Fluid Dynamics: The Basics with Applications, McGraw-Hill