Exponential integrator

Not to be confused with the exponential integral or the integration of exponential functions.

Exponential integrators are a class of numerical methods for the solution of partial and ordinary differential equations. This large class of methods from numerical analysis is based on the exact integration of the linear part of the initial value problem described later in this article. Because the linear part is integrated exactly, this can help to mitigate the stiffness of a differential equation. Exponential integrators can be constructed to be explicit or implicit for numerical ordinary differential equations or serve as the time integrator for numerical partial differential equations.

Background

Dating back to at least the 1960s, these methods were recognized by Certain[1] and Pope.[2] Recently, exponential integrators have become an active area of research. Originally developed for solving stiff differential equations, the methods have been used to solve partial differential equations, including hyperbolic as well as parabolic problems[3] such as the heat equation.

Introduction

We consider initial value problems of the form,

u'(t) = L u(t) + N(u(t) ), \qquad u(t_0)=u_0, \qquad \qquad (1)

where L is composed of the linear terms and N is composed of the non-linear terms. These problems can arise from a more typical initial value problem

u'(t) = f(u(t)), \qquad u(t_0)=u_0,

after linearizing locally about a fixed or local state u^*:

 L = \frac{\partial f}{\partial u}(u^*); \qquad N = f(u) - L u.

Here, \frac{\partial f}{\partial u} refers to the partial derivative of f with respect to u (the Jacobian of f).

Exact integration of this problem from time 0 to a later time t (taking t_0 = 0) can be performed using matrix exponentials to define an integral equation for the exact solution:[4]

 u(t) = e^{L t } u_0 + \int_{0}^{t} e^{ L (t-\tau) } N\left( u\left( \tau \right) \right)\, d\tau. \qquad (2)

This is similar to the exact integral used in the Picard–Lindelöf theorem. In the case of N\equiv 0, this formulation is the exact solution to the linear differential equation.
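Equation (2) can be checked directly on a scalar example. The following sketch (an illustrative problem, not from the article) splits the logistic equation u' = u - u^2 as L = 1 and N(u) = -u^2, evaluates the integral in (2) with a fine trapezoidal quadrature using the known closed-form solution, and compares the two sides:

```python
import math

# Illustrative scalar test problem (not from the article):
# u'(t) = u - u**2, split as L = 1 (linear part), N(u) = -u**2.
L = 1.0

def N(u):
    return -u * u

def exact(t, u0=0.5):
    # Closed-form solution of the logistic equation u' = u(1 - u).
    return u0 / (u0 + (1.0 - u0) * math.exp(-t))

def rhs_of_eq2(t, u0=0.5, m=200_000):
    # e^{L t} u0 + integral_0^t e^{L(t - tau)} N(u(tau)) dtau,
    # with the integral evaluated by the trapezoidal rule.
    h = t / m
    s = 0.0
    for k in range(m + 1):
        tau = k * h
        w = 0.5 if k in (0, m) else 1.0
        s += w * math.exp(L * (t - tau)) * N(exact(tau, u0))
    return math.exp(L * t) * u0 + h * s

t = 1.0
print(exact(t), rhs_of_eq2(t))  # the two values agree closely
```

The agreement is limited only by the quadrature error, illustrating that (2) is an exact reformulation of the initial value problem rather than an approximation.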

Numerical methods require a discretization of equation (2). They can be based on Runge–Kutta discretizations,[5][6] linear multistep methods, or a variety of other options.

Exponential Rosenbrock methods

Exponential Rosenbrock methods have been shown to be very efficient for solving large systems of stiff ordinary differential equations. These integrators are constructed based on a continuous linearization of (1) along the numerical solution u_n:


u'(t)=L_{n} u(t)+ N_n(u(t)), \qquad u(t_0)=u_0, \qquad (3)

where L_{n}=\frac{\partial f}{\partial u}(u_n) and N_n (u)=f(u)-L_{n} u. This linearization has the advantage that \frac{\partial N_n}{\partial u}(u_n)= 0, which helps to improve the stability when integrating the nonlinearity N_n(u(t)). Applying the variation-of-constants formula (2) again gives the exact solution at time t_{n+1} as


u(t_{n+1})= e^{h_n L_n}u(t_n) + \int_{0}^{h_n} e^{(h_n-\tau) L_n}  N_n ( u(t_n+\tau ))  d\tau.  \qquad (4)

The idea is to approximate the integral in (4) by a quadrature rule with nodes c_i and weights b_i(h_n L_n) (1\leq i\leq s). This yields the following class of explicit exponential Rosenbrock methods:[4]


U_{ni}= e^{c_i h_n L_n}u_n +h_n \sum_{j=1}^{i-1}a_{ij}(h_n L_n) N_n( U_{nj}),

u_{n+1} = e^{h_n L_n}u_n +  h_n \sum_{i=1}^{s}b_{i}(h_n L_n)N_n(U_{ni}).

By introducing the difference D_{ni}=N_n (U_{ni})-N_n (u_n), they can be reformulated in a more efficient way for implementation (see also [4]) as


U_{ni}= u_n + c_i h_n \varphi _{1} ( c_i h_n L_n)f(u_n) +h_n \sum_{j=2}^{i-1}a_{ij}(h_n L_n) D_{nj},

u_{n+1}= u_n + h_n \varphi _{1} ( h_n L_n)f(u_n) + h_n \sum_{i=2}^{s}b_{i}(h_n L_n) D_{ni}.
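The simplest member of this family (s = 1) is the exponential Rosenbrock–Euler method, u_{n+1} = u_n + h_n \varphi_1(h_n L_n) f(u_n). The following scalar sketch (with an illustrative test problem not taken from the article) applies it to the logistic equation, re-linearizing at every step:

```python
import math

def f(u):
    # Logistic right-hand side u' = u(1 - u).
    return u - u * u

def df(u):
    # Jacobian (here: scalar derivative) of f, used as L_n.
    return 1.0 - 2.0 * u

def phi1(z):
    # phi_1(z) = (e^z - 1)/z, via expm1 to avoid cancellation near z = 0.
    return math.expm1(z) / z if z != 0.0 else 1.0

def exp_rosenbrock_euler(u0, h, steps):
    # u_{n+1} = u_n + h * phi1(h * L_n) * f(u_n),  with  L_n = f'(u_n).
    u = u0
    for _ in range(steps):
        Ln = df(u)
        u = u + h * phi1(h * Ln) * f(u)
    return u

u0, t_end, steps = 0.5, 2.0, 200
approx = exp_rosenbrock_euler(u0, t_end / steps, steps)
exact = u0 / (u0 + (1.0 - u0) * math.exp(-t_end))
print(approx, exact)  # close agreement
```

Despite using only one stage, this method is second-order accurate, since the re-linearization makes the frozen nonlinearity N_n vary only quadratically along the solution.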

Moreover, it is shown in [7] that this reformulation offers a new and simple way to analyze the local errors, and thus to derive the stiff order conditions for exponential Rosenbrock methods up to order 5. With this technique, together with an extension of the B-series concept, a theory for deriving the stiff order conditions for exponential Rosenbrock integrators of arbitrary order was finally given in [8]. As an example, that work states the stiff order conditions for exponential Rosenbrock methods up to order 6 as in the following table:


\begin{array}{ |c|c|c| }
\hline
No. & \text{Stiff order condition} & \text{Order} \\
\hline
1&\sum_{i=2}^{s} b_i (Z)c^2_i=2\varphi_3 (Z) &3 \\
 \hline
2&\sum_{i=2}^{s} b_i (Z)c^3_i=6\varphi_4 (Z) &4 \\
\hline
3&\sum_{i=2}^{s} b_i (Z)c^4_i=24\varphi_5 (Z) &5 \\
4&\sum_{i=2}^{s} b_i (Z)c_i  K \psi _{3,i}(Z)=0 &5 \\
\hline
5&\sum_{i=2}^{s}b_{i}(Z)c_i^5 = 120\varphi_6(Z) & 6\\
6&\sum_{i=2}^{s}b_{i}(Z) c_i^2 M \psi_{3,i}(Z)=0 & 6 \\
7&\sum_{i=2}^{s}b_{i}(Z) c_i K \psi_{4,i}(Z)=0 & 6\\
\hline
\end{array}
Here Z, K, and M denote arbitrary square matrices.

Examples

See also: the first-order exponential integrator for more details.

First-order forward Euler exponential integrator

The simplest method is based on a forward Euler time discretization. It can be realized by holding the term \mathcal{N}( y(\tau) ) \approx \mathcal{N}( y(t_0) ) constant over the whole interval. Exact integration of the linear part then results in

y(t) = e^{L t}y_0 + L^{-1} \left(e^{L t} - 1\right) \mathcal{N}( y( t_0 ) ).

Of course, this process can be repeated over small intervals to serve as the basis of a single-step numerical method.
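The benefit of treating the linear part exactly is easiest to see on a stiff linear test problem. The sketch below (an illustrative problem, not from the article) applies one such first-order exponential Euler step repeatedly to u' = Lu + c with L = -1000; with the same step size, classical forward Euler is unstable since |1 + Lh| > 1, while the exponential scheme converges to the steady state -c/L:

```python
import math

# Stiff linear test problem u' = L*u + c with steady state -c/L = 1.
L, c = -1000.0, 1000.0
h, steps = 0.1, 50
u_exp = u_fwd = 0.0
for _ in range(steps):
    # Exponential Euler: linear part integrated exactly, the remaining
    # term (here the constant c) held frozen over the step.
    u_exp = math.exp(L * h) * u_exp + (math.exp(L * h) - 1.0) / L * c
    # Classical forward Euler for comparison.
    u_fwd = u_fwd + h * (L * u_fwd + c)

print(u_exp)              # close to the steady state 1.0
print(abs(u_fwd) > 1e10)  # True: forward Euler diverges, |1 + L*h| = 99
```

For this problem (constant nonlinearity) the exponential Euler step is in fact exact, so arbitrarily large step sizes remain stable.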

In general, one defines a sequence of functions

 \varphi_0( z ) = e^z, \qquad \varphi_1( z ) = \frac{e^z - 1}{z}, \qquad \varphi_2(z) = \frac{e^z - 1-z}{z^2}, \qquad \ldots

which show up in these methods. Usually, these linear operators are not computed exactly; instead, a Krylov subspace iterative method can be used to efficiently compute their products with vectors.[9] See the references for further details of where these functions come from.[5][10]
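In the scalar case these functions are straightforward to implement. The sketch below also checks the standard recurrence \varphi_{k+1}(z) = (\varphi_k(z) - 1/k!)/z connecting successive members of the sequence (the recurrence is well known, though not stated above):

```python
import math

def phi0(z):
    return math.exp(z)

def phi1(z):
    # (e^z - 1)/z; expm1 avoids cancellation for small z.
    return math.expm1(z) / z if z != 0.0 else 1.0

def phi2(z):
    # (e^z - 1 - z)/z**2, with the limit value 1/2 at z = 0.
    return (math.expm1(z) - z) / (z * z) if z != 0.0 else 0.5

z = 0.7
# Recurrence: phi_{k+1}(z) = (phi_k(z) - 1/k!)/z
print(abs(phi1(z) - (phi0(z) - 1.0) / z))  # ~0
print(abs(phi2(z) - (phi1(z) - 1.0) / z))  # ~0
```

For matrix arguments the same identities hold, but one computes the action of \varphi_k(hL) on a vector rather than the matrix function itself, e.g. with the Krylov methods mentioned above.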

Fourth-order ETDRK4 method of Cox and Mathews

Cox and Mathews[11] describe a fourth-order exponential time differencing (ETD) method that they derived using Maple.

We use their notation, and assume that the unknown function is u, and that we have a known solution u_n at time t_n. Furthermore, we make explicit use of a possibly time-dependent right-hand side: \mathcal{N} = \mathcal{N}( u, t ).

Three stage values are first constructed:


a_n = e^{ L h / 2 } u_n + L^{-1} \left( e^{Lh/2} - I \right) \mathcal{N}( u_n, t_n )

b_n = e^{ L h / 2 } u_n + L^{-1} \left( e^{Lh/2} - I \right) \mathcal{N}( a_n, t_n + h/2 )

c_n = e^{ L h / 2 } a_n + L^{-1} \left( e^{Lh/2} - I \right) \left( 2 \mathcal{N}( b_n, t_n + h/2 ) - \mathcal{N}(u_n,t_n) \right)

The final update is given by,


u_{n+1} = e^{L h} u_n + h^{-2} L^{-3} \left\{ 
\left[ -4 - Lh + e^{Lh} \left( 4 - 3 L h + (L h)^2 \right)  \right] \mathcal{N}( u_n, t_n ) +
2 \left[ 2 + L h + e^{Lh} \left( -2 + L h \right) \right] \left( \mathcal{N}( a_n, t_n+h/2 ) + \mathcal{N}( b_n, t_n + h / 2 ) \right) +
\left[ -4 - 3L h - (Lh)^2 + e^{Lh} \left(4 - Lh \right) \right] \mathcal{N}( c_n, t_n + h )
\right\}.
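For a scalar L the scheme can be transcribed almost literally, since the matrix functions reduce to ordinary exponentials. The sketch below (with an illustrative test problem, not from the article) does this; for a constant nonlinearity the update reduces algebraically to the exact variation-of-constants formula, which gives a convenient sanity check:

```python
import math

def etdrk4_step(u, t, h, L, N):
    # One ETDRK4 step for scalar u' = L*u + N(u, t), following Cox and Mathews.
    E, E2 = math.exp(L * h), math.exp(L * h / 2)
    a = E2 * u + (E2 - 1.0) / L * N(u, t)
    b = E2 * u + (E2 - 1.0) / L * N(a, t + h / 2)
    c = E2 * a + (E2 - 1.0) / L * (2.0 * N(b, t + h / 2) - N(u, t))
    Lh = L * h
    f1 = -4.0 - Lh + E * (4.0 - 3.0 * Lh + Lh * Lh)
    f2 = 2.0 * (2.0 + Lh + E * (-2.0 + Lh))
    f3 = -4.0 - 3.0 * Lh - Lh * Lh + E * (4.0 - Lh)
    return E * u + (f1 * N(u, t)
                    + f2 * (N(a, t + h / 2) + N(b, t + h / 2))
                    + f3 * N(c, t + h)) / (h * h * L ** 3)

# Sanity check with a constant nonlinearity, where the scheme is exact:
# u' = -u + 1, u(0) = 0  =>  u(t) = 1 - e^{-t}
L, h = -1.0, 0.1
u, t = 0.0, 0.0
for _ in range(10):
    u = etdrk4_step(u, t, h, L, lambda y, s: 1.0)
    t += h
print(u, 1.0 - math.exp(-1.0))  # the two values agree closely
```

Note that the bracketed coefficients f1, f2, f3 suffer from cancellation for small L h; this is precisely the round-off issue discussed next.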

If implemented naively, the above algorithm suffers from numerical instabilities due to floating-point round-off errors.[12] To see why, consider the first function,

 \varphi_1( z ) = \frac{ e^z - 1 }{ z },

which is present in the first-order Euler method as well as in all three stages of ETDRK4. For small values of z, this function suffers from numerical cancellation errors. However, these numerical issues can be avoided by evaluating the \varphi_1 function via a contour integral approach[12] or by a Padé approximant.[13]
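In the scalar case the cancellation is easy to demonstrate and to fix. The sketch below (not from the article) uses the standard `expm1` remedy, rather than the contour-integral or Padé approaches cited above, which are aimed at matrix arguments:

```python
import math

def phi1_naive(z):
    # Direct formula (e^z - 1)/z: the subtraction cancels for small z.
    return (math.exp(z) - 1.0) / z

def phi1_stable(z):
    # expm1 computes e^z - 1 without forming e^z first, avoiding cancellation.
    return math.expm1(z) / z if z != 0.0 else 1.0

z = 1e-9
# True value is 1 + z/2 + O(z^2) ~= 1.0000000005
print(phi1_stable(z))  # accurate
print(phi1_naive(z))   # may lose several digits to cancellation
```

For moderate z the two formulas agree to machine precision; the difference only shows up once e^z rounds to a value very close to 1.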

Applications

Exponential integrators are used for the simulation of stiff problems in scientific and visual computing, for example in molecular dynamics,[14] in VLSI circuit simulation,[15] and in computer graphics.[16] They are also applied in the context of hybrid Monte Carlo methods.[17] In these applications, exponential integrators offer the advantages of large time-stepping capability and high accuracy. To accelerate the evaluation of the matrix functions arising in such complex scenarios, exponential integrators are often combined with Krylov subspace projection methods.

Notes

  1. Certain (1960)
  2. Pope (1963)
  3. Hochbruck and Ostermann (2006)
  4. Hochbruck and Ostermann (2010)
  5. Cox and Mathews (2002)
  6. Tokman (2006, 2011)
  7. Luan and Ostermann (2014a)
  8. Luan and Ostermann (2013)
  9. Tokman (2006, 2010)
  10. Hochbruck and Ostermann (2010)
  11. Cox and Mathews (2002)
  12. Kassam and Trefethen (2005)
  13. Berland et al. (2007)
  14. Michels and Desbrun (2015)
  15. Zhuang et al. (2014)
  16. Michels et al. (2014)
  17. Chao et al. (2015)

This article is issued from Wikipedia - version of the Monday, April 18, 2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.