Dynamic mode decomposition

Dynamic mode decomposition (DMD) is a dimensionality reduction algorithm developed by Peter Schmid in 2008. Given a time series of data, DMD computes a set of modes each of which is associated with a fixed oscillation frequency and decay/growth rate. For linear systems in particular, these modes and frequencies are analogous to the normal modes of the system, but more generally, they are approximations of the modes and eigenvalues of the composition operator (also called the Koopman operator). Due to the intrinsic temporal behaviors associated with each mode, DMD differs from dimensionality reduction methods such as principal component analysis, which computes orthogonal modes that lack predetermined temporal behaviors. Because its modes are not orthogonal, DMD-based representations can be less parsimonious than those generated by PCA. However, they can also be more physically meaningful because each mode is associated with a damped (or driven) sinusoidal behavior in time.

Overview

Dynamic mode decomposition was first introduced by Schmid as a numerical procedure for extracting dynamical features from flow data.[1]

The data takes the form of a snapshot sequence

 V_1^N = \{v_1, v_2, \dots, v_N\},

where  v_i\in \mathbb{R}^M is the i-th snapshot of the flow field, and V_1^N\in\mathbb{R}^{M\times N} is a data matrix whose columns are the individual snapshots. The subscript and superscript denote the index of the snapshot in the first and last columns respectively. These snapshots are assumed to be related via a linear mapping

 v_{i+1} = A v_i,

that remains approximately the same over the duration of the sampling period. Written in matrix form, this implies that

 V_{2}^N = A V_{1}^{N-1} + re_{N-1}^T,

where r is the vector of residuals that accounts for behaviors that cannot be described completely by A, e_{N-1}=\{0,0,\ldots,1\}\in\mathbb{R}^{N-1}, V_1^{N-1}=\{v_1, v_2, \dots, v_{N-1}\}, and V_2^{N}=\{v_2, v_3, \dots, v_{N}\}. Regardless of the approach, the output of DMD is the eigenvalues and eigenvectors of A, which are referred to as the DMD eigenvalues and DMD modes respectively.
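As a concrete illustration of this setup, the following minimal NumPy sketch assembles the data matrices V_1^{N-1} and V_2^N from a snapshot sequence; the snapshot data here is a random placeholder and the variable names are illustrative rather than standard:

```python
import numpy as np

# Placeholder snapshot sequence: N = 50 snapshots, each a flattened field of M = 1000 values.
snapshots = [np.random.rand(1000) for _ in range(50)]

V = np.column_stack(snapshots)   # V_1^N, shape (M, N), one snapshot per column
V1 = V[:, :-1]                   # V_1^{N-1}: columns v_1, ..., v_{N-1}
V2 = V[:, 1:]                    # V_2^N:     columns v_2, ..., v_N
# DMD then seeks a linear operator A such that V2 ≈ A @ V1.
```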

Algorithm

There are two methods for obtaining these eigenvalues and modes. The first is Arnoldi-like, which is useful for theoretical analysis due to its connection with Krylov methods. The second is a singular value decomposition (SVD) based approach that is more robust to noise in the data and to numerical errors.

The Arnoldi Approach

In fluids applications, the size of a snapshot, M, is assumed to be much larger than the number of snapshots N, so there are many equally valid choices of A. The original DMD algorithm picks A so that each of the snapshots in V_2^N can be written as a linear combination of the snapshots in V_1^{N-1}. Because most of the snapshots appear in both data sets, this representation is error-free for all snapshots except v_N, which is written as

 v_N = a_1 v_1 + a_2 v_2 + \dots + a_{N-1}v_{N-1} + r = V_1^{N-1}a + r,

where a=\{a_1, a_2, \dots, a_{N-1}\} is a set of coefficients DMD must identify and r is the residual. In total,

 V_{2}^N = A V_1^{N-1} = V_1^{N-1} S + re_{N-1}^T,

where S is the companion matrix

S=\begin{pmatrix}
0 & 0 & \dots & 0 & a_1 \\
1 & 0 & \dots & 0 & a_2 \\
0 & 1 & \dots & 0 & a_3 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \dots & 1 & a_{N-1}
\end{pmatrix}.

The vector a can be computed by solving a least squares problem, which minimizes the overall residual. In particular, if we take the QR decomposition of V_1^{N-1} = QR, then a = R^{-1}Q^Tv_N.

In this form, DMD is a type of Arnoldi method, and therefore the eigenvalues of S are approximations of the eigenvalues of A. Furthermore, if y is an eigenvector of S, then V_1^{N-1}y is an approximate eigenvector of A. The eigendecomposition is performed on S rather than A because S is much smaller than A, so the computational cost of DMD is determined by the number of snapshots rather than the size of a snapshot.
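A minimal Python/NumPy sketch of this Arnoldi-like procedure is given below; the function name is an illustrative choice, and the direct triangular solve assumes that V_1^{N-1} has full column rank:

```python
import numpy as np

def dmd_companion(V):
    """Companion-matrix (Arnoldi-like) DMD sketch; V has one snapshot per column."""
    V1 = V[:, :-1]
    vN = V[:, -1]
    K = V1.shape[1]                      # number of columns, N - 1

    # Least-squares coefficients a with v_N ≈ V1 @ a, via the QR decomposition V1 = QR.
    Q, R = np.linalg.qr(V1)              # assumes V1 has full column rank
    a = np.linalg.solve(R, Q.conj().T @ vN)

    # Companion matrix S: ones on the sub-diagonal, the coefficients a in the last column.
    S = np.zeros((K, K), dtype=a.dtype)
    S[1:, :-1] = np.eye(K - 1)
    S[:, -1] = a

    # Eigenvalues of S approximate those of A; the modes are V1 @ y for eigenvectors y.
    eigvals, Y = np.linalg.eig(S)
    modes = V1 @ Y
    return eigvals, modes
```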

The SVD-Based Approach

Instead of computing the companion matrix  S , the SVD-based approach yields the matrix \tilde S that is related to A via a similarity transform. To do this, assume we have the SVD of V_1^{N-1} = U\Sigma W^T. Then

 V_{2}^N = A V_1^{N-1} + re_{N-1}^T = AU\Sigma W^T + re_{N-1}^T.

Analogous to the assumption made by the Arnoldi-based approach, we choose A such that the snapshots in V_2^N can be written as a linear superposition of the columns of U, which is equivalent to requiring that they can be written as a superposition of POD modes. With this restriction, minimizing the residual requires that it be orthogonal to the POD basis (i.e., U^Tr = 0). Multiplying both sides of the equation above by U^T then yields  U^TV_2^N = U^T A U\Sigma W^T , which can be manipulated to obtain

 U^T A U = U^TV_2^N W \Sigma^{-1} \equiv \tilde S.

Because A and \tilde S are related via a similarity transform, the eigenvalues of \tilde S are the eigenvalues of A, and if y is an eigenvector of \tilde S, then Uy is an eigenvector of A.

In summary, the SVD-based approach is as follows:

  1. Split the time series of data in V_1^N into the two matrices V_1^{N-1} and V_2^N.
  2. Compute the SVD of V_1^{N-1} = U\Sigma W^T.
  3. Form the matrix \tilde S =  U^TV_2^N W \Sigma^{-1}, and compute its eigenvalues \lambda_i and eigenvectors y_i.
  4. The i-th DMD eigenvalue is \lambda_i and the i-th DMD mode is Uy_i.

The advantage of the SVD-based approach over the Arnoldi-like approach is that noise in the data and numerical truncation issues can be compensated for by truncating the SVD of V_1^{N-1}. As noted in [1], accurately computing more than the first couple of modes and eigenvalues can be difficult on experimental data sets without this truncation step.
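A sketch of these steps in Python/NumPy, including the optional truncation, might look as follows; the function name, the rank parameter, and the convention of truncating to the leading singular values are illustrative assumptions:

```python
import numpy as np

def dmd_svd(V, rank=None):
    """SVD-based DMD sketch; V has one snapshot per column, rank optionally truncates the SVD."""
    V1 = V[:, :-1]                       # step 1: split the data
    V2 = V[:, 1:]

    U, s, Wh = np.linalg.svd(V1, full_matrices=False)   # step 2: SVD, V1 = U Sigma W^T
    if rank is not None:                 # optional truncation for noisy data
        U, s, Wh = U[:, :rank], s[:rank], Wh[:rank, :]

    # Step 3: S_tilde = U^T V2 W Sigma^{-1}, then its eigendecomposition.
    S_tilde = U.conj().T @ V2 @ Wh.conj().T @ np.diag(1.0 / s)
    eigvals, Y = np.linalg.eig(S_tilde)

    # Step 4: DMD eigenvalues are eigvals; the i-th DMD mode is U @ Y[:, i].
    modes = U @ Y
    return eigvals, modes
```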

Theoretical and Algorithmic Advancements

Since its inception in 2010, a considerable amount of work has focused on understanding and improving DMD. One of the first analyses of DMD by Rowley et al.[2] established the connection between DMD and the Koopman operator, and helped to explain the output of DMD when applied to nonlinear systems. Since then, a number of modifications have been developed that either strengthen this connection further or enhance the robustness and applicability of the approach.

In addition to these algorithms, similar application-specific techniques have been developed. For example, like DMD, Prony's method represents a signal as the superposition of damped sinusoids. In climate science, linear inverse modeling is also strongly connected with DMD.[11] For a more comprehensive list, see Tu et al.[5]

Examples

Trailing edge of a profile

Fig. 1: Trailing-edge vortices (entropy)

The wake of an obstacle in the flow may develop a Kármán vortex street. Fig. 1 shows the shedding of a vortex behind the trailing edge of a profile. The DMD analysis was applied to 90 sequential entropy fields (animated gif, 1.9 MB) and yields an approximate eigenvalue spectrum as depicted below. The analysis was applied to the numerical results without referring to the governing equations. The profile is shown in white. The white arcs are the processor boundaries, since the computation was performed on a parallel computer using different computational blocks.

Roughly a third of the spectrum was highly damped (large, negative \lambda_r) and is not shown. The dominant shedding mode is shown in the following pictures. The image on the left is the real part; the image on the right, the imaginary part of the eigenvector.

Again, the entropy eigenvector is shown in this picture. The acoustic content of the same mode is seen in the bottom half of the next plot. The top half corresponds to the entropy mode, as above.

Synthetic example of a traveling pattern

The DMD analysis assumes a pattern of the form 
q(x_1,x_2,x_3, \ldots)=e^{c x_1}\hat q(x_2,x_3,\ldots)
where  x_1 is any of the independent variables of the problem, but it has to be selected in advance. Take, for example, the pattern


q(x,y,t)=e^{-(y/b)^2}\,\Re \left\{  e^{i (k x - \omega t)}    \right\} + \text{random noise},

with time as the preselected exponential variable (x_1 = t).

A sample is given in the following figure with  \omega = 2\pi /0.1 ,  b=0.02 and  k = 2\pi/ b . The left picture shows the pattern without noise, the right one with noise added. The amplitude of the random noise is the same as that of the pattern.

A DMD analysis is performed with 21 synthetically generated fields using a time interval  \Delta t =1/90\text{ s}, limiting the analysis to  f =45\text{ Hz}.
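A rough Python/NumPy sketch of this synthetic experiment is shown below; the spatial grid, the noise model, the random seed, and the rank-10 truncation are assumptions made for illustration (the text specifies only \omega, b, k, \Delta t, and the number of fields), and the discrete-time eigenvalues are mapped to continuous-time exponents via \log(\lambda)/\Delta t:

```python
import numpy as np

# Parameters from the text; the grid and noise model are illustrative assumptions.
omega, b = 2 * np.pi / 0.1, 0.02
k = 2 * np.pi / b
dt = 1.0 / 90.0

x = np.linspace(0.0, 0.1, 100)
y = np.linspace(-0.05, 0.05, 50)
X, Y = np.meshgrid(x, y)
rng = np.random.default_rng(0)

# 21 synthetic fields: Gaussian envelope in y times a traveling wave in x, plus noise.
snapshots = []
for n in range(21):
    t = n * dt
    q = np.exp(-(Y / b) ** 2) * np.real(np.exp(1j * (k * X - omega * t)))
    q += rng.standard_normal(q.shape)    # noise of amplitude comparable to the pattern
    snapshots.append(q.ravel())
V = np.column_stack(snapshots)

# SVD-based DMD (as in the sketch above, inlined here with a rank-10 truncation).
V1, V2 = V[:, :-1], V[:, 1:]
U, s, Wh = np.linalg.svd(V1, full_matrices=False)
U, s, Wh = U[:, :10], s[:10], Wh[:10, :]
S_tilde = U.conj().T @ V2 @ Wh.conj().T @ np.diag(1.0 / s)
lam = np.linalg.eigvals(S_tilde)

# Map discrete-time eigenvalues to continuous-time exponents, log(lambda)/dt;
# the imposed pattern should appear near +/- i*omega ~ +/- 62.8i.
print(np.sort_complex(np.log(lam) / dt))
```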

The spectrum is symmetric and shows three almost undamped modes (small negative real part), whereas the other modes are heavily damped. Their numerical values are  \omega_1=-0.201 and \omega_{2/3}=-0.223 \pm i\,62.768, respectively. The real one corresponds to the mean of the field, whereas  \omega_{2/3} corresponds to the imposed pattern with  f = 10\text{ Hz} , yielding a relative error of −1/1000. Increasing the noise to 10 times the signal value gives about the same error. The real and imaginary parts of one of the latter two eigenmodes are depicted in the following figure.

See also

Several other decompositions of experimental data exist. If the governing equations are available, an eigenvalue decomposition might be feasible.

References

  1. P.J. Schmid, "Dynamic mode decomposition of numerical and experimental data." Journal of Fluid Mechanics 656 (2010): 5-28.
  2. C.W. Rowley, I Mezic, S. Bagheri, P. Schlatter, and D.S. Henningson, "Spectral analysis of nonlinear flows." Journal of Fluid Mechanics 641 (2009): 85-113
  3. K.K. Chen, J.H. Tu, and C.W. Rowley, "Variants of dynamic mode decomposition: boundary condition, Koopman, and Fourier analyses." Journal of Nonlinear Science 22 (2012): 887-915.
  4. A. Wynn, D. S. Pearson, B. Ganapathisubramani and P. J. Goulart, "Optimal mode decomposition for unsteady flows." Journal of Fluid Mechanics 733 (2013): 473-503
  5. J.H. Tu, C.W. Rowley, D.M. Luchtenburg, S.L. Brunton, and J.N. Kutz, "On Dynamic Mode Decomposition: Theory and Applications." Journal of Computational Dynamics 1 (2014). doi:10.3934/jcd.2014.1.391.
  6. M.R. Jovanovic, P.J. Schmid, and J.W. Nichols, "Sparsity-promoting dynamic mode decomposition." Physics of Fluids 26 (2014)
  7. J.N. Kutz, X. Fu, and S.L. Brunton, "Multi-resolution dynamic mode decomposition." arXiv preprint arXiv:1506.00564 (2015).
  8. M.O. Williams , I.G. Kevrekidis, C.W. Rowley, "A Data–Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition." Journal of Nonlinear Science 25 (2015): 1307-1346.
  9. J.L. Proctor, S.L. Brunton, and J.N. Kutz, "Dynamic mode decomposition with control." arXiv preprint arXiv:1409.6358 (2014).
  10. M.S. Hemati, C.W. Rowley, E.A. Deem, and L.N. Cattafesta, "De-Biasing the Dynamic Mode Decomposition for Applied Koopman Spectral Analysis of Noisy Datasets." arXiv preprint arXiv:1502.03854 (2015).
  11. C. Penland and T. Magorian, "Prediction of Niño 3 Sea Surface Temperatures Using Linear Inverse Modeling." Journal of Climate 6 (1993).