Stationary phase approximation

In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to oscillatory integrals

I(k) = \int g(x) e^{i k f(x)} \, dx

taken over n-dimensional space ℝⁿ, where i is the imaginary unit. Here f and g are real-valued smooth functions. The role of g is to ensure convergence; that is, g is a test function. The large real parameter k is considered in the limit as k \to \infty .

This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin.[1]

Basics

The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times.
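As a quick numerical illustration of this cancellation (an added sketch, not part of the original article; it assumes NumPy and the arbitrary choices g(x) = e^{−x²}, f(x) = x and f(x) = x²), compare an integral whose phase has no stationary point with one whose phase is stationary at x = 0:

import numpy as np

# Added sketch: compare an oscillatory integral whose phase has no stationary
# point (f(x) = x) with one whose phase is stationary at 0 (f(x) = x^2).
x = np.linspace(-4.0, 4.0, 400_001)
g = np.exp(-x**2)  # smooth amplitude playing the role of g

for k in (10.0, 100.0, 1000.0):
    no_stat = np.trapz(g * np.exp(1j * k * x), x)     # f'(x) = 1, never zero
    stat = np.trapz(g * np.exp(1j * k * x**2), x)     # f'(0) = 0, stationary
    print(f"k={k:6.0f}  |no stationary point| = {abs(no_stat):.2e}  "
          f"|stationary point| = {abs(stat):.2e}")

The first column collapses essentially to quadrature noise almost immediately, while the second shrinks only like k^{−1/2}: the neighbourhood of the stationary point dominates the integral.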

An example

Consider a function

f(x,t) = \frac{1}{2\pi} \int_{\mathbb R} F(\omega) e^{i [k(\omega) x - \omega t]} \, d\omega.

The phase term in this function, ϕ = k(ω)x − ωt, is stationary when

\frac{d}{d\omega}\mathopen{}\left(k(\omega) x - \omega t\right)\mathclose{} = 0

or equivalently,

\frac{d k}{d\omega} = \frac{t}{x}.
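For instance (a standard textbook illustration, not part of the derivation above), for deep-water gravity waves the dispersion relation is ω² = a k, where a denotes the gravitational acceleration (written a here to avoid a clash with the amplitude g above). Then k(ω) = ω²/a, and the stationarity condition reads

\frac{d k}{d\omega} = \frac{2 \omega}{a} = \frac{t}{x},

so the dominant frequency observed at position x and time t is ω0 = a t / (2 x).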

Solutions to this equation yield dominant frequencies ω0 for some x and t. If we expand ϕ as a Taylor series about ω0 and neglect terms of order higher than (ω − ω0)²,

\phi = \left[k(\omega_0) x - \omega_0 t\right] + \frac{1}{2} x k''(\omega_0) (\omega - \omega_0)^2 + \cdots

where k″ denotes the second derivative of k. When x is relatively large, even a small difference (ω − ω0) will generate rapid oscillations within the integral, leading to cancellation. Therefore the limits of integration can be extended beyond the region of validity of the Taylor expansion. If we double the real contribution from the positive frequencies of the transform to account for the negative frequencies,

f(x, t) \approx \frac{1}{2\pi} \cdot 2 \operatorname{Re} \left\{ e^{i \left[k(\omega_0) x - \omega_0 t\right]} \left|F(\omega_0)\right| \int_{\mathbb R} e^{\frac{1}{2} i x k''(\omega_0) (\omega - \omega_0)^2} \, d\omega \right\}.

This integrates to

f(x, t) \approx \frac{\left|F(\omega_0)\right|}{\pi} \sqrt{\frac{2\pi}{x \left|k''(\omega_0)\right|}} \cos\left[k(\omega_0) x - \omega_0 t \pm \frac{\pi}{4}\right].
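The integral evaluated in this last step is a Fresnel integral,

\int_{\mathbb R} e^{\frac{1}{2} i b u^2} \, du = \sqrt{\frac{2\pi}{|b|}} e^{\pm i \pi / 4},

with b = x k″(ω0) and the sign of ±π/4 matching the sign of b; this is the source of both the amplitude factor and the phase shift above. A quick numerical check of this step (an added sketch, not from the article; it assumes NumPy, the arbitrary value b = 5, and a wide Gaussian envelope standing in for the neglected tails):

import numpy as np

# Added check of the Fresnel integral used above, with b standing in for
# x * k''(omega_0).
b = 5.0
u = np.linspace(-120.0, 120.0, 1_200_001)
envelope = np.exp(-(u / 30.0) ** 2)   # wide cutoff; perturbs the value only slightly
numeric = np.trapz(envelope * np.exp(0.5j * b * u**2), u)
exact = np.sqrt(2 * np.pi / abs(b)) * np.exp(1j * np.pi / 4 * np.sign(b))
print(numeric)   # ≈ (0.793+0.792j), up to the small envelope correction
print(exact)     # ≈ (0.7927+0.7927j)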

Reduction steps

The first major general statement of the principle involved is that the asymptotic behaviour of I(k) depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity; see for example the Riemann–Lebesgue lemma.
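Numerically (an added sketch, not part of the article; it assumes NumPy, the phase f(x) = x, and the classical smooth bump e^{−1/(1−x²)} for g), such a localised integral decays faster than any power of k, so even k⁴ |I(k)| tends to 0:

import numpy as np

# Added sketch: with no critical point of f on the support of g, the integral
# I(k) decays faster than any power of k; watch k^4 * |I(k)| shrink.
x = np.linspace(-0.999, 0.999, 800_001)
g = np.exp(-1.0 / (1.0 - x**2))   # classical smooth bump supported in (-1, 1)

for k in (20.0, 40.0, 80.0, 160.0):
    I = np.trapz(g * np.exp(1j * k * x), x)
    print(f"k={k:5.0f}  |I(k)| = {abs(I):.3e}  k^4 * |I(k)| = {k**4 * abs(I):.3e}")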

The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by

(x_1^2 + x_2^2 + \cdots + x_j^2) - (x_{j + 1}^2 + x_{j + 2}^2 + \cdots + x_n^2).

The value of j is given by the signature of the Hessian matrix of f at P. As for g, the essential case is that g is a product of bump functions of xᵢ. Assuming now without loss of generality that P is the origin, take a smooth bump function h with value 1 on the interval [−1, 1] and quickly tending to 0 outside it. Take

g(x) = \prod_i h(x_i),

then Fubini's theorem reduces I(k) to a product of integrals over the real line like

J(k) = \int h(x) e^{i k f(x)} \, dx

with f(x) = ±x². The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate.
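To see the splitting concretely (an added sketch, not from the article; it assumes NumPy, the saddle f(x, y) = x² − y², and a smooth stand-in h with h(0) = 1 in place of a true bump function), Fubini's theorem factors the two-dimensional integral into J₊(k) J₋(k), with J₋(k) the complex conjugate of J₊(k):

import numpy as np

# Added sketch: for f(x, y) = x^2 - y^2 and g(x, y) = h(x) h(y), the integral
# factors into two one-dimensional pieces that are complex conjugates.
def h(x):
    # Smooth stand-in for the bump function of the text: h(0) = 1 and h decays
    # quickly; only the value at the critical point matters at leading order.
    return 1.0 / (1.0 + x**8)

k = 50.0
x = np.linspace(-8.0, 8.0, 200_001)
J_plus = np.trapz(h(x) * np.exp(1j * k * x**2), x)     # f(x) = +x^2 factor
J_minus = np.trapz(h(x) * np.exp(-1j * k * x**2), x)   # f(x) = -x^2 factor

print(np.allclose(J_minus, J_plus.conjugate()))   # True: conjugate symmetry
print(abs(J_plus) ** 2, np.pi / k)                # product J_plus * J_minus ≈ π/k

Each factor contributes √(π/k) in modulus at leading order, so the product is approximately π/k.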

In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques. See for example Airy function.

One-dimensional case

The essential statement is this one:

\int_{-1}^1 e^{i k x^2} \, dx = \sqrt{\frac{\pi}{k}} e^{i \pi / 4} + \mathcal O \mathopen{}\left(\frac{1}{k}\right)\mathclose{}.
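This estimate is easy to test numerically (an added check, not from the source; it assumes NumPy): multiplying the error by k stays bounded as k grows, consistent with the O(1/k) remainder.

import numpy as np

# Added check: the error in the displayed estimate is O(1/k), so k * |error|
# should remain of order 1 as k grows.
x = np.linspace(-1.0, 1.0, 1_000_001)
for k in (100.0, 400.0, 1600.0):
    lhs = np.trapz(np.exp(1j * k * x**2), x)
    main = np.sqrt(np.pi / k) * np.exp(1j * np.pi / 4)
    print(f"k={k:6.0f}  k * |error| = {k * abs(lhs - main):.3f}")   # ≈ 1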

In fact by contour integration it can be shown that the main term on the right-hand side of the equation is the value of the integral on the left-hand side, extended over the range (−∞, ∞). Therefore it is a question of estimating away the integral over, say, [1, ∞).[2]

This is the model for all one-dimensional integrals I(k) with f having a single non-degenerate critical point at which f has second derivative > 0. In fact the model case has second derivative 2 at 0. In order to scale using k, observe that replacing k by c k where c is constant is the same as scaling x by √c. It follows that for general values of f″(0) > 0, the factor √(π / k) becomes

\sqrt{\frac{2 \pi}{k f''(0)}}.
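For example (an added numerical check, not from the source; it assumes NumPy and the arbitrary choice f(x) = 2x², so that f″(0) = 4):

import numpy as np

# Added check: for f(x) = 2x^2, f''(0) = 4, the leading factor should be
# sqrt(2*pi / (k * f''(0))) = sqrt(pi / (2k)).
k = 500.0
x = np.linspace(-1.0, 1.0, 1_000_001)
numeric = np.trapz(np.exp(1j * k * 2.0 * x**2), x)
predicted = np.sqrt(2 * np.pi / (k * 4.0)) * np.exp(1j * np.pi / 4)
print(numeric, predicted)   # agree up to an O(1/k) error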

For f″(0) < 0 one uses the complex conjugate formula, as mentioned before.

References

Notes

  1. Courant, Richard; Hilbert, David (1953), Methods of mathematical physics 1 (2nd revised ed.), New York: Interscience Publishers, p. 474, OCLC 505700
  2. See for example Jean Dieudonné, Infinitesimal Calculus, p. 119.
