Green's function

This article is about the classical approach to Green's functions. For a modern discussion, see fundamental solution.

In mathematics, a Green's function is the impulse response of an inhomogeneous differential equation defined on a domain, with specified initial conditions or boundary conditions.

Through the superposition principle for linear operator problems, the convolution of a Green's function with an arbitrary function f(x) on that domain is the solution to the inhomogeneous differential equation with source term f(x). In other words, given a linear ODE, L(solution) = source, one can first solve L(green) = δ_s for each s; since the source is a superposition of delta functions, the solution is, by linearity of L, the corresponding superposition of Green's functions.

Green's functions are named after the British mathematician George Green, who first developed the concept in the 1830s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead.

In many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green's functions take the role of propagators.

Definition and uses

A Green's function, G(x,s), of a linear differential operator L = L(x) acting on distributions over a subset of the Euclidean space ℝⁿ, at a point s, is any solution of

LG(x,s)=\delta(s-x) \qquad\qquad (1)

where δ is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form

Lu(x)=f(x) \qquad\qquad (2)

If the kernel of L is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Also, Green's functions in general are distributions, not necessarily proper functions.

Green's functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, the Green's function of the Hamiltonian is a key concept with important links to the concept of density of states.

As a side note, the Green's function as used in physics is usually defined with the opposite sign; that is,

LG(x,s)=-\delta(x-s).

This definition does not significantly change any of the properties of the Green's function.

If the operator is translation invariant, that is, when L has constant coefficients with respect to x, then the Green's function can be taken to be a convolution operator, that is,

G(x,s)=G(x-s).

In this case, the Green's function is the same as the impulse response of linear time-invariant system theory.
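
As an illustrative numerical sketch of this correspondence (the operator L = d/dt + γ, the source f(t) = sin t, and the use of NumPy/SciPy here are choices made for the example, not part of the discussion above), the causal impulse response G(t) = θ(t) e^{−γt} can be convolved with the source to reproduce the solution of u′ + γu = f:

    # Minimal sketch: for the constant-coefficient operator L = d/dt + gamma, the causal
    # Green's function is G(t) = theta(t) exp(-gamma t) (compare the first row of the table
    # below), and the solution of u' + gamma u = f with u(0) = 0 is the convolution G * f.
    # The values of gamma, dt and the source f are illustrative choices.
    import numpy as np
    from scipy.integrate import solve_ivp

    gamma, dt = 1.5, 1e-3
    t = np.arange(0.0, 10.0, dt)
    f = np.sin(t)                              # an arbitrary source term
    G = np.exp(-gamma * t)                     # impulse response theta(t) e^{-gamma t}

    # u(t) = int_0^t G(t - s) f(s) ds, approximated by a discrete convolution
    u_conv = np.convolve(G, f)[:len(t)] * dt

    # reference: direct numerical integration of u' = -gamma u + f,  u(0) = 0
    sol = solve_ivp(lambda s, y: -gamma * y + np.sin(s), (0.0, 10.0), [0.0], t_eval=t)
    print(np.max(np.abs(u_conv - sol.y[0])))   # small (discretization error only)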

Motivation

See also: Spectral theory

Loosely speaking, if such a function G can be found for the operator L, then, if we multiply the equation (1) for the Green's function by f (s), and then integrate with respect to s, we obtain,

\int L G(x,s)f(s) \, ds = \int \delta(x-s)f(s) \, ds = f(x).

By equation (2), the right-hand side equals L u(x), thus

Lu(x)=\int LG(x,s) f(s) \, ds.

Because the operator L = L(x) is linear and acts on the variable x alone (not on the variable of integration s), one may take the operator L outside of the integration on the right-hand side, yielding

Lu(x)=L\left(\int G(x,s) f(s) \,ds\right),

which suggests

u(x)=\int G(x,s) f(s) \,ds \qquad\qquad (3)

Thus, one may obtain the function u(x) through knowledge of the Green's function in equation (1) and the source term on the right-hand side in equation (2). This process relies upon the linearity of the operator L.

In other words, the solution of equation (2), u(x), can be determined by the integration given in equation (3). Although f (x) is known, this integration cannot be performed unless G is also known. The problem now lies in finding the Green's function G that satisfies equation (1). For this reason, the Green's function is also sometimes called the fundamental solution associated to the operator L.

Not every operator L admits a Green's function. A Green's function can also be thought of as a right inverse of L. Aside from the difficulties of finding a Green's function for a particular operator, the integral in equation (3) may be quite difficult to evaluate. However the method gives a theoretically exact result.

This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over δ(x−s)); and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.
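
The construction above can be mimicked in finite dimensions, which gives a quick way to experiment with equations (1)–(3) numerically. In the sketch below (an illustration only; the choice L = −d²/dx² on (0, 1) with homogeneous Dirichlet conditions and the source f(x) = π² sin(πx) are assumptions made for the example), the discretized operator becomes a matrix, its inverse plays the role of G(x,s), and the matrix–vector product realizes the integral in equation (3):

    # Finite-dimensional sketch of equations (1)-(3); all parameter choices are illustrative.
    import numpy as np

    n = 200
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                                  # interior grid points

    L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # discrete -d^2/dx^2
    G = np.linalg.inv(L)                                            # discrete analogue of G(x,s)

    f = np.pi**2 * np.sin(np.pi * x)                                # source term
    u = G @ f                                                       # discrete counterpart of eq. (3)

    print(np.max(np.abs(u - np.sin(np.pi * x))))                    # ~O(h^2); exact solution is sin(pi x)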

Green's functions for solving inhomogeneous boundary value problems

The primary use of Green's functions in mathematics is to solve non-homogeneous boundary value problems. In modern theoretical physics, Green's functions are also used as propagators in Feynman diagrams; the term Green's function is often further applied to any correlation function.

Framework

Let L be a Sturm–Liouville operator, a linear differential operator of the form

L=\dfrac{d}{dx}\left[p(x) \dfrac{d}{dx}\right]+q(x)

and let D be the boundary conditions operator

Du= \begin{cases}
 \alpha_1 u'(0)+\beta_1 u(0) \\
 \alpha_2 u'(l)+\beta_2 u(l) ~.
\end{cases}

Let f(x) be a continuous function on [0, ℓ]. Further suppose that the problem

\begin{align}
 Lu &= f \\
 Du &= 0
\end{align}

is regular, i.e., only the trivial solution exists for the homogeneous problem.

Theorem

There is one and only one solution u(x) that satisfies

 \begin{align}
 Lu & = f\\
 Du & = 0,
\end{align}

and it is given by

u(x)=\int_0^\ell f(s) G(x,s) \, ds~,

where G(x,s) is a Green's function satisfying the following conditions:

  1. G(x,s) is continuous in x and s.
  2. For x \ne s, L G(x, s)=0.
  3. For s \ne 0, D G(x, s)=0.
  4. Derivative "jump": G'(s_{+0}, s)-G'(s_{-0}, s)=1 / p(s).
  5. Symmetry: G(x,s) = G(s, x).
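
As an illustrative check of these conditions (the choices p(x) = 1, q(x) = 0, ℓ = 1 and the boundary conditions u(0) = u(ℓ) = 0 are assumptions made for this sketch), the operator reduces to Lu = u″, whose Green's function is G(x,s) = x(s − 1) for x < s and s(x − 1) for x > s; a few lines of code confirm symmetry, continuity, the derivative jump, and the solution formula of the theorem:

    # Illustrative check of conditions 1-5 and of the theorem's solution formula for Lu = u''
    # with u(0) = u(1) = 0 (all specific choices here are made only for the example).
    import numpy as np

    def G(x, s):
        return np.where(x < s, x * (s - 1.0), s * (x - 1.0))

    s, eps = 0.3, 1e-6
    print(G(0.7, 0.2), G(0.2, 0.7))                       # equal: symmetry (condition 5)
    print(G(s - eps, s), G(s + eps, s))                   # equal: continuity (condition 1)
    jump = (G(s + eps, s) - G(s, s)) / eps - (G(s, s) - G(s - eps, s)) / eps
    print(jump)                                           # = 1 = 1/p(s): derivative jump (condition 4)

    x = np.linspace(0.0, 1.0, 2001)
    u = np.trapz(G(x[:, None], x[None, :]), x, axis=1)    # u(x) = int_0^1 G(x,s) f(s) ds with f = 1
    print(np.max(np.abs(u - x * (x - 1) / 2)))            # matches the exact solution x(x-1)/2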

Advanced and retarded Green's functions

Sometimes the Green's function can be split into a sum of two functions, one that is nonzero only for positive values of its argument (+) and one that is nonzero only for negative values (−). These are the advanced and retarded Green's functions, and when the equation under study depends on time, one of the parts is causal and the other anti-causal. In such problems it is usually the causal part that is of interest.

Finding Green's functions

Eigenvalue expansions

If a differential operator L admits a set of eigenvectors Ψn(x) (i.e., a set of functions Ψn and scalars λn such that LΨn = λn Ψn) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues.

"Complete" means that the set of functions { Ψn } satisfies the following completeness relation,

\delta(x-x')=\sum_{n=0}^\infty \Psi_n^\dagger(x) \Psi_n(x').

Then the following holds,

G(x, x')=\sum_{n=0}^\infty \dfrac{\Psi_n^\dagger(x) \Psi_n(x')}{\lambda_n},

where \dagger represents complex conjugation.

Applying the operator L to each side of this equation results in the completeness relation, which was assumed.

The general study of the Green's function written in the above form, and its relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory.
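
As an illustrative sketch of the expansion (the operator L = −d²/dx² on [0, π] with Dirichlet conditions, eigenfunctions Ψn(x) = √(2/π) sin(nx) and eigenvalues λn = n² are assumptions made for the example), partial sums of the series converge to the closed-form Green's function x(π − x′)/π for x ≤ x′:

    # Illustrative check: eigenfunction expansion of the Green's function of -d^2/dx^2
    # on [0, pi] with Dirichlet boundary conditions (all choices here are for illustration).
    import numpy as np

    x, xp = 1.0, 2.0                               # evaluation points with x < x'
    n = np.arange(1, 20001)
    G_series = np.sum((2 / np.pi) * np.sin(n * x) * np.sin(n * xp) / n**2)
    G_exact = x * (np.pi - xp) / np.pi             # closed form for x <= x'
    print(G_series, G_exact)                       # agree to several digits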

There are several other methods for finding Green's functions, including the method of images, separation of variables, and Laplace transforms (Cole 2011).

Table of Green's functions

The following table gives an overview of Green's functions of frequently appearing differential operators, where r = √(x²+y²+z²), ρ = √(x²+y²), θ(t) is the Heaviside step function, Jν(z) is a Bessel function, and Iν(z) is a modified Bessel function of the first kind.[1] Where time (t) appears in the first column, the retarded (causal) Green's function is listed.

Differential operator L | Green's function G | Example of application
\partial_t + \gamma | \theta(t)\mathrm e^{-\gamma t} |
\left(\partial_t + \gamma \right)^2 | \theta(t)t\mathrm e^{-\gamma t} |
\partial_t^2 + 2\gamma\partial_t + \omega_0^2 | \theta(t)\mathrm e^{-\gamma t}~\frac{\sin(\omega t)}{\omega}   with   \omega=\sqrt{\omega_0^2-\gamma^2} | 1D damped harmonic oscillator
2D Laplace operator   \Delta_\text{2D} = \partial_x^2 + \partial_y^2 | \frac{1}{2 \pi}\ln \rho | 2D Poisson equation
3D Laplace operator   \Delta = \partial_x^2 + \partial_y^2 + \partial_z^2 | \frac{-1}{4 \pi r} | Poisson equation
Helmholtz operator   \Delta + k^2 | \frac{-\mathrm e^{-ikr}}{4 \pi r} | stationary 3D Schrödinger equation for a free particle
\partial_t^2 - c^2\partial_x^2 | \frac{1}{2c}\theta(t - |x/c|) | 1D wave equation
\partial_t^2 - c^2(\partial_x^2 + \partial_y^2) | \frac{1}{2\pi c\sqrt{c^2t^2 - \rho^2}}\theta(t - \rho/c) | 2D wave equation
D'Alembert operator   \square = \frac{1}{c^2}\partial_t^2-\Delta | \frac{\delta(t-\frac{r}{c})}{4 \pi r} | 3D wave equation
\partial_t - k\partial_x^2 | \theta(t)\left(\frac{1}{4\pi kt}\right)^{1/2}\mathrm e^{-x^2/4kt} | 1D diffusion
\partial_t - k(\partial_x^2 + \partial_y^2) | \theta(t)\left(\frac{1}{4\pi kt}\right)\mathrm e^{-(x^2+y^2)/4kt} | 2D diffusion
\partial_t - k\Delta | \theta(t)\left(\frac{1}{4\pi kt}\right)^{3/2}\mathrm e^{-r^2/4kt} | 3D diffusion
\frac{1}{c^2}\partial_t^2 - \partial_x^2+\mu^2 | \frac{1}{2}\left[\left(1-\sin{\mu ct}\right)(\delta(ct-x)+\delta(ct+x))+\mu\theta(ct - |x|)J_0\left(\mu u\right)\right], \, u=\sqrt{c^2t^2-x^2} | 1D Klein–Gordon equation
\frac{1}{c^2}\partial_t^2 - (\partial_x^2+\partial_y^2)+\mu^2 | \frac{1}{4\pi}\left[(1+\cos{(\mu ct)})\frac{\delta(ct-\rho)}{\rho}+\mu^2\theta(ct - \rho)\operatorname{sinc}{(\mu u)}\right], \, u=\sqrt{c^2t^2-\rho^2} | 2D Klein–Gordon equation
\square+\mu^2 | \frac{1}{4\pi}\left[(1-\mu ct+\sin{(\mu ct)})\frac{\delta(ct-r)}{r^2}+\mu^2\theta(ct - r)\frac{J_1\left(\mu u\right)}{u}\right], \, u=\sqrt{c^2t^2-r^2} | 3D Klein–Gordon equation
\partial_t^2 + 2\gamma\partial_t - c^2\partial_x^2 | \frac{1}{2}e^{-\gamma t} \left[\delta(ct-x)+\delta(ct+x)+\theta(ct - |x|)\left(\frac{\gamma}{c}I_0\left(\frac{\gamma u}{c}\right)+\frac{\gamma t}{u}I_1\left(\frac{\gamma u}{c}\right)\right)\right], \, u=\sqrt{c^2t^2-x^2} | telegrapher's equation
\partial_t^2 + 2\gamma\partial_t - c^2(\partial_x^2+\partial_y^2) | \frac{e^{-\gamma t}}{4\pi} \left[(1+e^{-\gamma t}+3\gamma t)\frac{\delta(ct-\rho)}{\rho}+\theta(ct - \rho)\left(\frac{\gamma\sinh\left(\frac{\gamma u}{c}\right)}{cu}+\frac{3\gamma t\cosh\left(\frac{\gamma u}{c}\right)}{u^2}-\frac{3ct\sinh\left(\frac{\gamma u}{c}\right)}{u^3}\right)\right], \, u=\sqrt{c^2t^2-\rho^2} | 2D relativistic heat conduction
\partial_t^2 + 2\gamma\partial_t - c^2\Delta | \frac{e^{-\gamma t}}{20\pi} \left[\left(8-3e^{-\gamma t}+2\gamma t+4\gamma^2t^2\right)\frac{\delta(ct-r)}{r^2}+\frac{\gamma^2}{c}\theta(ct - r)\left(\frac{1}{cu}I_1\left(\frac{\gamma u}{c}\right)+\frac{4 t}{u^2}I_2\left(\frac{\gamma u}{c}\right)\right)\right], \, u=\sqrt{c^2t^2-r^2} | 3D relativistic heat conduction
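
Individual rows of the table are easy to verify numerically. As an illustrative sketch (the value k = 0.7 and the test point are arbitrary choices made here), the 1D diffusion kernel from the table satisfies the homogeneous equation away from the source and carries unit mass:

    # Sanity check of the 1D diffusion row: G = theta(t) (4 pi k t)^{-1/2} exp(-x^2/(4 k t))
    # should satisfy dG/dt - k d^2G/dx^2 = 0 for (x,t) != (0,0) and integrate to 1 over x.
    import numpy as np

    k = 0.7
    G = lambda x, t: np.exp(-x**2 / (4 * k * t)) / np.sqrt(4 * np.pi * k * t)

    x, t, h = 0.5, 1.3, 1e-3
    dG_dt  = (G(x, t + h) - G(x, t - h)) / (2 * h)
    d2G_dx = (G(x + h, t) - 2 * G(x, t) + G(x - h, t)) / h**2
    print(dG_dt - k * d2G_dx)                      # ~0 (finite-difference error only)

    xs = np.linspace(-30.0, 30.0, 20001)
    print(np.trapz(G(xs, t), xs))                  # ~1: the kernel carries unit mass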

Green's functions for the Laplacian

Green's functions for linear differential operators involving the Laplacian may be readily put to use by means of the second of Green's identities.

To derive Green's theorem, begin with the divergence theorem (otherwise known as Gauss's theorem),

\int_V \nabla \cdot \vec A\ dV=\int_S \vec A \cdot d\hat\sigma ~.

Let \vec A=\phi\nabla\psi-\psi\nabla\phi and substitute into Gauss's theorem.

Compute \nabla\cdot\vec A and apply the product rule for the ∇ operator,

\begin{align}
 \nabla\cdot\vec A &=\nabla\cdot(\phi\nabla\psi \;-\; \psi\nabla\phi)\\
 &=(\nabla\phi)\cdot(\nabla\psi) \;+\; \phi\nabla^2\psi \;-\; (\nabla\phi)\cdot(\nabla\psi) \;-\; \psi\nabla^2\phi\\
 &=\phi\nabla^2\psi \;-\; \psi\nabla^2\phi ~.
\end{align}

Plugging this into the divergence theorem produces Green's theorem,

\int_V (\phi\nabla^2\psi-\psi\nabla^2\phi) dV=\int_S (\phi\nabla\psi-\psi\nabla\phi)\cdot d\hat\sigma.
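
The algebraic identity used in this step, ∇·(φ∇ψ − ψ∇φ) = φ∇²ψ − ψ∇²φ, can also be confirmed symbolically; the short SymPy sketch below (an illustration, not part of the derivation) prints 0:

    # Symbolic check of div(phi grad psi - psi grad phi) = phi lap psi - psi lap phi
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    phi = sp.Function('phi')(x, y, z)
    psi = sp.Function('psi')(x, y, z)

    grad = lambda f: [sp.diff(f, v) for v in (x, y, z)]
    div  = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
    lap  = lambda f: div(grad(f))

    A = [phi * dpsi - psi * dphi for dpsi, dphi in zip(grad(psi), grad(phi))]
    print(sp.simplify(div(A) - (phi * lap(psi) - psi * lap(phi))))   # 0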

Suppose that the linear differential operator L is the Laplacian, ∇², and that there is a Green's function G for the Laplacian. The defining property of the Green's function still holds,

L G(x,x')=\nabla^2 G(x,x')=\delta(x-x').

Let \psi=G in Green's second identity, see Green's identities. Then,

\int_V \left[ \phi(x') \delta(x-x')-G(x,x') {\nabla'}^2\phi(x')\right]\ d^3x' = \int_S \left[\phi(x')\nabla' G(x,x')-G(x,x')\nabla'\phi(x')\right] \cdot d\hat\sigma'.

Using this expression, it is possible to solve Laplace's equation ∇²φ(x) = 0 or Poisson's equation ∇²φ(x) =−ρ(x), subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for φ(x) everywhere inside a volume where either (1) the value of φ(x) is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of φ(x) is specified on the bounding surface (Neumann boundary conditions).

Suppose the problem is to solve for φ(x) inside the region. Then the integral

\int\limits_V {\phi(x')\delta(x-x')\ d^3x'}

reduces to simply φ(x) due to the defining property of the Dirac delta function and we have

\phi(x)=-\int_V G(x,x') \rho(x')\ d^3x'+\int_S \left[\phi(x')\nabla' G(x,x')-G(x,x')\nabla'\phi(x')\right] \cdot d\hat\sigma'.

This form expresses the well-known property of harmonic functions, that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere.

In electrostatics, φ(x) is interpreted as the electric potential, ρ(x) as electric charge density, and the normal derivative \nabla\phi(x')\cdot d\hat\sigma' as the normal component of the electric field.

If the problem is to solve a Dirichlet boundary value problem, the Green's function should be chosen such that G(x,x') vanishes when either x or x' is on the bounding surface; then only one of the two terms in the surface integral remains. If the problem is to solve a Neumann boundary value problem, it might seem logical to choose the Green's function so that its normal derivative vanishes on the bounding surface (see Jackson, Classical Electrodynamics, p. 39). However, application of Gauss's theorem to the differential equation defining the Green's function yields

\int_S \nabla' G(x,x') \cdot d\hat\sigma' = \int_V \nabla'^2 G(x,x') d^3x' = \int_V \delta (x-x') d^3x' = 1   ~,

meaning the normal derivative of G(x,x') cannot vanish everywhere on the surface, because it must integrate to 1 over that surface (again, see Jackson, Classical Electrodynamics, p. 39, for this and the following argument).

The simplest admissible form for the normal derivative is a constant, namely 1/S, where S is the area of the bounding surface. The surface term in the solution then becomes

\int_S \phi(x')\nabla' G(x,x')\cdot d\hat\sigma' = \langle\phi\rangle_S

where \langle\phi\rangle_S is the average value of the potential on the surface. This number is not known in general, but is often unimportant, as the goal is often to obtain the electric field given by the gradient of the potential, rather than the potential itself.

With no boundary conditions, the Green's function for the Laplacian (Green's function for the three-variable Laplace equation) is

G(x,x')=-\dfrac{1}{4 \pi |x-x'|}.

Supposing that the bounding surface goes out to infinity and plugging in this expression for the Green's function finally yields the standard expression for electric potential in terms of electric charge density as

\phi(x)=\int_V \dfrac{\rho(x')}{4 \pi \varepsilon |x-x'|} \, d^3x' ~.

Further information: Poisson's equation
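
This final formula can be checked numerically. In the sketch below (an illustration only; the uniform ball of charge, the grid resolution and the convention ε = 1 are assumptions made here), the potential at an exterior point computed from the Green's-function integral agrees with the point-charge value Q/(4π|x|):

    # Illustrative check of phi(x) = int rho(x')/(4 pi |x - x'|) d^3x' (with epsilon = 1)
    # for a uniform ball of charge, evaluated at a point outside the ball.
    import numpy as np

    R, rho0, N = 1.0, 2.0, 101                        # ball radius, charge density, grid size
    g = np.linspace(-R, R, N)
    X, Y, Z = np.meshgrid(g, g, g, indexing='ij')
    inside = X**2 + Y**2 + Z**2 <= R**2
    dV = (g[1] - g[0])**3
    Q = rho0 * inside.sum() * dV                      # total charge, ~ (4/3) pi R^3 rho0

    xo = np.array([3.0, 0.0, 0.0])                    # observation point outside the ball
    r = np.sqrt((xo[0] - X)**2 + (xo[1] - Y)**2 + (xo[2] - Z)**2)
    phi = np.sum(rho0 * inside / (4 * np.pi * r)) * dV

    print(phi, Q / (4 * np.pi * np.linalg.norm(xo)))  # the two values agree closely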

Example

Find the Green's function for the following problem:
\begin{align}
  Lu & = u'' + k^2 u = f(x)\\
  u(0)& = 0, \quad u\left(\tfrac{\pi}{2k}\right) = 0.
 \end{align}

First step: The Green's function for the linear operator at hand is defined as the solution to

g''(x,s) + k^2 g(x,s) = \delta(x-s).

If x\ne s, then the delta function gives zero, and the general solution is

g(x,s)=c_1 \cos kx+c_2 \sin kx.

For x<s, the boundary condition at x=0 applies (the condition at x=\tfrac{\pi}{2k} does not, as long as s \ne \tfrac{\pi}{2k}), and it implies

g(0,s)=c_1 \cdot 1+c_2 \cdot 0=0, \quad c_1 = 0 ~.

For x>s, write the general solution as g(x,s)=c_3 \cos kx+c_4 \sin kx; the boundary condition at x=\tfrac{\pi}{2k} then implies

g\left(\tfrac{\pi}{2k},s\right) = c_3 \cdot 0+c_4 \cdot 1=0, \quad c_4 = 0 ~,

while the condition g(0,s)=0 is not imposed in this region, for the analogous reason.

To summarize the results thus far:

g(x,s)= \begin{cases}
  c_2 \sin kx, & \text{for }x<s\\
  c_3 \cos kx, & \text{for }s<x
 \end{cases}

Second step: The next task is to determine c_{2} and c_{3}.

Ensuring continuity in the Green's function at x=s implies

c_2 \sin ks=c_3 \cos ks

The correct jump discontinuity in the first derivative is ensured by integrating the defining differential equation from x=s-\epsilon to x=s+\epsilon and taking the limit as \epsilon goes to zero:

c_3 \cdot \left(-k \sin ks\right)-c_2 \cdot \left( k \cos ks\right )=1

The two (dis)continuity equations can be solved for c_{2} and c_{3} to obtain

c_2 = -\frac{\cos ks}{k}  \quad;\quad c_3 = -\frac{\sin ks}{k}

So the Green's function for this problem is:

g(x,s)=\begin{cases}
  -\frac{\cos ks}{k} \sin kx, & x<s\\
  -\frac{\sin ks}{k} \cos kx, & s<x
 \end{cases}
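
The result can be checked with equation (3). In the sketch below (an illustration; the source f(x) = x and the value k = 2 are choices made here, not part of the example), the integral u(x) = ∫ g(x,s) f(s) ds reproduces the exact solution x/k² − π sin(kx)/(2k³) of u″ + k²u = x with the given boundary conditions:

    # Illustrative check of the Green's function just derived, using equation (3) with f(x) = x.
    import numpy as np

    k = 2.0
    l = np.pi / (2 * k)

    def g(x, s):
        return np.where(x < s, -np.cos(k * s) * np.sin(k * x) / k,
                               -np.sin(k * s) * np.cos(k * x) / k)

    s = np.linspace(0.0, l, 4001)
    u = lambda x: np.trapz(g(x, s) * s, s)            # u(x) = int_0^l g(x,s) f(s) ds, f(s) = s
    u_exact = lambda x: x / k**2 - np.pi * np.sin(k * x) / (2 * k**3)

    for x in (0.0, 0.3, l):
        print(u(x), u_exact(x))                       # agree; the boundary values are ~0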

Further examples

As a further example, the method of images gives the Green's function of the two-dimensional Laplacian on the quarter plane x ≥ 0, y ≥ 0 with Dirichlet boundary conditions on both boundary half-lines: negative image sources are placed at (−x0, y0) and (x0, −y0), and a positive one at (−x0, −y0), so that
\begin{align}
G(x, y, x_0, y_0) =\dfrac{1}{2\pi} &\left[\ln\sqrt{(x-x_0)^2+(y-y_0)^2}-\ln\sqrt{(x+x_0)^2+(y-y_0)^2} \right. \\
&\left. - \ln\sqrt{(x-x_0)^2+(y+y_0)^2}+ \ln\sqrt{(x+x_0)^2+(y+y_0)^2}\right]
\end{align}    ~.
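
A short numerical check (an illustrative sketch; the source position (x0, y0) = (1, 2) and the test points are arbitrary choices made here) confirms that this G vanishes on both boundary half-lines and is harmonic away from the source:

    # Illustrative check of the quarter-plane image construction above.
    import numpy as np

    x0, y0 = 1.0, 2.0
    def G(x, y):
        r = lambda a, b: np.sqrt((x - a)**2 + (y - b)**2)
        return (np.log(r(x0, y0)) - np.log(r(-x0, y0))
                - np.log(r(x0, -y0)) + np.log(r(-x0, -y0))) / (2 * np.pi)

    print(G(0.0, 3.7), G(2.9, 0.0))                   # ~0 on the half-lines x = 0 and y = 0
    x, y, h = 2.0, 0.5, 1e-3
    lap = (G(x + h, y) + G(x - h, y) + G(x, y + h) + G(x, y - h) - 4 * G(x, y)) / h**2
    print(lap)                                        # ~0 away from the source point (x0, y0)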

References

  • S. S. Bayin (2006), Mathematical Methods in Science and Engineering, Wiley, Chapters 18 and 19.
  • Eyges, Leonard, The Classical Electromagnetic Field, Dover Publications, New York, 1972. ISBN 0-486-63947-9. (Chapter 5 contains a very readable account of using Green's functions to solve boundary value problems in electrostatics.)
  • A. D. Polyanin and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2
  • A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002. ISBN 1-58488-299-9
  • Mathews, Jon; Walker, Robert L. (1970), Mathematical methods of physics (2nd ed.), New York: W. A. Benjamin, ISBN 0-8053-7002-1
  • G. B. Folland, Fourier Analysis and Its Applications, Wadsworth and Brooks/Cole Mathematics Series.
  • K. D. Cole, J. V. Beck, A. Haji-Sheikh, and B. Litkouhi, Methods for obtaining Green's functions, Heat Conduction Using Green's Functions, Taylor and Francis, 2011, pp. 101–148. ISBN 978-1-4398-1354-6
  • Green, G., An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism (Nottingham, England: T. Wheelhouse, 1828), pp. 10–12.
  1. some examples taken from Schulz, Hermann: Physik mit Bleistift. Frankfurt am Main: Deutsch, 2001. ISBN 3-8171-1661-6 (German)
