Basel problem

The Basel problem is a problem in mathematical analysis with relevance to number theory, first posed by Pietro Mengoli in 1644 and solved by Leonhard Euler in 1734[1] and read on 5 December 1735 in The Saint Petersburg Academy of Sciences (Russian: Петербургская Академия наук).[2] Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up years later by Bernhard Riemann in his seminal 1859 paper On the Number of Primes Less Than a Given Magnitude, in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family who unsuccessfully attacked the problem.

The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series:

\sum_{n=1}^\infty \frac{1}{n^2} = \lim_{n \to \infty}\left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{n^2}\right).

The sum of the series is approximately equal to 1.644934 (OEIS A013661). The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be π²/6 and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, and it was not until 1741 that he was able to produce a truly rigorous proof.

Euler's approach

Euler's original derivation of the value π²/6 essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series.

Of course, Euler's original reasoning requires justification; only about a century later did Weierstrass prove that Euler's representation of the sine function as an infinite product is valid (see the Weierstrass factorization theorem). Even without that justification, however, simply obtaining the correct value allowed Euler to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community.
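
For illustration (a modern aside, not part of Euler's argument), the kind of numerical check Euler performed can be reproduced with a short Python sketch that compares partial sums of the series with π²/6; the variable names are illustrative only:

    import math

    target = math.pi ** 2 / 6  # the conjectured value of the sum

    partial = 0.0
    for n in range(1, 100001):
        partial += 1.0 / n ** 2
        if n in (10, 100, 1000, 100000):
            # the partial sums creep up towards pi^2/6 ~ 1.644934
            print(n, partial, target - partial)

The remaining error after N terms is roughly 1/N, which reflects the slow convergence of the series.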

To follow Euler's argument, recall the Taylor series expansion of the sine function

 \sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots.

Dividing through by x, we have

 \frac{\sin(x)}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots.

Using the Weierstrass factorization theorem, it can also be shown that the left-hand side is the product of linear factors given by its roots, just as for finite polynomials. Euler assumed this without proof, and the assumption does not hold for arbitrary functions:

\begin{align}
\frac{\sin(x)}{x} &= \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right) \cdots \\
                    &= \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots.
\end{align}
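
As a sanity check on this product representation (again a modern aside, assuming Python; the helper name is illustrative), a truncated product can be compared against sin(x)/x at a few sample points, with the agreement improving as more factors are taken:

    import math

    def truncated_product(x, terms=10000):
        # product of (1 - x^2 / (k^2 * pi^2)) for k = 1 .. terms
        prod = 1.0
        for k in range(1, terms + 1):
            prod *= 1.0 - x ** 2 / (k ** 2 * math.pi ** 2)
        return prod

    for x in (0.5, 1.0, 2.0):
        print(x, truncated_product(x), math.sin(x) / x)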

If we formally multiply out this product and collect all the x² terms (we are allowed to do so because of Newton's identities), we see that the x² coefficient of sin(x)/x is


  -\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots \right) =
    -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.

But from the original infinite series expansion of sin(x)/x, the coefficient of x² is −1/(3!) = −1/6. These two coefficients must be equal; thus,

-\frac{1}{6} = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.

Multiplying both sides of this equation by −π² gives the sum of the reciprocals of the squares of the positive integers:

\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.

The Riemann zeta function

The Riemann zeta function ζ(s) is one of the most important functions in mathematics, because of its relationship to the distribution of the prime numbers. The function is defined for any complex number s with real part > 1 by the following formula:

\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}.

Taking s = 2, we see that ζ(2) is equal to the sum of the reciprocals of the squares of the positive integers:

\zeta(2) = \sum_{n=1}^\infty \frac{1}{n^2}
                = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6} \approx 1.644934.

Convergence can be proven with the following inequality:

\begin{align}
  \sum_{n=1}^N \frac{1}{n^2} & < 1 + \sum_{n=2}^N \frac{1}{n(n-1)} \\
                             & = 1 + \sum_{n=2}^N \left( \frac{1}{n-1} - \frac{1}{n} \right) \\
                             & = 1 + 1 - \frac{1}{N} \;{\stackrel{N \to \infty}{\longrightarrow}}\; 2.
\end{align}

This gives the upper bound 2 for ζ(2), and because the partial sums are increasing and bounded above, the series converges. It can be shown that ζ(s) has a closed-form expression in terms of the Bernoulli numbers whenever s is a positive even integer. With s = 2n:

\zeta(2n) = \frac{(2\pi)^{2n}(-1)^{n+1}B_{2n}}{2\cdot(2n)!}
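
For example, using the Bernoulli numbers B_2 = 1/6 and B_4 = −1/30, the formula recovers the Basel value and also gives the next even case:

\begin{align}
\zeta(2) &= \frac{(2\pi)^{2}(-1)^{2}B_{2}}{2\cdot 2!} = \frac{4\pi^2 \cdot \tfrac{1}{6}}{4} = \frac{\pi^2}{6}, \\
\zeta(4) &= \frac{(2\pi)^{4}(-1)^{3}B_{4}}{2\cdot 4!} = \frac{16\pi^4 \cdot (-1)\cdot\left(-\tfrac{1}{30}\right)}{48} = \frac{\pi^4}{90}.
\end{align}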

A rigorous proof using Fourier series

Use Parseval's identity (applied to the function f(x) = x on the interval (−π, π)) to obtain

\sum_{n=-\infty}^\infty |a_n|^2 = \frac{1}{2\pi}\int_{-\pi}^\pi x^2 \, dx,

where

\begin{align}
  a_n &= \frac{1}{2\pi}\int_{-\pi}^\pi x e^{-inx} \, dx \\
      &= \frac{n\pi \cos(n\pi)-\sin(n\pi)}{\pi n^2} i \\
      &= \frac{\cos(n\pi)}{n} i - \frac{\sin(n\pi)}{\pi n^2} i \\
      &= \frac{(-1)^n}{n} i
\end{align}

for n ≠ 0, and a_0 = 0. Thus,

|a_n|^2 = \begin{cases}
\frac{1}{n^2}, & \text{for } n \neq 0, \\ 
0, & \text{for } n = 0,
\end{cases}

and

\sum_{n=-\infty}^\infty |a_n|^2 = 2\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{2\pi} \int_{-\pi}^\pi x^2 \, dx.

Therefore,

\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{4\pi}\int_{-\pi}^\pi x^2 \, dx = \frac{1}{4\pi}\cdot\frac{2\pi^3}{3} = \frac{\pi^2}{6}

as required.
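
As an optional numerical cross-check (a Python sketch, not part of the proof; the function name is illustrative), the coefficients a_n can be approximated by a midpoint-rule quadrature and compared with the closed form i(−1)^n/n:

    import cmath, math

    def fourier_coefficient(n, samples=20000):
        # midpoint-rule approximation of (1/(2*pi)) * integral of x * exp(-i*n*x) over (-pi, pi)
        h = 2 * math.pi / samples
        total = 0j
        for k in range(samples):
            x = -math.pi + (k + 0.5) * h
            total += x * cmath.exp(-1j * n * x) * h
        return total / (2 * math.pi)

    for n in range(1, 6):
        print(n, fourier_coefficient(n), 1j * (-1) ** n / n)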

A rigorous elementary proof

This is by far the most elementary well-known proof; while most proofs use results from advanced mathematics, such as Fourier analysis, complex analysis, and multivariable calculus, the following does not even require single-variable calculus (although a single limit is taken at the end).

For a proof using the residue theorem, see the linked article.

History of this proof

The proof goes back to Augustin Louis Cauchy (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in Akiva and Isaak Yaglom's book Nonelementary Problems in an Elementary Exposition. Later, in 1982, it appeared in the journal Eureka, attributed to John Scholes, but Scholes claims he learned the proof from Peter Swinnerton-Dyer, and in any case he maintains the proof was "common knowledge at Cambridge in the late 1960s".

The proof

The main idea behind the proof is to bound the partial sums

\sum_{k=1}^m \frac{1}{k^2} = \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}

between two expressions, each of which will tend to π2/6 as m approaches infinity. The two expressions are derived from identities involving the cotangent and cosecant functions. These identities are in turn derived from de Moivre's formula, and we now turn to establishing these identities.

Let x be a real number with 0<x<\frac{\pi}{2}, and let n be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have

\begin{align}
  \frac{\cos (nx) + i \sin (nx)}{(\sin x)^n} &= \frac{(\cos x + i\sin x)^n}{(\sin x)^n} \\
                                             &= \left(\frac{\cos x + i \sin x}{\sin x}\right)^n \\
                                             &= (\cot x + i)^n.
\end{align}

From the binomial theorem, we have

\begin{align}
& (\cot x + i)^n \\[6pt]
= {} & {n \choose 0} \cot^n x + {n \choose 1} (\cot^{n - 1} x)i + \cdots + {n \choose {n - 1}} (\cot x)i^{n - 1} + {n \choose n} i^n \\[6pt]
= {} & \left[ {n \choose 0} \cot^n x - {n \choose 2} \cot^{n - 2} x \pm \cdots \right] \; + \; i\left[ {n \choose 1} \cot^{n-1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \right].
\end{align}

Combining the two equations and equating imaginary parts gives the identity

\frac{\sin (nx)}{(\sin x)^n} = \left[ {n \choose 1} \cot^{n - 1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \right].
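
As a concrete check, taking n = 3 and using the triple-angle formula \sin(3x) = 3\sin x - 4\sin^3 x, the identity reads

\frac{\sin(3x)}{(\sin x)^3} = \frac{3\sin x - 4\sin^3 x}{\sin^3 x} = 3\csc^2 x - 4 = 3\cot^2 x - 1 = {3 \choose 1}\cot^{2} x - {3 \choose 3},

in agreement with the general formula.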

We take this identity, fix a positive integer m, set n = 2m + 1, and consider x_r = \frac{r\pi}{2m + 1} for r = 1, 2, \ldots, m. Then nx_r is a multiple of \pi and therefore a zero of the sine function, and so

0 = {{2m + 1} \choose 1} \cot^{2m} x_r - {{2m + 1} \choose 3} \cot^{2m - 2} x_r \pm \cdots + (-1)^m{{2m + 1} \choose {2m + 1}}

for every r = 1, 2, \ldots, m. The values x_1, \ldots, x_m are distinct numbers in the interval (0, π/2). Since the function \cot^2 x is one-to-one on this interval, the numbers t_r = \cot^2 x_r are distinct for r = 1, 2, \ldots, m. By the above equation, these m numbers are the roots of the mth degree polynomial

p(t) := {{2m + 1} \choose 1}t^m - {{2m + 1} \choose 3}t^{m - 1} \pm \cdots + (-1)^m{{2m+1} \choose {2m + 1}}.

By Viète's formulas we can calculate the sum of the roots directly by examining the first two coefficients of the polynomial, and this comparison shows that

\cot ^2 x_1 + \cot ^2 x_2 + \cdots + \cot ^2 x_m = \frac{\binom{2m + 1}3} {\binom{2m + 1}1} = \frac{2m(2m - 1)}6.
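
For instance, taking m = 2 (so n = 5) gives a direct check of this formula:

p(t) = 5t^2 - 10t + 1, \qquad \cot^2 \frac{\pi}{5} + \cot^2 \frac{2\pi}{5} = \frac{10}{5} = 2 = \frac{2\cdot 2\,(2 \cdot 2 - 1)}{6}.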

Substituting the identity \csc^2 x = \cot^2 x + 1, we have

\csc ^2 x_1 + \csc ^2 x_2 + \cdots + \csc ^2 x_m = \frac{2m(2m - 1)}6 + m = \frac{2m(2m + 2)}6.

Now consider the inequality \cot^2 x < \frac{1}{x^2} < \csc^2 x, which holds for 0 < x < \frac{\pi}{2} because \sin x < x < \tan x on that interval. If we add up all these inequalities for each of the numbers x_r = \frac{r\pi}{2m + 1}, and if we use the two identities above, we get

\frac{2m(2m - 1)}6 < \left(\frac{2m + 1}{\pi} \right)^2 + \left(\frac{2m + 1}{2\pi} \right)^2 + \cdots + \left(\frac{2m + 1}{m \pi} \right)^2 < \frac{2m(2m + 2)}6.

Multiplying through by (π/(2m + 1))2, this becomes

\frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m - 1}{2m + 1}\right) < \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2} < \frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m + 2}{2m + 1}\right).

As m approaches infinity, the left and right hand expressions each approach \frac{\pi^2}{6}, so by the squeeze theorem,

\zeta(2) = \sum_{k=1}^\infty \frac{1}{k^2} =
  \lim_{m \to \infty}\left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}\right) = \frac{\pi ^2}{6}

and this completes the proof.
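
The squeeze can also be observed numerically (a minimal Python sketch using only the standard library; the helper name is illustrative), by printing the lower bound, the partial sum, and the upper bound for increasing m:

    import math

    def bounds_and_partial_sum(m):
        partial = sum(1.0 / k ** 2 for k in range(1, m + 1))
        scale = math.pi ** 2 / 6
        lower = scale * (2 * m / (2 * m + 1)) * ((2 * m - 1) / (2 * m + 1))
        upper = scale * (2 * m / (2 * m + 1)) * ((2 * m + 2) / (2 * m + 1))
        return lower, partial, upper

    for m in (10, 100, 1000):
        # all three values approach pi^2/6 as m grows
        print(m, bounds_and_partial_sum(m))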

References

Notes

  1. Ayoub, Raymond (1974). "Euler and the zeta function". American Mathematical Monthly 81: 1067–1086. doi:10.2307/2319041.
  2. Euler, Leonhard. E41 – De summis serierum reciprocarum.
