Constant of integration
In calculus, the indefinite integral of a given function (i.e., the set of all antiderivatives of the function) on a connected domain is only defined up to an additive constant, the constant of integration.[1][2] This constant expresses an ambiguity inherent in the construction of antiderivatives. If a function f(x) is defined on an interval and F(x) is an antiderivative of f(x), then the set of all antiderivatives of f(x) is given by the functions F(x) + C, where C is an arbitrary constant. The constant of integration is sometimes omitted in lists of integrals for simplicity.
Origin of the constant
The derivative of any constant function is zero. Once one has found one antiderivative F(x) for a function f(x), adding or subtracting any constant C will give another antiderivative, because (F(x) + C)′ = F′(x) + 0 = F′(x) = f(x). The constant is a way of expressing that every function with at least one antiderivative has infinitely many of them.
For example, suppose one wants to find antiderivatives of cos(x). One such antiderivative is sin(x). Another one is sin(x) + 1. A third is sin(x) − π. Each of these has derivative cos(x), so they are all antiderivatives of cos(x).
It turns out that adding and subtracting constants is the only flexibility we have in finding different antiderivatives of the same function. That is, all antiderivatives are the same up to a constant. To express this fact for cos(x), we write:

∫ cos(x) dx = sin(x) + C
Replacing C by a number will produce an antiderivative. By writing C instead of a number, however, a compact description of all the possible antiderivatives of cos(x) is obtained. C is called the constant of integration. It is easily determined that all of these functions are indeed antiderivatives of cos(x):

d/dx (sin(x) + C) = d/dx sin(x) + d/dx C = cos(x) + 0 = cos(x)
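For readers who want a computational check, the following short sketch uses the SymPy computer algebra library (an illustrative addition; no software is part of the original discussion) to confirm that sin(x) + C differentiates to cos(x) for a symbolic constant C:

    import sympy

    x, C = sympy.symbols('x C')

    # Differentiating the general antiderivative sin(x) + C:
    # the constant term C vanishes under d/dx.
    print(sympy.diff(sympy.sin(x) + C, x))  # cos(x)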
Necessity of the constant
At first glance it may seem that the constant is unnecessary, since it can be set to zero. Furthermore, when evaluating definite integrals using the fundamental theorem of calculus, the constant will always cancel with itself.
However, trying to set the constant equal to zero does not always make sense. For example, 2 sin(x) cos(x) can be integrated in at least three different ways:

∫ 2 sin(x) cos(x) dx = sin²(x) + C = −cos²(x) + 1 + C = −½ cos(2x) + ½ + C
∫ 2 sin(x) cos(x) dx = −cos²(x) + C = sin²(x) − 1 + C = −½ cos(2x) − ½ + C
∫ 2 sin(x) cos(x) dx = −½ cos(2x) + C = sin²(x) − ½ + C = −cos²(x) + ½ + C
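That these three answers are consistent can be checked directly; in the same illustrative SymPy setup, the pairwise differences of the three antiderivatives (each taken with C = 0) simplify to constants rather than to zero:

    import sympy

    x = sympy.symbols('x')

    # Three antiderivatives of 2*sin(x)*cos(x), each with C = 0.
    F1 = sympy.sin(x)**2
    F2 = -sympy.cos(x)**2
    F3 = -sympy.cos(2*x) / 2

    # Each pairwise difference is a nonzero constant.
    print(sympy.simplify(F1 - F2))  # 1
    print(sympy.simplify(F1 - F3))  # 1/2
    print(sympy.simplify(F2 - F3))  # -1/2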
So setting C to zero can still leave a constant. This means that, for a given function, there is no "simplest antiderivative".
Another problem with setting C equal to zero is that sometimes we want to find an antiderivative that has a given value at a given point (as in an initial value problem). For example, to obtain the antiderivative of cos(x) that has the value 100 at x = π, only one value of C will work (in this case C = 100).
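Concretely, the condition is sin(π) + C = 100, and since sin(π) = 0 this forces C = 100. A one-line check in the same illustrative SymPy setup:

    import sympy

    C = sympy.symbols('C')

    # Solve sin(pi) + C = 100 for the constant of integration.
    print(sympy.solve(sympy.Eq(sympy.sin(sympy.pi) + C, 100), C))  # [100]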
This restriction can be rephrased in the language of differential equations. Finding an indefinite integral of a function f(x) is the same as solving the differential equation dy/dx = f(x). Any such differential equation has many solutions, and each value of the constant represents the unique solution of a well-posed initial value problem. Imposing the condition that our antiderivative takes the value 100 at x = π is an initial condition. Each initial condition corresponds to one and only one value of C, so without C it would be impossible to solve the problem.
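As an illustration, SymPy's dsolve can solve this initial value problem directly (the ics keyword, used here to encode y(π) = 100, is available in recent SymPy versions):

    import sympy

    x = sympy.symbols('x')
    y = sympy.Function('y')

    # Solve dy/dx = cos(x) subject to y(pi) = 100.
    ode = sympy.Eq(y(x).diff(x), sympy.cos(x))
    print(sympy.dsolve(ode, y(x), ics={y(sympy.pi): 100}))
    # Eq(y(x), sin(x) + 100): the initial condition fixes C = 100.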
There is another justification, coming from abstract algebra. The space of all (suitable) real-valued functions on the real numbers is a vector space, and the differential operator d/dx is a linear operator. The operator d/dx maps a function to zero if and only if that function is constant. Consequently, the kernel of d/dx is the space of all constant functions. The process of indefinite integration amounts to finding a preimage of a given function. There is no canonical preimage for a given function, but the set of all such preimages forms a coset. Choosing a constant is the same as choosing an element of the coset. In this context, solving an initial value problem is interpreted as lying in the hyperplane given by the initial conditions.
Reason for a constant difference between antiderivatives
This result can be formally stated in this manner: Let F : ℝ → ℝ and G : ℝ → ℝ be two everywhere differentiable functions. Suppose that F′(x) = G′(x) for every real number x. Then there exists a real number C such that F(x) − G(x) = C for every real number x.
To prove this, notice that (F(x) − G(x))′ = F′(x) − G′(x) = 0. So F can be replaced by F − G and G by the constant function 0, making the goal to prove that an everywhere differentiable function whose derivative is always zero must be constant:
Choose a real number a, and let C = F(a). For any x, the fundamental theorem of calculus, together with the assumption that the derivative of F vanishes everywhere, says that

0 = ∫ₐˣ F′(t) dt = F(x) − F(a) = F(x) − C,

which implies that F(x) = C for every x. So F is a constant function.
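The fundamental-theorem step used above can be sanity-checked symbolically; in this illustrative SymPy sketch, sin merely plays the role of an everywhere differentiable F:

    import sympy

    a, x, t = sympy.symbols('a x t')

    # The identity used in the proof: the integral of F'(t) from a to x
    # equals F(x) - F(a). Here F = sin is just an illustrative choice.
    F = sympy.sin
    integral = sympy.integrate(sympy.diff(F(t), t), (t, a, x))
    print(sympy.simplify(integral - (F(x) - F(a))))  # 0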
Two facts are crucial in this proof. First, the real line is connected. If the real line were not connected, we would not always be able to integrate from our fixed a to any given x. For example, if we were to ask for functions defined on the union of intervals [0,1] and [2,3], and if a were 0, then it would not be possible to integrate from 0 to 3, because the function is not defined between 1 and 2. Here there will be two constants, one for each connected component of the domain. In general, by replacing constants with locally constant functions, we can extend this theorem to disconnected domains. For example, there are two constants of integration for ∫ dx/x (one for each of the components x < 0 and x > 0) and infinitely many for ∫ tan(x) dx, so for example the general form for the integral of 1/x is:[3]

∫ dx/x = ln(−x) + C₁ for x < 0
∫ dx/x = ln(x) + C₂ for x > 0

where C₁ and C₂ are independent constants.
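Such a piecewise antiderivative with two independent constants can be verified in the same illustrative SymPy setup: its derivative is 1/x on each component of the domain, regardless of the values of C₁ and C₂:

    import sympy

    x, C1, C2 = sympy.symbols('x C1 C2')

    # An antiderivative of 1/x with an independent constant on each component.
    F = sympy.Piecewise(
        (sympy.log(-x) + C1, x < 0),
        (sympy.log(x) + C2, x > 0),
    )

    # Both branches differentiate to 1/x; the constants drop out.
    print(sympy.diff(F, x))  # Piecewise((1/x, x < 0), (1/x, x > 0))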
Second, F and G were assumed to be everywhere differentiable. If F and G are not differentiable at even one point, the theorem fails. As an example, let F(x) be the Heaviside step function, which is zero for negative values of x and one for non-negative values of x, and let G(x) = 0. Then the derivative of F is zero where it is defined, and the derivative of G is always zero. Yet it is clear that F and G do not differ by a constant. Even if it is assumed that F and G are everywhere continuous and almost everywhere differentiable, the theorem still fails. As an example, take F to be the Cantor function and again let G = 0.
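The Heaviside counterexample can be made concrete in a few lines of plain Python (illustrative only; H is defined by hand rather than imported):

    # The Heaviside step function: 0 for x < 0, 1 for x >= 0.
    def H(x):
        return 0.0 if x < 0 else 1.0

    # With F = H and G = 0, both derivatives are zero wherever they exist,
    # yet F - G jumps from 0 to 1 across x = 0, so it is not constant.
    for x in (-1.0, 1.0):
        print(x, H(x) - 0.0)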
References
- ↑ Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 0-495-01166-5.
- ↑ Larson, Ron; Edwards, Bruce H. (2009). Calculus (9th ed.). Brooks/Cole. ISBN 0-547-16702-4.
- ↑ "Reader Survey: log|x| + C", Tom Leinster, The n-category Café, March 19, 2012