Infinite-dimensional vector function

An infinite-dimensional vector function is a function whose values lie in an infinite-dimensional topological vector space, such as a Hilbert space or a Banach space.

Such functions are applied in many branches of mathematics and the sciences, most notably in physics.

Example

Set f_k(t)=t/k^2 for every positive integer k and every real number t. Then the values of the function

f(t)=(f_1(t),f_2(t),f_3(t),\ldots) \,

lie in the infinite-dimensional vector space X (or \mathbf R^{\mathbf N}) of real-valued sequences. For example,

f(2) = \left(2,\frac{2}{4},\frac{2}{9},\frac{2}{16},\frac{2}{25},\ldots\right).
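
As a small illustration (not part of the original example), the following Python sketch computes a finite truncation of f(2); only finitely many components can be stored, so the truncation length n_components is an arbitrary choice made here for display.

from fractions import Fraction

def f(t, n_components=5):
    # First n_components entries of f(t) = (t/1^2, t/2^2, t/3^2, ...).
    return [Fraction(t, k**2) for k in range(1, n_components + 1)]

print(f(2))  # [Fraction(2, 1), Fraction(1, 2), Fraction(2, 9), Fraction(1, 8), Fraction(2, 25)]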

As a number of different topologies can be defined on the space X, we cannot talk about the derivative of f without first defining the topology of X or the concept of a limit in X.

Moreover, for any set A, there exist infinite-dimensional vector spaces having the (Hamel) dimension of the cardinality of A (e.g., the space of functions A\rightarrow K with only finitely many nonzero values, where K is the desired field of scalars). Furthermore, the argument t could lie in any set instead of the set of real numbers.
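
The finitely-supported functions A\rightarrow K mentioned above can be modelled concretely. The Python sketch below is an added illustration (names and the choice of scalars are arbitrary): it represents such functions as dictionaries mapping elements of A to nonzero scalars, together with the vector-space operations.

def add(f, g):
    # Pointwise sum of two finitely supported functions (dicts from A to scalars).
    h = dict(f)
    for a, value in g.items():
        h[a] = h.get(a, 0) + value
    return {a: v for a, v in h.items() if v != 0}  # keep only the finite support

def scale(c, f):
    # Scalar multiple c*f of a finitely supported function.
    return {a: c * v for a, v in f.items()} if c != 0 else {}

# A can be any set; here its elements are strings.
u = {"apple": 1.0, "pear": -2.0}
v = {"pear": 2.0, "plum": 3.0}
print(add(u, v))      # {'apple': 1.0, 'plum': 3.0}
print(scale(0.5, u))  # {'apple': 0.5, 'pear': -1.0}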

Integral and derivative

If, e.g., f:[0,1]\rightarrow X, where X is a Banach space or another topological vector space, the derivative of f can be defined in the standard way: f'(t):=\lim_{h\rightarrow0}\frac{f(t+h)-f(t)}{h}.
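
For the running example f_k(t)=t/k^2 viewed in \ell^2, the difference quotient converges in norm to the sequence (1/k^2)_k. The Python sketch below is an added numerical check under an arbitrary truncation length N; it measures the \ell^2 distance between the difference quotient and the expected derivative.

import math

N = 10_000  # truncation length; the tail contributes negligibly here

def f(t):
    return [t / k**2 for k in range(1, N + 1)]

f_prime = [1 / k**2 for k in range(1, N + 1)]  # expected derivative of f_k(t) = t/k^2

def l2_distance(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

t = 1.0
for h in (1e-1, 1e-3, 1e-6):
    quotient = [(a - b) / h for a, b in zip(f(t + h), f(t))]
    # The distance stays near zero; since f is linear in t, the quotient is exact up to rounding.
    print(h, l2_distance(quotient, f_prime))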

The measurability of f can be defined in a number of ways, the most important of which are Bochner measurability and weak measurability.

The most important integrals of f are the Bochner integral (when X is a Banach space) and the Pettis integral (when X is a topological vector space). Both of these integrals commute with continuous linear functionals. L^p spaces have also been defined for such functions.
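
Because these integrals commute with continuous linear functionals, the integral of the running example can be computed coordinate by coordinate: \int_0^1 f_k(t)\,dt = 1/(2k^2). The Python sketch below is an added illustration (the truncation length and Riemann-sum resolution are arbitrary choices) that approximates a truncation of the Bochner integral of f over [0,1] in this componentwise way.

N = 5         # number of components kept from the infinite sequence
steps = 1000  # Riemann-sum resolution

def f(t):
    return [t / k**2 for k in range(1, N + 1)]

# Midpoint Riemann sum of the vector-valued integral, computed componentwise.
integral = [0.0] * N
for i in range(steps):
    t = (i + 0.5) / steps
    for k, value in enumerate(f(t)):
        integral[k] += value / steps

print(integral)                                   # approx. [0.5, 0.125, 0.0556, 0.03125, 0.02]
print([1 / (2 * k**2) for k in range(1, N + 1)])  # exact componentwise values 1/(2 k^2)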

Most theorems on integration and differentiation of scalar functions can be generalized to vector-valued functions, often using essentially the same proofs. Perhaps the most important exception is that absolutely continuous functions need not equal the integrals of their (a.e.) derivatives unless, for example, X is a Hilbert space; see the Radon–Nikodym theorem.

Derivative

Functions with values in a Hilbert space

If f is a function of real numbers with values in a Hilbert space X, then the derivative of f at a point t can be defined as in the finite-dimensional case:

f'(t)=\lim_{h\rightarrow0}\frac{f(t+h)-f(t)}{h}.

Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t\in R^n or even t\in Y, where Y is an infinite-dimensional vector space).

N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if

f=(f_1,f_2,f_3,\ldots)

(i.e., f=f_1 e_1+f_2 e_2+f_3 e_3+\cdots, where e_1,e_2,e_3,\ldots is an orthonormal basis of the space X), and f'(t) exists, then

f'(t)=(f_1'(t),f_2'(t),f_3'(t),\ldots).

However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the norm topology of the Hilbert space.
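
A standard type of counterexample, added here as an illustration rather than taken from the article, is f_k(t)=\sin(k^2 t)/k with values in \ell^2: every component is differentiable, yet the componentwise derivative (k\cos(k^2 t))_k is not square-summable at t=0, so f'(0) cannot exist in \ell^2. The Python sketch below (with an arbitrary truncation length N) shows the \ell^2 norm of the difference quotient at 0 growing as h shrinks, even though each component quotient converges.

import math

N = 10_000  # truncation of the sequence; large enough to exhibit the blow-up below

def f(t):
    # f_k(t) = sin(k^2 t) / k: each component is differentiable, and f(t) lies in l^2.
    return [math.sin(k**2 * t) / k for k in range(1, N + 1)]

def l2_norm(x):
    return math.sqrt(sum(a * a for a in x))

f0 = f(0.0)  # the zero sequence
for h in (1e-2, 1e-4, 1e-6):
    quotient = [(a - b) / h for a, b in zip(f(h), f0)]
    # Component k of the quotient tends to k as h -> 0, but the l^2 norm diverges.
    print(h, l2_norm(quotient))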

Other infinite-dimensional vector spaces

Most of the above holds for other topological vector spaces X too. However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach spaces there is no notion of an orthonormal basis.
