Geometric algebra

Not to be confused with Algebraic geometry.

A geometric algebra (GA) is a Clifford algebra of a vector space over the field of real numbers endowed with a quadratic form. The term is also sometimes used as a collective term for the approach to classical, computational and relativistic geometry that applies these algebras. The Clifford multiplication that defines the GA as a unital ring is called the geometric product. Taking the geometric product among vectors can yield bivectors, trivectors, or general n-vectors. The addition operation combines these into general multivectors, which are the elements of the ring. This includes, among other possibilities, a well-defined formal sum of a scalar and a vector.

Geometric algebra is distinguished from Clifford algebra in general by its restriction to real numbers and its emphasis on its geometric interpretation and physical applications. Specific examples of geometric algebras applied in physics include the algebra of physical space, the spacetime algebra, and the conformal geometric algebra. Geometric calculus, an extension of GA that incorporates differentiation and integration, can be used to formulate other theories such as complex analysis and differential geometry, e.g. by using the Clifford algebra instead of differential forms. Geometric algebra has been advocated, most notably by David Hestenes[1] and Chris Doran,[2] as the preferred mathematical framework for physics. Proponents claim that it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory and relativity.[3] GA has also found use as a computational tool in computer graphics[4] and robotics.

The geometric product was first briefly mentioned by Hermann Grassmann, who was chiefly interested in developing the closely related exterior algebra, which is the geometric algebra of the trivial quadratic form. In 1878, William Kingdon Clifford greatly expanded on Grassmann's work to form what are now usually called Clifford algebras in his honor (although Clifford himself chose to call them "geometric algebras"). For several decades, geometric algebras went somewhat ignored, greatly eclipsed by the vector calculus then newly developed to describe electromagnetism. The term "geometric algebra" was repopularized by Hestenes in the 1960s, who recognized its importance to relativistic physics.[5]

Definition and notation

Given a finite-dimensional real quadratic space V = Rn with a quadratic form (e.g. the Euclidean or Lorentzian metric) g : V → R, the geometric algebra for this quadratic space is the Clifford algebra Cℓ(V,g).

The algebra product is called the geometric product. It is standard to denote the geometric product by juxtaposition (i.e., suppressing any explicit multiplication symbol). The above definition of the geometric algebra is abstract, so we summarize the geometric product by the following set of axioms that it satisfies:

A(BC)=(AB)C, where A, B and C are any elements of the algebra (associativity)
A(B+C)=AB+AC and (B+C)A=BA+CA, where A, B and C are any elements of the algebra (distributivity)
a^2 = g(a,a) \in \mathbb R, where a is a vector.

Note that in the final property above, the square need not be nonnegative if g is not positive definite. An important property of the geometric product is the existence of elements with multiplicative inverse, also known as units. If a2 ≠ 0 for some vector a, then a−1 exists and is equal to a/a2. Not every nonzero element of the algebra is necessarily a unit. For example, if u is a vector in V such that u2 = 1, the elements 1 ± u are zero divisors and thus have no inverse: (1 − u)(1 + u) = 1 − uu = 1 − 1 = 0. There may also exist nontrivial idempotent elements such as (1 + u)/2.
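For example, the idempotency of (1 + u)/2 follows directly from u2 = 1:

\left(\tfrac{1}{2}(1+u)\right)^2 = \tfrac{1}{4}(1 + 2u + u^2) = \tfrac{1}{2}(1+u).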

Inner and outer product of vectors

Given two vectors a and b, if the geometric product ab is[6] anticommutative, they are perpendicular (top) because ab = −ba, implying a · b = 0; if it is commutative, they are parallel (bottom) because ab = ba, implying a ∧ b = 0.
Orientation defined by an ordered set of vectors.
Reversed orientation corresponds to negating the exterior product.
Geometric interpretation of grade n elements in a real exterior algebra for n = 0 (signed point), 1 (directed line segment, or vector), 2 (oriented plane element), 3 (oriented volume). The exterior product of n vectors can be visualized as any n-dimensional shape (e.g. n-parallelotope, n-ellipsoid); with magnitude (hypervolume), and orientation defined by that on its n − 1-dimensional boundary and on which side the interior is.[7][8]

We may write the geometric product of any two vectors a and b as the sum of a symmetric product and an antisymmetric product:

ab=\frac{1}{2}(ab+ba)+\frac{1}{2}(ab-ba)

Thus we can define the inner product of vectors as the symmetric product

a \cdot b := \frac{1}{2}(ab + ba) = \frac{1}{2}((a+b)^2 - a^2 - b^2) = g(a,b),

which is a real number because, by the final axiom above, squares of vectors are real scalars. Conversely, g is completely determined by the algebra. The antisymmetric part is the outer product of the two vectors (the exterior product of the contained exterior algebra):

a \wedge b := \frac{1}{2}(ab - ba) = -(b \wedge a)

The inner and outer products are associated with familiar concepts from standard vector algebra. Pictorially, a and b are parallel if their geometric product is equal to their inner product, whereas a and b are perpendicular if their geometric product is equal to their outer product. In a geometric algebra for which the square of any nonzero vector is positive, the inner product of two vectors can be identified with the dot product of standard vector algebra. The outer product of two vectors can be identified with the signed area enclosed by a parallelogram the sides of which are the vectors. The cross product of two vectors in 3 dimensions with positive-definite quadratic form is closely related to their outer product.
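For example, for orthonormal vectors e_1 and e_2 in \mathcal{G}(2,0), the geometric product of the perpendicular vectors reduces to the outer product, e_1e_2 = e_1 \wedge e_2 (since e_1 \cdot e_2 = 0), while for the parallel vectors e_1 and 2e_1 it reduces to the inner product, e_1(2e_1) = 2 = e_1 \cdot (2e_1) (since e_1 \wedge 2e_1 = 0).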

Most instances of geometric algebras of interest have a nondegenerate quadratic form. If the quadratic form is fully degenerate, the inner product of any two vectors is always zero, and the geometric algebra is then simply an exterior algebra. Unless otherwise stated, this article will treat only nondegenerate geometric algebras.

The outer product is naturally extended as a completely antisymmetric, multilinear operator between any number of vectors:

a_1\wedge a_2\wedge\dots\wedge a_r = \frac{1}{r!}\sum_{\sigma\in\mathfrak{S}_r} \operatorname{sgn}(\sigma) a_{\sigma(1)}a_{\sigma(2)} \dots a_{\sigma(r)},

where the sum is over all permutations of the indices, with \operatorname{sgn}(\sigma) the sign of the permutation.

Blades, grades, and canonical basis

A multivector that is the outer product of r independent vectors (r \le n) is called a blade, and the blade is said to be a multivector of grade r. From the axioms, with closure, every multivector of the geometric algebra is a sum of blades.

Consider a set of r independent vectors \{a_1,...,a_r\} spanning an r-dimensional subspace of the vector space. With these, we can define a real symmetric matrix

[\mathbf{A}]_{ij}=a_i\cdot a_j

By the spectral theorem, A can be diagonalized to diagonal matrix D by an orthogonal matrix O via

\sum_{k,l}[\mathbf{O}]_{ik}[\mathbf{A}]_{kl}[\mathbf{O}^{\mathrm{T}}]_{lj}=\sum_{k,l}[\mathbf{O}]_{ik}[\mathbf{O}]_{jl}[\mathbf{A}]_{kl}=[\mathbf{D}]_{ij}

Define a new set of vectors \{e_1,...,e_r\}, known as orthogonal basis vectors, to be those transformed by the orthogonal matrix:

e_i=\sum_j[\mathbf{O}]_{ij}a_j

Since orthogonal transformations preserve inner products, it follows that e_i\cdot e_j=[\mathbf{D}]_{ij} and thus the \{e_1,...,e_r\} are perpendicular. In other words, the geometric product of two distinct vectors e_i \ne e_j is completely specified by their outer product, or more generally

\begin{array}{rl}e_1e_2\cdots e_r &= e_1 \wedge e_2 \wedge \cdots \wedge e_r \\
&= \left(\sum_j [\mathbf{O}]_{1j}a_j\right) \wedge \left(\sum_j [\mathbf{O}]_{2j}a_j\right) \wedge \cdots \wedge \left(\sum_j [\mathbf{O}]_{rj}a_j\right) \\
&= \det [\mathbf{O}] a_1 \wedge a_2 \wedge \cdots \wedge a_r \end{array}

Therefore, every blade of grade r can be written as a geometric product of r vectors. More generally, if a degenerate geometric algebra is allowed, then the orthogonal matrix is replaced by a block matrix that is orthogonal in the nondegenerate block, and the diagonal matrix has zero-valued entries along the degenerate dimensions. If the new vectors of the nondegenerate subspace are normalized according to

\hat{e}_i=\frac{1}{\sqrt{|e_i \cdot e_i|}}e_i,

then these normalized vectors must square to +1 or −1. By Sylvester's law of inertia, the total number of +1s and the total number of −1s along the diagonal matrix is invariant. By extension, the total number p of these vectors that square to +1 and the total number q that square to −1 is invariant. (If the degenerate case is allowed, then the total number of basis vectors that square to zero is also invariant.) We denote this algebra \mathcal{G}(p,q). For example, \mathcal G(3,0) models 3D Euclidean space, \mathcal G(1,3) relativistic spacetime and \mathcal G(4,1) a 3D conformal geometric algebra.

The set of all possible products of the n orthogonal basis vectors, with indices in increasing order and including 1 as the empty product, forms a basis for the entire geometric algebra (an analogue of the PBW theorem). For example, the following is a basis for the geometric algebra \mathcal{G}(3,0):

\{1,e_1,e_2,e_3,e_1e_2,e_1e_3,e_2e_3,e_1e_2e_3\}

A basis formed this way is called a canonical basis for the geometric algebra, and any other orthogonal basis for V will produce another canonical basis. Each canonical basis consists of 2^n elements. Every multivector of the geometric algebra can be expressed as a linear combination of the canonical basis elements. If the canonical basis elements are {Bi | i ∈ S} with S being an index set, then the geometric product of any two multivectors is

(\Sigma_i \alpha_i B_i)(\Sigma_j \beta_j B_j)=\Sigma_{i,j} \alpha_i\beta_j B_i B_j\,.
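This componentwise product rule lends itself to direct computation. Below is a minimal Python sketch (illustrative, not from the source; the bitmask encoding and the function names are assumptions made here) that multiplies multivectors of \mathcal{G}(p,q) over a canonical basis, with each basis blade stored as a bitmask over the orthogonal basis vectors:

def blade_product(a, b, signature):
    """Geometric product of two canonical basis blades.
    a, b      -- bitmasks; bit i set means the blade contains e_(i+1)
    signature -- sequence of +1/-1 values, signature[i] = (e_(i+1))^2
    Returns (sign, bitmask) with sign in {+1, -1}."""
    sign = 1
    # Count transpositions needed to merge the factors into increasing-index
    # order; each swap of distinct basis vectors contributes a factor of -1.
    shifted = a >> 1
    while shifted:
        if bin(shifted & b).count("1") % 2:
            sign = -sign
        shifted >>= 1
    # Basis vectors common to both blades square to their signature value.
    common = a & b
    for i, s in enumerate(signature):
        if common & (1 << i):
            sign *= s
    return sign, a ^ b

def geometric_product(A, B, signature):
    """Product of multivectors stored as dicts {blade bitmask: coefficient}."""
    C = {}
    for ea, ca in A.items():
        for eb, cb in B.items():
            s, e = blade_product(ea, eb, signature)
            C[e] = C.get(e, 0.0) + s * ca * cb
    return C

# Examples in G(3,0): e1 e2 gives the bivector e1e2, and (e1e2)^2 = -1.
sig = (+1, +1, +1)
e1, e2 = {0b001: 1.0}, {0b010: 1.0}
e12 = geometric_product(e1, e2, sig)
print(e12)                                # {3: 1.0}
print(geometric_product(e12, e12, sig))   # {0: -1.0}

The sign bookkeeping implements the anticommutation of distinct orthogonal basis vectors and the squares e_i^2 = ±1 fixed by the signature.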

Grade projection

Using a canonical basis, a graded vector space structure can be established. Elements of the geometric algebra that are simply scalar multiples of 1 are grade-0 blades and are called scalars. Nonzero multivectors that are in the span of \{e_1,\cdots,e_n\} are grade-1 blades and are the ordinary vectors. Multivectors in the span of \{e_ie_j\mid 1\leq i<j\leq n\} are grade-2 blades and are the bivectors. This terminology continues through to the last grade of n-vectors. Alternatively, grade-n blades are called pseudoscalars, grade-n−1 blades pseudovectors, etc. Many of the elements of the algebra are not graded by this scheme since they are sums of elements of differing grade. Such elements are said to be of mixed grade. The grading of multivectors is independent of the orthogonal basis chosen originally.

A multivector A may be decomposed with the grade-projection operator \langle A \rangle _r, which outputs the grade-r portion of A. As a result:

 A = \sum_{r=0}^{n} \langle A \rangle _r

As an example, the geometric product of two vectors  a b = a \cdot b + a \wedge b = \langle a b \rangle_0 + \langle a b \rangle_2 since \langle a b \rangle_0=a\cdot b\, and \langle a b \rangle_2 = a\wedge b\, and \langle a b \rangle_i=0\, for i other than 0 and 2.
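As a minimal sketch of the grade-projection operator in the same dictionary-of-blades representation used above (illustrative, not from the source), the grade of a basis blade is the number of basis vectors it contains, so projection simply filters terms:

def grade_projection(A, r):
    """Return the grade-r part of A, where A is a dict mapping a basis-blade
    bitmask to its real coefficient; a blade's grade is its popcount."""
    return {blade: coeff for blade, coeff in A.items()
            if bin(blade).count("1") == r}

# Example in G(3,0): A = 3 + 2 e1e2 (a scalar plus a bivector).
A = {0b000: 3.0, 0b011: 2.0}
print(grade_projection(A, 0))   # {0: 3.0},  the scalar part <A>_0
print(grade_projection(A, 2))   # {3: 2.0},  the bivector part <A>_2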

A multivector A may also be split into components that are even and components that are odd:

 A^+ = \langle A \rangle _0 + \langle A \rangle _2 + \langle A \rangle _4 + \cdots
 A^- = \langle A \rangle _1 + \langle A \rangle _3 + \langle A \rangle _5 + \cdots

This makes the algebra a Z2-graded algebra or superalgebra with the geometric product. Since the geometric product of two even multivectors is an even multivector, they define an even subalgebra. The even subalgebra of an n-dimensional geometric algebra is isomorphic to a full geometric algebra of (n−1) dimensions. Examples include \mathcal G^+(2,0) \cong \mathcal G(0,1) and \mathcal G^+(1,3) \cong \mathcal G(3,0).

Representation of subspaces

Geometric algebra represents subspaces of V as multivectors, and so they coexist in the same algebra with vectors from V. A k-dimensional subspace W of V is represented by taking an orthogonal basis \{b_1,b_2,\cdots b_k\} and using the geometric product to form the blade D = b1b2⋅⋅⋅bk. There are multiple blades representing W; all those representing W are scalar multiples of D. These blades can be separated into two sets: positive multiples of D and negative multiples of D. The positive multiples of D are said to have the same orientation as D, and the negative multiples the opposite orientation.

Blades are important since geometric operations such as projections, rotations and reflections depend on the factorability via the outer product that (the restricted class of) n-blades provide but that (the generalized class of) grade-n multivectors do not when n ≥ 4.

Unit pseudoscalars

Unit pseudoscalars are blades that play important roles in GA. A unit pseudoscalar for a non-degenerate subspace W of V is a blade that is the product of the members of an orthonormal basis for W. It can be shown that if I and I′ are both unit pseudoscalars for W, then I′ = ±I and I2 = ±1.

Suppose the geometric algebra \mathcal{G}(n,0) with the familiar positive definite inner product on Rn is formed. Given a plane (2-dimensional subspace) of Rn, one can find an orthonormal basis {b1,b2} spanning the plane, and thus find a unit pseudoscalar I = b1b2 representing this plane. The geometric product of any two vectors in the span of b1 and b2 lies in \{\alpha_0+\alpha_1 I\mid \alpha_i\in\mathbb{R} \}, that is, it is the sum of a 0-vector and a 2-vector.

By the properties of the geometric product, I2 = b1b2b1b2 = −b1b2b2b1 = −1. The resemblance to the imaginary unit is not accidental: the subspace \{\alpha_0+\alpha_1 I\mid \alpha_i\in\mathbb{R} \} is R-algebra isomorphic to the complex numbers. In this way, a copy of the complex numbers is embedded in the geometric algebra for each 2-dimensional subspace of V on which the quadratic form is definite.

It is sometimes possible to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to −1, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces.

In \mathcal{G}(3,0), an exceptional case occurs. Given a canonical basis built from orthonormal ei's from V, the set of all 2-vectors is generated by

\{e_3e_2,e_1e_3,e_2e_1\}\,.

Labelling these i, j and k (momentarily deviating from our uppercase convention), the subspace generated by 0-vectors and 2-vectors is exactly \{\alpha_0+i\alpha_1+j\alpha_2+k\alpha_3\mid \alpha_i\in\mathbb{R}\}. This set is seen to be a subalgebra, and furthermore is R-algebra isomorphic to the quaternions, another important algebraic system.
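One may verify the quaternion relations directly from the anticommutation of distinct orthonormal basis vectors, for example

i^2 = e_3e_2e_3e_2 = -e_3e_3e_2e_2 = -1 \qquad \text{and} \qquad ij = e_3e_2e_1e_3 = -e_3e_2e_3e_1 = e_3e_3e_2e_1 = e_2e_1 = k.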

Dual basis

Let \{e_i\} be a basis of V, i.e. a set of n linearly independent vectors that span the n-dimensional vector space V. The basis that is dual to \{e_i\} is the set of elements of the dual vector space V^* that forms a biorthogonal system with this basis, thus being the elements denoted \{e^i\} satisfying

e^i \cdot e_j = \delta^i{}_j,

where δ is the Kronecker delta.

Given a nondegenerate quadratic form on V, V^* becomes naturally identified with V, and the dual basis may be regarded as elements of V, but they are not in general the same set as the original basis.

Given further a GA of V, let

 \epsilon = e_1 \wedge \cdots \wedge e_n

be the pseudoscalar (which does not necessarily square to ±1) formed from the basis \{e_i\}. The dual basis vectors may be constructed as

e^i=(-1)^{i-1}(e_1 \wedge \cdots \wedge \check{e}_i \wedge \cdots \wedge e_n) \epsilon^{-1},

where the \check{e}_i denotes that the ith basis vector is omitted from the product.
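For example, for an orthonormal basis of \mathcal{G}(2,0) one has \epsilon = e_1e_2 and \epsilon^{-1} = e_2e_1, and the formula reproduces the expected result that the dual basis coincides with the original orthonormal basis:

e^1 = e_2\,\epsilon^{-1} = e_2e_2e_1 = e_1, \qquad e^2 = -e_1\,\epsilon^{-1} = -e_1e_2e_1 = e_2.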

Extensions of the inner and outer products

It is common practice to extend the inner and outer product on vectors to the entire algebra. This may be done through the use of the grade projection operator:

C \cdot D := \sum_{r,s}\langle \langle C \rangle_r \langle D \rangle_s \rangle_{|r-s|}     (the inner product) where
 \langle C \rangle_r \cdot \langle D \rangle_s := \tfrac{1}{2}(\langle C \rangle_r \langle D \rangle_s + (-1)^{|r-s|} \langle D \rangle_s \langle C \rangle_r )

and

C \wedge D := \sum_{r,s}\langle \langle C \rangle_r \langle D \rangle_s \rangle_{r+s}     (the outer product) where
 \langle C \rangle_r \wedge \langle D \rangle_s := \tfrac{1}{2}(\langle C \rangle_r \langle D \rangle_s - (-1)^{r+s} \langle D \rangle_s \langle C \rangle_r )


This generalization is consistent with the above definition involving antisymmetrization. Another generalization related to the outer product is the commutator product:

C \times D := \tfrac{1}{2}(CD-DC)

The regressive product is the dual of the outer product:

C \;\triangledown\; D := (C^{*} \wedge D^{*})I

with I the unit pseudoscalar of the algebra and A^{*} := AI^{-1} for multivector A.

The inner product on vectors can also be generalized, but in more than one non-equivalent way. The paper (Dorst 2002) gives a full treatment of several different inner products developed for geometric algebras and their interrelationships, and the notation is taken from there. Many authors use the same symbol as for the inner product of vectors for their chosen extension (e.g. Hestenes and Perwass). No consistent notation has emerged.

Among these several different generalizations of the inner product on vectors are:

\, C \;\big\lrcorner\; D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{s-r}   (the left contraction)
\, C \;\big\llcorner\; D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{r-s}   (the right contraction)
\, C * D := \sum_{r,s}\langle \langle C \rangle_r \langle D \rangle_s \rangle_{0}   (the scalar product)
\, C \bullet D := \sum_{r,s}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{|s-r|}   (the "(fat) dot" product)
\, C \bullet_H D := \sum_{r\ne0,s\ne0}\langle \langle C\rangle_r \langle D \rangle_{s} \rangle_{|s-r|}   (Hestenes's inner product)[9]

Dorst (2002) makes an argument for the use of contractions in preference to Hestenes's inner product; they are algebraically more regular and have cleaner geometric interpretations. A number of identities incorporating the contractions are valid without restriction of their inputs. Benefits of using the left contraction as an extension of the inner product on vectors include that the identity  ab = a \cdot b + a \wedge b is extended to  aB = a \;\big\lrcorner\; B + a \wedge B for any vector a and multivector B, and that the projection operation  \mathcal{P}_b (a) = (a \cdot b^{-1})b is extended to  \mathcal{P}_B (A) = (A \;\big\lrcorner\; B^{-1}) \;\big\lrcorner\; B for any blades A and B (with a minor modification to accommodate null B, given below).

Terminology specific to geometric algebra

Some terms are used in geometric algebra with a meaning that differs from the use of those terms in other fields of mathematics. Some of these are listed here:

Vector
In GA this refers specifically to an element of the 1-vector subspace unless otherwise clear from the context, despite the entire algebra forming a vector space.
Grade
In GA this refers to a grading as an algebra under the outer product (an \mathbb{N}-grading), and not under the geometric product (which produces a \mathbb{Z}_2^n-grading).
Outer product
In GA this refers to what is generally called the exterior product (including in GA as an alternative). It is not the outer product of linear algebra.
Inner product
In GA this generally refers to a scalar product on the vector subspace (which is not required to be positive definite) and may include any chosen extension of this product to the entire algebra. It is not specifically the inner product on a normed vector space.
Versor
In GA this refers to an object that can be constructed as the geometric product of any number of non-null vectors. The term otherwise may refer to a unit quaternion, analogous to a rotor in GA.
Outermorphism
This term is used only in GA, and refers to a linear map on the vector subspace, extended to apply to the entire algebra by defining it as preserving the outer product.

Geometric interpretation

Projection and rejection

In 3D space, a bivector a ∧ b defines a 2D plane subspace (light blue, extending infinitely in the indicated directions). Any vector c in the 3-space can be projected onto the plane and rejected normal to it, shown respectively by c∥ and c⊥.

For any vector a and any invertible vector m,

\, a = amm^{-1} = (a\cdot m + a \wedge m)m^{-1} = a_{\| m} + a_{\perp m}

where the projection of a onto m (or the parallel part) is

\, a_{\| m} = (a\cdot m)m^{-1}

and the rejection of a from m (or the perpendicular part) is

\, a_{\perp m} = a - a_{\| m} = (a\wedge m)m^{-1} .
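For example, taking a = e_1 + e_2 and m = e_1 in \mathcal{G}(3,0) (so that m^{-1} = e_1) gives

a_{\| m} = ((e_1+e_2)\cdot e_1)\,e_1 = e_1, \qquad a_{\perp m} = ((e_1+e_2)\wedge e_1)\,e_1 = (e_2e_1)e_1 = e_2.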

Using the concept of a k-blade B as representing a subspace of V and every multivector ultimately being expressed in terms of vectors, this generalizes to projection of a general multivector onto any invertible k-blade B as[10]

\, \mathcal{P}_B (A) = (A \;\big\lrcorner\; B^{-1}) \;\big\lrcorner\; B

with the rejection being defined as

\, \mathcal{P}_B^\perp (A) = A - \mathcal{P}_B (A) .

The projection and rejection generalize to null blades B by replacing the inverse B−1 with the pseudoinverse B+ with respect to the contractive product.[11] The outcome of the projection coincides in both cases for non-null blades.[12][13] For null blades B, the definition of the projection given here with the first contraction rather than the second being onto the pseudoinverse should be used,[14] as only then is the result necessarily in the subspace represented by B.[12] The projection generalizes through linearity to general multivectors A.[15] The projection is not linear in B and does not generalize to objects B that are not blades.

Reflections

The definition of a reflection occurs in two forms in the literature. Several authors work with reflection on a vector (negating all vector components except that parallel to the specifying vector), while others work with reflection along a vector (negating only the component parallel to the specifying vector, or reflection in the hypersurface orthogonal to that vector). Either may be used to build general versor operations, but the former has the advantage that it extends to the algebra in a simpler and algebraically more regular fashion.

Reflection on a vector

Reflection of vector c on a vector n. The rejection of c on n is negated.

The result (c' ) of reflecting a vector c on another vector n is to negate the rejection of c. It is akin to reflecting the vector c through the origin, except that the projection of c onto n is not changed. Such an operation is described by

\, c \mapsto c'=ncn^{-1} .
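For example, reflecting c = e_1 + e_2 on n = e_1 in \mathcal{G}(3,0) gives

c' = e_1(e_1 + e_2)e_1^{-1} = (1 + e_1e_2)e_1 = e_1 + e_1e_2e_1 = e_1 - e_2,

so the projection of c onto n is preserved while its rejection is negated.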

Repeating this operation results in a general versor operation (including both rotations and reflections) of a general multivector A being expressed as

\, A \mapsto NAN^{-1} .

This allows a general definition of any versor N (including both reflections and rotors) as an object that can be expressed as a geometric product of any number of non-null 1-vectors. Such a versor can be applied in a uniform sandwich product as above irrespective of whether it is of even (a proper rotation) or odd grade (an improper rotation, i.e. a general reflection). The set of all versors with the geometric product as the group operation constitutes the Clifford group of the Clifford algebra Cℓp,q(R).[16]

Reflection along a vector

Reflection of vector c along a vector m. Only the component of c parallel to m is negated.

The reflection (c' ) of a vector c along a vector m, or equivalently in the hyperplane orthogonal to m, is the same as negating the component of a vector parallel to m. The result of the reflection will be

\! c' = {-c_{\| m} + c_{\perp m}} = {-(c \cdot m)m^{-1} + (c \wedge m)m^{-1}}
= {(-m \cdot c - m \wedge c)m^{-1}}
= -mcm^{-1}

This is not the most general operation that may be regarded as a reflection when the dimension n ≥ 4. A general reflection may be expressed as the composite of any odd number of single-axis reflections. Thus, a general reflection (a' ) of a vector a may be written

\! a \mapsto a'=-MaM^{-1}

where

\! M = pq \ldots r and \! M^{-1} = (pq \ldots r)^{-1} = r^{-1} \ldots q^{-1}p^{-1} .

If we define the reflection along a non-null vector m of the product of vectors as the reflection of every vector in the product along the same vector, we get for any product of an odd number of vectors that, by way of example,

 (abc)' = a'b'c' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1}) = -ma(m^{-1}m)b(m^{-1}m)cm^{-1} = -mabcm^{-1} \,

and for the product of an even number of vectors that

 (abcd)' = a'b'c'd' = (-mam^{-1})(-mbm^{-1})(-mcm^{-1})(-mdm^{-1})
= mabcdm^{-1} .\,

Using the concept of every multivector ultimately being expressed in terms of vectors, the reflection of a general multivector A using any reflection versor M may be written

\, A \mapsto M\alpha(A)M^{-1} ,

where α is the automorphism of reflection through the origin of the vector space (v ↦ −v) extended through multilinearity to the whole algebra.

Hypervolume of an n-parallelotope spanned by n vectors

For vectors  a and  b spanning a parallelogram we have

 a \wedge b = ((a \wedge b) b^{-1}) b = a_{\perp b} b

with the result that the magnitude of  a \wedge b is the product of the "altitude" and the "base" of the parallelogram, that is, its area.

Similar interpretations are true for any number of vectors spanning an n-dimensional parallelotope; the outer product of vectors a1, a2, ... an, that is \bigwedge_{i=1}^n a_i , has a magnitude equal to the volume of the n-parallelotope. An n-vector doesn't necessarily have a shape of a parallelotope – this is a convenient visualization. It could be any shape, although the volume equals that of the parallelotope.

Rotations

A rotor R_\theta that rotates vectors in a plane rotates vectors through angle θ; that is, x \mapsto R_\theta x R_\theta^\dagger is a rotation of x through angle θ. The angle between u and v is θ/2. Similar interpretations are valid for a general multivector X instead of the vector x.[6]

If we have a product of vectors R = a_1a_2\cdots a_r then we denote the reverse as

R^{\dagger}= (a_1a_2\cdots a_r)^{\dagger} = a_r\cdots a_2a_1.

As an example, assume that  R = ab ; then we get

RR^{\dagger} = abba = ab^2a =a^2b^2 = R^{\dagger}R.

Scaling R so that RR^{\dagger} = 1, then

(RvR^{\dagger})^2 = Rv^{2}R^{\dagger}= v^2RR^{\dagger} = v^2

so RvR^{\dagger} leaves the length of v unchanged. We can also show that

(Rv_1R^{\dagger}) \cdot (Rv_2R^{\dagger}) = v_1 \cdot v_2
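This identity follows from the symmetric-product form of the inner product together with R^{\dagger}R = RR^{\dagger} = 1:

2\,(Rv_1R^{\dagger}) \cdot (Rv_2R^{\dagger}) = Rv_1R^{\dagger}Rv_2R^{\dagger} + Rv_2R^{\dagger}Rv_1R^{\dagger} = R(v_1v_2 + v_2v_1)R^{\dagger} = 2\,(v_1 \cdot v_2)RR^{\dagger} = 2\,v_1 \cdot v_2 .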

so the transformation RvR^{\dagger} preserves both length and angle. It therefore can be identified as a rotation or rotoreflection; R is called a rotor if it is a proper rotation (as it is if it can be expressed as a product of an even number of vectors) and is an instance of what is known in GA as a versor (presumably for historical reasons).

There is a general method for rotating a vector, involving the formation of a multivector of the form  R = e^{-\frac{B \theta}{2}} , which produces a rotation through angle  \theta in the plane with orientation defined by a 2-blade  B .
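For example, taking B = e_1e_2 so that R = e^{-e_1e_2\theta/2} = \cos\tfrac{\theta}{2} - e_1e_2\sin\tfrac{\theta}{2}, a direct computation gives

R\,e_1\,R^{\dagger} = \left(\cos\tfrac{\theta}{2} - e_1e_2\sin\tfrac{\theta}{2}\right)\left(e_1\cos\tfrac{\theta}{2} + e_2\sin\tfrac{\theta}{2}\right) = e_1\cos\theta + e_2\sin\theta,

which is e_1 rotated through the angle \theta in the e_1e_2 plane.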

Rotors are a generalization of quaternions to n-D spaces.

For more about reflections, rotations and "sandwiching" products like RvR^{\dagger} see Plane of rotation.

Linear functions

An important class of functions of multivectors are the linear functions mapping multivectors to multivectors. The geometric algebra of an n-dimensional vector space is spanned by 2^n canonical basis elements. If a multivector in this basis is represented by a 2^n × 1 real column matrix, then in principle all linear transformations of the multivector can be written as the matrix multiplication of a 2^n × 2^n real matrix on the column, just as in the entire theory of linear algebra in 2^n dimensions.

There are several issues with this naive generalization. To see this, recall that the eigenvalues of a real matrix may in general be complex. The scalar coefficients of blades must be real, so these complex values are of no use. If we attempt to proceed with an analogy for these complex eigenvalues anyway, we know that in ordinary linear algebra, complex eigenvalues are associated with rotation matrices. However, if the linear function is truly general, it could allow arbitrary exchanges among the different grades, such as a "rotation" of a scalar into a vector. This operation has no clear geometric interpretation.

We seek to restrict the class of linear functions of multivectors to more geometrically sensible transformations. A common restriction is to require that the linear functions be grade-preserving. The grade-preserving linear functions are the linear functions that map scalars to scalars, vectors to vectors, bivectors to bivectors, etc. In matrix representation, the grade-preserving linear functions are block diagonal matrices, where each r-grade block is of size \binom nr \times \binom nr. A weaker restriction allows the linear functions to map r-grade multivectors into linear combinations of r-grade and (nr)-grade multivectors. These functions map scalars into scalars+pseudoscalars, vectors to vectors+pseudovectors, etc.

Often an invertible linear transformation from vectors to vectors is already of known interest. There is no unique way to generalize these transformations to the entire geometric algebra without further restriction. Even the restriction that the linear transformation be grade-preserving is not enough. We therefore desire a stronger rule, motivated by geometric interpretation, for generalizing these linear transformations of vectors in a standard way. The most natural choice is that of the outermorphism of the linear transformation because it extends the concepts of reflection and rotation straightforwardly. If f is a function that maps vectors to vectors, then its outermorphism is the function that obeys the rule

\underline{\mathsf{f}}(a_1 \wedge a_2 \wedge \cdots \wedge a_r) = f(a_1) \wedge f(a_2) \wedge \cdots \wedge f(a_r).

In particular, the outermorphism of the reflection of a vector on a vector is

nan^{-1} \mapsto nAn^{-1},

and the outermorphism of the rotation of a vector by a rotor is

RaR^{\dagger} \mapsto RAR^{\dagger}.
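As a concrete illustration of the defining rule above, here is a minimal numerical sketch (assuming NumPy; the function names and the bivector component convention (e1e2, e1e3, e2e3) are illustrative assumptions, not from the source) that applies the outermorphism of a 3 × 3 linear map to a 2-blade given by its vector factors:

import numpy as np

def wedge(a, b):
    """Bivector components of a ^ b in the basis (e1e2, e1e3, e2e3)."""
    return np.array([a[0]*b[1] - a[1]*b[0],
                     a[0]*b[2] - a[2]*b[0],
                     a[1]*b[2] - a[2]*b[1]])

def outermorphism_on_blade(f, a, b):
    """Outermorphism of the linear map f (3x3 matrix) applied to the
    2-blade a ^ b, using f_(a ^ b) = f(a) ^ f(b)."""
    return wedge(f @ a, f @ b)

# A rotation by 90 degrees about e3 maps the e1e2 plane to itself.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print(outermorphism_on_blade(R, np.array([1.0, 0.0, 0.0]),
                                np.array([0.0, 1.0, 0.0])))
# ~[1. 0. 0.] up to rounding, i.e. e1 ^ e2 is mapped to itself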

Examples and applications

Intersection of a line and a plane

A line L defined by points T and P (which we seek) and a plane defined by a bivector B containing points P and Q.

We may define the line parametrically by  p = t + \alpha \ v where p and t are position vectors for points P and T and v is the direction vector for the line.

Then

B \wedge (p-q) = 0 and B \wedge (t + \alpha v - q) = 0

so

\alpha = \frac{B \wedge(q-t)}{B \wedge v}

and

p = t + \left(\frac{B \wedge (q-t)}{B \wedge v}\right) v.
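A small numerical sketch of this computation (assuming NumPy; the component conventions and function names are illustrative assumptions, not from the source). In three dimensions both B ∧ (q − t) and B ∧ v are pseudoscalars, so α is their scalar ratio:

import numpy as np

def wedge_vv(a, b):
    """Bivector components (e1e2, e1e3, e2e3) of a ^ b for 3D vectors."""
    return np.array([a[0]*b[1] - a[1]*b[0],
                     a[0]*b[2] - a[2]*b[0],
                     a[1]*b[2] - a[2]*b[1]])

def wedge_bv(B, v):
    """Pseudoscalar (e1e2e3) coefficient of B ^ v for a bivector B."""
    return B[0]*v[2] - B[1]*v[1] + B[2]*v[0]

# Plane through q with direction bivector B; line p = t + alpha*v.
B = wedge_vv(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))  # e1 ^ e2
q = np.array([0.0, 0.0, 0.0])
t = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 0.0, -1.0])

alpha = wedge_bv(B, q - t) / wedge_bv(B, v)
p = t + alpha * v
print(p)   # [0. 0. 0.]  the line meets the z = 0 plane at the origin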

Rotating systems

The mathematical description of rotational quantities such as torque and angular momentum makes use of the cross product.

The cross product in relation to the outer product. In red are the unit normal vector, and the "parallel" unit bivector.

The cross product can be viewed in terms of the outer product, allowing a more natural geometric interpretation of the cross product as a bivector, using the dual relationship

a \times b = -I (a \wedge b) \,.
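As a check, with I = e_1e_2e_3 in \mathcal{G}(3,0) (where the pseudoscalar commutes with every element),

-I(e_1 \wedge e_2) = -(e_1e_2)\,I = -(e_1e_2)(e_1e_2)e_3 = e_3 = e_1 \times e_2 .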

For example, torque is generally defined as the magnitude of the perpendicular force component times distance, or work per unit angle.

Suppose a circular path in an arbitrary plane containing orthonormal vectors \hat{u} and \hat{v} is parameterized by the angle \theta.


\mathbf{r} = r(\hat{ u} \cos \theta + \hat{ v} \sin \theta) = r \hat{ u}(\cos \theta + \hat{ u} \hat{ v} \sin \theta)

By designating the unit bivector of this plane as the imaginary number

{i} = \hat{ u} \hat{ v} = \hat{ u} \wedge \hat{ v}
{i}^2 = -1

this path vector can be conveniently written in complex exponential form


\mathbf{r} = r \hat{ u} e^{{i} \theta}

and the derivative with respect to angle is


\frac{d  \mathbf{r}}{d\theta} = r \hat{ u} {i} e^{{i} \theta} = \mathbf{r} {i}

So the torque, the rate of change of work W, due to a force F, is

\tau = \frac{dW}{d\theta} =  F \cdot \frac{d\mathbf{r}}{d\theta} =  F \cdot (\mathbf{r} {i})

Unlike the cross product description of torque,  \tau =  \mathbf{r} \times  F, the geometric algebra description does not introduce a vector in the normal direction; a vector that does not exist in two dimensions and that is not unique in more than three dimensions. The unit bivector describes the plane and the orientation of the rotation, and the sense of the rotation is relative to the angle between the vectors {\hat{u}} and {\hat{v}}.

Electrodynamics and special relativity

In physics, the main applications are the geometric algebra of Minkowski 3+1 spacetime, Cℓ1,3, called spacetime algebra (STA),[5] or, less commonly, Cℓ3, called the algebra of physical space (APS), where Cℓ3 is isomorphic to the even subalgebra of the 3+1 Clifford algebra Cℓ3,1.

While in STA points of spacetime are represented simply by vectors, in APS, points of (3+1)-dimensional spacetime are instead represented by paravectors: a 3-dimensional vector (space) plus a 1-dimensional scalar (time).

In spacetime algebra the electromagnetic field tensor has a bivector representation {F} = ({E} + i c {B})e_0.[17] Here, the imaginary unit is the (four-dimensional) volume element, and e_0 is the unit vector in time direction. Using the four-current {J}, Maxwell's equations then become

Fields: D F = \mu_0 J, which splits by grade into the homogeneous equation  D\wedge F = 0  and the non-homogeneous equation  D\cdot F = \mu_0 J .
Potentials (any gauge): F = D \wedge A, with the non-homogeneous equation  D \cdot D \wedge A = \mu_0 J .
Potentials (Lorenz gauge): F = D A and  D\cdot A = 0 , with the non-homogeneous equation  D^2 A = \mu_0 J .

In geometric calculus, juxtaposition of vectors such as in DF indicates the geometric product, which can be decomposed into parts as DF=D\cdot F+D\wedge F. Here D is the covariant derivative in any spacetime and reduces to \bigtriangledown in flat spacetime, where \bigtriangledown plays the role in Minkowski 4-spacetime that \nabla plays in Euclidean 3-space and is related to the d'Alembertian by  \Box=\bigtriangledown^2 . Indeed, given an observer represented by a future-pointing timelike vector \gamma_0 we have

\gamma_0\cdot\bigtriangledown=\frac{1}{c}\frac{\partial}{\partial t}
\gamma_0\wedge\bigtriangledown=\nabla

Boosts in this Lorentzian metric space have the same expression e^{{\beta}} as rotations in Euclidean space, where {\beta} is the bivector generated by the time and space directions involved, whereas in the Euclidean case it is the bivector generated by two space directions, strengthening the "analogy" to near identity.

Relationship with other formalisms

\mathcal G(3,0) may be directly compared to vector algebra.

The even subalgebra of \mathcal G(2,0) is isomorphic to the complex numbers, as may be seen by writing a vector P in terms of its components in an orthonormal basis and left multiplying by the basis vector e1, yielding

 Z =  {e_1}  P =  {e_1} ( x  {e_1} + y  {e_2})
= x (1) + y ( {e_1}  {e_2})\,

where we identify i ↦ e1e2 since

({e_1}{e_2})^2 = {e_1}{e_2}{e_1}{e_2} = -{e_1}{e_1}{e_2}{e_2} = -1 \,

Similarly, the even subalgebra of \mathcal G(3,0) with basis {1, e2e3, e3e1, e1e2} is isomorphic to the quaternions as may be seen by identifying i ↦ −e2e3, j ↦ −e3e1 and k ↦ −e1e2.

Every associative algebra has a matrix representation; the Pauli matrices are a representation of \mathcal G(3,0) and the Dirac matrices are a representation of \mathcal G(1,3), showing the equivalence with matrix representations used by physicists.

Geometric calculus

Main article: Geometric calculus

Geometric calculus extends the formalism to include differentiation and integration, encompassing differential geometry and differential forms.[18]

Essentially, the vector derivative is defined so that the GA version of Green's theorem is true,

\int_{A} dA \nabla f = \oint_{\partial A} dx f

and then one can write

\nabla f = \nabla \cdot f + \nabla \wedge f

as a geometric product, effectively generalizing Stokes' theorem (including the differential form version of it).

In 1D when A is a curve with endpoints a and b, then

\int_{A} dA \nabla f = \oint_{\partial A} dx f

reduces to

\int_{a}^{b} dx \nabla f = \int_{a}^{b} dx \cdot \nabla f = \int_{a}^{b} df = f(b) -f(a)

or the fundamental theorem of integral calculus.

Also developed are the concepts of vector manifold and geometric integration theory (which generalizes Cartan's differential forms).

Conformal geometric algebra (CGA)

A compact description of the current state of the art is provided by Bayro-Corrochano and Scheuermann (2010),[19] which also includes further references, in particular to Dorst et al. (2007).[20] Other useful references are Li (2008)[21] and Bayro (2010).[22]

Working within GA, Euclidean space \mathcal E^3 is embedded projectively in the CGA \mathcal{G}(4,1) via the identification of Euclidean points with 1D subspaces in the 4D null cone of the 5D CGA vector subspace, and adding a point at infinity. This allows all conformal transformations to be done as rotations and reflections and is covariant, extending incidence relations of projective geometry to circles and spheres.

Specifically, we add orthogonal basis vectors \, e_+ and \, e_- such that \, {e_+}^2 = +1 and \, {e_-}^2 = -1 to the basis of  \mathcal{G}(3,0) and identify null vectors

 n_{\infty} = e_- + e_+ as an ideal point (point at infinity) (see Compactification) and
 n_{o} = \tfrac{1}{2}(e_- - e_+) as the point at the origin, giving
 n_{\infty} \cdot n_{o} = -1 .
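These identifications may be verified directly: since e_+ and e_- are orthogonal with {e_+}^2 = 1 and {e_-}^2 = -1, both n_{\infty} and n_{o} are null, and

n_{\infty} \cdot n_{o} = \tfrac{1}{2}(e_- + e_+)\cdot(e_- - e_+) = \tfrac{1}{2}({e_-}^2 - {e_+}^2) = \tfrac{1}{2}(-1 - 1) = -1 .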

This procedure has some similarities to the procedure for working with homogeneous coordinates in projective geometry and in this case allows the modeling of Euclidean transformations as orthogonal transformations.

A fast changing and fluid area of GA, CGA is also being investigated for applications to relativistic physics.

History

Before the 20th century

Although the connection of geometry with algebra dates back at least to Euclid's Elements in the 3rd century B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was used in a systematic way to describe the geometrical properties and transformations of a space. In that year, Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous to the propositional calculus) that encoded all of the geometrical information of a space.[23] Grassmann's algebraic system could be applied to a number of different kinds of spaces, the chief among them being Euclidean space, affine space, and projective space. Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann's algebraic system alongside the quaternions of William Rowan Hamilton in (Clifford 1878). From his point of view, the quaternions described certain transformations (which he called rotors), whereas Grassmann's algebra described certain properties (or Strecken, such as length, area, and volume). His contribution was to define a new product, the geometric product, on an existing Grassmann algebra, which realized the quaternions as living within that algebra. Subsequently, Rudolf Lipschitz in 1886 generalized Clifford's interpretation of the quaternions and applied them to the geometry of rotations in n dimensions. Later these developments would lead other 20th-century mathematicians to formalize and explore the properties of the Clifford algebra.

Nevertheless, another revolutionary development of the 19th century would completely overshadow the geometric algebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vector analysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to express and manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal compared to the rigors of the new algebras. Physicists and mathematicians alike readily adopted it as their geometrical toolkit of choice, particularly following the influential 1901 textbook Vector Analysis by Edwin Bidwell Wilson, following lectures of Gibbs.

In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in 1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vector analysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy of quaternionic analysis in vector analysis can be seen in the use of i, j, k to indicate the basis vectors of R3: they are thought of as the purely imaginary quaternions. From the perspective of geometric algebra, the quaternions can be identified with Cℓ⁰3,0(R), the even part of the Clifford algebra on Euclidean 3-space, which in turn unifies the three approaches.

20th century and present

Progress on the study of Clifford algebras quietly advanced through the twentieth century, largely through the work of abstract algebraists such as Hermann Weyl and Claude Chevalley. The geometrical approach to geometric algebras has seen a number of 20th-century revivals. In mathematics, Emil Artin's Geometric Algebra[24] discusses the algebra associated with each of a number of geometries, including affine geometry, projective geometry, symplectic geometry, and orthogonal geometry. In physics, geometric algebras have been revived as a "new" way to do classical mechanics and electromagnetism, together with more advanced topics such as quantum mechanics and gauge theory.[25] David Hestenes reinterpreted the Pauli and Dirac matrices as vectors in ordinary space and spacetime, respectively, and has been a primary contemporary advocate for the use of geometric algebra.

In computer graphics and robotics, geometric algebras have been revived in order to efficiently represent rotations and other transformations. For applications of GA in robotics (screw theory, kinematics and dynamics using versors), computer vision, control and neural computing (geometric learning) see Bayro (2010).

Software

GA is a very application-oriented subject. There is a reasonably steep initial learning curve associated with it, but this can be eased somewhat by the use of applicable software.

The following is a list of freely available software that does not require ownership of commercial software or purchase of any commercial products for this purpose:

The link provides a manual, introduction to GA and sample material as well as the software.

Software allowing script creation and including sample visualizations, manual and GA introduction.

For programmers, this is a code generator with support for C, C++, C# and Java.

Notes

  1. Hestenes, David (February 2003), "Oersted Medal Lecture 2002: Reforming the Mathematical Language of Physics" (PDF), Am. J. Phys. 71 (2): 104–121, Bibcode:2003AmJPh..71..104H, doi:10.1119/1.1522700
  2. Doran, Chris (1994). "Geometric Algebra and its Application to Mathematical Physics". PhD thesis (University of Cambridge). Archived from the original on November 29, 2014.
  3. Lasenby & Lasenby Doran.
  4. Hildenbrand, D.; Fontijne, D.; Perwass, C.; Dorst, L. (2004), "Geometric Algebra and its Application to Computer Graphics" (PDF), Proceedings of Eurographics 2004
  5. Hestenes 1966.
  6. 
  7. R. Penrose (2007). The Road to Reality. Vintage books. ISBN 0-679-77631-1.
  8. J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 83. ISBN 0-7167-0344-0.
  9. Distinguishing notation here is from Dorst (2007), Geometric Algebra for Computer Science, §B.1 p. 590; the point is also made that scalars must be handled as a special case with this product.
  10. This definition follows Dorst (2007) and Perwass (2009) – the left contraction used by Dorst replaces the ("fat dot") inner product that Perwass uses, consistent with Perwass's constraint that grade of A may not exceed that of B.
  11. Dorst appears to merely assume B+ such that BB+ = 1, whereas Perwass (2009) defines B+ = B/(BB), where B is the conjugate of B, equivalent to the reverse of B up to a sign.
  12. Dorst, §3.6 p. 85.
  13. Perwass (2009) §3.2.10.2 p83
  14. That is to say, the projection must be defined as PB(A) = (AB+) ⨼ B and not as (AB) ⨼ B+, though the two are equivalent for non-null blades B
  15. This generalization to all A is apparently not considered by Perwass or Dorst.
  16. Perwass (2009) §3.3.1. Perwass also claims here that David Hestenes coined the term "versor", where he is presumably referring to the GA context (the term versor appears to have been used by Hamilton to refer to an equivalent object of the quaternion algebra).
  17. "Electromagnetism using Geometric Algebra versus Components". Retrieved 19 March 2013.
  18. Hestenes, David; Sobczyk, Garret (1984). Clifford Algebra to Geometric Calculus, a Unified Language for Mathematics and Physics. Dordrecht/Boston: D. Reidel Publishing Co.
  19. Bayro-Corrochano, E.; Scheuermann, Gerik, eds. (2010). Geometric Algebra Computing in Engineering and Computer Science. Springer. Extract online at http://geocalc.clas.asu.edu/html/UAFCG.html (#5: New Tools for Computational Geometry and Rejuvenation of Screw Theory).
  20. Dorst, Leo; Fontijne, Daniel; Mann, Stephen (2007). Geometric algebra for computer science: an object-oriented approach to geometry. Amsterdam: Elsevier/Morgan Kaufmann. ISBN 978-0-12-369465-2. OCLC 132691969.
  21. Hongbo Li (2008) Invariant Algebras and Geometric Reasoning, Singapore: World Scientific. Extract online at http://www.worldscibooks.com/etextbook/6514/6514_chap01.pdf
  22. Bayro-Corrochano, Eduardo (2010). Geometric Computing for Wavelet Transforms, Robot Vision, Learning, Control and Action. Springer Verlag
  23. Grassmann, Hermann (1844). Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert. Leipzig: O. Wigand. OCLC 20521674.
  24. Artin, Emil (1988), Geometric algebra, Wiley Classics Library, New York: John Wiley & Sons Inc., pp. x+214, ISBN 0-471-60839-4, MR 1009557 (Reprint of the 1957 original; A Wiley-Interscience Publication)
  25. Doran, Chris J. L. (February 1994). Geometric Algebra and its Application to Mathematical Physics (Ph.D. thesis). University of Cambridge. OCLC 53604228. Archived from the original on 29 November 2014.
