
Final Answers
© 2000-2016   Gérard P. Michon, Ph.D.

Vector Spaces
and Algebras


Articles previously on this page:

 Stefan Banach  (1892-1945)
  • Banach spaces  are  complete  normed vector spaces.
    (This article has moved to a new location on the site.)


Related Links (Outside this Site)

Théorie des opérations linéaires  (Banach spaces)  by  Stefan Banach  (1932).
 
Wikipedia:   Vector Space  |  Linear Algebra  |  Clifford Algebra
 

Vector Spaces,  Modules and Algebras


(2006-05-07)   Etymology
Vectors were so named because they "carry" the distance from the origin.

In medical and other contexts, "vector" is synonymous with "carrier".  The etymology is that of "vehicle":  The Latin verb  vehere  means "to transport".

In elementary geometry, a  vector  is simply the  difference  between two points in space;  it is whatever has to be traveled to go from a given origin to some destination.  Etymologically, such a thing was perceived as "carrying" the notion of distance between two points  (the  radius  from a fixed origin to a point).

The term  vector  started out its mathematical life as part of the French locution  "rayon vecteur"  (radius vector).  The whole expression is still used to identify a point in ordinary (Euclidean) space, as seen from a fixed origin.

As presented next, the term  vector  more generally denotes an element of a linear space  (vector space)  of an indefinite number of dimensions  (possibly infinitely many)  over any  scalar field  (not necessarily the real numbers).


(2006-03-28)   Vector Space over a Field  K
Vectors can be added, subtracted or  scaled.  The scalars form a field.

A  scalar  is an element of the field  K.  A  vector space  E  is a set with a well-defined addition  (the sum  U+V of two vectors is a vector)  and multiplication by a scalar  (a scaled vector   x U   is still a vector)  obeying the following rules:

  • (E, + )  is an Abelian group.  This is to say that the addition of vectors is an associative and commutative operation and that subtraction is defined as well  (i.e.,  there's a zero vector, neutral for addition, and every vector has an  opposite  which yields  zero  when added to it).
  • Scaling is compatible with arithmetic on the  field  K :

∀x∈K, ∀y∈K, ∀U∈E, ∀V∈E :

(x + y) U   =   x U  +  y U
x (U + V)   =   x U  +  x V
(x y) U   =   x (y U)
1 U   =   U
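As a sanity check, those axioms can be verified mechanically for the familiar example of tuples of real numbers under componentwise operations.  The following minimal sketch  (in Python; the helper names  add  and  scale  are ours, for illustration only)  does just that:

    # R^2 with componentwise operations: a concrete vector space over R.
    def add(U, V):                    # vector addition
        return tuple(u + v for u, v in zip(U, V))

    def scale(x, U):                  # multiplication by a scalar
        return tuple(x * u for u in U)

    U, V = (1.0, 2.0), (3.0, -1.0)
    x, y = 2.0, 5.0
    assert add(scale(x, U), scale(y, U)) == scale(x + y, U)      # (x+y)U = xU + yU
    assert add(scale(x, U), scale(x, V)) == scale(x, add(U, V))  # x(U+V) = xU + xV
    assert scale(x, scale(y, U)) == scale(x * y, U)              # (xy)U = x(yU)
    assert scale(1.0, U) == U                                    # 1U = U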


(2010-04-23)   Independent vectors.  Basis of a vector space.
The dimension is the largest possible number of independent vectors.

The modern definition of a vector space doesn't involve the concept of  dimension  which had a towering presence in the historical examples of vector spaces taken from Euclidean geometry:  A line has dimension 1, a plane has dimension 2, "space" has dimension 3, etc.

The concept of dimension is best retrieved by introducing two complementary notions pertaining to a set of vectors  B. 

  • B  is said to consist of  independent vectors  when all  nontrivial linear combinations  of the vectors of  B  are nonzero.
  • B  is said to  generate  E  when every vector of the space E is a linear combination of vectors of B.

A  linear combination  of vectors is a sum of  finitely many  of those vectors, each multiplied by a scaling factor  (called a  coefficient ).  A linear combination with at least one  nonzero  coefficient is said to be  nontrivial.

If  B  generates  E  and  consists of independent vectors,  then it's called a  basis  of E.   Note that the trivial space  {0}  has an empty basis  (the empty set does generate the space  {0}  because an empty sum is zero).
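Numerically, independence is a rank computation.  Here is a small sketch  (Python with numpy; the matrix is an arbitrary example of ours):

    import numpy as np

    # Columns are candidate basis vectors of R^3 (illustrative values).
    B = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [0., 0., 0.]])
    # The columns are independent iff the rank equals their number.
    print(np.linalg.matrix_rank(B))   # 2 : the third column is col1 + col2
    # They generate R^3 iff the rank equals 3, the dimension of the space.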

Proving that every nontrivial vector space has a basis requires the  Axiom of Choice  (in fact, the existence of a basis for any nontrivial vector space is  equivalent  to the  Axiom of Choice).

HINT:  Tukey's lemma  (which is equivalent to the axiom of choice)  says that there's always at least one maximal set in a family "of finite character".  That notion is patterned after the family of all the linearly independent sets in a vector space  (which, indeed, contains a set if and only if it contains all the finite subsets of that set).  A basis is a maximal set of linearly independent vectors.

Dimension theorem for vector spaces :

A not-so-obvious statement is that all bases of  E  can be put in one-to-one correspondence with each other;  they all have the same  cardinal  (finite or not).  That cardinal is called the  dimension  of the space E:  dim (E)

Dimension theorem for vector spaces


(2010-06-21)   Intersection and Sum of Subspaces.
A vector space included in another is called a  subspace.

A subset  F  of a vector space  E  is a  subspace  of  E  if and only if it is stable by addition and scaling  (i.e., the sum of two vectors of  F  is in  F  and so is any vector of  F  multiplied by a scalar).

It's an easy exercise to show that the intersection  F∩G  of two subspaces  F  and  G  is a subspace of  E.  So is the  Minkowski sum  F+G  (defined as the set of all sums  x+y  of a vector  x  from  F  and a vector  y  from  G).

Two subspaces of  E  for which   F∩G = {0}   and   F+G = E  are said to be  supplementary.  Their sum is then called a  direct sum  and the following compact notation is used to state that fact:

E   =   F ⊕ G

In the case of finitely many dimensions, the following relation holds:

dim ( F ⊕ G )   =   dim (F)  +  dim (G)

The generalization to nontrivial intersections is  Grassmann's Formula :

dim ( F + G )   =   dim (F)  +  dim (G)  -  dim ( F ∩ G )

A lesser-known version applies to spaces of finite codimensions:

codim ( F + G )   =   codim (F)  +  codim (G)  -  codim ( F ∩ G )
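Grassmann's formula can be checked numerically by representing each subspace by a matrix of generators; the dimension of a sum of subspaces is the rank of the juxtaposed generators.  A sketch  (Python with numpy; the sizes are arbitrary):

    import numpy as np
    rng = np.random.default_rng(0)

    dim = lambda M: np.linalg.matrix_rank(M)   # dimension of the column span
    F = rng.standard_normal((7, 3))            # generators of a subspace of R^7
    G = rng.standard_normal((7, 4))
    sum_dim = dim(np.hstack([F, G]))           # dim (F + G)
    # Grassmann :  dim (F ∩ G)  =  dim F + dim G - dim (F + G)
    print(dim(F), dim(G), sum_dim, dim(F) + dim(G) - sum_dim)   # 3 4 7 0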

Hermann Grassmann  (1809-1877)


(2010-12-03)  Linear maps.  Isomorphic vector spaces.
Two spaces are  isomorphic  if there's a linear bijection between them.

A function  f  which maps a vector space  E  into another space  F  over the same field  K  is said to be  linear  if it respects addition and scaling:

∀ x,y ∈ K,  ∀ U,V ∈ E,     f ( x U + y V )   =   x f ( U )  +  y f ( V )

If such a linear function  f  is  bijective,  its inverse is also a linear map and the vector spaces  E  and  F  are said to be  isomorphic :

E   ≅   F

In particular, two vector spaces which have the same  finite  dimension over the same field are necessarily isomorphic.


(2010-12-03)   Quotient  E/H  of a vector space E by a subspace H
The equivalence classes (or residues) modulo  H  can be called  slices.

If H is a subspace of the vector space E, we may consider the partition of E into sets  (let's call them slices)  of the form  x+H.  The set of such slices is clearly a vector space  (scaling a slice or adding up two slices yields a slice).  This vector space is denoted  E/H  and is called the  quotient  of E  by H.

When  E/H  has  finite dimension,  that dimension is called the  codimension  of  H.  A linear subspace of  codimension 1  is called a  hyperplane  of  E.

x+H  denotes the set of all sums  x+h  where  h  is an element of  H.   E/H  is indeed the quotient of  E  modulo the equivalence relation which defines as equivalent two vectors whose difference is in  H.

The canonical linear map which sends a vector  x  of  E  to the slice  x+H  is called the  quotient map  of  E  onto  E/H.
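When  E  carries an inner product, each slice  x+H  can be represented concretely by its projection on the orthogonal complement of  H  (one particular choice of a supplementary subspace).  A sketch  (Python with numpy; the subspace  H  is an arbitrary example):

    import numpy as np

    H = np.array([[1.0], [1.0], [0.0]])        # H = span{(1,1,0)} in R^3
    P = np.eye(3) - H @ np.linalg.pinv(H)      # projector that kills H
    quotient_map = lambda x: P @ x             # one representative per slice x+H
    x = np.array([2.0, 0.0, 5.0])
    h = np.array([3.0, 3.0, 0.0])              # an element of H
    print(np.allclose(quotient_map(x), quotient_map(x + h)))    # True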

A vector space is always isomorphic to the direct sum of any subspace  H  and its quotient by that same subspace:

E   ≅   H ⊕ E/H

Use this with  H  =  ker ( f )   to prove the following fundamental theorem:


(2010-12-03)  Fundamental theorem of linear algebra.  Rank theorem.
A vector space is isomorphic to the direct sum of the image and kernel  (French:  noyau)  of any linear function defined over it.

The  image  or  range  of a linear function  f  which maps a vector space  E  to a vector space  F  is a subspace of  F  defined as follows:

im ( f )   =   range ( f )   =   f (E)   =   { y ∈ F  |   ∃ x ∈ E,  f (x) = y }

The  kernel  (also called  nullspace)  of  f  is the following subspace of  E :

ker ( f )  =  null ( f )  =  { x ∈ E  |   f (x) = 0 }

The  fundamental theorem of linear algebra  states that there's a subspace of  E  which is isomorphic to  f (E)  and  supplementary  to  ker ( f )  in E.  This result holds for a finite or an infinite number of dimensions and it's commonly expressed by the following isomorphism:

f (E)  ⊕  ker ( f )   ≅   E

Proof :   This is a corollary of the above, since  f (E)  and  E / ker ( f )  are isomorphic:  a bijective linear map between them is obtained by associating  f (x)  with the residue class  x + ker ( f ).  Clearly, that association doesn't depend on the choice of  x  within its class.  QED

The above argument is itself an incarnation of the so-called first isomorphism theorem, as published by Emmy Noether in 1927.

Restricted to vector spaces of finitely many dimensions, the theorem amounts to the following  famous result  (of great practical importance).

Rank theorem  (or rank-nullity theorem) :

For any linear function  f  over a finite-dimensional space  E, we have:

dim ( f (E) )  +  dim ( ker ( f ) )   =   dim ( E )

dim ( f (E) )  is called the  rank  of f.  The  nullity  of  f  is  dim ( ker ( f ) ).

In the language of the matrices normally associated with linear functions:  The  rank  and  nullity  of a matrix add up to its number of  columns.  The rank of a matrix  A  is defined as the largest number of linearly independent columns  (or rows)  in it.  Its nullity is the dimension of its nullspace  (consisting, by definition, of the column vectors  x  for which  A x = 0).
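In those terms, the theorem is easy to check numerically  (a Python sketch using numpy and scipy; the matrix is an arbitrary example):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.]])               # maps R^3 into R^2
    rank = np.linalg.matrix_rank(A)            # dim of the image (the rank)
    nullity = null_space(A).shape[1]           # dim of the kernel (the nullity)
    print(rank, nullity, A.shape[1])           # 2 1 3 : rank + nullity = columns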

Wikipedia :   Fundamental theorem of linear algebra   |   Rank theorem   |   Rank of a matrix


(2006-03-28)   Module over a Ring  K
A vectorial structure where  division  by a scalar isn't "well defined".

A  module  obeys the same basic rules as a vector space, but its  scalars  are only required to form a ring;  a nonzero scalar need not have a reciprocal...

A module over  K  may be called a  K-module.  For example,   Q   is a   Z -module.  This is to say that the rationals form a module over the integers  (this particular example gave birth to the concept of an  "injective module").


(2007-11-06)   Normed Vector Spaces
A  normed vector space  is a linear space endowed with a  norm.

Vector spaces can be endowed with a function  (called  norm)  which associates to any vector  V  a real number  ||V||  (called the  norm  or the  length  of  V)  such that the following properties hold:

  • ||V||  is positive for any nonzero vector  V.
  • ||λV||  =  |λ| ||V||
  • || U + V ||   ≤   || U ||  +  || V ||

In this,  λ  is a  scalar  and  |λ|  denotes what's called a  valuation  on the field of scalars  (a valuation is a special type of one-dimensional norm; the valuation of a product is the product of the valuations of its factors).  Some examples of valuations are the  absolute value  of real numbers,  the  modulus  of complex numbers and the  p-adic metric  of p-adic numbers.

Let's insist:  The norm of a nonzero vector is always a  positive real number,  even for vector spaces whose scalars aren't real numbers.
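For the ordinary Euclidean norm on real tuples, the three axioms are easy to test numerically  (a Python sketch with numpy; the vectors are arbitrary examples):

    import numpy as np

    U, V, lam = np.array([1.0, 7.0]), np.array([3.0, -4.0]), -2.0
    norm = np.linalg.norm                      # Euclidean norm by default
    assert norm(V) > 0                                        # positivity
    assert np.isclose(norm(lam * V), abs(lam) * norm(V))      # homogeneity
    assert norm(U + V) <= norm(U) + norm(V)                   # triangle inequality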


(2009-09-03)   Two Flavors of Duality
Algebraic duality  & topological duality.

In a  vector space  E,  a  linear form  is a linear function which maps every  vector  of  E  to a  scalar  of the underlying field  K.  The set of all linear forms is called the  algebraic dual  of  E.  The set of all  continuous  linear forms is called the  [ topological ] dual  of  E.

With finitely many dimensions, the two concepts are identical  (i.e., every linear form is continuous).  Not so with infinitely many dimensions.  An element of the dual  (a continuous linear form)  is often called a  covector.

Unless otherwise specified, we shall use the unqualified term  dual  to denote the  topological dual.  We shall denote it  E*  (some authors use  E*  to denote the algebraic dual and  E'  for the topological dual).

The  bidual  E**  of  E  is the dual of  E*.  It's also called  second dual  or  double dual.

A  canonical  homomorphism  Φ  exists which immerses  E  into  E**  by defining  Φ(v),  for any element  v  of  E,  as the linear form on  E*  which maps every element  f  of  E*  to the scalar  f (v).  That's to say:

Φ(v) ( f )   =   f (v)
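That definition translates almost literally into code.  In this illustrative Python sketch  (all names are ours),  covectors are modeled as plain functions and  Phi  is a closure:

    Phi = lambda v: (lambda f: f(v))   # Phi(v) evaluates each covector at v

    v = (1.0, 2.0, 3.0)
    f = lambda u: u[0] + 10 * u[2]     # an arbitrary linear form (a covector)
    assert Phi(v)(f) == f(v)           # both sides equal 31.0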

If the canonical homomorphism is a bijection, then  E  is said to be  reflexive  and it is routinely identified with its  bidual  E**.

E   =   E**

If  E  has infinitely many dimensions, its  algebraic  bidual is  never  isomorphic to it.  That's the main reason why the notion of  topological  duality is preferred.  (Note that a Hilbert space is always reflexive in the above sense, even if it has infinitely many dimensions.)

Example of an algebraic dual :

If   E = R^(N)   denotes the space consisting of all real sequences  with only finitely many nonzero values,  then the  algebraic  dual of  E  consists of  all  real sequences without restrictions.  In other words:

E'   =   R^N

Indeed, an element  f  of  E'  is a linear form over  E  which is uniquely determined by the unrestricted sequence of scalars formed by the images of the elements in the  countable  basis  (e0 , e1 , e2  ... )  of  E.

E'  is a Banach space, but  E  is not  (it isn't complete).  As a result, an  absolutely convergent series  need not converge in  E.

For example, the series of general term   en / (n+1)²   doesn't converge in  E,  although it's absolutely convergent  (because the series formed by the norms of its terms is a well-known convergent real series).

Representation theorems for  [continuous]  duals :

A  representation theorem  is a statement that identifies in concrete terms some abstractly specified entity.  For example, the celebrated  Riesz representation theorem  states that the [continuous] dual of the  Lebesgue space  Lp  (an abstract specification)  is just isomorphic to the space  Lq  where  q  is a simple function of  p   (namely, 1/p+1/q = 1 ).

Lebesgue spaces are normally linear spaces with  uncountably many  dimensions  (their elements are functions over a continuum like  R or C).  However, the  Lebesgue sequence spaces  described in the next section are simpler  (they have only  countably many  dimensions)  and can serve as a more accessible example.

Dual space


(2012-09-19)   Lebesgue sequence spaces
lp  and  lq  are duals of each other when   1/p + 1/q  =  1

For  p > 1,  the linear space  lp  is defined as the subspace of  R^N  consisting of all sequences for which the following series converges:

( || x ||p )^p   =   ( || (x0 , x1 , x2 , x3 , ... ) ||p )^p   =   Σn  | xn |^p

As the notation implies,  ||.||p  is a  norm  on  lp  because of the following famous nontrivial inequality  (Minkowski's inequality)  which serves as the relevant  triangle inequality :

|| x+y ||p     ≤     || x ||p  +  || y ||p
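A quick numerical spot-check of Minkowski's inequality  (a Python sketch with numpy;  p  and the vectors are arbitrary):

    import numpy as np
    rng = np.random.default_rng(1)

    p = 3.0
    x, y = rng.standard_normal(100), rng.standard_normal(100)
    norm_p = lambda v: np.sum(np.abs(v) ** p) ** (1 / p)
    assert norm_p(x + y) <= norm_p(x) + norm_p(y)   # triangle inequality for ||.||p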

For the topology induced by that so-called  "p-norm",  the [topological] dual of  lp  is isomorphic to  lq ,  where:

1/p  +  1/q   =   1

Thus,  lp  is  reflexive  (i.e., isomorphic to its own bidual)  for any  p > 1.

Riesz representation theorem


(2009-09-03)   Tensorial Product and Tensors
E ⊗ F   is  generated  by tensor products.

Consider two vector spaces  E  and  F  over the  same  field of scalars  K.  For two  covectors  f  and  g  (respectively belonging to  E*  and  F*)  we may consider a particular linear form denoted  f ⊗ g  and defined over the  Cartesian product  E×F  via the relation:

f ⊗ g (u,v)   =   f (u)  g (v)

The binary operator  ⊗  thus defined from  (E*)×(F*)  to  (E×F)*  is called  tensor product.  (Even when  E = F,  the operator  ⊗  is  not  commutative.)
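In coordinates, the tensor product of two covectors is just the outer product of their coefficient rows, as this small numpy sketch illustrates  (the covectors are arbitrary examples of ours):

    import numpy as np

    f = np.array([1.0, 2.0])               # f(u) = u[0] + 2 u[1]   on R^2
    g = np.array([0.0, 1.0, 3.0])          # g(v) = v[1] + 3 v[2]   on R^3
    T = np.outer(f, g)                     # coefficient matrix of  f ⊗ g
    u, v = np.array([1.0, 1.0]), np.array([2.0, 0.0, 1.0])
    assert np.isclose(u @ T @ v, (f @ u) * (g @ v))   # (f⊗g)(u,v) = f(u) g(v)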

 Come back later, we're
 still working on this one...


(2015-02-21)   Graded vector spaces and supervectors.
Direct sum of vector spaces indexed by a monoid.

 Come back later, we're
 still working on this one...

When the indexing monoid is the set of natural numbers  {0,1,2,3,4...}  or part thereof, the  degree  n  of a vector is the  smallest  integer such that the direct sum of the subfamily indexed by  {0,1 ... n}  contains that vector.

Graded vector space


(2007-04-30)   [ Linear ]  Algebra over a Field K
An internal product among vectors turns a vector space into an algebra.

An  algebra  is the structure obtained when an internal multiplication is defined on the vector space  E  (the product of two vectors being a vector) which is both  scalable  and  distributive  (over addition).  That's to say:

∀x∈K, ∀y∈K, ∀U∈E, ∀V∈E, ∀W∈E :

(x y) (U V)   =   (x U) (y V)
U (V + W)   =   U V  +  U W
(V + W) U   =   V U  +  W U

The  associator  is defined as the following trilinear function.  It measures how the internal multiplication fails to be associative  (much like the bilinear  commutator  says how some multiplication fails to be commutative).

[ U , V , W ]   =   U (V W)  -  (U V) W
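For instance,  R^3  equipped with the cross product is an algebra whose associator is not identically zero, as this numpy sketch shows  (the helper name is ours):

    import numpy as np

    def associator(U, V, W):               # [U,V,W] = U(VW) - (UV)W
        return np.cross(U, np.cross(V, W)) - np.cross(np.cross(U, V), W)

    e1, e2, e3 = np.eye(3)                 # the standard basis of R^3
    print(associator(e1, e1, e2))          # [ 0. -1.  0.] : not associative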

If its internal product has a neutral element, the algebra is called  unital :

∃ 1 ,   ∀ U ,       1  U   =   U  1   =   U

By definition,  a  derivation  in an algebra is an endomorphism  D  obeying:

D ( U V )   =   D(U) V  +  U D(V)

Associative Algebras :

When the associator defined above is identically zero, the algebra is said to be  associative  (some authors have used the word  "algebra"  to denote only  associative  algebras,  including Clifford algebras).  In other words,  associative algebras  fulfill the additional requirement:

∀U∈E, ∀V∈E, ∀W∈E,       U (V W)   =   (U V) W

Weaker requirements include alternativity or mere power-associativity...

Alternative Algebras :

In general, a multilinear function is said to be  alternating  if its sign changes when the arguments undergo an  odd  permutation.  An algebra is said to be  alternative  when the aforementioned  associator  is alternating.

The  alternativity  condition is satisfied if and only if the following holds:

∀U, ∀V,       U (V V)   =   (U V) V     and     U (U V)   =   (U U) V

Octonions are a non-associative example of such an  alternative algebra.
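Octonion multiplication is short enough to implement via the Cayley-Dickson doubling formula, which makes alternativity  (and the failure of full associativity)  easy to test numerically.  A sketch in Python with numpy  (one common sign convention;  the helper names are ours):

    import numpy as np

    def conj(x):                           # Cayley-Dickson conjugation
        return np.concatenate([x[:1], -x[1:]])

    def mult(a, b):                        # (p,q)(r,s) = (pr - s*q, sp + qr*)
        if len(a) == 1:
            return a * b
        n = len(a) // 2
        p, q, r, s = a[:n], a[n:], b[:n], b[n:]
        return np.concatenate([mult(p, r) - mult(conj(s), q),
                               mult(s, p) + mult(q, conj(r))])

    rng = np.random.default_rng(2)
    U, V, W = rng.standard_normal((3, 8))  # three random octonions
    assoc = lambda a, b, c: mult(a, mult(b, c)) - mult(mult(a, b), c)
    print(np.allclose(assoc(U, U, V), 0))  # True  : left alternativity
    print(np.allclose(assoc(U, V, V), 0))  # True  : right alternativity
    print(np.allclose(assoc(U, V, W), 0))  # False : octonions aren't associative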

Power-Associativity :

The weakest useful form of associativity is  power-associativity  which states that the subalgebra generated by a single element is associative.  That's to say that the n-th power of an element is well-defined regardless of the order in which you may choose to perform the relevant multiplications:

U^1   =   U
U^2   =   U U
U^3   =   U U^2   =   U^2 U
U^4   =   U U^3   =   U^2 U^2   =   U^3 U
...
∀ i > 0 ,  ∀ j > 0 ,       U^(i+j)   =   U^i U^j

The number of ways to work out a product of  n  identical factors is equal to the  Catalan number  C(2n-2,n-1)/n :   1, 1, 2, 5, 14, 42, 132... (A000108)
Power-associativity  means that, for any given n, all those ways yield the same result.  The following special case  (n=3)  is  not sufficient:

∀U ,       U (U U)   =   (U U) U

The Three Standard Types of Subassociativity :

By definition,  a  subalgebra  is an algebra contained in another  (the operations of the subalgebra being restrictions of the operations in the whole algebra).  Any intersection of subalgebras is a subalgebra.  The subalgebra  generated  by a subset is the intersection of all subalgebras containing it.

The above three types of subassociativity can be fully characterized in terms of the associativity of the subalgebras generated by  1,  2  or  3  elements:

  • If all subalgebras generated by one element are associative,
    then the whole algebra is  power-associative.
  • If all subalgebras generated by two elements are associative,
    then the whole algebra is  alternative  (theorem of Artin).
  • If all subalgebras generated by three elements are associative,
    then the whole algebra is  associative  too.

Flexibility :

A product is said to be  flexible  when:

∀U, ∀V,       U (V U)   =   (U V) U

In particular,  anticommutative  products are flexible.  This is usually  not  classified as a form of subassociativity.

Alternative Algebras over an Arbitrary Field  by  R.D. Shafer  (1942).
Subassociative Groupoids  by  Milton S. Braitt  &  Donald Silberger   (2006).
Examples of Algebras that aren't Power-Associative  (MathOverflow, 2011-11-04).
Jordan algebras (1933)   |   Albert algebra (1934)
Power-Associative Rings (1947)  by  Abraham Adrian Albert (1905-1972).
New Results on Power-Associative Algebras (Ph.D. 1952)  by  Louis A. Kokoris (1924-1980).

(2015-02-14)   Lie algebras over a Field K
Anticommutative algebras obeying  Jacobi's identity.

Hermann Weyl (1885-1955) named those structures after the Norwegian mathematician Sophus Lie (1842-1899).

Traditionally, the basic "internal multiplication" in a Lie algebra is denoted by a square bracket  (called a  Lie bracket )  which must be  anticommutative  and obey the so-called  Jacobi identity,  namely:

[B,A]   =   - [A,B]       (in particular,  [A,A]  =  0 )
 
[A,[B,C]] + [B,[C,A]] + [C,[A,B]]   =   0

Thus, the operator  d  defined by   d(x)  =  [A,x]   is a  derivation, since :

[A,[B,C]]   =   [[A,B],C] + [B,[A,C]]

In a Lie algebra, the  associator  can be written:

[A,B,C]  =  [A,[B,C]] - [[A,B],C]  =  [A,[B,C]] + [C,[A,B]]  =  [[C,A],B]

Representation of a Lie Algebra :

The  bracket notation  is compatible with the key example appearing in quantum mechanics,  where the Lie bracket is obtained over an ordinary linear algebra of linear operators,  defined as the  commutator  for the functional  composition  of operators:

[ U , V ]   =   U o V  -  V o U

In quantum mechanics, the relevant linear operators are  Hermitian  (they're called  observables).  To make the Lie bracket an  internal  product among those Hermitian operators, we would have to multiply the left-hand side of the above defining relation by the imaginary unit  i  (or any real multiple thereof).  This complication is irrelevant to the theoretical discussion of the general case and we'll ignore it here.

If the Lie bracket is so defined, the  Jacobi identity  is a simple theorem, whose proof is left to the reader.
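That theorem is easy to spot-check numerically with random matrices  (a Python sketch using numpy):

    import numpy as np
    rng = np.random.default_rng(3)

    A, B, C = rng.standard_normal((3, 4, 4))     # three random 4x4 operators
    br = lambda X, Y: X @ Y - Y @ X              # the commutator bracket
    jacobi = br(A, br(B, C)) + br(B, br(C, A)) + br(C, br(A, B))
    print(np.allclose(jacobi, 0))                # True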

Conversely, an anticommutative algebra obeying the Jacobi identity is said to have a  representation  in terms of linear operators if...

 Come back later, we're
 still working on this one...

Lie algebra   |   Free Lie algebra   |   Lie group vs. Lie algebra   |   Lie algebra representation


(2015-02-19)   Jordan algebras are commutative.
Turning any  linear algebra  into a commutative one.

Those structures were introduced in 1933 by  Pascual Jordan (1902-1980).  They were named after him by A. Adrian Albert (1905-1972)  in 1946.

Just like commutators turn linear operators into a  Lie algebra,  a  Jordan algebra  is formed by using the anti-commutator  or  Jordan product :

U V   =   ½ ( U o V  +  V o U )

Axiomatically, a  Jordan algebra  is a commutative algebra  ( U V  =  V U )  obeying  Jordan's identity :   (U V) (U U)  =  U (V (U U)).

A Jordan algebra is always  power-associative.
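For instance, symmetric matrices under the Jordan product form a  (special)  Jordan algebra; the Jordan identity can be spot-checked numerically  (a Python sketch with numpy):

    import numpy as np
    rng = np.random.default_rng(4)

    sym = lambda M: M + M.T                      # build symmetric matrices
    U, V = sym(rng.standard_normal((4, 4))), sym(rng.standard_normal((4, 4)))
    jp = lambda X, Y: (X @ Y + Y @ X) / 2        # the Jordan product
    # Jordan's identity :  (U V)(U U)  =  U (V (U U))
    print(np.allclose(jp(jp(U, V), jp(U, U)), jp(U, jp(V, jp(U, U)))))   # True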

Jordan algebra (1933)   |   Pascual Jordan (1902-1980)


(2007-04-30)   Clifford algebras over a Field K
Unital associative algebras with a quadratic form.

Those structures are named after the British geometer and philosopher William Clifford (1845-1879) who originated the concept in 1876.

 Come back later, we're
 still working on this one...

Clifford Algebras  by John Baez  (2001).
Introduction to Clifford Algebra  (Geometric Algebra)  by John S. Denker  (2006).
A brief Introduction to Clifford Algebra  by  Silvia Franchini, Giorgio Vassallo & Filippo Sorbello  (2010)
 
Wikipedia :   Quadratic forms   |   Clifford algebras   |   Classification of Clifford algebras


(2015-02-23)   Involutive Algebras
A special linear involution is singled out  (adjunction or conjugation).

As the  adjoint  or  conjugate  of an element  U  is usually denoted  U*,  such structures are also called  *-algebras  (star-algebras).  The following properties are postulated:

 Come back later, we're
 still working on this one...

Wikipedia :   Involutive algebra


(2015-02-23)   Von Neumann Algebras
Compact operators resemble ancient infinitesimals...

John von Neumann (1903-1957)  introduced those structures in 1929  (he called them simply  rings of operators).  Von Neumann  presented their basic theory in 1936 with the help of Francis Murray (1911-1996).

A  Von Neumann algebra  is an  involutive algebra  such that...

 Come back later, we're
 still working on this one...

By definition,  a  factor  is a Von Neumann algebra with a trivial center  (which is to say that only the scalar multiples of identity commute with all the elements of the algebra).

Wikipedia :   Von Neumann algebras


(2009-09-25)   On multi-dimensional objects that are "not vectors"...

To a mathematician, the  juxtaposition  (or  cartesian product )  of several vector spaces over the same field  K  is  always  a vector space over that field  (as component-wise definitions of addition and scaling satisfy the above axioms).

When physicists state that some particular juxtaposition of quantities  (possibly a single numerical quantity by itself)  is "not a scalar", "not a vector" or "not a tensor"  they mean that the thing lacks an unambiguous and intrinsic definition.

Typically, a flawed vectorial definition would actually depend on the choice of a frame of reference for the physical universe.  For example, the derivative of a scalar with respect to the first spatial coordinate is "not a scalar"  (that quantity depends on what spatial frame of reference is chosen).

Less trivially, the gradient of a scalar  is  a physical covector (of which the above happens to be  one  covariant coordinate).  Indeed, the definition of a gradient specifies the same object  (in dual space)  for any choice of a physical coordinate basis.

Some physicists routinely  introduce  (especially in the context of General Relativity)  vectors  as "things that transform like elementary displacements" and  covectors  as "things that transform like gradients".  Their students are thus expected to grasp a complicated notion  (coordinate transformations)  before the stage is set.  Newbies will need several passes through that intertwined logic before they "get it".

I'd rather introduce the mathematical notion of a vector first.  Having easily absorbed that straight notion, the student may then be asked to consider whether a particular definition depends on a choice of coordinates.

For example, the  linear coefficient of thermal expansion  (CTE)  cannot be properly defined as a scalar  (except for isotropic substances);  it's a  tensor.  On the other hand, the related  cubic  CTE is always a scalar  (which is equal to the trace of the aforementioned CTE tensor).
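Numerically, with a  hypothetical  CTE tensor for an anisotropic crystal  (illustrative values only),  the cubic CTE is obtained as a trace:

    import numpy as np

    alpha = np.diag([5.0e-6, 8.0e-6, 2.1e-5])   # hypothetical linear-CTE tensor (1/K)
    print(np.trace(alpha))                      # cubic CTE, a scalar: 3.4e-05 per K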


(2007-08-21)   The "Geometric Calculus" of Hestenes
Unifying some notations of mathematical physics...

Building on similarities among several areas of mathematical physics, David Hestenes (1933-) has been advocating a denotational unification, which has gathered a few enthusiastic followers.

The approach is called  Geometric Algebra  by its proponents.  The central objects are called  multivectors.  Their coordinate-free manipulation goes by the name of  multivector calculus  or  geometric calculus,  which first appeared in the title of Hestenes' doctoral dissertation (1963).

That's unrelated to the abstract field of  Algebraic Geometry  (which has been at the forefront of mainstream mathematical research for decades).

Geometric Calculus  by  David Hestenes  (Oersted Medal Lecture, 2002)   |   Video (2014)
Geometric Algebra Research Group  (at the  Cavendish Laboratory ).
Excursions en algèbre géométrique  by  Georges Ringeisen.   |   Wikipedia :   Geometric Algebra
 
Videos :   Geometric Algebra  by  Alan Macdonald   0 | 1 | 2 | 3 | 4 | 5
Tutorial on Geometric Calculus  by  David Hestenes  (La Rochelle, July 2012).
