NOTES ON SIMPLICIAL HOMOLOGY
TABLE OF CONTENTS
  1. Preface
  2. Introduction To The Introduction
  3. Models, Abstractions & Examples
  4. The Presimplex
  5. The Simplex
  6. Groups, Abelian Groups, Homomorphisms & Cosets
  7. Oriented Simplexes
  8. Complexes
  9. Oriented Complexes
  10. Abelian Groups Of Chains
  11. The Boundary Operator & Poincare's Lemma
  12. The Inverse of Poincare's Lemma
  13. Homology Groups
  14. Betti Groups & Betti Numbers
  15. The Cohomology Concept & Duality
  16. Cohomology Groups
  17. Homology v. Homotopy
  18. References

CONNECTIONS WITH SIMPLICIAL HOMOLOGY
  1. Differential Forms
  2. Electromagnetic Theory
  3. Manifolds & DeRham Cohomology
  4. Clifford Rings & Algebras
  5. Projective Algebras
  6. Quantum Theory
  7. References

   Go To ToC
Preface

This is *not* a replacement for a good rigorous text; in fact, it should take several of them, and some independent research, to fill in what follows with full generality. This is merely an essay that makes a few unusual connections, things that I might say in a classroom. The approach here is different and a little more freewheeling than what an academic text must be, painting with a broad brush (or roller) and looking at simplicial homology with the questions: what is this stuff all about, where did it come from, and what is it good for? Some good references of varying difficulty are given below.

My intention is to be visually intuitive about the meanings of the mathematical abstractions, so as to provide a basis of geometrically inclined thought about what would otherwise be an abstraction of pure axiomatics, deprived of the fundamental thoughts that are behind the abstractions.

Much mathematics is taught and written in the theorem-proof manner without much attention paid to what motivates the abstractions. Having the fundamental nature of a geometer rather than an analyst, I've never found this a satisfactory situation. I suppose the assumption in the theorem-proof method is that the motivations are either irrelevant or supposed to be intuitively obvious. In the second case, most often, they are not. In the first case, the assumption would be just silly.

I don't know who the audience is for this, but it should work well for students of mathematics or physics. Professors of either could probably pass on it. This is because I will take a fairly simple structural approach, attempting to look at how the algebra is extracted from discrete approximations to continuous manifolds and their embeddings, and how things are then generalized and further abstracted. Connections with other algebraic structures and quantum theory may be of interest to some professionals - on the other hand, they may not be.


   Go To ToC
Introduction To The Introduction

Loosely speaking, a manifold [Wikipedia] is a topological space that "looks locally" like R^n.

Saying what the quoted phrase means leads to the standard definitions, that involve local homeomorphisms (topological equivalence relations) between neighborhoods of the manifold M and neighborhoods of R^n. Though simplicial homology is really about topological spaces and not manifolds, they are a good intuitive place to start. Though two dimensional manifolds are cheated of the general complexities of higher dimensional manifolds, intrinsic curvature, e.g., they are also a good place to start.

A few words on topology: there are two seemingly different meanings to the word topology. Mathematicians are introduced to topology by means of "point set topology" as defined in the previous link to [Appendix A]. From this the notions of limit point and accumulation point are defined for sequences of points in a space S endowed with a given topology. This eventually leads to the idea of topological equivalence through homeomorphisms, the invertible *continuous* maps between topological spaces whose inverses are also continuous. In category theory, the continuous maps are the morphisms associated with the category of topological spaces, and the homeomorphisms are its isomorphisms.

On the other hand, the second apparent meaning of "topology" is encoded in the popularized quip that a topologist is a mathematician who doesn't know the difference between a coffee cup and a doughnut. The idea being that the surface covering a coffee cup is continuously deformable into the surface covering a doughnut, also called the torus T^2.

The obvious connection between these two seemingly different ideas of topology is the concept of continuity. The second, popular conception is actually built from point set topology, which is why the point set definition is taken as *the* definition of a topology.

In its development, topology was conceived in both these views, and in both views the idea was to generalize geometry in such a way that a concept of nearness no longer required a metric or a norm.

That definition of topology is so general that one has to make further assumptions to create something useful mathematically, and those further assumptions can be made many different ways.

Point set topology becomes almost immediately useful in the mathematics of infinite dimensional linear spaces, e.g., the Banach spaces that are fundamental concepts of analysis.

Where a Euclidean norm is well defined for any element of an n-dimensional space, it is not well defined when n is unbounded. Concepts of convergence, and the attendant concepts of topology, have to be invented and added, and different topologies can be used in different ways for different purposes.

The continuity of homeomorphisms in topology is first between topological spaces, and the continuity of standard continuous manifolds arises from local homeomorphisms to and from an R^n structure equipped with a Euclidean structure. This relates the two apparently distinct notions of topology.

When point set topology is considered, it is often the local topological structure that is of interest, but when the manifold kind of topology is considered, the local Euclidean structure is already provided, and it is the global topology that becomes the focal point. In the following, it is the global structure that is emphasized.


   Go To ToC
Models, Abstractions & Examples

All that said, visualize two finite plane sectors, say interiors of the same Jordan curve, one above the other and not touching. Now, run a continuous band around the edges so that the top sheet is *smoothly* connected to the bottom sheet. The result is continuously deformable to a sphere S^2 - but for visual purposes, let's not deform it. Still, the sphere picture makes it more obvious that *any* closed path on this surface can be contracted to a point continuously. At this point we can say equivalently that any such closed path is "homologous to zero", or that any two such closed paths are "homotopically equivalent", the latter because any closed path can be continuously deformed into any other. I bring this up now as a basis for the abstractions that follow.

Now, from one sheet to another, punch a hole so that the top sheet is smoothly connected to the bottom sheet all around the hole. Notice that you are visualizing this as a two dimensional closed surface embedded in a three dimensional space. One might call these "Jordan surfaces" since they divide the R^3 into an interior and an exterior, and have the character of a deformed sphere S^2. Notice also that the "hole" can be seen as a continuation of the three dimensional space. You may have the idea that if this surface were embedded in a four dimensional space that the hole would be a continuation of that four dimensional space; that would be a good guess.

The first thing to notice is that while the first space was deformable to S^2, the space with the hole is deformable to T^2, and that, T^2 is not deformable to S^2, precisely because of the hole.

From here on assume unless explicitly stated otherwise that paths are closed and deformations are continuous.

In T^2 it is no longer true that any path is deformable into any other path. With regard to mutual connection through deformability, there are two disjoint classes of paths: those that can be deformed by contraction to a point, and those that cannot.

To see how this works, it might be instructive to do two things: 1) create prototypical paths and play with deforming them, 2) simplify the toroidal picture with a little topological surgery.

First fix a point P on the "crown of the torus", and send out a droid point from there that traces out a path returning to P.

If the droid completes an encircling of "the hole" before returning, its path cannot be contracted to a point.

The torus T^2 = T^1 X T^1 is a product of two circles: take a circle in E^3, extend a radius line from the center of the circle to some exterior point C. Swing the circle on the radius arm about C in a direction always orthogonal to the plane of the circle. With a complete circuit of the swinging, the first circle has swept out the surface of the torus. The radius of the first circle is the minor radius r, and the radial arm from C to the center of the first circle is the major radius R. The center of the first circle sweeps out the centroid of the torus, a circle of radius R.

Now, cut the torus through P in a plane orthogonal to the centroid, slicing the centroid only once, opening it up into a finite tubular section with length (2 pi)R:


	P                                                P
	--------------------------------------------------
	|                                                |
	|                                                |
 	|                                                |
	|                                                |
 	|                                                |
	--------------------------------------------------

   where each vertical line represents an "on edge" circle; its length
   as drawn is a diameter 2r.

Paths not contractible to a point can wrap around either of the two T^1 factors contained in T^2.

What if the path wraps around both of the T^1?

Winding inequivalences: a class of paths can actually be specified by a pair of integers (n, m) giving the winding numbers for each T^1 factor.
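The winding-number bookkeeping can be sketched in a few lines of Python. The names `compose` and `contractible` are my own, and representing a path class as a bare pair (n, m) is an assumption of the sketch, not a construction from the text:

```python
# Sketch: loop classes on the torus T^2 as winding-number pairs (n, m).
# Composing loops adds winding numbers componentwise; a loop contracts
# to a point exactly when both winding numbers vanish.

def compose(a, b):
    """Concatenate two loops given by their winding-number pairs."""
    return (a[0] + b[0], a[1] + b[1])

def contractible(a):
    """A class is contractible to a point iff it winds around neither T^1."""
    return a == (0, 0)

meridian = (1, 0)    # wraps once around "the hole"
longitude = (0, 1)   # wraps once the "long way" around

both = compose(meridian, longitude)   # wraps both T^1 factors: class (1, 1)
cancel = compose(meridian, (-1, 0))   # traversing backwards cancels the wrap

assert both == (1, 1)
assert contractible(cancel)
assert not contractible(meridian)
```

Note that composition here is commutative, reflecting the fact that these classes form an Abelian group (pairs of integers under addition).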

T^n and S^n

Connectedness and simple connectedness; multiple connectedness

Punching more holes; the genus; Euler's relation

Triangulation of a surface; simplicial decompositions and approximations


   Go To ToC
The Presimplex

The idea of a simplex is a generalization of a triangle, the "simplest polygon" of the plane E^2. A triangle has as "faces" the simplest (and only) polytope of dimension n=1, the line segment. At n=1, the very concept of "polygon/polytope" has been so trivialized that it is barely recognizable. While a triangle is the simplest polygon in dimension n=2, a tetrahedron is the simplest polytope for n=3. Similarly for n > 3: as a tetrahedron has triangular faces, so an n-dimensional simplex (with n+1 vertices) has (n-1)-dimensional faces. For the combinatorics of subdimensional structures of simplexes and polytopes in n dimensions, see the local page grafdim.html.

Two questions arise simultaneously as to the dimension of a topological space and its global "holiness", the latter of which has to do with its "hole structure", not its spiritual nature. It seems that this latter structure can be encoded in classes of paths; any path can be continuously deformed into a circle, and a circle can be approximated by polygons. The most efficient topological approximation is the triangle. A path in an n-dimensional space is always 1-dimensional, and a triangle is always the right minimal approximating polygon in any n-dimensional space. So the latter question of holiness really reduces to triangles representative of classes of paths that cannot be deformed into each other within the topological space.

The concept of "the dimension of a topological space", on the other hand, depends on the existence of an approximating local simplex, local in the sense of neighborhoods of any given point. In certain special cases that are the fodder of text book topology, the Brouwer-Urysohn dimension is a uniform concept (valid and the same for every point), and a topological invariant, i.e., preserved under homeomorphisms. As algebra revolves around the real numbers R, topology revolves about Euclidean spaces based on R^n, for n > 0. But, general topology is much wilder than that. The topological space may consist of any number of disconnected components.

A fairly general concept of connectedness can be defined by the following:


Let S be a topological space that is Hausdorff and p_1 and p_2 be two distinct points in S. If there exists a neighborhood of each, N(p_1) and N(p_2) respectively, such that N(p_1) Cap N(p_2) is not null, then, by definition, the points p_1 and p_2 are connectible.

If, for every pair of points p_1 and p_2 in S, p_1 and p_2 are connectible, then by definition S is connectible. The relation of connectibility is symmetric and reflexive, but not transitive.

This is more general than the usual definition of connectedness, which is what defines connected components of a topological space. In back of this definition is the idea that a "point connecting path" may be drawn between p_1 and p_2 which resides within S. This happens to be the same idea in back of the standard definition of connectedness, but there are obvious spaces that are connectible but not connected.

If p_1 and p_2 in S are connectible, p_1 and p_3 are connectible, and p_2 and p_3 are connectible, but there also exist neighborhoods N(p_1), N(p_2), N(p_3) that are mutually disjoint, we will say that (p_1, p_2, p_3) determines an oriented pretriangle, or an oriented presimplex of dimension 2. Similarly, for connectible and separatible p_1 and p_2, (p_1, p_2) determines a directed preline segment, or oriented presimplex of dimension 1. A point of S is a presimplex of dimension 0, both in and *over* S.

As paths can be approximated first by line segments and then by triangles, so surfaces can be approximated first by triangles and then by tetrahedra. Line segments joined together create triangles and triangles joined together create tetrahedra. This same idea we use for presimplexes in an inductive definition.

Let (p_1, p_2, ..., p_n) be an oriented presimplex of dimension (n-1) defined in a Hausdorff space S, and let p_(n+1) be in S.

A point p is said to be separatible from any set s, if there exists an open set U(s) containing s and an open set V(p) containing p where U(s) Cap V(p) is null.

Define something like a convex hull of a presimplex. Is convexity definable in this setting? Candidates for the hull: the union of the intersections of all the N(p_k), or the smallest set of points p where p is connectible to all the p_k simultaneously? This is left here as an open sketch.

A point p is said to be connectible to any set s, if there exists open sets U(s) and V(p) whose intersection is not null.

A point p is said to be separatible from a presimplex s if it is separatible from the set of its points.

A point p is said to be connectible to a presimplex s if it is connectible to the set of its points.

If p_(n+1) is separatible and connectible to a presimplex of dimension (n-1), then (p_1, p_2, ..., p_n, p_(n+1)) defines an oriented presimplex of dimension n.


   Go To ToC
The Simplex, Simplicial Subdivisions
   Let {p_0, p_1, ..., p_n} be an unoriented presimplex of dimension n
   in S.  For real a_k > 0, define a simplex over S by the set
   s of points p,

	               n
	s  :=  { p := Sum a_k p_k: Sum a_k = 1 }
	              k=0           k

   Visually, the "topological" simplex s is defined as the open interior
   of the fitted-together collection of points, lines, faces, ..., that
   would normally be seen by a geometer as "the simplex".  Notice that the
   values of the coordinates a_k are strictly greater than zero.  If they
   were allowed to be zero, s would also include that collection of points,
   lines, ....

   We'll call that larger set the closure of s, and denote it by ^s

   The idea of the presimplex is to prevent the points from being a system
   of degenerate points, and to create a situation where, e.g., a vertex
   of a tetrahedron is not found in the plane of its opposing triangular
   face.  If the topological space S is a locally compact metric space,
   the separation and linear independence of the points (vertices)
   can be accomplished by easier means, and the simplex is then constructed
   *in* S and not *over* it.

   We will interpret the points p_k as vectors so that addition and scalar
   multiplication of points are understood in the usual way.  The coordinate
   values a_k are called the barycentric coordinates for a point p.
   The metaphoric reference is of course to a physical "center of mass".
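As a concrete sketch, a barycentric combination can be evaluated directly; the function name and the example triangle below are illustrative choices, not anything fixed by the text:

```python
# Minimal sketch: a point of the simplex is the convex combination
# p = sum_k a_k p_k with a_k > 0 and sum_k a_k = 1 (barycentric coordinates).

def barycentric_point(vertices, weights):
    """Return sum_k a_k p_k for barycentric weights a_k over vertex tuples."""
    assert abs(sum(weights) - 1.0) < 1e-12, "weights must sum to 1"
    dim = len(vertices[0])
    return tuple(sum(w * v[i] for w, v in zip(weights, vertices))
                 for i in range(dim))

# A triangle in the plane; equal weights give the barycenter,
# the "center of mass" of the metaphor.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
center = barycentric_point(tri, [1/3, 1/3, 1/3])
assert all(abs(c - 1/3) < 1e-12 for c in center)
```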

   Denote a simplex s by the string of its vertices p_0 p_1 ... p_n; then
   the k-dimensional faces can be denoted by p_i(0) p_i(1) ... p_i(k),
   where the indices i(.) are distinct values drawn from 0, 1, 2, ..., n.
   The structure of a simplex does not include an ordering of its vertices,
   so the simplex is invariant under the group of permutations of its
   vertices.

   Counting how many k-dimensional faces there are for an n-dimensional
   simplex is then a classic combinatorial problem given as: how many
   ways can k+1 objects be separated from n+1 distinct objects?  The answer
   is the binomial coefficient (n+1 k+1).

   One can see fairly easily that the closure of s,

	^s  =  Cup            Cup          p_i(0) p_i(1) ... p_i(k)
	        k  {i(0), i(1), ..., i(k)}

  which takes the union of all the k-dimensional subsimplexes (faces);
  counting over all dimensions, there are 2^(n+1) - 1 of them in total.
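The counting can be checked mechanically: k-faces correspond to (k+1)-element subsets of the n+1 vertices. The helper `faces` below is an illustrative name, not a standard API:

```python
# The k-faces of an n-simplex are the (k+1)-element subsets of its n+1
# vertices: C(n+1, k+1) of them, and 2^(n+1) - 1 faces over all dimensions.

from itertools import combinations
from math import comb

def faces(vertices, k):
    """All k-dimensional faces (as vertex tuples) of the simplex."""
    return list(combinations(vertices, k + 1))

n = 3                       # a tetrahedron
verts = range(n + 1)
assert len(faces(verts, 2)) == comb(n + 1, 3) == 4    # triangular faces
assert len(faces(verts, 1)) == comb(n + 1, 2) == 6    # edges
total = sum(len(faces(verts, k)) for k in range(n + 1))
assert total == 2 ** (n + 1) - 1                      # 15 subsimplexes in all
```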

The simplex is a concept that appears in many aspects of geometry, regarding the honeycombing of n dimensional spaces of positive, negative and zero curvature, and of approximation theory. It is a simple abstraction that is very mathematically useful. For any n > 2, the n-simplex is the primary regular n-polytope in a space of n dimensions. The simplex reappears in linear programming, and in linear and nonlinear analysis in regard to existence proofs for solutions of certain equations through fixed point theorems. That a simplex is a convex body is of importance in many branches of mathematics, including the theory of holomorphic functions of several complex variables. The simplex appears in the Regge calculus for expressing the content of the Einstein equations of general relativity using simplicial decomposition.


   Go To ToC
Groups, Abelian Groups, Homomorphisms & Cosets

   A group, the primal object of all abstract algebra, is defined
   as a set G with,

	1) An associative binary operation '#' that maps G x G *onto* G;
	2) The existence in G of an identity e:

		For all a in G, a # e  =  a

	3) for all a in G, there exists a unique inverse a^-1 with respect
	   to the binary operation defined by,

		a # a^-1  =  e

See a few examples and elaboration in another file. The subject of group theory in all its grand general glory is enormous, so what follows is only the essential matter of what is needed here.
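As a hedged, concrete instance of the axioms, the integers mod 5 under addition can be checked against the definition by brute force; the names `N`, `G`, and `op` are of course my own:

```python
# The integers mod 5 under addition form an Abelian group.  The checks
# below mirror the definition: closure, associativity, identity, inverses,
# plus commutativity.

N = 5
G = range(N)

def op(a, b):
    """The group operation '#': addition mod N."""
    return (a + b) % N

assert all(op(a, b) in G for a in G for b in G)              # closure
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in G for b in G for c in G)                 # associativity
assert all(op(a, 0) == a for a in G)                         # identity e = 0
assert all(any(op(a, b) == 0 for b in G) for a in G)         # inverses exist
assert all(op(a, b) == op(b, a) for a in G for b in G)       # Abelian
```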

A subgroup of G is a subset G_0 of G that is itself a group under the group operation, i.e., closed under the operation and under taking inverses.

An equivalence relation:

   Let a, b be elements of G, and suppose that G has a subgroup G_0.
   Consider the condition that

		a # b^-1  is in G_0

   If a is in G_0, then b^-1  =  a^-1 # (a # b^-1) is in G_0, since
   G_0 is closed under # and inverses; then also, for the same reason,
   b is in G_0.

   If neither a nor b is in G_0, the condition nevertheless defines an
   equivalence relation on G.  Its equivalence classes, the sets

		G_0 # a  =  {g # a : g in G_0},

   are called the (right) cosets of G_0 in G.

   A group is called a commutative or Abelian group if the binary
   operation is commutative,

		a # b  =  b # a

   for all a, b in G.

For Abelian groups the binary operation is a generalization of the operation of addition for numbers, so the '+' notation is used.

A map h from a group G into a group G' which preserves the binary relationships of G is called a homomorphism. (It need not be one-to-one; a homomorphism of G into itself is called an endomorphism.) If markings with an apostrophe mean the cognately mapped structure in G',


		h: (a # b = c)   ->  (a' #' b' = c')

   A homomorphism that is 1-1 and onto is a group isomorphism.
   I.e., both groups then have the same abstract structure.

The important points in grasping the concept of a group homomorphism that is not an isomorphism (actually, of any homomorphism) are two. First, a homomorphism is a map of an algebraic structure *into* another algebraic structure; second, the map allows a collapse of structure, the idea being that the image of the homomorphism (metaphor: a visual projection) contains some, but not necessarily all, of the algebraic information (structure) of the domain of the mapping. Some elements of the domain other than the identity element may map under the homomorphism to the identity in the range, or image, of the homomorphism. In the following schematic,


                h: A ------------------->     B
                                         _______________
                                         |             |
                 _______  - - - > - - -  | - - -       |
                 |     |                 |     | | A'  |
                 || K ||- - - - > - - - -|- - -|e|     |
                 |     |                 |     | |     |
                 -------  - - - > - - -  | - - -       |
                                         |             |
                                         ---------------

the domain of h is A as a map into B, and onto its range A' in B. The set K in A such that h: K -> e, the identity in B, is called the kernel of the homomorphism h.
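A small sketch of kernel and collapse, using the homomorphism a -> a mod 6 from the integers onto Z_6; the name `h` echoes the schematic above, and the finite ranges in the checks are illustrative:

```python
# Sketch: h: Z -> Z_6 by h(a) = a mod 6.  It preserves addition, and its
# kernel (everything mapped to the identity 0 of Z_6) is the subgroup of
# multiples of 6 -- the "collapse of structure" of the schematic.

def h(a):
    return a % 6

# h preserves the group operation: h(a + b) == h(a) +' h(b) in Z_6.
assert all(h(a + b) == (h(a) + h(b)) % 6
           for a in range(-20, 20) for b in range(-20, 20))

# The kernel K = {a : h(a) = 0}, sampled over a finite window.
kernel = [a for a in range(-24, 25) if h(a) == 0]
assert kernel == [-24, -18, -12, -6, 0, 6, 12, 18, 24]
```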

presentation

representation
a homomorphism of the abstract group structure into a group of matrices. There is rarely a unique such homomorphism; the homomorphisms are divided into irreducible representations (IRREPS) and reducible representations (REPS). IRREPS are the building blocks of all REPS.

representing Abelian groups

Abelian groups are typified by cyclic groups and groups of translations in linear spaces.


   Go To ToC
Oriented Simplexes

   As a simplex is specified by an unordered simple set of vertices

	{p_0, p_1, ..., p_n},

   an oriented simplex is given by an (n+1)-tuple of points

	(p_0, p_1, ..., p_n),

   with a specific symmetry regarding the action of the group of
   permutations P(n+1) of its vertices.  As before, we will assume that
   points may be multiplied by scalars, and so then may (n+1)-tuples
   of them.

   Any element of P(n), the group of permutations on n symbols,
   may be classified as even or odd according to whether the number of
   two-point interchanges required to generate the particular
   permutation is even or odd; that any permutation can, in fact, be
   expressed by a sequence of interchanges or exchanges of points is
   another matter.

   The programming technique of alphabetizing an arbitrary list by such
   interchanges (a "bubble sort", which is *not* the most efficient
   sorting algorithm) is practically a proof, by construction, that an
   arbitrary permutation can be achieved by such interchanges.  The
   sequence of interchanges is not unique, nor is the number of
   interchanges; however, it is a theorem that the evenness or oddness
   of that number is an invariant of the consequent permutation.
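The bubble-sort argument can be made concrete: sort by adjacent interchanges, count them, and read off the parity. The function name is my own:

```python
# Count the interchanges performed by a plain bubble sort; the parity
# (even/odd) of that count is an invariant of the permutation, even
# though the count itself, and the sequence of swaps, are not.

def parity(perm):
    """Return +1 for an even permutation, -1 for an odd one."""
    p = list(perm)
    swaps = 0
    for i in range(len(p)):                 # plain bubble sort
        for j in range(len(p) - 1 - i):
            if p[j] > p[j + 1]:
                p[j], p[j + 1] = p[j + 1], p[j]
                swaps += 1
    return +1 if swaps % 2 == 0 else -1

assert parity((0, 1, 2)) == +1
assert parity((1, 0, 2)) == -1    # a single interchange: odd
assert parity((1, 2, 0)) == +1    # a 3-cycle is two interchanges: even
```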

   Let S(n) be the subset of P(n) that consists of even (symmetric)
   permutations, and A(n), the subset of odd (antisymmetric) permutations.
   Clearly,

			P(n)  =  S(n) Cup A(n)

   The group product in P(n) is by successive application of permutations.
   While the product of two symmetric permutations is symmetric, the
   product of two antisymmetric permutations is symmetric - not
   antisymmetric.  Symbolically:

			S(n) S(n)  =  S(n)
			A(n) A(n)  =  S(n)
			A(n) S(n)  =  A(n)

   Thus, S(n) is a subgroup of P(n), while A(n) is not.

   A nonfaithful representation of P(n+1) is given by the group
   homomorphism onto the multiplicative group of order 2 with the
   elements {+1, -1}, so that

	S(n+1) (p_0, p_1, ..., p_n)  ->  +1 (p_0, p_1, ..., p_n)
	A(n+1) (p_0, p_1, ..., p_n)  ->  -1 (p_0, p_1, ..., p_n)

   The group {+1, -1} defines the two orientations of the oriented
   simplex (p_0, p_1, ..., p_n).
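A sketch of orientation as a computation: reduce an oriented simplex to a canonical (sign, sorted vertices) pair, with the sign supplied by the parity of the sorting permutation. The function name and tuple representation are illustrative assumptions:

```python
# An oriented simplex is a vertex tuple up to even permutation.
# Canonical form: sort the vertices by adjacent interchanges and flip
# the sign once per interchange; the two orientations of a simplex land
# on (+1, sorted) and (-1, sorted).

def canonical(simplex):
    """Return (sign, sorted vertex tuple) for an oriented simplex."""
    verts = list(simplex)
    sign = 1
    for i in range(len(verts)):
        for j in range(len(verts) - 1 - i):
            if verts[j] > verts[j + 1]:
                verts[j], verts[j + 1] = verts[j + 1], verts[j]
                sign = -sign
    return sign, tuple(verts)

# An even permutation preserves orientation; an odd one reverses it.
assert canonical((0, 1, 2)) == (1, (0, 1, 2))
assert canonical((1, 2, 0)) == (1, (0, 1, 2))     # even: same orientation
assert canonical((1, 0, 2)) == (-1, (0, 1, 2))    # odd: reversed orientation
```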
			


   Go To ToC
Complexes

A complex is defined as a collection of simplexes such that for any simplicial member of the collection, its faces are also included in the collection. This implies that the faces of each of those faces are also included, etc. E.g., if the tetrahedral simplex


	{p_0, p_1, p_2, p_3}

   is a member of the complex, then so are the 4 triangular faces

   {p_0, p_1, p_2}, {p_0, p_1, p_3}, {p_0, p_2, p_3}, {p_1, p_2, p_3}, 

   and since each of these is a member, then so are the 6 lines that are
   faces of these triangles:

		   {p_0, p_1}, {p_0, p_2}, {p_0, p_3},

		   {p_1, p_2}, {p_2, p_3}, {p_3, p_1}

   Then, of course, also the points which are the faces of the lines:

			{p_0}, {p_1}, {p_2}, {p_3}.

This is to say that every member of a complex comes with all of its simplicial substructure.
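The closure property lends itself to a short sketch: generate every face of every member. Representing unoriented simplexes as frozensets of vertex labels, and the name `closure`, are assumptions of convenience here:

```python
# Smallest complex containing the given simplexes: include, for every
# member, all nonempty subsets of its vertex set as faces.

from itertools import combinations

def closure(simplexes):
    """Close a collection of unoriented simplexes under taking faces."""
    complex_ = set()
    for s in simplexes:
        s = tuple(sorted(s))
        for k in range(1, len(s) + 1):
            complex_.update(frozenset(f) for f in combinations(s, k))
    return complex_

tetra = closure([{0, 1, 2, 3}])
assert frozenset({0, 1, 2}) in tetra      # a triangular face is present
assert frozenset({2, 3}) in tetra         # an edge is present
assert len(tetra) == 15                   # 4 points + 6 lines + 4 triangles + 1
```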

As a simplex can be decomposed as a sum of lesser dimensional forms, so also can complexes, with orientation again providing +1 and -1 multiplication factors of terms in sums.


   Go To ToC
Oriented Complexes

If every member of a complex is an oriented simplex, the complex is an oriented complex.

As the vertices of a simplex can function as the basis of a module that is an Abelian group, so can the members of a complex K function similarly. If g_k are elements of a group G, and s_k are elements of a complex K, then,


		M(G, K)  =  {m : m = Sum g_k s_k}
		                      k

   is a module over G.

If K is an oriented complex, where we understand that multiplication of any simplex by +1 is simply the multiplicative identity, while multiplication by -1 is reversal of orientation, a natural Abelian group to consider is J, the group of integers.


   Go To ToC
Chains

Let K be an oriented complex, and M(J, K) a module over J with basis in K. Consider a submodule of M(J, K) of linear combinations in J of n-dimensional simplexes for some n. This is an Abelian subgroup, with elements of the form


		L  =  Sum k_j s_j
		       j

   where k_j are in J, s_j are in K and all s_j are n-dimensional.
   We call these n-dimensional chains, and denote the Abelian group of
   them by L^n(K).

   We take the zero element of M(J, K), "0" as the zero element of L^n(K)
   for all n = 0, 1, ....  In L^n(K), the zero element is considered to
   be an n-dimensional chain, and note that

		(p_0, p_1) + (p_1, p_0)  =  0
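Chains can be sketched as dictionaries from canonical vertex tuples to integer coefficients, with orientation carried as a sign so that oppositely oriented simplexes cancel. The helper names are illustrative:

```python
# A chain is an integer combination of oriented simplexes.  Reducing each
# simplex to (sign, sorted vertices) makes (p_0, p_1) + (p_1, p_0) collapse
# to the zero chain, as required.

def canonical(simplex):
    """Return (sign, sorted vertex tuple) for an oriented simplex."""
    verts, sign = list(simplex), 1
    for i in range(len(verts)):
        for j in range(len(verts) - 1 - i):
            if verts[j] > verts[j + 1]:
                verts[j], verts[j + 1] = verts[j + 1], verts[j]
                sign = -sign
    return sign, tuple(verts)

def add_simplex(chain, simplex, coeff=1):
    """Add coeff * simplex to a chain (dict: sorted tuple -> integer)."""
    sign, key = canonical(simplex)
    chain[key] = chain.get(key, 0) + sign * coeff
    if chain[key] == 0:
        del chain[key]                     # drop vanished terms
    return chain

c = {}
add_simplex(c, (0, 1))      # (p_0, p_1)
add_simplex(c, (1, 0))      # (p_1, p_0): the opposite orientation
assert c == {}              # they cancel: the zero chain
```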


   Go To ToC
The Boundary Operator & Poincare's Lemma

The boundary operator is a fairly simple, yet meaty and fecund concept that lies at the core of simplicial homology, and homology in general, and becomes important in many other areas, some of which I will describe below as connections.

To define the boundary of a chain, we define first a boundary operator and its action on an oriented n-dimensional simplex.

   Let a boundary operator d be defined as follows:

	For L  =  (p_0, p_1, ..., p_n)

	dL  :=  Sum (-1)^k (p_0, p_1, ..., |p_k|, p_(k+1), ..., p_n)

		where |p_k| means remove the element from the n-tuple
	        leaving a (n-1)-tuple.

	d{p_0}  :=  1

   For example,  if L = (p_0, p_1, p_2, p_3), a tetrahedron,

                                   p_0
                                  /\
                                 /| \
                                / |  \
                               /  |   \
                              /   |    \
                         p_1 /____|_____\ p_3
                             \    |     /
                              \   |    /
                               \  |  /
                                 \|/
                                 p_2

	   dL  =  (p_1, p_2, p_3) - (p_0, p_2, p_3)
		+ (p_0, p_1, p_3) - (p_0, p_1, p_2)

   The boundary of L is an oriented sum of the faces of L.  Looking at the
   specifics of the orientation, we can keep track of things by using a
   "right hand rule" familiar to all physicists, curling the fingers of
   RH with the progression of verticies in a face triangle, the thumb
   indicates the orientation of that face.  This done, and the orientation
   reversals taken into account, dL is a sum of faces with orientations
   that all point inward, into the simplex.

   Notice that we obtain a face of the simplex by deleting a vertex from
   the simplex, and in the process delete all lines going to or coming
   from that vertex.
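The definition of d can be transcribed almost verbatim; returning the resulting chain as a dictionary from faces to coefficients is an illustrative representation, not a standard one:

```python
# The boundary operator on an oriented simplex:
#   d(p_0, ..., p_n)  =  sum_k (-1)^k (..., p_k deleted, ...)
# The result is a chain, here a dict mapping each face to its coefficient.

def boundary(simplex):
    """Apply d to one oriented simplex, given as a vertex tuple."""
    result = {}
    for k in range(len(simplex)):
        face = simplex[:k] + simplex[k + 1:]   # delete vertex k
        result[face] = result.get(face, 0) + (-1) ** k
    return result

# The tetrahedron example from the text:
dL = boundary((0, 1, 2, 3))
assert dL == {(1, 2, 3): 1, (0, 2, 3): -1, (0, 1, 3): 1, (0, 1, 2): -1}
```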

From the definition of d acting on an n-simplex, and from the result above (a 2-chain) of applying d to a 3-simplex, and also from a desire to be able to apply d more than once, it becomes necessary to make the assumption that d is a linear operator, so that for


			L  =  Sum k_j s_j
			       j

			dL  :=  Sum k_j ds_j
				 j

   This completes the assumed properties of the boundary operator d
   in its action on the module M(J, K).

The first thing that becomes obvious is that d maps n-chains to (n-1)-chains, and so


			d L^n(K)  ->  L^(n-1)(K)

Since the image under d of a simplex is a sum of faces, never a single simplex, the image of d is not all of L^(n-1)(K). The assumed linearity of d then implies immediately that d is a homomorphism from the group L^n(K) onto a proper subgroup of L^(n-1)(K).


   A cycle is defined as a chain L, where dL = 0.  For any L 
   it is a theorem, often called Poincare's Lemma, that d(dL) = 0.
   This is most generally recited as "The boundary of a boundary
   vanishes".  E.g., Consider a ball in n-space.  Its boundary is an
   (n-1)-sphere.  The boundary of any sphere vanishes.

   If it is true for a simplex, then it will be true for any chain,
   and therefore more generally true for any element of the module
   M(J, K).  Although I want to avoid the standard theorem-proof
   routine, the focus on this result may be worth it.  The fact that
   the proof is trivial by a classic conclusion doubles the worth.

   Proof:

   Using the definition given for the action of d on a simplex,
   and iterating it 

	dL  :=  Sum (-1)^k (p_0, p_1, ..., |p_k|, p_(k+1), ..., p_n)

   Then
	d^2 L  =

   Sum Sum (-1)^j (-1)^k (p_0, p_1, ..., |p_k|, p_(k+1), ..., |p_j|, ..., p_n)
    j   k

	=  Sum Sum (-1)^(j+k) (p_0, p_1, ..., |p_k|, p_(k+1), ..., |p_j|, ..., p_n)
	    j   k

   A classic point in many a proof has been reached: each (n-2)-face,
   with both p_j and p_k deleted, appears exactly twice in the double
   sum, once for each order of deletion.  When j > k, deleting p_k first
   shifts the position index of p_j down by one, so the two terms carry
   opposite signs and the double sum cancels pairwise.  Extend the
   result by linearity to L^n(K) and then also to M(J, K).

   QED
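The lemma invites a mechanical check: implement d, extend it linearly, and verify that d(dL) vanishes for simplexes of several dimensions. Note that with the convention d{p_0} := 1 above, a vertex maps here to the empty tuple with coefficient 1, and the cancellation still goes through. Names and representation are illustrative:

```python
# Numerical check of Poincare's lemma d(dL) = 0.

def boundary_simplex(simplex):
    """d on one simplex: delete vertex k with sign (-1)^k."""
    return {simplex[:k] + simplex[k + 1:]: (-1) ** k
            for k in range(len(simplex))}

def boundary_chain(chain):
    """Linear extension of d to chains (dict: vertex tuple -> coefficient)."""
    out = {}
    for simplex, coeff in chain.items():
        for face, sign in boundary_simplex(simplex).items():
            out[face] = out.get(face, 0) + coeff * sign
    return {s: c for s, c in out.items() if c != 0}   # drop vanished terms

# The double boundary of a line, triangle, tetrahedron, and 4-simplex
# all collapse to the zero chain.
for s in [(0, 1), (0, 1, 2), (0, 1, 2, 3), (0, 1, 2, 3, 4)]:
    assert boundary_chain(boundary_chain({s: 1})) == {}
```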

Now, from Poincare's lemma we see that the sum of two, and therefore any number of cycles is a cycle. This tells us that the set of cycles in L^n(K) is a subgroup of L^n(K).

Let Z^n(K) be the subgroup of cycles in L^n(K). [Much of the original group theoretical work was written in German, and certain symbols that have become standardized are mnemonics in German. The symbol 'Z' comes from the German "Zyklus" for "cycle", as the standard 'E' for an identity element in an abstract group comes from "Einheit" for "unit"; of course, the German nomenclature came first.]

Formally (and finally) we say (and define) that a cycle z in Z^n(K) is homologous to zero in K if it is the boundary of some (n+1)-chain, z = dc, and write


			z  ≊  0

The standard symbol for this equivalence relation (clearly symmetric, reflexive and transitive, as required) is either a single flattened sine curve or two of them, one over the other, slightly separated. My symbology is completely unused elsewhere.


   Go To ToC
The Inverse of Poincare's Lemma


   Go To ToC
The Cohomology Concept & Duality

Throughout mathematics, structures have dual structures where an exchange of a given structure and its dual leaves the mathematical statements invariant. That is the general mathematical concept of duality.

One might say that mathematics is riddled with the single and only group of order two as a group of form invariance. Equivalently, there may be an obsession with isomorphic involutions of order two.

Perhaps, since mathematics is clearly I would think, a work of art and not really the discovery of some preexisting Platonic ideal, this is because mathematicians are mostly dualistically minded, with some innate or cultural sense of balance. There is a certain inertia to cultural thought, and mathematics does have its own culture by force of its history. The great innovations of mathematics, as in any of the sciences, have occurred in defiance of the common wisdom - which *is* that cultural inertia.

Yet, dual structures are difficult to explain away this way, and tend to give credence to the Platonic viewpoint that mathematicians *discover* rather than *invent*. The structure of mathematics may very well be a product of the commonalities of the human nervous system and its perceptual apparatus. I happen to believe this to be true - at the moment.

A few examples of dualities:
  1. set theory
  2. linear spaces - Banach -> Hilbert
  3. manifolds and the spaces of functions defined on them

What should be clear is that a structure and its dual are not necessarily equivalent; the point of the dual structure is that in some manner, peculiar to the formal circumstance, a structure and its dual conspire together to create an invariant. The abstract symmetry consists of exchanging the structure with its dual and, in so doing, leaving the invariant actually invariant. A prototypical example of this is in the differential geometry of metrically related "contravariance" and "covariance" of tensorial geometric objects.

To tease that a bit further: generally, there is a group G of transformations that acts on the structure and induces a dual action of a "dual group" on the dual structure. In the appropriate combination of structure and dual structure, under the actions of these two groups, invariant(s) are found. These are the "first invariant(s)". Then, in addition, it is found that the invariant(s) are also invariant under the exchange of the structure and its dual; hence, the "invariance of the invariant" under duality transformations. This is as much a duality of formalism and notation as it is of anything else. The question of how much notation itself influences the thought processes and development of mathematics is a different and interesting question in itself.
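As a toy illustration of this "invariance of the invariant" (all the numbers and function names below are invented for the sketch): let an invertible 2x2 matrix g act on vectors, and let the induced dual action on covectors be by the inverse of g acting on the right; the pairing <f, v> is then the invariant.

```python
# A minimal sketch: vectors transform by g, covectors by g's inverse
# (on the right), and the pairing between them is unchanged.

def mat_vec(m, v):
    """Act on a (column) vector by a 2x2 matrix."""
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

def vec_mat(f, m):
    """Act on a (row) covector by a 2x2 matrix on the right."""
    return [sum(f[i] * m[i][j] for i in range(2)) for j in range(2)]

def pairing(f, v):
    """The invariant pairing <f, v>."""
    return sum(a * b for a, b in zip(f, v))

g     = [[2, 1], [1, 1]]     # det = 1, so g is invertible
g_inv = [[1, -1], [-1, 2]]   # its inverse

v, f = [3, 5], [7, 2]
v2, f2 = mat_vec(g, v), vec_mat(f, g_inv)

print(pairing(f, v), pairing(f2, v2))   # 31 31: the pairing is invariant
```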

Morphologically, the existence of any duality depends on both concepts of "identity" (upon which "equivalence" depends) and "form invariance", which, strictly speaking, are metamathematical concepts.

Spaces of functions on a topological space. Cohomology and dual linear spaces.


   Go To ToC
Cohomology Groups


   Go To ToC
Homology v. Homotopy


   Go To ToC
CONNECTIONS
Differential Forms

Diffeomorphisms

Ref Tensor Darboux Classes & contravariant covariant distinctions


   Go To ToC
Electromagnetic Theory


   Go To ToC
Manifolds & DeRham Cohomology

Ref Manifold The inverse Poincare Lemma

Clifford Rings & Algebras

Projective Algebras


   Go To ToC
Quantum Theory

Mechanics can be divided into two areas or problems. The first is kinematics, whose problem is to specify the possible states that a physical system can exist in; the second is dynamics, whose problem, given some state at a specific time, is to specify the state at some future time.

In its Hamiltonian formulation, classical mechanics speaks of its observable quantities, such as energy and angular momentum, as measurable (and differentiable) functions of primary variables q_k and p_k, positions and momenta, each of which is construed as a function of time. The (q_k, p_k) coordinatize a manifold of the allowable states of the system under consideration, and that manifold is called phase space. A primary function H(q, p) of the q_k and p_k, and possibly time, called "The Hamiltonian" represents, in nonrelativistic physics, the energy of a system. [First order equations; Poisson brackets; symplectic structure.]
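Hamilton's first order equations can be sketched numerically. Here is a hedged toy example (the unit mass and spring constant are made-up choices) for the harmonic oscillator H(q, p) = p^2/(2m) + k q^2/2, integrated by the symplectic Euler method, which respects the symplectic structure of phase space:

```python
import math

# Hamilton's first order equations:
#     dq/dt =  dH/dp = p/m,     dp/dt = -dH/dq = -k*q
# integrated with symplectic Euler (update p first, then q with new p).

m, k, dt = 1.0, 1.0, 0.001
q, p = 1.0, 0.0                 # initial state in phase space

for _ in range(1000):           # evolve to t = 1
    p -= k * q * dt             # dp = -(dH/dq) dt
    q += (p / m) * dt           # dq =  (dH/dp) dt

# For m = k = 1 the exact solution is q(t) = cos t, p(t) = -sin t
print(q, math.cos(1.0))
print(p, -math.sin(1.0))
```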

In Quantum Mechanics (QM), there is a canonical quantization where the fundamental pairs q_k and p_k are mapped to linear selfadjoint operators Q_k and P_k which act on a separable, infinite dimensional projective complex Hilbert space, which interpretively replaces the often nonlinear phase space of states. For the perhaps surprisingly intricate functional analysis of this situation see [Neumann 1932]. In addition, the Poisson brackets get mapped to commutators of operators.

Quantum Mechanics is a large subject with many difficult and arcane problems that I don't want to explain further here. Instead, I want to concentrate on one particular aspect of the standard relationship between classical and quantum mechanics. This centers around two ideas. 1) A classical variable is replaced with a formally Hermitean (technically selfadjoint) operator acting on a linear complex space, and 2) that such an operator has a well defined eigenvalue problem to which a physical interpretation is attached.

Regarding the first idea, while a physical variable "a" may (and usually does) take on a continuum of values, its quantum counterpart, a formally Hermitean operator "A", may or may not take on those values. In a finite dimensional space, there is no difference between "formally Hermitean" and "selfadjoint", and from here on, I'll make the assumption of finite dimensions. The concepts will not be degraded by this. The interpretation is contained in the second idea.


   Let H be a finite dimensional Hilbert space and let |.> be
   elements of H.  For a Hermitean linear operator A acting on H,
   a well defined eigenvalue problem exists with eigenvalues
   a_k and eigenvectors |a_k>:

			A |a_k>  =  a_k |a_k>

   If in any representation, A is represented by a matrix equal to its
   transpose complex conjugate (A!), A is Hermitean.  This is the
   appropriate extension of a symmetric real matrix, equal to its transpose,
   to complex matrices.  As one is guaranteed that a symmetric matrix
   will have real eigenvalues, so is one guaranteed the same result
   for Hermitean matrices.

   One is also guaranteed that a set of |a_k> exists that is an
   orthonormal basis for H.
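These guarantees are easy to check numerically. The 2x2 Hermitean matrix below is a made-up example, and numpy's eigh routine is used for the eigenvalue problem:

```python
import numpy as np

# A made-up 2x2 Hermitean matrix: equal to its conjugate transpose.
A = np.array([[2, 1j],
              [-1j, 2]])

assert np.allclose(A, A.conj().T)        # Hermitean check

evals, evecs = np.linalg.eigh(A)         # eigh is meant for Hermitean matrices
print(evals)                             # [1. 3.]: the eigenvalues are real

# Columns of evecs are the |a_k>; orthonormality <a_k|a_j> = delta_kj:
print(np.allclose(evecs.conj().T @ evecs, np.eye(2)))   # True
```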

   For ease of discussion, it will be assumed that no two of the
   eigenvalues are equal (a nondegenerate spectrum: every eigenvalue
   has multiplicity one).

   A principle of measurement:
   Whenever the variable associated to A is measured, the only values
   that can appear are the eigenvalues of A.

   The Principle of Superposition:
   This is simply a pompous name that describes the linearity of the
   Hilbert space.  Any allowed state of the system is represented by an
   element of the Hilbert space, and it is then trivial that any linear
   combination of states is also a state.

   This becomes interesting only with regard to the physical interpretation
   of the formal mathematics.  Since the |a_k> are an orthonormal basis
   for H, an arbitrary state |f> can be represented as,

		|f>  =  Sum c_k |a_k>
		         k

   where c_k are complex numbers.

   The dual space of H, the space of linear functionals on H, is completely
   isomorphic to H.  Visualizing the matrices that appear in representations,
   the elements of H, the |.>, appear as "column vectors"; the dual
   map sending matrices into their complex conjugate transposes sends
   "column vectors" into complex conjugate "row vectors", which we notate
   abstractly as <.|, so the orthonormality of the eigenbasis
   |a_k> is expressed as

			<a_k||a_j>  =  delta_(kj)

   a Kronecker delta.  As a standard convenient shorthand in this notation
   of Dirac, write,
   
			<a_k|a_j>  =  delta_(kj)

   Also,

		<f|  =  Sum c_k* <a_k|
		         k

   where c_k* is the complex conjugate of c_k.
   Then, combining the eigenvalue equation and the superposition idea,
   
		A |f>  =  Sum c_k a_k |a_k>
		           k

		<f| A |f>  =  Sum c_k c_j* a_k <a_j|a_k>
		              k,j

		<f| A |f>  =  Sum |c_k|^2 a_k
		               k

   |c_k|^2 >= 0 being the square of the modulus of c_k.  In order,
   conceptually, to reconcile the measurement principle with the
   superposition principle, one can interpret |c_k|^2 as a probability
   distribution, saying that when the system is in the state |f>,
   the result on measurement of the variable A will yield a_k with
   a priori probability |c_k|^2.
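This bookkeeping can be checked directly; the matrix standing in for A and the state |f> below are made-up examples:

```python
import numpy as np

# Check <f|A|f> = sum_k |c_k|^2 a_k for a made-up Hermitean A
# and an arbitrary normalized state |f>.
A = np.array([[2, 1j],
              [-1j, 2]])
a_k, V = np.linalg.eigh(A)          # eigenvalues a_k; columns of V are |a_k>

f = np.array([1 + 1j, 2 - 1j])
f = f / np.linalg.norm(f)           # normalize so that <f|f> = 1

c = V.conj().T @ f                  # expansion coefficients c_k = <a_k|f>
lhs = (f.conj() @ (A @ f)).real     # <f|A|f>
rhs = np.sum(np.abs(c)**2 * a_k)    # sum_k |c_k|^2 a_k

print(np.allclose(lhs, rhs))                      # True
print(np.allclose(np.sum(np.abs(c)**2), 1.0))     # True: probabilities sum to 1
```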

   The standard theory then normalizes all physical states represented
   by the |f> in H,

			<f|f>  =  1

   so that the states then actually reside in a projective Hilbert space,
   where for any othonormal basis |a_k>

		<f|f>  =  Sum |c_k|^2  =  1
		           k

   This effectively normalizes the probability distribution |c_k|^2.

   Say we are talking about a position operator, Q.  Its eigenvalues
   q_k are those that are classically observed.  The quantization
   procedure then attaches each of these values to a member of an
   orthonormal eigenbasis of a projective Hilbert space, or more
   generally, of a Hilbert space.  In a Hilbert space of dimension n,
   the tips of the eigenbasis vectors can be connected to draw the
   edges of an (n-1)-simplex.  The eigenvectors |q_k> represent the
   vertices of the simplex.

   The normalized states then range over the *closed* simplex, closed
   since the coefficients c_k are separately allowed to be zero.
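A quick sketch of the closed-simplex picture (the amplitudes below are made up): the probabilities |c_k|^2 are nonnegative and sum to 1, i.e. they are barycentric coordinates of a point of the closed standard simplex, with boundary points arising exactly when some coefficient is zero.

```python
# Made-up complex amplitudes of a normalized state; one is zero,
# putting the corresponding point on the boundary of the simplex.
cs = [0.6 + 0.0j, 0.0 + 0.8j, 0.0]
ps = [abs(c)**2 for c in cs]

print([round(p, 12) for p in ps])      # [0.36, 0.64, 0.0]
print(abs(sum(ps) - 1.0) < 1e-12 and all(p >= 0 for p in ps))   # True
```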

   A new feature, however, is that the Hilbert space is a *complex*
   space of n complex dimensions, and 2n real dimensions.  The
   standard probability interpretation speaks only to the moduli of
   the c_k, so we have left the arguments, or phases dangling.

   If we confine ourselves to this one operator, those phases are
   irrelevant, and can be ignored.  It should be emphasized that at this
   point we are only in the arena in which the game of QM is played,
   and that we are not yet playing the full blown game.  For that
   one has to see how things work with operators that do not commute
   with each other.  Staying with commuting operators, and noting that
   the above simplex is only for a single "coordinate axis", suppose
   we have a set of m mutually commuting position operators Q(r),

		Q(r) Q(s) - Q(s) Q(r)  =  0,

   for all r, s = 1, 2, 3, ..., m.
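A small numerical sketch of what commuting buys us (the matrices are made up, with the second a polynomial in the first so that they commute by construction): commuting Hermitean operators can be diagonalized in a common eigenbasis.

```python
import numpy as np

# Two made-up commuting Hermitean operators: Q2 = 2*I + Q1,
# so [Q1, Q2] = 0 automatically.
Q1 = np.array([[0., 1.],
               [1., 0.]])
Q2 = 2 * np.eye(2) + Q1

print(np.allclose(Q1 @ Q2 - Q2 @ Q1, 0))      # True: they commute

_, V = np.linalg.eigh(Q1)                     # eigenbasis of Q1
D2 = V.conj().T @ Q2 @ V                      # Q2 in that same basis
print(np.allclose(D2, np.diag(np.diag(D2))))  # True: Q2 is diagonal there too
```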

		

   

only reals needed for position Varadarajan

Extension to m mutually commuting position operators

Multiple connectedness; the Aharonov-Bohm effect; gauge theories generally; applications of the Inverse Poincare Lemma.




Go To ToC
COLLECTED REFERENCES
  1. [Kuratowski 1962]
  2. [Flanders 1963]
  3. [Bourbaki 1966]
  4. [Hurewicz 1948]
  5. [Neumann 1932]



Go To ToC
FOOT NOTES

   1. 





The URL for this document is
http://graham.main.nc.us/~bhammel/MATH/simphomol.html
Email me, Bill Hammel at
bhammel@graham.main.nc.us
Created: August 22, 2002
Last Updated: