An ongoing essay or monolog concerning the concept of quantum gravity ab initio, and the constraints on structural models for it that are imposed by the essential physical notions of general relativity and especially by quantum theory.




        "It is better to say something and not be sure than not to
         say anything at all."  -- R. P. Feynman,
                                   The Meaning of It All:
                                   Thoughts of a Citizen Scientist




One rather general way of looking at the problem of quantum gravity (QG), which has eluded theoretical physicists now for over 80 years, is to consider it as the problem of creating a mathematical model that embodies simultaneously the *fundamental* concepts of general relativity (GR) and the fundamental concepts of quantum theory (QT).

The unification of these two essential pillars of modern physics has proved extremely vexing, in that all the simple attempts at *formal* unification result in conceptual quagmires that seem not to be interpretable in any reasonable way.

It is possible to be suspicious of either, or both of the components that are to be united, and then to suspect that perhaps the formalisms already overburden the concepts by structural assumptions that are conceptually unnecessary.

The first subproblem then is to decide just what the fundamental concepts of these mathematical models are in relation to the real world.

This may sound like a simple question to which the answers are already known, in the sense that the mathematical and interpretational aspects of QM and GR have already been axiomatized. That context, however, begs the question by avoiding it. For example, in [Bunge 1967], the formalisms are each separately made primary, and simply put on a mathematically logical basis; the conceptual basis of the overall physics is avoided.

One thinks immediately of the extensive and sophisticated mathematical formalism in which GR and current notions of QT are expressed, and supposes that these are the fundamental concepts. The expression of QT is currently restricted to "methods of quantization", of which there are many besides the ever popular canonical quantization that begins with the Hamiltonian formulation of classical mechanics. Such a quantization method starts with a mathematical formulation of the kinematics and dynamics of a classical system and then applies quantization rules of transformation to express the quantized version of the kinematics and dynamics of the same system. I would suggest here that this may be shortsighted and too rigid a point of view. I would rather take the viewpoint that these formalisms are specific instantiations of underlying metaprinciples of GR and QT, and that the standard instantiations are simply not appropriate, nor amenable to a consistent combination of the two metaprinciples; the key, again, being that the formalism overburdens the concepts by making unnecessary, and actually wrong, assumptions.

This problem of quantum gravity might be called the primal Gordian knot of current theoretical physics, one that has resisted the theoretical analog of "percussive engineering" for over eighty years.

We have two highly complicated physical theories, each extremely successful in its regime of discourse. Yet, in subtle and not so subtle ways, they seem to contradict one another. The existing mathematical formalisms can, with even more complication and sophistication, be slammed together; the result is an uninterpretable mess. The full word on recent techniques of quantizing the gravitational field using quantum electrodynamics as a model is not yet in. However clever and sophisticated the attempts are, personally, I do not believe they will solve the problem, though something can often be learned even in failure. That is the exciting life of any scientist: most often, they are wrong, as the following may also turn out to be.

In approaching the QG problem, with the understanding that no current combination of relativistic and quantum formalisms solves the problem, there are three obvious areas of consideration in an attempt to understand the basic nature of the problem itself:



	1) Physical and conceptual problems with GR itself

	2) Physical and conceptual problems with QT itself

	3) Problems of conceptual contradiction between QT and GR


I allow that neither QT nor GR is exactly correct formalistically, but that they may be adjusted to permit their consistent combination; the very first thing to do, however, is to examine the problem areas, paying particular attention to philosophical and interpretational aspects of theory.

Prior General Considerations On The Nature of Relativistic (R) and Quantum (Q) Theories:

As they are customarily considered and formulated mathematically, the metaprinciples of quantum theory and relativistic theory are mutually inconsistent.

The current orthodox QG scene consists of two streams of thought: First, attempt to "quantize the gravitational field", and second, start at some supposed quantum level and construct a theory that gives rise to massless spin-2 particles that would then take the role of the putative graviton; the latter stream being, of course, the string theory approach.

In the first stream, it eventually becomes clear that the very coordinates themselves must, as a matter of consistency, be quantized; nobody seems to know what to do with this, or how to do such a quantization correctly - or, there's a perfectly good theory someplace, and I know nothing about it. Though the coordinates represent points of a putative spacetime manifold, they are not the points. There is good reason to believe that the points of the manifold as events cannot be specified with arbitrary precision, and that they are intrinsically limited by the existence of a physically real Planck regime which conceptually resists any continuum model that declares to exist what, in fact, does not exist. There is also good reason to believe that our geometrical and topological pictures of unstructured points, lines, etc. are actually in disagreement with known mathematics (cf. "Classical Geometry & Physics Redux").

In the second stream, string theory, a background problem exists: a string theory can be built only when an a priori classical spacetime is given.

Success in either the first or second stream cannot lead to any truly fundamental physical theory, though either may, in some sense, be said to work, to provide enlightenment, or to furnish approximation techniques.

Considerations, More Specifically:

Here are some of my considerations, a bit more explicitly, taking the above streams of thought into account:



	1) A truly fundamental physical theory cannot begin with a
	   quantization procedure mapping a classical model to a quantized
	   model.  Fundamental physics begins with a Q theory, and
	   should show classical tendencies as the peaking of probability
	   distributions for large objects, or many objects.

	   In this sense, the second orthodox stream is "more correct"
	   than the first, but still even string theory begins as a
	   fundamentally incorrect quantization of a classical model.

	2) Neither stream takes into account the fundamental limitations
	   implied by the Planck units, but rather continues in the 19th
	   and 20th century manner using the calculus of continua in
	   the modeling of spacetime structure when the implications of
	   the Planck units are that these continua are ultimately
	   meaningless from the viewpoint of Q physics.

	   No place in real physics has an infinitesimal or infinity
	   actually appeared; such things are, in principle, not
	   measurable, and any fundamental theory must necessarily
	   take into account the Planck limitations' intimations of
	   "no infinitesimals", as well as the finitude of the universe.
	   A fundamental theory must be in essence finitistic.
	   A sufficiently finitistic theory will possibly even escape the
	   undecidabilities of Goedel regarding its theorems while still
	   entailing problems of computability or solvability; this,
	   however, is unlikely, since anything sufficiently simple will
	   probably only allow completely deterministic dynamics.
	
	   On this count, both orthodox streams fail miserably; the
	   conclusion follows that neither stream is fundamentally
	   correct, and that any theory that relies on either can never
	   achieve an assumed Theory Of Everything (TOE).

	3) Since a fundamental physical theory of existence must be
	   Q by its nature, the idea of quantization is logically
	   backwards; one misses the patent richness of an a priori
	   Q theory in proceeding by a quantization mechanism.

	4) The knot theorist Louis Kauffman [Kauffmann 1991] shows (with
	   more assumptions than he admits) that the special complex
	   linear Lie group in two dimensions, SL(2, C), which covers the
	   one component Lorentz group of Special Relativity (SR), arises
	   already at the level of the concept of "distinction", and from
	   this one can expect that this Lie group, or more likely its
	   algebra, might appear in a fundamental theory at a primitive,
	   or slightly higher than primitive, structural level.

	5) While an aspect of R principles is a space & time democracy,
	   spatial variables are nominally operators (QM) or completely
	   classical parameters (QFT); on the other hand, in no standard
	   existing Q theory does a time operator exist, nor can one be
	   constructed within any present Q theory.  This is already a
	   conceptual and formal contradiction between present Q and
	   R theories.

	6) Observers exist (in the literature) in both Q theories and
	   in R theories.  However, there is no real room for any
	   anthropomorphic observers in either; the observer concept
	   is pretty much a hoax, a residue of the quirky philosophy
	   of logical positivism.  The uncertainties of Q theories are
	   essential structural elements of physical ontology and have
	   nothing to do with observers.  Q theories are not subjective,
	   meaning that Bohr was wrong, that v. Neumann's psychophysical
	   parallelism is equally wrong, and that in the Planck regime,
	   where observers are difficult even to conceive of, uncertainty
	   relations still have theoretic meaning.  This situation lends
	   credence to the idea that the "state vector" is not merely a
	   secondary construct, but is instead the symbol of an ontological
	   entity - in agreement with the outlook of Roger Penrose.

	7) The fundamental assumption of any R theory is the ontology
	   of a given spacetime manifold of sufficient smoothness.
	   In this context, classical absolute determinism is inescapable.
	   The idea of a Lorentz invariant probability distribution
	   defined by a complex valued measure on Minkowski space has
	   been proven to be impossible.  Q theories (which I clearly
	   believe to be more correct regarding fundamental physics)
	   show the indeterminacy of both future and past.  The current
	   Q & R theories are then at odds again, and mutually inconsistent.

	   Any Q theory must be essentially linear, while GR is
	   essentially nonlinear, yet again they are mutually inconsistent
	   in their fundamental formal expressions.

	8) One way in which to view the large Planck mass (as I argue in
	   a brief essay) is obviously not as a maximal mass, since such
	   a view cannot be argued in principle from QM or QFT, but
	   rather as the smallest mass for which GR makes sense: the
	   Schwarzschild diameter of the Planck mass is the Planck
	   length.  Then, GR is not, and cannot be considered to be
	   fundamental since it fails specifically at a mass larger
	   than any of the known elementary particles.  This argument
	   is admittedly weak, yet suggestive.
	   
	9) QM is fundamentally incomplete and incorrect: QM assumes a
	   Newtonian space and time based on a continuum which is not
	   physically possible.  The additional assumption of the R
	   principle resulting in an inhomogeneous wave equation on a
	   Minkowski space (K-G eq.) leads to a mess of negative
	   energies and negative probabilities that defy interpretation
	   in that context alone; even applying the method of second
	   quantization, raising the wave function to an operator does
	   not relieve the negative probabilities.  The negative
	   probabilities can only be alleviated and negative energies absorbed
	   in a method of second quantization with suitable constraints,
	   creating a many particle QFT with a structure predicting
	   particle-antiparticle structure.  Even then, the theory becomes
	   as much black magic as it does physics, seeking
	   to sidestep the very real question of physically existing
	   continua.  We really have no particular need of the full
	   set of real numbers; the "precontinuum" of the rationals
	   will do very nicely.

	   A linearization of the K-G equation a la Dirac predicts the
	   existence of spin-1/2, leaving the result that there is no
	   consistent relativistic single particle Q theory.  A strange
	   situation, which invites the seeming necessity of some sort
	   of mystical holism, even as theory and its philosophy deny it.

       10) The very idea of canonical field quantization on a Riemannian
	   manifold is not invariant w.r.t. change of coordinates, and
	   these are considered as classical parameters. [Fulling 1973]


Putting all of this together, one comes to a seemingly lame cul-de-sac: there are strongly correct aspects of what I will loosely call the orthodox Q and R principles; yet it is also apparent that these principles, each as they stand in their current formulations, as well as in any attempted concurrent validity, are seriously defective for any fundamental physics.

Since it seems to me most certain that Q principles are necessarily far more dominant in fundamental theory than R principles, it may turn out that R theory, which is usually constructed on the level of continuum models, describes an emergent symmetry that is not present at the Planck level.

A serious axiomatic problem that exists in any R theory is the fundamental assumption of the existence of an absolute spacetime (ST), and in particular, the existence of the temporal extension that is still connected to the Newtonian time that flows not only uniformly, but coherently and synchronously at every point of space. Newton himself was bothered by having to make this assumption; in retrospect, at that time, he had little choice.

The advances of QM and R theories have done little but propagate this fairly mystical idea of Newtonian time in physical theories. In a relativistic spacetime, there is no motion. The idea of thinking in terms of the dynamics of a relativistic theory is artificial at best, and misleading at worst. While we call these theories "relativistic", "absolute" would be more accurate and certainly less misleading. Relativity theory is not about things being relative; it is about different absolutisms: a four dimensional space and the group theoretic (geometrical) invariants constructed from everything that it contains.

One can see consequences of this relativistic fiction in any attempt to create the Hamiltonian formulation of Electromagnetic Theory (EMT); this results in a "frozen formalism" that cannot quite be avoided by inventing yet another ad hoc dynamical parameter. [Mercier 1959]

Either one enters an infinite regress of temporal concepts, or finally determines a necessarily Q origin for an apparent, local, Newtonian time that serves for the non R dynamical descriptions of systems in Q and Newtonian regimes. QM qua QM is not about to serve up such an explanation.

One can see consequences yet again in the "No Interaction Theorems" in Special Relativity due to Mukunda and Sudarshan [Sudarshan 1974].

Modern physics has not yet addressed this fundamental mystery of what time actually is, how we get away with the Newtonian mystery, what space actually is, and why, locally and at least down to about a hadron radius we get away with a Euclidean model. We'll look at these mysteries a bit later on.

Despite what seem to be essential failures of the two approaches criticized above in meeting the problems of quantum gravity, there is a third approach that seems to avoid the problem that an a priori spacetime background needs to be given. That approach is through loop quantum gravity, which may be allied with the present approach, consisting of a fundamental generalization of the basic kinematical constraint of quantum theory.

While loop quantum gravity is still an approach that is a quantization procedure with a goal of constructing a projective Hilbert space of states, its two main features are that it does avoid the assumption of a "background metric", and, concomitantly, that the field on which attention is focused is an affine connection. In GR, while the metric is associated with a gravitational potential, the affine connection, whose components are the Christoffel symbols, is associated with gravitational forces.

The differences from a "standard" classical field, prototypically the EM field, that occur with the gravitational field are these: the classical field is defined on some given metric spacetime with a gauge group (a consequence of the masslessness of the field) that is not directly concerned with geometry, whereas the gravitational field is the metric field, and its gauge group is the infinite-parameter group of general coordinate transformations, the diffeomorphisms of the manifold. In its abstraction to the theory of differential forms [Flanders 1963], classical EMT is seen to be a purely topological theory (not requiring a metric), so much so that it can be reformulated without much difficulty in terms of homological algebra (actually a dual cohomological algebra), without even continuity, much less differentiability, requirements.

In attempts to quantize GR, these different aspects get in the way, both in formal mathematical manipulation and in the concepts of space, time and spacetime themselves.

The notion of QT in its most general form is not clear cut, even though we possess a fair number of its aspects by an examination of the methods of quantization. There is a QT expressed by the quantization of a known classical system, or standard nonrelativistic quantum mechanics (QM), [Merzbacher 1961], [Messiah 1965]. Even there, however, as is well known, there is the ambiguity in the quantizing map from classical to quantum formalism in the ordering of products of operators that can only sometimes be resolved by symmetrizing the product.

There is QT expressed by the quantization of the electromagnetic field, quantum electromagnetics QEM, often considered as the quantization of an infinite number of classical oscillators. Add to QEM an interaction with the electron as described by the Dirac equation and one has the still current crown jewel of theoretical physics, quantum electrodynamics (QED).

One cannot think of QT simply as some process of quantizing a classical system. Logically, QT comes first; the classical system is some kind of an average and/or most probable result. From this view, the process of quantization is a bit primitive and silly. Quantization is not a proper QT, but, it surely gives some good clues. Mathematical physicists are now busy creating quantum groups, quantum polynomials, quantum geometry, noncommutative geometry, and supersymmetrizing every mathematical thing in sight. Supersymmetry and quantum groups, however, are other stories for another time, though their connection with a genuine QT is probably not nil.

One should logically start with an unknown metatheory that is QT, and from this derive its representations or instantiations. Thus, "quantization of a classical system" is doing things backwards.

That there should be possible such a metatheory as QT is suggested by an incompleteness or gap: there exist quantum systems with no classical analogs. The obvious examples are particles with spin, particularly those of half-odd integral spin. Models of spin-1/2 as an actual particle spinning in space appear periodically and prodigiously in the physics literature; none ultimately seem satisfactory or convincing, although there are classical allusions to spin-1/2 in the Euler angles of classical mechanics, and in the "belt trick" that illustrates a topological property of E3, one that shows itself even with regard to macroscopic objects like belts embedded in it. The notion of spin, however, is not just some special property of E3. Just as the Lie group SU(2) happens to be the universal covering group of the doubly connected Lie group SO(3), every SO(n) for n > 2 is doubly connected and has a similar Spin(n) group as its universal covering group.

More generally and specifically, there is a spin covering group for any pseudoorthogonal Lie group SO(p, q). The spin covering groups are gotten at through Clifford algebras, of which the Pauli matrices and the Dirac matrices are special cases. So the existence of spin can be had in any En, and then "locally at a point" in any manifold of n dimensions.
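The double covering just described can be exhibited concretely. A minimal numerical sketch (my own illustration, not drawn from the cited literature), using the Pauli matrix σ_z: a rotation by 2π returns an ordinary vector to itself but multiplies a spinor by -1, and only a 4π rotation restores the spinor — the algebraic content of the belt trick:

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def spinor_rot(theta):
    """SU(2) element for rotation by theta about the z-axis."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

def vector_rot(theta):
    """Corresponding SO(3) rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# 2*pi returns a vector to itself, but flips the sign of a spinor:
print(np.allclose(vector_rot(2 * np.pi), np.eye(3)))   # True
print(np.allclose(spinor_rot(2 * np.pi), -np.eye(2)))  # True
print(np.allclose(spinor_rot(4 * np.pi), np.eye(2)))   # True
```

Both U(θ) and -U(θ) map to the same SO(3) rotation, which is exactly the two-to-one character of the covering.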

One approach to Clifford algebras is to try to express a quadratic form as a product of two linear forms. This is the textbook standard for deriving the Dirac equation from the Klein-Gordon equation, the latter being obtained by the usual "operator substitution" of elementary QM applied to the relativistic energy:



        E²  =  p² + m²


This makes the idea of spin-1/2 seem not such a quantum concept since one can factor this classical form into two linear forms using the same steps by which the Dirac equation was originally "derived". [Dirac 1958]

It is sometimes said that spin is a quantum phenomenon; others point to the derivation of the Dirac equation to say that it is a relativistic phenomenon. Spin, once seen as arising simply from the factorization of quadratic forms, is neither inherently Q nor R in nature; it can be either, and spin can be derived from the nonrelativistic Schrödinger equation, as it was by Feshbach and Villars, simply by factorizing the quadratic Laplacian operator that appears there. The mystery reduces to why it is that quadratic forms permeate physical theory.
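The factorization claim can be verified mechanically. A sketch (my own, using the standard Dirac representation built from Pauli matrices; the numbers E and p are arbitrary test values): the four gamma matrices satisfy the Clifford relations, and in consequence the *linear* form Eγ⁰ - p·γ squares to the *quadratic* form E² - |p|² times the identity:

```python
import numpy as np

# Pauli matrices: the Clifford algebra of E3.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Dirac matrices in the standard representation.
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]

# Clifford relations {g^mu, g^nu} = 2 eta^{mu nu}, eta = diag(1,-1,-1,-1):
gammas = [g0] + gs
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anticomm = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))

# The linear form squares to the quadratic form E^2 - |p|^2:
E, p = 1.7, np.array([0.3, -0.5, 0.2])
slash = E * g0 - sum(pi * gi for pi, gi in zip(p, gs))
print(np.allclose(slash @ slash, (E**2 - p @ p) * np.eye(4)))  # True
```

Nothing quantum or relativistic enters the computation; it is pure algebra of quadratic forms, which is the point being made above.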

The only available remark I have is that quadratic forms happen to be mathematical tools with which to express magnitudes associated with complicated mathematical objects. Symmetry groups are often defined as being those that leave such magnitudes invariant. In a sense we can blame the central appearance of quadratic forms in mathematics on the old theorem on right triangles attributed to Pythagoras.

A significant additional insight is provided, however, by Prof. James G. Gilson of Queen Mary College ( http://www.maths.qmul.ac.uk/~jgg/ ) in his work on stochastic quantum theory.

Spin seems now to have origin in subtle geometric properties of E3; and yet, that very subtlety makes the explanation not very satisfying. What would be satisfying? Probably a geometric rather than algebraic understanding, a picture that shows "what is going on". Surely, it is this kind of satisfaction that is being sought in the many models that try to envision spin-1/2 as a spinning particle. In a very real way, to the human mind, to understand is to geometrize. We do this, knowing full well that there are and must be limits to validity of such an approach to understanding.

If there is no clear geometric picture at the basis of our picture of macroscopic spacetime, then, assuming there is still a mathematical model, it will undoubtedly be algebraic.

An important conceptual crux of the problem of inventing QG is that the existing theories imply that the very fabric of spacetime, at least as defined by any system of coordinatizing functions, must be Q in its nature.

From a different point of view, one can see that the general covariance which requires a form invariance under coordinate substitution has a pattern much like that of Yang-Mills gauge fields, and involves the introduction of a gauge field that is the massless gravitational field obeying the Einstein equation, again implying the necessity of coordinate quantization since the gravitational field is the gauge field of general covariance. [Kaempffer 1965] gives a concise, and I think clear, exposition on this point.

We know that an essential aspect of quantum systems is that there is dispersion: as a physically realizable QM state propagates in time, what will be measured at some future time will be less determinate than what one measures at the point of state preparation. Put another way, the wave function spreads in time.

The Schrödinger equation is a linear, parabolic partial differential equation of second order in its spatial derivatives, and first order in its time derivative. Its form is exactly like that of the heat equation:



	(L + k ∂/∂t) f  =  0

   where L is the second order Laplacian operator, k is the heat
   dissipation constant, ∂/∂t is the partial derivative with
   respect to time, and f is the temperature field.


The difference is that k is replaced with "iħ" (up to real constants), and f is construed as a complex rather than real valued function. Where one would have an exponential decay factor exp( -kt ... ) in solutions to the heat equation, the corresponding factor in solutions of the Schrödinger equation becomes the oscillatory exp( -iEt/ħ ... ). It requires a little more to show that ħ can be interpreted as a "constant of dispersion of free space" for the Q psi-waves, in much the same spirit as one speaks of the dielectric constant of free space in the context of EMT and the Maxwell equations.

The exceptions to this spreading are the "stationary" energy eigenstates. The prediction of the future, as well as retrodiction of the past, is, however, not generally determinate; it turns out that this rule must be conceptually applied to the fabric of spacetime, yet GR and SR start with the assumption of a given spacetime manifold, meaning that the entire history of the universe is already determined at the moment of its first existence. Either the fundamental mathematical assumptions of both SR and GR are manifest nonsense, or the standard interpretations of the fundamental mathematical formalism are nonsense. In either case, there is nonsense in the air of theory, and it would be a good idea instead to be speaking sensibly.

Concerning the incompleteness of quantum mechanics (QM), a particular aspect of QT, which is for one particle and does not contain the principles of GR or SR, there are at least two schools of thought: the Copenhagen school (after Niels Bohr) believes that QM is complete and absolutely true; the other, more skeptical school, which had Einstein in its camp, believes that QM is essentially incomplete.

I suppose it is worth remarking, though perhaps gratuitous, that a question of physical incompleteness is distinct from sensibility, and also distinct from mathematical incompleteness. Most physical theories are necessarily physically incomplete simply because all of physical reality has not yet been completely, absolutely and successfully modelled; to assume that such a thing is even possible requires a bit of arrogance. Interestingly, Lenin said categorically that it was not.

Logically, and theoretically, however, things are a bit more subtle when it comes to QT and Relativistic Theory (RT), simply because these are misnomers. They are not theories, but metatheories which govern the construction of theories. While it is not too difficult to understand when a physical theory is incomplete, understanding the physical incompleteness of a physical metatheory is not so easy exactly because the metatheory is another level removed from any physical modelling and from physical measurements.

However, the way in which an incompleteness is expressed or thought of varies. Einstein once had the idea that underlying QM was a theory of "hidden variables" behaving in a deterministic way but whose statistical behavior gave rise to the indeterminism of QM that is expressed in the Uncertainty Principle. Others see an incompleteness in terms of the two processes of QM, 1. the evolution of a quantum state by the partial differential equation called the Schrödinger equation and 2. the discontinuous process of measurement. QM does not provide a connection between them, and does not tell us what happens during a measurement process.

In this sense it may also be called incomplete. Various nonlinear modifications of the Schrödinger equation have been investigated in an attempt to obtain such a picture; no satisfactory alternative to the Schrödinger equation has emerged from this. J. v. Neumann [Neumann 1932] went so far as to prove a theorem saying that hidden variables were impossible. That theorem, though interesting, is generally considered to be discredited and off the mark, especially since David Bohm has constructed a type of hidden variable theory that agrees with all the predictions of QM. This theory has been known and around for many years. More attention should probably be paid to it, as John Bell [Bell 1975] suggested.

While Einstein expected that a perfectly good classical statistics of these hidden variables existed to explain the quantum statistics, Bohm's theory has an underlying classical behavior that is guided by an essentially quantum statistics of a "pilot wave". The peculiarities of the quantum probability theory involving a complex structure are what seem unavoidable. Cf. [Mackey 1968], at 3.8, "Why The Hilbert Space Is Complex".

The notion of GR in its most general form is not clear cut. The second rank symmetric metric tensor field that generalizes the gravitational potential of Newtonian gravity theory can be seen, modulo locally operative coordinate transformations, as determining 1) a local lightcone structure and 2) a conformal factor.

Although there are other gravitational theories that cannot be ruled out besides that of Einstein, e.g., Brans-Dicke, here I will consider only the Einstein theory with cosmological constant as the classical theory of gravitation to which a concept of quantum gravity should relate. The cosmological constant in even this classical theory represents a "vacuum energy" or zero point energy which will be even more important in a quantum context. That a vacuum energy appears in the classical concept despite the masslessness of the graviton is a result of the specifics of nonlinearity of the Einstein equations.

What are the essential conceptual bases of GR?
This is an easier question simply because GR is not a quantized theory. The essence, irrespective of the specific gravitational equations of Einstein, is the principle of General Covariance, which is not so much a statement of theory as it is of metatheory. It says how the laws of physics should be cast as geometrical objects of a curved spacetime manifold. It says that real measurable physical objects are those that transform as tensors under the infinite dimensional pseudogroup of general coordinate transformations of the manifold, and that the laws of physics should be form invariant (expressed tensorially) under that pseudogroup. The spacetime manifold should, in this case, possess, at very least, the smoothness of a differentiable or C1 manifold, where C0 is merely continuous. With Einstein's equations, it should be a C2 manifold, with the added technicality of paracompactness. [Hawking 1973]

The notion of general covariance appears to be just a little simpler than it actually is. While the equations of any physical theory may have a very large group of symmetries of their form invariance, the solutions to these equations may be constrained in such a way that the group preserving the equations' form is too large to be allowed by a given solution. E.g., the famous static, spherically symmetric Schwarzschild solution of the Einstein equations, with vanishing cosmological constant, will not allow all general coordinate transformations without breaking the topological structure of the underlying manifold. It was exactly by introducing such a topology breaking coordinate transformation that David Finkelstein interpreted the singular surface at r = 2m of the Schwarzschild solution as a semipermeable membrane, and in so doing also provided the basis for wormhole structures in GR, for which the writers of Star Trek should be eternally grateful.

If you insist that allowed coordinate systems on a manifold be singularity free, i.e., that the Jacobian of any coordinate change be bounded and nonzero everywhere, then certain global topologies will admit certain kinds of coordinate systems while others will not. The Euclidean plane will admit global rectangular coordinate systems, while a sphere will not. A sphere will, however, admit two partitioning patches where a nonsingular coordinate system can be defined on each. Try wrapping the plane onto a sphere and see how a singularity must develop.

Interestingly, the torus T2 admits a nonsingular cartesian (i.e., rectilinear) coordinate system, as do surfaces with more handles: for two handles, picture two doughnuts surgically connected by a cylinder. Iterate the obvious extensions, and note the possibilities of ring structures and intersecting ring structures resulting. E.g., replace the vertices of a tetrahedron with tori, and its 6 edges with connecting tubes.

We know that, ultimately, general covariance cannot survive unchanged at the Planck level, and so carrying the covariance principle, or an abstraction thereof, to the Planck level is not trivial. If there is a suitable abstraction, then it must somehow lead back to general coordinate invariance in a limit where the Planck length is taken to zero.

Nonsingular coordinate transformations induce local (at a point!) Lorentz transformations, along with deforming and conformal transformations, since the local transformations Sab(x) = ∂x'a/∂xb (partial derivatives) must be elements of GL(4, R), which has 16 parameters. Factoring out the dilations leaves the 15 parameter group SL(4, R). The Lorentz covering group SL(2, C), which preserves in its local action the local lightcone structure, has 6 parameters. Then, the coset spaces,



		GL(4, R) / (D(C) X SL(2, C))

		SL(4, R) / SL(2, C)


are parametrized, in the first case, by 4 complex variables, which span the complexified tangent space.

Is general covariance as a gauge transformation helpful? Is a cone plus scaling factor useful? Starting with SR as the prior picture (locality) makes more sense. But can Lorentz invariance of a fundamental law be maintained at the Planck level? The Lorentz structure, or rather its algebraic abstraction to its Lie algebra of generators, turns out to be an almost spanning structure of the very simplest of general quantum structures. In a plexus of such simplest algebraic structures, it would be rather impossible to find something other than the structure that is there.

The Hilbert space associated with such a structure is of 2 complex dimensions, i.e., 4 real dimensions. Suppose one had such a plexus of minimally energetic little quantized volumes of space - and necessarily an algebraic Q time that provides for the expression of an energetically induced "transience". On average, over the plexus or substantial pieces of it, an overall structure that exhibits a local Lorentz or SL(2, C) symmetry should be exactly what is to be expected. From this viewpoint, the essential structure of a local spacetime as a Minkowski space is a result of the being of the foregoing very cool plexus of Q existence. A Minkowski space is then, as it has long been interpreted from the viewpoint of the Einstein equations, a vacuum (lowest energy state) of quantum existence. This interpretation is a bit more structured and specific, however, since the vacuum as lowest energy state in the Einstein equations is merely a trivial solution to the equations, while as the lowest energy state of a plexus of Hilb(2), there is an intrinsic energy, together with possible energy fluctuations, that has as analog the zero point energy of a harmonic oscillator in QM. One avoids the infinities of QFT by understanding both the finiteness of the local Hilb(2) structure and the finiteness of its replications that fill what would be called a "vacuum universe", which, of course, is merely a possible state of a universe.

The idea of the Cauchy Problem

What are the fundamental metaprinciples?
Heisenberg, according to some folklore (it may be documentable), understood the uncertainty principle from data and developed his matrix mechanics as an abstract system that would yield it. It would appear to me that, in truth, the fundamental principle of QT is the Uncertainty Principle. A mathematical formalism that yields an uncertainty principle seems, necessarily, to involve a set of noncommutative operators. Such objects appear in noncommutative rings and algebras. Structures of these types which, in a sense, measure the noncommutativity of the operators (a point of physical interest) are Lie rings and Lie algebras.

An Aside:
In existing physics, when it comes to stating physical relationships or "laws", Lie algebras appear to dominate Lie groups. If one defines a noncommutative "quantum space" by associating Lie algebra elements with coordinates for the space, it is obvious, but worth remarking, that the space is not determined until a representation is given; furthermore, although the representation is linear, the quantum space will have intimations of curvature (with torsion) that can be associated with the Lie group as a manifold.

That a set of quantum operators can be found corresponding to the generalized coordinates of a classical system may be true, but the converse is clearly not true. This is to say that, logically, quantization methods form a proper subset of instantiations of QT. One way of expanding this proper subset so that it is no longer proper is to drop unnecessary assumptions. One assumption that cannot be dropped is the uncertainty principle, since it is almost the fundamental essence of quantum theory. Let's see what can be dropped and probably should.

If one insists on the validity of the Cauchy problem in QT, then it becomes necessary for other mathematical machinery to be assumed. The notions of a "state of a system" and a "time parameter" having ontological status are necessary for taking the Cauchy problem seriously. If the specification of the state of a system is limited intrinsically by the Planck time, and in practice by times already many orders of magnitude larger, and the time continuum is inconsistent with prior quantization, then temporal evolution as a Cauchy problem is not a legitimate concept.

Can the Cauchy problem be done without?
In classical Hamiltonian physics, a particle state is given by (q, p), its generalized coordinate q together with its generalized momentum p. The solution of the Cauchy problem is the time evolution of this state. The generator of the time evolution is the energy or Hamiltonian function of the (q, p). In quantum physics the state becomes either an element of a projective Hilbert space or a density matrix, while (q, p) becomes a set of operators acting on the Hilbert space. Again a Hamiltonian (operator this time) is the generator of time evolution of the state, and the evolution is just as determinate as that of the classical state. (This is the Schrödinger picture.) The difference is what is being evolved. In classical physics the values (q, p) can, in principle, be precisely specified; in quantum physics they may not. In classical physics the readability or observation of the values of (q, p) is assumed self-evident; in quantum physics the connection between observations and formalism is through expectation values of observables that are represented by operators. These expectation values are the quantum mechanical connections with classical concepts. These connections are what give quantum physics its interpretational meaning. The temporal evolution of the quantum state implies a temporal evolution of expectation values. So, here is the point of this paragraph: If the Cauchy problem is not valid in its usual sense in QM, we will not have temporal evolution of expectation values, which is to say, the classical connection is lost and with it interpretational meaning.
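As a minimal numerical sketch of the Schrödinger-picture statements above (the two-level Hamiltonian, observable, and initial state are my own toy choices, not anything from the text): the state evolves determinately, and the classical connection appears through the evolving expectation value.

```python
import numpy as np

# Toy two-level system, units with hbar = 1.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                 # Hamiltonian (sigma_x)
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])                # observable (sigma_z)
psi0 = np.array([1.0, 0.0], dtype=complex) # initial state |0>

w, U = np.linalg.eigh(H)                   # spectral decomposition of H

def evolve(t):
    """Determinate Schrödinger evolution: psi(t) = exp(-i H t) psi0."""
    return U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T @ psi0

# The connection with observation is the expectation value <psi(t)| A |psi(t)>;
# for this setup it evolves as cos(2t).
for t in (0.0, 0.5, 1.0):
    psi = evolve(t)
    exp_A = (psi.conj() @ A @ psi).real
    assert abs(exp_A - np.cos(2 * t)) < 1e-12
```

The point being illustrated: the state vector evolution is fully determinate, and it is the expectation values, not sharp (q, p) values, that carry the classical meaning.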

Here is a way out of the impasse:
Quantum physics, while supplying expectation values



                           <s| A |s>


written in Dirac [Dirac 1958] notation, as the expected value of the observable A in the state |s>, also provides, at no additional cost, the apparatus for the concept of transition amplitude:


                           <s| A |r>


and then of transition probabilities


                         |<s| A |r>|^2


Conceptually, think of evolution in terms of transition amplitudes and not in the usual determinate way of the Schrödinger equation. There is an added complication: the evolution of the quantum state of the system is now not necessarily determinate. The evolution is more like a Markov process, the difference being that the matrices that multiply are matrices of transition amplitudes, not of probabilities.

Suppose that an Hermitean "time operator" T exists which does not correspond to any specific macrotime parameter but is associated with some property of the system which makes it into a clock. No reasonably defined time operator can exist within the confines of standard QM, so obviously some adjustments are going to have to be made.

Suppose further that the eigenvectors (I've eschewed "eigenstate" deliberately.) of an Hermitean energy operator H are not stationary: T and H do not commute, but that nevertheless one retains the concept that "Energy is generator of time translations", in the same way that for noncommuting Q and P operators, "momentum is the generator of spatial translations." Dually, of course, the coordinate operator Q is the generator of translations in momentum space.

If one implements that idea for one essential "click" of a fundamental time quantum τ0, there is a propagator V(1) that can be constructed to operate on a general vector to express this concept using the energy operator, whatever it might be:



			V  :=  ( I - i τ0 H/ħ )

   Then,
                         T |tk>  =  tk |tk>
   and
                         |<tk| V |tj>|^2


is related to the probability that the system will make a transition to clock time tj when it is started at tk. Implicit in this is the idea that there is some quantum of "time" (not the same concept as that told by the clock) within which (or during which) this transition takes place. This is still an expression of the idea that energy is the generator of time translations. For two successive such quanta of time, the relative probabilities should then be simply,

                         |<tk| V^2 |tj>|^2


However, in this murky area of theorizing, one should be mindful of the consequences of basic assumptions, and a consequence should be that the limit of such fundamental processes gives back the results of what is now garden variety quantum theory. A little back fudging shows this propagator idea to be correct: the m-click propagator for time displacements, motivated by an energy operator that is the generator of the time displacements so defined, should actually be defined to be,


                V(m, H(n))  :=  ( I - i τ0/n  H(n)/ħ )^m


Why this should be so comes from expanding the RHS expression using the binomial theorem, and noting that the limit for large m is an appropriate exponential of the QM one parameter dynamical semigroup for a time independent Hamiltonian. Its matrix elements form an S-matrix of sorts. This propagator becomes important in considering how the illusion of Newtonian time emerges statistically from a more elementary understanding of quantum theory.



   From the binomial theorem,

	                 m
	(1 + A/m)^m  =   Σ  (m j) (A/m)^j
	                j=0

   where (k j) = k!/(j! (k-j)!) is a binomial coefficient.  When m is very
   large, we can use Stirling's approximation for the factorial (or Gamma
   function) applied to binomial coefficients, so that for large m,

	(m j)/m^j  →  1/j!

   So for large m, and arbitrary A,

	                       m
	(1 + A/m)^m  approx=   Σ  A^j/j!
	                      j=0

   with

	                     ∞
	lim (1 + A/m)^m  =   Σ  A^j/j!  =  exp( A )
	m→∞                 j=0

   Uniform convergence of the series is guaranteed for any bounded
   operator A.

   Take then, the single transition propagator to be

	( I - i H(n) dt / ħ )  =  ( I - i τ0/n  H(n)/ħ )

   where H(n) is an operator acting on an n dimensional complex vector
   space with inner product (a Hilbert space), so H(n) is represented by
   an n×n matrix, and further take the m-transition propagator, where
   dt := τ0/n is a "clock phase unit", to be exactly the corresponding
   power:

	( I - i H(n) dt / ħ )^m  =

		( I - i τ0/n  H(n)/ħ )^m  =

   (inserting and distributing m/m)

		( I - i τ0 (m/n) (1/m)  H(n)/ħ )^m

   Then, with s(n) :=  τ0 (m/n), in the set τ0 [0, ∞) for any finite n.
   Notice that there is no necessary restriction or bound on m.

	( I - i H(n) dt / ħ )^m  =

		( I - i (1/m)  H(n) s(n)/ħ )^m


   This is an Important Choice of Theory, so that evolution by this
   propagator has the correct form of QM evolution for very many transitions.
   Then this m-transition propagator is, using the large m approximation
   (m j)/m^j → 1/j!,

	    m
	=   Σ  (m j) (1/m)^j ( -i s(n) H(n)/ħ )^j
	   j=0

	    m
	=   Σ  ( -i s(n) H(n)/ħ )^j / j!
	   j=0

	→  exp( -i s(n) H(n)/ħ )

   approximately in the second equality for large m.  In the m-transition
   propagator, notice the two canceling appearances of m; the m-transition
   propagator is still just an m-fold product of single transition
   propagators, and setting m=1 gives back the single transition propagator.

   In the third line, the asymptotic approximation to an exponential, it
   is important to remember that s(n) := τ0 (m/n), so that with n fixed and
   bounded, an actual limit for large m will ultimately diverge by
   oscillation.  But, also remember the insertion of the artificial (m/m),
   and notice that a cyclic (ergo unitary) matrix,

	 CT(n)  :=  exp( -i τ0 H(n)/(n ħ) )  =

	   ∞
	   Σ  ( -i τ0 H(n)/(n ħ) )^j / j!
	  j=0

   can be defined, that for arbitrary complex z,

	 (CT(n))^z  :=  exp( -i z τ0 H(n)/(n ħ) )

   and that CT(n) is a unitary cyclic operator of order n.
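A numerical sketch of the preceding construction (the dimension, τ0, the toy H(n), and the values of n and m below are my own choices; units with ħ = τ0 = 1): for many clicks m with small dt = τ0/n, the m-click propagator V(m) = (I - i dt H/ħ)^m tracks the standard unitary propagator exp(-i s H/ħ), s = m dt, while remaining itself slightly nonunitary.

```python
import numpy as np

hbar, tau0 = 1.0, 1.0
n = 10_000                      # clock size; dt = tau0/n
dt = tau0 / n
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])      # toy Hermitian "energy" matrix

m = 5_000                       # clicks; s = m*dt = 0.5
s = m * dt

# m-click propagator V(m) = (I - i dt H/hbar)^m
V = np.linalg.matrix_power(np.eye(2, dtype=complex) - 1j * dt * H / hbar, m)

# standard unitary propagator exp(-i s H/hbar) via eigendecomposition
w, U = np.linalg.eigh(H)
exact = U @ np.diag(np.exp(-1j * s * w / hbar)) @ U.conj().T

assert np.abs(V - exact).max() < 1e-3                    # close to unitary evolution
assert np.abs(V.conj().T @ V - np.eye(2)).max() > 1e-5   # yet not exactly unitary
```

The residual nonunitarity per click is of order (dt)^2, which is why only the n → ∞ limit, discussed below, restores exact unitarity.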


With regard to relativistic quantum structures derived from combinatorics, see Relativistic Feynman Path Integrals & Random Flight.

For a genuine exponential limit to exist that will agree with standard QM, it will be necessary to take n to infinity also in such a way that s(n) becomes a continuous variable, or at least that it have a bounded, nonzero limit. This is necessary to avoid contradiction with the very general theorems of [Wielandt 1949], [Wintner 1947], and Olga Taussky [Cooke 1950], which say that the CCR are not representable by bounded operators in any normed algebra over the real or complex numbers. See, e.g., the discussion at the end of [Appendix J].

If we take n to be very large and consider a large number m < n of transitions, the propagator will behave like the exponential propagator of QM closely enough to restore the predictions of QM at the Fermi regime of about 10^(-13) cm, which is roughly the radius of a hadron. This would require approximately that n = 10^20, a rather large number for a vector space dimension, indicating that on a hadronic level a continuum approximation will, for many purposes, be good.
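The order-of-magnitude estimate can be checked directly (the standard value of the Planck length is assumed here, not taken from the text):

```python
# Number of Planck lengths across the Fermi regime.
l_planck_cm = 1.6e-33       # Planck length in cm (standard value, assumed)
fermi_cm = 1.0e-13          # ~ hadron radius in cm
n = fermi_cm / l_planck_cm  # number of Planck lengths across a hadron
assert 1e19 < n < 1e21      # ≈ 6e19, i.e., roughly 10^20 as claimed
```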

Now, let us interpret the m-click propagator structure by its parts:

First notice that the propagator, call it V, is an operator valued function of H(n), the supposed energy operator that motivates a translation in "time" (as so defined), and so is indirectly a function of n, as well as m, so write



	V(0, H(n))  :=  I

	V(1, H(n))   =  ( I - i H(n) dt / ħ )

	V(m, H(n))   =  ( I - i H(n) dt / ħ )^m

	             =  V^m(1, H(n))  =  V(m-1, H(n)) V(1, H(n))

	V†(m, H(n))  =  ( I + i H(n) dt / ħ )^m

   In expansion, we isolate the factors and terms for interpretation:

	V(m, H(n))  =

	    m
	=   Σ  (m j) (1/m)^j ( -i s(n) H(n)/ħ )^j
	   j=0


                         The Quantities of the Propagator

   n        is the size of the clock, the number of pointer positions.

   m        is a total or final number of elapsed "Planck clicks".

   j        is the number of intermediate transitions during the
	    absolute "bean counting" number of Planck clicks.

   (m j)    is the number of ways of selecting j unordered, but
            distinguishable objects from m of them.

   m^j      is the number of ways of filling j unordered but distinguishable
	    slots with an alphabet of m distinguishable symbols; or the
	    number of ways of labeling j objects with m possible symbols;
	    or, the number of ways of assigning j transitions to m Planck
	    clicks.  That m >= j indicates that there can be Planck clicks
	    that are somehow skipped by transitions.  This is to say that
	    while a transition of some sort must wait a Planck click, it
	    can wait more than one click before happening.

   (1/m)^j  divides out the distinguishability of the m Planck clicks
	    AND the distinguishability of the actual j transitions.

	    This lets us know abstractly that there will be a non vanishing
	    probability of "no transition" or stasis associated with any
            Planck click.

   That we sum over j from j=0 to j=m means that there are Q weighted
   mutually exclusive alternatives that are possible and may happen,
   and over which does happen we have no control or knowledge.

   This also means that j is limited by m: during m Planck clicks,
   no more than m actual transitions can take place.  At m transitions
   in m Planck clicks, the clock runs at its maximal rate.  The number
   (j/m) ≤ 1 is then a dimensionless measure of "the rate of the clock",
   and thus of "the rate of time" that it measures.

   Note that this combinatorial interpretation of the propagator does
   not involve any approximations, and is at the level of fundamental,
   physical and quantum theoretical structure.
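The combinatorial weights just interpreted can be checked numerically (a sketch; the sample values of j and m are mine): C(m, j)/m^j approaches the Poisson-like weight 1/j! from below for each fixed j, and the j = 0 stasis term always carries weight 1.

```python
from math import comb, factorial

# C(m, j) / m^j -> 1/j! as m grows, for each fixed j, from below.
for j in (0, 1, 2, 5):
    for m in (10, 100, 10_000):
        w = comb(m, j) / m**j
        assert w <= 1 / factorial(j) + 1e-15

# j = 0 (stasis: no transition during any click) always carries weight 1:
assert comb(10_000, 0) / 10_000**0 == 1

# and for large m the weight is close to 1/j!:
assert abs(comb(10_000, 5) / 10_000**5 - 1 / factorial(5)) < 1e-5
```

This is the same coefficient limit (m j)/m^j → 1/j! used in the exponential approximation above, now read as a statement about transition statistics.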

   This stochastic, dynamical propagator V, is then clearly the exact
   operator solution to a quantum, polychotomic, random flight problem.

   Now, consider the propagator amplitudes:

   <ti| V(m, H(n)) |tj>  :=  <ti| ( I - i H(n) dt / ħ )^m |tj>

	    m
	=   Σ  (m k) (1/m)^k ( -i s(n)/ħ )^k <ti| H^k(n) |tj>
	   k=0

   where <ti| H^k(n) |tj> is the amplitude for k nonvacuous transitions
   leading from ti to tj, which is obviously independent of m, the
   progression of Planck clicks.  These amplitudes of powers of H(n) will
   be important elements to be able to calculate.


In a Markov process, the probability matrices multiply to determine the final distribution. Here, the probability amplitude matrices multiply; in either case one does not have a determinate final state, but a set of probabilities, one for each possible final state, given a possible initial one. Evolution of the quantum state itself is now stochastic. It turns out that to normalize for a proper probability distribution, one need only define the probabilities by



                         |<tk| V(m) |tj>|^2
                         ------------------
                         (1/n) Tr( V^2(m) )

   where Tr(.) is the trace functional on the C*-algebra of linear
   maps Hilb(n) → Hilb(n).

   Notice that the normalizing factor Tr( V^2(m) )/n is invariant under
   any basis substitution by similarity transformation in the group
   GL(n, C), which is to say that under such transformations there are
   no sources or sinks in probability measure: probability normalization
   is not altered.  In the generalized sense of dynamics, however,
   m → m+1, the normalization, hence the total probability measure,
   does change.  This is necessarily so because the number of possible
   (longer) paths necessarily increases.
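The claimed invariance follows from cyclicity of the trace, Tr(S V S^(-1) S V S^(-1)) = Tr(V^2); a quick numerical check (random matrices and the dimension are my construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert abs(np.linalg.det(S)) > 1e-9      # S invertible: an element of GL(n, C)

# Tr(V^2)/n is invariant under the basis change V -> S V S^(-1):
W = S @ V @ np.linalg.inv(S)
lhs = np.trace(V @ V) / n
rhs = np.trace(W @ W) / n
assert np.isclose(lhs, rhs)
```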


As the m-click propagator V(m) acts on Hilb(n) in a Schrödinger picture, and also acts by the dual map as V†(m) on the dual of Hilb(n), one may then, in a Heisenberg picture, express propagation as an action on the C*-algebra with symbolic structure Hilb(n) X Hilb*(n), so for any operator A representing an "observable" property of the system, propagation is expressed as



			A  →  V†(m) A V(m)


Since for large values of n and m, which is to say with great spatial extension and for long waiting times measured in absolute Planck clicks, V(m) is well approximated by the standard unitary time propagator exp( -i H/ħ s ), for these same conditions the standard expression in QM of propagation also appears, for which one can use the Baker-Campbell-Hausdorff formula. This shows that for the conditions of large n and m, one may approximate in the propagator,



	                           n'
		V†(m) A V(m)  =   Σ  s^k/k! C(A: H, k)
	                          k=0

   where n' is some suitably large number and C(A: H, k) is the k-fold
   iterated commutator of H with A arising in the Baker-Campbell-Hausdorff
   expansion.

   While in either the Schrödinger or Heisenberg picture V(m) is not
   unitary, a Cayley-like (polar) map

		V(m)  →  V(m)/|V(m)|,    |V(m)| := (V†(m) V(m))^(1/2)

   gives a unitary operator.   This is an answer to the question of what
   to do until the n → ∞ limit arrives or can be successfully taken
   as an approximation.  The question of whether or not unitarity is
   necessary remains; it *should* be a reasonable assumption that any
   observer will be capable of normalizing a probability distribution,
   and so the question of the importance of unitarity is a real one.
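One concrete reading of this unitarization (the reading is mine; the original notation is ambiguous) is the polar decomposition U = V (V†V)^(-1/2), the unitary factor of V:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# (V† V)^(-1/2) via the eigendecomposition of the positive matrix V† V
w, Q = np.linalg.eigh(V.conj().T @ V)
inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.conj().T

U = V @ inv_sqrt                     # polar (unitary) part of V
assert np.allclose(U.conj().T @ U, np.eye(n))   # U is exactly unitary
```

For V(m) close to a unitary (large n, m < n), this polar part differs from V(m) only at the order of its nonunitarity.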


In QM, H and its powers will not be of trace class, but for a finite dimensional analog, as, for example, FCCR is of CCR, taking the trace is obviously always possible. This specifically allows the treatment of stochastic dynamics by probability amplitudes where that is locked out in QM.

Aside:
This has a similarity in structure to the propagation kernel in Feynman's formulation of QM by path integrals [Feynman 1965], suggesting the truth of Feynman's prediction that the path integral, or propagation kernel, formulation is conceptually more general than any standard formulation of quantum theory. Formally, Feynman's kernel has a connection to the Bergman kernel [Bergman 1933] [Helgason 1962] for a Hilbert space of analytic functions defined on a complex Hermitean manifold. The Bergman kernel relates the metric on the manifold to the inner product on the Hilbert space of functions over the manifold.
This step is not unlike the step from classical to quantum physics, where the evolution of the old (q, p) points is not determinate, but a new concept of "state" is invented which is determinate and which has determinate evolution. A remaining question is: if the state no longer evolves determinately, is there a new replacement that does? I believe that I haven't yet given enough specification of the setup to make that question answerable. However, in a fairly longwinded exposition, I give a derivation of a local Newtonian time from quantum theoretical principles that follow the above, exactly, in On the Quantum Theoretical Origins of Newtonian Time.

Take a brief excursion from QM to GR and then back to QM:
In attempts to quantize the gravitational field as was done with the EM field, or simply noting the existence and meaning of the Planck units for length and time, it becomes clear that in quantizing gravity it will be necessary for the basic notions of space and time embodied in the coordinate functions also to be quantized in some way. At the Planck level, the continuum model of space and time structure breaks down. One consequence of this is that at this level the distinction between these two classical concepts also breaks down. If a mathematical model of space and time exists and we have not reached the ultimate area of impossibility, it should reflect, at least, operators that correspond to these "degrees of freedom". Such operators may be said to correspond to the classical degrees of freedom, but of course, they will have nonclassical properties that involve noncommutativity. The question is how this noncommutativity should be expressed mathematically. This depends on how one expects the associated uncertainties to behave.

It was clear to Aristotle [Aristotle 1941] that, conceptually in his classical sense, space cannot exist without time, and that time cannot exist without space. Spatial points are distinguished by movement of some sort, requiring a time concept, and temporal points are also distinguished by a movement of something within a context of spatial extension. In relativity theories, measurements of either are related by the classical physical constant c, the speed of light in a vacuum. We know that the ultimate limits of fine measurements of spatial and temporal differences are the Planck units, which are related by



                    lp  =  c  tp


where presumably, the limit is set by the differences being at the level of intrinsic uncertainties. If we have as ultimate quanta of space, lp, and of time, tp, then the constant c can be constructed simply by c = lp/tp, and it will have the physical units of a velocity. The formalism of QM, also inherited by general quantum field theory, always allows a limiting situation where one uncertainty within the uncertainty relation approaches zero; the other must then, of course, become unbounded. Here we cannot allow this, but must, instead, have absolute lower bounds for the uncertainties:


                    (Δ Lq)  >  lp  =  c tp
                    (Δ Tq)  >  tp
   so

                (Δ Lq) (Δ Tq)  >  c tp^2
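The relation lp = c tp holds identically from the defining expressions of the Planck units (standard SI values assumed here, not taken from the text):

```python
# Planck length and time from hbar, G, c (standard SI values, assumed).
hbar = 1.055e-34   # J s
G    = 6.674e-11   # m^3 kg^-1 s^-2
c    = 2.998e8     # m/s

l_p = (hbar * G / c**3) ** 0.5   # Planck length, ~1.6e-35 m
t_p = (hbar * G / c**5) ** 0.5   # Planck time,   ~5.4e-44 s

# l_p = c * t_p by construction: sqrt(hbar G / c^3) = c * sqrt(hbar G / c^5)
assert abs(l_p - c * t_p) / l_p < 1e-12
```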


If Lq and Tq do not commute, so that


                    [Lq, Tq]  =  iZ,    Z  ≠  0

   then, generally, for the uncertainties of Lq and Tq,

     (Δ Lq)^2 (Δ Tq)^2  >=  (1/4)<Z>^2 + ( (1/2)<{Lq, Tq}> - <Lq><Tq> )^2

   where

              {Lq, Tq}  :=  Lq Tq + Tq Lq

    defines the anticommutator of two operators.

    For finite dimensional matrices, Tr( Z ) = 0.

The proof of an uncertainty relation like that of QM, done in the usual manner using the Schwarz inequality, excludes expectation values, and hence uncertainties, being taken with respect to eigenvectors of Lq and eigenvectors of Tq, even in the context of finite matrices. In such a case it is possible to prove separately that the eigenvectors possess intrinsic nonvanishing uncertainties


           <tk| Lq^2 |tk> - <tk| Lq |tk>^2

           <lk| Tq^2 |lk> - <lk| Tq |lk>^2


Since


                         Lq |lk>  =  lk |lk>
   and
                         Tq |tk>  =  tk |tk>


it appears that a fundamental impenetrable distance has not arisen: the eigenstates of Lq and Tq have precise eigenvalues with vanishing uncertainty. However, the considerations of Lq and Tq must be done together. If finite dimensional Lq and Tq do not commute, there is an intrinsic uncertainty in spacetime points.
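A finite dimensional sketch of this joint consideration (the 3×3 matrices are my own toy choices, with Tq taken as the Fourier dual of Lq, loosely in the spirit of the FCCR construction): in every eigenvector of Tq, the variance of Lq does not vanish.

```python
import numpy as np

Lq = np.diag([0.0, 1.0, 2.0])                 # toy "length" operator, diagonal
F = np.fft.fft(np.eye(3), norm="ortho")       # unitary discrete Fourier matrix
Tq = F @ Lq @ F.conj().T                      # toy "time" operator, Fourier dual

assert not np.allclose(Lq @ Tq, Tq @ Lq)      # Lq and Tq do not commute

w, vecs = np.linalg.eigh(Tq)
for k in range(3):
    v = vecs[:, k]                            # an eigenvector of Tq
    mean = (v.conj() @ Lq @ v).real
    var = (v.conj() @ Lq @ Lq @ v).real - mean**2
    assert var > 0.5                          # intrinsic spread (≈ 2/3 here)
```

The eigenvectors of Tq are (up to phase) Fourier columns with uniform magnitudes, so the spread of Lq in them cannot be squeezed away: precisely the intrinsic uncertainty in spacetime points claimed above.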

I've just considered one dimension for space and one for time. If there are several Lq operators that are mutually noncommuting, there is then an intrinsic uncertainty in spatial points independently. This is illustrated in [Theorem 14.1] of [Section XIV] of the working FCCR paper.

The above is a natural consideration, coming directly from quantum mechanical notions, of local space, time and spacetime with intrinsic uncertainty regarding points of the space that need not go beyond the context of representations of Lie algebras.

? Consider the relativistic invariant



                    S^2  =  Lq^2 - c^2 Tq^2


Our notion of time as a statistical parameter would certainly be in accordance with the observed connection between the arrow of time and entropy (as first put forth, I believe, by I. Prigogine), and with the second law of thermodynamics: with increasing time the entropy of a closed system does not decrease, but rather tends to increase. So the story goes that the universe, being by definition a closed system, has its time statistically determined by the increase in entropy.

I would like to see or do an analysis of the "dynamics" of a general spacelike hypercell space (with thickness measured by a local "i = sqrt(-1)") representing the logically necessary local quantum time extension expectation value, together with a global definition of the entropy of the hypercell space that determines what the hypercell surface does next. The locality of i suggests that the hypercell space may be expressed as having an "almost complex extension", and is not restricted to being a "complex extension" that is global. Again, one should include or expect as a theorem that the probability distribution of s will be highly peaked. As a question, this is not nearly well enough posed; it is merely an idea of a mathematical model that needs to be worked out.

Aside: The story may not be quite so simple, since there are enough indications that the concept of time is actually a multiple one, and in this case one is always confronted, in any statement concerning time, with the problem of stating and defining "which time".
This also very nicely provides an explanation for the experience that time always flows forward and not backward. This is in the face of the fact that all the fundamental dynamical equations of physics are indifferent to the direction of time flow. If there is a statistical time, which is what we perceive, then one might look to a quantum construct from which it is derived. This is much like understanding that the statistical notion of the pressure of a gas in a container is the result of many molecules of the gas applying a force to the walls of the container by colliding with them. So the question needs to be asked: what is the underlying quantum thing that gives rise to the statistical concept of time? Since QM, RQM and QFT all use the statistical time as a given parameter, we cannot look to them for an answer. In this sense, since we are looking for a concept within QT that cannot be found in any known instantiation of QT, all of them are essentially incomplete.



   QM - canonical quantization
        Cauchy problem: "parabolic" equation with initial conditions
        over a putative space propagated in a putative time.

   GR - constrained initial data with gauge freedom given on a putative
        spacelike hypersurface propagated in a putative time.


On a fundamental level, time as we understand it experientially and within the formalism of current physics does not exist. I am not talking here about a concept of psychological time, which has very much to do with the brain's capacity for memory. This concept is very difficult to get out of; dealing with time in SR is a good start. The future, based on a time of a fundamental level, is indeterminate and does not exist as some spacelike slice in an assumed a priori given spacetime continuum; the past is equally indeterminate and also, in the same sense, does not exist. Time travel is then impossible, except possibly as a process that involves some kind of quantum tunneling, which is statistical and short ranged.

The idea of time travel into the future in the sense of H. G. Wells' "Time Machine" fantasy is just plain silly. The future is the future precisely because it hasn't yet happened, and this is as true for QM wave functions as it is for classical states; moreover, it is not predictable in any determinate sense, except statistically. What we do expect, however, is that in most important circumstances a classical future will be predicted with a highly peaked statistical distribution. That can be expected simply on the basis that, for certain regimes with which we are familiar, the predictive powers of classical physics are not complete nonsense.

In a very real sense, there are many physical senses of time, each of which is connected to a regime of spatial magnitude: the time by which the age of the universe is measured is not a Newtonian time. This is also rather distinct from a "time coordinate" in GR. The time coordinate of SR is not the "uniformly progressing" time of Newton. These are funny types of formal temporal constructions that are not things that one can measure with any sort of directness. For directness, ultimately, we always fall back on clocks of various sizes which are defined by different things. We may use a fiduciary ammonia clock, or we may use the cycle of the earth in its orbit about the sun. We may use the cycles of the moon, or something completely different. In any case, the clock is physically bounded, cyclic, and we must in some way be able to read it.

The indeterminateness of QT, expressed in its essential dispersion, holds for the very fabric of spacetime. Only a "local now" can be said to exist on a fundamental level. Therefore, consider a quantum "space" (the counterpart of a specific spacelike hypersurface segment in GR) which is a nexus of quantum spacetime cells, each of which has within it the structure of a local quantum time that cannot be conceptually or even ontologically divorced from the local quantum space. One can no more consider arbitrarily contracting a region of space to a point than one can consider squeezing time to a point, and conversely. So this quantum spacelike hypersurface made of nexi must have a local thickness in local quantum times and in local quantum space. These quantum cells evolve according to their own quantum time, yet the cells do not all evolve with complete independence; they are influenced by those to which they are "connected", suggesting that the concept of local evolution itself is not determinate. The question of how the entities of this grand nexus relate is key.

A naturally intuitive idea is that interaction is by direct contact, or more generally by proximity. This is, of course, no grand intuition: it turns out that the very concepts of interaction and proximity are not independent, so the idea of their connection is at least partially, if not completely, tautologous. An exception is the "spooky" action at a distance encountered in entangled states of QM, such as those of the EPR configuration.

The quantum statistical behavior of these nexial entities then determines the evolution of the quantum "spacelike" hypersurface of local "nows", and in doing so draws out a definition of the macroscopic parameters that we call time and space. There is no a priori given spacetime manifold, and so a basic premise underlying GR is inoperative on a fundamental level in a fairly specific way. Space and time as we would normally think of them arise statistically from quantum extensions that are represented by elements of local algebras. The alternative to accepting this picture of a dynamically emergent, apparent spacetime continuum is to accept an absolute determinism that is physically real: the past and the future are already given, exist, and have ontological status; everything, down to every particle decay, creation and interaction, is completely determined. That would not be consistent with the necessary aspects of quantum theory. Given the essential role of randomizers of some sort in so many regimes of existence, I am unwilling to accept the absolute determinism of GR, and with it the implied denial of any component of free will.

It seems to me, then, that in a fundamental physics one must ultimately abandon the existence of even a macroscopic given spacetime continuum and the diffeomorphism group of the general covariance associated with it. This is to say that the "correct" thing to do does not entail a canonical quantization of the Einstein equations, however mathematically instructive that might be to wrestle with.

This is not quite as dire as it might appear, since one can argue that the probabilities of evolution become highly peaked on average, and that in the large the spacetime continuum is a valid approximation, as is the principle of general covariance when speaking "in general", that is, without putting too much detail into the universe. The quantum details are what provide nondeterminism, while the peakedness of the distribution in transition amplitudes is what damps excessive randomness.
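As a loose numerical analogy (this is only an illustration of how peakedness damps randomness, not the essay's formalism; the step size and spread below are arbitrary), summing many increments drawn from a sharply peaked distribution gives a total whose relative deviation from the deterministic value shrinks like 1/sqrt(n):

```python
import random

def trajectory(n_steps, drift=1.0, sigma=0.01, seed=0):
    """Sum n_steps increments, each sharply peaked about `drift`.

    Analogy: each increment is one 'quantum' transition whose
    distribution is highly peaked; the aggregate looks classical.
    """
    rng = random.Random(seed)
    return sum(rng.gauss(drift, sigma) for _ in range(n_steps))

n = 10_000
x = trajectory(n)
classical = n * 1.0   # the deterministic ("classical") prediction
# std. dev. of the sum is sigma * sqrt(n) = 1.0, so the relative
# deviation is of order 1e-4 here, far inside the 1e-3 band
rel_dev = abs(x - classical) / classical
print(f"relative deviation from classical value: {rel_dev:.1e}")
```

The individual steps remain random; only the accumulated, coarse-grained quantity becomes effectively deterministic, which is the sense in which a classical continuum could be a good large-scale approximation.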

In the context of the Aharonov-Bohm effect, the phase of the QM state is identified with a path integral of the vector potential A for the EM field F. In the absence of any EM field, the integral should be path independent, which means the phase is determinate, or single-valued. This makes particular sense when F is recognized as essentially the curvature derived from A as a connection on spacetime.
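In conventional notation (the standard textbook form, not anything specific to this essay's proposed formalism), the phase acquired around a closed loop γ bounding a surface S is

```latex
\Delta\varphi \;=\; \frac{q}{\hbar}\oint_{\gamma} A_{\mu}\,dx^{\mu}
\;=\; \frac{q}{\hbar}\int_{S} F_{\mu\nu}\,dS^{\mu\nu}
\;=\; \frac{q\,\Phi}{\hbar},
\qquad F = dA .
```

By Stokes' theorem, F = 0 throughout a simply connected region forces path independence; in a multiply connected region, as outside a solenoid whose flux Φ is excluded from the particle's reach, the loop integral can be nonzero even though F vanishes everywhere along the path, which is exactly the topological character discussed below.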

Another way of defining a single-valued function in the presence of an EM field is to allow that the space on which the state function is defined is multiply connected (multiply sheeted). The appearance of an indeterminate phase is then a consequence of collapsing the sheets. One might conjecture that the effect of the EM field on the quantum level is to disrupt the single-sheetedness of classical spacetime, giving a picture not unlike the spacetime foam first proposed by Wheeler.

The minimal coupling of the EM field to a quantum particle is defined by identifying the gauge invariance of the EM field with the phase invariance of the QM state; it may be that the gauge invariance (coordinate choice) of the gravitational field is actually associated with a free will of choice. A choice of coordinates is, in fact, an act of free will that does not seem to be associated with a physical field.

A remark on essential differences between tensorial objects of opposing symmetries:
Recall the association of multivalued functions on one manifold with the same functions made univalent on the "same" manifold surgically altered by punctures, in connection with the geometrical interpretation of the Aharonov-Bohm effect and its antisymmetric EM field tensor: this antisymmetric tensor field carries information of a topological nature.

On the other hand, tensor fields with symmetric indicial symmetry seem to provide metrical/interaction information, prototypically given by the fundamental, symmetric metric tensor field gμν of GR.

The curvature tensor derived from gμν, having both indicial symmetries and antisymmetries, is then possessed of both topological and metrical information. Cf. Appendix B to the FCCR presentation on Lie Groups.

Further considerations:
Masslessness and massiveness
Masslessness <=> gauge invariance (a symmetry)
Massiveness <=> gauge noninvariance (local hidden or broken symmetry)
Masslessness is primary in an embryonic universe

This is merely a conceptual outline of what I see as a correct approach to QG, based primarily on minimal necessary properties of a QT that is not yet clearly formed. This QT takes precedence over GR, as it should when we are speaking of ultimately small structures. GR is a classical phenomenon, and should be retrieved as classical mechanics is retrieved, by a concept of "most probable" or by an averaging technique. A known point that should be emphasized is that an expectation value and a most probable value are not necessarily equal.

One will have the same conundrum with approximating GR out of this kind of QG as one has in understanding where and how systems pass from Q-type behavior to C-type behavior. The problem can be expressed in the question: why don't baseballs behave like elementary particles? Try doing a Young double slit experiment with baseballs: a diffraction pattern is not likely to be found. It is not even clear that the experiment can be done!
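The scale mismatch can be made quantitative with the de Broglie wavelength λ = h/(mv). The baseball mass and speeds below are illustrative values I have chosen, not anything from the essay; the point is only the orders of magnitude:

```python
# de Broglie wavelength: lambda = h / (m * v)
H = 6.62607015e-34       # Planck constant, J*s (exact SI value)
M_ELECTRON = 9.109e-31   # electron mass, kg

def de_broglie(mass_kg, speed_m_s):
    """Wavelength in meters of a particle of given mass and speed."""
    return H / (mass_kg * speed_m_s)

baseball = de_broglie(0.145, 40.0)          # ~90 mph fastball
electron = de_broglie(M_ELECTRON, 1.0e6)    # modestly fast electron

print(f"baseball: {baseball:.2e} m")  # ~1e-34 m: no slit is that narrow
print(f"electron: {electron:.2e} m")  # ~7e-10 m: atomic-lattice scale
```

An electron's wavelength matches crystal spacings, which is why electron diffraction is routine; the baseball's wavelength is some twenty-four orders of magnitude below nuclear size, so no conceivable aperture resolves it, and Q-type behavior is hopelessly washed out.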

The affine structure of a space is given by its geodesics. Consider, simply as an example, a space of two dimensions with the T2 topology: a doughnut ..., cyclic in two coordinates. Cut, make a half twist, and reglue; the space then has torsion. If we shrink the major radius to zero ... a projective ball with antipodal points identified. From the exterior it has the physical appearance of no volume; regardless of its radius, it behaves as if it were a point: anything that goes in appears immediately at the antipodal point.

Now consider a space with T3 topology, three dimensional and cyclic in three coordinates.




Email me, Bill Hammel at
bhammel@graham.main.nc.us

The URL for this document is:
http://graham.main.nc.us/~bhammel/FCCR/qg.html
Created: May 21, 1998
Last Updated: November 13, 2004