On Quantum Theoretical Origins Of Newtonian Time In A
Generalized Relativistic Quantum Theory With Time Operator
In Finite Dimensional C*-algebras.

Note: This document uses utf-8 unicode encoding.

Bill Hammel

ABSTRACT (A bit prolix)

	In this essay, after indicating a collection of reasons for
	generalizing both current quantum theory and relativity, an
	existing generalization of quantum theory is outlined after
	discussing the essential and necessary elements of any quantum
	theory.  It is then argued that Newtonian time arises as
	a twofold statistical phenomenon: first, locally as a "proper"
	local quantum time, and then further as a statistically coordinated
	plexus of these local proper times that are necessary quantum
	extensions depending on the ontology of energy: no energy,
	then no time, and conversely.

	[There are now two prequel essays that are rather conceptual,
	 "easier", introductory and fairly nonmathematical in substance:
	 Origins of the Species of Time
	 Classical Geometry & Physics Redux
	 These are more wide ranging, historical and philosophical.]

	For any natural number n > 1, a finite, and therefore physically
	local, algebraic structure FCCR(n), possessing all the
	requirements of a fundamental quantum (Q) theory, is presented
	which quantizes space and time.  The existence of a time operator,
	Fourier related to the energy of a "massless" oscillator clock,
	implies that cognate vectors symbolizing states in the standard
	quantum mechanics (QM) must generally be reinterpreted as processes,
	and that evolution of these processes must themselves be stochastic
	in the manner of a Feynman kernel.  This reinterpretation avoids
	the frozen formalism of Hamiltonian formulations to which a
	relativistic requirement of Poincaré symmetry is added, and may
	have implications for the apparently similar freezing associated
	with the super-Poincaré group.

	A local, proper, Newtonian time arises from a highly peaked
	transition probability distribution for clock transitions that
	increasingly favors the uniform running of large (large n)
	clocks.  This illustrates: the theoretical ascendancy in FCCR(n)
	of transition amplitudes and transition probabilities over
	expectation values, as well as the reclaiming of the status
	of expectation values and the dynamical differential equation
	structure of QM as a regime of very large n - which is to say,
	an FCCR(n) → CCR limit exists, both kinematically and
	dynamically.  Apparently, however, there are different
	possible (as yet unexplored) limits.

	NB: The classical version of this local time was already
	introduced by Lorentz [Wikipedia] explicitly in 1904, and implied by
	FitzGerald  [Wikipedia] in 1894.

	The "time" operator here rather suggests a more primitive
	concept, very similar to a more general idea of synchronicity
	that was introduced by Jung and Pauli. [Jung 1955]
	FCCR(n) classifies processes, by an indefinite metric on
	the local, finite dimensional Hilbert space, as "toponic",
	"photonic", and "chrononic" so that the relic toponic
	processes correspond to the states of QM, in a limit in
	the strong operator topology of FCCR(n) to the Canonical
	Commutation Relations (CCR).  FCCR(2) realizes the Canonical
	Anticommutation Relations (CAR).  The two kinematical
	pillars of QM are then derived from one essential structure
	associated with the unbounded sequence of Lie algebras,
	su(n) and su(1, n-1), or u(n) and u(1, n-1).
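	The FCCR(n) structure itself is reviewed later in the essay; as a
	minimal numerical sketch, assuming only the standard truncated
	harmonic-oscillator matrices (a stand-in for illustration, not the
	full FCCR construction), one can exhibit both kinematical pillars
	at once: the commutator [a, a†] equals the identity except for one
	diagonal entry of 1-n, whose relative weight vanishes as n grows
	(the CCR limit), while for n = 2 the same matrices satisfy the
	anticommutation relation {a, a†} = 1 (the CAR).

```python
import numpy as np

def lowering(n):
    """n x n truncated harmonic-oscillator lowering operator:
    a|k> = sqrt(k) |k-1>, for k = 0..n-1."""
    a = np.zeros((n, n))
    for k in range(1, n):
        a[k - 1, k] = np.sqrt(k)
    return a

n = 5
a = lowering(n)
comm = a @ a.T - a.T @ a        # [a, a+]: diag(1, ..., 1, 1-n)
print(np.diag(comm))            # first n-1 entries are 1, last is 1-n

a2 = lowering(2)
anti = a2 @ a2.T + a2.T @ a2    # {a, a+} for n = 2: the 2x2 identity (CAR)
print(anti)
```

	The single defective diagonal entry is the finite-dimensional
	obstruction to the CCR (no finite matrices can satisfy
	[Q, P] = iħI exactly, since the trace of a commutator vanishes);
	its relative disappearance as n → ∞ is one sense of the
	FCCR(n) → CCR limit mentioned above.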

	The merged kinematic and dynamical algebras become isomorphic
	to su(n), suggesting connections and unifications with and
	applications to internal unitary particle symmetries, hence a
	mooting of the earlier "no go" theorems regarding the uniting
	of internal and spacetime symmetries that also does not ignore
	supersymmetry.  There are connections with Hopf algebras
	and noncommutative geometry suggested, but not developed here.

	The appropriate quantization of all physical quantities suggests
	applications to the problem of quantum gravity by means of a
	plexus of local algebras forming a manifold or lattice
	type structure where locally the structure is homomorphic
	and homeomorphic to an algebra of subspaces, i.e., Clifford
	algebra, which has an associated degenerate form, a Grassmann
	algebra associated naturally with Fermionic structure.

	Regarding concepts of time, it appears that there is at least
	one concept associated to every level of theory that presumably
	models levels of physical reality, and that all these concepts
	are quite different from one another.  The newest of them is
	the operator T(n) introduced here, which serves to relate
	configurations of n energetic processes.  This is not a given
	sort of time, nor an arbitrary parameter, as the "time" of most
	physics tends to be; instead, it exists as an inextricable
	aspect of bounded energetic processes and the processes by which
	these processes evolve in an indeterminate, stochastic way:
	the usual form of equations of motion expressed as a differential
	equation is replaced with an algebraic equation that then
	approximates the differential Schrödinger type equation for
	very large n.  This is to say again that the standard
	nonrelativistic quantum physics can be regained as a limit,
	but it is not a unique limit.
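	The specific T(n), Fourier related to the clock energy, is
	constructed in the body of the essay; as a generic
	finite-dimensional illustration of such a Fourier pairing
	(assuming only the standard clock-and-shift, i.e., finite Weyl,
	pair, not the FCCR operators themselves):

```python
import numpy as np

n = 8
w = np.exp(2j * np.pi / n)                    # primitive n-th root of unity

C = np.diag(w ** np.arange(n))                # "clock": C|k> = w^k |k>
S = np.roll(np.eye(n), 1, axis=0)             # "shift": S|k> = |k+1 mod n>
F = w ** np.outer(np.arange(n), np.arange(n)) / np.sqrt(n)  # DFT matrix

# finite Weyl commutation relation: C S = w S C
assert np.allclose(C @ S, w * (S @ C))

# the discrete Fourier transform exchanges the diagonal and shift operators
assert np.allclose(F @ C @ F.conj().T, S.conj().T)
```

	In this toy pair the diagonal operator plays the role of a
	discrete phase/time label, and its Fourier conjugate steps that
	label cyclically; the algebraic relation C S = ω S C stands in
	for a differential equation of motion, in the spirit of the
	algebraic evolution equation described above.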

	Since this will not be submitted to any journal, it will
	surely be extended in text and pertinent references, tweaked,
	corrected & modified until the proverbial cows come home,
	or I die, whichever comes first.

Table of Contents
  1. Prelude: Relativistic (R) and Quantum (Q) Metatheories
  2. On Mending the Defects
  3. Review of FCCR Structure
  4. Upsilon, the second Fourier Transform
  5. Hermite Polynomials & their Zeros
  6. Fundamental Velocities & Quantum Jumps
  7. Invariance Groups
  8. Q(n) & P(n) Expectation values
  9. From CCR to FCCR
  10. Q Counting becomes Exponential Propagation
  11. Dimensionless clock speed
  12. Limits of The Discrete Clock Phase
  13. Normalizing Transition Frequencies
  14. Specific Special Cases For Low Values Of n
  15. Transition Probabilities In Asymptopia
  16. 1-Click Root Mean Square Expectation Values
  17. Clock States & Entropy
  18. Clock Dynamics
  19. Calculating Transition Amplitudes
  20. Noncommutative Geometry of Higher Dimensions
  21. Some Questions and Projects
  22. Approximating Dynamical Algebra
  23. Newtonian Uniformity of Time Progression
  24. Direct Sums of FCCR
  25. QM Time Dependent Phase Factors As Recurrence Amplitudes
  26. Further Elaborations
  27. m-Click Transitions For Large m
  28. Conclusions & Criticisms
  29. Copyright Notice

Prelude: General Considerations On The Essential Nature of
Relativistic (R) and Quantum (Q) Metatheories

As they are customarily considered and formulated mathematically, the metaprinciples of quantum theory and relativistic theory are mutually inconsistent.

This is a subtle logical, technical, and mathematical problem that theoreticians have mostly ignored for approximately 90 years. The resolution of the problem regarding special relativity amounts to hand waving and talking louder. E.g., there is no logically legitimate theory of a single quantized relativistic particle; strangely, this is considered theoretically acceptable.

Extending the unification to that of the metaprinciples of Q and general relativity should give us the most elusive theory of "quantum gravity", by somehow quantizing Einstein's gravimetric potential field gμν, a second rank tensor field, symmetric in its two covariant indices μ and ν.

The current orthodox quantum gravity (QG) scene, with a view to reconciling the inconsistency, consists of two major streams of thought: First, attempt to "quantize the gravitational field" by canonical or other methods, and second, start at some supposed quantum level and construct a theory that gives rise to massless spin-2 particles that would then take the role of the putative graviton; the latter stream being, of course, the string theory approach.

In the first stream, it eventually becomes clear that the very coordinates themselves must, as a matter of consistency, be quantized; nobody seems to know what to do with this, or how to do such a quantization correctly - or, there is a perfectly good theory someplace, and I know nothing about it. One must be careful, of course, even in classical GR to distinguish between geometry and its possible coordinatizations, particularly distinguishing geometrical from coordinate singularities.

In the second stream, the string theory background problem persists: a string theory can be built only when an a priori classical spacetime is given. The temptation is, of course, to assume the obvious flat background of either the flat Euclidean space E3, or the associated flat Minkowski space. Given the circumstances that one is now interested in the dynamical structure of space itself, and that such a space can then be assumed to be a result of averaging and coarse graining, such an assumption becomes tenuous at best. It seems the problem may be best encoded in Wheeler's koanic phrase "Geometry without geometry". The implied problem is that of deriving geometry from something more primitive - possibly algebra, along the lines of Klein's Erlanger program, though Wheeler's first thoughts were, quite naturally, a quantization on the level of topology, or even below that on the level of quantum set theory.

Apparent success in either the first or second stream cannot lead to any truly fundamental physical theory, though they may, in some sense, be said to work, provide enlightenment, or approximation techniques.

Perorations on this, more specifically:
[Cf. similarly, an essay on a general approach to the problem of quantum gravity]

   Here are some of my considerations, a bit more explicitly, taking
   the above streams of thought into account:

	1) A truly fundamental physical theory cannot begin with a
	   quantization procedure mapping a classical model to a quantized
	   model.  Fundamental physics begins with a Q theory, and
	   should show classical tendencies as the peaking of probability
	   distributions for large objects, or many objects.  We already
	   have difficulties in showing that classical mechanics arises
	   in a clear sense from quantum mechanics, basically because
	   quantum mechanics expresses its own universality.

	   In this sense, the second orthodox stream is "more correct"
	   than the first, but still even string theory begins as a
	   fundamentally incorrect quantization of a classical model.
	   Should it turn out to be somehow correct, it would be a
	   rather extraordinary accident.

	2) Neither stream takes into account the fundamental limitations
	   implied by the Planck units, but rather continues in the 19th
	   and 20th century manner using the calculus of continua in
	   the modeling of spacetime structure when the implications of
	   the Planck units are that these continua are ultimately
	   meaningless from the viewpoint of Q physics.  J.A. Wheeler
	   understood this obviousness long ago.

	   Nowhere in real physics has an infinitesimal or an infinity
	   actually appeared; such things are, in principle, not
	   measurable, and any fundamental theory must necessarily
	   take into account the Planck limitations' intimations of
	   "no infinitesimals", as well as the finitude of the universe.
	   A fundamental theory must be in essence, finitistic.
	   A sufficiently finitistic theory will possibly even escape the
	   undecidabilities of Gödel regarding its theorems while still
	   entailing problems of computability or solvability; this
	   however, is unlikely since anything sufficiently simple will
	   probably only allow completely deterministic dynamics.
	   On this count, both orthodox streams fail miserably; the
	   conclusion follows that neither stream is fundamentally
	   correct, and that any theory that relies on either can never
	   achieve an assumed Theory Of Everything (TOE).

	3) Since a fundamental physical theory of existence must be
	   Q by its nature, the idea of quantization is logically
	   backwards; one misses the patent richness of an a priori
	   Q theory in proceeding by a quantization mechanism.

	4) The Knot theorist Louis Kauffman shows (with more assumptions
	   than he admits) that the special complex linear Lie group
	   in two dimensions, SL(2, C), which covers the one component
	   Lorentz group of Special Relativity (SR), arises already at
	   the level of the concept of "distinction", [Kauffman 1991]
	   and from this, one can expect to see that this Lie group, or
	   more likely its algebra, might appear in a fundamental theory
	   at a primitive, or slightly higher than primitive structural
	   level in the imposition of an ontologic condition of

	5) While an aspect of R principles is a space & time democracy,
	   spatial variables are nominally operators (QM) or completely
	   classical parameters (QFT); on the other hand, in no standard
	   existing Q theory does a time operator exist, nor can one be
	   constructed with all of the expected properties within any
	   present Q theory.  This is already a conceptual and formal
	   contradiction between present Q and R theories, on both
	   theoretical and metatheoretical levels.  Cf., e.g.,
	   Time in Conventional Quantum Theory
	   EDGE: THE END OF TIME - Julian Barbour
	   QUANTUM-MIND Archives -- March 2000 (#24)
	   20th century time

	6) Observers exist (per the literature) in both Q theories and
	   in R theories.  However, there is no real room for any
	   anthropomorphic observers in either; the observer concept
	   is pretty much a hoax, a residue of the quirky philosophy
	   of logical positivism.  Sticking to a terminology of quantum
	   "experiment" or "abstract measurement procedure" might be less
	   confusing, and also indicate more clearly both the necessity
	   and the strangeness in physical theory of these concepts,
	   and that the idea of the nonunitary measurement process in
	   QM always involves the measurement of a Q system by some
	   classical (non Q) measuring device.  How do you implement
	   an essentially Q measuring device?  There may be a perfectly
	   good answer to that question, but I happen not to know it.

	   The idea of an observer in QM theories becomes slightly
	   ridiculous as more and more particles are conceptually
	   included to form the maximal system that we call "the
	   universe of existence".  Who is or can do the observing?
	   A god perhaps that exists outside of the universe, and
	   with which then by definition we can have no connection?
	   This suggests that the so called "Q state", which we
	   further argue must be a "Q process" is not merely a
	   convenient mathematical fiction, but a model of
	   Q ontology.

	   The uncertainties of Q theories are in my opinion the most
	   essential structural elements of physical ontologies but
	   they have nothing intrinsically to do with observers.
	   What is, is; we make models of it, still understanding
	   that ultimately we are part of the model.  There are
	   intrinsic limitations of that model, as there are
	   intrinsic Gödelian limitations to axiomatic mathematical
	   models, generally; yet, there is a finitude attached to
	   the universe (as we currently understand it), if only
	   implied by the ostensible finite number of undecaying
	   protons.  (Though it might seem more symmetrically
	   satisfying if the neutrons, of electric charge 0, were
	   the seemingly permanent particles, protons are the
	   little beasties in question, and there is a good sense
	   to this: neutrons would not call into existence the
	   necessary electrons, and the universe and its chemistry
	   would be a very different thing.) Other particles are
	   then fluctuations or resonances; what alleged gravitons
	   are is unknown.  While the idea of them strives for
	   analogy to QED, where exchanged longitudinal photons
	   between electrically charged particles mediate the
	   "electromagnetic force", equally massless longitudinal
	   gravitons should mediate the "gravitational force" between
	   two massive particles - mass, being the charge of the
	   gravitational field.  Unless a negative mass exists,
	   things are a bit different, since gravitational forces
	   seem always to be attractive.

	   Q theories are not subjective, since the material quantum
	   universe exists, meaning that Bohr was wrong, that v. Neumann's
	   psychophysical parallelism is equally wrong, and that in the
	   Planck regime, where observers are difficult also to conceive
	   of, uncertainty relations still have both theoretic and
	   metatheoretic meaning.

	   This situation lends credence to the idea that the
	   "state vector" is not merely a secondary construct, but is
	   instead the symbol of an ontological entity - in agreement,
	   seemingly, with the general outlook of Roger Penrose.
	   It is a structured symbol of process.

	7) The fundamental assumption of any R theory is the ontology
	   of a given spacetime manifold of sufficient smoothness.
	   In this context, classical absolute determinism is inescapable,
	   in principle, regardless of the questions of computability.

	   The idea of a Lorentz invariant probability distribution
	   defined by a complex valued measure on Minkowski space has
	   been proven to be impossible.  Q theories (which I clearly
	   believe to be more correct regarding fundamental physics)
	   show the indeterminacy of both future and past.  The current
	   Q & R theories are then at odds again, and mutually inconsistent.

	   Any Q theory must be essentially linear, while GR is
	   essentially nonlinear, yet again they are mutually inconsistent
	   in their fundamental formal expressions.  This, however, could
	   be seen as a weak and too vague argument.  Nevertheless, the
	   kind of linearization arising from the DeWitt functional equation
	   defined on the "superspace" of old that speaks of the "wave
	   function of the universe" presents severe logical difficulties
	   with regard to any quantum measurement theory.

	8) The large Planck mass (as I argue in this brief essay) should
	   obviously not be viewed as a maximal mass, since such
	   a view cannot be argued in principle from QM or QFT, but
	   rather as the smallest mass for which GR makes sense: the
	   Schwarzschild diameter of the Planck mass is the Planck
	   length.  Then, GR is not, and cannot be considered to be
	   fundamental since it fails specifically at a mass larger
	   than any of the known elementary particles.  This argument
	   is also, by itself, admittedly a bit weak, but perhaps not,
	   in consort with others.  This does, however, raise the
	   question of whether "quantizing the classical GR theory"
	   actually makes sense; if that is in question, so then is
	   its local "at a point" expression in SR as fundamental
	   physical metatheory.  A form of fundamental Q ontology
	   is necessary, but the current expressions of R ontology
	   appear to be not only inconsistent with Q ontology, but
	   just wrong, and might rather be approached as a quantum
	   statistically emergent "local" symmetry, in the sense of
	   fundamentally peaked probability distributions, calculated
	   by means of (correlations of?) transition amplitudes.
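	   The Planck-scale arithmetic behind the opening claim of this
	   item is easy to check; a sketch using rounded CODATA values
	   (the constants below are approximations, and the factor of 2
	   in the Schwarzschild radius means the "diameter equals Planck
	   length" statement holds only up to order-unity convention
	   factors):

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (approx. CODATA)
hbar = 1.055e-34   # reduced Planck constant, J s
c    = 2.998e8     # speed of light, m s^-1

m_planck = math.sqrt(hbar * c / G)      # ~ 2.18e-8 kg
l_planck = math.sqrt(hbar * G / c**3)   # ~ 1.62e-35 m

# Schwarzschild radius of the Planck mass: r_s = 2 G m / c^2
r_s = 2 * G * m_planck / c**2
print(r_s / l_planck)   # exactly 2 by these definitions (up to rounding)
```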

	   In current physical theory, the "speed of light in a vacuum"
	   is, numerically, nothing more than a convention; it is a
	   conversion factor from time units to spatial units, measured
	   by existing frequencies or wavelengths associated with
	   energies of atomic electron transitions between atomic
	   energy states.  Why Cesium would have been chosen over
	   something more ubiquitous, like hydrogen, boggles my
	   mind.  If you might want to convey the standard to some
	   extraterrestrial species, why set yourself up also to
	   have to explain what Cesium is?  The fine structure of
	   Hydrogen transitions would make the point quite simply
	   and elegantly. Ah - "government science".  With enough
	   such simple things, we could ask for a proof of the
	   annoying Goldbach conjecture that every even number greater
	   than 2 is expressible as the sum of two primes.

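	   As a purely recreational footnote to that aside: the Goldbach
	   property is trivially machine-checkable for small even numbers
	   (a brute-force sketch; the conjecture itself, for every even
	   number greater than 2, of course remains unproven):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def goldbach_pair(n):
    """Return some pair of primes summing to the even number n > 2."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# verified here only up to a small bound
assert all(goldbach_pair(k) is not None for k in range(4, 10001, 2))
print(goldbach_pair(100))   # (3, 97)
```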
	9) QM is fundamentally incomplete and incorrect: QM assumes a
	   Newtonian space and time based on a continuum which is not
	   physically possible.  The additional assumption of the R
	   principle resulting in an inhomogeneous wave equation on a
	   Minkowski space (K-G eq.) leads to a mess of negative
	   energies and negative probabilities that defy interpretation
	   in that context alone; even applying the method of second
	   quantization, raising the wave function to an operator does
	   not relieve us of the negative probabilities.  The negative
	   probabilities can only be alleviated and negative energies
	   absorbed in a method of second quantization with suitable
	   constraints, creating a many particle QFT with a structure
	   predicting particle-antiparticle structure.  Even then,
	   theory becomes as much black magic as it does a physics
	   which seeks to sidestep the very real question of physically
	   existing continua.  It is difficult not to be reminded of
	   Claudius Ptolemy's famous epicycles.

	   Yes, Feynman made a rather elegant weaseling out from the
	   negative probabilities by arguing that they only appeared
	   in intermediate steps, but he was also never completely
	   convinced by his own arguments, nor am I of these.

	   A linearization of the K-G equation à la Dirac,
	   (Cf. factorization of quadratic forms) predicts the
	   existence of spin-1/2, leaving the result that there is no
	   consistent relativistic single particle Q theory.  This is
	   a strange situation which invites the seeming necessity of
	   some sort of mystical holism, since theory and its philosophy
	   denies that.  Abandoning completely the idea that anything
	   can ever be separated from the rest of the universe
	   leaves any quantified analysis of ontology impossible.
	   We must, at least, be able to do this approximately,
	   in a substantial number of situations, while understanding
	   at the same time the approximation of such assumptions.

	   As an aside, the factorization of the quadratic form that
	   allows the Dirac gamma matrices to emerge is a general
	   method of getting to Clifford algebras, and has nothing to
	   do with either quantum theory or relativity, regardless of
	   what the standard physics texts may suggest, or say outright.
	   It took far too much time and effort to figure that out.
	   [Cf. Classical Geometry & Physics Redux]
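	   That classical, purely algebraic character is easy to verify
	   concretely. Assuming the standard Dirac-representation gamma
	   matrices, the following check confirms the Clifford relations
	   {γ^μ, γ^ν} = 2 η^{μν} I, a statement about factoring the
	   Minkowski quadratic form with no quantum or relativistic
	   input at all:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, s_i], [-s_i, 0]]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1, -1, -1, -1])   # Minkowski metric, signature (+,-,-,-)

# Clifford relations: {gamma^mu, gamma^nu} = 2 eta^{mu nu} I
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford relations verified")
```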

       10) The very idea of canonical field quantization on a Riemannian
	   manifold is not invariant w.r.t. change of coordinates, which
	   are themselves treated as classical parameters.

       11) In continua based theories a 0-dimensional point is required
	   to contain an infinite measure of information, which is
	   physically and logically absurd.  (How many angels can dance
	   on the head of a pin?  As many as want to?) One naturally
	   then wonders about the nature of the points ("events") of
	   any model of physical space and times, and the structures
	   of internal spaces that support the actions of "internal
	   symmetry groups" and their Lie algebras.  Again, see both

	   Classical Geometry & Physics Redux
	   Origins of the Species of Time

       12) Maybe, one of the most interesting overlaps between Q and R
	   theoretics is that in their individual senses, causes and
	   effects become disconnected.  In the case of R theoretics,
	   that may seem like a silly thing to say since R seems to be
	   exactly about local Poincaré invariant causal lightcone
	   structures.  But, in R theory, different observers can see
	   events A and B ordered as A < B or B < A. [Bergmann 1942]
	   The idea that the future can cause the past is not a part
	   of our intuitive notions of causality, and indeed there are
	   several formulations of causality in physics, none of which
	   are as strong as that of classical logical philosophy.  Yet,
	   doing away with causality weakens the necessity of even the
	   local relativistic structures of isomorphic light cones.
	   The meanings of "causality", and there are a good number of
	   them across physics, mathematics and philosophy, are not
	   what you might suppose.

	   Some views of the early universe suggest no such fixed
	   collection of lightcones; local "spacetime" being so
	   highly curved that tangent spaces do not approximate
	   well enough any reasonable classical spacetime.  It is
	   hard to see how a manifold (continuous geometry) is
	   an applicable descriptor at the Planck level.

	   An adiabatic approximation may work well in a classical
	   regime, but fail fairly terribly in a combined Q and R
	   regime.
Putting all of this together, one comes to a seemingly lame cul-de-sac: there are strongly valid aspects of what I will loosely call orthodox Q and R principles; it is also apparent that these principles are seriously defective for any fundamental physics, each as they stand in their current formulations, as well as in any attempted concurrent validity.

Since it seems to me most certain that Q principles are far more dominant in consideration of a fundamental theory than are R principles, it may turn out that R theory, which is constructed on the level of continuum models, indeed describes an emergent symmetry that is not strictly present on, or close to the Planck level.

A serious axiomatic problem that exists in any R theory is the fundamental assumption of the existence of an absolute spacetime (ST), and in particular, the existence of the temporal extension that is still connected to the Newtonian time that flows not only uniformly, but coherently and synchronously at every point of space. Newton himself was bothered by having to make this assumption; in retrospect, at that time, he had little choice.

The advances of QM and R theories have done little but propagate this fairly mystical idea of Newtonian time in physical theories. In a relativistic spacetime, there is no motion. The idea of thinking in terms of dynamics of a relativistic theory is artificial at best, and misleading at worst. While we call these theories "relativistic", "absolute" would be more accurate and certainly less misleading. Relativity theory is not about things being relative; it is about absolutes: a four dimensional pseudoeuclidean space as manifold and everything that it contains. The change of Newtonian physics to relativistic physics is "simply" about a specific change of absolutes.

You cannot really eat your cake and have it too by making some subsequent handwaving argument that waves away the absolute determinism of a relativistic physics as a 4D manifold. Either the manifold exists, in the model, or it does not. One should be suspicious of such models, especially in matters of relativistic cosmology.

In the place of Newton's separately absolute space and time, one has a unified absolute four dimensional spacetime, thus obviating the absoluteness of classical space and time individually while creating a new, and more amorphous, absolute. It should, however, be borne in mind that the mathematical formulations of EMT and even of SR do not require the 4D concept.

The spacetime manifold was not an aspect of Einstein's original special theory, but supplied later by Minkowski. But, it appears to be conceptually required for Einstein's Equations of GR. Though this additional conceptual construct has taken hold in the thinking of physicists and others, there is good reason to suspect that it was ultimately a bad idea. The seemingly simple change of the time concept from Newtonian parameter to Minkowskian coordinate turns out to be more conceptually restrictive than appears on first thought.

One can see untoward consequences of this relativistic 4D fiction in any attempt to create the Hamiltonian formulation of Electromagnetic Theory (EMT); this results in a "frozen formalism" that cannot quite be avoided by inventing an ad hoc dynamical parameter.

One can see similar consequences in the combining of classical canonical formalism with relativity yet again in the "No Interaction Theorems" in Special Relativity due to Mukunda and Sudarshan. [Sudarshan 1974] Ashtekar variables are a different matter, but still hold the relativistic global absoluteness.

Modern physics has not yet addressed this fundamental mystery of what time actually is, how we get away with the logically complicated Newtonian "Zeitansatz", what space actually is, and why, locally, at least down to about a hadron radius, we get away with a Euclidean type model that is, on the basis of physical theory itself, logically wrong.

The terminology of "is" here is simply a shorthand for "how does this arise or come about", not a presidential or authoritarian fudge. Obliquely, this question of the verb "to be", is a question that almost automatically creates the notion of "levels of ontology". We will look at these mysteries more closely later on. (Zu sein, oder nicht zu sein; das ist die Frage.) See also, Quantum Set Theory & Clifford Algebras

Despite what seem to be essential failures in the two approaches criticized above in meeting the problems of quantum gravity, there is a third approach that seems to avoid the problem that an a priori background needs to be given. That approach is through loop quantum gravity (related to topological quantum field theory) which may actually be conceptually allied with the present approach, and which consists of a fundamental generalization of the basic kinematical constraint(s) of quantum theory.

On Mending the Defects:

Physicists have tried to create a Q theory where space and time become discrete, or where formerly dispersion free states acquire dispersion using algebras other than Lie algebras, such as Jordan and Malcev algebras. [Section I] But, it turns out that this type of generalization is not necessary, and that Lie algebras can be still fundamental, provided one is willing to allow that a Q theory may be fundamentally finite and *local* (neighborhood local, and not "at a point" local).

This idea of locality, in a temporal sense, might be expressed as a "now" in an extended sense beyond pointlike - even ignoring the fact that a point in the Euclidean-Cartesian sense is perforce not so structureless and simpleminded as the common pictures would have it. This comes from the fact that spinors are classical geometric objects, not essentially Q or R in nature.

This comes from the simple classical mathematics of factoring quadratic forms, and is not mystical handwaving over debatable physical theory. It also comes from ideals in tensor algebras [Cf. Classical Geometry & Physics Redux] - but, I digress.

An important philosophical aspect of the specific idea of nonlocality entertained here turns out to be that the evolution of "the universe", and physical reality generally, though it is necessarily statistically weighted, is fundamentally indeterminate, and that means, loosely speaking, indeterminate with regard to the evolution of "quantum states".

Argument can be made that such a property is, in fact, absolutely essential for the evolutionary development of complex life forms. Given that as hypothesis, it is then implied that complex life forms in the universe, as complex as those we know here on earth, are far from special or unique, and can actually be expected to be relatively common under more circumstances than might naïvely be suspected. To go further with that would be a longer digression in molecular biology than is appropriate here.

The idea of this particular restricting locality is as follows. In QM, the possible positions of a particle are within any neighborhood of any point of the space within which the particle resides.

Moreover, this set of possibilities is time independent. A ψ function for a particle suddenly appearing at any point of space is equally suddenly determined throughout space; similarly, when a particle suddenly disappears. Realistically, these are not adiabatic processes.

The current idea of locality is that this is not so, and that the range of determination of an appropriate ψ function can be restricted by, e.g., its lifetime, or by a field theoretic limitation in rate of propagation of information in the spirit of relativistic theories.

In trying to understand the situation regarding these defects, I looked to the Newton v. Maxwell paradox that Einstein resolved in favor of Maxwell, giving us special relativity (the unfortunate nomenclature), and wondered which was wrong on a fundamental level: quantum theory or general relativity? If you keep asking misdirected questions, you never get any useful answers. I could see eventually that relativity of any sort, and also quantum theory, however formulated, were simply in utter, mutual contradiction.

About twenty some years ago, after a series of investigations, and a few structural noticings, what became clear is that GR is only exactly what it appears to be, a classical field theory that is *fundamentally* (i.e., on the Q level) wrong, though it may work very well on the classical level, and that QM, as formulated, truly is incomplete and deficient in what a fundamental theory should be - yet - it works so well in the atomic regime that it cannot simply be discarded. The misleading fact of GR is that if it is looked at from the viewpoint of QFT, it appears to be a nonlinear field theory of a selfinteracting massless field, the keyword being "massless", and the conceptual error being that mass (and its absence) considered as a classical concept can be transferred to the Q regime without problem.

A fascinating conceptual construction [Gupta 1952], begins with a free linear massless spin-2 field. By iterating the process of feeding the stress-energy-momentum tensor back as a source term, one essentially derives the Einstein equations as a consistently selfinteracting nonlinear field theory.

The very notion of the so far experimentally undiscovered country of the Higgs particle and its necessary invention for the "standard model" to make any sense at all is exactly what tears that idea to shreds. If you believe in the Higgs, then you must understand that mass is a Q concept stemming from Q process; it is *not* and cannot then be a classical concept, and the masslessness of so-called classical fields (allowed precisely because they are massless) is perfect logical nonsense. (The Planck mass at about 10¹⁹ GeV is even more absurdly large than the large putative Higgs particle expected to be somewhere around 140 GeV.)

Perhaps, put more simply: it is impossible to pass unambiguously from a nonlinear classical theory to a linear quantized theory by any known procedure. That one can get to the same conclusion by any number of clear logical paths is in favor of the point itself.
"The truth points to itself." - Kosh Naranek, Vorlon ambassador to B5.
With complementarity, so does foolishness.
"Listen to the music, not to the song."
"Understanding is a three edged sword."

QM must be a limiting theory of any fundamental theory; not necessarily the only limit, but an accessible limit in a reasonable way.

What also became clear is that, given the Planck units of length and time, which no current formulation of Q theory respects, a truly fundamental theory of existence had to allow that space and time be in some sense discrete, but in a Q-like, not C-like way, and that it *be* a Q-theory. The next project was to examine existing Q formulations and extract, in a genetic approach to finding a primary set of assumptions, only that which was absolutely necessary as a basic characterization of a Q theory.

What is included, though possibly erroneously, through mathematical necessity, and what is logically indispensable - that is the question.

As far as I can see, the indispensable (though stated deliberately vaguely) aspects of QM are:

	1) A fundamental linear space, thus allowing superposition
	   of competing alternatives;

	2) A mathematical field of that linear space with a complex
	   local structure, allowing Q interference in superpositions;

	3) Uncertainty relations related to the Planck units;

	4) A probability interpretation, based on, but not
	   necessarily equivalent to a classical probability theory,
	   and not necessarily the exact same as that of QM.

There is sufficient folklore in physics that even professional physicists wind up believing things that are not true; it takes years, perhaps decades to unlearn such things.

One example is that representations of CCR suitable for quantum theory must somehow be infinite dimensional. The adjoint representation of CCR as a nilpotent Lie algebra is an immediate and simple counterexample. The immediate and ostensible objection would be that it is not a Hermitean representation; more on unitarity/Hermiticity and lack of it later.

Moreover, we construct an infinite number of other finite dimensional representations over finite Galois fields in [Appendix J]. The logical error is requiring unitarity of temporal propagation of some "state vector"; as well as assuming that selfadjointness of the propagator generator is equivalent to the unitarity assumption. Mathematically, that is not true, and obviously not true from Stone's theorem. [Stone 1932].

As another example, most seem to believe that the uncertainty relations must come from the CCR or CAR, because that is how the standard proofs appear to go. CCRs are directly related to the fundamental Poisson brackets of classical canonical formalism, and are the images of Poisson brackets under a map of canonical quantization (which is not uniquely defined), so it would seem quite natural to view CCR as the fundamental Q relationship. Early quantum theory, however, rather saw the uncertainty relations as being more primitive epistemologically, and indeed they are a more general concept derivable from almost any noncommuting algebraic relationship. The important physical question is under what group of transformations is an uncertainty relation invariant?

Physically, it is not specifically noncommuting operators that are required, but rather the uncertainty relations, which are structurally responsible for the allowance of physical evolution, e.g. the production in stars of elements heavier than the very lightest, the appearance of amino acids, and then of life, etc. There is a necessary component of fundamental randomness built into the fundamentals of existence; or, we ourselves and what we experience could not exist.

The textbook proofs of uncertainty from CCR are mostly derived from the Cauchy-Bunyakovsky-Schwarz (CBS) inequality, cf. [Section XIII], and already implied by the Paley-Wiener theorem [Wikipedia]. These proofs are rarely given with sufficient physical specificity or mathematical rigor to see where the essentials are, where the standard proofs fail, and why.

Why one has a time-energy uncertainty relation in QM, when there is not even a time operator, is not explained properly in the standard texts that I have seen. Most text book explanations are fancy footwork verging on mysticism. The answer is actually easy: uncertainty relations exist for any pair of Fourier related quantities. It is as simple as that; this has nothing to do specifically with Lie algebras or groups, or really, even QM itself.

A Fourier relationship is more general than the CCR kinematical postulate that it implies logically.

In the context of a Hermitian (self-adjoint) representation of the nilpotent Heisenberg Lie algebra that is CCR, exp( i (π/2)(q² + p²) ) in the algebra's universal enveloping algebra is the required Fourier transform: by definition, a unitary operator whose fourth power is the identity.

The Fourier relationships are far more important for canonical uncertainty relationships than are the specifics of the commutators, where the essential element of the teasing out of the uncertainty relations is the CBS inequality.

Ultimately, and more generally, the physically important uncertainty relations are derived from Fourier relations, and not as folklore would have it, from the specifics of the Heisenberg algebra.

The uncertainty relation, in fact, permeates the deepest levels of all Harmonic Analysis stemming from a classical theorem of Hardy on Fourier transform pairs, or more generally the Paley-Wiener theorem. Fourier transforms can be generalized to transforms of functions defined on locally compact groups - which happen to include classical Lie groups, and for Abelian locally compact (topological) groups this amounts to the theory of Pontryagin duality [Wikipedia], so vital to the mathematics of general harmonic analysis. For a direct and elegant approach to the fact, see Fourier Transforms and Uncertainty, recently [June 19, 2006] found.
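As a purely numerical illustration of this mathematical fact (a Python/NumPy sketch, not part of the FCCR formalism itself): for a Gaussian amplitude, the product of the widths of |f|² and of |F|², F its Fourier transform, sits at the minimum value 1/2 no matter how the Gaussian's width is chosen; squeezing one side of the Fourier pair necessarily spreads the other.

```python
import numpy as np

def width_product(sigma):
    """sigma_t * sigma_omega for a Gaussian amplitude of width sigma."""
    t = np.linspace(-50, 50, 1 << 14)
    dt = t[1] - t[0]
    f = np.exp(-t**2 / (4 * sigma**2))   # |f|^2 is Gaussian with std sigma
    pt = np.abs(f)**2
    pt /= pt.sum() * dt                  # normalize as a probability density
    sig_t = np.sqrt((t**2 * pt).sum() * dt)
    # discrete Fourier transform, centered conventions
    F = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f)))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
    dw = w[1] - w[0]
    pw = np.abs(F)**2
    pw /= pw.sum() * dw
    sig_w = np.sqrt((w**2 * pw).sum() * dw)
    return sig_t * sig_w

# the product is pinned at 1/2 (hbar = 1) for every choice of width
for s in (0.5, 1.0, 2.0):
    assert abs(width_product(s) - 0.5) < 1e-3
```

The specific Gaussian is chosen only because it saturates the bound; any other shape gives a strictly larger product.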

This all seems to be just a fact of mathematics, not physics; but once taken into account, it frees the mind from an insistence of Lie algebraic CCR as being inherent to a genuinely fundamental Q theory. One can actually derive "invariant uncertainty relations" from Lie algebra structures other than CCR. This is one of the points of the entire construction of the generalizing FCCR, about which more presently.

Here is a brief reminder of the general importance of the invariance of the uncertainty relations of QM derived from CCR. [Uncertainty principle - Wikipedia] The q and p operators of CCR are constrained to be formally Hermitian (selfadjoint, in the specifics of analysis), a condition left invariant under the adjoint action of an infinite dimensional pseudogroup of operators which contains a unitary representation of the affine group ISO(3), for one particle, which is a subgroup of the Galilean relativity group. The unitary pseudogroup also contains that Galilean group. This is to say that the unitary pseudogroup contains all the mathematical equipment necessary to implement transformations of frame that are: translations, rotations and "boosts", with related inner and outer actions on the algebra.

As it turns out, because the RHS of the fundamental CR is "the identity operator", any nonsingular linear transformation of it, unitary or not, leaves CCR form invariant; yet, only unitary transformations will preserve the formal Hermiticity of the p and q operators, similarly with CAR.

Reasonably, this seems like entirely too strong a requirement; unitarity should be both necessary and sufficient; but it is not necessary. Nevertheless, this "universal invariance" of the QM uncertainty relations implies a convenient dynamical conservation of probability.

The question is, whether this convenience is a necessity of theory or whether it is ultimately a subverting technical device. The suggestion here is that the latter is true:

   In QM,

   1. The central matter of focus in the mathematical structure of
      QM is expectation values, and the particular way the probability
      distributions are derived.

   2. As remarked below, transformations that do not conserve probability,
      are not failing to conserve anything physical, and there is no
      reason to suppose that an observer cannot normalize/renormalize a
      probability distribution - his own, or that of someone else.

   3. Information that defines a probability distribution is given
      by strings of unambiguous alphabetic symbols of a mathematical
      language, each symbol transforming as a scalar under any
      transform of reference frame.  Such information then transforms
      as a scalar as well.
Below, you will see that the RHS identity operator of CCR is replaced, in FCCR, with a Lorentzian type metric form, and that the invariance group of FCCR is a finite dimensional group, conjugate to a pseudounitary Lie group.

This invariance group is both necessary and sufficient, and it will necessarily include a group conjugate to SL(2, C), formally providing a Q and R unification. The issue and focus of the probability interpretation becomes shifted from expectation values to transition amplitudes.

Now - onto the specific mathematics of the FCCR(n) model.

FCCR: Theoretical Background for the Main Result

There is nothing particularly new in the interpretation here of uncertainty relations since we have all that is required for them in noncommutativity of two Fourier related quantities (operators), and a group of transformations that preserves the commutator. Some additional understanding of their mathematics can be had from the remarks concerning the second, and new, (E-T) Fourier transform Υ(n).

About 15 years ago, I remembered a simple calculation that I once did out of semi idle curiosity, about 27 years earlier, truncating the Q & P operators of QM in the harmonic oscillator representation, as can be found in almost any standard textbook on QM, and being mildly amused that one could write for any of these nxn Hermitean matrices [Section II]

[Q(n), P(n)]  :=  Q(n) P(n) - P(n) Q(n)  =  i ħ G(n)

where G(n) = Diag[1, 1, 1, ..., -(n-1)]; so that Tr( G(n) ) = 0. G(n), of course, looks almost like the identity matrix I(n), but blocks the Heisenberg Lie algebra structure, since, for the commutators,

[Q(n), G(n)]  ≠  0, and  [P(n), G(n)]  ≠  0

[Section IV]
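The truncated matrices and the commutator above are easy to check numerically. A sketch (Python/NumPy, ħ = 1; the √k matrix elements of the truncated raising/lowering operators are the standard textbook ones) verifying FCCR(n) and the claims about G(n):

```python
import numpy as np

# truncated oscillator operators in the number basis:
# a|k> = sqrt(k)|k-1>, Q = (a + a')/sqrt(2), P = (a - a')/(i sqrt(2))
n = 5
a = np.zeros((n, n))
for k in range(1, n):
    a[k - 1, k] = np.sqrt(k)
Q = (a + a.T) / np.sqrt(2)
P = (a - a.T) / (1j * np.sqrt(2))

G = np.diag([1.0] * (n - 1) + [-(n - 1)])
assert np.allclose(Q @ P - P @ Q, 1j * G)   # FCCR(n): [Q, P] = i G
assert abs(np.trace(G)) < 1e-12             # Tr( G(n) ) = 0
# G(n) blocks the Heisenberg Lie algebra: [Q, G] and [P, G] do not vanish
assert np.abs(Q @ G - G @ Q).max() > 0.5
assert np.abs(P @ G - G @ P).max() > 0.5
```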

It happens that if you continue commuting with G(n), you reach the full space of the defining Irreducible Representation (IRREP) of the Lie algebra su(n), so Q(n) and P(n) can be said to generate (close on, by commutation) the defining IRREP of su(n). For every level of commutation with G(n), a fundamental IRREP of su(k+1) appears that contains the prior fundamental IRREP of su(k). For example,

[Q(n), G(n)],   [P(n), G(n)],

[[Q(n), G(n)], [P(n), G(n)]]

are closed under commutation in a scaled fundamental IRREP of su(2). These can be considered to be components of a vector operator under su(2). Commuting these with the original set of three operators produces similarly an su(3) vector operator of eight components if n ≥ 3. If n ≥ 4, repeating the commutation procedure gives another vector operator for su(4), and so on, until a final closure on su(n) for any n > 1. [Section IV: theorem 4.1]

It turns out that within the operator set Q(n), P(n), G(n), though these are not generally closed under commutation, you already have three essential properties of a proper Q theory, and almost all the properties of QM can be rederived - except that now, for all finite n, there really does exist a well defined "time" or phase operator that is Fourier related to the number operator; the structure of G(n) is obviously indefinite as a choice of ground form for a complex finite dimensional Hilbert space. That last fact disappointed me until later, when I started to understand the implications of having a time operator, one of which is that energy eigenvectors are no longer stationary with respect to that operator's time, since [N, t] ≠ 0, and so it is proper to understand the vectors of the finite dimensional Hilbert space as representing processes, and not states as they do in QM. Eigenstates of course become eigenprocesses (Eigenprozesse?), or eigenevents (Eigenereignisse?).

These processes are, of course, both finite and local, the sense of local meaning confined to a region, and not to a point - or, classically, to a spacetime point, but not as it is usually pictured to be, without structure. Such points, as events, are smeared in spacetime since [Q, t] is not zero, making spacetime a noncommutative space, exactly as the noncommuting of Q and P turns classical phase space into a noncommutative symplectic space - which we also have here. For further explanation of that point, see [Classical Geometry & Physics Redux]. Remember that classically, p is also defined, one way or another, in terms of q (and t).

   Leaping ahead for a moment, for a 2 dimensional spacetime,

	Q(n)    --φ-->    P(n)    (Classically, in the tangent space
					of phase space)

	t(n)    <--Υ--    E(n)    (Classically, in the cotangent space
					of state, now process space)

   for the Fourier transforms φ(n) and Υ(n).  The relationship between
   the Q-P pair and the t-E pair is determined by the definition of E,
   as a function most generally of the operators Q, P, t.

   E actually defines t, since Υ(n) has an invariant construction
   from the eigenbasis of E, so having E be a function of t could
   be quite messy computationally, even in these finite dimensions.

   Theoretically speaking, once E is defined, there is "only" the
   matter of computing transition amplitude, expectation values and 
   uncertainties (standard deviations).  There is no room for the
   standard relativistic invariants

		  -s²       :=  Q² - (ct)²

		  -(mc²)²   :=  (cp)² - E²

   except as definitions.

   A suspicion and expectation is that either an appropriate group
   representation exists for which these define invariants, or
   that they can be derived in some form to show that relativity
   theory is an emergent phenomenon, and not fundamental.

This change of viewpoint from state to process likely explains the peculiar conceptual state of affairs already seen in the CCR of QM: the momentum p is a construct that depends not only on a concept of time, but also on a time which is not only simply a parameter, but also a parameter that allows limits of it being taken as in elementary calculus. Conceptually then, even before considering matters of QM dynamics, and the explicit use of the tacitly assumed Newtonian nature of time, the kinematic foundation of CCR conceptually demands such a Newtonian time; it is no particular wonder then that reasonable time operators in QM do not exist, since whatever time is to be used for the operator must be consistent with whatever time it is that allows us to conceptualize p as momentum. It may seem that these concepts may somehow have become disconnected, and that is exactly what it looks like in the CCR limit of FCCR(n): the limit (in the strong operator topology) performs the disconnection.

The necessity of eigenprocesses replacing eigenstates generally rather enforces that very philosophical viewpoint of David Bohm that lies underneath his notion of implicate order. Is FCCR a model of Bohm's implicate order? Probably not. The implication seems to be that "order" (the nature of things) is not a thing, but a hierarchy of process. See also recently found [June 20, 2006] arguments to the nonexistence of QM eigenstates, Choice Without Context?

An additional consequence of a free time parameter being replaced with a proper operator of quantization is that there can exist no simple one parameter dynamical group as there is in all currently existing quantum theories; the very concept of dynamics must undergo a paradigmatic and calculational shift. More on that specifically below.

Interjected Remark:
Although the following discussion of kinematics/dynamics is restricted to that of an oscillator, oscillators have a fundamental role in classical Hamiltonian physics: that any canonically expressible physical system can be transformed to a set of oscillators is a result found in Hamilton-Jacobi theory. The most recurrent, stable and therefore important structures are cyclic (periodic), pseudo cyclic and almost cyclic ones.

The Concept of Time:
The word "time" has a problematic similarity with "consciousness": one word is used to denote many more than several different meanings, Origins of the Species of Time, [Appendix K] and possible definitions, leading to much unnecessary argumentation. I see, e.g., in the development of the above simple mathematics, three completely distinct notions of "time": G(n) classifies processes and through inheritance, the linear operators of the associated C*-algebra of linear operators on the Hilbert space, as "chrononic", "photonic" and "toponic" [a working nomenclature]; the time operator t(n) is the "time told by the oscillator as clock"; as n becomes large, the transition amplitudes of this clock time, statistically, favor the |tk> → |tk+1> and |tk> → |tk-1> transitions of the clock in a very peaked distribution giving an origin to a localized Newtonian time in a quantum ontology of a finite state oscillator. The |tk> are ket vectors in Dirac notation symbolizing an ordered eigenbasis of the Hilbert space associated with the time operator.

But, I am already getting ahead of this condensed exposition. Here, without most of the proofs (some are easy, some are not), and with reference links to appropriate parts of the somewhat prolix mathematical work in the online (lengthy) FCCR ToC, are the most basic and primitive of the structural relationships. Most of the theorem-proof material is relegated to [Section VIII].

The Finite Canonical Commutation Relation FCCR(n) above exists for any integer n > 1, [Section I], [Section II] and the following is true:

FCCR(2) = CAR, the Canonical Anticommutation Relations. [Kaempffer 1965]

The limit of the sequence of algebraic relationships FCCR(n) → CCR, the Canonical Commutation Relations exists for unbounded n in the strong operator topology (SOT), which is equivalent to convergence in the Ultraweak (aka Weak-*) Topology, or "convergence in matrix elements", which is also to say "convergence in transition amplitudes" that happens to be so useful in quantum field theory. [Section III] The SOT is stronger than the Ultraweak Operator Topology, which is stronger than the weak operator topology, but SOT is weaker than the Norm Topology, in which such convergence cannot exist.
[Topologies on the set of operators on a Hilbert space - Wikipedia].
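Convergence "in matrix elements" without norm convergence can be made concrete: any fixed matrix element of [Q(n), P(n)] agrees with the CCR value i δ_jk once n is large enough, while the norm distance from i I(n) grows like n. A numerical sketch (Python/NumPy, ħ = 1, conventions as above):

```python
import numpy as np

def comm_QP(n):
    """[Q(n), P(n)] for the truncated oscillator operators (hbar = 1)."""
    a = np.zeros((n, n))
    for k in range(1, n):
        a[k - 1, k] = np.sqrt(k)
    Q = (a + a.T) / np.sqrt(2)
    P = (a - a.T) / (1j * np.sqrt(2))
    return Q @ P - P @ Q

j, k = 3, 5
for n in (8, 16, 32):
    C = comm_QP(n)
    # any fixed matrix element settles at the CCR value i*delta_jk ...
    assert abs(C[j, k] - 1j * (j == k)) < 1e-12
    assert abs(C[j, j] - 1j) < 1e-12
    # ... while the operator-norm distance to i*I(n) grows like n,
    # so there is no convergence in the Norm Topology
    assert np.linalg.norm(C - 1j * np.eye(n), 2) > n - 1
```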

That FCCR encompasses both CCR and CAR in its extrema is a salutary aspect of the generalization, which did not at first appear to be a generalization.

The number operator N(n) below survives in the limit of unbounded n as an unbounded operator, and with an appropriate, and rather natural choice of an invariant domain of definition, its exponentiation remains well defined. Ergo, the Fourier transform between Q(n) and P(n) survives the limit, as a special case of the one parameter rotation group in the Q-P plane generated by N(n).

In this SOT limit, however, neither the time operator nor the "phase operator", existing for every finite n, survives the CCR limit; [Section XVII] but, the Fourier transform Υ (upsilon) [defined below] relating the eigenbasis of the Number operator and the eigenbasis of the time operator does survive; this Fourier transform representation is then seen algebraically to be responsible for the exp( i (Ea t)/ħ ) phase factors of time dependent energy eigenstates appearing in QM. Interpretation of these Fourier factors as limiting recurrence amplitudes is discussed later on.

Define a putative (not to worry: the mathematical facts will obviate most of the implied doubts) oscillator Hamiltonian, taking the same form as both standard Classical Mechanics (CM) and QM, in the expression,

H(n)  :=  (1/2)(Q²(n) + P²(n)),
H(n)  =  N(n) + (1/2) G(n)

where N(n), perhaps not surprisingly, turns out to be the nxn truncated number operator Diag[0, 1, ..., (n-1)], with eigenvalue statement in Dirac notation,

N(n) |n, k>  :=  k |n, k>.

[Section II: (2.3)]

This will be seen quickly to imply that the sum of the squares of the roots of the n-th Hermite polynomial [see below] is equal to [Section IX]

Tr( Q²(n) )  =  Tr( P²(n) )  =  n(n-1)/2

where Tr(.) is the trace functional on the finite dimensional C*-algebra of linear operators on an obvious finite dimensional complex Hilbert space; this is a result that I have never happened upon in any literature. Also then, the commutator equations [Section VII]

[N(n), Q(n)]  =  -i P(n)

[N(n), P(n)]  =  +i Q(n)

hold, exactly as in QM, meaning that N(n) is a generator of rotations exp( i α N(n) ), in the Q-P hyperplane of the operator C*-algebra. These last two equations correspond, in fact, to the classical dynamical QM oscillator equations in Hamiltonian formalism.
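These two commutator equations, together with H(n) = N(n) + (1/2) G(n), can be confirmed directly. A Python/NumPy sketch, conventions as above (ħ = 1):

```python
import numpy as np

n = 6
a = np.zeros((n, n))
for k in range(1, n):
    a[k - 1, k] = np.sqrt(k)
Q = (a + a.T) / np.sqrt(2)
P = (a - a.T) / (1j * np.sqrt(2))
N = np.diag(np.arange(n, dtype=float))
G = np.diag([1.0] * (n - 1) + [-(n - 1)])

# H(n) := (Q^2 + P^2)/2  equals  N(n) + G(n)/2
H = N + G / 2
assert np.allclose((Q @ Q + P @ P) / 2, H)

# the rotation-generator equations hold exactly at finite n
assert np.allclose(N @ Q - Q @ N, -1j * P)
assert np.allclose(N @ P - P @ N, +1j * Q)

# but the cognate equations with H(n) in place of N(n) fail, since G != I
assert not np.allclose(H @ Q - Q @ H, -1j * P)
```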

Do notice that the true cognate Hamiltonian equations with H(n) instead of N(n) do not hold for finite n, precisely because G(n) is not the nxn identity. Moreover, we do not have (or introduce arbitrarily) a smooth, continuous time parameter from which a derivative operator (d/dt) can be constructed. This will make more sense later on.

The point here is that though a "Hamiltonian" like operator is constructed for such an oscillator, it is not a real Hamiltonian operator, and this is not a Hamiltonian system. It is really an open system, and might be considered "almost Hamiltonian", more "almost" as n increases.

By the way, this formal Hamiltonian, H(n) and the cognate formal Lagrangian,

L(n)  =  (1/2)(Q²(n) - P²(n)),

together with their commutator almost provides (and in the SOT limit does provide in the algebra of Heisenberg algebra bilinears) a REP of the oscillator dynamical algebra su(1, 1).

Particularizing to a specific value of α = π/2, for the Q-P
rotations above, define

φ(n)  :=  exp( i (π/2) N(n) )  =>  φ⁴(n)  =  I(n)

     φ(n) Q(n) φ†(n)  =  + P(n)

     φ(n) P(n) φ†(n)  =  - Q(n)

which then shows φ(n) to be, exactly as it is in QM, a Fourier transform, but now a Finite (discrete) Fourier Transform [Section VII] connecting Q(n) and P(n) in the usual way; also, by the usual construction, φ²(n) maps Q(n) and P(n) to their negatives, effecting a spatial inversion, and so is a spatial parity operator.

The matrix elements <q(n, k)|p(n, j)> of φ(n), in the limit of unbounded n, express an expansion of the Fourier kernel exp( iqp/ħ ) in Hermite polynomials. [Appendix E]
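Since N(n) is diagonal in the number basis, φ(n) is a simple diagonal phase matrix, and its rotation and parity properties can be checked at once. A Python/NumPy sketch (taking the transform to act by unitary conjugation, X → φ X φ†):

```python
import numpy as np

n = 7
a = np.zeros((n, n))
for k in range(1, n):
    a[k - 1, k] = np.sqrt(k)
Q = (a + a.T) / np.sqrt(2)
P = (a - a.T) / (1j * np.sqrt(2))

# phi(n) = exp( i (pi/2) N(n) ): diagonal phases in the number basis
phi = np.diag(np.exp(1j * (np.pi / 2) * np.arange(n)))

assert np.allclose(phi @ Q @ phi.conj().T, P)       # Q -> +P
assert np.allclose(phi @ P @ phi.conj().T, -Q)      # P -> -Q
assert np.allclose(np.linalg.matrix_power(phi, 4), np.eye(n))  # phi^4 = I

phi2 = phi @ phi          # phi^2 is the spatial parity operator
assert np.allclose(phi2 @ Q @ phi2.conj().T, -Q)
assert np.allclose(phi2 @ P @ phi2.conj().T, -P)
```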

   Define, again as in QM, the creation and annihilation operators,
   [Section II]

	B(n)   :=  1/(√2) ( Q(n) + i P(n) )

	B†(n)  :=  1/(√2) ( Q(n) - i P(n) )

   where '†' superscript signifies Hermitean conjugation,
   then [Section II: (2.4)]

	[B(n), B†(n)]  =  G(n)

   Being a finite dimensional matrix, B(n) possesses a polar decomposition
   (This decomposition fails in QM),
   [Appendix D: (D.11)]

	B(n)  =  CN(n) N^(1/2)(n)

   where CN(n) is the unitary, cyclic operator of the N(n)-eigenbasis,
   defined by the mappings, [Section II: (2.17)]

	CN(n) |n, k>   =  |n, k-1>

	CN†(n) |n, k>  =  |n, k+1>

   where k is understood as mod n.  CN(n), being unitary, has a form
   [Section II: (7.40)]
	CN(n)   =  exp( +i (2 π)/n T(n) )

	CN†(n)  =  exp( -i (2 π)/n T(n) )

   where T(n) is an Hermitean matrix.  The eigenvalues of CN(n) are the
   nth roots of unity, since CN^n(n) = I(n), and for no power less than
   n, meaning that this last equation expresses the minimal polynomial
   of CN(n), and so the eigenvalues of T(n) also give the eigenvalues of N(n),
   and vice versa.

   Substituting the polar decomposition of B(n), The fundamental CR relation
   of FCCR(n) becomes,

	CN(n) N(n) CN†(n) - N(n)  =  G(n)

   and then for any integer k,
	exp( +i k (2 π)/n T(n) ) exp( i N(n) ) exp( -i k (2 π)/n T(n) )

		=  exp( i (N(n) + k I(n)) )

   as an "integrated" or Weyl type form of the previous equation, and an
   expression of discrete translations in N-space generated by T.
   Cf. [Section VIII: Corollary 8.11.2]
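A small numerical check of the polar decomposition and the shift form of FCCR (Python/NumPy sketch; the cyclic shift is taken to act on the left in the polar factorization, one ordering convention, consistent with CN|k> = |k-1 mod n>):

```python
import numpy as np

n = 6
B = np.zeros((n, n))
for k in range(1, n):
    B[k - 1, k] = np.sqrt(k)          # B|k> = sqrt(k)|k-1>
N = np.diag(np.arange(n, dtype=float))
G = np.diag([1.0] * (n - 1) + [-(n - 1)])

# cyclic shift CN |k> = |k-1 mod n>, the unitary factor of the decomposition
C = np.zeros((n, n))
for k in range(n):
    C[(k - 1) % n, k] = 1.0

# polar form: the cyclic wrap-around entry is killed by sqrt(0) = 0
# (elementwise sqrt is the operator square root for diagonal N)
assert np.allclose(C @ np.sqrt(N), B)

# the fundamental relation of FCCR in shift form
assert np.allclose(C @ N @ C.conj().T - N, G)

# CN^n = I, and no smaller power is I: eigenvalues are the n-th roots of unity
assert np.allclose(np.linalg.matrix_power(C, n), np.eye(n))
assert not any(np.allclose(np.linalg.matrix_power(C, m), np.eye(n))
               for m in range(1, n))
```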

So, T(n) is a generator of discrete energetic translations "in the Weyl sense" of FCCR replacing CCR, and can therefore be associated with a time operator with a discrete, equally spaced spectrum. But, notice that the discrete variable k that appears in the formal position of time in this analog of the QM expression is not conceptually connected to an eigenvalue of T(n). It is rather an index of iteration in the application of a functional transformation.

Application of the second Fourier transform (which does not exist as an operator in QM) Υ(n), q.v. below, to this last formula exchanges the symbols N and T.

From here on I shall distinguish, symbolically, a T(n), which always has integral eigenvalues like the number operator N(n), from a t(n) which is a physical time operator attached to the local oscillator clock of size n that is a scaled T(n).

To distinguish T(n) by its physical meaning, we might best call it a "clock operator" or as of old, a "phase operator", and not a directly named "time operator" since it will not directly define a Newtonian like time.

   Now, diagonalizing CN(n) is equivalent to diagonalizing T(n), and the
   diagonalizing transformation Υ(n) of CN(n) is shown by a simple
   inspection in the N(n) eigenbasis to be given by its components,

	<n, k| Υ(n) |n, j>  =  1/(√n)  exp( i 2π/n kj )
	|tk>  =  1/(√n) Σ_j  exp( i 2π/n kj ) |n, j>

   |tk> is then interpretable as an equidistribution of energy "states",
   with specific phases.  [Section VII: (7.21)]

   Υ(n) can be verified also to be a Fourier Transform connecting

	N(n) and T(n)

   so that

	Υ(n) N(n) Υ†(n)  =  T+(n)  :=  + T(n)

	Υ†(n) N(n) Υ(n)  =  T-(n)  :=  - T(n)

where T+(n) and T-(n) are readily interpreted as forwards and backwards clock operators, respectively. Notice that having representations of clocks running backwards and forwards is much like the notion of Schrödinger's cat which is in a linear combination of states "dead" and "alive". It is a necessary aspect of the superposition principle.
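The Υ(n) matrix is just the discrete Fourier transform, and its role can be verified directly: it is unitary, its columns (the kets |tk>) diagonalize CN(n), and conjugating N(n) by it produces an Hermitean matrix with the same integer spectrum as N(n). Python/NumPy sketch:

```python
import numpy as np

n = 8
k = np.arange(n)
# <n,k| Y |n,j> = exp( i 2 pi k j / n ) / sqrt(n): the DFT matrix
Y = np.exp(2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
assert np.allclose(Y @ Y.conj().T, np.eye(n))          # unitarity

C = np.zeros((n, n))
for j in range(n):
    C[(j - 1) % n, j] = 1.0                            # CN |k> = |k-1 mod n>

# the columns |t_j> of Y are eigenvectors of CN, with n-th roots of
# unity as eigenvalues
assert np.allclose(Y.conj().T @ C @ Y, np.diag(np.exp(2j * np.pi * k / n)))

# conjugating N by Y gives an Hermitean matrix with the same integer spectrum
N = np.diag(k.astype(float))
T = Y @ N @ Y.conj().T
assert np.allclose(T, T.conj().T)
assert np.allclose(np.sort(np.linalg.eigvalsh(T)), k)
```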

The square of Υ(n) gives a kind of energetic reversion that leaves |n, 0> invariant, while complex conjugation is a consistent time (clock) inversion, as again it is in QM. A short but elegant paper [Chaturvedi 1998] shows that the Υ(n) transform of N(n), interpreted as a passive change of basis, expresses N(n) in a maximally off diagonal form, i.e., in a form where transition elements are maximally dominant over expectation values.

   The reverted number operator is:

	Υ²(n) N(n) Υ²(n)

   For a SHO in QM, the Newtonian time is a continuous variable, not
   discrete, and a long winded dissertation is possible where a physical
   t(n) can acquire a scaling function so that, e.g.,

	t(n) = (1/n^x) T(n),  0 < x ≤ 1

   Υ(n) then no longer maps operator N to t, but it *does* connect their
   orthonormal eigenbases.  Matrix elements can be given by,

	<n, k| Υ(n) |n, j>  =  <n, k|tj>  =  

	=  1/(√n)  exp( i (2π)/n kj )

   If the energy values remain as they are in QM, consistent with those of the
   oscillator in QM despite being finite, then we must define a time operator

	t(n) = (1/n) T(n),

so that the eigenvalues of t(n) measure the pointer positions of the limiting clock over one cycle of (2π), behaving as one would expect, as a "phase operator". With appropriate topological sorcery in the limit of unbounded n, t(n) becomes the generator of the "circle group" U(1).

By choosing a period appropriate to some arbitrary ideal classical clock of arbitrary precision (think a regular n-gon), and passing to the covering group of U(1) (the completion of the limiting circumscribing circle of the sequence of n-gons), one regains the classical Newtonian time parameter, but one associated with a clock of bounded cycle. In passing to the covering group, the assumption must be made that deck transformations can be made, and that therefore, physically, there is something (conceptually, a secondary lab clock or a memory accumulator) that will count or record the periods of the SHO system's own clock.

An alternative that speaks in terms of cosmological QM is to set the period of a classical clock to the period of the universe as clock. That would, of course, automatically suggest a closed universe in the manner of a closed Robertson-Walker universe of GR. There is a more detailed discussion on this below.

I do not want to go any further here with limits of finite Fourier transforms, and limits of operators. If I did, I would never finish. There are things yet to discover and things yet to figure out. Besides, I have not yet created an encyclopaedia out of all the topological possibilities and technicalities. They are not exactly trivial. Any attempt at such an exposition now would simply terminate in medias res.

Returning to the unitarily equivalent Q(n) and P(n), both have eigenvalues that are the roots of the nth Hermite polynomial. The diagonalizing transformation for Q(n) has matrix elements Hk( qj ), k = 0, 1, ..., (n-1), and qj the roots of Hn(z) = 0, modulo some normalization business. Q(n) being real and symmetric, the diagonalizing transformation of Q(n) is a real orthogonal transformation whose components can be calculated. For that calculation, the roots of Hermite polynomials are needed. [Section IX].

These roots of the Hermite Hn(z) have the (large n) asymptotic form

	q(n, k)  =  ( π/(2√n) ) ( (n-1)/2 - k )

   The actual roots approach an equal spacing rather rapidly; though for
   small n, approximately n < 12, a better approximation turns out to be

	q(n, k)  =  √(6/(n+1)) ( (n-1)/2 - k )

   [Appendix E]
   I cannot find these results anyplace else, and must presume
   that they are either new, or were just simply lost.  Getting
   them is not exactly trivial, but it is not exactly tremendously
   difficult either.
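   These root formulas are easy to probe numerically.  What follows is a
   pure-Python sketch (function names, the bisection grid, and tolerances
   are my own illustrative choices, not from any appendix) that locates
   the zeros of Hn by sign changes and bisection, and checks the quoted
   small-n formula, which turns out to reproduce the zeros of H2 and H3
   exactly:

```python
# Sketch: zeros of the physicists' Hermite polynomial H_n, found by
# bisection, compared against the quoted small-n spacing formula.
# Function names and numerical tolerances here are illustrative choices.

import math

def hermite(n, x):
    """H_n(x) via the three-term recurrence H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def hermite_roots(n, grid=20000):
    """All n real zeros of H_n; every zero lies in (-sqrt(2n+1), sqrt(2n+1))."""
    b = math.sqrt(2.0 * n + 1.0)
    xs = [-b + 2.0 * b * i / grid for i in range(grid + 1)]
    roots = []
    for x0, x1 in zip(xs, xs[1:]):
        f0, f1 = hermite(n, x0), hermite(n, x1)
        if f0 == 0.0:
            roots.append(x0)
        elif f0 * f1 < 0.0:
            lo, hi = x0, x1
            for _ in range(80):          # bisect down to machine precision
                mid = 0.5 * (lo + hi)
                if hermite(n, lo) * hermite(n, mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

def q_small(n, k):
    """The quoted small-n approximation  sqrt(6/(n+1)) ((n-1)/2 - k)."""
    return math.sqrt(6.0 / (n + 1)) * ((n - 1) / 2.0 - k)

# For n = 2 and n = 3 the small-n formula reproduces the zeros exactly.
for n in (2, 3):
    exact = sorted(hermite_roots(n), reverse=True)
    for k, root in enumerate(exact):
        assert abs(root - q_small(n, k)) < 1e-9
```

   For somewhat larger n the same routine can be used to watch the central
   spacings of the roots approach equality.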

All of the eigenvectors of Q(n), P(n) and T(n) are G(n)-null, "photonic" (<k|G|k> = 0); N(n) has one "chrononic" (<k|G|k> < 0) eigenvector |n, n-1>, the eigenvector associated to its maximal eigenvalue, while the others are "toponic" (<k|G|k> > 0).

A pseudoinner product on the carrier space of vectors defined by G(n) extends to a (trace) pseudoinner product on the algebra of operators, which determines a pseudonorm on the algebra. [Appendix A] So, for the operators Q(n), P(n), T(n), one could instead say that they are null operators in the G(n) trace pseudonorm: for X = Q(n), P(n), T(n); Tr( X G X ) = 0. The G-norm of N(n) turns out to be -n(n-1)(4n-5)/6, (n > 1), classifying N(n) as a chrononic operator, since its G(n)-norm is negative. G(n) itself is classified as chrononic, from the trace of G³(n). [Appendix I]

The operator G(n) then behaves as a background energy both in the usual SHO QM sense, as well as in the sense of a cosmological constant of GR.

Every carrier space, for given n, supports an IRREP of su(2), [Section XIV] hence of the group SU(2), which is, of course, not true for any su(n), SU(n) where n > 2. By a complex extension, then, also a finite dimensional IRREP of the algebra sl(2, C) and group SL(2, C), all of which are usually associated with the spin (n-1)/2 and mass zero. (Massless particles are their own antiparticles.) For general direct products FCCR(n, m) = FCCR(n) X FCCR(m) the value of the "mass" Casimir operator is not zero, but being imaginary, suggests a massive resonance. This, and the previous paragraph indicate that the oscillators spoken of in FCCR(n) are massless. Since a massless oscillator is not possible in a nonrelativistic theory, the conclusion should be that FCCR(n) is not only a viable Q theory, having all the necessary characteristic properties, but that it is also necessarily relativistic in a fundamental, but as of now, arcane sense. [Section XIV]

Fundamental Velocities (speeds)

G(n) determines the vertex angle of the null cone θ(n) in the implied n dimensional Hilbert space with indefinite inner product ground form G(n) to be given by

	tan( θ(n) / 2 )  =  √(n-1),

so as n→∞, the cone opens up and θ(n) → π. If this determines a fundamental local velocity, that velocity becomes unbounded with n, of order n^(1/2). This makes perfect sense for a limit of a nonrelativistic quantum theory where the propagation velocity is fundamentally unbounded.

Also, for the oscillator, there is a fundamental speed determined by the ratio of eigenvalue spacings of Q(n) and t(n).

	c(n)  :=  (Δq(n))/(Δt(n))

	       =  [ π / (2 √n) ] / t(n)

	       =  [ π / (2 √n) ] / [ T / n ]

	       =  [ π / (2 √n) ] [ n / T ]

	       =  ( π / 2 ) ( √n / T )

   Cf.  History of Light-Speed Debate
        Speed of Light

   where T is the time of the fastest run of a large clock in one
   cycle, and t(n) is the finite time of a fastest possible QUANTUM
   JUMP.  There is no "transition" of stasis for this path (It is
   therefore unlikely).  The time interval t(n) can be looked at
   as the average time of a jump |n, t> to |n, (t ± 1)>, or as a
   refractory time (See De Broglie Redux) at every pointer position.

	t(n)  :=  T/n

   Since we may not look between these pointer positions, it appears
   that we cannot distinguish between these two interpretations of
   t(n).

This ratio of eigenvalue spacings, asymptotically, is of the order of n^(-1/2)/n^(-1) = n^(1/2), exactly as before. The existence of a maximal speed for fixed n is of course typical of a relativistic theory. Assuming, possibly wrongly, that scaling functions for operator "observables" that depend on n are not used, this seems to imply (the mathematics being applied to reality) that

	A fundamental finite, bounding velocity is intimately
	connected with the finiteness of the universe.

This may explain why the simple combination of QM and SR leading to the Klein-Gordon equation does not lead to something in itself sensible: it involves an infinite universe and a finite bounding velocity, and these conditions are somehow incompatible and contradictory, thus pointing to a fundamental and rather specific way of understanding the essential incompatibility of the Q and R principles as customarily conceived.

Note that while some existing theories of a variable speed of light predict that the speed decreases with the size of the universe, FCCR, simplistically invoked, seems to predict that it increases. This is, at very least, a point of investigation, since from about 1998 Victor Flambaum, John Webb, et al. have suggested that over the expansion of the universe during the past 12 billion years, α, the fine structure constant, has increased by a few parts in 10^5. See also the discussion of the Planck Units and the reference to the theoretical derivation of fine structure constants by James Gilson therein cited.

See also, below, in the more detailed discussion of the 1-dimensional universe as an FCCR clock, that the speed of the clock also increases as n increases without bound, while remaining bounded for any finite n.

These results and arguments notwithstanding, the observed and therefore hypothesized invariant speed of light is not necessarily directly associated with the discussed speed, and more than likely arises from a further statisticalization of reasonably long term propagation through an excited medium (space) that is not uniform, and specifically not uniform in a local modeling by the FCCR(n) dimension, 'n'.

The arguments for the increasing "speed of light", within the model are all made in the context of a 2-dimensional QST, and in the absence of any specific relativistic and EM pretext, so it is still difficult to say that what is being analyzed is, in fact, "the" speed of light in anything.

One of the nasty aspects of having the present kind of time operator is that it seems difficult to avoid having a time operator for every spatial operator, and while Qk and tk are both Hermitean, the combinations ( Qk + i tk ) are not even normal, since the commutators [Qk, tk] do not vanish.

This "speed of light in a vacuum" (a rather artificial concept to begin with, since a vacuum, specifically an EM vacuum does not exist) is not some determined aspect of the universe's birth, but an emergent cooperative consistency.

The permeability and permittivity of a "free" space in classical EMT have specific nonvanishing bounded values for a spacetime that is putatively featureless, empty of everything and anything. Can this possibly be acceptable in any legitimate physical theory, and be swallowed whole, without comment? This is not the only problem with classical EMT. The very concept of EM interaction (See the work of Feynman and Wheeler) is problematic as is the concept of the canonical angular momentum tensor (PhD Thesis of Glenn Schmieg). Sudarshan and Mukunda [Sudarshan 1974] have shown that a system of interacting particles is physically equivalent, by legitimate transformation, to a system of noninteracting particles. Is the concept of EM interaction even theoretically real? Is it an illusion (delusion)? If so, is Maxwellian EMT actually a physical theory, or is EMT yet another delusion? I am NOT joking!

As a long time friend and teacher of musical composition (Raoul Pleskow) first told me many years ago, rather gently but emphatically: "It doesn't all have to sound like angels singing, but it DOES have to make sense."

Either a physical theory stands on its own terms (consistency), or it does not; this is not a matter of opinion or point of view. It is a matter of excruciating, absolute mathematics (apologies to Kurt Gödel); we don't actually have physical theories, we have only used-to-be-cool ideas - and a fair amount of transparent bullshit.

At any rate, if we understand this "speed of light" nonsense to have some meaning, fantastic though that is, then the essential variation of this speed in space and time (that it is not a fundamental constant) is also understood. Relativistic prescriptions and proscriptions are also then emergent matters, and not a priori, given matters of fundamental ontology.

The fundamental distinction of FCCR from any simple relativistic theory is that although fundamental bounding velocities exist, they are plural, and there is no one invariant velocity that can be taken as a universal constant, except the one specified by a possible maximal n corresponding to the "size of the universe" (which changes). The measured invariant velocity seems to depend on the size of the region in which it is measured, since the number of velocities that can be measured is finite, as are all those velocities.

Taken as a simple model of the universe à la Robertson-Walker, where an increasing n measures the size of the universe, in observation of light from "older objects", the emission speed of light would be slower than it is now, and have increased as the universe expanded to its present "state" (quoted since we no longer really believe in states of systems, only processes). Taking such an interpretation of FCCR would turn out to look very much like an Inönü-Wigner contraction of the Lorentz group to the Galilei group of Newtonian QM.

The Newtonian time of both CM and QM is often described as "central", meaning that it is an observable quantity that commutes with all observables in the algebra of observables, and is therefore by definition in the center of the algebra of operators corresponding to observables; this, despite the fact that a time operator simply does not exist in any reasonable sense within the standard mathematical framework of QM. It is a cute phraseology, but it may be mathematically misleading.

In QM, and standard QTs generally, it is more natural to look at the quantum "state space" in analogy to classical mechanics as a one parameter Hilbert bundle, where the parameter base space, of course, is time. Dynamics in such QTs is then the story of cross sections of the bundle. A bundle concept can be maintained in FCCR, but the cross sections are certainly not continuous functions.

So far then, FCCR(n) describes a local Q structure with a noncentral clock operator, with derivable uncertainties as in QM. The apparent fly in the ointment is that G(n) as an indefinite ground form of a finite dimensional complex pseudohilbert space, appears to interfere with the standard probability interpretation and, apparently as well, the conservation of probability under unitary dynamical transformations.

However, since T(n) "time" is no longer a parameter, but is instead an operator that is Fourier conjugate to N(n), the time told by this implied local proper clock is a physical aspect of the local oscillator system. The time of dynamics of the system is then not a simple parameter, and so the customary mathematical form of the dynamical problem ceases to exist. By itself, this time is not exactly Newtonian, but only apparently Newtonian to a localized observer understanding quantum phenomena, and observing (whatever that means) localized phenomena.

In FCCR, there is no longer a dynamical equation of evolution which moves the "process vector" smoothly, but there is instead a stochastic, energetically motivated process that operates according to the transition amplitudes for the clock eigenprocesses. Here we depart formally and conceptually from QM, by understanding that every physical system is also essentially a clock. As physical systems cannot be truly isolated, nor can their clock aspects: the universe is an interconnected plexus of finite, Q discrete clocks. I don't see that there is any real way out of this conclusion.

As Max Born put it in the case of QM, "The motion of particles follows probability laws, but the probability itself propagates according to the law of causality." Born, of course, meant a Cauchy propagation that is expressed by the abstract Schrödinger equation that propagates the QM state vector by an implicitly unitary transformation. It is Born's "causal" propagation as a differential equation that FCCR breaks, and then restores in the limit of large n, fairly obviously, not in a limit of any norm topology of any Hilbert space.

Born was, of course, talking about the model, not any ultimate reality.

Is there a further stochastization possible or required? Consider, e.g., stochasticizing the operator elements themselves, making a stochastic operator algebra related to Q fuzzy concepts. Do we need to go there? Offhand, I do not see a motivating force to do that in terms of physical models right now; but it is the sort of thing that one does now in passing from QM to QFT.

Invariance Groups

It is clear that the form of the basic FCCR(n) relationship is invariant under any change of basis, and that the transformation group is the general linear Lie group GL(n, C).

Almost as clear is that the group of transformations which leaves G(n) invariant is a Lie subgroup of GL(n, C) that is conjugate in GL(n, C) to its subgroup SU(1, n-1). This conjugate subgroup also preserves the above velocity, as the Lorentz group preserves 'c'. [Section XI]

However, only the maximal compact subgroup of the conjugate group will preserve the Hermiticity of Q(n), P(n) and G(n). Hermiticity implies reality of the spectra of these three operators, already taken as Hermitean.

If G(n) is understood as a bilinear ground form for the underlying Hilbert space, as it is in QM, and as a pseudo inner product as here, one might suspect it to be associated with some "conservation of probability" as indeed it is in the context of QM.

This putative sum of all probabilities would be conserved if transformations were restricted to the maximal compact subgroup, but it would not be conserved under the full conjugate subgroup. How bad would these extra, nonunitary "boosts" be? It would be a poor observer who could not be counted on to renormalize his observed frequencies; in fact, he should always be expected to do just that, having the invariant velocity at his fingertips.

I use the concept of observer here as a recognition that our fundamental physics cannot ever be anything but a model of that which we can never experience in toto, and that this limitation automatically means that we can only rely on measurements (values that macroscopic devices spit out or point to) that we can make to negate the validity of our models.

As it turns out, we will be forced to take a different view of quantum probabilities anyhow, so I will pursue this problem no further here.

A lingering problem remains, however, under transformations by the nonunitary subgroup conjugate to SU(1, n-1). The eigenvalues of the fundamental Q(n) and P(n) can become complex, and we wonder how terrible this is.

In Classical Geometry & Physics Redux we argue that even our pictorial models of Euclidean space are inconsistent with the Cartesian-Euclidean axiomatics of it, and that in sooth, these pictures need to be revised to accommodate what that formal mathematics actually tells us. We come there to the conclusion that the appropriate mathematical machinery to read is that of complex Clifford algebras. These, combined with combinatorics, allow appropriate room to deal with fluctuations of local spacetime signatures.

Many of the requirements of a relativistic expression are somehow already built into FCCR(n), and my suspicion is that connections to the usual formalistic statements appear by statistical means from here.

This is to say that SR is not after all a fundamental aspect of physical reality, while the quantum principle is, and further that SR structure is an emergent phenomenon when a smooth concept of time arises through combinatorial statistics of a decidedly Q flavor.

Notice that the natural contextual action of the group conjugate to SU(1, n-1) is on the full finite dimensional C*-algebra of complex nxn matrices, which contains the su(n) official algebra of Hermitean "observables"; this algebra is invariant under the action of the maximal compact subgroup of the noncompact conjugate subgroup. G(n), an element of su(n), is by its construction invariant under the full noncompact conjugate subgroup.

With the understanding and assumption that still, "energy is the generator of time translations", (or slightly more generally, one might understand that energy is the motivic source of temporal extension) as has been already shown, we will need to evaluate some matrix elements in order to reformulate a concept of quantum dynamics: [Section VIII, theorem 8.5] [Section VIII, theorem 8.18]

	<tk| N(n) |tj>  =

	- (1/2) ( 1 + i cot( π/n (k-j) ) )   for k ≠ j

	 (n-1)/2                             for k = j

   This can be shown by first noticing that, as a geometric series,

	n-1                  exp( i n z ) - 1
	 Σ  exp( i k z )  =  ----------------  =
	k=0                   exp( i z ) - 1

	            sin( n z/2 )
	           ------------- exp( i (n-1) z/2 ).
	             sin( z/2 )

   Differentiating w.r.t. z, and then letting z = (j-k)(2 π)/n, does
   the trick.

   The transition frequencies

	|<tk| N(n) |tj>|²  = <tk| N(n) |tj> <tj| N(n) |tk>

need to be normalized to define transition probabilities, and there is a routine, well defined way of doing that, namely by dividing the frequencies by the factor (1/n)Tr( N²(n) ). The structural capacity to work a stochastic dynamics entirely in transition amplitudes is clearly lost in a limit of unbounded n, since CCR will always imply that some physical observables are represented by unbounded operators, with unbounded trace; however, it is seen below that the standard differential equation takes over for very large n as an approximation. [Section VIII, theorem 8.21] [Section VIII, theorem 8.22].
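Both the closed form for <tk| N(n) |tj> and the trace normalization can be checked numerically. The following Python sketch assumes the phase-basis convention <m|tj> = (1/√n) exp( -i 2π m j / n ), m = 0, ..., n-1; the sign of the phase is my assumption, chosen to match the cot( π/n (k-j) ) form quoted above:

```python
# Sketch: direct numerical check of <tk| N(n) |tj> against the closed
# form, and of the normalization by (1/n) Tr( N² ).  The basis phase
# convention here is an assumption made to match the quoted formula.

import cmath, math

def elem(n, k, j):
    """<tk| N(n) |tj>  =  (1/n) Σ_m  m exp( i 2π m (k-j)/n )."""
    return sum(m * cmath.exp(1j * 2.0 * math.pi * m * (k - j) / n)
               for m in range(n)) / n

def elem_closed(n, k, j):
    """The closed form quoted above."""
    if k == j:
        return complex((n - 1) / 2.0)
    return -0.5 * (1.0 + 1j / math.tan(math.pi * (k - j) / n))

n = 7
for k in range(n):
    for j in range(n):
        assert abs(elem(n, k, j) - elem_closed(n, k, j)) < 1e-9

# Normalizing the frequencies |<tk|N|tj>|² by (1/n) Tr( N² ) yields
# probabilities that sum to 1 over the final pointer positions j.
norm = sum(m * m for m in range(n)) / n
for k in range(n):
    total = sum(abs(elem(n, k, j)) ** 2 for j in range(n)) / norm
    assert abs(total - 1.0) < 1e-9
```

The unit sums over j reflect the completeness relation Σj |<tk|N|tj>|² = <tk|N²|tk>, whose diagonal is independent of k.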

In QM, the integrated Heisenberg solution for an observable O where the Hamiltonian is functionally independent of time is determined in [Section VIII, theorem 8.23] and [Section VIII, theorem 8.24]

Q(n) & P(n) Expectation values

NB: From [Section VIII, theorem 8.11], the expectation values of Q(n) and P(n) on the phase eigenprocesses |tk> behave sinusoidally like the classical oscillator, and also then like the coherent states of the QM harmonic oscillator. The index k marks the discrete progression of the semiclassical operator values.

From CCR to FCCR

   While the preceding theorems work forward from FCCR, we can also
   work oppositely, starting from QM.  With a time independent Hamiltonian,
   H, a dynamically propagated observable in the Heisenberg picture
   is expressed as follows; more generally, see notes on the [Lax equation].

	O(t)  =  exp( -i t H/ħ ) O(0) exp( +i t H/ħ )

   where O(0) is the initial value for the operator O.

   [The following is admittedly a bit tortured, the same points leading
   to the choice of k-click propagator are approached from various sides,
   making a strong case for the choice, natural though it may appear; for
   a leaner variant in approach, look here.]

   For small t (t→dt), the BCH formula reproduces the Heisenberg
   equation of motion when only terms linear in dt are retained.

	dO(t)/dt  =  +i/ħ [O(t), H]

   Determine the rule for manipulating <t|H|t'> so that the above
   dynamical relation is restored for large n.  Only the first Planck scale
   transition is completely clear.

	<tk| (TH) |tj>  =  tk <tk| H |tj>

	<tk| (TH)² |tj>  =  tk <tk| HTH |tj>

		    =  tk <tk| THH + HTH - THH |tj>

		    =  (tk)² <tk| HH |tj> + tk <tk| [H, T]H |tj>

	exp( -i t H/ħ )  =  exp( -i k t0 H/ħ )

	                 =  (exp( -i t0 H/ħ ))^k

	                 =  (CT)^k


	O(k t0)  =  (CT)^k O(0) (CT†)^k

	O(k t0)  =  (CT) O( (k-1) t0 ) (CT†)

	O(k t0) - O( (k-1) t0 )  =
		(CT) O( (k-1) t0 ) (CT†) - O( (k-1) t0 )

   For very small t0,

	CT   =  exp( -i t0 H/ħ )  ≊  1 - i t0/ħ H

	CT†  =  exp( +i t0 H/ħ )  ≊  1 + i t0/ħ H


	O(k t0) - O( (k-1) t0 )  =

		(1 - i t0 H/ħ ) O( (k-1) t0 ) (1 + i t0 H/ħ)
		 - O( (k-1) t0 )

		=  + i t0/ħ [O( (k-1) t0 ), H]

	O(k t0) - O( (k-1) t0 )
	-------------------------  =  + i/ħ [O( (k-1) t0 ), H]
	           t0

	O( (k+1) t0) - O(k t0 )
	-------------------------  =  + i/ħ [O( k t0 ), H]
	           t0

   which is a finite and discrete form of the Heisenberg equation of
   motion of the QM.

   So, propagation in the arena of FCCR is effected by

	<tk|CT|tj>   =  <tk| exp( -i t0 H/ħ ) |tj>

	             =  exp( -i t0/ħ <tk|H|tj> )

	<tk|CT†|tj>  =  <tk| exp( +i t0 H/ħ ) |tj>

	             =  exp( +i t0/ħ <tk|H|tj> )

	exp( +i t0/ħ <tk|H|tj> )  =

	lim ( 1  +  (i t0/ħ <tk|H|tj>)/m )^m
	m→∞

	    ∞   (i t0/ħ <tk|H|tj>)^k
	=   Σ   --------------------
	   k=0            k!

	    ∞              <tk| H^k |tj>
	=   Σ   (i t0/ħ)^k --------------
	   k=0                   k!

   Clearly then, the transition amplitude elements

	<tk| H^k |tj>

   will be essential to any sense of propagation or time evolution in
   the generalized FCCR context.

Now we examine constructive details, meanings and interpretation of the notions of evolution and propagation. You can skip forward if you like and go directly to the problem of normalization and to a look at the significance of the associated probability distributions by taking this link.

The amplitudes of Q theory combine regarding conjunction, disjunction, dependence and independence as do the probabilities of classical probability theory. This understanding shows transition frequencies being related to a conjunction of a temporal forward and backward transition, since complex conjugation is a time reversal.

It can be seen in the normalization procedure below that the distinctions between forward and backward time propagation are quite real, and that mathematically this distinction must be taken into account when "enumerating the complete set of possibilities", as is always necessary in any quantum theory.

		Q Counting becomes Exponential Propagation:
		Another subtlety of time and clocks

	The cyclic operator on the N eigenbasis
	CN(n)  =  exp( -i (2 π)/n T(n) )

	The cyclic operator on the T eigenbasis
	CT(n)  =  exp( -i (2 π)/n N(n) )

	      =  exp( -i t0 (2 π)/(n h) (E0 N(n)) )

	      =  exp( -i t0/(n ħ) (E0 N(n)) )

	      =  exp( -i H(n) dt / ħ )

		    With H(n)  =  E0 N(n), dt  = t0/n

	          ∞   ( -i H(n) dt / ħ )^j
	      =   Σ   --------------------
	         j=0           j!

	      =  1  - i ( H(n) dt / ħ ) - (1/2)( H(n) dt / ħ )² + ...

   For any value of n > 1, think of dt as an infinitesimal: (dt)^j will be
   smaller than the physically allowed dt required for a transition to be
   possible; dt is conceptually and physically a "temporal refractory quantum"
   which must "elapse" before a transition is possible; this is metaphor.

   Thus, only the first two terms can be physically effective in the
   expression for only one possible transition (k=1), where

	(CT(n))^k  =  exp( -i H(n) k dt / ħ )

   expresses the propagator for exactly k possible transitions.
   If we keep the zeroth term and the first term for one transition,
   we must keep the zeroth term through the kth term for k transitions in
   the expansion,

	              k  ( -i H(n) k dt / ħ )^j
	(CT(n))^k  =  Σ  -----------------------
	             j=0           j!

	      =  1  - i ( H(n) k dt / ħ ) - (1/2)( H(n) k dt / ħ )²

	         + ... + (-i)^k (1/k!)( H(n) k dt / ħ )^k

   which is then a sum of k+1 terms, each of which represents exactly j
   transitions.  These terms represent a collection of mutually exclusive
   events; we do not know, given a clock time, how many transitions j have
   taken place since any prior setup or observation within k clicks.

   Moreover, if the j transitions are ordered by labels (1, 2, 3, ..., j)
   in each term, we also cannot know the order of any specifically labeled
   transitions; therefore, we divide the jth degree term by the number of
   permutations of the labeled transitions, j!.  The product
   ( -i H(n) k dt / ħ )^j expresses the *probabilistic independence*
   of the individual transitions in any sequence of j of them.

   Physically then, one approximates the k-transition propagator

	              k  ( -i H(n) k dt / ħ )^j
	(CT(n))^k  =  Σ  -----------------------
	             j=0           j!

   From a slightly different point of view, from the binomial theorem,

	(1 + A/k)^k  =  Σ  (k j) (A/k)^j

   where (k j) := k!/(j! (k-j)!) is a binomial coefficient.  When k is
   very large, we can use Stirling's approximation for the factorial
   (or Gamma function) applied to these binomial coefficients, so that

			(k j)/k^j  →  1/j!

   So for large k, and arbitrary A,

	(1 + A/k)^k  ≊  Σ  A^j/j!

   with a limit of unbounded k,

	                    ∞
	lim (1 + A/k)^k  =  Σ   A^j/j!  =  exp( A )
	k→∞                j=0
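   Both limiting statements are easy to watch numerically; a small Python
   sketch with a scalar A standing in for the operator argument (the
   sample values are arbitrary choices of mine):

```python
# Sketch: (k choose j)/k^j → 1/j!  and hence  (1 + A/k)^k → exp(A),
# here for a scalar A standing in for the operator argument.

import cmath, math

k = 10 ** 6
for j in range(6):
    ratio = math.comb(k, j) / k ** j
    assert abs(ratio - 1.0 / math.factorial(j)) < 1e-4

A = -0.7j                     # an arbitrary "(-i s H/ħ)"-like scalar
approx = (1 + A / k) ** k
assert abs(approx - cmath.exp(A)) < 1e-5
```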

   Take then, the single transition propagator to be

	( I - i H(n) dt / ħ )  =  ( I - i t0/n  H(n)/ħ )

   and the k-transition propagator, where dt := t0/n, a "clock
   phase unit", to be exactly as follows:

	( I - i H(n) k dt / ħ )^k  =

		( I - i (2 π t0) (k/n)  H(n)/h )^k

   Correct form is indeed a power:

	( I - i H(n) dt / ħ )^k  =

		( I - i t0/n  H(n)/ħ )^k  =

   (inserting and distributing k/k)

		( I - i t0 (k/n) (1/k)  H(n)/ħ )^k

   Then with s(n) :=  t0 (k/n), which ranges in t0 [0, ∞) for any finite n,

	( I - i H(n) dt / ħ )^k  =

		( I - i (1/k)  H(n) s(n)/ħ )^k


   This is an Important Choice of Theory so that evolution of FCCR by its
   propagator has the correct form of QM evolution for very many
   transitions.  Then this is (referencing above), using (k j)/k^j → 1/j!,

	=  Σ  (k j) (1/k)^j ( -i s H(n)/ħ )^j

	=  Σ  ( -i s H(n)/ħ )^j / j!

	→  exp( -i s H(n)/ħ )

   approximately in the second equality for large k.  In the k-transition
   propagator, notice the two canceling appearances of k; the k-transition
   propagator is still just a k-fold product for single transitions, and
   setting k=1 gives back the single transition propagator.

   The limit of the k-transition propagator as k → infinity is then

	lim ( I - i H(n) dt(n) / ħ )^k │  =  CT(n)
	k→∞                           │ s=1

	        =  exp( -i (t0)/n  H(n)/ħ ).

   While n is a size parameter which divides the full cycle of a clock
   of n Planck clicks, k is an integer which lies in the covering group of
   the clock's cyclic group Zn, which is the real line.
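   The approach of the k-transition propagator to the exponential can be
   watched in scalar form, with a single eigenvalue E of H(n) standing in
   for the operator and ħ set to 1 (the sample values are arbitrary):

```python
# Sketch: scalar shadow of the k-transition propagator limit.
# With E a sample eigenvalue of H(n) and ħ = 1, the k-fold product
# (1 - i (1/k) E s)^k approaches exp( -i E s ) as k grows.

import cmath

E, s = 1.3, 2.0
exact = cmath.exp(-1j * E * s)

for k in (10, 100, 10000):
    approx = (1 - 1j * E * s / k) ** k
    # modulus exceeds 1 for finite k: the finite propagator is not unitary
    assert abs(approx) > 1.0
    # the error shrinks roughly like 1/k
    assert abs(approx - exact) < 5.0 * (E * s) ** 2 / k
```

   The modulus check also previews a point made below: the finite
   k-transition propagator is not unitary, only its limit is.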

   Before continuing with that, we can interpret the propagator:

   First notice that the propagator, call it V, is an operator valued
   function of H(n), the supposed energy operator that motivates a
   translation in "time" (as so defined), and so is indirectly
   a function of n, as well as k, so write a propagation by
   multiplicative iteration:

	V(0, H(n))  :=  I

	V(1, H(n))   =  ( I - i H(n) dt / ħ )

	V(k, H(n))   =  ( I - i H(n) dt / ħ )^k

	             =  V^k(1, H(n))  =  V(k-1, H(n)) V(1, H(n))

	V†(k, H(n))  =  ( I + i H(n) dt / ħ )^k

   In expansion, using the binomial theorem and Stirling's asymptotic
   approximation, we isolate the factors, terms and discrete variables
   for combinatoric interpretation:

	V(k, H(n))  =  Σ  (k j) (1/k)^j ( -i s(n) H(n)/ħ )^j

   n        is the size of the clock in "Planck clicks"
   k =< n   is a total or final number of "Planck clicks" waiting time
   j =< k   is the number of transitions during the absolute "bean counting"
            number k of Planck clicks.
   (k j)    is the number of ways of selecting j unordered, but
            distinguishable objects from k of them.
   k^j      is the number of ways of filling j unordered but distinguishable
	    slots with an alphabet of k distinguishable symbols; or the
	    number of ways of labeling (with repetition) j objects with k
	    possible symbols; or, the number of ways of assigning j
	    distinguishable transitions to k Planck
	    clicks.  That k ≥ j indicates that there can be Planck clicks
	    that are somehow skipped by transitions.  This is to say that
	    while a transition of some sort must wait a Planck click, it
	    can wait more than one click before happening.
   (1/k)^j  divides out the distinguishability of the k Planck clicks
	    AND the distinguishability of the actual j transitions.

	    This lets us know abstractly that there will be a non vanishing
	    probability of "no transition" or stasis associated with any
            Planck click.

   That we sum over j from j=0 to j=k means that there are weighted
   mutually exclusive Q alternatives that are possible.  One does not have,
   however, control or knowledge over which of them may actually happen.
   Think of weighted sums of Feynman paths.

   This also means that j is limited by k, that during k Planck clicks,
   no more than k actual transitions can take place.  At k transitions
   in k Planck clicks, the clock runs (in terms of a transition rate)
   uniformly at its maximal rate.

			0  =<  j  =<  k  =<  n.

   The number (j/k) =< 1 is then a dimensionless measure of
   "the transition rate of the clock", and thus of "the rate of time"
   by which it measures its own rate of change of "pointer positions".
   Such a dimensionless rate exists and has meaning because there is
   a maximal clock rate, and has nontrivial meaning because the clock
   rate has a range that is bounded.

			0  =<  (j/k)  =<  1.

   If the number of Planck clicks for the clock is indeed n, then it
   has a period of (n tP) seconds in the Planck regime.
   No one is pretending to be able to do time measurements within the
   Planck regime: this is physical Q algebra at the boundary of the
   regime.  Then, the maximal clock rate becomes infinite as n becomes
   infinite, as the probabilities for that clock rate approach zero
   as O(1/n²); this is the clock limit of a universe that is a clock,
   as the universe, and with it the clock, becomes infinite in a strong
   operator limit.

   Note that this combinatorial interpretation of the propagator does
   not involve any approximations, and is at the level of fundamental,
   physical and quantum theoretical structure of Q discreteness.

   This stochastic, dynamical propagator V, is then clearly the exact
   operator solution to a quantum, polychotomic, random flight problem,
   and is also a localized, and generalized S-matrix.

   If V(s, H(n)) is the s-click propagator, there is a nontrivial
   operator valued function, T(n, s) that corresponds to the time told
   by the clock as a function of s, the number of clicks of a clock of
   size n, where suppressing the notation of n dependence,

	T(s)  :=  V(s) T(0) V†(s)

   In fact, since the propagator multiplies, more generally,

	T(s + s')  =  V(s') T(s) V†(s')

	           =  V(s) T(s') V†(s)

   This becomes obvious when the operator T(0), for any finite n,
   is put in its spectral representation

	T(0)  =  Σ |tk> tk <tk|

   For a single click and any n,

	T(n, 1)  =  ( I - i H(n) dt / ħ ) T(0) ( I + i H(n) dt / ħ )
	         =  T(n, 0) + (i dt/ħ) [T(n, 0), H(n)]
	                    + (dt/ħ)² H(n) T(n, 0) H(n)
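   The single-click expansion is an exact algebraic identity, checkable
   on any small matrices; a Python sketch with ħ = 1, an explicit click
   duration dt, and arbitrary Hermitean 2x2 examples of my own choosing
   for H and T:

```python
# Sketch: verify (I - i dt H) T (I + i dt H)
#               = T + i dt [T, H] + dt² H T H    (ħ = 1)
# on arbitrary 2x2 Hermitean examples; the identity is exact.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
H = [[2.0, 1 - 1j], [1 + 1j, -1.0]]    # a Hermitean "energy"
T = [[0.5, 2j], [-2j, 1.5]]            # a Hermitean "clock"
dt = 0.3

lhs = mat_mul(mat_add(I2, mat_scale(-1j * dt, H)),
              mat_mul(T, mat_add(I2, mat_scale(1j * dt, H))))

comm = mat_add(mat_mul(T, H), mat_scale(-1.0, mat_mul(H, T)))    # [T, H]
rhs = mat_add(T, mat_add(mat_scale(1j * dt, comm),
                         mat_scale(dt * dt, mat_mul(H, mat_mul(T, H)))))

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```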

   If V(m, H(n)) is the m-click propagator, and is well approximated
   for large m by exp( -i s H(n)/ħ ), we can approximate the "waited for
   local clock phase operator" by

	   T(s)  ≊  exp( -i s H(n)/ħ ) T(0) exp( +i s H(n)/ħ )

   With the unitarity of exp( -i s H(n)/ħ ), T(s) has the same spectrum
   as T(0).  The system's intrinsic clock progresses, at least structurally,
   invariantly over all its possible sequences of transformations
   through any finite number of Planck clicks.

   The (Upsilon) Υ(n)-Fourier duality then also tells us that under the
   same conditions for very large n,

	   H(s)  ≊  exp( -i s T(n)/ħ ) H(0) exp( +i s T(n)/ħ )

   It is also reasonably clear that while a unitary transformation of
   FCCR(n) by exp( -i s H(n)/ħ ) leaves FCCR(n) form invariant, a similar
   transformation by exp( -i s T(n)/ħ ) does not: H(n) commutes with G(n),
   but T(n) does not commute with G(n).

   If n is sufficiently large so that s can be treated as a continuous
   variable, one can write with fair legality, by taking derivatives:

   i ħ (d/ds) T(s)  ≊  exp( -i s H(n)/ħ ) [H(0), T(0)] exp( +i s H(n)/ħ )
                    =   [H(0), T(s)]

   i ħ (d/ds) H(s)  ≊  exp( -i s T(n)/ħ ) [T(0), H(0)] exp( +i s T(n)/ħ )
                    =   [T(0), H(s)]

   These are Lax equations the solutions to which we already know,
   and which can also be easily solved anew.

   Now, consider the finite propagator amplitudes:

   <ti| V(m, H(n)) |tj>  :=  <ti| ( I - i H(n) dt / ħ )^m |tj>,   dt := s/m

	=   Σ  (m choose k) (1/m)^k ( -i s/ħ )^k <ti| H^k(n) |tj>
	    k

   where <ti| H^k(n) |tj> is the amplitude for k nonvacuous transitions
   leading from ti to tj, which is obviously independent of m, the
   progression of Planck clicks.  These amplitudes of powers of H(n) will
   be important elements to be able to calculate.
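The binomial expansion can be sanity-checked in the scalar case, where H(n) is replaced by a number; the expansion is exact for every m, and the m → ∞ limit recovers the exponential, restoring unitarity.  A sketch (x is an arbitrary illustrative value):

```python
import cmath
from math import comb

def v(m, x):
    # scalar analogue of V(m, H):  (1 - i x/m)^m
    return (1 - 1j * x / m) ** m

def v_series(m, x):
    # binomial expansion  Σ_k  (m choose k) (1/m)^k (-i x)^k
    return sum(comb(m, k) * (1 / m) ** k * (-1j * x) ** k
               for k in range(m + 1))

x      = 1.3
exact  = v(12, x)
series = v_series(12, x)         # equals exact, term by term
limit  = v(10 ** 6, x)           # approaches exp(-i x) for large m
```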

   Note that this propagator V(m, H(n)) is clearly not unitary for finite m,
   though it becomes so as m becomes infinite; however, for any finite k,
   its Cayley analog is unitary:

	( I - i H(n) s / ħ )^k
	----------------------
	( I + i H(n) s / ħ )^k

   Unitarity of the Cayley operator is obvious since its inverse equals
   its Hermitean conjugate.

   When it comes to making the messy calculations for relatively
   small n and m, this unitarized Cayley propagator will, I suspect,
   be the best tool for actually making these calculations for a
   necessarily stochastic dynamics.

Limits of The Discrete Clock Phase

   Now, back to the discrete time variable that becomes dense as
   n → ∞.

   The integers wind countably infinitely on Zn.  In the
   previous exp(), there is a contribution from every possible winding
   that results in a 1-click transition.  From the viewpoint of what is
   told by the clock, however, the windings cannot be distinguished,
   i.e., clock phase time

	   (2 π)k/n  is the same as

	   (2 π)(k+mn)/n  =  (2 π)(k/n) + m(2 π)

   with m, an arbitrary integer.

   This is a bit peculiar and slightly uncomfortable since we have a
   situation necessarily real, but involving a *potentially* countable
   infinity that arises in a finitistic Q theory from the necessity of
   counting all *possible* potential realities: the number of Planck
   clicks waited is unknown by the clock; the clock can only "know"
   the time that it in fact displays.  That is to say that the waiting
   time is not a given or derivable observable associated with the clock.

   In particular, the clock is incapable of distinguishing values
   of 'm' in the above.  This might make sense as model interpretation
   if the local n-click clock were considered as being connected to
   or embedded into either a large (infinite) clock as a heat reservoir
   in thermodynamics, or similarly into a reservoir (infinite) of
   a collection of clocks that effectively make the single clock.

   The passage from finite k to unbounded k in the k-transition
   propagator is already part of a limit to be taken in the approach
   to QM, and is the passage to a covering group as spoken of above.

   More fundamentally, this potential infinity should not exist; the
   basic algebraic statement should therefore not really involve the
   unitary cyclic operator as a time propagator, but an operator that
   is finitely constructed from its generator.

   Finally, regarding the FCCR propagator and the QM propagator in the
   case of a time independent Hamiltonian, for the unitary operator

	   u  :=  exp( -i (H/ħ) t0 )

   and an ad hoc continuous variable 's' as exponent,

	   u^s  :=  exp( -i (H/ħ) t0 s )

   the identification

	   t  :=  t0 s

   regains the standard one parameter unitary semigroup of QM

	   u(t)  :=  exp( -i (H/ħ) t )
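The reparametrization is immediate numerically; a scalar sketch (ħ = 1, with a sample energy eigenvalue E and fundamental unit t0, both purely illustrative values) showing u^s = u(t0 s) and the one parameter composition law:

```python
import cmath

hbar = 1.0
E    = 1.7        # sample energy eigenvalue (illustrative assumption)
t0   = 0.25       # fundamental time unit (illustrative assumption)

u = cmath.exp(-1j * (E / hbar) * t0)        # u := exp(-i (E/ħ) t0)

def u_pow(s):
    # u^s = exp(-i (E/ħ) t0 s), with s an ad hoc continuous exponent
    return cmath.exp(-1j * (E / hbar) * t0 * s)

def u_t(t):
    # u(t) = exp(-i (E/ħ) t), with t := t0 s
    return cmath.exp(-1j * (E / hbar) * t)

s = 3.8
```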

   Do notice that this particular limit of FCCR(n) → CCR is a
   passage to QM which is not relativistic, and consistently, the
   fundamental velocity becomes infinite, just as the Inönü-Wigner
   [Inönü 1953]
   contraction applied to the Lorentz group reclaims the Galilean
   invariance of Newtonian physics.  This limit cannot be
   a cosmological model that conforms to what we currently understand
   about the universe.  On the other hand see [Section XVII],
   where it seems to be proven that a relativistic limit can exist.
   Unfortunately, the details of that limit are still elusive.

Normalizing Transition Frequencies

   Returning to the transition frequencies, when the normalization is done,
   and as is shown below, two structural properties appear.  First, there
   is a nonvanishing probability for clock stasis, which approaches 3/4
   with a deviation vanishing as (1/n) for unbounded n, and second, while
   for n = 2, 3, the clock is fairly chaotic, for n > 3, the transitions

	|tk>  →  |tk+1> and |tk>  →  |tk-1>

   become very rapidly dominant for increasing n, showing that

	"large (massless?) oscillator clocks tend to run uniformly".

	NB: In Feynman's path integral construction for the propagator
	for a massive relativistic Dirac particle in a (1,1) Minkowski
	space, the massive particle is assumed to travel at the speed
	of light in free space following weighted Zitterbewegung paths.

This distinguishes n=4 as a transition point in local quantum temporal behavior, where order in the transitions of the fundamental clock eigenprocesses emerges from their "q-stochastic chaos".

             Suppose one asks, after a given number of unavoidable and
             indivisible "Planck clicks", what is the elapsed clock time?
	     [That is, zero "net" elapsed time.]

The fact that the +1 and -1 amplitudes are equal and dominant leads to a behavioral picture similar to a discrete random flight problem with a periodic boundary condition. The difference is that the standard random flight problems do not allow for a probability of stasis. Accounting for this should not be difficult, but I have not gotten to it in detail; it involves starting with a trinomial distribution rather than a binomial one.

The immediate idea that expectation values <tk| N^m |tk> should somehow be immediately interpreted does not really work, since in the normalized transition distribution this gives the probability of zero elapsed time (stasis) after m Planck clicks. One must look instead at the "off diagonal" transition amplitudes. This ascendancy of transition amplitudes over expectation values is completely typical with FCCR, and is the key to maintaining a basic Q probability interpretation.

The almost ironic aspect of this is that reliance on transition amplitudes and probabilities, and on process rather than on expectation values and state, becomes impossible once the limit of unbounded n is taken. In particular, the transition amplitudes above do not exist in standard QM. Physically, this actually makes good sense.

The clock's t-eigenprocesses also happen to correspond to the pointer positions of Wigner's "best clock" of QM, in an old paper in RMP where he determines the properties imposed on clocks by quantum mechanics and general relativity. Those clocks, he determines, should have a mass of approximately the Planck mass. [Wigner 1957] Below this rather extravagant mass, Wigner concludes, a clock will not be very good. Nevertheless, these not very good Wigner clocks are exactly the ones we are examining here. Wigner was very much intent on the strict operation of standard QM, while I am interested in where it necessarily fails, how that failure happens, and how a correction to QM changes things.

Another possible way of looking at Wigner's conclusion is that, rather simplistically, below the level of the Planck mass, classical notions of time, which happen to be indigenous to QM, fail. [Interpreting The Planck Mass]

The stochastic dynamics by transition amplitudes involves first computing them, for k ≠ j and for k = j,

	                             n-1
	<tk| N^m(n) |tj>  =  (1/n)   Σ   a^m exp( i (2π/n) (j-k) a )
	                             a=0

   and the normalizing factors for the then calculated transition
   frequencies |<tk| N^m(n) |tj>|²,

	 Tr( N^(2m) )  =    Σ  k^(2m)

   both of which are very closely related to Bernoulli numbers and the
   Bernoulli polynomials φk(z), indexed by k, and which can be defined
   by the expansion of a generating function:

	  exp( tz ) - 1       ∞
	t --------------  =   Σ   φk(z) t^k / k!
	  exp( z ) - 1       k=0

   E.g., let

	             n-1                  exp( i n z ) - 1
	fn(z)   =   Σ  exp( i k z )  =  ----------------
	             k=0                   exp( i z ) - 1

   summed as a standard geometric series.
   Taking d/dz gives the sum for the first set of transition amplitudes
   above when it is specialized to z = ((2 π)/n) (k-j); also

	Tr( N² )  =    Σ  k²  =   n(n-1)(2n-1)/6

   is the functional part of the normalization factor for the first order
   transition frequencies:

	|<tk| N(n) |tj>|²  =

	(1/2)² ( 1 + cot²( (π/n) (k-j) ) )  for k ≠ j

	[(n-1)/2]²                          for k = j
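These matrix elements follow from a finite geometric-type sum, and can be checked directly; a sketch in Python comparing the DFT-sum definition with the closed cot² and (n-1)/2 forms above:

```python
import cmath, math

def amp(n, k, j, m=1):
    # <tk| N^m(n) |tj>  =  (1/n) Σ_a a^m exp( i (2π/n)(j-k) a ),  a = 0..n-1
    return sum(a ** m * cmath.exp(1j * 2 * math.pi / n * (j - k) * a)
               for a in range(n)) / n

n, k, j = 7, 2, 5
off  = abs(amp(n, k, j)) ** 2                                  # k ≠ j case
form = 0.25 * (1 + 1 / math.tan(math.pi * (k - j) / n) ** 2)   # (1/2)²(1 + cot²)
diag = amp(5, 3, 3).real                                       # k = j: (n-1)/2 = 2
```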

   So, the single click transition probabilities are then, from
   [Section VIII, theorem 8.21],

	Pr( N(n): tk → tj )  :=

	 |<tk| N(n) |tj>|²
	--------------------
	 (1/n) Tr( N²(n) )

	    (1/2)² csc²( (π/n) (k-j) )
	=  ----------------------------      for k ≠ j
	         (n-1)(2n-1)/6

	     [(n-1)/2]²         3 (n-1)
	=  --------------  =  -----------  →  (3/4)  asymptotically,
	    (n-1)(2n-1)/6      2 (2n-1)
	                                     for k = j  [system clock stasis]

	So, with t0 an arbitrarily chosen initial fiducial clock time,
	the 1-click transition probabilities are,

	Pr( N(n): t0 → tk )  =

	 (1/2)² csc²( (π/n) k )
	------------------------      for k ≠ 0
	     (n-1)(2n-1)/6

	       3 (n-1)
	     -----------              for k = 0  [system clock stasis]
	      2 (2n-1)

   This is a fundamental result.

   NB: this is also a bit of a misrepresentation.  The eigenstates
   of the fundamental symbols are not only not strictly observable or
   preparable, they are, it would seem, somehow unreal.  They are
   algebraic constructs that lie beneath what can actually
   be observed.  Everything has a limit in the precision with which it
   can be observed.  As a model, these transition probabilities are
   what should give rise to the continuous quantum theory.

   FCCR's physical states are a normalized linear combination of 2 or
   more eigenstates.

   Symmetrically, with k taken mod n,

	Pr( N(n): t0 → t+k )  =  Pr( N(n): t0 → t-k )

	                      =  Pr( N(n): t0 → tn-k )

   so assume the + and - signs in the following lists of transition
   probabilities, when k ≠ 0.

   Let us look at the low order n cases specifically, and numerically.

   For n = 2,

	Pr( N(2): t0 → t0 )  = 1/2

	Pr( N(2): t0 → t1 )  = 1/2

   For n = 3,

	Pr( N(3): t0 → t0 )  = 3/5

	Pr( N(3): t0 → t1 )  = (3/20) csc²(π/3)   =  1/5
	Pr( N(3): t0 → t2 )  = (3/20) csc²(2π/3)  =  1/5

   For n = 4,

	Pr( N(4): t0 → t0 )  = 9/14

	Pr( N(4): t0 → t1 )  = (1/14) csc²(π/4)  = 2/14
	Pr( N(4): t0 → t2 )  = (1/14) csc²(2π/4) = 1/14
	Pr( N(4): t0 → t3 )  = (1/14) csc²(3π/4) = 2/14

   [It may or may not be significant that for n > 4 general
    rationality of the transition probabilities seems to fail.]

   For n = 5,

	Pr( N(5): t0 → t0 )  = (16/24)  =  2/3
	                  ≊  0.6666666666666

	Pr( N(5): t0 → t1 )  =  (1/24) csc²(π/5)
	                  ≊  0.1260113295833
	Pr( N(5): t0 → t2 )  =  (1/24) csc²(2π/5)
	                  ≊  0.04606553370834
	Pr( N(5): t0 → t3 )  =  (1/24) csc²(3π/5)
	                  =  (1/24) csc²(2π/5)
	                  ≊  0.04606553370834
	Pr( N(5): t0 → t4 )  =  (1/24) csc²(4π/5)
	                  =  (1/24) csc²(π/5)
	                  ≊  0.1260113295833

   For n=2,3,4, the clock is chaotic and can never run very smoothly; but
   n=4 is the breakpoint above which a smooth behavior begins to emerge
   that stochastically defines an intuitive clocklike behavior.

   Numerically, for n=6,...,12, the transition probabilities can be calculated 
   using Maxima, or a similar calculational system as:

   For n = 6,

	Pr( N(6): t0 → t0 )  =  15/22
                                 =  0.68181818181818

	Pr( N(6): t0 → t1 )  =  0.10909090909091
	Pr( N(6): t0 → t2 )  =  0.03636363636364
	Pr( N(6): t0 → t3 )  =  0.02727272727273
	Pr( N(6): t0 → t4 )  =  0.03636363636364
	Pr( N(6): t0 → t5 )  =  0.10909090909091

   For n = 7,

	Pr( N(7): t0 → t0 )  =  9/13
                                 =  0.69230769230769

	Pr( N(7): t0 → t1 )  =  0.10215271366198
	Pr( N(7): t0 → t2 )  =  0.03146084242261
	Pr( N(7): t0 → t3 )  =  0.02023259776157
	Pr( N(7): t0 → t4 )  =  0.02023259776157
	Pr( N(7): t0 → t5 )  =  0.03146084242261
	Pr( N(7): t0 → t6 )  =  0.10215271366198

   For n = 8,

	Pr( N(8): t0 → t0 )  =  7/10
                                 =  0.70000000000000

	Pr( N(8): t0 → t1 )  =  0.09754895892495
	Pr( N(8): t0 → t2 )  =  0.02857142857143
	Pr( N(8): t0 → t3 )  =  0.01673675536077
	Pr( N(8): t0 → t4 )  =  0.01428571428571
	Pr( N(8): t0 → t5 )  =  0.01673675536077
	Pr( N(8): t0 → t6 )  =  0.02857142857143
	Pr( N(8): t0 → t7 )  =  0.09754895892495

   For n = 9,

	Pr( N(9): t0 → t0 )  =  12/17
                                 =  0.70588235294118

	Pr( N(9): t0 → t1 )  =  0.0942863842325
	Pr( N(9): t0 → t2 )  =  0.0266942274867
	Pr( N(9): t0 → t3 )  =  0.01470588235294
	Pr( N(9): t0 → t4 )  =  0.01137232945727
	Pr( N(9): t0 → t5 )  =  0.01137232945727
	Pr( N(9): t0 → t6 )  =  0.01470588235294
	Pr( N(9): t0 → t7 )  =  0.0266942274867
	Pr( N(9): t0 → t8 )  =  0.0942863842325

   For n = 10,

	Pr( N(10): t0 → t0 )  = 27/38
                                  =  0.71052631578947

	Pr( N(10): t0 → t1 )  =  0.09186084171052
	Pr( N(10): t0 → t2 )  =  0.02538971220175
	Pr( N(10): t0 → t3 )  =  0.01340231618421
	Pr( N(10): t0 → t4 )  =  0.00969800709649
	Pr( N(10): t0 → t5 )  =  0.00877192982456
	Pr( N(10): t0 → t6 )  =  0.00969800709649
	Pr( N(10): t0 → t7 )  =  0.01340231618421
	Pr( N(10): t0 → t8 )  =  0.02538971220175
	Pr( N(10): t0 → t9 )  =  0.09186084171052

   For n = 11,

	Pr( N(11): t0 → t0 )   =  5/7
                                   =   0.71428571428571

	Pr( N(11): t0 → t1 )   =   0.08999075406524
	Pr( N(11): t0 → t2 )   =   0.02443736086873
	Pr( N(11): t0 → t3 )   =   0.0125059342723
	Pr( N(11): t0 → t4 )   =   0.00863257795214
	Pr( N(11): t0 → t5 )   =   0.00729051569874
	Pr( N(11): t0 → t6 )   =   0.00729051569874
	Pr( N(11): t0 → t7 )   =   0.00863257795214
	Pr( N(11): t0 → t8 )   =   0.0125059342723
	Pr( N(11): t0 → t9 )   =   0.02443736086873
	Pr( N(11): t0 → t10 )  =   0.08999075406524

   For n = 12,

	Pr( N(12): t0 → t0 )   =  33/46
                                   =   0.71739130434783

	Pr( N(12): t0 → t1 )   =   0.08850713377634
	Pr( N(12): t0 → t2 )   =   0.02371541501976
	Pr( N(12): t0 → t3 )   =   0.01185770750988
	Pr( N(12): t0 → t4 )   =   0.00790513833992
	Pr( N(12): t0 → t5 )   =   0.00635452630271
	Pr( N(12): t0 → t6 )   =   0.00592885375494
	Pr( N(12): t0 → t7 )   =   0.00635452630271
	Pr( N(12): t0 → t8 )   =   0.00790513833992
	Pr( N(12): t0 → t9 )   =   0.01185770750988
	Pr( N(12): t0 → t10 )  =   0.02371541501976
	Pr( N(12): t0 → t11 )  =   0.08850713377634
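The tables above follow directly from the closed form; a short Python sketch (in place of Maxima) that reproduces them and confirms the probability normalization:

```python
from math import pi, sin

def transition_probs(n):
    # Pr( N(n): t0 -> tk ):
    #   k = 0 (stasis):  3(n-1)/(2(2n-1))
    #   k ≠ 0:           (1/4) csc²(π k / n) / [ (n-1)(2n-1)/6 ]
    norm = (n - 1) * (2 * n - 1) / 6
    p = [3 * (n - 1) / (2 * (2 * n - 1))]
    p += [0.25 / sin(pi * k / n) ** 2 / norm for k in range(1, n)]
    return p

for n in range(2, 13):
    print(n, [round(x, 11) for x in transition_probs(n)])
```

The stasis column and the symmetric k, n-k pairs agree with the tabulated values.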

              Transition Probabilities For Asymptotically Large n

   For very large n, and |k| << n,

	Pr( N(n): t0 → t0 )  =   3/4

	Pr( N(n): t0 → t±k )  =  (6/(8π²)) (1/k²)

   so genuine transitions are dominated by the transitions k = ±1,

	Pr( N(n): t0 → t±1 )  =  (3/(4π²))

   which in turn are actually dominated by the probability of stasis.
   The probability deficit for "all other events" is about
   1/4 - 3/(2π²)  ≈  0.098, i.e., roughly ten percent.
   Not coincidentally, from the standard one sided infinite summation

         Σ 1/k²  =  π²/6

   and its double sided companion, with k ≠ 0,

         Σ 1/k²  =  π²/3

   we have, for k ≠ 0,

         Σ (6/(8π²)) (1/k²)  =  1/4

   and the sum, in asymptopia of all "single click" transition
   asymptotic probabilities, including k = 0, the probability
   normalization condition,

	Σ Pr( N(n): t0 → tk )  =  1

   holds exactly, even using their asymptotically approximate values,
   implying that the assertion of the ±1 transition dominance
   was a good one.  Note that without a nonvanishing probability
   of stasis, this clock would itself be structurally problematic
   in its consistent "speed", in the limit of unbounded n.
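The asymptotic values can be verified against the exact formulas at large n; a sketch, checking stasis → 3/4, the ±1 dominance → 3/(4π²), and the 1/k² tail:

```python
from math import pi, sin

def stasis(n):
    # Pr( N(n): t0 -> t0 )  =  3(n-1)/(2(2n-1))
    return 3 * (n - 1) / (2 * (2 * n - 1))

def hop(n, k):
    # Pr( N(n): t0 -> t±k ),  k ≠ 0
    return 0.25 / sin(pi * k / n) ** 2 / ((n - 1) * (2 * n - 1) / 6)

n  = 10 ** 6
a0 = stasis(n)            # tends to 3/4
a1 = hop(n, 1)            # tends to 3/(4π²)
a5 = hop(n, 5)            # tends to (6/(8π²)) / 5²
```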

   A clock whose existence avoids an infinite regression of fiduciary
   clocks must have its own speed self-defined.
   [Origins of the Species of Time]

   This "clock aspect" of the discrete quantum oscillator will tend
   to run smoothly in intuitive clocklike fashion for large n in that
   the two dominant behaviors are stasis, and the transitions

		|tk> → |tk±1>

   There is, of course, a nonvanishing variance to the distribution of
   transitions.  Further elaboration and discussion of special cases
   for small values of n are available.

                 1-click Root Mean Square Expectation Values

   Using these probability distributions, we can calculate an expected
   time told by the clock, understanding that its essential toroidal
   structure means that between any two "pointer positions" there are
   always 2 possible, distinct connecting transition paths.

   If we try to write an expectation value for the clock time told after
   one click,

		<t>  :=  Σ t±k Pr( N(n): t0 → t±k )

   symmetry will conspire to make

		              <t>  =  0

   which is not the useful number one would want.  This is due to
   the fundamental time reversibility, as it exists in QM and all
   the other fundamental equations of physics (do keep in mind the
   CPT invariance of QFT).  In the context of a quantum theory of any
   persuasion, this seems to imply that the past is just as
   indeterminate as the future, and that the greatest determinacy
   can be found in a "now", a concept that is physical nonsense when
   the theory in question is fundamentally based on continua.

   Instead of defining the expectation value in the usual QM way,
   we do what is customarily done in such cases: define and compute
   the root mean square as an expectation value.

		             sqrt( <t²> )

   We can compute a few of these single click expectation values,
   numerically, using Maxima, for a few low values of n:

	 n           sqrt( <t²> )

	 2            1.000000000000000
	 3            1.414213562373095
	 4            1.851640199545103
	 5            2.301769405696403
	 6            2.760105399832008
	 7            3.224230072370140
	 8            3.692675506984599
	 9            4.164478177524289
	10            4.638969369008194
	11            5.115664973553442
	12            5.594202696976605

   What do these numbers represent?  They represent an expected
   clock pointer displacement from 0, for a finite SHO clock with n
   pointer positions (at least for n ≤ 12) after 1
   fundamental click, when the clock is a "good" clock with
   equidistributed energy process probabilities.

   At least for these small values of n, sqrt( <t²> ) is just
   a bit short of n/2.  For very large n, I conjecture that sqrt( <t²> )
   is more like n/3, if not exactly that.  Currently, I have no proof
   of this.

			Clock States & Entropy

   If, as above, we know and can discuss clock pointer positions at all,
   then from the uncertainty relation, or simply by inspection of the
   Υ(n) (Upsilon) Fourier transform, energetically, the clock exists in
   any one of its pointer positions as a state of equidistributed
   energy processes, with specific phases.  While, of course, systems do
   not always exist in such convenient configurations, these are the
   best clock states, even in QM as mentioned before; and since the
   probability distribution of energy is uniform, these clock states
   happen to be fairly well expected as commonplace,
   being states of maximal entropy and minimal information
   about the distribution of energy - assuming that you are interested
   in the "clock nature", i.e., the self dynamics of the system.

   The concept of "dynamics" has already made a leap to a higher level
   of generality: the quantum states which have now become processes
   cannot be fixed at a point of time, and so the idea that these
   processes evolve "smoothly in time" becomes an invalid idea.  In fact
   their evolutions are stochastic processes, generally, complex
   weighted sums of stochastic eigenprocesses.

   There may be a bit of a fudge in this since these clocks are already
   acknowledged as being open systems; on the other hand, this entropy
   is not your standard thermodynamic entropy, though it has the same
   mathematical form and properties.  This entropic form will not
   necessarily be computable, and will, most generally, be divergent in
   the limiting case of QM, but not here in finite dimensional spaces.

   The expected energy in any spanning set of clock eigenprocesses |tk>
   (for all k) is proportional to

	(1/n) Tr( N(n) )  =  (1/n) Tr( N(n) + (1/2) G(n) )

	                  =  (1/n) Tr( H(n) )

	                  =  (n-1)/2

   if one accepts either N(n) or H(n) = N(n) + G(n)/2 to represent the
   energy operator (it does not matter which, since Tr( G(n) ) = 0).
   Remember that in QM this calculation cannot be performed: there,
   N is an unbounded operator, its trace is divergent, and so is this
   computed expectation value.

   This probabilistic entropy of the clock state eigenbasis of these
   conceptually isolated n-state clocks, computed in the energy
   eigenbasis, is proportional to

	- n (1/n) ln( 1/n )  =  ln( n ),

   which is also divergent in the limit of unbounded n.

   Notice, this entropy is neither a classical entropy of thermodynamics,
   nor the entropy associated with classical or quantum statistical mechanics;
   it is rather the negative of a Shannon information functional defined
   on an arbitrary discrete probability distribution, and connected with an
   idea of an FCCR quantum microcanonical ensemble.  The value of the
   functional depends not only on the process, but on the basis in which
   the process, and the operators to which physical variables are mapped,
   are represented; it is not generally invariant under the group of
   basis transformations; it is calculated below the level of quantum
   statistical mechanics, and within the level of the statistics of FCCR
   quantum mechanics.  Like the calculations of transition amplitudes,
   this cannot be done consistently and meaningfully in the context of
   standard QM based on the Heisenberg algebra, because the mathematical
   expressions of such an entropy diverge, or fail to exist in any
   meaningful way, owing to the necessary appearance of unbounded
   operators with CCR.  This is a new and meaningful concept, a variable
   that can be calculated in the context of FCCR.

   The entropy associated with the probability distribution for multiple
   click energetically induced transitions is clearly much more complicated.

   With p(n, k) := Pr( N(n): t0 → tk ), this entropy is proportional to

	S(n)  =  - Σ  p(n, k) ln p(n, k)
	           k

	      =  - p(n, 0) ln p(n, 0)  -   Σ   p(n, k) ln p(n, k)
	                                  k≠0

   Unsurprisingly, this expression also diverges in the limit of unbounded n,
   consistent with an idea of increasing entropy of an expanding
   localized universe.  This means that carrying over this construction
   of an entropic form in FCCR(n) becomes meaningless in the limit
   of unbounded n.  BUT, only large n is actually physically required.

   This entropy, or "negative quantum information" is a generalization
   of the standard quantum information of quantum computation,
   determined by binary "qubits", or "q-bits", except that the quantum
   system is not just irreducibly binary, as in spin (1/2), but
   irreducibly n-ary, as in spin ((n-1)/2); alternatively think,
   perhaps, of an n-slit experiment, and then go to a construction of
   Feynman paths. [Feynman 1965]

   A few numerical values of this S(n), evaluated in a t-eigenbasis
   for low values of n (just to get a feeling):

	 n            S(n)
	 2 	      0.69314718055995
	 3 	      0.95027053923323
	 4 	      1.028513764024847
	 5 	      1.06406805705981
	 6 	      1.083793457927882
	 7 	      1.096137608877039
	 8 	      1.104510677687318
	 9 	      1.110526856843848
	10 	      1.115040177894637
	11 	      1.118541341607914
	12 	      1.121330828050831
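The S(n) table can be reproduced from the single-click distribution; a sketch:

```python
from math import log, pi, sin

def transition_probs(n):
    # single-click Pr( N(n): t0 -> tk ), k = 0..n-1
    norm = (n - 1) * (2 * n - 1) / 6
    p = [3 * (n - 1) / (2 * (2 * n - 1))]
    p += [0.25 / sin(pi * k / n) ** 2 / norm for k in range(1, n)]
    return p

def S(n):
    # Shannon-form entropy  - Σ_k p(n, k) ln p(n, k)
    return -sum(p * log(p) for p in transition_probs(n))

for n in range(2, 13):
    print(n, S(n))
```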

   It is clear that for this entropy functional, its value increases
   monotonically as more clock states become available:

	 S(n) < S(n+1), for all n > 2.

   One click is the minimal possible discernible (resolvable) waiting time
   (Think, possibly, of a refractory period [De Broglie Paradox Revisited]);
   for longer waiting times, we need to account for more alternative paths
   (sequences) of processes.

			FCCR(n) Clock Dynamics

   Notice that the two alleged Hamiltonians N and N + G/2 while most
   often behaving the same, have an interesting difference.

   N always has a '0' in its spectrum, and all eigenvalues are distinct.

   N + G/2 never has a zero in its spectrum; the eigenvalue of |n,n-1>
		is now (n-1)/2 instead of (n-1), and for n even it
		duplicates one of the lower eigenvalues.

   For example, for n=2,3,4,5:

   For n = 2:
   Diag[0, 1] + (1/2)Diag[1, -1]  =  Diag[1/2, 1/2]

   For n = 3:
   Diag[0, 1, 2] + (1/2)Diag[1, 1, -2]  =  Diag[1/2, 3/2, 1]

   For n = 4:
   Diag[0, 1, 2, 3] + (1/2)Diag[1, 1, 1, -3]  =

   		Diag[1/2, 3/2, 5/2, 3/2]

   For n = 5:
   Diag[0, 1, 2, 3, 4] + (1/2)Diag[1, 1, 1, 1, -4]  =

   		Diag[1/2, 3/2, 5/2, 7/2, 2]
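These spectra can be generated for any n; a sketch in exact rational arithmetic, assuming the traceless form G(n) = Diag[1, ..., 1, -(n-1)] inferred from the low-n examples above:

```python
from fractions import Fraction

def spectrum(n):
    # eigenvalues of N + G/2, with N = Diag[0, ..., n-1]
    # and the traceless G(n) = Diag[1, ..., 1, -(n-1)]  (assumed form)
    g = [1] * (n - 1) + [-(n - 1)]
    return [Fraction(k) + Fraction(g[k], 2) for k in range(n)]

for n in (2, 3, 4, 5, 6):
    print(n, spectrum(n), "distinct:", len(set(spectrum(n))))
```

For n even the top eigenvalue (n-1)/2 collides with the one at k = (n-2)/2; for n odd all n eigenvalues are distinct.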

   The dominant refractory time caused by the high probability
   of stasis keeps the clock's speed from becoming infinite, and
   also provides a concept of "circular causality" for the clock.

   The picture is generally that of the clock's pointer jumping
   stochastically between its allowed positions according to
   the transition probabilities already calculated.

   While no energy leaks from the clock, its existence is ensured
   as it continues its stochastic jumping.  However, if something
   could inject an energy higher than the maximal energy (n-1)/2,
   the clock would have to reflect that energy or suffer a
   catastrophic event, and at least cease to exist.

   One has a picture of a universe filled with clocks that
   blink in and out of existence with a vacuum much like that
   of a standard QFT vacuum - but that exists by virtue of
   its energy content, which is finite but large.

   Choosing N as the energy eliminates the zero point of QM,
   while choosing instead N + G/2 retrieves a zero point
   energy, but for n even leaves a pair of degenerate eigenvalues
   at k = n-1 and k = (n-2)/2.  For n large enough, we will
   never see this energy degeneracy.

   The value of n is small on the Planck scale, while it is very large
   on the scale of the elementary particles with which we are familiar.

       Some Observations and Questions Regarding Calculation of the
                        Transition Amplitudes

   It is not difficult to see, by differentiating the defining relation

	fn(z) ( exp( iz ) - 1 )  =  exp( inz ) - 1

   that

	fn'(z)  =  i n fn(z)  +  i ( n - exp( iz ) fn(z) )/( exp( iz ) - 1 )

   This gives a recursive method for calculating higher derivatives;
   differentiating once more,

	fn''(z)  =  i n fn'(z)
	           + exp( iz ) ( fn(z) - i fn'(z) )/( exp( iz ) - 1 )
	           + exp( iz ) ( n - exp( iz ) fn(z) )/( exp( iz ) - 1 )²

   Is there a better way?  Evaluate derivatives using a Cauchy integral?
   Using the Bernoulli expansion?  I do not know what is best.
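The closed-form derivatives (as derived from fn(z)(exp(iz) - 1) = exp(inz) - 1) can be checked numerically against direct term-by-term differentiation of the defining sum; a sketch:

```python
import cmath

def f(n, z):
    # f_n(z) = Σ_{k=0}^{n-1} exp(i k z)
    return sum(cmath.exp(1j * k * z) for k in range(n))

def fp_direct(n, z):
    # exact term-by-term derivative:  Σ i k exp(i k z)
    return sum(1j * k * cmath.exp(1j * k * z) for k in range(n))

def fp_closed(n, z):
    # i n f_n(z) + i ( n - e^{iz} f_n(z) ) / ( e^{iz} - 1 )
    x = cmath.exp(1j * z)
    return 1j * n * f(n, z) + 1j * (n - x * f(n, z)) / (x - 1)

def fpp_direct(n, z):
    # exact second derivative:  Σ (i k)² exp(i k z)
    return sum(-(k * k) * cmath.exp(1j * k * z) for k in range(n))

def fpp_closed(n, z):
    # i n f' + e^{iz}(f - i f')/(e^{iz} - 1) + e^{iz}(n - e^{iz} f)/(e^{iz} - 1)²
    x = cmath.exp(1j * z)
    return (1j * n * fp_closed(n, z)
            + x * (f(n, z) - 1j * fp_closed(n, z)) / (x - 1)
            + x * (n - x * f(n, z)) / (x - 1) ** 2)
```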

   Bernoulli polynomials, φk(iz), can be defined by

	  exp( i tz ) - 1       ∞
	t ----------------  =   Σ   φk(iz) t^k / k!   =  t ft(z)
	  exp( i z ) - 1       k=0

	φk(x+1) - φk(x)  =  k x^(k-1)

   where φk(iz) is an analytic continuation, by rotation of the
   complex plane, of Bernoulli polynomials.
   For a clock of dimension n,

	 Tr( N^(2m) )  =   Σ k^(2m)  =  (1/(2m+1)) φ2m+1(n)

   since φ2m+1(0) = 0 for m ≥ 1; for dimension n+1, the added term n^(2m)
   gives, using the difference relation above,

	               =  (1/(2m+1)) ( φ2m+1(n) + (2m+1) n^(2m) )

   giving expressible but perhaps difficult to compute normalizing factors.
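The Bernoulli-polynomial normalization can be checked in exact arithmetic; a sketch using the Akiyama-Tanigawa algorithm for the Bernoulli numbers (B1 = +1/2 convention) and Faulhaber's formula for Σ_{k=0}^{n-1} k^p:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    # first m+1 Bernoulli numbers, B1 = +1/2 convention (Akiyama-Tanigawa)
    A = [Fraction(0)] * (m + 1)
    B = []
    for i in range(m + 1):
        A[i] = Fraction(1, i + 1)
        for j in range(i, 0, -1):
            A[j - 1] = j * (A[j - 1] - A[j])
        B.append(A[0])
    return B

def sum_powers(n, p):
    # Σ_{k=0}^{n-1} k^p  via Faulhaber:
    #   (1/(p+1)) Σ_j C(p+1, j) B_j n^{p+1-j}  -  n^p
    B = bernoulli(p)
    s = sum(comb(p + 1, j) * B[j] * Fraction(n) ** (p + 1 - j)
            for j in range(p + 1)) / (p + 1)
    return s - Fraction(n) ** p

# Tr( N²(n) ) for n = 5:  Σ_{k=0}^{4} k²  =  n(n-1)(2n-1)/6  =  30
trN2 = sum_powers(5, 2)
```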

   Of possible analytic value is the relation,

	 Σ  k^p  =  ζ(-p) - ζ(-p, 1+n)      (sum over k = 1, ..., n)

   where ζ(s) is the Riemann zeta function, and ζ(s, a) its extension,
   the Hurwitz zeta function.
   [Hurwitz Zeta Function -- MathWorld; Riemann zeta function -- Wikipedia]

In this view point of dynamics from transition amplitudes, there is an intrinsic and absolute proper waiting time that is counted in indivisible Planck time or other units, which act much like refractory periods after which either a transition takes place, or doesn't.
[See De Broglie Paradox Revisited]

While this essentially dimensionless counting time remains absolute, a hyperbolic basis transformation (an element of the invariance group of G(n)) can make the clock *appear* to run more slowly by altering the calculated transition amplitudes. I have not yet attempted the appropriate calculations.

The unitary transformations of the maximal compact subgroup of the invariance group of G(n), of course, preserve the transition amplitudes, hence also the transition probabilities, all as a "relativistic substitute". An emergent relativistic behavior as we currently conceive of it, would have to be demonstrated in terms of transition amplitudes.

The alteration of transition amplitudes by the hyperbolic transformations of the invariance group of G(n) may seem like some kind of paradox, but since R theories have very much to do with how things *appear* to observers in inertially (in SR) or noninertially (in GR) related frames, I do not see this as a logical or interpretational obstruction, since, for every n, the invariance group of G(n) is conjugate to the full Lorentz group.

An observer, however an observer might be more precisely defined, is a thing of finite extension, presumably equipped with a clock with which the observer is stationary. The observer receives signals, and on the basis of his known theory compares and interprets those signals, according to some intelligence that might be represented by a generalized local information. This looks reasonable in SR, but in QM, the concept looks less reasonable. Cf. the entropically Battered Bride of statistical mechanics, the Maxwell Demon.

An important fact to bear in mind when local clocks become the sole arbiters and fiduciaries of time is that a usable "good" clock (a good clock being one that is not bad, and a bad clock being one that is either erratic or not linear in its successive "pointer positions", one that lacks precision) is necessarily local *and* stationary in the local frame. If you transform the frame you must perforce *also* transform the reference clock. With almost no thought, I will guess that these are necessarily transformed relatively contragrediently.

A conceptual problem regarding the nonpreferredness of frames of reference that is introduced by the big bang in GR is that there *is* a preferred frame of reference. A solution to the equations of cosmic existence fails to have the full symmetry of the equations, but this is the usual situation of a spontaneously broken symmetry that gives rise to a specific system. Consider then a dynamically broken symmetry.

Furthermore, in QFT, the fairly ready consequence that the "vacuum state" is not empty leads to the breaking of the symmetry that the vacuum state would otherwise have. If the QFT is QG, then it is difficult to see how this symmetry breaking can be either repaired through a redefinition or glossed over without understanding an underlying falsity of the general diffeomorphic invariance required by GR. A diffeomorphic invariance in the context of manifolds becomes an invariance under the group of permutations of a discrete set were it to replace the manifold. It would seem that that group would then become GL(n, C) for the change of basis in a Hilbert space here, and GL(n, C) × GL(n, C) for a C*-algebra here.

The stochastic dynamics here has similarity to a Markov process, except that it is not that the probabilities fold, but rather the amplitudes, and that makes an immediate connection with the folding of a Feynman kernel in the path integral formulation of quantum theory.
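The folding of amplitudes rather than probabilities can be made concrete in a few lines. The following is a small numerical sketch (Python with numpy, my own illustrative matrices, not part of the source formalism) contrasting Chapman-Kolmogorov folding of a stochastic matrix with the folding of a unitary kernel:

```python
import numpy as np

# Markov: a stochastic matrix folds probabilities (Chapman-Kolmogorov)
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
P2 = P @ P                        # two-step transition probabilities

# Feynman: a unitary kernel folds amplitudes instead
K = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)
K2 = K @ K                        # two-step transition amplitudes
prob2 = np.abs(K2) ** 2           # probabilities only after the amplitudes fold

# folding the one-step probabilities |K|² gives a different (wrong) answer
wrong = np.abs(K) ** 2 @ np.abs(K) ** 2
assert not np.allclose(prob2, wrong)
```

The two-step probabilities computed from folded amplitudes disagree with the folded one-step probabilities; the difference is exactly the interference that distinguishes a Feynman kernel from a Markov process.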

In [Feynman 1965] p. 24, is written, "Today, any general law that we have been able to deduce from the principle of superposition of amplitudes, such as the characteristics of angular momentum, seem to work. But the detailed interactions still elude us. This suggests that amplitudes will exist in a future theory, but their method of calculation may be strange to us." In this very assertion, the belief is implicit that standard QM is actually neither sufficient, nor necessary. Richard Feynman (1918-1988) [Wikipedia]

Answering Feynman's question with a simpler question might be: How detailed in matters of interactions and even theory can we be? Are there not limits? Realities and theory both suggest that there are such limits to both our predictions and to our measurements.

In form, the Feynman kernel, when expressed as an expansion in energy eigenfunctions, has a clear relation to the Bergman kernel relating the ground form of a Hilbert space of holomorphic functions defined on a compact Kaehlerian manifold to the Kaehlerian metric on that manifold. This looks, to me, to be an interesting place to look further, since the concepts of analyticity and holomorphy have to do with how well a local thing can be extrapolated to something far, (or interpolated to something near), and so also to do with relationships between separated things, generally. Bergman kernels, however, to my knowledge are not spoken of as defined on Kaehlerian manifolds with indefinite metrics.

So, for FCCR, in Wheeler's phrasing, there is dynamics without dynamics. A major difference between QM dynamics and the dynamics here is that the vector that now presumably is the symbol of a physical process, is not propagated smoothly and deterministically in the time of counting Planck units, but a bit jerkily and stochastically in accordance with the principle that energy is the generator of time translations. That the clock actually undergoes transitions is guaranteed by the fact that the commutator [N, t] is *not* zero. This stochastic dynamics is a further way in which the future (and the past) is indeterminate. If the system's clock is stopped, it does not undergo transitions. This *would* be the case iff the amplitudes have the form,

	<tk| N(n) |tj>  =  exp( i θk ) δkj

But this is not true, and cannot be true, since it would imply that [N, t] = 0, which is never the case here, because of the explicit Fourier relationship between N and t that is necessary for the limiting structure of QM to be regained in the final asymptopia.

This is to say that these systems, as clocks, cannot be stopped, and actually enforce their local phase progressions that eventually, and statistically give rise to the large scale illusion of a local continuum that we can call a proper Newtonian time.
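A minimal numerical sketch of this point (Python with numpy; it assumes only that the clock operator T is the unitary DFT conjugate of the number operator N, as the stated Fourier relationship suggests):

```python
import numpy as np

n = 6
w = np.exp(2j * np.pi / n)
F = np.array([[w ** (j * k) for k in range(n)] for j in range(n)]) / np.sqrt(n)
N = np.diag(np.arange(n, dtype=complex))      # number operator
T = F @ N @ F.conj().T                        # clock operator, DFT conjugate of N

# [N, T] is not zero: amplitudes of the form exp(i θk) δkj are impossible,
# so the clock cannot be stopped
comm = N @ T - T @ N
assert np.linalg.norm(comm) > 1e-9
```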

Noncommutative Geometry of Higher Dimensions

In FCCR, there is also "geometry" without geometry in the usual intuitive sense. An extension of spatial dimension can be effected by introducing the SU(2) rotation operators so that Q(n) and P(n) become the third components of vector operators which can be related by a Fourier transform. The Q-space geometry is then quantized in the sense that the coordinate operators do not commute. In an n→∞ limit, the commutators go to zero; for large n, the Qa(n) are conformally related to the algebra of SU(2) generators. The spatial geometry then, of course, looks like the geometry of angular momentum, suggesting a specific kind of "quantized spherical geometry". We actually expect a spherical cosmological geometry of some sort.

Notice that in what follows, these noncommutative geometries, because n is finite, are currently and technically called "0-dimensional"; this does not mean that they are trivial. They are called 0-dimensional in the usually defined context of noncommutative geometry because the "points" of the set underlying the geometry form a finite set of discrete points that are the spectrum of some operator in a finite dimensional C*-algebra. Topologically, such a set is disconnected and has Brouwer dimension 0. This is, of course, in the sense of classical point-set topology, not any kind of Qtopology.

Here, that set is not a set of the points of a noncommutative geometry, but a set of fundamental eigenprocesses of a quantum geometry that also happens to be noncommutative, which is a rather different concept. Noncommutative eigenprocesses then conceptually replace the spatial points of the Euclidean geometry of QM, and also replace the "events" of Minkowski space in R theories. This way, the idea of space and time having an intrinsic energy is more natural; that it is intrinsically dynamic is also more natural, as is the idea that these models of physical spaces are *not* empty.

   For such SU(2) rotated Qa(n),

        [Qa(n), Qb(n)]  ≊  i (Δ q(n)) εabc Qc(n)

   where (Δ q(n)) is the equal spacing of Qa(n) eigenvalues that
   is rapidly attained in n; similarly,

        [Pa(n), Pb(n)]  ≊  i (Δ q(n)) εabc Pc(n)
        [Sa(n), Sb(n)]  =  i εabc Sc(n)

   also hold, Pa(n) being rotated momentum operators, and Sa(n) being
   the generators of rotations.  (Δ q(n)) is of the order n^(-1/2),
   which brings the Lie algebra structure constants to zero for infinite n.
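The vanishing of the structure constants can be illustrated generically (a Python sketch of an Inönü-Wigner-type contraction of su(2); the Δq(n) ~ n^(-1/2) scaling is taken from the text, and the construction here is the standard n-dimensional spin IRREP, not the actual FCCR rotation):

```python
import numpy as np

def spin_ops(n):
    """su(2) generators Sx, Sy, Sz in the n-dimensional IRREP, spin s = (n-1)/2."""
    k = np.arange(n - 1)
    Sp = np.diag(np.sqrt((k + 1) * (n - 1 - k)), 1)   # raising operator S+
    Sx = (Sp + Sp.T) / 2
    Sy = (Sp - Sp.T) / 2j
    Sz = np.diag((n - 1) / 2 - np.arange(n))
    return Sx, Sy, Sz

for n in (4, 16, 64):
    Sx, Sy, Sz = spin_ops(n)
    dq = 1 / np.sqrt(n)                  # assumed spacing Δq(n) ~ n^(-1/2)
    Qx, Qy, Qz = dq * Sx, dq * Sy, dq * Sz
    # [Qa, Qb] = i Δq εabc Qc: the structure constants carry a factor Δq → 0
    assert np.allclose(Qx @ Qy - Qy @ Qx, 1j * dq * Qz)
```

The rescaled commutator is smaller than the rescaled operators by the factor Δq, so the algebra contracts toward a commutative one as n grows.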

   The Qa(n) & Pa(n) define a second rank "metric" tensor operator:

        [Qa(n), Pb(n)]  =  i Gab

which, of course, is not symmetric in its indices. An energy operator determining a time operator by an Υ transform could extend this to a 4-vector operator formalism. All the G operators with a≠b turn out to have extra factors of Δq(n), indicating that they will be made small relative to the a=b operators for very large n. [Section XIV, Theorem 14.2]

This behavior of the off diagonal elements of G also implies that in asymptopia, G is essentially a 2nd rank tensor operator that is symmetric in its two indices.

From this Gab, a Riemann curvature tensor operator can be constructed algebraically using the classical form for constant curvature; the general form requires a derivative w.r.t. coordinates, and this is not yet had. The commutators of the algebra quickly become fairly horrendous to compute.

Although a classical two dimensional manifold can have no intrinsic curvature, in this case the situation may be different. OPEN QUESTION: Is it different, or is there a similar result in such a context of these quantum noncommutative geometries?

If the coordinates Qa(n) are those rotated by π/2, using the su(2) generators Sa(n) of the IRREP, and one defines the total energy operator of an isotropic oscillator as

	E²(n)  :=  Σ Na²(n)
	Na(n)  :=  (1/2) (Qa²(n) + Pa²(n)) - (i/2) [Qa(n), Pa(n)]

   [NB the classical idea that different modes/kinds of energy are
    additive, is an expression of their independence; this is not
    necessarily always a physically operative assumption.]

   It becomes apparent that

	Na(n)  =  (n-1)/2 - Sa(n)
	Na²(n)  =  [(n-1)/2]² - (n-1) Sa(n) + Sa²(n)

	E²(n)  :=  n[(n-1)/2]² + Σ Sa²(n) - (n-1) Σ Sa(n)
                                 a                 a

	         =  n[(n-1)/2]² + (n² - 1)/4 - (n-1) Σ Sa(n)

	         =  (n-1)/4 [n(n-1) + (n+1)] - (n-1) Σ Sa(n)

	         =  (n-1)/4 [n² + 1] - (n-1) Σ Sa(n)

	         =  (n-1) [ (n² + 1)/4 - Σ Sa(n) ]

The operator formed by the sum of the Sa(n) can be investigated through the same theorem used in investigating Q(n), and which relates the matrix to a system of orthogonal polynomials defined by a recursion relation. I have not done this, but applying the theorem means simply expanding the appropriate determinant of the eigenvalue problem by Laplace's method, on the last row to obtain the recursion relation, and then hope that the recursion relation looks familiar :-)
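As it happens, the spectrum of the summed generator can also be read off without any determinant expansion: Sx + Sy + Sz = √3 (n̂·S) for the unit vector n̂ = (1,1,1)/√3, so it is unitarily equivalent to √3 Sz. A numerical confirmation (Python with numpy), together with the Casimir value Σ Sa² = (n²-1)/4 used in the reduction above:

```python
import numpy as np

n = 7
k = np.arange(n - 1)
Sp = np.diag(np.sqrt((k + 1) * (n - 1 - k)), 1)      # S+ in the n-dim IRREP
Sx = (Sp + Sp.T) / 2
Sy = (Sp - Sp.T) / 2j
Sz = np.diag((n - 1) / 2 - np.arange(n))
s = (n - 1) / 2

# Casimir used in the E²(n) reduction: Σ Sa² = s(s+1) I = ((n²-1)/4) I
assert np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz, ((n * n - 1) / 4) * np.eye(n))

# spectrum of Σ Sa: √3 times the Sz spectrum, by unitary equivalence
vals = np.sort(np.linalg.eigvalsh(Sx + Sy + Sz))
assert np.allclose(vals, np.sqrt(3) * np.arange(-s, s + 1))
```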

   Of course, it is not at all clear that this E²(n) should be an
   appropriate energy term, and that a factor symmetrization of

	 Σ (1/2) Gab ( Pa Pb + Qa Qb )

   might be more appropriate.

   One could also simply take a threefold direct sum of FCCR(n) so that

        [Qa(n), Qb(n)]  =  0

        [Pa(n), Pb(n)]  =  0

        [Qa(n), Pb(n)]  =  i δab G(n)

After this, an investigation of the central force problem (which I have not done) is an obvious thing to do. Similarly, for the case of rotated Qa. For any m-fold such direct sum, closure under commutation will then be on an m-fold direct sum of su(n).

If the vectors are ontological symbols, as implied, then the local Hilbert spaces are as real as classical phase spaces, but they are not good conceptual analogs since Q(n), P(n), E(n) and t(n) are all noncommuting quantum variables, and the idea of a "state" isolated from time, e.g., the "time independent energy eigenstates" of QM, is no longer a valid concept. An important thing to notice is that this situation is not nearly as extreme as the condition of "no dispersion free states" as sought algebraically by Jordan, Malcev, et al.

One pictorial analogy for the oscillator clock is that of a standing wave with a toroidal boundary condition. The nodes (clock pointer positions) are numbered and the allowed vibrational envelope represents the clock's energy state.

The clock's motion is an advance of the nodes accomplished by shifting the phases of the energy eigenstates in the superposition that is the clock state. The complete cycle through time eigenstates returns the clock, up to an overall phase factor, to its initial value. Think of the average motion of the oscillator as a winding motion on a T², where a tubular cross section is the phase space, and the major radius sweeps out the clock pointer positions. The expected value of energy (n-1)/2 for all clock states measures the minor radius, and the major circumference is measured by n Planck time intervals, so that the major radius is measured by n/(2π). The topology of the G(n)-null cone in the pseudohilbert space is of the form S^(2n-3) X T¹ X R.

That there is a local Q theory of finite extent (the spectral radii of Q(n) & P(n) are proportional to √(n)) means that a nonlinear theory like GR can be "locally quantized", even though fundamentally this cannot be fully correct; so, a sensible approximation to QG can be made related to tangent spaces or, even better, to local Qcoordinate patches, conceptually making Qatlasses that define Qmanifolds.
See Classical Geometry & Physics Redux

While SR appears in GR locally as a structure in tangent spaces, which is really "at a point" and not within the manifold, taking the Q irreducibility of space and time seriously (and by theoretical indications, one should), the "at a point" concept cannot remain valid: a smallest ST chunk of Planck size modeled on FCCR(2) must be "in the manifold", so that the neighborhoods on which fields are defined are complex locally linear spaces expressing the Q phase space of the local "kinematics" of a Q ST.

This is, surely, a most physically satisfactory situation. Occasionally, my thesis adviser Elihu Lubkin came up with a really memorable gem, and this important physical distinction between "at a point" and "in the manifold" was originally his; it has waited all these years in my mind to be made in an explicitly formal way that makes both physical and mathematical sense to me.

This stochastic evolution of process by transition amplitudes I look at as a zeroth quantization that finally provides for a genuine quantization of space and time, gives an explanation of the origin of a local & proper Newtonian time, and which restores the validity of standard quantum theory for very large n, i.e., for quantum physics well above the Planck regime. It appears also to resolve the question of the ontology of the quantum mechanical state vector: it is an ontological symbol, and not merely a mathematical artifice that holds physical information in a funny way.

An important consequence of this stochastic quantum evolution is that it wrecks, in a structural way, the common distinction made through all of physics between kinematics and dynamics - and yet revives that distinction as a natural approximation for large n.

Since this seems to have undermined and restored the foundations of all of physics, the number of directions to go, and the number of possible projects to predict things that might be reasonably contradicted by experiment is fairly enormous. The following are just a few more.

Some Questions and Projected Areas of Investigation

Though the above FCCR seems only to model one spatial dimension, there are several ways of increasing the spatial dimension, one of which involves SU(2) rotation operators, creating a closed quantized space with Qa that are the components of a vector operator. As n→∞, the [Qa, Qb] → 0. Here SU(2) actually replaces SO(3) as a geometric aspect of the model.

There is not only room here for the unitary Lie algebra structure found in particle physics, but there is the foundation for a derivation and explanation of this that would be impossible from the viewpoint of existing quantum theory. The exact same thing can be said for the Clifford-Riemann-Einstein program of geometrizing physics, and in particular, of unifying internal and spacetime extensions and symmetries.

In a rather simplistic statistical graph theory of space that I constructed and played with, the indication is that the big bang structure is that of a finite but very large dimensional simplex. Notice that the complexity of such a model is contained in its dimensional parameter, and not in a measure of physical extension that exceeds tolerable fluctuations of the Planck regime. In this sense, the Big Bang would be merely a complicated quantized point. God does not need to specify the exact initial conditions for a big bang universe.

I have thought just recently of that again with the constraint that the dually indistinguishable lines and vertices are copies of FCCR(2). This allows a Big Bang to be constructed as an uncertain condition of minimal entropy (maximal information).

The above clock process, whereby the clock of a system defines the dynamic evolution of a distinguished system, is the model for the general rule of "temporal propagation" for any "distinguishable physical ontology" (this phrase, or possibly Bell's "beable", replacing "observable"). The essence of dynamical propagation is the computation of the transition amplitudes <tk| X |tj>. In this sense, although possibly initially disturbing, the fact that nothing seems to commute with T(n) is actually reassuring. Remember that T(n) is diagonalizable, and that it could even be taken more generally as normal.

We have previously shown that motion of an SHO(n) according to dominant transition amplitudes approaches a Newtonian clock. However, in this there is a bit of implicit trickery in that we assume state transitions between the eigenstates |tk>, while in fact the SHO need not be in these states; indeed, <tk| G(n) |tk> = 0 indicates that the clock pointer position states may not even be physically realizable. For the moment say that they are. If the system is to behave like a good clock, these are the desired states to ensure that a local Newtonian-like time does arise statistically. We have seen that these pointer states are linear combinations of equidistributed energy states, i.e., states where the energy is evenly spread over all possible available energy states.

What are these |tk> states then but those of maximal entropy or minimal information regarding the probability distribution of energy. Applying an argument of the second law of thermodynamics, SHO states should naturally approach those that make good clocks, but they may not be reachable physically. So then, we can only have pretty damn good clocks. Is that so terrible?

   If we construct the states

	|tk>  :=  (1/√(2)) |tk-1> + (1/√(2)) |tk+1>
	<tk| T |tk>  =  tk

	T |tk>  ≠  tk |tk>
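A tiny numerical sketch of this construction (Python with numpy; the constructed superposition is called s here to keep it distinct from the eigenvectors |tk>, and uniform spacing of the tk is assumed):

```python
import numpy as np

n, k = 9, 4
t = np.arange(n, dtype=float)          # uniformly spaced clock eigenvalues tk
T = np.diag(t)                         # T in its own eigenbasis

s = np.zeros(n)
s[k - 1] = s[k + 1] = 1 / np.sqrt(2)   # the two-neighbor superposition

assert np.isclose(s @ T @ s, t[k])             # expectation value is tk ...
assert np.linalg.norm(T @ s - t[k] * s) > 0.5  # ... but not an eigenstate
```

With equal spacing, the average of the two neighboring eigenvalues reproduces tk exactly, while the state itself fails the eigenvalue equation, just as displayed above.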


   (entropy of Q processes)
   If we look at any process |P>, expanded in T eigenstates,

	|P>  =  Σ ak |tk>

   and assume that it can be, and has been, normalized in some sense, then
   the associated probabilities are

	pk  :=  |ak|²

   so the projection operator M(P),

	M(P)  =  |P><P|  =  Σ |tk> pk <tk|

   can be used to define the standard entropy functional

	S( P )  :=  - Σ pk ln pk  =  - Tr( M(P) ln M(P) )

   which is independent of the basis chosen.

   Can you minimize with Lagrange multipliers?

   Density matrices can be formed by real convex linear combinations of
   any number of arbitrary projection operators M(Pj)

	D  =  Σ bj M(Pj),  where Σ bj  =  1
	      j                     j

   where the bj, as is customary, can be interpreted as probabilities, and
   the entropy of D can be defined as the convexly weighted sum of partial
   entropies,

	S( D )  :=  Σ bj S( Pj )
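A minimal sketch of these entropy definitions (Python with numpy; the amplitudes and weights are hypothetical illustrative values):

```python
import numpy as np

def entropy(p):
    """S = -Σ pk ln pk of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# amplitudes ak of a process |P> in the |tk> basis (hypothetical values)
a = np.array([0.5, 0.5j, -0.5, 0.5])
p = np.abs(a) ** 2                     # pk := |ak|², here equidistributed
S_P = entropy(p)                       # = ln 4, the maximum for 4 outcomes

# density matrix entropy as a convexly weighted sum of partial entropies
b = [0.25, 0.75]                       # Σ bj = 1
S_peaked = entropy(np.array([1.0, 0.0, 0.0, 0.0]))   # a peaked process: S = 0
S_D = b[0] * S_P + b[1] * S_peaked
```

The equidistributed process attains the maximal entropy ln n, the peaked one zero, matching the earlier characterization of the clock pointer states as states of maximal entropy.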

In a classical calculation of entropy, partial entropies are often associated with different species collected within the same system. If one accepts the idea that Hilb(n) is not a mere artifice, and that it is a symbol of an ontological extension, as a Euclidean or Riemannian space is so accepted, and further, that the elements of Alg(n), the C*-algebra of linear operators, are also ontological symbols, then the transition from Hilb(n) to density matrices in Alg(n) allows that various species may be contained within the extension of Hilb(n) lifted to Alg(n). I am hoping that this interpretation of symbolisms and formalisms will finally put an end to any serious attempt to understand Q extensions, quantities and qualities in terms of classical analogs. There is a rich collection of Q structures for which no classical analogs exist.

There is no reason to assume a priori that a Q ontology manifests itself in some Q cognate, any more than there is any reason to suppose that all the constructs of statistical mechanics should be mirrored in classical thermodynamics. The existence of particle spin and other internal particle properties should already be a convincing counterexample to any such assertion.

The time parameter used in physical theory, regardless of its particular properties in any given theoretical context, is derived from an intuitive cognitive construct; the notion of space is also a cognitive construct of the brain coupled with an essentially classical perceptual apparatus that informs consciousness. The constructs are made below the level of conscious intent, and for that reason we attribute a fundamental ontology to space and time far in excess of what is reasonable. In light of the current stage of physical theory, it makes perfect sense to doubt the applicability of constraints of perceptions to a fundamental ontology, and instead view the intuitions of space and time as what they are: cognitively constructed illusions and simplifications that have developed biologically because they were useful for survival.

Everything Q begins with finite dimensional noncommutative algebras (or so it would seem), and apparently unitary (Hermitean) Lie algebras, which happen, automatically, to have symplectic structure, a structure at the foundation of the Hamiltonian formulation of classical mechanics, that becomes in QM a complex structure. FCCR is possessed of both symplectic and complex structure, allowing that some Q structures will give rise to classical structures. Unitary algebra is associated with closed systems; pseudounitary algebra is associated with open systems.

Is there a way of specifying and classifying all possible physical particle species within any Alg(n)? The method should be one of rule outs performed in the general sense of quantum theory: what is not forbidden in principle, is compulsory. One of the properties of standard quantum theory is that a kinematical possibility always has a nonvanishing probability. If standard Q theory is somehow absolutely true, then it should be the arbiter of what is physically possible, and what is physically impossible; it does not appear that this is true, and so one wonders about its terminal absoluteness.

Since we don't know all that is physically possible, and are not likely to, the dictum in QM that all kinematically possible states must be accounted for with nonvanishing transition amplitudes is impossible to fulfill.

In the Planck regime of the big bang, which should probably be considered an "initial process", one would (this one anyhow) consider the initial process to be populated with "massless things", typically neutrinos, photons, gravitons and possibly those of other spins, leaving a gauge invariance later broken by massive condensates. Since massless fields have only two gauge invariant degrees of spin freedom regardless of the spin value, a uniform FCCR(2) algebraic structure of vertices and lines throughout such a primordial Q crystal might actually be expected.

Relationships with Penrose's spin manifolds and Toda field theories are at least pictorially clear, but could use elucidation.

I have also considered slightly, these finite dimensional FCCR(n) theories using algebraically extended Galois fields with complex and square root structure, and constructed exact representations of CCR. [Appendix J] The not insuperable interpretational problem with these is the implicitly toroidal topological nature of Galois fields, and the lack of well ordering of calculated transition probabilities that would presumably be elements of this finite field. Though, if the degree of the field is large enough, a concept of "local ordering" can be defined.

I believe that through the algebraic FCCR(n) portal, the appropriately condensed principles of GR can be stated within a plexus of local algebras. [Geroch 1972] If I am right, that could be the elusive fundamental theory of QG that includes and unifies everything geometrical and energetic that one would want included and unified.

The specifics of special and general "local coordinate transformations" (as a gauge theory?) are not yet clear, beyond a general local GL(n, C) group of local transformations of basis. The concept should arise from a plexus of FCCR's that describes not only a "Qmanifold", but also the species of creatures that live in it.

There remains the working out of notions of connections in a plexus of FCCRs, and of a notion of "field". A quantized field, it seems to me, can be defined through operator valued functions defined on a basis of the pseudohilbert spaces. Connections in a plexus of Hilbert spaces that presumably form a "Qmanifold" are more complicated.

Is it correct to parse a general Qmanifold in terms of a fixed plexus of Hilbert spaces? If there is low energy density and therefore low curvature, a Hilb(n) for large n will cover a local patch of the quantized ST, but, one can also consider various subcoverings .... However, for any given energy distribution, there should be a plexus of Hilbk(nk) where the nk are maximal; I say this sticking to the idea that high energy densities means high curvatures, and therefore a smaller patch within which linearity is valid; a maximal energy density and greatest curvature should then be associated with n=2. The reverse association may not hold.

The set theoretic aspect (a kind of dynamic quantum oscillator set theory) appears to be expressible through complex Clifford algebras of the subspaces of the Cn carrier space underlying FCCR(n) - thus, a plexus of Clifford algebras with possibly shared subspaces. I see no convincing a priori physical argument to restrict a natural direct product of Qpoints represented by FCCR(2) to the completely antisymmetric Clifford algebra within the obvious tensor algebra, excepting the idea that Fermionic behavior of fundamental ST entities provides for the apparent stability of ST extension. That there are pseudocondensates of ST corresponding to the erroneous singularities of GR might be seen through a mechanism similar to Cooper pairing in superconductivity. If one believes in a spontaneously broken supersymmetry and infers a prior (on a cosmological time scale) supersymmetry, the enigma remains as to how supersymmetry emerges from a general permutational symmetry.

Currentless EMT can be expressed in purely topological form on simplicial complexes where the exterior derivative is the boundary operator; the conservation law ∂μ Fμν = 0 then becomes Poincaré's lemma (the boundary of a boundary vanishes). EMT is also abstractly expressible in the dual space (fully antisymmetric forms) of a Clifford algebra. I have not yet looked at electroweak theory in this context.

It would appear that one may need a kind of finite algebraic differential calculus where one may take a variational derivative (difference actually) of one operator w.r.t. another. My thoughts and work on that idea of a "spectral derivative" (a working phrase of reference) are another paper of uncompleted work in itself.

Approximate Dynamical Algebra

   Consider the form of FCCR:

	exp( +i (2 π)/n T(n) )  N(n)  exp( -i (2 π)/n T(n) )

		=   N(n) + G(n)

   Then, for very large n, we can approximate (2 π)k/n with a continuous
   variable x with range [0, 2 π), and dropping the notational operator
   dependence on n,

	exp( +i x T )  N  exp( -i x T )  =  N + G

   Using the Baker-Campbell-Hausdorff formula for small x, the LHS becomes

	N + ix [T, N] - (x²/2) [T, [T, N]] + ...

   and so, for large n and small x,

	[T, N]   =  -i/x G

	[T, N]   =  -i n/(2 π) G
	[T/n, N]   =  -i 1/(2 π) G

   This can never have a proper limit in n, but is good for any very
   large bounded value of n.  I think the representation of T will
   cease to be self-adjoint in the limit.  Nevertheless, for large
   n, this last equation becomes the dynamical equation of QM for any
   energy operator E, where T is defined as the number operator in
   the basis that is the Υ transformed eigenbasis of E.
   The switch in sign of this commutator is consistent with what one
   would expect relativistically.
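The second-order smallness of the neglected BCH terms is easy to confirm numerically (Python with numpy; random Hermitian matrices stand in for T(n) and N(n)):

```python
import numpy as np

def expi(H, x):
    """exp(i x H) for Hermitian H, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(1j * x * vals)) @ vecs.conj().T

rng = np.random.default_rng(0)
def herm(m):
    A = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    return (A + A.conj().T) / 2

Tm, Nm = herm(5), herm(5)              # stand-ins for T(n) and N(n)
x = 1e-4                               # small x, i.e. x = (2 π)/n for large n

lhs = expi(Tm, x) @ Nm @ expi(Tm, -x)
bch = Nm + 1j * x * (Tm @ Nm - Nm @ Tm)           # N + ix [T, N]
assert np.linalg.norm(lhs - bch) < 1e4 * x ** 2   # discrepancy is O(x²)
```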

   While we know that, for any n > 1, we have exactly,

	[N, G]  =  0,

   since G = I - n |n, n-1><n, n-1|

	[T, G]  =  - n T |n, n-1><n, n-1| T
	[T/n, G]  =  - T |n, n-1><n, n-1| T

   which is not zero, for any finite n.  In the strong operator topology
   limit, it would be zero if the formally Hermitean T survived the
   strong operator limit; so then, it does not survive.
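The form of FCCR used above can be checked exactly in a toy realization (Python with numpy), assuming T is the conjugate of N by the standard unitary DFT with kernel ω^(jk)/√n, an assumption consistent with the stated Fourier relationship; with this sign convention the relation closes exactly, |n-1> denoting the last basis vector:

```python
import numpy as np

n = 8
w = np.exp(2j * np.pi / n)
F = np.array([[w ** (j * k) for k in range(n)] for j in range(n)]) / np.sqrt(n)
N = np.diag(np.arange(n, dtype=complex))
T = F @ N @ F.conj().T                 # T as the Fourier conjugate of N

# exp( +i (2 π)/n T ) computed exactly as F exp( +i (2 π)/n N ) F†
U = F @ np.diag(w ** np.arange(n)) @ F.conj().T

G = np.eye(n, dtype=complex)
G[n - 1, n - 1] = 1 - n                # G = I - n |n-1><n-1|

assert np.allclose(U @ N @ U.conj().T, N + G)   # the relation holds exactly
```

In this realization U acts as a cyclic shift of the number basis, which is why the relation is exact for every n rather than merely asymptotic.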

   Again consider the form of FCCR:

	exp( +i (2 π)/n T(n) )  N(n)  exp( -i (2 π)/n T(n) )

		=   N(n) + G(n)

   Approximate the exponentials for large n using the unitary and
   second order correct Cayley approximation,

	CN(n)  =  exp( +i (2 π)/n T(n) )

	            1 + i (1/2)(2 π)/n T(n)
	        ≊  ------------------------,
	            1 - i (1/2)(2 π)/n T(n)

   (From the viewpoint of actual computation using a finite
   difference calculus that approximates the continuous, this
   would be superior since the approximating fraction on the right
   is strictly unitary.)
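The two properties claimed for the Cayley approximation, strict unitarity and second order agreement with the exponential, can be confirmed numerically (Python with numpy; a random Hermitian matrix stands in for T(n)):

```python
import numpy as np

def expi(H, x):
    """exp(i x H) for Hermitian H, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(1j * x * vals)) @ vecs.conj().T

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                      # Hermitian generator, stand-in for T(n)

x = 0.01                               # plays the role of (2 π)/n, n large
I = np.eye(6)
CN = np.linalg.solve(I - 0.5j * x * H, I + 0.5j * x * H)   # Cayley fraction

assert np.allclose(CN @ CN.conj().T, I)               # strictly unitary
assert np.linalg.norm(CN - expi(H, x)) < 100 * x**3   # agrees to second order
```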

	1 + i π/n T(n)      1 - i π/n T(n)
	--------------- N(n) -------------------  =
	1 - i π/n T(n)      1 + i π/n T(n)

		  N(n) + G(n)

   Multiplying on the L by (1 - i π/n T(n)), and the R by
   (1 + i π/n T(n)),

	(1 + i π/n T(n)) N(n) (1 - i π/n T(n))  =

	(1 - i π/n T(n)) ( N(n) + G(n) ) (1 + i π/n T(n))


	N(n) + i π/n T(n) N(n) - i π/n N(n) T(n) +
	(π/n)² T(n) N(n) T(n)  =

	N(n) - i π/n T(n) N(n) + i π/n N(n) T(n) +
	(π/n)² T(n) N(n) T(n) +
	G(n) - i π/n T(n) G(n) + i π/n G(n) T(n) +
	(π/n)² T(n) G(n) T(n)

	i 2 π/n [T(n), N(n)]  =

	G(n) - i π/n [T(n), G(n)] + (π/n)² T(n) G(n) T(n)

   For very large n, we can neglect the term quadratic in T(n), so

	i 2 π/n [T(n), N(n)]  ≊  G(n) - i π/n [T(n), G(n)]
	i 2 π/n [T(n), H(n)]  ≊  G(n)

   where H(n) = N(n) + (1/2) G(n).  [This is with ℏ = 1.]  Rewriting
   this large n approximation:

    |    [T(n)/n, H(n)]  ≊  -i/(2 π) G(n)   |

	[G(n), H(n)]  =  0,

   exactly for any n.  But [T(n), G(n)]  ≠  0  for any n.

	[T(n), G(n)]  =  [Υ(n) N(n) Υ(n), G(n)]

	              =  Υ(n) [N(n) Υ(n), G(n)]
	               + [Υ(n), G(n)] N(n) Υ(n)

	              =  Υ(n) N(n) [Υ(n), G(n)]
	               + Υ(n) [N(n), G(n)] Υ(n)
	               + [Υ(n), G(n)] N(n) Υ(n)

	              =  Υ(n) N(n) [Υ(n), G(n)]
	               + [Υ(n), G(n)] N(n) Υ(n)
	              =  - U(n) N(n) U(n) [U(n), G(n)] U(n)
	               + [U(n), G(n)] N(n) U(n)

   Taking the limit to unbounded n,

	[G(n), T(n)/n]  →  0,

   because, in the strong operator topology, G(n) → I, and, after
   completion, T(n)/n becomes a bounded operator T with continuous
   spectrum [0, 1).  Introduce the operator Z(n):

   See Corollary 8.10.1 [Section VIII: Corollary 8.11.1]
   regarding matrix elements.

	Z(n)  :=  (1/2)(H²(n) + T²(n)) + i π [T(n)/n, H(n)]
	       =  (1/2)(H²(n) + T²(n)) + (1/2) G(n)

   for large n, Z(n) is a generator of rotations in the H-T plane:

	[Z(n), T(n)]  =  +i H(n)
	[Z(n), H(n)]  =  -i T(n)

   so, at least for large n, the second Fourier transform,

	Υ(n)  =  exp( i π Z(n)/n )

	Z(n), Hermitean.


	Υ(n) N(n) Υ(n)  =  T(n)


	[Υ(n), Z(n)]  =  0, 

   where one can approximate

	[Υ(n), G(n)]  =  0  =>  [T(n), G(n)]  =  0

   approximating a Heisenberg algebra among T(n), H(n) and G(n).

   NB: Using this quadraticizing procedure, does it close or simply create
       a bounded hierarchy, for fixed n?
       I.e., is the finite n-space toroidal?

   Without any approximation

	CN(n)  =  exp( +i (2 π)/n T(n) )

	Υ(n) T(n) Υ(n)  =  N(n)

	Υ(n) CN^(n/4)(n) Υ(n)  =  exp( i π/2 N(n) )  =  φ(n)

   Of course, one still has exactly for any n,

	[Q(n), P(n)]    =  +i 1/(2 π) G(n)

   Taking the limit to unbounded n, in the strong operator topology,

	[G(n), Q(n)]  →  0,  [G(n), P(n)]  →  0.

   In this limit, the following relationships hold:

	(1/2)(Q²(n) + P²(n))  =  N(n) + (1/2)G(n)  =  H(n)

	[N(n), Q(n)]  =  -i P(n)
	[N(n), P(n)]  =  +i Q(n)


   Before the limit, the next commutators give an su(2) algebra.
   If we add these into the approximating Heisenberg algebra, we have
   T(n), H(n), Q(n), P(n), N(n) and G(n), with H(n) = N(n) + (1/2) G(n),
   with the essential commutators

	   [T(n), Q(n)]  =  
	   [T(n), P(n)]  =  

   missing, and seemingly not easy to compute abstractly; but see
   Theorem 8.10, which suggests how to proceed
   in the |tk> eigenbasis.  We know that,

	[CN(n), T(n)]  =  0
	[CT(n), N(n)]  =  0

   one being a Fourier transform of the other,
   and that in the T(n) eigenbasis where T(n) is diagonal, there exists
   a pair of operators numerically identical to Q(n) and P(n) in the
   N(n) system, so

	[q(n), p(n)]  =  i g(n)
	T(n)  =  (1/2)( q²(n) + p²(n) ) - (1/2) g(n)

	[T(n), q(n)]  =  -i p(n)
	[T(n), p(n)]  =  +i q(n)

	Υ(n) N(n) Υ(n)  =  T(n)
	q(n)  =  Υ(n) Q(n) Υ(n)
	p(n)  =  Υ(n) P(n) Υ(n)
	g(n)  =  Υ(n) G(n) Υ(n)

   Introducing a third Fourier transform,

	fr(n)  :=  exp( i (π/2) T(n) )

	fr(n) q(n) fr(n)  =  + p(n)
	fr(n) p(n) fr(n)  =  - q(n)

   that connects q(n) and p(n), so that

	Υ(n) (Q(n), P(n), G(n), φ(n), N(n)) Υ(n)  =

		(q(n), p(n), g(n), fr(n), T(n))

   Do these close, or almost close in the asymptotic region short
   of the limit of unbounded n?

   For large n, s is a measure of "waiting time" in Planck clicks, and

   t(s)  :=  <exp( -i (2 π)/n N(n) s ) T(n) exp( +i (2 π)/n N(n) s )>
         :=  <T(n, s)>

   e(s)  :=  <exp( -i (2 π)/n T(n) s ) E(n) exp( +i (2 π)/n T(n) s )>
         :=  <E(n, s)>

	D/Ds e(s)  =

   i(2 π)/n <exp( -i (2 π)/n T(n) s ) [E(n), T(n)] exp( +i (2 π)/n T(n) s )>

   =  i(2 π)/n <[E(n, s), T(n)]>
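The evolution equation for e(s) just derived can be checked by finite differences (Python with numpy; random Hermitian matrices stand in for T(n) and E(n), and the expectation <...> is taken in a random normalized state):

```python
import numpy as np

def expi(H, x):
    """exp(i x H) for Hermitian H, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(1j * x * vals)) @ vecs.conj().T

rng = np.random.default_rng(2)
n = 5
def herm():
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

Tm, E = herm(), herm()                 # stand-ins for T(n) and E(n)
v = rng.normal(size=n) + 1j * rng.normal(size=n)
v /= np.linalg.norm(v)                 # state in which <...> is taken

a = 2 * np.pi / n
def e(s):                              # e(s) := <E(n, s)>
    U = expi(Tm, -a * s)
    return (v.conj() @ U @ E @ U.conj().T @ v).real

s, h = 0.7, 1e-5
numeric = (e(s + h) - e(s - h)) / (2 * h)      # central difference for D/Ds

U = expi(Tm, -a * s)
Es = U @ E @ U.conj().T                        # E(n, s)
analytic = (1j * a * (v.conj() @ (Es @ Tm - Tm @ Es) @ v)).real
assert abs(numeric - analytic) < 1e-6  # D/Ds e(s) = i(2 π)/n <[E(n,s), T(n)]>
```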


Uniformity of Local Time Progression

   How does the deviation from uniformity of t(n) "time progression"
   depend on n?  At what point would it be experimentally discernable?
   For a statistical variance (square of standard deviation),

	     σ²(t)  =  < (<t> - t)² >
	            =  < <t>² - 2<t>t + t² >
	            =  <t>² - 2<t>² + <t²>
	            =  <t²> - <t>²

   Look at the average and standard deviation of the distribution of
   transition probabilities for one transition.  Because of the symmetry
   of the probability distribution for forward and backwards transitions,
   <t> = 0, so in reality
	     σ²(t)  =  <t²>

   The first order (one click) transition probabilities are already calculated,

	(1/2)² csc²( (π/n)(k-j) )
	-------------------------            for k ≠ j
	      (n-1)(2n-1)/6

	       --------------                for k = j  [stasis]

	          n-1      (1/2)² csc²( (π/n) k )
	<t²>  =    Σ   k²  ----------------------
	          k=1           (n-1)(2n-1)/6

	                 3           n-1
	       =  ---------------     Σ   k² csc²( (π/n) k )
	           2 (n-1)(2n-1)     k=1

   It would be helpful, of course, to be able to express this sum in
   closed form.  It turns out fairly frequently that the sums arising
   in calculations of physical interest, as well as the integrals that
   can be used to approximate them, are difficult, if not downright
   impossible, to evaluate in closed form.  The saving grace, perhaps,
   is that for finite large n they can be calculated exactly by a high
   speed computer in some reasonable length of time.
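As a sketch of such a direct computation, here is the sum for <t²> with the normalization (n-1)(2n-1)/6 above, together with the single k = 1 term that furnishes the large-n approximation below (a toy numerical check, not part of the original text):

```python
import math

def t2_exact(n):
    """<t²> = [3 / (2 (n-1)(2n-1))] Σ_{k=1}^{n-1} k² csc²(πk/n), summed directly."""
    s = sum(k * k / math.sin(math.pi * k / n) ** 2 for k in range(1, n))
    return 3.0 * s / (2.0 * (n - 1) * (2 * n - 1))

def t2_k1_term(n):
    """Keep only the k = 1 term of the sum; for large n this tends to 3/(4π²)."""
    return 3.0 / (2.0 * (n - 1) * (2 * n - 1) * math.sin(math.pi / n) ** 2)
```

The full sum exceeds the k = 1 term for every n, so the single-term value is a lower bound on the unscaled variance.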

   For large n, the k = 1 term, csc²( π/n ), should dominate the
   content of the sum; in fact, in the limit n→∞, it is unbounded, so an
   approximation of <t²> for large n is

	                3
	<t²>  ≈  ---------------  csc²( π/n )
	          2 (n-1)(2n-1)

	      →  (3/4)(1/n²) (n²/π²)  =  3/(4π²)

   This is an approximate variance as it would be computed for T(n),
   but for the scaled time (clock phase) operator t(n) := (1/n) T(n),
   the expression picks up a factor of (1/n²):

	σ²(t)  ≈  3/(4 π² n²)
   This gives a rapidly decreasing variance for increasing n that
   conforms to Newton's uniform progression of time within a local,
   but increasingly large, region of space and time; the larger the
   region, the better the Newtonian approximation, due to the
   peaking of the probability distribution of transitions on the
   0 → ±1 transitions.

   Understand that this computation is for one click only; for many
   clicks it is then only reasonably suggestive that the probability
   distribution on k for transitions 0 → ±m (mod n), after m clicks,
   has a variance proportional to (1/n²); the variance may also be
   proportional to some positive power of m.  I will merely guess here
   that the power of m is less than or equal to 2.

   The variance is logically certain to approach 0 as n approaches
   infinity, but how fast?  This depends on the actual quantum model,
   and how "big", or energetic, it needs to be.

Direct Sums of FCCR

Consider the direct sum of all the IRREPS of SU(2), one in each n dimensional Hilbert space; it is unitarily equivalent to the representation induced in the Hilbert space of holomorphic functions on C² by the SU(2) rotations of C². Restricting the direct sum to odd n only is equivalent to the reducible representation of SO(3) induced in the standard Hilbert space of differentiable functions of the Schrödinger representation by rotations of R³.

Such a Hilbert space of holomorphic functions on C² is the one often used for the construction of all the finite dimensional IRREPS of sl(2, C), the Lie algebra of SL(2, C), the covering group of the restricted Lorentz group. The full Lorentz group requires a tensor product of this construction with itself, and so should require a tensor product of FCCR(n) with itself (then su(n) X su(n)) for a true relativistic expression.

Does this show standard QM as an already second quantized formalism starting with FCCR?

Understanding The FCCR Clock Propagator and the Origins of the QM
Time Dependent Phase Factors of Time Dependent Energy Eigenfunctions

See the separation of the time dependent phases in
Relativistic Feynman Path Integrals

According to the "system clock progression", the fundamental m-click propagator for any m > 0 is,

	( I - i H(n) m dt / ħ )ᵐ          [dt = t0 (2 π)/n]

If we ask for the transition amplitude that, after m clicks, the time shown by the clock is tj when it was started at tk, it is,

	<tk| ( I - i H(n) m dt / ħ )ᵐ |tj>

   The probability for this m-click transition is proportional to

	|<tk| ( I - i H(n) m dt / ħ )ᵐ |tj>|²

   and equal to,

	Z(n) |<tk| ( I - i H(n) m dt / ħ )ᵐ |tj>|²

   where the normalizing factor Z(n) is given by,

	Z(n) =  Σ |<tk| ( I - i H(n) m dt / ħ )ᵐ |tj>|²

	     =  Tr(  Σ   ( -i (2 π t0)/n  H(n)/ħ )ʲ / j! )

	     =   Σ   ( (2m j)/(2m)ʲ ) (-i (2 π t0)/n)ʲ Tr( (H(n)/ħ )ʲ )

   This is not exactly easy to compute, but let us assume it done, and let
   us also recognize that Z is at least also a numerical function Z(n, m)
   of both n and m.
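A direct numerical sketch of Z(n, m) is possible, assuming the |tk> basis is the discrete Fourier transform of the basis in which H(n) is given, and summing over both j and k (both assumptions; the document's H(n) = N(n) + (1/2)G(n) would simply replace the diagonal H used in the test):

```python
import numpy as np

def Z_norm(H, m, dt):
    """Z(n, m) = Σ_{j,k} |<tk| (I - i H dt)^m |tj>|², with |tk> the DFT of the
    H-basis (assumption).  Since the DFT is unitary, this equals the squared
    Frobenius norm of (I - i H dt)^m."""
    n = H.shape[0]
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)              # unitary DFT matrix
    A = np.linalg.matrix_power(np.eye(n) - 1j * dt * H, m)
    amps = F.conj().T @ A @ F                           # matrix of <tk| A |tj>
    return float(np.sum(np.abs(amps) ** 2))
```

Because the Fourier matrix is unitary, Z(n, m) comes out the same whether the squared amplitudes are summed in the t-basis or in the H-basis.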

   If we ask for the expected value of the clock time after m clicks given
   that the clock started in the state reading tk, this is given by
   t(n, m, k) where

        Z(n, m) t(n, m, k)  =

             Σ  tj |<tk| ( I - i H(n) m dt(n) / ħ )ᵐ |tj>|²

   Getting tj inside of the inner product seems an interesting problem,
   because it appears that it will be necessary to take a square root and
   that we will then write,

        Z(n, m) t(n, m, k)  =

	 Σ  |<tk| ( I - i H(n) m dt(n) / ħ )ᵐ T^(1/2)(n) |tj>|²

There is no especial formal problem with this, since T(n) is a positive operator, or can always be arranged to be so. The question is one of allowing, on physical or formal grounds, the 2ⁿ possible operator roots of T(n), or simply selecting the single positive operator root, where all its eigenvalues are the positive square roots of integers.
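Selecting the single positive operator root can be sketched by diagonalizing and taking the positive root of each eigenvalue; the other roots arise from flipping the sign of individual eigenvalue roots (a minimal sketch for a positive semidefinite Hermitian T):

```python
import numpy as np

def positive_root(T):
    """The unique positive square root of a positive semidefinite Hermitian T:
    diagonalize, take the nonnegative root of each eigenvalue, undiagonalize."""
    w, V = np.linalg.eigh(T)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
```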

This method of taking expectation values, given the number of elapsed clicks, and an initial state should be typical of that for any observable, or at least for primary ones. The coupling of canonically conjugate pairs as structural in the construction above should be noted. The fundamental "dynamical question" that should be answerable in physics is usually answered by a solution of the Cauchy problem that predicts on the basis of some rule of propagation what the state of some system will be in the future, given its present state.

   The expected value of any observable A(n) after m clicks have
   elapsed should then be given by an analogous construction: a
   probability distribution over (k, j), normalized for fixed n and m,
   where n is the "clock size" and m is the waiting time measured in
   multiples of the Planck time.  An RMS clock speed can then be
   defined through the energy operator by

	s²(n, m)  =  (1/m)²  Σ   (k - j)² prob( tk → tj; n, m )

   It is conceptually *very* important to distinguish the transition
   rate or clock speed, however it is defined, from the "rate of time"
   told by the clock.

   While the clock speed is the "average" number of transitions per second,
   during a given transition, the clock pointer moves (or not) from some
   pointer position, say tk to another pointer position tj.
   What we shall measure as the self-told time of a clock will be an
   expectation value of these transitions relative to the clock speed as
   the intrinsic fiduciary of a given structure for a physical system.

   Now, there are two forms of such expectation values, one is a phase
   average, the other an average over very many transitions.  Should
   these turn out to be equal, we might label the clock "ergodic".
   Cf. the ergodic theorems of Birkhoff and Hopf, and v. Neumann
   [Neumann 1932] in the context of statistical mechanics.

   Since the pointer eigenvectors are finite in number and discrete,
   one could expect all clocks to be ergodic, and
   the phase average and transition average to be equal; nevertheless,
   this must be proved.

   It is equally important to remember that physical systems are not
   always a priori in "good clock states".  As a physical description, a
   "good clock state" is one with an equidistribution of energy over
   the energy operator's eigenvectors.  This would also be reasonably
   true in standard QM, conceptually.  See, however, clockstates.

m-Click Transitions for Large m

   Consider now m transitions, for finite values of n, m and k generally,
   instead of just one transition.

   A "state of process"  |ψ> is propagated by multiplying a series of
   fundamental propagators:

	   (1 + i ( E(n) τ(k) )/ħ )ᵐ |ψ>

   with τ(k)  :=  τ0 k (m/n)


	   (1 + i ( E(n) τ(k) )/ħ )ᵐ  =

	    Σ  (1/mʲ) (m j) ( i (E(n) τ(k) )/ħ )ʲ

	     →  exp( i (E(n) t)/ħ  )

   only if the variable τ(k) somehow conspires to approach a continuous
   variable.  If it does not, nevertheless, propagation in the case of
   finite n, m and k is still perfectly well defined.

   At least approximately, the exponential of finite n and discrete tk
   can be used.
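A scalar toy model of this limit (purely illustrative, replacing E(n)τ/ħ by a number x): the m-fold product (1 + i x/m)ᵐ approaches exp(i x) as m grows, while remaining perfectly well defined, though not exactly unitary, for finite m:

```python
import cmath

def finite_propagator(x, m):
    """Scalar stand-in for the m-click propagator: (1 + i x / m)^m → exp(i x)."""
    return (1.0 + 1j * x / m) ** m
```

For finite m the modulus slightly exceeds 1, a scalar illustration of the nonunitarity of the elemental propagator discussed below.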

   To evaluate the transition amplitudes for exponentials or powers, as

	    <tk| Eᵐ(n) |tj>

   one need only diagonalize the operator E(n). If one can do this
   then the general finite propagator can also be diagonalized, and
   transformed back to the t-basis.  Needless to say, in principle,
   and numerically we can do this for either N(n), or H(n) as the
   FCCR(n) SHO generator of time translations with no especial
   conceptual trouble.

   For large n, generally, the matrix looks like something that can
   be diagonalized using Chebyshev polynomials.  Diagonalize, raise
   to a power, and undiagonalize.  The matrix elements will probably
   be expressible only as sums that cannot always be combined into
   a closed form, but they can be evaluated numerically.
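The diagonalize, raise to a power, undiagonalize recipe can be sketched for any Hermitian E(n); the t-basis matrix elements assume, as above, that the t-basis is the discrete Fourier transform of the basis in which E is given:

```python
import numpy as np

def operator_power(E, m):
    """Compute Eᵐ by diagonalizing Hermitian E, raising its eigenvalues to the
    m-th power, and transforming back."""
    w, V = np.linalg.eigh(E)
    return (V * (w ** m)) @ V.conj().T

def t_basis_elements(E, m):
    """<tk| Eᵐ |tj>, with the t-basis taken as the DFT of the given basis (assumption)."""
    n = E.shape[0]
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)
    return F.conj().T @ operator_power(E, m) @ F
```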

   To "unitarize" the finite propagator in principle, thus normalizing
   the transition probabilities through their transition amplitudes,
   see the method of Cayley, above.

   However, there are good physical and mathematical reasons not
   to impose a general theoretical condition of unitarity on the
   elemental propagator:

	1. The system is generally open, and not closed.
	   The unitarity of QM is a statement both of the closedness
	   of QM systems, and its dynamic conservation of probability.
	   A single particle in an infinite Newtonian type universe
	   must have its wave function defined over the entire
	   infinite universe, otherwise unoccupied, even if the
	   particle has just come into existence.

	2. Such an open system cannot then be a Hamiltonian system.
	   The pure states that have become pure processes, must
	   then be replaced more generally by mixed states, which is
	   to say, density matrices representing "impure processes".

	3. A general change of reference is effected by a
	   noncompact group that contains the full Lorentz
	   group (actually its covering group SL(2, C) X SL(2, C)),
	   so the observed behavior of the clock with its transition
	   amplitudes will depend on the observer.

	4. The insistence on the unitarity of dynamics is more of
	   a convenience than it is a logical necessity.  It is
	   an expression of conservations of both energy and
	   probability.  For an open system, neither need be conserved;
	   but, an observer should then be considered to be bright
	   enough to normalize any frequency distribution to a
	   probability distribution by a standard mathematical
	   algorithm, which is, in fact, independent of any observer.

	   This last reason, in its defiance of the need for
	   unitarity, distresses me, only because it seems to imply
	   the necessity of human consciousness and even intelligence
	   in order to effectuate and interpret the mathematics
	   as part of the theoretical construct itself.  Thus, I seem to
	   raise the spectre of v. Neumann's "psychophysical parallelism"
	   [Neumann 1932]
	   that I have long shrunk from.  There must be a way
	   out of this horror.

Conclusions & Criticisms

There exists a sequence of reformulations and generalizations of standard quantum theory, parameterized by an integer n, 1 < n, that approaches QM founded on CCR. The generalization labeled FCCR(n) has all the requisite structural and interpretive properties of QM, except that the long sought noncentral time operator is naturally present. FCCR is a local theory with discrete spectra for the noncommutative position and time operators.

In addition, for given n there exists a maximal velocity that is left invariant by the group of complex linear transformations on the algebraic structure that leave the Q(n)-P(n) commutator invariant, in much the same way that the group of Lorentz transformations leaves c(n) invariant. The invariant maximal velocities are determined by a cone structure of a pseudohilbert space with a sign indeterminate inner product, and so the invariance group is noncompact. The theory then contains the essential relativistic velocity structure, while finally representing space and time on the same mathematical footing: both are represented by well defined operators.

The time operator, Fourier conjugate to an energy operator, describes a clock aspect of an oscillator; it behaves erratically in its transitions of pointer positions; but for very large n, the clock behavior approaches, asymptotically, the uniformity of time progression that is part of the standard Newtonian assumption, and so the uniformity is now derived from more primitive and fundamental notions rather than being simply assumed. The uniformity is a statistical result arising from the inherent statistics of quantum theory most generally.

As FCCR(n) is local, so too is the clock aspect of the natural oscillator. If the universe can be modeled as an ensemble of such local FCCR(n) structures, it seems that only oscillator clocks with large n will provide a uniform notion of time progression since uniform progression just starts to become dominant at n=4. This reveals a basic chaotic dynamics for systems smaller than n=4, and that only leaves n=2,3 for this regime.

Thus, there is derived a natural ontology of Newtonian time uniformity which is the same everywhere that the fundamental theory is operative, and one might conclude throughout space. How clocks at a distance and/or in relative motion may measure each other's rates of progression is something not addressed, but based on the ingredients of relativity theory available in the theory, some good guesses could probably be made. [Algebraic Universe]

It may be worth noting that this is not a "hidden variable theory", even though in a sense the time operator T(n) has been hidden at a quantum level below QM based on CCR. It is instead a continuation of the thought motivating existing quantum theory, one that makes the basic theory more consistent with its own basic ideas, more formally symmetrical, and consistent with the fundamental concepts of relativity.

Can a sensible time of cosmological scale be defined from this? I conjecture that the answer is yes, but first, an emergence of relativistic structure that makes connection with known structure must be made, and my current suspicions are being investigated in the form of random flight. [FCCR and Random Flight]

How would such a finitistic quantum theory make sense in terms of that predicated on CCR?
Conceptually, garden variety CCR based QM is given in terms of a noninteractive noncommutative phase space in which the states of a system reside. View this then as an ideal noninteractive phase space and then also necessarily a noninteractive configuration space, which even in classical mechanics may be curved. An assumption behind this, using the merest suggestion of the notions of general relativity is that there is insufficient local energy density to contradict local noninteraction with configuration space.

The notion of particle, however indeterminate its classical state may be quantum mechanically, involves a considerable localization of energy, a high local energy density that is enough to force "interaction" with the configuration space to curve it, and restrict the validity of an assumption of local flatness associated with the linearity of the space of states in quantum theory.

Effectively then, one might expect that theory would work in describing particles themselves with small values of n, simply because a particle's existence associated with a high local energy density would also exist within a context of high local curvature of the configuration or phase space. This happens to coincide with the observed unitary symmetries of elementary particles. The smaller n, and there is a limit to this, the higher the curvature, and the higher the local energy density measured in a locally defined relativistic frame.

FCCR(n) is a theoretical framework where n is a measure of the size of a locality under consideration, and one where such statistical flatness is specifically valid. FCCR(n) also describes an open quantum theoretical framework, meaning that if one were to posit a collection of such local algebras, not unlike the known Haag-Kastler formalism of QFT, [Local quantum field theory - Wikipedia] there would be an interaction between the local algebras that would conspire to create a local space and time structure as a cooperative phenomenon. The mathematics of putting together such local algebras seems to be that of sheaves [Wikipedia] of special unitary Lie algebras.

In GR, Lorentz covariance is so strictly local as to be "at a point", a concept that is at odds with a reasonable concept of a genuine quantum theory.

On the other hand, FCCR(n) provides a Lorentz covariance that is not at a point, but within a neighborhood whose size is measured by n that is presumably inversely proportional to some parameter of local curvature.

There is the obvious remark that when n is a power of 2, a complex Clifford algebra, which is also a C*-algebra, may be defined with clear anticommutation relationships. The "low" values of n for this confluence are, obviously, powers of 2: n = 2, 4, 8, 16, 32, 64, 128.

A major criticism of this entire construction is that the "clock" tied to the energy by a Fourier transform is only one dimensional, when clearly any physical clock that we know is three dimensional, with more degrees of freedom. An oscillator of three dimensions has not only two more degrees of freedom, but also degenerate energy levels. The energy levels of a p dimensional oscillator are given by

		Ep(k)  =  (k + p/2)

A p-dimensional Q oscillator with n states has a degeneracy of (k+p-1 k), a binomial coefficient, where k < n, for energy level k. If p=3, there are still n energy levels for a finite n-state oscillator, but now, these energy levels have degeneracy.

	k=0   has degeneracy (2, 0)    =  1
	k=1   has degeneracy (3, 1)    =  3
	k=2   has degeneracy (4, 2)    =  6
	k=3   has degeneracy (5, 3)    =  10
	k=4   has degeneracy (6, 4)    =  15
	k=5   has degeneracy (7, 5)    =  21
	k=6   has degeneracy (8, 6)    =  28
	k=7   has degeneracy (9, 7)    =  36
	k=8   has degeneracy (10, 8)   =  45
	k=9   has degeneracy (11, 9)   =  55
	k=10  has degeneracy (12, 10)  =  66
	k=11  has degeneracy (13, 11)  =  78
	k=12  has degeneracy (14, 12)  =  91

As n increases, the upper bound of k increases, and so does the degeneracy measure. The higher energy levels acquire higher statistical weight. But, in the toroidal clock topology, there is a symmetry between the energy states E(k) and E(n-k), in particular between E(1) and E(n-1), and indeed also, as might be expected,

		(k+p-1 k)  =  (k+p-1 p-1)
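The degeneracy counting and the symmetry identity above can be checked directly (a trivial sketch):

```python
from math import comb

def degeneracy(k, p=3):
    """Number of states of a p dimensional oscillator at energy level k:
    the binomial coefficient (k+p-1 choose k)."""
    return comb(k + p - 1, k)
```

For p = 3 the first few values are 1, 3, 6, 10, 15, …, and comb(k+p-1, k) == comb(k+p-1, p-1) holds identically.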

One might then expect that a Fourier transformed clock will have degenerate "pointer positions". Such clocks would have properties unlike the standard, effectively one dimensional clocks that we have come to know and love, which tell a one dimensional time that is so much a part of our cognitive constructions.

Notice that the standard ammonia clock, while having a three dimensional existence, is essentially a Q oscillator in the one dimensional oscillation of N relative to the plane formed by the hydrogen triangle in the NH3 molecular complex.

We pick such a molecular clock precisely because it conforms to our notions of what a clock should be; there is no real reason to suppose that nature actually conforms to our peculiar notions of how it ought to behave. How do we actually read an oscillator clock with three independent degrees of freedom? It seems to make perfect physical sense, but it is not within the common sense of constructed physical models of clocks that conform to the human model of time, a model manufactured cognitively at a subconscious level, and which we mistakenly take as somehow ontological in nature.

Nevertheless, the idea of a multidimensional clock is easily obtained by using the invariance group of FCCR(n) that leaves G(n) invariant, and which leaves the energy relations

        H(n)  =  N(n) + (1/2) G(n),


        N(n)  =  (1/2)(Q²(n) + P²(n)) - (1/2) G(n)

form invariant.

Inner actions on the operator set {N(n), T(n), Q(n), P(n)} can produce G(n)-orthogonal sets

        {Np(n), Tp(n), Qp(n), Pp(n)}.

There are then separate energy and time operators associated with each axis.

One might recall that in SR, a time dilation in the direction of 3-motion is different than in a direction orthogonal to the direction of 3-motion.


Physics Pages

Math Pages

Home Page

            © August 2003 by Bill Hammel (bhammel@graham.main.nc.us).
            Permission to use for any noncommercial, educational purpose.
            This copyright and permission notice must appear in all copies.
            Permission is also granted to refer to or describe these
            documents in commercial books, products, or online services.
            These documents may be freely reproduced, copied and disseminated
            by any electronic, digital or written means, but in no case may
            such copying or dissemination be charged for.  The idea is very
            simple, no person or body has supported any of the original
            works contained in these pages.  They are works of love given
            freely.  I find repugnant the idea of someone expropriating,
            for profit, what I give freely.  If you have a problem with
            this, ask; rules always have exceptions.

The URL for this document is:
Created: August 14, 2003
Last Updated: August 23, 2015