INTERPRETING THE PLANCK MASS

Abstract

The Planck mass is shown to be a lower bound on the regime of validity of General Relativity. Empirically, the Planck mass is greater than the masses of elementary particles in the regime of Quantum Theory, and so the regimes of validity of GR and QT are disjoint, casting severe doubt on the meaning of quantizing the gravitational field of the Einstein equations.

The Canonical Commutation Relation (CCR) assumes that q(t) and p(t) can both be measured "simultaneously", subject to an uncertainty relation governing the accuracy of their measurements. SR, on the other hand, reminds us that the concept of simultaneity, at any spatial separation, is not a relativistic invariant, and so is a concept that cannot be unambiguously defined.

Consider the CCR algebraically, in terms of classical concepts, and not in terms of any of the standard representations:

    [q(t), p(t)] = [q(t), m dq(t)/dt]
                 = m lim (dt → 0) [q(t), (q(t + dt) - q(t))/dt]
                 = m lim (dt → 0) ( [q(t), q(t + dt)]/dt - [q(t), q(t)]/dt )
                 = iħ I

Now, suppose the limit dt → 0 cannot be taken. [Strictly speaking, the measurement of both position and momentum "simultaneously", as it is often spoken of, is physically, and quantum theoretically, impossible, even though QM and QFT allow complex valued functions and operators of functions of space and time, arguing that neither the functions nor the operators are observable in themselves, and that what is measurable are functionals of these. Such an argument is conceptually bogus, and an inconsistency in both QM and QFT.] The second term, [q(t), q(t)], vanishes identically, and we are left with

    m [q(t), q(t + dt)]/dt = iħ I
    [q(t), q(t + dt)] = (iħ dt/m) I

If U(.) is the uncertainty functional of q(t) in some state which is not an eigenstate of q(t) or q(t + dt), then

    U( q(t) ) U( q(t + dt) ) ≥ (ħ dt)/(2 m)

If U( q(t) ) is approximately (c dt), where dt is understood as a relativistically related uncertainty in t, and c is considered a true physical constant, and therefore merely an instrument of conversion of physical units, then

    U( q(t + dt) ) ≥ ħ/(2 m c) = (c ħ)/(2 m c^2)

Since no spatial point is distinguished, the RHS of the above is independent of t, and q(t) is just as good as q(t + dt); so, w.l.o.g., we can write for an intrinsic uncertainty in the position q of a mass m,

    U( q ) ≥ ħ/(2 m c)    [half the reduced Compton wavelength]

If m is now taken to be the Planck mass (a very large, or highly energetic, mass in terms of elementary particles),

    mP = 2.177E-5 gm

the RHS lower bound becomes lP/2, half the Planck length, so that the diameter of the sphere of uncertainty surrounding q is lP, which is also of the order of the Schwarzschild diameter (4GmP/c^2) of the Planck mass.

On the other hand, take m = me, the mass of the electron: do we then get a "Heisenberg length" as the lower bound uncertainty? Taking

    me  Electron mass   [M]   9.1095E-28 gm
    ħ   Reduced Planck  [ET]  1.0546E-27 gm-cm^2/sec

    ħ/(2 me c) = 1.0546/(2 × 9.1095 × 3) E(-27 +28 -10) cm
               = 0.019295E-9 cm
               = 1.9295E-11 cm  - not too bad.

This is 2 orders of magnitude greater than the classical electron radius, and 1 order of magnitude less than the (e+, e-) pair production length.

For any mass m smaller than the Planck mass mP, up to factors of order unity,

    U( q ) ≥ ħ/(2 m c) > Schwarzschild radius of m

and so

    Schwarzschild diameter of m < Planck length
which should tell us that something is wrong with GR being uncritically applied to masses smaller than the Planck mass. In fact, the Schwarzschild radius of the electron is of the order E-55 cm, some 22 orders of magnitude less than the Planck length, which is of order E-33 cm, a distance which current physics cannot resolve, and is, theoretically, the absolute lower bound in distance resolution.
Such a small Schwarzschild radius should not be ascribed any physically measurable or logical meaning.
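For the skeptical reader, the numbers quoted above are easy to check. Here is a minimal numerical sketch in Python (CGS units; the constants are standard values, and the variable names are mine, not part of the argument):

    from math import sqrt

    hbar = 1.0546e-27   # reduced Planck constant [g cm^2 / s]
    c    = 2.9979e10    # speed of light [cm / s]
    G    = 6.674e-8     # Newton's constant [cm^3 / (g s^2)]
    m_e  = 9.1095e-28   # electron mass [g]

    m_P = sqrt(hbar * c / G)      # Planck mass,   ~ 2.177e-5 g
    l_P = sqrt(hbar * G / c**3)   # Planck length, ~ 1.616e-33 cm

    def U_q(m):
        """Lower bound on the intrinsic position uncertainty, hbar/(2 m c)."""
        return hbar / (2.0 * m * c)

    def r_s(m):
        """Schwarzschild radius, 2 G m / c^2."""
        return 2.0 * G * m / c**2

    print(f"m_P       = {m_P:.4e} g")
    print(f"l_P       = {l_P:.4e} cm")
    print(f"U(q; m_P) = {U_q(m_P):.4e} cm   (= l_P/2)")
    print(f"U(q; m_e) = {U_q(m_e):.4e} cm   (~ 1.93e-11 cm)")
    print(f"r_s(m_e)  = {r_s(m_e):.4e} cm   (~ 1.4e-55 cm, far below l_P)")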
It might be worth recalling here that the Schwarzschild singularity is not exactly trivial. While a point is what first comes to mind when the word "singularity" is used, the Schwarzschild singularity is an entire 3D hypersurface within a 4D spacetime which separates an interior from an exterior solution of the set of Einstein equations. Moreover, interpretationally, the radial and temporal coordinates exchange their roles in passing between these two regions (r becomes timelike and t spacelike), which easily suggests an analytic connection between the variables through complex values. Cf. Classical Geometry & Physics Redux
It is physically implausible that the intrinsic uncertainty of a mass's position should exceed the mass's Schwarzschild diameter. That, obviously, is an assumption here.
This is not exactly Penrose's idea that the Planck mass somehow represents a dividing line between quantum and classical mechanics, and therefore shows where classical mechanics fails; rather, more specifically, the Planck mass shows where GR fails. This is thus a similar, but, I think, more reasonable and specific interpretive result than the one Penrose has suggested in several places, since nowhere in quantum theory is there an inbuilt criterion for some mass above which quantum theory itself becomes invalid. On the assumptions of quantum theory, the theory is as valid for galaxies as it is for electrons, yet experience tells us otherwise. I have never heard of quantum interference patterns being observed with baseballs.
So, why am I making this argument, and of what use is it?
I consider that I have made a reductio ad absurdum argument
that casts doubt on the application of the concepts of GR
to the necessarily quantized regime of particle physics, and
that, as a consequence, also casts doubt on the physical
legitimacy of even considering a concept of "quantum gravity"
as being equivalent to, or recognizable as, some suitable
quantization of the Einstein equations of GR; it is not
possible to avoid nonsense if both GR and the current concepts
of quantization and quantum theory are insisted upon.
Another way of putting this is that the regimes of validity
of a quantum theory given by QM, and of a relativistic theory
given by GR, are necessarily disjoint, and that an attempt to
force a common regime of validity will inevitably lead to
contradictions.
This is the fundamental point of the present argument and essay.
If this argument is to be believed, then a consequence is that a classical gravitational theory in the manner of Einstein is conceptually and theoretically isolated from the quantum regime, and so a theory of "quantum gravity" cannot simply be constructed by some method of quantization; it must instead be constructed within an a priori quantum context, with the view that it possess a classical limit which approximates some classical theory, as a first guess, that of Einstein.
As a second consequence, given that all attempts at "quantizing the gravitational field" have strongly indicated that the observable quantities of space and time must themselves be quantized, the standard quantum theories, which do not do this, must be corrected so that they quantize space and time automatically and intrinsically, in order that a "correct" theory of quantum gravity can be stated.
One can get away with making impressive results in the context of QED because of two conditions: 1) the accepted Maxwell theory is cast as one of a massless field, and 2) the equations are linear. Even there, the fudging of self consistent cutoffs and renormalization is necessary to get finite answers. What fails in the case of the "massless graviton" is the linearity. The same tricks cannot be pulled to untie the Gordian knot. As Alexander showed, even if apocryphally, the knot needs to be cleaved.
Note also that Wigner has argued, applying the fundamental structure of quantum mechanics and general relativity to a clock as harmonic oscillator, that the mass of any "good" clock must be of the order of the Planck mass. [Wigner 1957]
The current argument and Wigner's argument work, in a sense, from opposite sides to the same conclusion.
The question of where quantum mechanical behavior becomes classical behavior is an old one; the problem is that QM by itself does not know the difference between an electron and a baseball. Loosely speaking, quantum states can be represented as superpositions of waves with de Broglie wavelengths.
    ld(m) := ħ/(m v)    (de Broglie wavelength)

Compared with the Compton wavelength, since v < c,

    ld(m) > lc(m) := ħ/(m c)
In either case, these wavelengths decrease as the mass increases. The waves themselves, as contributing to the coherent superpositions that give rise to quantum mechanical interference effects, have no meaning, and in any case cannot contribute to the quantum state, if the wavelength becomes "too small". The Planck length defines the natural smallest length that is too small, and so one asks the question: at what mass does the Compton wavelength equal the Planck length? Setting ħ/(m c) = lP = (ħ G/c^3)^(1/2) and solving for m gives m = (ħ c/G)^(1/2), which is precisely the Planck mass.
Then, going back to ld(m), which also depends on the velocity of the particle, and in fact on the momentum: the momentum that shrinks the de Broglie wavelength to the Planck length is the Planck momentum (= mP c). In either case, it is the Planck mass which straddles the quantum and classical regimes.
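Both identities are easy to verify numerically; a small sketch in Python (same standard CGS constants as before, names mine):

    from math import sqrt

    hbar = 1.0546e-27   # g cm^2 / s
    c    = 2.9979e10    # cm / s
    G    = 6.674e-8     # cm^3 / (g s^2)

    l_P = sqrt(hbar * G / c**3)   # Planck length
    m_P = sqrt(hbar * c / G)      # Planck mass

    m_star = hbar / (c * l_P)     # mass whose Compton wavelength equals l_P
    p_star = hbar / l_P           # momentum whose de Broglie wavelength equals l_P

    print(abs(m_star / m_P - 1.0) < 1e-12)         # True: m_star is the Planck mass
    print(abs(p_star / (m_P * c) - 1.0) < 1e-12)   # True: p_star is the Planck momentum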
The Planck mass is of the order of the mass of a flea. It seems unreasonable that the mechanics of a flea's body, as an entire entity, should somehow have to be described by QM, and that classical mechanics would be inapplicable to such an enterprise. Therefore, understanding mP as marking the boundary between classical and quantum *mechanics* does not seem generally appropriate. As biological objects go, a flea is still a pretty large object. Many biological processes need to take place quantum mechanically, e.g., the passage of an ion through a channel, and therefore also neural currents. There are many life forms smaller than a flea. Can they somehow be shown to be quantum mechanical in nature? Or is the cutoff actually smaller than the Planck mass? If so, what is it, and how does this come to be?
On the other hand, GR is fundamentally about very large and massive structures like the sun and larger; it is not so unreasonable that its self-defined regime of validity via the Schwarzschild radius should exclude quasi-microscopic masses, of the order of the Planck mass, as well as the yet smaller masses of elementary particles.
Although there exists a well known coordinate transformation, due to Wheeler, and greatly expanded upon by D. Finkelstein and others, from the initial spatially spherical coordinates, that eliminates the Schwarzschild singularity, making of it the essential wormhole by extending the spacetime manifold, it is not exactly clear physically that this extension is correct or permissible. Remember that spacetime manifolds are necessarily at *least* second order differentiable.
General Coordinate Invariance is "permissible" in the general context of the Einstein equations; but, for a given solution of these equations, this symmetry of general diffeomorphisms is broken to the more specific and confined symmetry of the particular solution. Are such manifold extensions physically legitimate? They are certainly interesting in trying to understand just what an otherwise nonremovable singularity of the gravitational field means. Is it simply a matter of having chosen an injudicious coordinate system, or is this a singularity of actual physical importance? The tenor of current physics research lends credence to the idea that the Schwarzschild singularity is of genuine physical significance, and not merely a removable singularity which can, for all purposes, be ignored, or whisked away by a change of coordinates that, in so doing, actually redefines the concept of the physical spacetime manifold of gravitational theory. If the spacetime manifold may not be so extended as a mathematical artifice, then its physical significance is established; what to do about such an apparent impasse is another matter.
I lean towards an examination of our misunderstandings of the
geometry of physical space and time.
Viz. Classical Geometry & Physics Redux
The idea of mathematical extensibility may be perfectly valid with regard to what may be physically possible, while at the same time not be applicable to a given system. E.g., the axiomatic assumptions of special relativity do not preclude the existence of tachyons; yet, if tachyons exist, that does not imply that tardyons may be converted to tachyons within the axiomatic system. They must be explicitly assumed within some axiomatic (logical) formulation of physics.
It may be worth noting that if the GR axiomatic assumption that there exists some fixed spacetime manifold is abandoned, then there is a logical middle ground in which a physical reality of such singularities and a meaning to manifold extensibility can be had conjunctively. The ideas of pregeometry, quantum topology, and the changing local topologies of string theories are consistent with such an abandonment.
It seems more reasonable and precise then to view the Planck mass as that which provides a lower bound in mass on the validity of GR, and separates GR from the as yet to be unfolded theory of quantum gravity.
Thus, one must be extremely suspicious of conclusions derived from considerations that combine generalities of GR and QM, or even QFT, in the regimes where either QT holds or GR holds.
All this is not to say that Penrose is wrong; he is, in my understanding, probably right in saying that the Planck mass is involved in a genuine transition between quantum and classical theories. But it seems that only GR, as a non quantum theory, provides the classical argument that shows just how a lower bound in mass can be calculated for certain classical theories. For this argument to make sense, however, one must assume that the Schwarzschild radius has ontological significance.
From the viewpoint of QFT, the classical theories of GR and EMT exist only because they are bosonic in nature and are field theories of massless quanta: gravitons and photons respectively. The respective "charges", sources or sinks, of the field are "mass" and "electric charge".
Despite the common masslessness of the field quanta, one cannot successfully make a similar argument in the context of EMT that would show that EMT cannot be applied to the regime of particle physics that lies, empirically, below the Planck mass. The essential and apparent reason for this is that the equations of EMT are linear, while those of GR are not; thus, EMT automatically allows a kind of scaling that GR does not.
Are the Maxwell equations correct in the regime of very high energies? Might they also be nonlinear under certain circumstances? We usually allow that ε and μ are constants; not only can they be functions over space and time, but they can also be 2nd rank tensors that do not necessarily have any symmetries.
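As a sketch of the kinematics a tensor-valued ε implies (the numbers below are purely illustrative; no actual medium is being modeled), D need not even be parallel to E:

    import numpy as np

    # Constitutive relation with tensor permittivity: D_i = eps_ij E_j.
    # A generic eps need not be symmetric or isotropic.
    eps = np.array([[2.1, 0.3, 0.0],
                    [0.1, 1.9, 0.2],
                    [0.0, 0.4, 2.5]])   # illustrative values, no symmetry assumed

    E = np.array([1.0, 0.0, 0.0])
    D = eps @ E                         # D acquires components off the E axis
    print(D)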
Moreover, while we have at least a half dozen ideas of how EMT and GR might combine, we still have no idea what the reality is, nor how it might be correctly modeled. This is to say that Einstein's quest for a "unified field theory" is a problem that has never been solved; it remains open - and, it would appear, rather ignored.
Contrary to Hawking's fairly bizarre claims, theoretical physics is not even close to finishing up and closing down.
The neutrino is also massless (or not), and one might wonder why it does not also have a standard classical field theory. It is a Fermionic field constrained by the Canonical Anticommutation Relations (CAR) rather than the CCR. Is the mass too small?! I would *love* to be able to explain that one, since it seems we have transluminal neutrinos; OK, I can deal with that, but not here.
The standard geometric understanding of space,
or spacetime does not embrace the neutrino's spin (1/2).
Again, Cf. Classical Geometry & Physics Redux
Generally, CAR is not yet theoretically connected with CCR
except through the supersymmetric formalism (an ad hoc affair), or
FCCR
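For concreteness, the CAR for a single fermionic mode can be realized on a 2-dimensional space, something which is impossible for the CCR; a minimal sketch in Python (numpy assumed, names mine):

    import numpy as np

    # One fermionic mode: annihilation operator a on the 2-dim Fock space
    # spanned by |0> (empty) and |1> (occupied).
    a = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    adag = a.T.conj()

    # CAR: {a, a+} = I and a^2 = 0 (Pauli exclusion).
    assert np.allclose(a @ adag + adag @ a, np.eye(2))
    assert np.allclose(a @ a, np.zeros((2, 2)))

    # Contrast: CCR, [q, p] = i hbar I, has no finite-dimensional
    # representation, since tr([q, p]) = 0 while tr(i hbar I) != 0.
    print("CAR verified on a 2-dimensional space")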
As an extreme example, stars on the main sequence envelop their Schwarzschild radius; this very condition can be seen to be useful in separating classical objects from quantum objects. These stars do not immediately collapse, yet there is an upper limit on the mass of a star which correlates with collapse, an idea worked on by Richard Feynman and worked out by S. Chandrasekhar (the Chandrasekhar limit), beyond which collapse to a black hole is inevitable.
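A quick order-of-magnitude check for the Sun (CGS, standard values) shows how large the envelopment margin is:

    # Ordinary stars envelop their Schwarzschild radius by a huge margin.
    G = 6.674e-8        # cm^3 / (g s^2)
    c = 2.9979e10       # cm / s
    M_sun = 1.989e33    # g
    R_sun = 6.957e10    # cm

    r_s = 2.0 * G * M_sun / c**2   # ~ 2.95e5 cm, about 3 km
    print(f"r_s(Sun) = {r_s:.3e} cm, R_sun/r_s = {R_sun / r_s:.3e}")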
An interesting "paradox": matter (mass) is composed of Fermionic material (protons, neutrons, electrons). This is the substance of our world, yet CM, and even QM, seemingly has no room for this substance, which obeys the Pauli exclusion principle and prevents any Bosonic collapse. Yet the dominant force fields of the standard model are Bosonic in nature. We see, experientially and theoretically, a Bosonic world of the Euclidean geometry of force fields, and *not* the Fermionic world of matter. Large scale objects are apparently always Bosonic in "appearance". Considering the large numbers of fermionic contributions to a large scale object, one might say intuitively that the difference between these large objects is small, possibly as small as the spin of an electron.
The theory of spin and statistics, however, does not care about this smallness; either a thing is a Boson or it is a Fermion.
The "Cooper pairing" of Fermionic electrons into Bosonic enities nicely explains superconductivity as low temperatures as a Boson condensation. One might consider that the putative graviton should not be considered a fundamental massless Boson, but that it should be considered as a similar kind of pairing, or perhaps a quadrupling of massless Fermionic neutrinos. This is decidedly not a new idea, but the resurrection of an old idea that I read as a graduate student. Unfortunately, I can't give credit since I simply don't remember whose idea it was. On this, I'd rather be silent than be mistaken in attribution. Anyhow, the idea makes some formalistic sense when the symmetric gravimetric tensor field, gμν(x) is expressed in terms of nonholonomic tetrads.
Euclidean geometry is a geometry defined by a metric and its
inhomogeneous invariance group IO(n), and ultimately by the
group itself.
Cf. Notes on Felix Klein's Erlanger Program
Yet the simple "belt trick" is an indication that there is
more to our physical Euclidean geometry than is immediately
apparent: the very fact that the trick works shows that the
geometry admits spinor structure.
This may not, however, be the right viewpoint.
A local spinor space has a compact toroidal topology;
this, when appropriately quantized, may conversely allow a local
Euclidean structure.
[A local Lie algebra structure does not determine a global
Lie group topology.]
Remember that knots only exist in 3-dimensional spaces, and that they become more interesting when the space has holes!
The belt trick is quite macroscopic and real, though to noninitiates it does look like a trick: something unreal. It exposes a fundamental, unintuitive, macroscopic nature of our assumed Euclidean space that is not immediate in most people's experience. It is an underlying spinor property that does not appear in any axiomatic construction of E3. This is to say that our axiomatic constructions of existing physical geometry are, at least from a physical viewpoint, incomplete! A conundrum and a mystery partially "explained" by drawing into Euclidean geometry the factorizability of its bilinear forms, called inner products, into monomial sums, as Dirac did in teasing out his equation for the electron (and positron) from the relativistic quadratic form that is the Klein-Gordon equation. This turns out to be a uniform formal method for constructing the spin representations of orthogonal and pseudoorthogonal Lie groups, as remarked on further below.
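Dirac's factorization can be made concrete: the γ matrices satisfy {γ^μ, γ^ν} = 2 η^μν I, which is exactly the statement that the monomial sum γ^μ p_μ squares to the quadratic form p_μ p^μ. A sketch in Python (Dirac basis; numpy assumed):

    import numpy as np

    # Dirac gamma matrices in the Dirac basis, built from 2x2 blocks.
    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    Z  = np.zeros((2, 2), dtype=complex)

    def block(a, b, c, d):
        return np.block([[a, b], [c, d]])

    gamma = [block(I2, Z, Z, -I2),      # gamma^0
             block(Z, sx, -sx, Z),      # gamma^1
             block(Z, sy, -sy, Z),      # gamma^2
             block(Z, sz, -sz, Z)]      # gamma^3

    eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

    # Verify the Clifford relations {gamma^mu, gamma^nu} = 2 eta^{mu nu} I.
    for mu in range(4):
        for nu in range(4):
            anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
            assert np.allclose(anti, 2.0 * eta[mu, nu] * np.eye(4))
    print("Clifford relations verified")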
There is a difference between a mathematically defined geometry and what is possible for extended objects that may live in it. This is the source of the above alleged incompleteness. The geometry does not specify, with completeness, the (topological?) nature of all extended objects which may live in it; the standard mathematics of geometry instead either ignores them or relegates them to singularities as a kind of geometric bookkeeping. These objects, when introduced, generally break the symmetry group that defines the geometry when the geometry and the objects are considered together - as they must be considered in GR. If one can say with sufficient insight and specificity how this can come about, the problem will vanish.
Spinors arise, in one sense, from the two-fold connectedness of the Lie group SO(n) as an analytic manifold for any n > 2. That is to say, there are two classes of closed paths in the SO(n) manifold, such that a path in one class cannot be continuously deformed into any path in the other class. A helpful picture is the surface of a toroid with its inequivalent loops, though strictly the SO(3) manifold is the real projective space RP^3, whose fundamental group is Z_2. This is a perfectly macroscopic property, which explains (with much mathematics) why the belt trick is no trick. A way to recognize the real topological properties of E3 *with* extended objects moving in it is to replace SO(3) as the invariance group with SU(2), the universal covering group of SO(3). SU(2) is simply connected as a manifold, meaning that any closed path can be deformed into any other closed path. Now the true macroscopic properties of an E3 can be exhibited, oddly enough, by the inhomogeneous group ISU(2), the semidirect product of the translation group of C2 by the unitary group SU(2), which is a "Euclidean" group of rotations and translations in the complex space C2 of 4 real dimensions.
Put simply, if you want to understand E3 (IR3) geometry, it appears to be wiser to look at IC2 geometry. [1] Strange as this may seem, mathematically, one is led to this inexorable conclusion, physically, by the demonstration of the belt trick.
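The two path classes can be exhibited concretely in SU(2): a rotation by 2π about an axis yields -I, while a rotation by 4π yields +I, both of which project to the identity rotation of SO(3). A minimal numerical sketch (numpy assumed; rotation about the z axis only):

    import numpy as np

    def su2_rotation(theta):
        """SU(2) element for a rotation by theta about the z axis:
        exp(-i theta sigma_z / 2), written directly in diagonal form."""
        return np.array([[np.exp(-1j * theta / 2), 0],
                         [0, np.exp( 1j * theta / 2)]])

    print(np.allclose(su2_rotation(2 * np.pi), -np.eye(2)))  # True: 2 pi -> -I
    print(np.allclose(su2_rotation(4 * np.pi),  np.eye(2)))  # True: 4 pi -> +I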
This reminds me of certain arcane things that happen in real analysis, where the convergence of certain series is inhibited when there appears to be no reason for the series not to converge; but when the complex extension of the function represented by the series is examined, a pole is found in the complex plane that inhibits convergence. From the viewpoint of real analysis, the inhibiting pole is invisible. Consider, e.g., the function f(x) = 1/(1 + x^2), in Maclaurin expansion about zero, which has no singularities on the real line, yet whose series converges only for |x| < 1, because of the poles at x = ±i. A certain Platonistic viewpoint is difficult to avoid: somehow real analysis has already intrinsically implied complex analysis without your having had to "invent" it yet (i.e., make the algebraic completion of the real field to the complex field). Similarly, and even more disturbingly, though physical theory clearly proceeds by a sequence of corrected mistakes, it also appears that we really do discover it rather than invent it. I refuse to be a Platonist, except possibly on Tuesdays.
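A quick numerical sketch of the invisible pole at work (plain Python): the partial sums of 1 - x^2 + x^4 - ... settle down for |x| < 1 and blow up for |x| > 1, even though f itself is perfectly smooth everywhere on the real line:

    # Partial sums of the Maclaurin series of 1/(1 + x^2).
    # The poles at x = +/- i limit convergence to |x| < 1,
    # invisibly from the viewpoint of the real line.
    def partial_sum(x, n_terms):
        return sum((-1)**n * x**(2 * n) for n in range(n_terms))

    for x in (0.5, 0.9, 1.5):
        exact = 1.0 / (1.0 + x * x)
        sums = [partial_sum(x, n) for n in (5, 10, 20)]
        print(x, exact, sums)   # converges for 0.5 and 0.9, diverges for 1.5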
Then - perhaps the reader has the security of being a confirmed Platonist.