The Harmonic Oscillator

   One of the favorite pets of theoretical and mathematical physicists
   is the Simple Harmonic Oscillator (SHO), and more generally, the
   Harmonic Oscillator (HO).  There are several reasons for this.  Here
   are a few:

	1) The solutions of the appropriate Differential Equations (DE)
	   are easily obtained "by inspection", and easily manipulated.
	   The solutions to the basic problems are functions that are
	   "as smooth as can be"; these functions are called harmonic
	   functions.

	2) SH motion is a good approximation for many physical
	   systems that involve small displacements from an equilibrium
	   condition.  Forces assumed analytic are linear for small
	   displacements from equilibria, giving rise to SH motion.

	3) While lengths are relatively easily measured, time
	   differences require complicated things called clocks.
	   Clocks are (or should be) important and seminal animals
	   in the menagerie of physics.  One might say either that
	   the best clocks are oscillators, or that the best oscillators
	   are clocks, and that these are SHOs.

	4) Cryptically said: Harmonicity is next to Analyticity.

	5) Any Hamiltonian system can, by a suitable contact
	   transformation of Hamilton-Jacobi theory, be transformed
	   into a collection of independent harmonic oscillators,
	   indicating that the oscillator is indeed a fundamental
	   nontrivial system of theoretical physical models.

	6) The basis of all Fourier theory is a decomposition of
	   functions into a sum or integral of harmonic functions.
	   The Fourier transform is an essential component of
	   all wave phenomena, and quantum theory specifically.
	   Waveforms, and functions generally, can be constructed
	   from the very functions that solve the problem of the SHO.

	7) A conceptual extension of oscillation is that of periodic
	   functions, and, with even more sophistication, of doubly
	   periodic functions (viz., the elliptic functions of complex
	   variable theory).
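
The Fourier decomposition mentioned in item 6 can be sketched numerically: a sampled waveform is decomposed into harmonic amplitudes and rebuilt from them. This is a generic illustration of the idea (the square wave and the sample count are my own arbitrary choices), using the discrete Fourier transform.

```python
import numpy as np

# Sample one period of a square wave -- an arbitrary example waveform.
N = 256
t = np.arange(N) / N
f = np.sign(np.sin(2 * np.pi * t))

# Decompose into harmonic amplitudes with the discrete Fourier transform.
coeffs = np.fft.rfft(f)

# Rebuild the waveform as a sum of harmonics: the inverse transform
# recovers the original samples to machine precision.
rebuilt = np.fft.irfft(coeffs, n=N)
err = np.max(np.abs(f - rebuilt))
```

The same decomposition-and-reconstruction works for any sampled waveform, which is the practical content of item 6.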

The SHO problem and its solution are both useful and mathematically elegant in the many contexts in which they appear. This essay attempts to spin the long thread of the SHO concept as it weaves through a good part of the body of theoretical physics.

The selection of exactly the ground covered is purely personal and simply reflects my interests. There is no consistent deference given to the reader's level of mathematical knowledge or sophistication; sometimes I defer, and sometimes I do not, though when it comes to my head that something should either be explained or be given some reference, I try to do just that. The intent in writing this small (compared with what is possible) monograph was as much to amuse, teach and remind *myself* as it was to be otherwise didactic. The asciified symbolic notation should be fairly obvious where I do not bother to explain it.

The following list of links, internal to the current file, constitutes a Table of Contents.

  1. Introduction: Analyticity & Linearity
  2. Hooke's Law & Analyticity
  3. The Problem of "Finiteness" & Theological Infinities
  4. Models v. Reality
  5. The Theoretician's Disease of Confusing Models with Reality
  6. Solution Approach from Newton's 2nd Law of Mechanics
  7. Complex Numbers and Complex Valued Functions
  8. Cartesian Representation of the Complex Plane, and Complex Conjugation
  9. De Moivre's Theorem, and Complex Valued Functions of a Real Variable
  10. A Complex Phase Space
  11. Simple Harmonic Oscillator With Response Delay (with preliminary suggestions to model the cardiopulmonary system)
  12. An Harmonic Oscillator With Damping
  13. Linear ODEs with Constant Coefficients
  14. The Method of Laplace Transforms
  15. The Method of Fourier Transforms
  16. Harmonic Oscillator With Damping Solution
  17. Driving Forces and the Inhomogeneous ODE
  18. Driven SHO with damping via Laplace transform
  19. Resonance, Bandwidth and Selectivity
  20. RLC Electrical Circuit Analog
  21. Damped and Driven SHO with response delay, a functional ODE
  22. Oscillator Coupling & Normal Modes
  23. Oscillator Coupling & Resonance Damping
  24. Many Coupled Oscillators In One Dimension
  25. The General Pendulum Oscillator
  26. The Quartic Oscillator
  27. SHO in n dimensions
  28. Action Principles
  29. Lagrangian Formalism
  30. Hamiltonian Formalism
  31. Hamilton-Jacobi Theory
  32. Normal modes
  33. Oscillators in the Mechanics of Elastic solids
  34. Quantum SHO
  35. Quantum SHO in n dimensions
  36. Symmetries
  37. Symmetries of Equations v. Symmetries of Their Solutions
  38. Quantizing The Damped SHO
  39. Massive Special Relativistic Oscillator
  40. Quantized Special Relativistic Oscillator
  41. The Damped Relativistic Oscillator
  42. Lightlike SHO
  43. Tardyon SHO
  44. SHO in General Relativity
  45. The cosmological constant in GR
  46. Classical Statistical Mechanics of Oscillators
  47. Quantum Bosonic Statistical Mechanics of Oscillators
  48. Quantum Fermionic Statistical Mechanics of Oscillators
  49. The FCCR(n) Oscillators
  50. The FCCR Oscillator in Quantized 3-Space of Positive Definite Curvature
  51. The Euclidean FCCR Oscillator in m Dimensions
  52. The Relativistic FCCR Oscillator

Analyticity & Linearity

In advanced quantum theory there is a concept, the S-matrix, conceived by John Archibald Wheeler, that is almost an analog of the behaviorist approach to human behavior, except that it is considerably more realistic: the S-matrix deals with simple, definable things like particles, rather than with impossibly complicated things like human beings, or with economies whose "particles" are those very impossible human beings, to which nonexistent variables that cannot be quantified are foolishly applied.

The S-matrix idea avoids having to know the details of particle interactions in order to predict the end products of collisions. The S-matrix is the particle physicist's "black box". The problem is that one has to assume *something*, or rather enough somethings, about the nature of the S-matrix before there is enough structure to be able to predict anything.

One of the strange advances of 20th century physics was the understanding that there are limits to what one can know, physically. Mathematically, Gödel, Church and others provided the bad news regarding any interesting mathematical system. Mathematical systems are, of course, the substance of physical theory, and so that is not exactly a trivial result in terms of physics.

One of the standard technical assumptions made about the S-matrix, meant to tighten things up a bit, is "analyticity". Since the S-matrix is not the present subject matter here, I will not be at all detailed. Real analyticity is basically an assumption that makes functions sufficiently smooth that they are developable, or well approximated, by a Taylor or similar series expansion.

Complex analyticity, a stronger condition than real analyticity, requires that a function f(z) that would otherwise be simply analytic in two variables (x, y) further be a function of the complex variable z=x+iy, meaning that the limit defining the derivative df/dz at any given point in the complex plane must be independent of the approach to that point. Complex analyticity is so strong an assumption that the existence of the first derivative df(z)/dz of f(z)

	                  f(z + h) - f(z)
	df(z)/dz  :=  lim ---------------
	              h→0        h

implies the existence of all n-th order derivatives, because h is complex, and the limit must exist for all paths by which h approaches zero.
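
This path independence is easy to probe numerically. The sketch below (with f(z) = z² and f(z) = conj(z) as my own example functions) compares difference quotients along several directions of approach: the analytic function gives the same limit along every path, the nonanalytic one does not.

```python
def diff_quotient(f, z, h):
    # Difference quotient (f(z + h) - f(z)) / h for a complex step h.
    return (f(z + h) - f(z)) / h

z0 = 1 + 2j
eps = 1e-6

# Let h -> 0 along four different directions in the complex plane.
directions = [1, -1, 1j, (1 + 1j) * (2 ** -0.5)]

# f(z) = z**2 is complex analytic: the quotient tends to 2*z0
# regardless of the direction of approach.
analytic = [diff_quotient(lambda z: z * z, z0, eps * d) for d in directions]
spread_analytic = max(abs(a - b) for a in analytic for b in analytic)

# f(z) = conj(z) is smooth as a map of (x, y), but the quotient
# equals conj(h)/h, which depends on the direction of h, so the
# complex derivative does not exist.
nonanalytic = [diff_quotient(lambda z: z.conjugate(), z0, eps * d)
               for d in directions]
spread_nonanalytic = max(abs(a - b) for a in nonanalytic for b in nonanalytic)
```

For z² the quotients cluster tightly around 2·z0; for conj(z) they differ by order one, no matter how small eps is made.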

Mathematical physicists often use analyticity, either real analytic or complex analytic as a key ingredient of advanced theory, yet the idea appears early on in even the simplest theory.

The notion of analyticity that I want to emphasize here is the elemental one of developability. Suppose that between any two physical parameters there exists some functional relationship that is assumed to be sufficiently smooth, i.e., enough derivatives exist.

			y  =  F(x)

Without any loss in generality, suppose that it has been arranged that F(0) = a, some constant. Suppose that F(x) is sufficiently smooth at x=0, that a tangent (i.e, a first derivative) to the graph defined by F(x) exists and is uniquely defined. Then, within some neighborhood of x=0, the values of F(x) can be approximated linearly by

			y  =  a + bx

where as it turns out b is the derivative dF/dx evaluated at x=0. This approximation to F(x) will only be good within some neighborhood of x=0; how large that neighborhood is depends on exactly what F(x) really is. The point, of course, is that *some* neighborhood exists, and it is upon this that physicists rely in trying their best to get away with only having to work with linear equations.

If the neighborhood is intolerably small (not quite small enough), the next best approximation might be

			y  =  a + bx + cx²

where c is half the value of the second derivative d²F/dx² evaluated at x=0, and so on for better approximations in the manner of Taylor's series. Maybe you can still solve the appropriate equations. Often obtaining such solutions is more of a black art than a science. For many classes of problems there are no systematic ways of obtaining solutions, but no matter how you may have gotten one, a solution is a solution, when it can be demonstrated to be one.
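
The successive approximations y = a + bx and y = a + bx + cx² can be checked numerically. F(x) = exp(x) below is an arbitrary smooth example of my own choosing, for which a = 1, b = 1, and c = 1/2; the errors shrink like x² and x³ respectively, as Taylor's theorem promises.

```python
import math

# F(x) = exp(x): a = F(0) = 1, b = F'(0) = 1, c = F''(0)/2 = 1/2.
F = math.exp
a, b, c = 1.0, 1.0, 0.5

def linear(x):
    return a + b * x

def quadratic(x):
    return a + b * x + c * x * x

# Near x = 0 the linear error behaves like x^2/2 and the quadratic
# error like x^3/6: doubling x multiplies them by ~4 and ~8.
x = 0.1
lin_err = abs(F(x) - linear(x))
quad_err = abs(F(x) - quadratic(x))

lin_ratio = abs(F(2 * x) - linear(2 * x)) / lin_err
quad_ratio = abs(F(2 * x) - quadratic(2 * x)) / quad_err
```

The observed ratios near 4 and 8 are the numerical signature of the neighborhood of validity discussed above: the better the approximation, the faster its error vanishes as x → 0.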

As a general rule, physicists try to create physical theory based on linear equations, when, in fact, most real systems are nonlinear. Two important exceptions to this linearity are General Relativity (GR) and Quantum Chromodynamics (QCD). These exceptions occur because anything less is not acceptable in that it will not be able to express even the most limited or confined physics that needs to be expressed. The nonlinearities are essential to the fundamental theory.

The practical matter of concern is simply that most linear equations are easily solved in closed form, and the game is to get as much information out about the behavior of a model without having to do an inordinate amount of work. Start with a simplified special case that is "easy"; then complicate it a bit.

An interesting question of form is whether one can have a concept of analyticity in the context of discreta rather than continua. This becomes a more interesting question considering that quantum theory truly, through its somewhat artificial probabilistic component, relates discreta to continua. In a discrete context, the concept that underlies analyticity is that of "nearest neighbor", with its obvious topological flavor. The notion of a function being "continuous" is topologically defined directly through the concept of homeomorphism, but the notion of a function being "differentiable" requires additional structure. When models of physics are required for the smallest elements of reality, finitistic algebra seems inevitable. But in such a situation, there is still a need for a concept of nearness, and hence topology, which might be supposed to arise from the measurements that theoretically need at least the rational field. On the other hand, perhaps a finite field of large order is not only possible but preferable. In that case, is there an appropriate extension of the concept of analyticity?

Hooke's Law & Analyticity

Let us do some destructive analysis of a real spring. First measure the length of the spring - um - as it lies on a table, not hanging in a gravitational field. Call the measured length l_0. You can also approximate the length of the coiled wire that is the spring by measuring the diameter of the coil and the number of turns per unit length to get the total number of turns, or possibly count all the turns and be done with it. Call the diameter D and the number of turns N; then the length L_0 of the wire is approximated by

		L_0  = π N D

Fix one end of a spring and attach a standard tension gauge to the other end. Now, begin displacing the movable end of the spring in small increments, each time recording pairs (l, f) of displacement length l and the tension (force) f required to maintain the displacement. Do this several times, never letting the displacement exceed about 25% of l_0.

With any luck, my guess at the 25% was good on average, and if you again measure the spring length l_0, it will be exactly what it was before.

Now, if for all the recorded (l, f) pairs the ratios k := f/l are computed, you should discover that they are all pretty much the same number. This result is Hooke's Law, but for some strange reason, k is often called Young's modulus - go figure.

You could, for the collection of n values {k_j}, compute the standard "average",

	k  :=  (1/n) Sum k_j

This is the classic "most efficient estimator" for such parameters that are distributed with error (error which happens to be asymptotically Gaussian). An alternative that is much less work, and surprisingly almost as good, is to find the maximum value and the minimum value, and average those two numbers. In either case, call the average <k>.

An industrious person would also want a measure of how good the average <k> really is, and to do that one needs an efficient estimator for the standard deviation of the distributed errors, the most efficient estimator being

	s(k)  =  sqrt( Sum (<k> - k_j)² / (n-1) )

   where n is the number of data points.  Your value of k is then

	k  =  <k> (+|-) s(k)
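
The whole estimation procedure above can be sketched on invented data; the "true" k, the displacements, and the size of the measurement error below are all fabricated purely for illustration.

```python
import math
import random

random.seed(0)

# Fabricated data: an assumed "true" spring constant and displacements,
# with small Gaussian measurement error on the tension readings.
k_true = 2.5
displacements = [1.0, 2.0, 3.0, 4.0, 5.0]
pairs = [(l, k_true * l + random.gauss(0.0, 0.02)) for l in displacements]

# The ratios f/l should all be pretty much the same number.
ks = [f / l for l, f in pairs]
n = len(ks)

k_mean = sum(ks) / n                      # the standard average <k>
k_midrange = (max(ks) + min(ks)) / 2      # the cheap min-max alternative

# Estimator of the standard deviation of the k_j about <k>.
s_k = math.sqrt(sum((k_mean - kj) ** 2 for kj in ks) / (n - 1))
```

Both the standard average and the min-max average land close to the assumed true k, and s_k gives the quoted uncertainty <k> ± s(k).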

We will now make that value of k useless. In displacing the spring further than this comfortable region of linearity, there are several critical points that come in succession:

	1) The function F(x) ceases to be linear.

	2) The spring will no longer return to its original length,
	   but is now longer.  The value of k will be higher.

	3) As the displacement approaches L_0 (so the spring loses
	   its coiling), F(x) will increase sharply as you begin
	   pulling directly against the tensile strength of the wire.

	4) Finally, at an L_1 > L_0, the wire will snap, thus
	   destroying the physical system that used to be called
	   the "spring".

In truth, this is completely typical of all real physical systems; these are the regimes of physical behavior where most physicists prefer not to go, despite the essential features that are common to all physical systems.

One of the great lessons from this little orgy of destruction is that all physical systems are finite, and ultimately fragile. If you pump too much energy into any system, it will suffer some irrevocable form of energetic disassembly, ceasing to be what it once was, and turning into a different system. This is a fundamental fact of all existence that is mostly not treated fundamentally. The apparent reason for this is the convenience, in another way, of long range forces, and a certain slavishness to their reality as physical models. Long range forces, in reality, tend to get interfered with by screening effects that replace the 1/r potential with the shorter range potential exp( -kr )/r. I am making a type of renormalization argument here, derived from quantum field theory.

A kind of exception to that last statement involves something a little tamer that appears in thermodynamics called a phase transition, although there, the consideration usually involves taking energy out of a system, and also involves transitions that can be reversed.

Ice to water to steam, superconductivity and spontaneous ferromagnetism are standard examples. Most of the transformations we see are irreversible, per a handwaving application of the second law of thermodynamics.

If we want to describe a spring oscillator correctly, mathematically, it must be, at least classically, by an integrodifferential equation (k = k(x)), with an appropriate singularity when the applied driving force exceeds the spring's force of attraction.

One should expect a discontinuity/singularity in k(x) at the snapping point, and a discontinuity of the first derivative of k(x) at that point.

The Problem of "Finiteness" & Theological Infinities

The unbounded ability to pump energy into any real physical system is nonsense, while all mathematical models of physical systems allow this as a matter of course. Few students figure out this essential fraud in the teaching of science, and of physics in particular. There are limits to unfettered abstractions, as the nonvanishing values of both c and h-bar show. Yet the temptation to physically inadmissible idealization persists.

The philosophical antagonism between science and religion is often stressed, but one should also be mindful of the religious origins of many of the deep underlying assumptions of science, coming as they do from a culture of religious thought. Pythagoras understood mathematics, and particularly the whole numbers, as having mystical significance. Their connection with the tones of musical instruments gave music, for him, a mystical significance; from Pythagoras we get the phrase "music of the spheres", meaning the spheres of the special heavenly wanderers we call planets. The writings of the ancient Egyptians, considered to be of mystical origin, and the understanding of which had vanished by the time of Herodotus, were even then called "hieroglyphs" (sacred symbols). It is probably because of the significance that culture attributed to them that such effort was expended in deciphering them.

At this point, since I am about to point to history, I should draw a rather emphatic line between religion and spirituality, noting that they are often just as mutually antagonistic in practice as are religion and science - but that is another story of politics and beyond.

Men invented gods and had creation myths over 8000 years ago, and in all of them there was a "god, the creator" acting, as the Greeks phrased it, as the "prime mover". To some this may seem like dismissible mythology, but in that simple, relatively primitive idea is a powerful cultural suggestion that perhaps, somehow, if the universe has a design, we might actually be able to understand it. In that sense, science is the successor of religion, its quests being of an even more highly refined spirituality than the always politically motivated priests and witch doctors are capable of understanding. This, of course, is exactly where scientists get in trouble with the ruling politicized witch doctors. As it is turning out, the tales scientists tell of nature, its structure and its origins are a far more mysterious, complicated and awesome story than even the most sophisticated religionist witch doctors of the 20th century could ever have dreamed.

But, there is more: the theological infinity. Infinities are in principle unobservable and unmeasurable; infinitesimals are unobservable and unmeasurable. Physically speaking, a continuum is simply ridiculous; yet, one way or another, all physical theory has been based on Newton-Leibniz differential calculus ever since Newton's Principia. The notion of "an infinitude" is a physically transcendental and theological fabrication, and it is a standard denizen of the world of science, most especially because science relies heavily on mathematics. Although this concept of infinity and its refinements have given us a magnificently large body of aesthetically attractive mathematics which serves as an arena for more physical theory, it is past time to be wary of its putative relevance in any fundamental theory of the Planck regime. Philosopher-scientists (Naturwissenschaftler) of the late 19th century, Peirce and Clifford among them, understood this, and it seems to have been forgotten, overshadowed by a glittering of continua.

Physically, it makes no sense whatsoever even to consider a "countable infinity", and even less to consider a continuum. The rationals might be an idealization, but the reals are just physically absurd. We use them, and persist in using them, mostly because mathematicians have spent about 300 years developing the applications of this nifty tool. The reals have proved useful, making what might otherwise be impossibly laborious calculations relatively simple. An integration, if it can be performed, is more likely to be simple, efficient and elegant, where a possibly more correct summation might be laborious and close to impossible. Machines, however, are very good at this sort of stuff.

If we do not express physical theory in the customary form of differential equations, how should it be done?

The answer is simple and unavoidable - abstract algebra. If a truly fundamental physics exists and is understandable, it must be expressible in terms of abstract algebra. The suggestions that we have so far indicate Lie algebras - the algebras rather than their associated Lie groups. Why the algebras should be fundamental, and not the groups, which are analytic manifolds, should be intuitively obvious. Why Lie algebras in particular? I do not know. One might try a metaphysical law that natural law tends to prefer conditions that yield maximal variety; see, e.g., the condition of 4-dimensional space having the maximal number of regular polytopes.

Closely related are the Lie superalgebras that may acquire fundamental physical significance in uniting the apparent Fermion-Boson dichotomy of elementary particles. Should Fermions and Bosons be united in a single supersymmetry? Possibly. In any case, some very interesting mathematics has been developed by trying the idea.

The basic "symmetry of indistinguishability within distinction" is that of the permutations of distinguishable symbols. Fermionic symmetry arises from the completely antisymmetric combinations under permutation, while Bosonic symmetry arises from the completely symmetric combinations. Any permutation is either even or odd. Algebraically, consider Jordan and Clifford/Grassmann algebras, as well as the quantum groups (which are not groups - an unfortunate terminology) and the superalgebras that seek to combine them.

Having expressed my opinions on the absurdity of physical continua, and the fundamental importance of algebra, note that most of this small essay will use the language of differential and integral calculus, hence continua.

Models v. Reality

The Theoretician's Disease of Confusing Models with Reality.

Feynman said repeatedly that to so many questions the best answer he had then was "I don't know", and that while he was very curious, not knowing was ok, that he was comfortable with that. Nevertheless, you could either read or see the gleam of curiosity in his eyes - the very spirituality that religionists have condemned, often to death, for millennia.

Unfortunately, the body of physics and its mundane teaching speaks to the opposite of that; i.e., Feynman's perspective as a genuine natural philosopher is actually rare among supposed physicists. One reason for this is the body of real-life conditions under which physics is taught in US universities. Much science has become not only politicized but "religionized" according to its own orthodoxy within its own now diluted ranks.

An essential difficulty with all physical theories, one that should immediately suggest that something is wrong, is that they insist, formally, on their own universality: no theory, from within itself, ever seems to understand its own limits of applicability.

From Newton's Second Law

The background assumptions

The laws

The most general form of Newton's 2nd Law (holding also in special relativity) is

	dp/dt  =  F

For the simple harmonic oscillator, with all the caveats above understood, if one assumes the mass to be a constant, and that the regime of motions is restricted to that in which Hooke's law is valid, the equation of motion becomes

	(d/dt)² q(t) + (k/m) q(t)  =  0

This is a linear, second order, homogeneous Ordinary Differential Equation (ODE), for which standard mathematical theory says there exist two linearly independent solutions. The body of black magic wherein the solutions to nonlinear ODEs are found consists either of the obvious standard integrals or of an accumulation of recipes and transformations to standard forms, applied to solutions that have been guessed and then verified. The general linear case is discussed later, but in this case the leading question is simply: what kinds of functions reproduce themselves, up to a negative constant, on taking a second derivative? The most elementary answers available in differential calculus are the periodic sine and cosine functions. The general solution is then

	q(t)  =  A cos( w_0 t ) + B sin( w_0 t )


   where

	w_0  :=  sqrt( k/m ),    (w_0)²  =  k/m,

   which is the natural frequency of the oscillatory motion,
   if it is displaced from equilibrium and released.  For *simple*
   harmonic motion, and only simple, is the frequency independent
   of initial conditions for any physical system.  That is not a
   trivial statement, as has been made plain above.
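
That the general solution really satisfies the equation of motion can be checked by finite differences; the mass, spring constant, and coefficients below are arbitrary illustrative values, not taken from the text.

```python
import math

# Arbitrary illustrative values (not from the text).
m, k = 2.0, 8.0
w0 = math.sqrt(k / m)            # natural frequency, (w_0)^2 = k/m
A, B = 1.0, 0.5                  # fixed by the initial conditions

def q(t):
    # The general solution q(t) = A cos(w_0 t) + B sin(w_0 t).
    return A * math.cos(w0 * t) + B * math.sin(w0 * t)

def qdd(t, h=1e-4):
    # Central finite difference for the second derivative (d/dt)^2 q.
    return (q(t + h) - 2.0 * q(t) + q(t - h)) / (h * h)

# The equation of motion (d/dt)^2 q + (k/m) q = 0 holds at any t.
residual = max(abs(qdd(t) + (k / m) * q(t)) for t in (0.0, 0.3, 1.7, 4.2))
```

The residual is zero to within finite-difference error at every sampled time, for any choice of A and B, which is what linear independence of the two solutions buys.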

   The classic example is the pendulum, which is only approximately
   harmonic for small displacements, because sin( x ) approximates
   x for small x.  An initial displacement that is not "small" in
   that sense will not initiate an harmonic motion, though the
   motion may still be periodic.

   There is a substantial difference between a "physical system" and
   a "physical system subject to arbitrary kinematic constraints";
   one is "real", the other is a "model".

   We say this equation models a spring with an attached mass, but there
   are many aspects of any real such system that are ignored by the model.
   The spring really has a mass distribution (with each little
   infinitesimal mass dm undergoing a different oscillatory motion). The
   wire of the coil has a nonzero thickness, and finite length, so
   the spring cannot be compressed to zero or stretched to infinity,
   which is to say that there are intrinsic bounds on the amplitude.
   If it is not in a vacuum, there is an air resistance for the major
   mass, and as well for each dm that are all different.  There are
   also internal frictional forces within the spring that will cause
   it to heat and radiate energy.  Even a bulk spring constant really
   depends on the temperature of the spring. Lower the temperature
   enough and it will freeze, becoming so brittle that it will break
   rather than oscillate; raise the temperature enough and the spring
   will melt or even vaporize.  At lower than these very high
   temperatures, the spring can be weakened.  The length of the spring
   also depends on its temperature.  As the spring compresses and
   stretches, mechanical work is being transformed into heat energy
   which raises the spring's temperature.  Prevent the spring from
   dissipating this heat and its temperature will rise indefinitely
   to a point of catastrophe.  The more violently the spring
   oscillates, the greater the heat production.  When the heat input
   and dissipation are balanced, a stable temperature is achieved.
   Assuming the spring is indeed metallic, there is always the
   matter of metal fatigue - a product of the spring's operational
   history.
   If we were simply to take Young's modulus into account, the
   mathematical model would not be a differential equation, but
   an integrodifferential equation.

   Mathematical models of physics and physical systems are always
   games of "let's pretend".

Reparametrizing the constants of the solution


	A  :=  C cos( c )
	B  :=  C sin( c )


	q(t)  =  C cos( c ) cos( w_0 t ) + C sin( c ) sin( w_0 t )
	      =  C cos( w_0 t - c )

from the angle addition formula for the cosine. This makes the physical significance of the constants of integration easier to understand: C is the amplitude and c is the phase angle. The total energy of the oscillator is the sum of the kinetic and potential energies.
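
The reparametrization can be verified numerically; C, c and w_0 below are arbitrary sample values of my own.

```python
import math

# Arbitrary sample values for amplitude C, phase c, and frequency w_0.
C, c = 2.0, 0.7
w0 = 3.0

A = C * math.cos(c)
B = C * math.sin(c)

def q_ab(t):
    # The (A, B) form of the solution.
    return A * math.cos(w0 * t) + B * math.sin(w0 * t)

def q_phase(t):
    # The amplitude-phase form, C cos(w_0 t - c).
    return C * math.cos(w0 * t - c)

# The two forms agree at every t, by the angle addition formula.
max_diff = max(abs(q_ab(t) - q_phase(t)) for t in (0.0, 0.5, 1.0, 2.0, 3.7))

# Inverting the reparametrization recovers (C, c) from (A, B).
C_back = math.hypot(A, B)
c_back = math.atan2(B, A)
```

Going the other way, C = sqrt(A² + B²) and c = atan2(B, A) recover the amplitude and phase from the original constants of integration.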

	E  :=  K + V

We get this as a derived constant (conserved in time) from Newton's 2nd Law generally

	dp/dt - F  =  0
	dp - F dt  =  0
	m dv - F dt  =  0
	m v dv - F v dt  =  0
	m v dv - F (dq/dt) dt  =  0
	m v dv - F dq  =  0

   and integrating,

	(1/2) m v² - | F dq  =  E

where E is a slyly named constant of integration. If F is a force that depends only on q (not always true), the work integral can always be performed (at least in principle), so that there exists a function of q, V(q), called the potential energy, with

	V(q) - V(q_0)  :=  - | F(s) ds

A convenient fiducial q_0 can then be chosen so that V(q_0) = 0, and one can effectively write V(q) as an indefinite integral

	V(q)  :=  - | F(s) ds

For the oscillator, F(s) = -k s, so V(q) = (1/2) k q², and Newton's 2nd law implies

	(1/2) m v² + (1/2) k q²  =  E


   Substituting the known solution

	q(t)  =  C cos( w_0 t - c )

   and its derivative into the energy equation gives,

   (1/2) m C² (w_0)² sin²( w_0 t - c ) + (1/2) k C² cos²( w_0 t - c )  =  E

   with (w_0)² := k/m 

   (1/2) k C² sin²( w_0 t - c ) + (1/2) k C² cos²( w_0 t - c )  =  E

   (1/2) k C²   =  E

	C  =  sqrt( 2 E / k )

	   =  (1/w_0) sqrt( 2 E / m )

The important qualitative observations to make here are that the energy is proportional to the square of the amplitude of the oscillation and to the spring constant k; equivalently, at fixed energy, the squared amplitude is inversely proportional to k, which measures the strength of the restoring force of the possibly metaphorical spring.
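
The conservation of E along the solution, and the amplitude relation C = sqrt(2E/k), can be checked numerically; the constants below are again arbitrary illustrative values.

```python
import math

# Arbitrary illustrative constants (not from the text).
m, k = 1.5, 6.0
w0 = math.sqrt(k / m)
C, c = 0.8, 0.25                  # amplitude and phase

def q(t):
    return C * math.cos(w0 * t - c)

def v(t):
    return -C * w0 * math.sin(w0 * t - c)

def energy(t):
    # E = K + V = (1/2) m v^2 + (1/2) k q^2
    return 0.5 * m * v(t) ** 2 + 0.5 * k * q(t) ** 2

E = energy(0.0)

# E is constant along the motion ...
spread = max(abs(energy(t) - E) for t in (0.1, 0.9, 2.3, 5.0))

# ... and equals (1/2) k C^2, so the amplitude is C = sqrt(2E/k).
C_from_E = math.sqrt(2.0 * E / k)
```

The energy computed at any time equals (1/2) k C² to machine precision, and inverting that relation recovers the amplitude from E and k.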

Notice that I have focused on a restoring force proportional to displacement, in rather macroscopic systems, making the restoring force appear somewhat reasonable.

One can make the restoring force on the molecular level seem almost as reasonable, e.g., citing the ammonia (NH3) clock, where the hydrogen atoms form an equilateral triangle, and the nitrogen atom oscillates on a line through the centroid of the triangle, and perpendicular to its plane, from one side of the plane to the other. The force law will be different, of course, but now, as we know, the forces will be of electromagnetic origin (variations on q/r²). We also know that QM does not, per se, cover electromagnetic interactions, despite the Aharonov-Bohm effect. For the logical consistency of this effect, the EM field, which is relativistic, would have to be quantized - with electric charges, and the matter field would also have to be both quantized and relativized. As we see below, the transformations of the Schrödinger theory of the state vector are noncommutative, leaving the concept of a massive, spinless relativistic quantum field theory ambiguous.

I will conjecture that the only reason the A-B effect makes any sense is that the masslessness of the quantum of any QFT implies that the theory has a classical, unquantized form, as is the case for both the electromagnetic field and the gravitational field. Since a massless quantum necessarily propagates at c, such a field must also be relativistic.

Harmonic oscillators appear again in the context of the strong nuclear force, and this is the arena of quantum chromodynamics of quark-quark interactions, so carrying the simplistic concept of SHO to that level becomes a bit silly; both our theories and models do have serious limitations.

Complex Numbers and Complex Valued Functions

The complex field distinguishes itself as the smallest algebraically complete field, algebraically completing the reals. While it is important for algebraists to distinguish the abstract structure that is the complex field from its representation in the Cartesian plane, analysis seems to gain no advantage by that additional complication, and so we will take the two as equivalent. There is more than enough material in the area of complex variables to fill several years of mathematical study, so what is said here under this heading is mostly very general and wide ranging, and for that reason, one hopes, useful.

There are substantial structural differences among complex valued functions of one real variable, complex valued functions of two real variables, and complex valued functions of a complex variable.

Superficially, the first is fairly simple, essentially being a pair of functions of a real variable.

Complex valued functions of one complex variable are the meat of the analytic functions in a standard first course in complex analysis. To say that a function is a function of a complex variable is considerably stronger than to say that a pair of functions are functions of (x, y).

A last note of this section, for the sake of completeness, is on functions of several complex variables. A function of several complex variables that is analytic in all of them jointly, which in this case I prefer to call holomorphic, might seem a much stronger thing than a function that is merely analytic in each of its complex variables separately; though it is not immediately obvious, a theorem of Hartogs says the two conditions coincide. To see the details of why requires close examination of the several Cauchy-Riemann conditions. To understand a bit of the structure of the real difference between one and several variables, it should help to understand one important geometric consequence:

In the case of one variable, for any Jordan curve J in the complex plane C, there exists a function analytic in the interior of J, and which cannot be analytically continued beyond J, meaning that J is not just an accidental boundary. Another way of looking at this is that every interior of every Jordan curve is a domain of analyticity, for some function of z, analytic in the interior of J.

If the idea of a Jordan curve is extended to "Jordan Surfaces", one might suspect that a similar result holds for the case of several variables; that this is not so is the big difference. Analyticity in each complex variable separately actually implies the holomorphy of the function in all complex variables together.

A function that is holomorphic in the interior of some Jordan surface is generally holomorphically continuable to a larger region; if it is not, the interior of the particular Jordan surface is called a domain of holomorphy. The unique maximal region within which a given function is holomorphic is called its domain of holomorphy. More than one function can have the same domain of holomorphy.

It turns out that domains of holomorphy have subtle convexity properties not possessed by arbitrary Jordan surfaces. The problem of categorizing and specifying all of these domains does not have a closed or finished solution.

Cartesian Representation of the Complex Plane, and Complex Conjugation

Complex Numbers & Polar Representation.

   The once mysterious Euler relation relates the imaginary unit
   i := sqrt(-1) and the two most important transcendental numbers,
   e and π.

	exp( iπ )  =  -1

   It turns out to be a little less mysterious when understood as a
   special case of,

	exp( i z )  =  cos( z ) + i sin( z )

   easily provable by combining the appropriate absolutely convergent
   Taylor series.

   This gives a different way of expressing the two linearly independent
   solutions of the homogeneous simple harmonic oscillator equation:

	q(t)  =  A exp( +i w t ) + B exp( -i w t )

De Moivre's Theorem, and Complex Valued Functions of a Real Variable

	( exp( i z ) )ⁿ  =  exp( i n z )  =

	cos( n z ) + i sin( n z )
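These identities are easy to spot-check numerically; a minimal Python sketch (the particular z and n below are arbitrary choices):

```python
import cmath, math

# Euler relation: exp(i*pi) = -1
euler = cmath.exp(1j * math.pi)

# exp(iz) = cos(z) + i sin(z), for a generic complex z
z = 0.7 - 0.3j
lhs = cmath.exp(1j * z)
rhs = cmath.cos(z) + 1j * cmath.sin(z)

# De Moivre: (exp(iz))^n = exp(i n z)
n = 5
demoivre_lhs = cmath.exp(1j * z) ** n
demoivre_rhs = cmath.exp(1j * n * z)
```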

A Complex Phase Space

   The state of any system with one degree of freedom can be
   specified by a single complex number

     z  =  2^(-1/2) (q + i a p)

   where a is a real constant of physical dimensions [M^-1][T].
   The Hamiltonian H(q, p) can be expressed in terms of z and z^*
   by substituting according to

     q  =        2^(-1/2) (z + z^*)
     p  =  -i a^-1 2^-1/2 (z - z^*)

   The canonical equations can then be written

     £H'/£z^*  =  +i(d/dt) z
     £H'/£z   =   -i(d/dt) z^*

   and, since the mixed partial derivatives of H' are equal,

     £(dz/dt)/£z + £(dz^*/dt)/£z^*  =  0

   expressing the incompressibility of the flow in the phase plane,
   assuming that the complex partial derivatives commute with d/dt,
   and where

     a H(q, p)  :=  H'(z, z^*)

   with H' clearly a real valued function of z and z^*.

   For the special case of the harmonic oscillator,

     H'(z, z^*)  =  w z z^*

   This is a special case of the real valued function in a coordinate
   neighborhood of a complex manifold from which a Kaehlerian metric
   can be defined,

                £^2 F(z, z^*)
     g_kj  =  ----------------
               £z^k £z^*^j

   Note that in the complex plane, multiplication by i is a π/2 rotation,
   which has analogy to the Fourier transform in FCCR: exp( i a N(n) ) is a
   rotation in the Q(n)-P(n) plane, and for a=π/2 this becomes a Fourier
   transform.

Quantization of the oscillator in this form -> Fock-Bargmann Hilbert space of
analytic functions (?)

   For n degrees of freedom it seems not so simple, since a phase space endowed
   with a symplectic structure is not the same as one endowed with a complex or
   almost complex structure - or an almost symplectic structure.  The
   usual phase space of Hamiltonian formalism of Newtonian mechanics is
   endowed with a symplectic structure.

     z^k  =  (q^k + i a^kj p_j)

   (with Einstein summation convention over repeated index 'j', one upper
    and one lower)

   In the context of canonical formalism, consider again, simply a one
   dimensional oscillator with conjugate variables q and p, and Hamiltonian

     H  =  (1/(2m)) (p² + m² w² q²)


   Define

     z  =  w (m/2)^(1/2) (q + i (m w)^-1 p)

   and take

     z^*  =  w(m/2)^(1/2) (q - i (m w)^-1 p)

   considered as an independent coordinate.


     q  =  (w (2m)^(1/2))^-1 (z + z^*)
     p  =  -i (m/2)^(1/2) (z - z^*)

     z z^*  =  H(q, p)  =  H(z, z^*)

   The dynamic trajectory of the oscillator is a circle in the complex
   plane C (= phase space) whose radius equals E^(1/2), since
   z z^* = E [Check Corbin and Stehle].  The canonical equations of
   motion are:

     £H/£q  =  - dp/dt
     £H/£p  =  + dq/dt

   [Note: the radius squared has the physical dimensions of energy.]

     £/£z  =  (£q/£z)(£/£q) + (£p/£z)(£/£p)
           =  (w(2m)^(1/2))^-1 (£/£q) - i(m/2)^(1/2)(£/£p)

     £/£z^*  =  (£q/£z^*)(£/£q) + (£p/£z^*)(£/£p)
             =  (w(2m)^(1/2))^-1 (£/£q) + i (m/2)^(1/2) (£/£p)

   the canonical equations of motion can then be written

     £H/£z  =  - (w(2m)^(1/2))^-1 (dp/dt)  -  i (m/2)^(1/2) (dq/dt)

            =  - (d/dt)[ (w(2m)^(1/2))^-1 p  +  i (m/2)^(1/2) q ]

            =  - (i/w) (d/dt) z^*

   and, conjugately,

     £H/£z^*  =  + (i/w) (d/dt) z

   Since H = z z^*, so that £H/£z^* = z, these give

     dz(t)/dt  =  - i w z(t)

   and, solving either of the two conjugate differential equations,
   the solution of the dynamical equation is

     z(t)  =  z(0) exp( -i w t )

   Formal note:
   p and q appear as cartesian coordinates of the complex plane, while
   E^(1/2) and wt appear as polar coordinates of the plane, with wt as
   a recurrent angular variable.
     single oscillator in n dimensions = n one dimensional oscillators

symplectic phase space -> complex phase space
energy ->  Phi(z, z^*)
            the one form on a Kaehlerian manifold from which the Hermitean
            metric is derived.
Coupling ---- Curvature of phase space
Curvature ---- holonomy group through the structure constants of a Lie algebra

	z(t)  :=  q(t) + i p(t)
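As a numerical sketch of this complex phase space picture: with the exact SHO solution for q(t) and p(t), the combination z = w (m/2)^(1/2) (q + i (m w)^-1 p) traces a circle of radius E^(1/2), rotating rigidly at angular frequency w (with these definitions the rotation comes out as exp( -i w t ); the sense of rotation flips with the opposite sign convention for z). The parameter values below are arbitrary choices.

```python
import math, cmath

m, w = 2.0, 3.0            # mass and angular frequency (arbitrary)
q0, p0 = 1.0, 0.5          # initial conditions (arbitrary)

def z_of(q, p):
    # z = w (m/2)^(1/2) (q + i (m w)^-1 p), as in the text
    return w * math.sqrt(m / 2) * (q + 1j * p / (m * w))

def qp(t):
    # exact solution of the simple harmonic oscillator
    q = q0 * math.cos(w * t) + (p0 / (m * w)) * math.sin(w * t)
    p = p0 * math.cos(w * t) - m * w * q0 * math.sin(w * t)
    return q, p

E = p0**2 / (2 * m) + m * w**2 * q0**2 / 2   # conserved energy
ts = [0.1 * j for j in range(60)]
radii = [abs(z_of(*qp(t))) for t in ts]
# deviation from a pure rotation z(t) = z(0) exp(-i w t)
devs = [abs(z_of(*qp(t)) - z_of(q0, p0) * cmath.exp(-1j * w * t)) for t in ts]
```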

Simple Harmonic Oscillator With Response Delay

The ODE for a SHO models more than the ideal and rather fictional picture of a massive point particle attached to a massless spring, all existing in a frictionless environment and not disturbed too greatly. The equation can also be useful in describing much larger, spatially distributed systems that behave periodically.

Consider, for example, the cardiopulmonary system. In a quiescent human, there is pleasantly steady (mostly) periodic behavior not only of the diaphragmatic excitation for breathing, but also a similarly steady periodic heartbeat. We know that these periodicities are related; they are coupled for the purpose of maintaining homeostasis through servomechanisms, and the coupling is ineluctably delayed because of the spatial distribution of the coupled systems and because neural conduction is much slower than conduction in electrical wires, being generally limited by the action and design features of the calcium gates that are the foundation of neural conduction.

In this specific kind of modeling, which ignores the inordinate complexities of realistic hemodynamics (properly accounting for the elasticity of the arterial tree and its obvious "multifurcations"), we do account for the bulk servomechanistic effects of the brainstem's CO2 sensors, and the amplified bulk effects of sensors of O2 concentration and of mitochondrial O2 sensors.

If one looks simply at the standard QRST(U) cardiac form, the idea that it can be represented by a Fourier series, and even well represented by a finite series, or more generally by a more general sinusoidal sum, is immediately available. A modulation of the function representing the repeating QRST complex provides an envelope that can be related to the pulmonary rhythm. The two systems, in reality, must be both nonlinear and coupled. These are facts of life that always make models, and analysis of the models, difficult.

We know that the coupled oscillators have damping, and that they are driven servomechanistically by neuroexcitation determined by sensors of both CO2 and O2. This is too complicated a situation to deal with mathematically at first, so in essence we eliminate it here, much as is done didactically in countless other places.

But, while a physically quiescent human, e.g., in a number of stages of sleep, has steady frequencies for the cardiac and pulmonary systems, sometimes this goes awry.

A particularly noticeable form of disturbance is called the Cheyne-Stokes pattern, where the breathing frequency itself becomes a (modulated) periodic function of time, ranging from apnoeic to hyperventilating. This is not only noteworthy as a distinguishable pattern; it also happens to correlate well with linked central apnoea and congestive heart failure. It is a disturbance in the balanced delay mechanism in the pulmonary oscillator that causes this. The inclusion of a delay in the response of the oscillator is necessary to get a mathematical model capable of exhibiting such a phenomenon.

One might begin directly, with a seemingly most simple model by writing down the ODE for a simple oscillator with a time delay 'c',

		x"(t) + w² x(t - c)  =  0

A small or large amount of messing with this equation should convince you that this seemingly innocuous change causes more mathematical grief than you might have imagined. To ease into the problems more gently, and also to make an arcane point, consider the simplest of ODEs, with and without a delay. First consider the functional equation

			f(t - T)  =  f(t)

This is to hold for all t and some constant T, and only specifies a function that is periodic with period T. Simply considering functions that can be expanded in a Fourier series shows a vast number of solutions to the functional equation. The function need not be differentiable, or even continuous, though bounded variation might be a good condition to impose. Now, consider the simple first order ODE

			f'(t)  =  a f(t)

for a constant 'a'. The equation's solution is instantly available by inspection and elementary differential calculus to be

			f(t)  =  C exp( at )

Now, consider the cognate delayed ODE

			f'(t)  =  a f(t-c)

There are at least three ways of approaching this functional ODE, as well as a way of teasing out an approximate solution by Taylor expansion of the full, deceptively simple equation. However, no matter what method you employ, it will turn into, relatively speaking, a minor horror. The method with the least degree of unpleasantness seems to be the method of Laplace transforms.

Letting F(s) = L{ f(t) }, we know from the elementary properties of the transform that

		L{ f'(t) }  =  s F(s) - f(0)
		L{ f(t-c) }  =

		             =  INT exp( -st ) f(t-c) dt

   Defining u := t-c, so du = dt

		             =  INT exp( -s(u+c) ) f(u) du

		                          0     oo
		             =  exp(-sc) (INT + INT) exp( -su ) f(u) du
		                          -c    0

		             =  exp(-sc) [INT exp( -su ) f(u) du + F(s)]

               0                           -c
  Let S(s) := INT exp( -su ) f(u) du  =  - INT exp( -su ) f(u) du
	      -c                            0

            = INT exp( +su ) f(-u) du

   So, transforming the equation gives

		s F(s) - f(0)  =  a  exp(-sc) ( S(s) + F(s) )

		s F(s) - a exp(-sc) F(s)  =  a  exp(-sc) S(s) + f(0)

		         f(0) + a exp(-cs) S(s)
		F(s)  =  ----------------------
		           s - a exp( -cs )

   S(s) is already a strange kind of initial condition for an ODE since to
   get a value for it, we must actually specify all the values for f(t)
   in the range -c <= t <= 0; such initial conditions have more the
   character of an integral equation.  We might, e.g., suppose, for the
   sake of doing something reasonable, that f(t) = f(0) identically in
   this interval, and then

     S(s)   = INT exp( +su ) f(0) du  =  f(0) (1/s) (exp(+cs) - 1)

   The physical picture being that the system, whatever it may be,
   until it starts responding at t=0 from an influence applied starting
   at t=-c, has been doing nothing.
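Under this constant-history assumption S(s) is elementary, and the formula is a quick numeric spot-check; the values of s, c, and f(0) below are arbitrary:

```python
import math

def S_numeric(s, c, f0, n=50000):
    # midpoint rule for INT_{-c}^{0} exp(-su) f0 du
    h = c / n
    return sum(math.exp(-s * (-c + (j + 0.5) * h)) * f0 * h for j in range(n))

def S_formula(s, c, f0):
    # f(0) (1/s) (exp(+cs) - 1)
    return f0 * (math.exp(c * s) - 1) / s

s, c, f0 = 1.3, 0.7, 2.0   # arbitrary example values
```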

   Then, substituting, rearranging, normalizing and reducing, etc.,

		         f(0) + a exp( -cs ) f(0) (1/s) (exp( +cs ) - 1)
		F(s)  =  -------------------------------------------
		                     s - a exp( -cs )

		              1 + (a/s) (1 - exp( -cs ) )
		      =  f(0) ---------------------------
		                     s - a exp( -cs )

		              s + a (1 - exp( -cs ) )
		      =  f(0) -------------------------
		               s ( s - a exp( -cs ) )

		               1 + (a/s) (1 - exp( -cs ) )
		      =  f(0) ----------------------------
		               s ( 1 -  (a/s) exp( -cs ) )

	               1                       a (1 - exp( -cs ))
      =  f(0) [ ------------------------- + ------------------------]
	        s( 1 - (a/s) exp( -cs ) )   s²(1 - (a/s) exp( -cs ))

	         1      a (1 - exp( -cs ))               1
      =  f(0) [ --- + ------------------------] ----------------------
	         s            s²                (1 - (a/s) exp( -cs ))

	         1     a     a exp( -cs )            1
      =  f(0) [ --- + --- - --------------] ----------------------
	         s     s²         s²         (1 - (a/s) exp( -cs ))

The ODE is reduced to an algebraic equation whose exegesis is trivial for F(s). So, "in principle" all we have to do is invert the Laplace transform, and there we are - right? There are two ways to invert a Laplace transform: the easy way, and the hard way. The easy way is simply looking up the form in a built up table of known transforms and reading it in reverse. The hard way is actually performing the inverse transform via a Bromwich integral derived from a Cauchy integral over a suitable path in the complex s-plane, which may, and often does, express the answer as an infinite series anyhow.
Guess which will be required, here.

   Using  1/(1-z) = SUM z^k for |z| < 1

	            1     a     a exp( -cs )    oo
   F(s)  =  f(0) [ --- + --- - --------------]  SUM (a/s)^k exp( -kcs )
	            s     s²         s²         k=0

   There is then a sum of terms each of which can be inverted from a
   common tabular form, the shift theorem, for 0 <= n and 0 <= k:

	L^-1{ (1/s)^(n+1) exp( -kcs ) }  =  ((t - kc)^n / n!) θ(t - kc)

   where θ() is the unit step function, vanishing for negative
   argument; the solution is therefore piecewise polynomial, of a
   degree that climbs on each successive interval of length c.  In
   principle then, and conceptually, one has an analytic solution to
   the functional ODE

			f'(t)  =  a f(t-c)

   given some prototypical initial conditions, even if the solution is
   not in a finitely expressed closed form.
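The piecewise polynomial character can be cross-checked numerically. The sketch below integrates f'(t) = a f(t-c), with constant history f ≡ f(0) on [-c, 0], by naive Euler stepping with a history buffer, and compares it against the method-of-steps solution f(t)/f(0) = 1 + SUM_{k>=1} a^k (t-(k-1)c)_+^k / k!. Parameter values, step size, and tolerances are arbitrary choices.

```python
import math

a, c, f0 = 0.8, 0.5, 1.0   # arbitrary example values

def f_series(t, K=60):
    # method of steps: f(t)/f(0) = 1 + sum_{k>=1} a^k (t-(k-1)c)_+^k / k!
    total = 1.0
    for k in range(1, K):
        u = t - (k - 1) * c
        if u <= 0:
            break
        total += a**k * u**k / math.factorial(k)
    return f0 * total

def f_euler(t_end, h=1e-4):
    # naive Euler with a history buffer; f = f0 on [-c, 0]
    lag = round(c / h)
    hist = [f0] * (lag + 1)
    for _ in range(round(t_end / h)):
        hist.append(hist[-1] + h * a * hist[-1 - lag])
    return hist[-1]
```

On the first interval 0 <= t <= c the series reduces to f(0)(1 + at), exactly what direct integration with f(t-c) = f(0) gives.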

   Now, consider the second order time delayed oscillator equation

			x"(t) + w² x(t - c)  =  0

           L{ x(t) }  :=  X(s),

           L{ x"(t) }  =  s² X(s) - s x(0) - x'(0)
           L{ w² x(t-c) }  =  w² ( exp(-sc) S(s) + X(s) )

  defining S as before, and simplifying it with the same reasoning,

      S(s)  :=  INT exp( -su ) x(0) du

             =  x(0) (1/s) (exp(+cs) - 1)

   Then, transforming the equation gives

           s² X(s) - s x(0) - x'(0) + w² exp(-sc) ( S(s) + X(s) )  =  0

           (s² + w² exp(-sc)) X(s)  =  s x(0) + x'(0) - w² exp(-sc) S(s)

   and, since exp(-sc) S(s)  =  x(0) (1/s)(1 - exp(-cs)),

                    s x(0) + x'(0) - w² x(0) (1/s)(1 - exp(-cs))
           X(s)  =  ---------------------------------------------
                                  s² + w² exp(-cs)

   The delay now infects the denominator as well, so, unlike the
   undelayed oscillator, no finite combination of tabulated
   trigonometric transforms will invert this.  Dividing numerator and
   denominator by s²,

                    x(0)/s + x'(0)/s² - w² x(0) (1/s³)(1 - exp(-cs))
           X(s)  =  ------------------------------------------------
                              1 + (w²/s²) exp( -cs )

   Using  1/(1+u)  =  SUM (-u)^k  for |u| < 1, with u = (w²/s²) exp(-cs),

                    oo
           X(s)  =  SUM (-1)^k w^(2k) exp( -kcs ) [ x(0)/s^(2k+1)
                    k=0
                        + x'(0)/s^(2k+2)  -  w² x(0) (1 - exp(-cs))/s^(2k+3) ]

   and each term can be inverted with the same shift theorem used for
   the first order equation,

           L^-1{ exp( -kcs )/s^(n+1) }  =  ((t - kc)^n / n!) θ(t - kc)

   where θ() is the unit step function.  Collecting terms (the first
   and third pieces partially telescope), the solution is

                           oo                 (t - (k-1)c)^(2k)
      x(t)  =  x(0) [ 1 +  SUM (-1)^k w^(2k) ------------------- θ(t - (k-1)c) ]
                           k=1                      (2k)!

                       oo                 (t - kc)^(2k+1)
            + x'(0)   SUM (-1)^k w^(2k) ------------------ θ(t - kc)
                       k=0                   (2k+1)!

   On the interval 0 <= t <= c this reduces to
   x(0) (1 - w²t²/2) + x'(0) t, exactly what direct integration with
   x(t-c) = x(0) gives, and as c -> 0 the two sums collapse to
   x(0) cos( wt ) + (x'(0)/w) sin( wt ).

	(with x(0) and x'(0) being initial conditions that are the
	 constants of integration of the second order ODE)

   Unlike the undelayed oscillator, then, the solution is not a finite
   combination of trigonometric functions, but a piecewise polynomial
   series of the same character as the solution of the first order
   functional ODE.  [Use your intuition as a guide, but be prepared to
   learn that your intuition is not magical, and that it merely
   reflects the current state of your understanding: that it is always
   subject to humiliating improvement.]

   This delayed response will be elaborated on later in
   Driven SHO with damping via Laplace transform, and
   Damped and Driven SHO with response delay, a functional ODE
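The same brute-force check is available for the second order delayed oscillator: integrate x"(t) = -w² x(t-c), with constant history, by stepping with a history buffer, and confirm that for a small delay c the motion stays close to the undelayed cos( wt ) (for x(0)=1, x'(0)=0). A hedged sketch, with arbitrary parameter and tolerance choices:

```python
import math

def delayed_osc(w, c, t_end, x0=1.0, v0=0.0, h=1e-4):
    # semi-implicit Euler with a history buffer; x = x0 on [-c, 0]
    lag = round(c / h)
    hist = [x0] * (lag + 1)
    x, v = x0, v0
    for _ in range(round(t_end / h)):
        v -= h * w * w * hist[-1 - lag]   # x"(t) = -w² x(t - c)
        x += h * v
        hist.append(x)
    return x

w, t_end = 2.0, 3.0
x_small_delay = delayed_osc(w, c=1e-3, t_end=t_end)
x_undelayed = math.cos(w * t_end)   # x(0)=1, x'(0)=0, no delay
```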

An Harmonic Oscillator With Damping

[Cf. Wikipedia article Harmonic Oscillator for other interesting aspects.]
If we add to the simple oscillator equation a term, "b (dq/dt)", so that

	m (d/dt)² q + b dq/dt + k q  =  0

we still have a linear, second order ODE.

Before solving the equation, along with linear ODEs with constant coefficients generally, and examining the solutions, I want to look at the new term more carefully. Any resistance contribution will be some function of the velocity. Once again the subtle concept of analyticity enters.

The term represents a "bulk resistance" force -b(dq/dt), negative/retarding and proportional to the velocity, which would be a typical first order approximation to air resistance. Such a force must oppose the direction of motion, and so the next analytic term would be one proportional to (dq/dt)³.

It is possible for b not to be a constant, but we take it as such, just as in Hooke's law, as a first approximation. (Realistically, air resistance needs higher order terms, and these will account for a nonconstant b.)

Understand that this is a simple resistance term. If you think about air resistance, where is the aerodynamics of the oscillator geometry? It is not accounted for, nor is the compressibility of the air. Reality is again more complicated than the simply solvable models. It is always important to remember what you are ignoring.

If the oscillator with its geometry is in a viscous fluid, like water, the equation becomes even more complicated, and, naturally we still have not taken account of the massiveness of any possible spring.

Linear ODEs with Constant Coefficients

It turns out that any linear ODE of kth order with constant coefficients

	c_k (d/dt)^k q + c_(k-1) (d/dt)^(k-1) q + ... + c_0 q  =  0

has a formal set of k linearly independent solutions that one can see by simply substituting the form

	q(t)  ->  A exp( a t )

It is easy to see that with this substitution one obtains an equation of the form

	A P(a) exp( a t )  =  0

where P is the polynomial in the indeterminate a, and A is a nonvanishing constant. Then,

	c_k a^k + c_(k-1) a^(k-1) + ... + c_0  =  0

Since exp is never zero, the condition for a so that exp( at ) is a solution of the ODE is that

	P(a)  =  0.

If a_j are the k roots of P, assumed distinct, the general solution of the ODE is

	q(t)  =  Sum  C_j exp( a_j t )

where the C_j are the k constants of integration. All the usual things said of P and its roots apply: For arbitrary complex coefficients c_j in P, there are k roots, counted with multiplicity, all complex numbers. If the c_j are real, any non-real complex root will be accompanied by its complex conjugate. There are "formulas" expressing the roots for k < 5, but there is no general formula in radicals for quintics or higher, although there are techniques for some special forms, from Galois theory, that exploit symmetries of the roots, hence also of the equation itself. This same solution can also be obtained through Fourier or Laplace transform theory.
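The recipe can be sketched numerically for a second order example: compute the roots of the characteristic polynomial with the quadratic formula, and check that the resulting general solution annihilates the ODE. The coefficients and constants of integration below are arbitrary choices.

```python
import cmath

# m q'' + b q' + k q = 0, with arbitrary example coefficients
m, b, k = 1.0, 0.4, 4.0

# roots of the characteristic polynomial P(a) = m a² + b a + k,
# by the quadratic formula (cmath handles a negative discriminant)
disc = cmath.sqrt(b * b - 4 * m * k)
a1, a2 = (-b + disc) / (2 * m), (-b - disc) / (2 * m)

C1, C2 = 1.0 + 0.5j, 0.25 - 1.0j   # arbitrary constants of integration

def q(t, d=0):
    # d-th derivative of the general solution C1 exp(a1 t) + C2 exp(a2 t)
    return C1 * a1**d * cmath.exp(a1 * t) + C2 * a2**d * cmath.exp(a2 * t)

def residual(t):
    # the ODE, evaluated on the general solution
    return m * q(t, 2) + b * q(t, 1) + k * q(t, 0)
```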

We now begin the typical trek of physical theory after having reduced a physically complex problem to its simplest, most essential mathematical form and seeing how that behaves.

The following steps complicate the problem slowly, adding complications in sequence, easiest first, investigating the behavior of the model, and using the knowledge obtained in progressing to the next step.

Method of Laplace Transforms

In the grand mathematical tradition of false eponymy (the thing is never named after its inventor), we continue.

Euler invented it, and Laplace got mileage from it; there is, of course, a logic to that, despite its perversity. It is an integral transform suspiciously resembling the usual Fourier transform, and looking like a piece of one. It is especially useful in engineering situations in the solution of ODEs with constant coefficients - which is exactly what motivated Euler.

When the integral exists, we write

		L{ f(t) }  :=  F(s)  :=  INT exp( -st ) f(t) dt

Though f(t) can be piecewise continuous, or even piecewise integrable (in a suitable sense), it must not grow faster than exponentially for unbounded t, and must not have too virulent a singularity at t=0. Often the assumed and most convenient context of the Laplace transform is the Riemann-Stieltjes integral. For specifics and more particulars, see the Wikipedia page on the Laplace transform, and our own Table of Laplace Transforms.

Method of Fourier Transforms

[Coming soon to a theater near you.]
In the meantime, visit the excellent Wikipedia page on the Fourier transform.

Harmonic Oscillator With Damping Solution

So, to solve the general equation for a damped oscillator using the exponential substitution trick, we really need only solve the quadratic equation

	m a² + b a + k  =  0

   which, with fortuity, is easy using the standard "quadratic formula":

	      -b ± sqrt( b² - 4mk )
	a  =  ---------------------
	                2 m


	a  =  -b/(2m)  ±  sqrt( (b/(2m))² - k/m )

   There are the three structural cases regarding the discriminant:

	i)    b²  <  4mk
	ii)   b²  >  4mk
	iii)  b²  =  4mk

   In case i),  with b > 0, as spoken of above, and b < sqrt(4mk),
   there are two complex conjugate values of a, and the general
   exponentially damped harmonic solution can be written

	q(t)  =  A exp( -b/(2m) t ) exp( +i sqrt(k/m - (b/(2m))² ) t )
	       + B exp( -b/(2m) t ) exp( -i sqrt(k/m - (b/(2m))² ) t )

   which describes an exponentially damped harmonic motion with frequency
   sqrt( k/m - (b/(2m))² ).  Notice that not only does the damping factor
   b damp, but it also lowers the natural frequency from sqrt(k/m).

   In case ii),  with b > 0, as spoken of above, and b > sqrt(4mk),
   there are two real values of a, and the general solution

	q(t)  =  A exp( (-b/(2m) + sqrt( (b/(2m))² - k/m ) ) t )
	       + B exp( (-b/(2m) - sqrt( (b/(2m))² - k/m ) ) t )

   This aperiodic solution is often called overdamped.
   The second term is clearly an exponential decay caused by sufficiently
   great damping.  The first term is a little more interesting since it
   might vanish if

	(b/(2m))²  =  (b/(2m))² - k/m

   but this is not possible unless k = 0.  However, it is not too difficult
   to show, in the general solution, if A and B have opposite signs,
   and |B| > |A|, that the oscillator, after starting at t = 0, will pass
   once through the equilibrium point, in an overshoot, and return slowly
   to rest at the equilibrium point.

   In case iii),  with b > 0, as spoken of above, and b = sqrt(4mk),
   the solution is referred to as critically damped; but now the
   solution values for the quadratic equation are degenerate, and we
   have only one linearly independent solution,

	exp( -b/(2m) t )

   In this case, and this generalizes, the general solution can be written

	q(t)  =  (A + B t) exp( -b/(2m) t )

If this were a third order equation and there were three coalescing roots, the prepended polynomial would be (A + Bt + Ct²). One of the most readable and enlightening introductions to ODEs I have ever read or seen is [Tennenbaum 1963]. I am told it is now available in paperback. It is heavy on example and intuition, but not so heavy on mind numbing, if impressive, existence theory. Unfortunately, there are places it does not go, but it is a thorough introduction, not a definitive treatise on theory. Many mathematicians may therefore cast eyes heavenward at its mention, and perhaps prefer, if they know it, [Ince 1958] as an introduction to more rigorous mathematical theory.
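The critically damped form can be checked by direct substitution; a sketch with arbitrary m and k (b is then forced by the critical condition):

```python
import math

m, k = 2.0, 8.0                 # arbitrary example values
b = math.sqrt(4 * m * k)        # critical damping: b² = 4mk
r = -b / (2 * m)                # the degenerate root
A, B = 1.5, -0.7                # arbitrary constants of integration

def residual(t):
    # plug x(t) = (A + B t) exp(r t) into m x" + b x' + k x
    e = math.exp(r * t)
    x = (A + B * t) * e
    dx = (B + r * (A + B * t)) * e
    d2x = (2 * B * r + r * r * (A + B * t)) * e
    return m * d2x + b * dx + k * x
```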

Driving Forces and the Inhomogeneous ODE

If we now consider driving the damped oscillator by applying some time dependent force to it, we get an inhomogeneous ODE

	m (d/dt)² x + b dx/dt + k x  =  F(t)

and things start to become not only interesting, but also useful.

We still have a linear, second order ODE with constant coefficients; it should be clear that any solution of the homogeneous equation may be added to a particular solution and still give a solution, and that the most general solution will be of this form.

Before solving the inhomogeneous equation, notice that the homogeneous solutions given as exponentials are likely to come in handy since any arbitrary "physical" function F(t) in classical theory can be represented by a Fourier integral

	  F(t)  =  INT exp( i s t ) f(s) ds

An especially useful particular solution to obtain would then be for a Fourier kernel,

	m (d/dt)² x + b dx/dt + k x  =  F exp( i w t )

A little physical intuition goes a long way here as to what one might expect to happen physically, so as to make a good mathematical guess. If even a good guess does not work, more guesses are still possible. If a driving frequency w_1 is applied to the oscillator, its response would possibly be forced into the same frequency; but since there is also resistance to the driving force, both inertial and damping, one could expect a phase lag even when the oscillator remains entrained to the frequency w_1. So, as a special particular solution, assume x(t) to have the form

		x(t)  =  A exp( i (w t - z) )

   Then, substituting into the inhomogeneous equation,

	m (d/dt)² x + b dx/dt + k x  =  F exp( i w t )

   one obtains

   A( - m w² + i b w + k ) exp( i (w t - z) )  =  F exp( i w t )

   A( - w² + i (b/m) w + k/m )  =  (F/m) exp( i z )

   A( (w_0)² - w² + i (b/m) w )  =  (F/m) exp( i z )

   where (w_0)²  :=  k/m.

   Equating the absolute values of the complex numbers,

	   A²( ((w_0)² - w²)² +  (b/m w)² )  =  (F/m)²

   which gives The General Amplitude

	                         F
	A  =  ---------------------------------------
	      ( m² ((w_0)² - w²)² +  (b w)² )^(1/2)

   In the real cartesian components

	   A( (w_0)² - w² )  =  F/m cos( z )
	   A( b/m w )  =  F/m sin( z )

	                A b w                 A m ( (w_0)² - w² )
	   sin( z )  =  -----,   cos( z )  =  --------------------
	                  F                              F

	                        b w
	   tan( z )  =  -------------------
	                 m ( (w_0)² - w² )

This shows that there is a particular, entraining solution for such a driving frequency w, with specifically determined amplitude A and phase lag z. For any fixed w, this is a steady state solution where the energy input of the driving force is exactly that which is dissipated through the resistance b. If b is taken to zero, the phase lag vanishes, and the amplitude increases. With nonvanishing b, an increasing m makes the phase lag smaller and the amplitude smaller. Also interesting, qualitatively, is the oscillator behavior as w is varied.
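The amplitude and phase lag just derived can be verified by direct substitution of the real particular solution x(t) = A cos( wt - z ) into the driven equation with driving force F cos( wt ); a sketch with arbitrary parameter values:

```python
import math

m, b, k, F, w = 1.0, 0.3, 4.0, 2.0, 1.5   # arbitrary example values
w0 = math.sqrt(k / m)

A = F / math.sqrt(m**2 * (w0**2 - w**2) ** 2 + (b * w) ** 2)
z = math.atan2(b * w, m * (w0**2 - w**2))   # phase lag

def residual(t):
    # plug x(t) = A cos(wt - z) into m x" + b x' + k x - F cos(wt)
    x = A * math.cos(w * t - z)
    dx = -A * w * math.sin(w * t - z)
    d2x = -A * w * w * math.cos(w * t - z)
    return m * d2x + b * dx + k * x - F * math.cos(w * t)
```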

Note that without the damping, there is still a resonance peak in the amplitude; but *any real* oscillator will have such damping.

The oscillator exhibits a varying responsiveness to the driving frequency w that depends on ( (w_0)² - w² ), which measures the closeness of w to the natural frequency in the same analytical way that a standard deviation is measured in statistical theory, or an uncertainty in quantum mechanics.

One might think that the oscillator would have its resonance, where it is most responsive, at w_0; but remember that the homogeneous equation with nonvanishing b showed a harmonic motion with frequency w_1, where

		(w_1)²  :=  (w_0)² - (1/4)(b/m)²

The proper way to find the resonance frequency is to notice that the amplitude A is a function of w², and that the power of the oscillator, being proportional to the square of the amplitude, will have its maximum at the resonance frequency.

   Setting the first derivative of the A just derived,

		d A(w²)
	        -------  =  0,
		d (w²)

   for the condition of an extremum, one finds

		(w_res)²  :=  (w_0)² - (1/2)(b/m)²

   A check of the second derivative, to see that it is in fact negative,
   will show that w_res is indeed a maximum.  It is a small exercise in
   differential calculus that should be carried out in detail if one
   has difficulty seeing it by purely mental means.

   So, in fact,

	(1/2) ( (w_res)² + (w_0)² )  =   (w_1)²

   relating the three distinct frequencies, so that on a quadratic
   scale, w_1 lies exactly in the middle between w_0 and w_res

		w_res  <  w_1  <  w_0

   The squares of these three frequencies differ by terms of the order
   of (b/m)².  Contrasting this with (w_0)² = k/m, the ratio

	(b/m)²/(w_0)² = (b/m)²/(k/m) = (mb²)/(km²) = b²/(km)

   is a measure of the spacing relative to the frequency squared.

   From the viewpoint of an engineer, b is often simply a fact of life,
   while k and m can be adjusted.  To make the spread of the three
   'w's small, stiffen the oscillator by using high values of k and m.
   This spacing ratio is also the key quantity whose value determines
   the three different behavioral patterns of the damped oscillator,
   and so it also adjusts the coalesced resonance arena of the
   oscillator.

   But, while doing this, notice that making m large decreases the
   resonance peak in the general oscillator amplitudes.
   Minimizing resonances is very often an important desideratum in the
   design of electroacoustical equipment, where the relative nirvana is
   a flat response curve over all frequencies.  People who work in this
   area have developed a bag of black magical tricks to suppress the
   tendency of nature to have unwanted resonances.  Besides tweaking
   parameters in the construction, e.g., moving a speaker resonance
   down to 6 or 8 Hz so it is out of the way of the range of human
   hearing, there are a few other tricks that we will be able to discuss
   a little further on.

   The interesting case is where genuine harmonic motion is likely to
   be easy, and that is case 1) from above, where b² < 4mk, or
   b²/(mk) < 4, and the oscillation frequency w_1 is given by

		(w_1)²  :=  (w_0)² - (1/4)(b/m)²

We look at this resonance phenomenon more quantitatively next.

Driven SHO with damping via Laplace transform

As before, solve the equation

	m (d/dt)² x(t) + b dx(t)/dt + k x(t) =  0

using the Laplace transform L{x(t)} = X(s), and the formulas for the derivatives to get the algebraic equation,

	m(s² X(s) - s x(0) - x'(0)) + b (s X(s) - x(0)) + k X(s) =  0

        (m s² + b s + k) X(s)  =  m s x(0) + m x'(0) + b x(0)

                  m s x(0) +  m x'(0) + b x(0)
	X(s)  =  ---------------------------
                     (m s² + b s + k)

   We already know how to factor the denominator as (s - r_1)(s - r_2),
   and also know that the cases of r_1 = r_2, and r_1 ≠ r_2 need to be
   distinguished, and further that when the roots are equal, they are
   real, and complex conjugates when they are unequal.  The useful
   inverse transforms for unequal roots are

	                         exp(-at) -  exp(-bt)
      L^-1{1 / (s+a)(s+b)}  =  ----------------------
	                                 b - a

	                         a exp(-at) - b exp(-bt)
      L^-1{s / (s+a)(s+b)}  =  ------------------------
	                                 a - b

   and for the one degenerate real root, use

      L^-1{1 / (s+a)²}  =  t exp( -at )

      L^-1{s / (s+a)²}  =  (1 - at) exp( -at )

   It helps to have gone through the more tedious detailed analysis first.
   Often, even though there is a clear algorithmic procedure, actually
   following out the necessities of that procedure requires more work
   than using knowledge and being a bit clever. [A handwritten parser
   for a computer language is more elegant and efficient than one
   written automatically by programs based on finite state automata,
   which are guaranteed to be unreadable.]

   Now complicate the situation by adding a driving function f(t).
   The entire procedure is easily repeated, requiring only that
   L{f(t)} = F(s) exists.  A physically real driving function for which
   the Laplace transform does not exist would be hard to imagine,
   though easy, mathematically.
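   As a sketch of the whole procedure (parameter and initial values are
   illustrative assumptions), one can assemble x(t) from the inverse
   transforms for distinct roots and cross-check it against a direct
   numerical integration of the same ODE:

```python
# Assemble x(t) from the Laplace-transform solution (distinct roots)
# and cross-check against direct RK4 integration of the same ODE.
# All parameter and initial values are illustrative assumptions.
import cmath

m, b, k = 1.0, 0.5, 4.0           # underdamped: complex conjugate roots
x0, v0  = 1.0, 0.0

disc = cmath.sqrt(b*b - 4*m*k)
r1, r2 = (-b + disc)/(2*m), (-b - disc)/(2*m)  # roots of m s^2 + b s + k

def x_laplace(t):
    # L^-1{1/((s-r1)(s-r2))} = (e^{r1 t} - e^{r2 t})/(r1 - r2)
    # L^-1{s/((s-r1)(s-r2))} = (r1 e^{r1 t} - r2 e^{r2 t})/(r1 - r2)
    e1, e2 = cmath.exp(r1*t), cmath.exp(r2*t)
    val = (x0*(r1*e1 - r2*e2) + (v0 + (b/m)*x0)*(e1 - e2)) / (r1 - r2)
    return val.real               # imaginary parts cancel for real data

def x_rk4(t_end, h=1e-4):
    # classical RK4 on x' = v, v' = -(b v + k x)/m
    f = lambda x, v: (v, -(b*v + k*x)/m)
    x, v, t = x0, v0, 0.0
    while t < t_end - 1e-12:
        a1 = f(x, v)
        a2 = f(x + 0.5*h*a1[0], v + 0.5*h*a1[1])
        a3 = f(x + 0.5*h*a2[0], v + 0.5*h*a2[1])
        a4 = f(x + h*a3[0],     v + h*a3[1])
        x += (h/6)*(a1[0] + 2*a2[0] + 2*a3[0] + a4[0])
        v += (h/6)*(a1[1] + 2*a2[1] + 2*a3[1] + a4[1])
        t += h
    return x

assert abs(x_laplace(3.0) - x_rk4(3.0)) < 1e-6
```

   The two computations agree to many digits, which is just the
   statement that the transform method and the ODE define the same
   motion.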

Resonance, Bandwidth & Selectivity

The energy or intensity of an oscillation is measured by the square of its amplitude, and oscillators will always have natural resonances at certain frequencies where their response is more sensitive than at other frequencies. The resonance frequencies, as we have just seen, depend on the structural parameters that define the oscillator.

The bandwidth of a resonance peak in the intensity function I(w) is simply a measure of the frequency range over which the oscillator is effectively responsive. The word "bandwidth" is used in other contexts, e.g., signal propagation, and seems always to mean, in the appropriate context, the width of a range of frequencies. There are two different ways of defining a resonator's bandwidth. The first looks theoretically at the analytic curve determined by a solvable ODE to find the frequencies on either side of the resonance where the second derivative (d/dw)²I(w) vanishes. These are the curve's inflection points, where the curvature changes sign.

This is all well and good if you happen to have the function I(w) in analytic form, but you may not, and a more general, and actually better, definition is to find frequencies on both sides of the peak where the response intensity is (1/2)I_0, the maximum intensity. Outside of this bandwidth, the oscillator is effectively unresponsive. Obviously, the narrower the bandwidth, the more selective the oscillator will be in its response to driving frequencies in the neighborhood of the peak frequency of response. The idea of a radio tuner should come to mind readily.

The following graph of a general oscillator response curve plotting intensity of response as a function of frequency should be almost self explanatory.

       I_0  -  - - - - - - - - - - -   .
            |                       .  |  .
            |                      .       .
            |                     .    |    .
            |                    .           .
            |                          | 
            |                   .             .  "bandwidth"
    I_0/2   -                -->.______|______. <--  (b/m)
            |                   .|           |.
            |                   .|     |     |.
            |                    |           |
            |                  . |     |     | .
            |                 .  |           |  .
            |                .   |     |     |   .
            |             .      |           |      .
           -|--------------------|-----|-----|----------------> frequency

                               Oscillator Response Curve
                                 At Its Resonance Peak

   The intensity function for our damped oscillator is easily gotten
   by squaring the theoretical general amplitude function; writing F_0
   for the amplitude of the driving force,

	                   (F_0)²
	I(w)  =  --------------------------------
	         ( m² ((w_0)² - w²)² +  (b w)² )

   We have already determined that the resonance peak of I(w) is at
   w_res, where

                  (w_res)²  =  (w_0)² - (1/2)(b/m)²


	I_0  :=

	                              (F_0)²
	I(w_res)  =  -------------------------------------------------
                     ( m² ( (1/2)(b/m)² )² + b² (k/m - b²/(2m²)) )

	                              (F_0)²
	          =  -------------------------------------------------
                     ( (1/4) b⁴ / m² + b²(k/m) - (1/2) b⁴ / m² )

	                      (F_0)²
	          =  ------------------------------
                     ( b² (k/m) - (1/4) b⁴ / m² )

	                      (F_0)²
	          =  ------------------------------
                      b² ( (k/m) - ( b/(2 m) )² )

   For a w which gives I(w) = (1/2)I_0

                    b² ( (k/m) - ( b/(2 m) )² )
	         ------------------------------------  =  1/2
	         ( m² ((w_0)² - w²)² +  (b w)² )

                  2 b² ( (k/m) - ( b/(2 m) )² )
	         ------------------------------------  =  1
	         ( m² (k/m - w²)² +  (b w)² )

    ( m² ( (k/m)² - 2(k/m)w² + w⁴ ) +  b² w² )  =

                  2 b² ( (k/m) - ( b/(2 m) )² )

    ( m² w⁴ - (2km - b²) w² + k² )  =

                  2 b² ( (k/m) - ( b/(2 m) )² )

    ( m² w⁴ - (2km - b²) w² + k² - 2 b² ( (k/m) - ( b/(2 m) )² ) )  =  0

    which, messy though it may seem, is simply a quadratic equation in w².
    Let us do this exactly since we really can, while most textbooks
    will cut immediately to the approximative chase.  We have a standard
    quadratic equation once again,

	A (w²)² + B w² + C  =  0,

	A  :=  m²
	B  :=  - (2km - b²)
	C  :=   k² - 2 b² ( (k/m) - ( b/(2 m) )² )

   Using the trusty old quadratic formula again in the form

	w²  =  - (B/2A) (+|-) sqrt( (B/2A)² - C/A )

	B/2A  =  - (k/m - (1/2)(b/m)²)
	      =  - (w_res)²

	(B/2A)²  =  (k/m)² - (k/m)(b/m)² + (1/4)(b/m)⁴
	          =  (w_res)⁴

	C/A  =  (k/m)² - 2 (b/m)² (k/m) + 2 (b/m)² ( b/(2 m) )²
	     =  (k/m)² - 2 (b/m)² (k/m) +  (1/2)(b/m)⁴

	(B/2A)² - C/A  = (k/m)² - (k/m)(b/m)² + (1/4)(b/m)⁴
	               - (k/m)² + 2 (b/m)² (k/m) -  (1/2)(b/m)⁴

	=  (k/m)(b/m)² - (1/4)(b/m)⁴

	sqrt( (B/2A)² - C/A )  =  (b/m) sqrt( k/m - (b/2m)² )

		=  (b/m) sqrt( (w_1)² )

		(w_res)²  :=  (w_0)² - (1/2)(b/m)²
		(w_res)²  :=  (w_1)² - (b/2m)²

   Finally, the exact frequencies that bound the bandwidth are
   given by the positive roots of w.

	w²  =   (w_res)² (+|-) (b/m) w_1

   If b/m is small, i.e., the bandwidth is narrow, a little fiddling
   with approximations shows that a good (standard) approximation is

	w  =  w_res (+|-) (1/2)(b/m)

   and then the bandwidth is approximately b/m.  Otherwise, generally,
   and exactly, the bandwidth is obviously more complicated.  It is given
   exactly by:

      sqrt( (w_res)² + (b/m) w_1 ) - sqrt( (w_res)² - (b/m) w_1 )
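   As a numerical sketch (the parameter values are illustrative
   assumptions), one can verify that the two exact bounding
   frequencies really sit at half of the peak intensity, and that the
   exact bandwidth is close to b/m when the damping is small:

```python
# Numerical check of the half-power frequencies and the bandwidth.
# The parameter values are illustrative assumptions.
import math

m, b, k = 1.0, 0.3, 16.0
w0_sq   = k/m
w1_sq   = w0_sq - 0.25*(b/m)**2
wres_sq = w0_sq - 0.50*(b/m)**2

def intensity(w):
    # numerator constant set to 1; it cancels in all intensity ratios
    return 1.0 / (m*m*(w0_sq - w*w)**2 + (b*w)**2)

I0 = intensity(math.sqrt(wres_sq))
w1 = math.sqrt(w1_sq)

w_hi = math.sqrt(wres_sq + (b/m)*w1)   # exact half-power frequencies
w_lo = math.sqrt(wres_sq - (b/m)*w1)

assert abs(intensity(w_hi)/I0 - 0.5) < 1e-9
assert abs(intensity(w_lo)/I0 - 0.5) < 1e-9
assert abs((w_hi - w_lo) - b/m) < 0.01*(b/m)   # narrow band: width ~= b/m
```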

   For some reason the word "bandwidth" is used incorrectly in sloppy
   computerese when the correct word, and concept, of "channel capacity"
   would be correct.  These really are entirely different concepts
   that should not be confused; do not accept one for the other.
   Bandwidth is a concept belonging to analog signals; computers deal
   in digital signals.  If they were somehow conceptually equivalent,
   DAC and ADC devices (using Fast Fourier Transforms) would be
   unnecessary.

RLC Electrical Circuit Analog

Even complicated electrical circuits can be reduced using Kirchhoff's laws (which become less inscrutable in topological graph theory) to equivalent circuits which can be said to contain a single effective resistance R, capacitance C, and a single inductance L. In all real circuits, all three of these properties are always present, even in a single straight conducting wire.

   For the current in an RLC circuit with a single step voltage function,
   the analog of displacing a damped SHO, the differential equation for
   the current as a function of time, i(t), is:

	( L (d/dt)² + R d/dt + 1/C ) i(t)  =  0

Such a circuit can then act as an analog computer that solves the SHO equation, the solution of which is obtained by measuring the current. A driving electromotive force (potential) can also be applied on the RHS, and the inhomogeneous equation solved similarly.
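   The dictionary m <-> L, b <-> R, k <-> 1/C lets the mechanical
   formulas be reused verbatim; a tiny sketch (component values are
   illustrative assumptions):

```python
# Mechanical-electrical dictionary m <-> L, b <-> R, k <-> 1/C;
# the component values below are illustrative assumptions.
import math

L_, R, C = 0.5, 10.0, 2e-6        # henries, ohms, farads

w0 = 1.0/math.sqrt(L_*C)          # (w_0)^2 = k/m  ->  1/(L C)
w1 = math.sqrt(w0*w0 - 0.25*(R/L_)**2)   # damped frequency, b/m -> R/L

assert 0 < w1 < w0                # damping lowers the oscillation frequency
```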

Damped and Driven SHO with response delay, a functional ODE

Making things a little more complicated yet, while remaining at the same level of mathematical difficulty (i.e., second order, linear ODE with constant coefficients), think of a SHO with damping, as well as with a "refractory" response time. For a single Newtonian harmonic oscillator, this is about as complicated as one can get without going nonlinear in dissipation terms, or beyond quadratic for restoring force terms.

But, most generally, there are actually two refractory times, one associated with each of the two forces impinging on the oscillator "body", and for all glorious generality, we should consider two different delay times for these forces.

	m x"(t) + b x'(t-β) + k x(t-α) =  F(t)

The delay β would reasonably be expected to represent the impulse time of inelastic collisions with the oscillator as it moves through a dissipative medium, while the delay α is associated with force propagation by successive collisions through the spring. In this case one might suppose that β << α.
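   Before transforming, it is instructive to see the delayed equation in
   action.  A minimal fixed-step simulation (all numerical values are
   illustrative assumptions; the pre-history is taken constant, as in
   the discussion further on):

```python
# Fixed-step simulation sketch of m x"(t) + b x'(t-beta) + k x(t-alpha) = 0,
# assuming constant pre-history x(t) = x(0), x'(t) = 0 for t <= 0.
# All parameter values are illustrative assumptions.
m, b, k = 1.0, 0.2, 9.0
alpha, beta = 0.01, 0.002          # spring delay >> damping delay
h = 0.001                          # step size; delays are grid multiples
na, nb = round(alpha/h), round(beta/h)

x0 = 1.0
xs = [x0]                          # stored positions x(t_i)
vs = [0.0]                         # stored velocities x'(t_i)

for i in range(20000):             # integrate out to t = 20
    xd = xs[i - na] if i >= na else x0    # x(t - alpha), constant history
    vd = vs[i - nb] if i >= nb else 0.0   # x'(t - beta)
    a  = -(b*vd + k*xd) / m
    vs.append(vs[i] + h*a)                # semi-implicit Euler step
    xs.append(xs[i] + h*vs[i+1])

# for small delays the motion still decays and oscillates, close to the
# undelayed damped SHO (a spring delay reduces the effective damping)
assert abs(xs[-1]) < 0.5*x0
assert min(xs) < 0.0 < max(xs)
```

   Tweaking alpha upward in this sketch shows the destabilizing side of
   a delayed restoring force: x(t-α) ≈ x(t) - α x'(t), so the delay
   effectively subtracts kα from the damping coefficient.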

Collecting the necessary pieces from above:

	Defining the Laplace transform of x(t)

	L{ x(t) }   :=  X(s),

	L{ x(t-α) }   = ( exp(-sα) S_b + X(s) )

        with S_b defined by the integral over the pre-history interval
        [-c, 0], c being the relevant delay (α or β),

        S_b  :=  INT exp( -su ) x(u) du,    u in [-c, 0]


	L{ x'(t) }  =  s X(s) - x(0)

	L{ x'(t-β) }  =  exp( -sβ ) s ( X(s) + S_b )  -  x(-β)

	L{ x"(t) }  =  s² X(s) - s x(0) - x'(0)

	L{ f(t) }   :=  F(s)

   So, the transformed equation can be written

	m( s² X(s) - s x(0) - x'(0) )
	+ b( exp( -sβ ) s ( X(s) + S_b ) - x(-β) )
	+ k( exp(-sα) S_b + X(s) )  =  F(s)

	(m s² + b s exp( -sβ ) + k) X(s)  =

		F(s) - k exp(-sα) S_b - b s exp(-sβ) S_b + b x(-β)
		+ m( s x(0) + x'(0) )

   which can be solved for X(s)

                  b x(-β) + m x'(0)
   X(s)  =  ---------------------------
	    (m s² + b s exp( -sβ ) + k)

              S_b ( k exp(-sα) + b s exp(-sβ) )
           -  ---------------------------------
	        (m s² + b s exp( -sβ ) + k)

                    m s x(0) + F(s)
	   + ---------------------------
	     (m s² + b s exp( -sβ ) + k)

   To specify X(s) fully, both F(s) and S_b must be provided,
   along with x'(0), x(0) and x(-β).  Notice that this is not
   simply a standard ODE, and the understanding of what must
   be supplied as initial conditions is contextual.

   The delay β introduces a genuine difficulty since what was
   formerly a quadratic equation has now become a transcendental
   equation, and we need its roots.
	     (m s² + b s exp( -sβ ) + k)  =  0

   If β can be taken truly small, as I suspect in most macroscopic
   applications it can, this difficulty can be removed since retarding
   forces tend to be immediate forces resulting from the motion of a
   system through its environment rather than something that involves
   interaction with a greatly extended thing like a spring.  Then,
   exp( -sβ ) → 1, and the quadratic equation is returned and we
   can write a reduced and approximating X(s) as

               b x(0) + m x'(0)
   X(s)  =    -----------------
	      (m s² + b s  + k)

               S_b ( k exp(-sα) + b s )
           -  --------------------------
	           (m s² + b s + k)

                m s x(0) + F(s)
	   +  -----------------
	      (m s² + b s  + k)

   From the prior discussion of the Strangeness of S, and one reasonable
   assumption about it, same as before: for t in [-c, 0], x(t) = x(0),

     S_b   = INT exp( -su ) x(0) du  =  x(0) (1/s) (exp(+cs) - 1)

   The common denominators of each term can be factored and the terms
   expanded by partial fractions.  If L{ f(t) } = F(s) is well defined,
   then an inversion L^-1{ F(s)/(s-r) } should, in principle, present
   no problem.

               b x(0) + m x'(0)
   X(s)  =    -----------------
	      (m s² + b s + k)

                   ( k exp(-sα) + b s ) (exp(+cs) - 1)
           -  x(0) -----------------------------------
	                    s (m s² + b s + k)

                m s x(0) + F(s)
	   +  -----------------
	      (m s² + b s + k)

   Remember that inverting this transform goes differently depending
   on whether the roots of (m s² + b s + k) are distinct or equal.

   The relevant additional inverse Laplace transforms are:

	L^-1{1/(s + a)²}  =		t exp(-at)

	L^-1{s/(s + a)²}  =		(1 - at) exp(-at)

	With n > 0

	L^-1{1/(s + a)^n}  =       	exp(-at) t^(n-1)/(n-1)!

	L^-1{a / s(s+a)}  =		1 - exp( -at )

	For a ≠ b,

	                                exp(-at) - exp(-bt)
	L^-1{1 / (s+a)(s+b)}  =        ---------------------
	                                       b - a

	                                         exp(-at)    exp(-bt)
	L^-1{1 / s(s+a)(s+b)}  =       1/(ab) + ---------- + ----------
	                                         a(a - b)     b(b - a)

	                                a exp(-at) - b exp(-bt)
	L^-1{s / (s+a)(s+b)}  =        -------------------------
	                                         a - b
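A table entry can always be spot-checked by computing the forward transform numerically; here is one such check (the values of a, b, and s are arbitrary illustrative choices):

```python
# Numerical spot-check of one table entry: the forward transform of
# f(t) = (exp(-at) - exp(-bt))/(b - a) should equal 1/((s+a)(s+b)).
# The values of a, b, and s are arbitrary illustrative choices.
import math

a, b, s = 1.0, 3.0, 2.0

def f(t):
    return (math.exp(-a*t) - math.exp(-b*t)) / (b - a)

# composite Simpson's rule for INT_0^T exp(-st) f(t) dt, with T large
# enough that the truncated tail is negligible
T, n = 40.0, 40000                # n must be even
h = T / n
total = 0.0
for i in range(n + 1):
    t = i*h
    w = 1 if i in (0, n) else (4 if i % 2 else 2)
    total += w * math.exp(-s*t) * f(t)
numeric = total * h / 3.0

exact = 1.0 / ((s + a)*(s + b))
assert abs(numeric - exact) < 1e-8
```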

Oscillator Coupling & Normal Modes

Notice that for simple spring additions, spring constants combine like electrical capacitors, oppositely from resistors. For a parallel addition, the constants add to produce a resultant kR

kR  =  k1 + k2

For a series addition, the reciprocals of the constants add to produce a resultant kR

1/kR  =  1/k1 + 1/k2
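As a trivially small sketch of the two combination rules (hypothetical helper functions, not from the text):

```python
# Hypothetical helpers for the two spring-combination rules just stated.
def parallel(k1, k2):
    return k1 + k2                 # side-by-side springs: constants add

def series(k1, k2):
    return 1.0/(1.0/k1 + 1.0/k2)   # end-to-end springs: reciprocals add

assert parallel(2.0, 3.0) == 5.0
assert abs(series(2.0, 2.0) - 1.0) < 1e-12  # equal springs in series halve k
```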

   With an arrangement of two undamped coupled oscillators with 3 springs:

         x_1              x_2

              m_1     m_2
	|----  O ----- O ----|

          k_1      k_3    k_2

   The coordinates x_1 and x_2 are the displacements of the masses m_1
   and m_2 respectively from their equilibrium positions.

   The pairs of coupled equations of motion can be written,

	m_1 (d/dt)² x_1(t)  =  - k_1 x_1 - k_3 (x_1 - x_2)

	m_2 (d/dt)² x_2(t)  =  - k_2 x_2 + k_3 (x_1 - x_2)

	m_1 (d/dt)² x_1(t)  =  - (k_1 + k_3) x_1 + k_3 x_2

	m_2 (d/dt)² x_2(t)  =  - (k_2 + k_3) x_2 + k_3 x_1

   or, combining into a matrix equation,

        | m_1  0  |          | x_1 |       |(k_1 + k_3)    -k_3     | | x_1 |
        |         | (d/dt)² |     |  =  - |                        | |     |
        |  0  m_2 |          | x_2 |       |   -k_3     (k_2 + k_3) | | x_2 |

        (d/dt)²  x  =  - M⁻¹ K x

        To get two uncoupled equations that are easily solved, diagonalize
	the matrix (M⁻¹ K), and find the diagonalizing transformation.

	The eigenvalues are obtained from the secular equation

		Det( M⁻¹ K - W I )  =  0

	((k_1 + k_3)/m_1 - W)((k_2 + k_3)/m_2 - W) - (k_3)²/(m_1 m_2)  =  0

	W² - ((k_1 + k_3)/m_1 + (k_2 + k_3)/m_2) W
	+ ((k_1 + k_3)/m_1)((k_2 + k_3)/m_2) - (k_3)²/(m_1 m_2)  =  0

	W² - (m_2(k_1 + k_3) + m_1(k_2 + k_3))/(m_1 m_2) W
	+ ( (k_1 + k_3)(k_2 + k_3) - (k_3)² )/(m_1 m_2)  =  0

	W² - (m_2 k_1 + m_1 k_2 + (m_1+ m_2)k_3)/(m_1 m_2) W
	+ ( k_1 k_2 + k_3(k_1 + k_2) )/(m_1 m_2)  =  0


		(w_1)²  :=  k_1/m_1
		(w_2)²  :=  k_2/m_2

	W² - ((w_1)² + (w_2)² + k_3 (m_1 + m_2)/(m_1 m_2) ) W
	+ ( (w_1)² (w_2)² + k_3 ((w_1)²/m_2 + (w_2)²/m_1) )  =  0

	W  =
	(1/2)( (w_1)² + (w_2)² + k_3 (m_1 + m_2)/(m_1 m_2) )

	(+|-) (1/2)sqrt( ( (w_1)² + (w_2)² + k_3 (m_1 + m_2)/(m_1 m_2) )²
		- 4 ( (w_1)² (w_2)² + k_3 ((w_1)²/m_2 + (w_2)²/m_1) ) )

   which gives the expressions for the two normal modes.
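   As a numerical sketch (masses and spring constants are illustrative
   assumptions), the two roots of this quadratic can be evaluated and
   confirmed to annihilate the secular determinant:

```python
# Evaluate the two normal-mode eigenvalues W from the quadratic above
# and confirm that they annihilate the secular determinant.
# Masses and spring constants are illustrative assumptions.
import math

m1, m2 = 1.0, 2.0
k1, k2, k3 = 4.0, 6.0, 1.5

w1_sq, w2_sq = k1/m1, k2/m2
S = w1_sq + w2_sq + k3*(m1 + m2)/(m1*m2)      # sum of the two roots
P = w1_sq*w2_sq + k3*(w1_sq/m2 + w2_sq/m1)    # product of the two roots

root = math.sqrt(S*S - 4*P)
W_plus, W_minus = 0.5*(S + root), 0.5*(S - root)

def secular(W):
    return ((k1 + k3)/m1 - W)*((k2 + k3)/m2 - W) - k3*k3/(m1*m2)

assert abs(secular(W_plus))  < 1e-9
assert abs(secular(W_minus)) < 1e-9
assert W_minus > 0            # both normal-mode frequencies are real
```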

A system of many coupled oscillators can be seen upon little reflection to be mathematically represented by a similar equation with a larger n×n matrix.

The secular equation for the eigenvalues will be of nth order with n roots by the fundamental theorem of algebra.

For n > 4, of course, there is no general formula for solving the secular equation, and one must generally resort to numerical methods, but you will find that with a linear "string" of masses connected by springs the matrix form will dictate that the eigenvalues of the matrix are zeros of orthogonal polynomials, which does not, of course, guarantee that those zeros will be easily calculable. The problem, however, will be less difficult and more symmetrical than the associated general eigenvalue problem. There are many numerical methods for obtaining such eigenvalues encoded in packages for computing languages, and available in many computer algebra environments, e.g., maxima (MACSYMA).

Needless to say, the situation of two coupled oscillators with damping is an interesting problem to solve. How will it go? There will be a differential equation with 2x2 matrix operators M, B and K, and a column vector x(t) with 2 components:

		M x"(t) + B x'(t) + K x(t)  =  0

Fortunately, the introduction of the dissipation matrix B is not a significant mathematical complication since it involves no coupling: B is diagonal, and it will commute with the diagonal M. Mathematically one should allow for one of the eigenvalues of B being zero, but physically this doesn't make much sense; neither does a Newtonian M with a zero eigenvalue make much physical sense. Then, the equation can be rewritten as

		x"(t) + M^-1 B x'(t) + M^-1 K x(t)  =  0
		x"(t) + B M^-1 x'(t) + M^-1 K x(t)  =  0

   We know that B M^-1 is diagonal, and that K is symmetric.
   Thinking either in terms of exponential substitutions or Laplace
   transforms, and a matrix W = Diag[w_1, w_2], there will result
   a matricial equation

		M W² + B W + K =  0

		W² + M^-1 B W + M^-1 K =  0
		W² + B M^-1 W + M^-1 K =  0

   where M, W and B are mutually commuting.  Use the very method of
   proving the standard quadratic formula, "completing the square"

	W² + B M^-1 W + (1/4)(B M^-1)² - (1/4)(B M^-1)² + M^-1 K =  0

		(W + (1/2) B M^-1)²  =  (1/4)(B M^-1)²  -  M^-1 K

   Taking the square root of a 2x2 matrix will give four square roots,
   e.g., the square roots of the 2x2 identity matrix Diag[1, 1] are
   Diag[+1, +1], Diag[+1, -1], Diag[-1, +1], Diag[-1, -1].  Taking that
   into account, you have a variant of the standard quadratic formula.

   The forms of W can then be pulled into diagonal form, and the normal
   modes determined.  It is all only slightly messier than without the
   dissipative B term.  Applying different driving frequencies to this
   system is even more fun; the general procedures will still work,
   it's just a question of keeping your wits about you and not being
   daunted by lengthy calculations.  Running a simulation program
   where you can tweak the parameters would be most instructive.
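   A minimal such simulation (the masses, damping constants, and spring
   constants are illustrative assumptions, chosen to keep the motion
   underdamped):

```python
# Simulation sketch for two coupled oscillators with diagonal damping,
# M x" + B x' + K x = 0, integrated with classical RK4.
# All parameter values are illustrative assumptions.
m1, m2 = 1.0, 1.0
b1, b2 = 0.1, 0.1
k1, k2, k3 = 4.0, 4.0, 1.0

def deriv(s):
    x1, x2, v1, v2 = s
    a1 = (-(k1 + k3)*x1 + k3*x2 - b1*v1) / m1
    a2 = (-(k2 + k3)*x2 + k3*x1 - b2*v2) / m2
    return (v1, v2, a1, a2)

def rk4_step(s, h):
    d1 = deriv(s)
    d2 = deriv(tuple(si + 0.5*h*di for si, di in zip(s, d1)))
    d3 = deriv(tuple(si + 0.5*h*di for si, di in zip(s, d2)))
    d4 = deriv(tuple(si + h*di     for si, di in zip(s, d3)))
    return tuple(si + (h/6.0)*(p + 2*q + 2*r + w)
                 for si, p, q, r, w in zip(s, d1, d2, d3, d4))

state = (1.0, 0.0, 0.0, 0.0)      # displace mass 1, release from rest
h, max_x2 = 0.001, 0.0
for _ in range(20000):            # integrate out to t = 20
    state = rk4_step(state, h)
    max_x2 = max(max_x2, abs(state[1]))

assert max_x2 > 0.3               # the coupling transfers energy to mass 2
assert abs(state[0]) < 0.5        # damping makes the whole motion decay
```

   With equal masses and springs the energy beats back and forth between
   the two bodies at the difference of the two normal-mode frequencies,
   while the damping slowly drains it away.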

   Another fairly simple coupling scheme can be represented by the
   equation for 4 particles in 2 dimensions

		x"(t) + w² x(t)  =  0

   where x(t) and w² are both taken to be 2x2 matrices with no
   vanishing elements.  There are then 4 oscillators and their
   couplings form a quadrilateral.

Oscillator Coupling & Resonance Damping

This section describes the physics and mathematics of an important trick used by electrical engineers and acousticians to suppress resonances in physical systems, a problem with which they deal regularly.

The trick is to play another, adjustable resonance off against an essential and unavoidable resonance. E.g., speakers always have natural resonance peaks. Enclosures, rooms, and concert halls naturally have resonances that, for use in musical situations, need to be damped. Acoustical design is sufficiently complicated that even a proper design often requires tweaking to eliminate unpredicted resonances.

Certain manufacturers of speakers have tried to avoid the resonance peak by engineering it down to a frequency so low that it is below human hearing; a good choice in this case would be to make the resonance frequency less than 10 Hz. Even so, the skirts of the resonance can still be a problem in obtaining the "speaker nirvana" of a flat response curve. A flat response curve simply means that for any given frequency the sound reproducing apparatus puts out sound that correctly transforms the AC frequency input together with the relative weighting of frequencies: no accentuating or diminishing of any frequency or frequency range allowed. The tracking of the electrical signal is then faithful.

The solution lies in the baffling for the speaker; this is in effect a second resonator which couples to the speaker and also is the intermediate coupling from speaker to the listening space.

The simplest of design solutions for the baffling is to use a closed box behind the speaker which faces into the listening area. The air in the interior of the box then becomes a 3-dim resonator. Sound is a compressional wave, and the air medium for it is compressible.
Topics still to be developed here: organ pipes and resonance; wind instruments; the ported box and its standing waves; avoiding resonances by letting the three design parameters h, l, d be distinct; lining with sound absorbing material; port coupling and capacitance; phase and resonance.

The Klipsch horn design

Many Coupled Oscillators In One Dimension

The General Pendulum Oscillator

This is not a classic SHO, but a classic system that is well approximated by SHO for small displacements, and for this reason it is treated here.

Consider a pendulum with mass m attached to one end of a massless rod of length L. The other end of the rod is attached to a fixed point about which the rod can move in a plane. This pendulum is suspended in a uniform gravitational field, and we then look at the planar motion that results when the mass is displaced from its equilibrium position and released. We will measure displacement by the angle 'a' made by the rod and the equilibrium position of the rod, understanding that in the motion this angle is a function of time.


                       | \
                       | a \
                       |     \ (massless rod of length L)
                       |       \
                       |         \
                       |           O (mass m)

   The circular arc length s = a L, and for the kinetic and potential
   energies  T and V summing to some fixed total energy E,

		(1/2)m (ds/dt)² + mgL(1 - cos a)  =  E


		(1/2)m L² (da/dt)² + 2mgL sin²(a/2)  =  E

Suppose, most generally (in one dimension), that there is a potential function for the force applied to a particle in the context of Newton's second law, so F(x) = - dV(x)/dx. It is not difficult to show that for a classical particle, Newton's second law leads to an expression of the conservation of energy

		(1/2)mv² + V(x)  =  E


		v  =  dx/dt  =  sqrt( (2/m) (E - V(x)) )

	An "in principle" quadrature then gives the solution for the
	one dimensional conservative problem of dynamics
	(which assumes that a potential function V(x) exists).

		              f t       f x
		(t - t_0)  =  | dt   =  | dx / sqrt( (2/m) (E - V(x)) )
		              j t_0     j x_0

   If this describes an oscillatory motion, there must be turning points
   x' and x" where v = dx/dt = 0 ==> T = 0, and therefore that

		V(x')  =  V(x")  =  E

   [NB: an oscillatory motion is periodic, but a periodic motion
    need not be oscillatory; see case 3, below.]

   Within the physical x range of displacement, there also needs to
   be at least one minimum of V(x), a point of equilibrium of the
   system.  In SHO, it is easy to see the parabola that is V(x) = (k/2)x²
   opening upward, but for a general V(x) there may be more such
   potential wells with points of equilibrium at their bottoms.
   Then, there are also potential "humps" or barriers separating the wells, and
   it becomes important to see that changing the total energy level can
   lead to entirely different motions: for low enough energy
   levels there can be many distinct possible motions, periodic
   motions within the various potential wells.

   The integral must, of course, converge, even though it appears that
   the integrand has singularities at the turning points.
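   For the SHO potential V(x) = (1/2)kx² this convergence can be seen
   numerically: a midpoint rule, which never samples the singular
   turning point itself, converges to the known quarter period.  The
   parameter values below are illustrative assumptions.

```python
# Midpoint-rule check that the quadrature converges despite the
# square-root singularity at the turning point, for V(x) = (1/2) k x^2.
# Parameter values are illustrative assumptions.
import math

m, k, E = 1.0, 4.0, 2.0
x_max = math.sqrt(2*E/k)          # turning point, where V(x_max) = E

n = 200000
h = x_max / n
quarter = 0.0
for i in range(n):
    x = (i + 0.5)*h               # midpoint sampling avoids x = x_max
    quarter += h / math.sqrt((2.0/m)*(E - 0.5*k*x*x))

exact = 0.5*math.pi*math.sqrt(m/k)   # a quarter of T = 2 pi sqrt(m/k)
assert abs(quarter - exact)/exact < 5e-3
```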

   In the case of the pendulum, there are three possible types of motion,

   	1. oscillatory,
	2. "sticking",
	3. circulatory (rotation),

   in order of increasing total energy.

   For our pendulum, the indefinite integral of motion becomes

		| ds / sqrt( (2/m) (E - 2mgL sin²(a/2)) )  =

		| da L / sqrt( (2E/m) (1 - (2mgL/E) sin²(a/2)) )

   Oscillatory motion requires that the argument of the sqrt() have
   two simple real roots, which it will when E < 2mgL.

	   Motion 1, requires E < 2mgL
	   Motion 2, requires E = 2mgL (coalescing of roots)
	   Motion 3, requires E > 2mgL (conjugate complex roots)

The sticking situation is a tricky boundary point between the oscillatory and circulating regions, in reality a metastable equilibrium point. Classical mechanics is rather deficient in describing physical reality in this case; QM does better. In reality, the sticking point is a physically metastable state, or metastable motion; while thermodynamics grudgingly acknowledges metastable states, classical mechanics does not - yet it is capable of creating one. You should understand that something is not quite right with classical theory simply on this basis: there are questions that it is incapable of answering, or that it answers, empirically, wrongly. This exposes a general problem with physical theories: they do not define their own limits of applicability. Every theory thinks it is universally applicable; it is the responsibility of the discerning theorist to discover and define the curtailments to such aggrandizements - a particularly vexing task since the boundaries of regimes of validity are most frequently not clear-cut. For additional information, see [Corben 1950].

With these generalities in mind, return to the problem of the pendulum in a plane.

   From the preceding, the conservation of energy gives the equation
   of motion,

	(da/dt)²  =  (2E/(mL²)) - (4g/(L)) sin²(a/2)

	          =  (2E/(mL²)) ( 1 - (2gmL/E) sin²(a/2) )


	da/dt  =  sqrt(2E/(mL²)) sqrt( 1 - (2gmL/E) sin²(a/2) )

   which will describe oscillatory motion when the second sqrt has
   two simple and distinct roots, and that happens when

	(2gmL/E)  >  1    =>    E  <  2gmL

   Let y  :=  sin(a/2), so

	da/dt  =  2 (dy/dt) / sqrt(1 - y²)

   Making the variable substitution in the preceding equation of motion,
   performing a little algebraic gymnastics, the eq. of motion becomes

   2 (dy/dt) / sqrt(1 - y²)  =  sqrt(2E/(mL²)) sqrt( 1 - (2gmL/E) y² )

      dy/dt  =  sqrt(E/(2mL²)) sqrt( (1 - y²)(1 - (2gmL/E) y²) )

   With k² := E/(2gmL), and z := y/k, this becomes

      dy/dt  =  k sqrt(g/L) sqrt( (1 - k²z²)(1 - z²) )

      sqrt(L/g) dz/dt  =  sqrt( (1 - k²z²)(1 - z²) )

   separating to the integrable form

      dz / sqrt( (1 - k²z²)(1 - z²) )  =  sqrt(g/L) dt

   The integral on the RHS is trivial, and the integral on the LHS
   defines a transcendental "elliptic sine" function sn(u), the inverse
   of the elliptic integral of the first kind.  If,

      | dz / sqrt( (1 - k²z²)(1 - z²) )  =  u(z)

   then,  z := sn(u).  The constant 'k' is traditionally called "the
   modulus" of the function.  Unsurprisingly, sn() is a periodic function,
   a fact of mathematical life that is best understood by curling
   up with a few good texts on the theory of functions of a complex
   variable and getting to the parts on periodic and doubly periodic
   functions, and the questions of just how periodic you can get.

   The solution to the pendulum problem with energy below the "sticking
   point" can then be expressed

	y(t)  =  k sn( sqrt(g/L) (t - t_0) )

        (sn() is the Jacobi elliptic sine)

   Unlike SH motion, the period is *not* independent of the amplitude,
   and there is much to play with and explore here since it is a piece
   of classical physics that is far more real than the mathematically
   cool model of SHO that is approximated by it for small amplitudes.
   A first correction for the period, from the viewpoint of SHO, is
   actually of second order in the amplitude, so the SHO approximation
   for small actually finite amplitudes is really pretty good.  If
   a_max is the maximal angular amplitude, then the period is corrected
   for small finite amplitudes to be

		2π sqrt(L/g) (1 + (a_max / 4)²)
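   This correction is easy to check numerically: quadrature of the full
   pendulum against the corrected formula (g, L and the amplitude are
   illustrative assumptions):

```python
# Numerical check of the corrected period: quadrature of the full
# pendulum versus 2 pi sqrt(L/g) (1 + (a_max/4)^2).
# g, L and the amplitude are illustrative assumptions.
import math

g, L = 9.8, 1.0
a_max = 0.5                        # maximal angular amplitude, radians

# quarter period: T/4 = INT_0^{a_max} da / (da/dt), with
# (da/dt)^2 = (4g/L)( sin^2(a_max/2) - sin^2(a/2) ), midpoint rule
n = 200000
h = a_max / n
quarter = 0.0
for i in range(n):
    a = (i + 0.5)*h
    rate = math.sqrt((4*g/L)*(math.sin(a_max/2)**2 - math.sin(a/2)**2))
    quarter += h / rate

T_numeric = 4*quarter
T_approx  = 2*math.pi*math.sqrt(L/g)*(1 + (a_max/4)**2)

assert abs(T_numeric - T_approx)/T_numeric < 3e-3
```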

   The other cases get discussed when I get around to them.
   In the meantime, see the very nice articles at Wikipedia,
   Simple Pendulum and Pendulum (Mathematics).

The Quartic Oscillator

Considering a force law in terms of the analyticity discussion in the introduction, and restricting the terms in the analytic expansion to provide a force that is opposite in sign to the displacement, the force can only be a function of odd powers of the displacement. The potential energy can then only be a function of even powers of the displacement. The harmonic potential energy is (1/2)kx², and the first perturbational term that can be added to it is then (1/4)k_1 x⁴, which accounts for the title of what follows.

   The equation of motion is then,

	m (d/dt)² q(t) + k q(t) + k_1 q³(t)  =  0
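   Unlike the SHO, the quartic term makes the period depend on the
   amplitude.  A quarter-period quadrature, with V(q) = (1/2)kq² +
   (1/4)k_1 q⁴ and illustrative parameter values, shows the effect:

```python
# Quarter-period quadrature for the quartic oscillator,
# V(q) = (1/2) k q^2 + (1/4) k_1 q^4; a hardening spring (k_1 > 0)
# shortens the period at larger amplitudes.
# All parameter values are illustrative assumptions.
import math

m, k, k1 = 1.0, 4.0, 1.0

def period(q_max, n=100000):
    E = 0.5*k*q_max**2 + 0.25*k1*q_max**4
    h = q_max / n
    quarter = 0.0
    for i in range(n):
        q = (i + 0.5)*h            # midpoint rule avoids the turning point
        V = 0.5*k*q*q + 0.25*k1*q**4
        quarter += h / math.sqrt((2.0/m)*(E - V))
    return 4*quarter

T_small, T_large = period(0.01), period(1.0)
T_sho = 2*math.pi*math.sqrt(m/k)

assert abs(T_small - T_sho)/T_sho < 1e-2   # tiny amplitude: nearly SHO
assert T_large < T_small                   # hardening spring: shorter period
```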

Action Principles

The idea of the action principle begins with Maupertuis. It reduces to the metaphysical idea that nature is somehow optimal or most efficient in what actually happens as opposed to what would be otherwise possible. There is no physical reason that this should be true, but it seems that it is on the basis of many considerations.

Action principles arise from making this idea concrete by describing a functional of physical degrees of freedom whose integral over path or continuous sequence of states, usually parameterized by time, assumes an extremum. When this is done, an application of the calculus of variations to this specific extremum problem derives the equation(s) of motion from a specified functional. Whether or not it is true, this gives a sense that physical theory is being shown to be somehow less arbitrary than it might seem. Why, after all, does Newton's second law have the form that it does? That is not really a question of physics, but one of metaphysics: physics is concerned with finding out how nature behaves, what the fundamental rules are, and stating the answer in a precise mathematical language, while metaphysics concerns itself with how or why the rules have the form that they do.

It would be extremely satisfying to show that the "physical laws" have the form they do because they can not be otherwise. The metapostulate that physical laws are derivable from an action principle does not quite get to that point but makes a tantalizing step in that direction.

There are two problems that obstruct such a satisfying answer. First, regarding the extremum principle, all it really does is recast the question in another form, which asks why the given functional whose integral is to be an extremum has the form it does. Second, and connected to the first, is the observation that the first problem is merely the first step in an infinite regress of recastings of the question.

Of what good then is the action principle?
Noether's theorem.

Lagrangian formalism

Hamiltonian formalism

cf ODE Handbook

		∂H/∂q  =  - dp/dt

		∂H/∂p  =  + dq/dt
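For the SHO Hamiltonian H = p²/(2m) + (1/2)kq², these two equations can be stepped directly; the symplectic Euler scheme below (parameter values are illustrative assumptions) keeps the energy error bounded over many periods, which is the numerical shadow of the conservation built into the formalism:

```python
# Hamilton's equations for the SHO, H = p^2/(2m) + (1/2) k q^2, stepped
# with the symplectic Euler scheme.  Parameters are illustrative
# assumptions.
m, k = 1.0, 4.0
q, p = 1.0, 0.0
h = 0.001

def H(q, p):
    return p*p/(2*m) + 0.5*k*q*q

E0 = H(q, p)
for _ in range(100000):           # integrate out to t = 100
    p -= h * k * q                # dp/dt = - dH/dq = - k q
    q += h * p / m                # dq/dt = + dH/dp = p/m

assert abs(H(q, p) - E0)/E0 < 1e-2
```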

Hamilton-Jacobi Theory


Canonical Quantization

The Quantum SHO [Dirac 1958] [Messiah 1965]

There are as many approaches to solving the problem of the quantum harmonic oscillator as there are to developing the mathematical machinery of quantum mechanics. The greater the level of mathematical sophistication that can be assumed, the simpler and more elegant the solution. Rather than attempt all possible paths that can be taken, here I will take Dirac's algebraic path, which to me is clearest and most elegant, if not the most general.

In his "Principles of Quantum Mechanics" [Dirac 1958], Dirac develops his abstract algebraic approach to QM that unites and extends the methods of Schrödinger through differential equations and the matrix mechanics of Heisenberg in the idea of algebras of linear operators operating on a projective Hilbert space. Since all separable Hilbert spaces are isometrically isomorphic, [ not true for the nonseparable Hilbert spaces appearing in QFT since the "degrees of freedom" are uncountable.] the physical structure necessarily resides in the structure of the algebra of operators.

If the operators of the algebra were bounded (in the norm topology), the C*-algebra of operators would be the well defined arena for QM. Life is never simple, and the operators of the CCR turn out necessarily to involve unbounded operators, as the early theorems of Wintner, Wielandt and Taussky show unequivocally. At least one of P and Q must be unbounded, and the derived quadratic number operator is always unbounded. To tame an unbounded operator, one which engages in "exomorphic" mappings, i.e., those whose ranges lie outside the domain of the mapping, it is necessary to find a "proper" restricted domain of definition D(M) for an operator M, so that M:D(M) → D(M). Then, M becomes a simile of a bounded operator. For several such operators Mj, constituting a set or closed algebra, the intersection of the D(Mj) is the common domain of bounded behavior for the set of Mj.
See also [Section III]. There are other mathematical pieces of machinery for working with the messiness of operator unboundedness, like rigged Hilbert spaces and von Neumann algebras, that do not fit here.

We understand QM immediately as a story of the time evolution of a quantum mechanical state of a system, i.e., of a vector of a projective Hilbert space [Appendix A]. While the fundamental equation of motion of the SHO in CM is a linear ODE for a function x(t) of a single variable, in QM it is a linear PDE for a function η(x, t) of two variables, one of parabolic type, i.e., behaviorally like the "heat equation". The equation type will become more important and interesting when relativistic equations are considered.

Quantum mechanical observables are represented by linear operators, usually polynomial functions of Q and P, acting on the projective Hilbert space. By requiring the operators to be formally Hermitean, two important things happen: first, the eigenvalues of the operators are real, not complex; second, eigenvectors belonging to distinct eigenvalues are mutually orthogonal relative to the Hilbert space inner product, and the eigenbasis of each operator can be normalized, so forming a complete orthonormal basis for the Hilbert space as well as for the projective Hilbert space.
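Both facts are easy to exhibit numerically; a sketch (numpy; the random 5x5 matrix is just a stand-in for a formally Hermitean observable):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = (A + A.conj().T) / 2                 # formally Hermitean: H equals its adjoint

evals, evecs = np.linalg.eigh(H)         # eigh exploits Hermiticity

assert np.allclose(evals.imag, 0.0)                      # real eigenvalues
assert np.allclose(evecs.conj().T @ evecs, np.eye(5))    # orthonormal basis
```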

Why bother with the actual details of this, when it seems the Wikipedia article on the quantum harmonic oscillator has done a rather good job?

Quantum SHO in n dimensions

Similarly, see the Wikipedia article on the Quantum Mechanical Oscillator


Symmetries of Equations v. Symmetries of Their Solutions

Quantizing The Damped SHO

To ask for the theory or mathematics of a quantized damped harmonic oscillator is a trick question. To see this, notice that the necessary damping term is a frictional or dissipative term. While this poses no particular conceptual problem in CM, in QM the failure of the conservation of energy of a particle (which is now extended throughout space), which is exactly what a frictional term expresses, means a failure of the fundamental Schrödinger equation, itself a statement of the conservation of energy. (In fact, all the dynamical laws of physics are essentially statements of the conservation of energy: Newton's second law, the first law of thermodynamics, the Einstein equations of general relativity.)

QM is a unitary theory, meaning that the propagator in time for quantum states is a unitary operator, whose unitarity implies the conservation of probability (the length of the state vector in a projective Hilbert space). Were QM not unitary, we would be using the full Hilbert space and not its projectivization.

That dissipated energy not only disappears from the "particle system", it disappears from the known universe, which is to say that neither energy nor probability is conserved in time, and without probability conserved the theory is necessarily nonunitary and so is not under the protection of QM. In QM, in a subtle way, conservation of energy is closely tied to conservation of probability; think thermodynamically in terms of available energy states, and that might be slightly clearer.

The dissipation is thermal in nature, and a way has been developed to rescue the problem by appealing to the theory of quantum thermal field dynamics, and by attaching to the system a thermal reservoir in such a way that unitarity is restored to the coupled system.

One way of restoring unitarity, and probably the simplest, is to attach another oscillator which absorbs the energy that the first one loses, allowing it to represent the thermal reservoir. There is physical truth to this picture, so it is not quite as artificial as it may appear at first. That is to say that the lost energy must actually go *someplace*.

On the other hand, one might understand that the failure of unitarity may not be so dire, conceptually, as it appears to be, and that a correction for this overly burdensome requirement is actually not necessary, but certainly convenient in a successor theory of quantum mechanics.

Unitarity is not only associated with conservation of probability in time propagation, but also with invariance/covariance of the equations of motion under transformations associated with a spatial change of observer. If every observer is presumed to be able to normalize his observed frequency distribution, or any other probability distribution, what does the unitarity requirement mean, and why is it necessary?

The question of the necessity of unitarity comes again to mind when one recalls that all physically real systems that we model as isolated (closed) are, in fact, open systems connected to the rest of existence. We have merely drawn a convenient distinction between "inside the system" and "outside the system".

Unitarity and its denial arises again in the context of relativity theory.

Let us look at a few of the results from the seemingly stable Wikipedia article.

Massive Special Relativistic Oscillator

The idea of an oscillator in SR is not a trivial idea, which is why textbooks often avoid the obvious concept altogether. The problems and results are enough to make you wonder about both QM and SR.

One reason for the nontriviality is clear simply from looking at the nonrelativistic simple oscillator solutions which are analytic functions for which no time derivative vanishes; there is more than just acceleration involved.

Another reason for the nontriviality is that "harmonicity is not a relativistically invariant concept". [A rather perspicuous turn of phrase due to Rick Fine at Columbia Univ. many years ago that summed up one of my more lengthy explanations.]

I will assume here a basic working knowledge of SR. Setting the problem up explicitly below, a frame, "our frame", is fixed in which the center of force sits at the origin. x(t) is the displacement that we observe, and 't' is our proper (comoving) time relative to some clock fixed in position in our frame. Notice that this is an attempt to "quantize" the relativization, and not to "relativize" the quantization.

The velocity v(t) has two interpretations: it is the velocity of the oscillating particle, and it is the relative velocity between our fixed frame of our observation and the comoving frame of the particle.

The fact that v(t) is indeed a function of time does not mean that the Lorentz transformation between the frames cannot be made; that transformation simply becomes time dependent. The time dependency *does* mean that what would otherwise be invariant under further transformations is now specifically *not* invariant.

What we observe from this fixed frame is what we observe, and what an observer in another relatively moving frame is what he observes. However, the tensorial invariants and forms of the equation of motion will, generally, differ. To rectify this unpleasantness, the general theory of relativity and particularly its general covariance is required. We will consider the general theory context later.
To begin, this is what one can say within the framework of SR:

   One can write, correctly, in "our frame" or "lab frame"
   that fixes the center of force, even in SR, the Newtonian equation

		dp/dt + kx  =  0.

   Since relativistically,

	                       m(0) v
	p  =  m(v) v  =  -------------------
	                 (1 - (v/c)²)^(1/2)

	                 m(0) dv/dt
	dp/dt     =  -------------------
	             (1 - (v/c)²)^(1/2)

	                 (-1/2)  m(0) v
	             + ------------------- ( - 2 v/c² ) dv/dt
	               (1 - (v/c)²)^(3/2)

	                (1 - (v/c)²)              v²/c²
	=  m(0) dv/dt [------------------- + -----------------]
	               (1 - (v/c)²)^(3/2)   (1 - (v/c)²)^(3/2)

	       m(0) dv/dt           m(v) dv/dt
	=  -------------------  =  -------------
	   (1 - (v/c)²)^(3/2)      (1 - (v/c)²)

    then, the equation of motion

	dp/dt  =   - k x(t)

    becomes simply

         m(v) dv/dt
	-------------  + k x(t)  =  0
        (1 - (v/c)²)


          (d/dt)² x(t) + k/m(v) (1 - (v/c)²) x(t)  =  0

   where an "effective relativistic frequency" w(v) is defined

	w²(v)  :=  k/m(v) (1 - (v/c)²)
	         =  k/m_0 (1 - (v/c)²)^(3/2)

	w(v)  :=  (k/m_0)^(1/2) (1 - (v/c)²)^(3/4)
	       =  w_0 (1 - (v/c)²)^(3/4)

   But, notice that we have naïvely assumed that the spring constant k
   is still a constant; it has units of

	[E]/[L²]  =  [M]/[T²]

   While one could continue work on solving the above equation, it makes
   better sense to understand that k should actually have a v dependency,
   and also to notice that w has units of 1/[T], and that the units we
   are using to measure time are the invariant units of proper time of our
   fixed frame.  If all that is so, then the actual frequency of the
   oscillator cannot have a v dependency, and so, in order that the
   equation reduce properly in the v << c regime, we must make the
   replacements

		k  ->  k(v)  =  k_0 / (1 - (v/c)²)^(1/2)
		m(v)  =  m(0) / (1 - (v/c)²)^(1/2).

   What we cannot escape in a mechanical situation of SR is that the spring
   also moves and changes its observed characteristics in the course of
   the periodic motion.  This a priori integrated effect is accounted for
   on a macroscopic level by the above replacement transformation of k.

   Then, the correct relativistic equation of motion becomes,

  	|                                               |
        | (d/dt)² x(t) + (w_0)² (1 - (v/c)²) x(t)  =  0 |
	|                                               |

   with an invariant (proper time) frequency defined by

	  w_0  :=  (k_0/m_0)^(1/2)

   as of old.  As one might expect, for (v/c) << 1, the correct
   equation of motion is approximated by that of the Newtonian harmonic
   oscillator; this is clearly true when (v/c)² can be neglected.
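The boxed equation also integrates easily numerically; a sketch (classic RK4, units c = w_0 = 1; the amplitudes, step size, and duration are arbitrary choices) showing the Newtonian limit at small amplitude, and that |x| never exceeds the amplitude A:

```python
import math

def accel(x, v, w0=1.0, c=1.0):
    """The boxed equation:  x'' = - w0**2 * (1 - (v/c)**2) * x"""
    return -w0**2 * (1.0 - (v / c)**2) * x

def integrate(x, v, dt=1e-3, t_end=20.0):
    """Classic RK4 on the pair (x, v); returns the sampled x(t)."""
    xs = []
    for _ in range(int(t_end / dt)):
        k1x, k1v = v,               accel(x, v)
        k2x, k2v = v + 0.5*dt*k1v,  accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = v + 0.5*dt*k2v,  accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = v + dt*k3v,      accel(x + dt*k3x,     v + dt*k3v)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        xs.append(x)
    return xs

x_small = integrate(1e-3, 0.0)   # v << c: indistinguishable from A cos(w0 t)
x_big   = integrate(0.8, 0.0)    # relativistic: slower period, but |x| <= A
```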

   NB: Getting to this corrected equation can be done by more formal
   and less circuitous paths and methods, but I happen to like this
   way because it is more physical and instructive as to how initial
   physical intuition can fail, and be misleading.  More of the same
   follows below.

   Poincaré Transformations:

   The variable 't' does not appear explicitly in the equation;
   so, substituting

			t  ->  t + t_0

   does not alter the form of the equation: it is form invariant
   under time translations.  A substitution

			x  ->  x + x_0

   on the other hand, leaves the equation form invariant only if the
   center of force is translated along with it, since the restoring
   term is anchored to the origin.  A Lorentz transformation in one
   dimension is simply a "boost" transformation to a frame differing
   by a constant velocity, call it v_0.  Then,

			x  ->  x (1 - (v_0/c)²)^(1/2)
			t  ->  t / (1 - (v_0/c)²)^(1/2)

   Is a velocity addition formula appropriate here?
   No - there are only two frames in play: our fixed frame and the
   comoving frame of the particle.

   Though we still have a linear ODE, the coefficient of the x(t) term
   is clearly not constant.  At such a point, the game is "noticing"
   and cleverness - or tables of integrals and recipes for ODEs.
   We can get a first integral of the equation, which is usually helpful,
   by rewriting it as

	      dv
	-------------  =  - (w_0)² x(t) dt
	(1 - (v/c)²)

   This has the advantage of at least isolating the variable v on the
   LHS.  Fortuitously, remembering that v = dx/dt, there is an easy
   integrating factor (-2v/c²), so that

	 - 2 v/c² dv
	-------------  =  (2/c²) (w_0)² dx/dt x(t) dt
	(1 - (v/c)²)

	               =  (2/c²) (w_0)² x dx

   Multiplication by the integrating factor on both sides yields a
   separable equation where both sides can be separately integrated.

   Integrating both sides,

	ln( 1 - (v/c)² )  =  (2/c²) (w_0)² (1/2)x² + const.

	x  =  (+|-) sqrt( const. + (c² /(w_0)²) ln( 1 - (v/c)²) )

   When v=0, then, x = ±sqrt( const. ), but v=0 is a turning point where
   the excursion is maximal, so for the first constant of integration,
   v=0 ===> ln( 1 - (v/c)²) = ln(1)  = 0, and

			const.  =  A² 

   the square of the amplitude.  Then, from

	x(t)  =  (+|-) sqrt( A² + (c²/(w_0)²) ln( 1 - (v/c)²) )

             If x=0 then,
		               A²  = - c²/(w_0)² ln( 1 - (v_max/c)² )
             ln( 1 - (v_max/c)² )  =  - (w_0)²/c² A²
               ( 1 - (v_max/c)² )  =  exp( - (w_0)²/c² A² )
                       (v_max/c)²  =  1 - exp( - (w_0)²/c² A² )

	v_max  =  c (1 - exp( - (w_0)²/c² A² ))^(1/2)

   gives the maximal oscillator velocity at the point of equilibrium.
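The expression can be checked directly: v_max < c for any finite amplitude, and for w_0 A << c it reduces to the Newtonian v_max = w_0 A.  A sketch (units c = 1):

```python
import math

def v_max(w0, A, c=1.0):
    """v_max = c * (1 - exp(-(w0*A/c)**2))**0.5"""
    return c * math.sqrt(1.0 - math.exp(-(w0 * A / c)**2))

# Newtonian limit: v_max ~= w0*A when w0*A << c;
# v_max < c for any finite amplitude, and it grows with amplitude.
```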

   Pushing for a second integral:

   First, backing up, exponentiating, and a little rearranging, ...

	x²  =  A² + (c²/(w_0)²) ln( 1 - (v/c)²)

	ln( 1 - (v/c)²)  =  (x² - A²) (w_0/c)²

	1 - (v/c)²  =  exp( (x² - A²) (w_0/c)² )

	(v/c)²  =  1 - exp( (x² - A²) (w_0/c)² )

	(1/c)² (dx/dt)²  =   ( 1 - exp( (x² - A²) (w_0/c)² ) )

	(1/c)² exp( -(1/2) (x² - A²) (w_0/c)² ) (dx/dt)²  =

     - ( exp( +(1/2)(x² - A²) (w_0/c)² ) - exp( -(1/2)(x² - A²) (w_0/c)² ) )

	(1/c)² exp( -(1/2) (x² - A²) (w_0/c)² ) (dx/dt)²  =

                  - 2 sinh( +(1/2)(x² - A²) (w_0/c)² )

 	f(x)  :=  ( (1/2) (x² - A²) (w_0/c)² )
	x      =  ( A² + 2 (c/w_0)² f )^(1/2)
	df     =  (w_0/c)² x dx

   the equation becomes, with these substitutions

	(1/2) (1/c)² exp( - f(x) ) / sinh( f(x) )  (dx)²  =  - (dt)²

	1 / ( 1 - exp( 2 f(x) ) )  (dx)²  =  (c dt)²

	1 / ( 1 - exp( 2 f(x) ) )^(1/2)  dx  =  (+|-) c dt

   Finally, we have a separable first order ODE, and need to perform an
   indefinite integral of the form,

	  f                 dx
	  |  ------------------------------
	  j  ( 1 - exp( a x² - b ) )^(1/2)


	a  :=  (w_0/c)²

	b  :=  (Aw_0/c)²  =  A² a

   The integrand has no singularity at x=0, but does seem to have
   singularities at x = ±sqrt(b/a) = ±A; i.e., at the turning points.

   Compare the turning point singularities discussed in the
   section on the Plane Pendulum, regarding general oscillatory
   motion in Newtonian mechanics.

   For 0 < (b - ax²) << 1, exp(ax² - b) = 1 - (b - ax²) to first order,
   and the integrand approximates 1/(b - ax²)^(1/2).  For a > 0, the
   indefinite integral of this approximating integrand containing the
   singularity is:

	(1/sqrt(a)) arcsin( sqrt(a/b) x )

   showing that the turning point singularities are actually integrable,
   and not a source of problematic physics.
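The integrability can be confirmed by actually computing the quarter period T/4 = (1/c) | dx / (1 - exp(ax² - b))^(1/2), taken from 0 to A; the substitution x = A sin(u) absorbs the turning point singularity.  A sketch (midpoint rule, units c = 1; the step count is an arbitrary choice):

```python
import math

def quarter_period(w0, A, c=1.0, n=100_000):
    """T/4 = (1/c) * integral_0^A dx / (1 - exp(a x**2 - b))**0.5,
    a = (w0/c)**2, b = a A**2.  Substituting x = A sin(u) absorbs the
    integrable singularity at the turning point x = A."""
    a = (w0 / c)**2
    b = a * A * A
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):                     # midpoint rule in u
        u = (i + 0.5) * h
        x = A * math.sin(u)
        # 1 - exp(a x**2 - b)  ==  -expm1(a x**2 - b), computed stably
        total += A * math.cos(u) / math.sqrt(-math.expm1(a*x*x - b)) * h
    return total / c

# Newtonian limit: T/4 -> (pi/2)/w0; relativistically the period is longer.
```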

   A Taylor expansion of the full integrand about x=0, to eighth order
   in x is (courtesy of maxima under gcl running on Linux):

	   exp(b/2)            exp(b/2)  a
      ------------------ + ------------------- x^2 +
      sqrt( exp(b) - 1 )    2 (exp(b) - 1)^3/2

      exp(b/2) (2 exp(b) + 1)  a^2
      ----------------------------- x^4 +
           2^3 (exp(b) - 1)^5/2

      exp(b/2) (4 exp(2b) + 10 exp(b) + 1) a^3
      ---------------------------------------- x^6
              (3) (2^4) (exp(b) - 1)^7/2

      exp(b/2) (8 exp(3b) + 60 exp(2b) + 36 exp(b) + 1) a^4
      ----------------------------------------------------- x^8
                   (3)(4) (2^5) (exp(b) - 1)^9/2

      where the general term has the form:

		exp(b/2)  a^k  P_(k-1)( exp(b) )
	      ----------------------------------- x^(2k)
	       2^k  k!  (exp(b) - 1)^((2k+1)/2)

      P_(k-1)( exp(b) ) being a polynomial of degree k-1 in exp(b),
      with P_0  =  1.

      By a standard theorem of elementary calculus, where the series
      converges, it converges absolutely, and so a term by term
      integration is permissible.

      The integral of the sum will be an expansion in odd powers
      of x.
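The leading coefficients can be spot-checked without maxima by differencing the integrand numerically; a sketch comparing the constant and x² coefficients above against direct evaluation (the values a = b = 1 are arbitrary choices):

```python
import math

def F(x, a=1.0, b=1.0):
    """The integrand 1 / (1 - exp(a x**2 - b))**0.5, near x = 0."""
    return 1.0 / math.sqrt(-math.expm1(a * x * x - b))

a, b = 1.0, 1.0
# the claimed x**0 and x**2 coefficients of the expansion:
c0 = math.exp(b/2) / math.sqrt(math.exp(b) - 1.0)
c2 = math.exp(b/2) * a / (2.0 * (math.exp(b) - 1.0)**1.5)

h = 1e-3                                  # central second difference
c2_numeric = (F(h, a, b) - 2.0*F(0.0, a, b) + F(-h, a, b)) / (2.0 * h * h)

assert abs(F(0.0, a, b) - c0) < 1e-12
assert abs(c2_numeric - c2) < 1e-5
```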

      So, integration of the equation of motion for a relativistic
      "simple harmonic" oscillator is possible, but it is not
      apparently possible in any closed or convenient form; it is
      merely "infinitely" approximable.  This does not, of course,
      accommodate the quantum theoretical requirements; the notion
      of "quantizing" a special relativistic harmonic oscillator
      is, to a ridiculous degree, not uniquely defined.

      Nevertheless, let us look at what the problem of "canonical
      quantization" will look like, by setting up the Lagrangian
      and passing to the Hamiltonian formalism of a coordinate with
      a canonically conjugate momentum.

   A Lagrangian for the relativistic oscillator:

	L(q, dq/dt)  :=

   m_0 c² exp( - (1/2)(w_0 q/c)² )  x
	 (1/c dq/dt arcsin(1/c dq/dt) + ( 1 - (1/c dq/dt)² )^(1/2))

	- m_0 c²

   will give the above equation of motion back, from the standard
   equation of the Lagrangian functional

		d/dt ( DL/D(dq/dt) ) - DL/Dq  =  0

   obtained in the usual ways through methods of the calculus of variations
   in determining the path for an extremum of the action integral

	  |  L(q, dq/dt) dt

   by requiring that its variation is equal to zero:
	D |  L(q, dq/dt) dt  =  0

   For now, lacking the customary l.c. delta, I use 'D' to indicate
   functional variation.  It is worth remembering that in Newtonian
   mechanics the extremum obtained is a minimum, while in relativistic
   mechanics, and in relativistic fields, it is a maximum.  To see
   this, one must of course determine the sign of the second
   variational derivative; if it is negative, you have a maximum, and
   if it is positive you have a minimum.  I will leave that
   demonstration as an exercise.

   The canonical momentum as always defined through the Lagrangian is

	p  :=  DL/D(dq/dt)

	    =  m_0 c exp( - (1/2)(w_0 q/c)² ) arcsin( 1/c dq/dt )

    [This is NOT the same (Newtonian) p as above.]

    By computing dp/dt from this and seeing that

	DL/Dq  = - m_0 (w_0)² q exp( - (1/2)(w_0 q/c)² )   x

	 (1/c dq/dt arcsin(1/c dq/dt) + ( 1 - (1/c dq/dt)² )^(1/2))

   the previous equation of motion results as the Lagrangian equation
   of motion, with notation change, x -> q

          (d/dt)² q(t) + (w_0)² (1 - (1/c dq/dt)²) q(t)  =  0

   The Hamiltonian is then defined in terms of the Lagrangian by

		H  :=  p dq/dt - L

		    =  - m_0 c² exp( - (1/2)(w_0 q/c)² )  x
			  cos( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )

		    + m_0 c²

   Expanding the cosine in a Taylor series,

	H  =  p²/(2 m_0) + (1/2)m_0(w_0 q)²

	   - (1/(4c²)) ( (1/2)m_0(w_0 q)⁴ - (w_0 q p)²/m_0 + (1/6)p⁴/(m_0)³

		     ... )

   The first term is the Newtonian K.E. contribution, the second term is
   the Newtonian P.E. contribution.  The remaining terms are combined
   relativistic corrections to both the Newtonian K.E. and P.E.

   Notice, however, that there is no explicit rest mass term (m_0 c²)
   in the expansion, and remember that p is the canonical momentum,
   not the Newtonian momentum.
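The expansion can be checked numerically against the closed form for H; a sketch (units m_0 = c = w_0 = 1; note the correction bracket carries an overall 1/c², restoring units):

```python
import math

def H_exact(q, p, m0=1.0, c=1.0, w0=1.0):
    """H = m0 c**2 (1 - exp(-g) cos(p/(m0 c) exp(+g))), g = (w0 q / c)**2 / 2"""
    g = 0.5 * (w0 * q / c)**2
    return m0*c*c * (1.0 - math.exp(-g) * math.cos(p/(m0*c) * math.exp(g)))

def H_series(q, p, m0=1.0, c=1.0, w0=1.0):
    """Newtonian terms minus the combined relativistic correction."""
    newtonian = p*p/(2*m0) + 0.5 * m0 * (w0*q)**2
    correction = (0.5*m0*(w0*q)**4 - (w0*q*p)**2/m0
                  + p**4/(6*m0**3)) / (4*c*c)
    return newtonian - correction

# for small q and p the two agree to sixth order in (q, p)
```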

   A few fairly simple, but important observations should be made:

	0) H has no explicit time dependence, so it is a constant of
	   the motion; along the trajectory of amplitude A its value is
	   m_0 c² (1 - exp( - (1/2)(w_0 A/c)² )).

	1) 0  <=  H  <  m_0 c² (H is a bounded positive function.)

	   NB: this is with respect to the proper time of the observer
	       whose frame is stationary and confluent with the frame
	       of the oscillator's center of force.

	2) dq/dt = 0  ===>  p  =  0, and
	   H  =  m_0 c² (1 - exp( - (1/2)(w_0 q/c)² ))  (turning points)

	   In Newtonian physics, we expect a value equal to the
	   maximal potential energy; relativistically, we might expect
	   the total energy also to include a term (m_0 c²).  The first
	   expectation holds only to lowest order in (w_0 q/c)²; the
	   second does not happen at all.

	3) dq/dt  -> (+|-) c  ===>  H  ->  m_0 c², and

		p  =  (+|-) (1/2)(m_0 c pi) exp( - (1/2)(w_0 q/c)² )

	4) q = 0  ===>  H  =  m_0 c² ( 1 - cos( p/(m_0 c) ) )

	5) p = 0  ===>  H  =  - L(p=0)  =

				 m_0 c² (1 - exp( - (1/2)(w_0 q/c)² ))

	   [If you want an expression of the "relativistic massive
	    energy of the oscillator", this would appear to
	    be the best choice, and it is an interesting one that
	    looks like the usual particle rest energy with Gaussian
	    fluctuation correction.

	    Let us recall the Gaussian factor in the energy
	    eigenfunctions, in the solution of Schrödinger equation
	    for a Newtonian quantum mechanical SHO.

	    There are more interpretational things to play with here.]

	6) Unlike the Newtonian Hamiltonian for an SHO, which has no
	   ambiguity in the ordering of products of p and q (because
	   such terms simply don't exist), this Hamiltonian is rather
	   completely ambiguous regarding such orderings: hopes for
	   any cute and simple canonical quantization are dashed.

	7) One might, however, be less theoretically sophisticated,
	   and still employ the original Bohr rules for the quantization
	   of action.  Would this be a right thing to do?  Possibly,
	   but, even if it is not, it might be enlightening.

	8) If you have the idea that all the simple, even just formally
	   simple things have been figured out, you would be quite wrong.
	   There is always a wealth of dirt swept under many rugs; I am
	   merely giving a precious few of those rugs a minor beating.

   An easy conclusion from 0) and 1) is rather important and instructive:
   H, though a constant of the motion of this perfectly conservative
   system, is bounded by m_0 c², contains no rest energy term, and does
   not equal the relativistic energy gamma(v) m_0 c², which is unbounded.
   The lesson then is this:

	For any closed relativistic system cast in Canonical Hamiltonian
	form, it is generally *not* true that the Hamiltonian function
	represents the total conserved energy of the system.

	This is to say that as a general rule, "canonical quantization"
	is invalid for relativistic systems; this includes any
	quantization of the electromagnetic field, and most surely
	then quantum electrodynamics, seen as a canonical quantization.

   This needs to be emphasized: so much is made of the Hamiltonian
   as the energy of the system in classical mechanics, and especially
   in QM where the method of canonical quantization is preeminent, and
   even, sometimes wrongly, in QFT, that these almost elementary formal
   facts are overlooked.

   Do note that this is a massive oscillator, and that the conclusions
   regarding it may not be (or are not) the same for a massless oscillator,
   which in Newtonian physics cannot even exist.  You cannot "simply"
   take the limit of these expressions as m_0 goes to zero.


   The Canonical (Hamilton) Equations of Motion

   With H =

   - m_0 c² exp( - (1/2)(w_0 q/c)² )  cos( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )

	+ m_0 c²

	dH/dp  =  

    m_0 c² exp( - (1/2)(w_0 q/c)² ) exp( + (1/2)(w_0 q/c)² )  X

		(1/(m_0 c) sin( p/(m_0c) exp( + (1/2)(w_0 q/c)² ) )

	= c sin( p/(m_0c) exp( + (1/2)(w_0 q/c)² ) )


	DH/Dq  =

   - m_0 c² ( (d/dq) exp( - (1/2)(w_0 q/c)² ) ) X

		cos( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )

   - m_0 c² exp( - (1/2)(w_0 q/c)² ) X

		(d/dq) cos( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )


   =  + m_0 (w_0)² q exp( - (1/2)(w_0 q/c)² ) X

		cos( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )

      + m_0 c² exp( - (1/2)(w_0 q/c)² ) X

	sin( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) ) X

	( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) ) (w_0/c)² q


   =  + m_0 (w_0)² q exp( - (1/2)(w_0 q/c)² ) cos( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )

      + ((w_0)²/c) p q sin( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )

   where, in the second term, the factors exp( - (1/2)(w_0 q/c)² ) and
   exp( + (1/2)(w_0 q/c)² ) have cancelled.

   Finally, the two canonical equations are then:

	DH/Dq  =

   (w_0)² q [ m_0 exp( - (1/2)(w_0 q/c)² ) cos( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )

	+ (p/c) sin( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) ) ]

		=  - dp/dt

	DH/Dp  =   c sin( p/(m_0 c) exp( + (1/2)(w_0 q/c)² ) )

		=  + dq/dt

   Since always |v| < c, let sin( s )  :=  v/c  =  (1/c) dx/dt.  Then,

		cos( s )  =  (1 - (v/c)²)^(1/2)
		dv/dt  =  c ds/dt cos( s )
			dv  =  c ds cos( s )

   The equation of motion becomes,

          (d/dt) v(t) + (w_0)² (1 - (v/c)²) x(t)  =  0

	  c ds/dt cos( s ) + (w_0)² cos²( s ) x(t)  =  0

   or, since cos( s ) is never zero for |v| < c,

	  c ds/dt  + (w_0)² cos( s ) x(t)  =  0

   Differentiating with respect to t,

   c (d/dt)² s  + (w_0)² ( - sin( s ) ds/dt x(t) + cos( s ) dx/dt )  =  0

   and substituting

	x(t)  =  - c/(w_0)²  (ds/dt)/cos( s ),	dx/dt  =  c sin( s )

        c (d/dt)² s + c (ds/dt)² tan( s ) + (w_0)² c sin( s ) cos( s )  =  0

          (d/dt)² s(t) + (ds/dt)² tan( s ) + (w_0)² sin( s ) cos( s )  =  0


   Restating the original (uncorrected by k(v)) equation of motion,

	   (d/dt)² x(t)
	-------------------   =  - k/m(0) x(t)
	(1 - (v/c)²)^(3/2)

	   (d/dt)² x(t)
	---------------------------------   =  - k/m(0) x(t)
	(1 - (1/c)² (dx(t)/dt)²)^(3/2)

   If instead one tries a hyperbolic substitution,
   cosh( r )  :=  (1 - (v/c)²)^(1/2), then

		cosh²( r )  =  1 - (v/c)²
		(v/c)²  =  1 - cosh²( r )  =  - sinh²( r )
		(v/c)  =  ±i sinh( r )   --- (Seeming strangeness)

   the argument is forced to be imaginary: the hyperbolic substitution
   is the circular substitution sin( s ) = v/c in disguise, with
   r = ±i s.  Staying with the real, circular form, and noting that

	p  =  m(v) v  =  m(0) c tan( s )

   the uncorrected equation of motion above gives a first integral
   directly:

	c ds/dt sec²( s )  =  - k/m(0) x(t)

	d/dt ( tan( s ) )  =  - k/(m(0) c) x(t)

	tan( s )  =  - k/(m(0) c)  |  x(t) dt


Quantum Special Relativistic Oscillator

After the messy affair of trying to relativize a classical SHO, then showing how impossible it is to carry out a scheme of canonical quantization, it is bemusing to consider a different method of getting to a quantized special relativistic oscillator, by starting with the Schrödinger equation, the fundamental equation of QM, relativizing it, and then adding to it a harmonic potential energy.

Relativizing the Schrödinger equation is formally trivial in SR, looking at things from the Schrödinger picture: it means replacing the Newtonian equation of conservation of energy

	p²/(2m_0) + V(q)  =  E

with the cognate statement in SR,

	E²  -  c² p²  =  m_0² c⁴


	p²  -  (E/c)²  =  - m_0² c²

and then making the same magical operator substitutions as one made for the Newtonian case.

Schrödinger, in fact, did this first, for the "free particle" case in three spatial dimensions; the result is routinely called the Klein-Gordon (K-G) equation. Mathematically, this is not so trivial as it might appear: the Newtonian linear parabolic PDE now becomes a relativistic linear hyperbolic PDE.

The initial conditions for a parabolic PDE require an initial function defined at t=constant, while the initial conditions for a hyperbolic PDE require both the initial function and a consistent set of partial derivatives of the initial function, all defined on a spacelike hypersurface of Minkowski space. The complexity of specifying the initial conditions has increased considerably.

Among further details, this procedure tells us that the procedures of relativization and quantization are NOT generally commutative, even in the simplest case of a free particle.

The further problems, most generally, are that the energy eigenvalues that could be bounded below in the Newtonian situation are no longer bounded below, and that the positive probabilities that could be calculated in the Newtonian case (stemming from the Euclidean norm of E3) are no longer necessarily positive, since they stem from the pseudoeuclidean norm of Minkowski space.

In current physical theory, the first problem is solved, conceptually, by allowing that for every particle that can have energy E, there is an antiparticle that can have energy (-E), also allowing that the phase of the quantum state is associated with electric charge.

Massless particles (as the photon), which now become allowed, are their own antiparticles. Electrons, which are massive, have positrons as their antiparticles. This theory is due to Dirac. With all the symmetries involved, Feynman described a positron as an electron running backward in time. Experimental evidence seems to tell us that these antiparticles actually do exist.

The second problem is conceptually solved by constructing quantum field theory from the idea that what seemed to be a mathematical theory for a single particle cannot exist, and that the equation alone refers only to a "single particle sector" of a quantum field theory (QFT) of an indefinite number of particles.

In doing QFT, the separable Hilbert space, already a source of mathematical problems in QM with its unbounded operators, becomes nonseparable; the representations of the commutation relations are then not all unitarily equivalent, and there are particle representations and nonparticle (myriotic - K. Friedrichs) representations.

Am I lying a bit here by ignoring the mathematical difficulties of carrying this entire program of quantization out in such a way that it is mathematically and physically sound? Believe it!


The Damped Relativistic Oscillator

   An obvious equation of motion is obtained by observing the
   nonrelativistic damped equation

	  (d/dt)² q + (b/m_0) dq/dt + (k/m_0) q  =  0

   and the simple relativistic equation

          (d/dt)² q(t) + (w_0)² (1 - (v/c)²) q(t)  =  0

   to obtain

  (d/dt)² q(t) + (1 - (1/c²) q'(t)²) ((b/m_0) q'(t) + (k/m_0) q(t))  =  0
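A quick numerical look (RK4 sketch, units m_0 = c = 1; the values b = 0.1, k = 1, and the initial amplitude are arbitrary choices) confirms the expected behavior: the oscillation decays, and |dq/dt| stays below c:

```python
import math

def qddot(q, v, b=0.1, k=1.0, m0=1.0, c=1.0):
    """q'' = -(1 - (v/c)**2) * ((b/m0) * v + (k/m0) * q)"""
    return -(1.0 - (v/c)**2) * ((b/m0)*v + (k/m0)*q)

def run(q, v, dt=1e-3, steps=60_000):
    """RK4; returns (largest |q| over the second half, largest |v| seen)."""
    peak_late, v_seen = 0.0, 0.0
    for i in range(steps):
        k1q, k1v = v,               qddot(q, v)
        k2q, k2v = v + 0.5*dt*k1v,  qddot(q + 0.5*dt*k1q, v + 0.5*dt*k1v)
        k3q, k3v = v + 0.5*dt*k2v,  qddot(q + 0.5*dt*k2q, v + 0.5*dt*k2v)
        k4q, k4v = v + dt*k3v,      qddot(q + dt*k3q,     v + dt*k3v)
        q += dt * (k1q + 2*k2q + 2*k3q + k4q) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        v_seen = max(v_seen, abs(v))
        if i > steps // 2:
            peak_late = max(peak_late, abs(q))
    return peak_late, v_seen

peak_late, v_seen = run(0.9, 0.0)   # released at rest from q = 0.9
```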

   If the undamped SR SHO equation was fun, this will be even more fun
   since it is even more nonlinear - cubic in q' = (dq/dt).  Its full
   analysis would make a very fine MS thesis project.  If you liked
   that one, then start looking at the problems associated with the
   angular momentum field tensor for the electromagnetic field, and
   the more general EM equations where the permeability and
   permittivities are more realistically 2nd order tensor fields
   with space and time dependencies.

   If you think that all the "easy" or "simple" questions of classical
   theoretical physics have been answered and that they are well known,
   you would be making a very large mistake.  We do not even really know
   how EMT connects correctly with GR, the well known mathematical
   solutions to the cases of spherical and cylindrical symmetry to
   "the problem" notwithstanding.

Oscillator in GR

The same essential problems arise for a SHO in GR as already arise in SR: harmonicity is not a relativistically invariant concept. It depends on a specific, immutable force function acting on a particle that is almost always (except at the equilibrium point) accelerating.

The trick of making sense is to choose a "frame" and define harmonicity within that frame, letting the devil take the hindmost - or something like that.


Various related solutions

The Cosmological Constant in GR

The FCCR Oscillator

The construction of a system of Finite Canonical Commutation Relations (FCCR) in nxn Hermitean matrices that fulfills all the basic requirements of a quantum theory is centered on the idea of an oscillator, and particularly on a simple harmonic oscillator that functions as a physically bounded clock with a finite number of Q degrees of freedom.

The one significant divergence from QM is that a genuine time operator not only can, but does exist in a Fourier operator relation to the energy.

With the fundamental FCCR commutator

		[Q(n), P(n)]  =  i h-bar G(n)

the oscillator Hamiltonian that is identified with the oscillator energy is given by the standard form

	H_osc(n)  :=  w h-bar/2 ( Q²(n) + P²(n) )

	           =  w h-bar ( N(n) + (1/2)G(n) )

   The time operator T(n) has the same spectrum as the truncated number
   operator N(n), and is related by the Fourier transform UPSILON(n),

		 UPSILON(n) N(n) UPSILON!(n)  =  T(n)

   with matrix elements,

		<n, k| UPSILON(n) |n, j>  =  1/sqrt(n)  exp( i (2 pi)/n kj )


   and T(n) eigenvectors |t_k> satisfying

		T(n) |t_k>  =  k |t_k>
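
   These relations are easy to check numerically. The sketch below
   (pure Python, with throwaway matrix helpers of my own; hbar plays
   no role) builds UPSILON(n) from the matrix elements above, confirms
   its unitarity, and confirms that the k-th column of UPSILON(n),
   i.e. |t_k> = UPSILON(n)|n, k>, is an eigenvector of
   T(n) = UPSILON(n) N(n) UPSILON!(n) with eigenvalue k:

```python
import cmath
import math

def upsilon(n):
    """<n,k| UPSILON(n) |n,j> = exp(i (2 pi / n) k j) / sqrt(n)."""
    return [[cmath.exp(2j*cmath.pi*k*j/n)/math.sqrt(n) for j in range(n)]
            for k in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][m]*B[m][c] for m in range(n)) for c in range(n)]
            for r in range(n)]

def dagger(M):
    n = len(M)
    return [[M[c][r].conjugate() for c in range(n)] for r in range(n)]

def matvec(M, v):
    n = len(v)
    return [sum(M[r][c]*v[c] for c in range(n)) for r in range(n)]

n = 5
U = upsilon(n)

# Unitarity: UPSILON(n) UPSILON!(n) = identity.
UU = matmul(U, dagger(U))
err_unitary = max(abs(UU[r][c] - (1 if r == c else 0))
                  for r in range(n) for c in range(n))

# T(n) = UPSILON(n) N(n) UPSILON!(n), with N(n) = diag(0, 1, ..., n-1).
N = [[(r if r == c else 0) for c in range(n)] for r in range(n)]
T = matmul(matmul(U, N), dagger(U))

# The k-th column of UPSILON(n) is |t_k>, with T(n)|t_k> = k|t_k>.
err_eigen = 0.0
for k in range(n):
    tk = [U[r][k] for r in range(n)]
    Ttk = matvec(T, tk)
    err_eigen = max(err_eigen,
                    max(abs(Ttk[r] - k*tk[r]) for r in range(n)))

print(err_unitary, err_eigen)
```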

   The computed matrix elements of H_osc(n) in the T(n) eigenbasis
   are [Theorem 8.10]

	<t_k | H_osc(n) | t_j>  =  

	┃ i h-bar w/2 exp( i (2 pi)/n (k-j) ) cot( pi(k - j)/n ), k!=j
	┃ h-bar w/2 (n-1),                                         k=j
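
   Theorem 8.10 can also be verified numerically for small n, under
   one assumption not restated above: that Q(n) and P(n) are built
   from the truncated lowering operator a|k> = sqrt(k)|k-1> in the
   usual way, Q = (a + a!)/sqrt(2), P = (a - a!)/(i sqrt(2)), with
   hbar = 1. The sketch below conjugates H_osc(n) into the T(n)
   eigenbasis with the DFT matrix UPSILON(n) and compares against the
   cotangent formula:

```python
import cmath
import math

def matmul(A, B):
    """Product of two square complex matrices (lists of lists)."""
    n = len(A)
    return [[sum(A[r][m]*B[m][c] for m in range(n)) for c in range(n)]
            for r in range(n)]

def dagger(M):
    """Conjugate transpose."""
    n = len(M)
    return [[M[c][r].conjugate() for c in range(n)] for r in range(n)]

def fccr_H(n, w=1.0):
    """H_osc(n) = (w/2)(Q(n)^2 + P(n)^2), hbar = 1, built from the
    truncated lowering operator a|k> = sqrt(k)|k-1> (an assumed
    construction, not restated in the text above)."""
    a = [[0j]*n for _ in range(n)]
    for k in range(1, n):
        a[k-1][k] = complex(math.sqrt(k))
    ad = dagger(a)
    Q = [[(a[r][c] + ad[r][c])/math.sqrt(2) for c in range(n)]
         for r in range(n)]
    P = [[(a[r][c] - ad[r][c])/(1j*math.sqrt(2)) for c in range(n)]
         for r in range(n)]
    QQ, PP = matmul(Q, Q), matmul(P, P)
    return [[(w/2)*(QQ[r][c] + PP[r][c]) for c in range(n)]
            for r in range(n)]

def theorem_8_10(n, k, j, w=1.0):
    """The claimed <t_k| H_osc(n) |t_j>, hbar = 1."""
    if k == j:
        return complex((w/2)*(n - 1))
    return (0.5j*w*cmath.exp(2j*cmath.pi*(k - j)/n)
            / math.tan(math.pi*(k - j)/n))

worst = 0.0
for n in range(2, 8):
    U = [[cmath.exp(2j*cmath.pi*k*j/n)/math.sqrt(n) for j in range(n)]
         for k in range(n)]
    Ht = matmul(matmul(dagger(U), fccr_H(n)), U)   # <t_k| H_osc |t_j>
    for k in range(n):
        for j in range(n):
            worst = max(worst, abs(Ht[k][j] - theorem_8_10(n, k, j)))
print("max deviation from Theorem 8.10:", worst)
```

   Note that with this construction H_osc(n) is diagonal in the number
   basis, with entries (m + 1/2) for m < n-1 and (n-1)/2 for the top
   state, so its trace matches the n equal diagonal entries (n-1)/2 in
   the T(n) eigenbasis.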

These are unnormalized transition amplitudes where the full oscillator energy motivates the transitions, and the T(n) eigenvectors represent phases (pointer positions of a digital clock) of the oscillator "as a Q clock". There is a self reference and self consistency to this clock that does not refer to an external clock, or to any arbitrary "progression of time". The only exterior reference is to a single quantum of time within which a transition takes place.

The dimension n of the matrix algebra is a measure of the size of the system/oscillator/clock relative to the Planck regime: the period of the clock is (n tau_0), so the division of the clock into phase positions with phase separations (2 pi)/n is really a uniform separation of pointer positions by tau_0. The limit of unbounded n, which is the strong operator topology limit to CCR, enforces the physical unboundedness of the system/oscillator/clock. To say that this is physical nonsense (and therefore that QM, based on continua in space and time, is ultimately physical nonsense) should be gratuitous.

Nevertheless, most known physics that is available for probing is well above the Planck level, so the models through continua are still largely accurate and far more expedient than the more laborious finite mathematics upon which they are founded. But, to understand the Planck regime, such models must be abandoned. That is exactly what is done as we continue.


The transition probabilities

For n=2, the transition probabilities form a Bernoullian distribution over many clicks, giving the observed rate of clicks relative to the clock.

n=4 is the transition point for a genuine clock, in that the favored transition t_k -> t_(k+1) actually means something.

In an n-asymptopia of "large clocks", the Newtonian time hypothesis emerges, even for one Planck click. This would tell how truly good the Newtonian hypothesis turns out to be. For mathematical details, viz. Quantum Theoretic Origins of Newtonian Time.

The FCCR Oscillator in m dimensions


Created: August 5, 2002
Last Updated: July 16, 2005
Last Updated: October 4, 2005
Last Updated: October 11, 2006
Last Updated: July 7, 2007
Last Updated: January 5, 2010
Last Updated: March 30, 2010
Last Updated: October 26, 2011
Last Updated: August 16, 2015