
The Eigenvalues and Eigenvectors of
Q(n), P(n) and auxiliary
Sine and Cosine Operators ~s(n) and ~c(n)




    <Theorem 9.1>:
   For a matrix of the form:

             |b_0 a_0 0    ...                  0      |
             |c_1 b_1 a_1 0    ...              0      |
             |0   c_2 b_2 a_2 0    ...          0      |
     M  =    |0  0    c_3 b_3 a_3 0 ...0        0      |         (9.1)
             |0       ...              a_(n-3)  0      |
             |0            0  c_(n-2)  b_(n-2)  a_(n-2)|
             |0            0  0        c_(n-1)  b_(n-1)|

   Define the sequence of polynomials P_k(z) by the
   recursion relation



     z P_k(z)  =  a_k P_(k+1)(z) + b_k P_k(z) + c_k P_(k-1)(z)   (9.2)

   for k = 0, 1, 2, ...  with P_0(z) := 1 and P_(-1)(z) := 0.
   Then, the eigenvalues of the nxn matrix M are given by:

     P_n(z)  =  0                                                (9.3)

    [Goertzel 1960], p. 81.
   It also happens that all orthogonal polynomial sets satisfy a recursion
   relation of the form (9.2).
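
   A minimal numerical sketch of Theorem 9.1 (Python/numpy; the coefficients
   a_k, b_k, c_k below are illustrative choices, not taken from the text):
   it builds a matrix of the form (9.1), generates P_n(z) from the recursion
   (9.2), and checks that P_n vanishes at each eigenvalue of M.

import numpy as np

# Illustrative coefficients: any matrix of the form (9.1) will do; with
# a_k * c_(k+1) > 0 it is similar to a real symmetric matrix, so the
# eigenvalues are real.
n = 6
a = np.array([1.0, 0.5, 2.0, 1.5, 0.7])         # superdiagonal a_0 .. a_(n-2)
b = np.array([0.3, -1.0, 0.0, 2.0, 0.5, -0.2])  # diagonal      b_0 .. b_(n-1)
c = np.array([0.8, 1.2, 0.4, 1.1, 0.9])         # subdiagonal   c_1 .. c_(n-1)
M = np.diag(b) + np.diag(a, 1) + np.diag(c, -1)

def P_n(z):
    """P_n(z) built from the recursion (9.2) with P_(-1) = 0 and P_0 = 1."""
    p_prev, p_curr = 0.0, 1.0
    for k in range(n):
        a_k = a[k] if k < n - 1 else 1.0   # a_(n-1) is not an entry of M; it only rescales P_n
        c_k = c[k - 1] if k > 0 else 0.0   # c_0 multiplies P_(-1) = 0
        p_prev, p_curr = p_curr, ((z - b[k]) * p_curr - c_k * p_prev) / a_k
    return p_curr

eigs = np.sort(np.linalg.eigvals(M).real)
print([round(P_n(z), 10) for z in eigs])   # all ~ 0: the eigenvalues are the zeros of P_n
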







        Eigenvalue Problem for Q(n) and P(n)

Both Q(n) and P(n) are of the form of M given in (9.1). By substituting the appropriate values for Q(n) into (9.2), it can be seen that the recursion relation determines the polynomials to be the familiar Hermite polynomials given, for example, by the usual Rodrigues formula


     H_n(z)  =  (-1)^n exp( z^2 ) (d/dz)^n exp( -z^2 )            (9.4)


              [n/2]    n! (-1)^m
           =  SIGMA   ------------ (2z)^(n-2m)
               m=0    m! (n - 2m)!

   (NB these are not normalized.)  [Appendix E]
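
   The explicit sum in (9.4) can be checked against a library implementation.
   A minimal sketch, assuming numpy's "physicists'" Hermite convention is the
   one used in (9.4):

import numpy as np
from math import factorial

def hermite_coeffs(n):
    """Coefficients of H_n(z) in ascending powers of z, from the sum in (9.4)."""
    coeffs = np.zeros(n + 1)
    for m in range(n // 2 + 1):
        coeffs[n - 2 * m] = (factorial(n) * (-1) ** m * 2 ** (n - 2 * m)
                             / (factorial(m) * factorial(n - 2 * m)))
    return coeffs

for n in range(8):
    lib = np.polynomial.hermite.herm2poly([0] * n + [1])   # H_n in the power basis
    assert np.allclose(hermite_coeffs(n), lib)
print("the sum in (9.4) reproduces the physicists' Hermite polynomials")
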

As a more explicit alternative to invoking Theorem 9.1:

   Let

     Det( R_n( lambda ) )  :=  Det( Q(n) - lambda I(n) )  =  0     (9.5)

be the secular equation of Q(n). By expanding the determinant, in the manner of Laplace, along its last row, the recursion relation

   Det( R_(n+1)( lambda ) )  =
     lambda Det( R_n( lambda ) ) - (n/2)Det( R_(n-1)( lambda ) )
                                                                  (9.6)

   results.  Defining

     H_n( lambda )  :=  (-1)^n 2^n Det( R_n( lambda ) )           (9.7)

Substitution into (9.6) yields

     2 lambda H_n( lambda )  =
        H_(n+1)( lambda ) + 2 n H_(n-1)( lambda )                 (9.8)

which is the recursion relation for the Hermite polynomials. It follows from the secular equation (9.5) that the eigenvalues of Q(n) are the roots of the nth Hermite polynomial.
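
   This conclusion is easy to check numerically. The sketch below assumes the
   tridiagonal form of Q(n) implied by the secular recursion (9.6): zero
   diagonal and off-diagonal elements sqrt(k/2), so that the product of the
   paired off-diagonal elements at step k is k/2.  (Q(n) itself is defined in
   an earlier section; the explicit form here is an assumption of the sketch.)

import numpy as np

def Q(n):
    off = np.sqrt(np.arange(1, n) / 2.0)    # sqrt(k/2), k = 1 .. n-1  (assumed form)
    return np.diag(off, 1) + np.diag(off, -1)

n = 12
eig = np.sort(np.linalg.eigvalsh(Q(n)))
roots = np.sort(np.polynomial.hermite.hermroots([0] * n + [1]))   # zeros of H_12
print(np.max(np.abs(eig - roots)))   # ~1e-14: the spectrum is the set of zeros of H_n
print(eig[-1])                       # 3.88973..., cf. the table below
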

Thus the eigenvalues of both Q(n) and P(n) (See [Section VII]) are the roots of the nth order Hermite polynomial. The first few are approximately:


   Order Roots
     1      0
     2   +|-0.707106
     3   +|-1.224748    0
     4   +|-1.65068  +|-0.524649
     5   +|-2.02018  +|-0.958572     0
     6   +|-2.350614 +|-1.335851  +|-0.436078
     7   +|-2.651967 +|-1.67355   +|-0.816289      0
     8   +|-2.93064  +|-1.98165   +|-1.157192   +|-0.381187
     9   +|-3.19098  +|-2.26659   +|-1.46855    +|-0.723551    0
     10  +|-3.436165 +|-2.53275   +|-1.756684   +|-1.03660  +|-0.342901
     11  +|-3.66846  +|-2.7833    +|-2.02591    +|-1.32656  +|-0.656812 0
     12  +|-3.88973  +|-3.0208356 +|-2.27914395 +|-1.597825 +|-0.947782
         +|-0.314240435

The accuracy of these values can be checked against the identity "the sum of the squares of the roots of the nth Hermite polynomial = n(n-1)/2" (cf. Theorem 9.2); the tabulated values satisfy it to within 0.0001.

The exact values of the roots for n=2, ..., 9 are obtainable in principle by elementary algebraic methods; for n=2, ..., 5 they are:


           q(2, 0)  =  +(1/sqrt(2))
           q(2, 1)  =  -(1/sqrt(2))

           q(3, 0)  =  +sqrt(3/2)
           q(3, 1)  =   0
           q(3, 2)  =  -sqrt(3/2)

           q(4, 0)  =  +[(3/2)+sqrt(3/2)]^(1/2)
           q(4, 1)  =  +[(3/2)-sqrt(3/2)]^(1/2)
           q(4, 2)  =  -[(3/2)-sqrt(3/2)]^(1/2)
           q(4, 3)  =  -[(3/2)+sqrt(3/2)]^(1/2)

           q(5, 0)  =  + [(5/2)+sqrt(5/2)]^(1/2)
           q(5, 1)  =  + [(5/2)-sqrt(5/2)]^(1/2)
           q(5, 2)  =  0,
           q(5, 3)  =  - [(5/2)-sqrt(5/2)]^(1/2)
           q(5, 4)  =  - [(5/2)+sqrt(5/2)]^(1/2)

The equations for n=6 through n=9, although solvable in principle, are not solvable by rational operations and real radicals alone: the discriminant in the Lagrange-Galois method is <0, and the solution then depends on taking the cube root of a complex number. Similarly, in the Cardano method the roots of the quadratic resolvent are complex while the roots of the cubic are all real (casus irreducibilis), and the same operation becomes necessary. The roots can then only be expressed in closed form using trigonometric functions. For example, the roots of the Hermite polynomial for n=6 are the positive and negative square roots of the quantities


     2 rho^(1/3) cos( (1/3)(theta + 2 pi k) ) + 5/2

     for k = 0, 1, 2 where
     rho    :=  (5/2)^(3/2)
     theta  :=  arctan( sqrt(3/2) )
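
A quick numerical check of this closed form against the numerically extracted roots of H_6:

import numpy as np

rho, theta = (5 / 2) ** 1.5, np.arctan(np.sqrt(3 / 2))
x = np.array([2 * rho ** (1 / 3) * np.cos((theta + 2 * np.pi * k) / 3) + 5 / 2
              for k in range(3)])                       # the three values of z^2
closed_form = np.sort(np.concatenate([-np.sqrt(x), np.sqrt(x)]))
roots = np.sort(np.polynomial.hermite.hermroots([0, 0, 0, 0, 0, 0, 1]))   # zeros of H_6
print(np.max(np.abs(closed_form - roots)))              # agreement to machine precision
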








   In order to determine the unitary diagonalizing transformation
   XI(n) such that the mapping

        Q(n)  ->  XI!(n) Q(n) XI(n)

   carries Q(n) to its diagonal form, write out the eigenvalue problem

       Q(n) |q(n, k)>  =  q(n, k) |q(n, k)>                      (9.9)

   in terms of the eigenvector components <n, k|q(n, j)>
   in the canonical basis representation, i.e.,

                    (n-1)
     |q(n, k)>  =   SIGMA <n, j|q(n, k)> |n, j>                  (9.10)
                     j=0

   Then, (9.9) shows that

       <n, j+1|q(n, k)>  =
           (2/(j+1))^(1/2) q(n, k) <n, j|q(n, k)> -
           (j/(j+1))^(1/2) <n, j-1|q(n, k)>
                                                                  (9.11)



   If we take

      <n, j|q(n, k)>  =  ~H_j( q(n, k) ) <n, 0|q(n, k)>          (9.12)

   where,

      ~H_j( q(n, k) )  :=  (j! 2^j)^(-1/2) H_j( q(n, k) )        (9.13)

are the normalized Hermite polynomials, the recursion (9.11) is satisfied. The use of the normalized Hermite polynomials essentially normalizes the rows of XI(n).
   To normalize the columns of  XI(n), we map for every j = 0, 1, ..., n-1,

        ~H_j( q(n, k) )  ->  (d_j)^(-1/2) ~H_j( q(n, k) )
   where

              (n-1)
     d_j  :=  SIGMA ~H_j( q(n, k) ) ~H_j( q(n, k) )              (9.14)
               k=0

   Finally, the matrix elements of what turns out to be the proper real
   orthogonal diagonalizing matrix for Q(n) can be written as



     <n, j| XI(n) |n, k>  =   (d_j)^(-1/2) ~H_j( q(n, k) )       (9.15a)


     <n, j| XI!(n) |n, k>  =  (d_k)^(-1/2) ~H_k( q(n, j) )       (9.15b)
                          =  <n, k| XI(n) |n, j>
                          =  <n, j|q(n, k)>
                          =  <q(n, k)|n, j>

   Expressing the relation XI(n) XI!(n) = I(n)
   in terms of these matrix elements we have for k, j < n,

      (n-1)
      SIGMA ~H_k( q(n, l) ) ~H_j( q(n, l) )  =  d_k delta_(kj)          (9.16)
       l=0

   This is clearly identically satisfied when k = j < n, from the
   definition of d_k.  For k,j < n and k not= j, it is a somewhat unusual
   expression of orthogonality of the Hermite polynomials as functions
   defined on the rather specific discrete set of zeros of H_n(z).
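
   The core of the construction, equations (9.12)-(9.13), can be checked
   directly: the components of each eigenvector of Q(n) are proportional to
   the normalized Hermite polynomials evaluated at the corresponding
   eigenvalue.  The sketch below uses the same assumed tridiagonal form of
   Q(n) as before.

import numpy as np

def Q(n):
    off = np.sqrt(np.arange(1, n) / 2.0)
    return np.diag(off, 1) + np.diag(off, -1)

def Htilde(j, x):
    """Normalized Hermite polynomial ~H_j(x) of (9.13), via the recursion (9.11)."""
    h_prev, h_curr = 0.0, 1.0                    # ~H_(-1) = 0, ~H_0 = 1
    for m in range(j):
        h_prev, h_curr = h_curr, (np.sqrt(2.0 / (m + 1)) * x * h_curr
                                  - np.sqrt(m / (m + 1.0)) * h_prev)
    return h_curr

n = 8
evals, evecs = np.linalg.eigh(Q(n))
for k, q in enumerate(evals):
    v = np.array([Htilde(j, q) for j in range(n)])   # components (9.12), up to <n,0|q(n,k)>
    v /= np.linalg.norm(v)
    u = evecs[:, k] * np.sign(evecs[0, k])           # fix the overall sign; <n,0|q(n,k)> != 0
    assert np.allclose(v, u)
print("each eigenvector of Q(n) has components proportional to ~H_j at its eigenvalue")
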





        Upper Bounds on the eigenvalues of Q(n)

    <Theorem 9.2>:

   The sum of the squares of the roots of the nth Hermite polynomial
   is given by:


     (n-1)
     SIGMA q^2(n, k)  =  Tr( N(n) ) = (n(n-1))/2                        (9.17)
      k=0

   Proof:
   Using the expression

     (1/2)( Q^2(n) + P^2(n) )  =  N(n) + (1/2)G(n)                      (9.18)

   being the analog of a QM relation for the Hamiltonian of the harmonic
   oscillator, and the equality of the spectra Sp( Q(n) ) and Sp( P(n) ),
   after taking the trace of the above expression, we have the result.

   Alternate proof:

   More directly  [Appendix E], for H_n(z), the coefficient of the
   term of power n-1 always vanishes.  From Vieta's (1540-1603) theorem,
   the sum of the roots of H_n(z) vanishes. So

         (n-1)
       [ SIGMA q(n, k) ]^2  =  0
          k=0

   Then

        SIGMA q^2(n, k)  =  -  SIGMA  q(n, k) q(n, j)
          k                  k not= j

                   =  - 2 SIGMA q(n, k) q(n, j)
                           k>j

   But also from Vieta's theorem, reading the value of the coefficient
   of the term with power n-2, when H_n(z) is scaled so that the leading
   highest power term has coefficient 1,

        SIGMA q(n, k) q(n, j)  =  - [n(n-1)/4]
         k>j

   From this the result follows.

   QED
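
   Theorem 9.2 is easily checked numerically against library Hermite roots:

import numpy as np

for n in range(2, 13):
    roots = np.polynomial.hermite.hermroots([0] * n + [1])
    print(n, np.sum(roots ** 2), n * (n - 1) / 2)    # the last two columns agree
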

Note: the result of Theorem 9.2 is easily obtained by classical theorems of algebra; nevertheless, the simplicity and algebraic nature of the first proof is intriguing, since there is no direct confrontation with the Hermite polynomials themselves. This suggests, though it is not pursued here, that other such expressions of the geometry of the zeros of H_n(z), and of other orthogonal polynomials, are available in like manner.

From equation (9.17), together with the symmetry of the roots about zero (so that 2 q_max^2 <= n(n-1)/2), one can prove that the roots of H_n(z) are all contained within a circle of radius n/2 in C. They are also, of course, real. This is not a very good upper bound.


    <Theorem 9.3>:

   The zeros of H_n(z), for large n, are all contained
   within a circle of radius n/2.

   Proof:

   For a set of real r_k for all k, 0 < |r_k| < 1

        (n-1)
        SIGMA r_k^2 = 1
         k=0

   implies

        (n-1)
        SIGMA r_k < sqrt( n )
         k=0

   The spectrum of Q(n) is symmetric about zero, and includes
   zero for n odd.  The sum of the squares of the eigenvalues
   can be written as a sum over positive eigenvalues:

        2 SIGMA q_+^2(n)  =  n(n-1)/2

   There are n/2 terms in the sum for n even, and (n-1)/2 terms
   for n odd.  Then,

            |    2 q_+(n)    |^2
      SIGMA |----------------|   =  1
            | sqrt( n(n-1) ) |

The LHS is a sum of positive non-zero quantities, the maximum value of which must be less than 1 (this inequality might be improved by further argument), and that maximum value is for the maximal eigenvalue, q_max(n) of Q(n). See equation (9.25). Therefore we can write


         q_max(n)  <  (1/2) sqrt( n(n-1) )

   Asymptotically, for large n, the upper bound approaches n/2.

QED
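
   A numerical check of the bound, again using library Hermite roots; for
   n = 12 the bound (1/2)sqrt(n(n-1)) is about 5.74 while the largest root
   is about 3.89, consistent with the earlier remark that this is not a
   tight bound.

import numpy as np

for n in range(2, 30):
    q_max = np.max(np.polynomial.hermite.hermroots([0] * n + [1]))
    assert q_max < 0.5 * np.sqrt(n * (n - 1))
print("q_max(n) < (1/2) sqrt(n(n-1)) holds for n = 2 .. 29")
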

Also available are the classical results that, if n not= m, H_n(z) and H_m(z) have no zeros in common, and that the zeros of H_(n-1)(z) lie between the zeros of H_n(z), [Whittaker 1927], p. 121.

We have determined that the eigenvalues of Q(n) are the zeros of the nth order Hermite polynomial and specified the calculation of the diagonalizing transformation for Q(n). The Fourier transform Fr(n) of [Section VII] provides the diagonalizing transformation PI(n) for P(n) [Section X]. Let Q_d(n) denote the diagonalized matrix for Q(n).


        Fr(n) |q(n, k)>   =  |p(n, k)>

        Fr(n) Q(n) Fr!(n)  = +P(n)

        Fr(n) P(n) Fr!(n)  = -Q(n)


        Q(n)  |n, k>  ->  Q_d(n) |q(n, k)>

   such that

        Q_d(n) |q(n, k)>  =  q(n, k) |q(n, k)>

   or

        XI!(n) Q(n) XI(n) XI!(n) |n, k> = q(n, k) XI!(n) |n, k>

   I.e.,


     |q(n, k)>  =  XI!(n) |n, k>                                        (9.19a)

   and

     |n, k>  =  XI(n) |q(n, k)>                                         (9.19b)
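
   The Fourier relations above can be checked numerically.  The explicit
   matrices used below, for the truncated annihilation operator, Q(n), P(n),
   and Fr(n) = exp( i (pi/2) N(n) ), are assumptions of the sketch (they are
   consistent with (9.6) and (9.18), but the actual definitions live in
   earlier sections).

import numpy as np

n = 10
k = np.arange(n)
A = np.diag(np.sqrt(k[1:]), 1)                  # truncated annihilation operator (assumed)
Q = (A + A.conj().T) / np.sqrt(2)
P = 1j * (A.conj().T - A) / np.sqrt(2)
Fr = np.diag(np.exp(1j * (np.pi / 2) * k))      # Fr(n) = exp( i (pi/2) N(n) ), N(n) = diag(0..n-1)

assert np.allclose(Fr @ Q @ Fr.conj().T, P)     # Fr Q Fr! = +P
assert np.allclose(Fr @ P @ Fr.conj().T, -Q)    # Fr P Fr! = -Q
G = -1j * (Q @ P - P @ Q)                       # [Q(n), P(n)] = i G(n)
print(np.round(np.diag(G).real, 12))            # 1, 1, ..., 1, -(n-1)
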




Asymptotic behavior of the eigenvalues of Q(n) & P(n)

For large values of n we have the asymptotic formulas [Abramowitz 1965], p. 787:

       lim      [ (-1)^n sqrt(n) / (2^(2n) n!) ] H_(2n)( z/(2 sqrt(n)) )
   n->infinity
                                     ->  (1/sqrt( pi )) cos( z )           (9.20a)

       lim      [ (-1)^n sqrt(n) / (2^(2n) n!) ] H_(2n+1)( z/(2 sqrt(n)) )
   n->infinity
                                     ->  (2/sqrt( pi )) sin( z )           (9.20b)

So, for large values of n, H_(2n)( z/(2 sqrt(n)) ) has a zero when cos( z ) = 0, that is, for z = pi (k - 1/2) for integral k; H_(2n+1)( z/(2 sqrt(n)) ) has a zero when sin( z ) = 0, that is, for z = pi k. Thus, asymptotically, the roots of H_n(z) are

                   z(n, k)         pi          1
      h(n, k)  =  ----------  =  ---------- (k - -)                        (9.21)
                  2 sqrt(n)      2 sqrt(n)       2

The zeros are symmetrically positive and negative, so for large n, k_max = n/2 when n is even, and k_max = (n-1)/2 when n is odd. Then for both odd and even n, z_max = pi (n-1)/2. Therefore we have the refinement on the asymptotic growth of the maximal eigenvalue in

    <Theorem 9.4>:

   The eigenvalues of Q(n) and P(n) have the asymptotic form given by
   equation (9.21); then

                         pi       n-1
      h(n, k_max)  =  ---------  -----   ->  (pi/4) sqrt(n)               (9.22a)
                      2 sqrt(n)    2

   so the largest eigenvalue of Q(n), and so also of P(n), grows as sqrt(n).
   Moreover, asymptotically, the eigenvalues of Q(n) are approximated by

      q(n, k)  approx=  (DELTA q(n)) ( (n-1)/2 - k )                      (9.22b)

   k = 0, 1, 2, ..., (n-1), with q_max obtaining for k = 0, and so become
   equally spaced (the spectrum becomes additive), the spacing being

      (DELTA q(n))  =  pi/(2 sqrt(n))                                     (9.22c)

   and in the limit n->infinity, the roots of H_n(z), and therefore the
   eigenvalues of Q(n), approach denseness on the real line.

This is a better handle on the asymptotic growth than the upper bound previously derived in Theorem 9.3. These asymptotic formulae for the maximal eigenvalue and for the eigenvalue spacing are, however, not very good for low values of n. For n = 12 the actual values exceed the asymptotic ones by almost 50%: the maximal eigenvalue is 3.88973 by numerically extracting the root of the Hermite polynomial, but 2.72070 by the asymptotic formula; the rms of the eigenvalue differences is 0.712320, and the average of the absolute values of the differences is 0.707223, while the asymptotic formula for the spacing gives 0.453448. See [Appendix E] for a refinement of the root values that is fairly good for low values of n.
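
   The comparison quoted above for n = 12 can be reproduced as follows:

import numpy as np

n = 12
roots = np.sort(np.polynomial.hermite.hermroots([0] * n + [1]))
print(roots[-1], (np.pi / 4) * np.sqrt(n))          # 3.88973... vs 2.72070..., eq. (9.22a)
gaps = np.diff(roots)                               # gaps between neighbouring roots
print(np.mean(gaps), np.pi / (2 * np.sqrt(n)))      # 0.70722... vs 0.45345..., eq. (9.22c)
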

   The eigenvectors |q(n, k)> of Q(n), since Q(n) is Hermitean
   (in fact, real and symmetric) form an orthonormal set for Hilb(n).

     <q(n, k)|q(n, j)>  =  delta_(kj)

   Further, since the |q(n, k)> are also real, with real coefficients,
   they span the real subspace of Hilb(n).  They are also G-null.
    [Lemma 8.1]

     <q(n, k)|G(n)|q(n, k)>  =  0

and so reside in the null cone of G(n). The eigenvectors |p(n, k)> of P(n) are also G-null and form an orthonormal set for Hilb(n); since they are related to the |q(n, k)> by the Fourier transform Fr(n), they are not real.

There are two reasons why both sets of eigenvectors should, for the present, be considered unphysical from the viewpoint of the |n, k> basis. First, a single such eigenvector can be seen to violate the uncertainty principle, in that the product of the uncertainties for Q(n) and P(n) in any of these eigenvectors vanishes. Second, the basis |n, k> should be considered physically attainable, since in this basis G(n) has the identity as its limit as n->infinity. Intuitively, only those transformations of basis from |n, k> that do not alter the form of G(n), and therefore do not infringe upon the limit, should be allowed [Section XI]. Altering the numerical form of G(n) in a uniform way for all finite n will clearly alter the weak limit of FCCR: CCR will no longer be the limit. Such transformations of G(n) may, however, turn out to be of some interest with regard to the problems of quantum gravity.

The diagonalizing transformations for Q(n) and P(n) do not have the property of leaving G(n) invariant. In fact, under these transformations the form of G(n) is seen to be almost maximally distorted: all its diagonal elements become zero, and all its off-diagonal elements become (+|-)1. The concentrated weight of <n, n-1|G(n)|n, n-1> seems to get spread out over the new basis. If the dispersion free eigenvectors |q(n, k)> and |p(n, k)> are physically forbidden, there is, automatically, an intrinsic fuzziness in the expectation values of Q(n) and P(n) that cannot be escaped. This is precisely the picture that I have of a properly quantized space. The formalism is a bit more complete in that it also entails an intrinsically fuzzy phase space.

A metaphor for the unattainable dispersion free eigenvectors of Q(n) and P(n) on the G-null cone is the unattainability in SR by Lorentz transformations, of the light cone boundary from the interior of the forward and backward light cones.


    <Theorem 9.5>:

   For a very large n, the behavior of the normalizing factor
   d_m(n) can be approximated by an integral, showing that,

          d_m(n)  approx=  (n/m) [2^m/(m m/2)]

   where (m m/2) is a binomial coefficient.
   Then using Stirling's approximation

        (m m/2)  approx=  2^m sqrt(2/(m pi))

   then
        d_m(n)  approx=  n sqrt( pi/(2m))

   Proof:

   We have from equation  (9.14)  that

                    (n-1)
        d_j(n)  :=  SIGMA ~H_j( q(n, k) ) ~H_j( q(n, k) )
                     k=0

   and from  Theorem 9.4  that the spacing is

        (DELTA q(n, k))  approx=   pi/(2 sqrt(n))

   Then

         (n-1)
         SIGMA ~H_j( q(n, k) ) ~H_j( q(n, k) ) (DELTA q(n, k)) approx=
          k=0
                                   d_j(n) DELTAq(n, k)
   and

                                         z = +(pi sqrt(n)/4)
         d_j(n)  approx=  (2 sqrt(n)/pi)   INTEGRAL  [~H_j( z )]^2 dz
                                         z = -(pi sqrt(n)/4)

   From the asymptotic approximations  (9.20) 

        INTEGRAL [~H_(2m)( z )]^2 dz    approx=
               [2^(2m)/(2m m)] (1/m) INTEGRAL cos^2(2 sqrt(m) z) dz

        INTEGRAL [~H_(2m+1)( z )]^2 dz  approx=
               [2^(2m)/(2m m)] (2/m) INTEGRAL sin^2(2 sqrt(m) z) dz

   Performing the integrations,

       z = +(pi sqrt(n)/4)
         INTEGRAL         [~H_(2m)( z )]^2 dz  approx=
       z = -(pi sqrt(n)/4)

         [2^(2m)/(2m m)] (1/m) [pi sqrt(n)/4 + (1/8sqrt(m)) sin(pi sqrt(mn))]


       z = +(pi sqrt(n)/4)
         INTEGRAL  [~H_(2m+1)( z )]^2 dz  approx=
       z = -(pi sqrt(n)/4)

         [2^(2m)/(2m m)] (2/m) [pi sqrt(n)/4 - (1/8sqrt(m)) sin(pi sqrt(mn))]

   For n very large and even reasonably large m,
   the sine terms will be negligible.  Dropping them gives the result.

   QED






    <Theorem 9.6>:


   For k large enough for the asymptotic approximations (9.20)
   to be valid and for a yet larger n, the behavior of the matrix
   elements of the diagonalizing transformation XI(n) is:

     (d_(2k))^(-1/2) ~H_(2k)( q(n, j) )  approx=

          (-1)^k sqrt((2/(pi n))) cos[pi sqrt(k/n)(j - 1/2) ]


      (d_(2k+1))^(-1/2) ~H_(2k+1)( q(n, j) )  approx=

           (-1)^k (sqrt(2)/pi) sqrt(2/(pi n)) sin[ pi sqrt(k/n)(j - 1/2) ]

   where j = 0, 1, 2, ..., (n-1).


   Proof:

   Substituting  (9.21)  into  (9.20)  and rearranging gives

     H_2k( q(n, j) )  approx=

                       2^(2k) k!
               (-1)^k ------------ cos[pi sqrt(k/n) (j - 1/2) ]
                      sqrt((k pi))


     H_2k+1( q(n, j) )  approx=

                        2^(2k) k!
             2 (-1)^k ------------ sin[pi sqrt(k/n) (j - 1/2) ]
                      sqrt((k pi))

Using these, the definition (9.13) of the normalized Hermite polynomials, and Theorem 9.5, one obtains an expression for the RHS of equation (9.15a) for the matrix elements of XI(n). The approximation for the even order polynomials reduces to the desired result. For the odd order polynomials,

      (d_(2k+1))^(-1/2) ~H_(2k+1)( q(n, j) )  approx=

                   sqrt(2k+1)        k!
           (-1)^k -------------- ----------- sin[ pi sqrt(k/n) (j - 1/2) ]
                  sqrt(n k pi)   (k + 1/2)!

   Evaluating the third factor by the gamma function,

          k!
      ----------  =  [ sqrt(pi) (2k+1) ]^(-1) 2^(2k+1) (2k k)^(-1)
      (k + 1/2)!

   the last factor on the RHS being a binomial coefficient.
   With Stirling's approximation

         (2k k)  approx=  2^(2k)/sqrt(pi k)



   Substituting back gives the result for odd order polynomials.

   QED




             Eigenvalue Problem for Sine and Cosine Operators

   From Theorem 9.1 and the definition of SHA(n) by equation
    (2.12),  one can see that the eigenvalues of the operators


          ~c(n)  =   (SHA(n) + SHA!(n))                         (9.23a)


          ~s(n)  =  i(SHA(n) - SHA!(n))                         (9.23b)

are the zeros of the Chebyschev polynomials of the first kind. These are the finite analogs of the "sine" and "cosine" operators that have been introduced in QM [Carruthers 1968], [Volkin 1973] to define a phase conjugate to the number operator.
Again let


        Det( R_n( lambda ) )  =  Det( ~c(n) - lambda I(n) )  =  0

be the secular equation of ~c(n). As before, expanding the determinant, in the manner of Laplace, along its last row, the recursion relation

        Det( R_(n+1)( lambda ) )  =

            - lambda Det( R_n( lambda ) ) - (1/2) Det( R_(n-1)( lambda ) )
                                                                 (9.24)

   results.  Defining

     CHAH_n( lambda )  :=
            (-1)^n 2^(-n/2) Det( R_n( lambda ) )                 (9.25)

   the recursion relation becomes

     CHAH_(n+1)( lambda ) - (lambda) CHAH_n( lambda ) +
                       (1/4) CHAH_(n-1)( lambda ) = 0            (9.26)

   This is exactly the recursion relation among the Chebyschev polynomials
   of the first kind.  These are defined by

     CHAH_0(z)  =  1
                                                            (9.27)
     CHAH_n(z)  =  2^(-(n-1)) cos(n arccos z),   n >= 1.

   These form a complete system of orthogonal polynomials on the interval
   -1 <= z <= +1, with weight function

      w_CHAH(z)  =  (1 - z^2)^(-1/2)                         (9.28)

   so that

       +1
     INTEGRAL CHAH_n(z) CHAH_m(z) w_CHAH(z) dz  =  0        (9.29)
       -1

     if m not= n

   Normalizing CHAH_n(z):


      +1
      INTEGRAL  [2^(n-1) sqrt(2/pi) CHAH_n(z)] [2^(m-1) sqrt(2/pi) CHAH_m(z)] w_CHAH(z) dz
      -1
                                          =  delta_nm,   n, m >= 1       (9.30)

The generating function is given by

                          1 - t^2
     G_CHAH(z, t)  :=  --------------   =
                        1 - 2tz + t^2

                       infinity
                        SIGMA  CHAH_n(z) (2t)^n              (9.31)
                         n=0

For arbitrary n then,

      Sp( ~c(n) )  =  {lambda: CHAH_n( lambda ) = 0}          (9.32)

   and for any n, CHAH_n(z) has n distinct zeros in the interval
   [-1, +1] so that Sp( ~c(n) ) is not degenerate and lambda is
   in [-1, +1].
    The zeros lambda(n, k) of CHAH_n are given by:

                            2k + 1  pi
     lambda(n, k)  =   cos[ ------  -- ]                      (9.33)
                               n    2

   for k = 0, 1, 2, ..., n-1, where k = 0 gives the maximal lambda
   for fixed n.  In the limit n->infinity, the zeros of CHAH_n
   become dense in the domain of definition and

          lim lambda(n, 0)  =  +1
         n->infinity
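
   A quick check that the lambda(n, k) of (9.33) are indeed the zeros of the
   nth Chebyschev polynomial of the first kind, and hence, per (9.32), the
   claimed spectrum of ~c(n):

import numpy as np

n = 9
lam = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))               # eq. (9.33)
Tn_roots = np.sort(np.polynomial.chebyshev.chebroots([0] * n + [1]))
print(np.max(np.abs(np.sort(lam) - Tn_roots)))                       # ~1e-16
print(lam[0])                                                        # k = 0 gives the maximal lambda
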

   As a general relation between neighboring lambda(n, k)



       lambda(n, k+1)  =  lambda(n, k) cos(pi/n) -
                          (1 - lambda^2(n, k))^(1/2) sin(pi/n)    (9.34a)


       lambda(n, k-1)  =  lambda(n, k) cos(pi/n) +
                          (1 - lambda^2(n, k))^(1/2) sin(pi/n)    (9.34b)
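
   A short numerical check of the neighbour relations (9.34):

import numpy as np

n, k = 9, 3
lam = lambda j: np.cos((2 * j + 1) * np.pi / (2 * n))                # eq. (9.33)
rhs_up   = lam(k) * np.cos(np.pi / n) - np.sqrt(1 - lam(k) ** 2) * np.sin(np.pi / n)
rhs_down = lam(k) * np.cos(np.pi / n) + np.sqrt(1 - lam(k) ** 2) * np.sin(np.pi / n)
print(lam(k + 1) - rhs_up, lam(k - 1) - rhs_down)                    # both ~ 0
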


   The eigenvector and therefore the diagonalizing transformation can
   be obtained explicitly as before with XI(n).

   Let a_k = a_k( lambda(n, j) )  =  <lambda(n, j)|n, k>

   Writing the matrix equation

         ~c(n) |lambda(n, j)> = lambda(n, j) |lambda(n, j)>

   out in our canonical basis one reads that

         a_(k+1)( lambda )  = lambda a_k( lambda ) - (1/4) a_(k-1)( lambda )

   Once again the recursion formula for the Chebyschev polynomials appears.
   The preceding formula is satisfied by taking

     a_k( lambda )  = a_0( lambda ) CHAH_k( lambda )

   The factor a_0( lambda ) is left free since the remaining problem is not
   to normalize the Chebyschev polynomials, which are assumed to have been
   already normalized, but rather to normalize the eigenvectors
   |lambda(n, k)>, or equivalently <lambda(n, k)|, represented in terms
   of components <lambda(n, j)|n, k>.
   The normalization conditions being for each j = 0, 1, ..., n-1:

                           (n-1)
     a_0^2( lambda(n, j) ) SIGMA  CHAH_k^2( lambda(n, j) )  =  1     (9.35)
                            k=0

   But

       (n-1)
       SIGMA CHAH_k^2( lambda(n, j) )  =
        k=0
                       (1/4) SIGMA cos^2( k(2j + 1) pi/(2n) )     (9.36)


   and from the summation formula  [Jolley 1961]

       (n-1)                     n+1   cos( n theta ) sin( (n-1) theta )
       SIGMA cos^2( k theta ) =  --- + ---------------------------------   (9.37)
        k=0                       2               2 sin theta

   one easily deduces that

                              2 sqrt(2)
     a_0( lambda(n, j) )  =  -----------                         (9.38)
                             sqrt(n + 1)

   and is independent of j.
   Finally the normalized eigenvectors and hence the diagonalizing
   transformation expressed in the |n, k> basis is given as:

      <lambda(n, j)|n, k>  =

            2          k(2j + 1)  pi
        --------- cos[ ---------  -- ]         (9.39)
        sqrt(n+1)          n       2


   The diagonalizing transformation is unitary and also real,
   and therefore orthogonal.

   It is easy to show that

	[~s(n), ~c(n)]  =  i Diag[1, 0(n-1), -1]

   and that, in the weak-* operator topology,

	lim  [~s(n), ~c(n)]  = i |0><0|
       n->inf

   and that ~s(n) and ~c(n) have as limits the sine and cosine operators
   defined in  [Carruthers 1968],  [Volkin 1973].

   For an explicit presentation of the eigenvalues and diagonalizing
   transformation of the sine operator, notice that since

        [SHA(n), N(n)]   =  +SHA(n)
        [SHA!(n), N(n)]  =  -SHA!(n)

        [~c(n), N(n)]   =  -i ~s(n)
        [~s(n), N(n)]   =  +i ~c(n)

   Then, using the BCH formula  (7.3)  with these, exactly as in
    [Section VII],  one derives

        Fr(n) ~c(n) Fr!(n)  =  +~s(n)
        Fr(n) ~s(n) Fr!(n)  =  -~c(n)

   with the finite Fourier transform defined by

        Fr(n)  :=  exp( i (pi/2) N(n) )
   so
        Sp( ~s(n) )  =  Sp( ~c(n) )

   Then the respective eigenvectors and diagonalizing transformations are
   also related by Fr(n).


