
Final Answers
© 2000-2014   Gérard P. Michon, Ph.D.

Quantum Mechanics


Niels Bohr (1885-1962)   |   Paul A.M. Dirac (1902-1984)
In science, one tries to tell people something that no one ever knew before, in such a way as to be understood by everyone.
But in poetry, it's the exact opposite.

Paul A.M. Dirac  (1902-1984; Nobel 1933)

Articles previously on this page:

  • Hamilton's analogy equates the principles of Fermat and Maupertuis.
    The above articles have moved...  Click for the new location.

Related articles on this site:

Related Links (Outside this Site)

History of quantum mechanics (Mac Tutor)  |  Rudiments of Quantum Theory
From Bohr's atom to electron waves by Michael Fowler (University of Virginia)
Measurement in Quantum Theory (Stanford)   |   Quantum Theory (Winnipeg)
Against Measurement   |   On the Concept of Information   by  Holger Lyre.
Quantum Primer by Stephen K. Lower   |   Relativity and Quantum Theory
Angular Momentum (pdf)  in  Theoretical Chemistry  by  Jack Simons.
 
Quantum Theory Without Observers by Sheldon Goldstein (Rutgers University)
Bohmian Mechanics by Sheldon Goldstein  (Stanford Enc. of Philosophy).
Quantum mechanics, the way I see it   by  Dr. Willem M. de Muynck.
 
Particles, Special Relativity and Quantum Mechanics (Learning Alive Archives)
Quaternion Analog to Schrödinger Equation by Doug B. Sweetser (2000-03-24)
Emmy Noether: Equating physical symmetries and conserved quantities.
Free particle, density matrix  (density operator) etc.
In 1996, Steve K. Lamoreaux measured the force predicted by Casimir in 1948.
Wheeler's Classic Delayed Choice Experiment   by   Ross Rhodes.
The curious rotational memory of the electron  [ Part 2 ]  by  Dan Piponi (sigfpe)
What can we measure?  [ Part 2 ]  by  Dan Piponi (sigfpe)
Shut up and calculate.   "Quality posts" presented by  David Quinn  (2004).
 
Videos:  Early Years   |   Bohr Model (dated animation)
Quantum Mechanics for Dummies   |   David Bohm   |   Bohm on Perception
The Quantum Mechanical Universe, by  David L. Goodstein   1 | 2 | 3 | 4 | 5
Quantum Entanglements, by  Leonard Susskind   Part 1 | Part 3
Quantum Mechanics, by  Leonard Susskind   1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Quantum Theory made relatively simple  by  Hans Bethe, at age 93  (1999).
Heisenberg Principle  by  Walter Lewin   |   Polarisation  by  Roger Bowley
The Fabric of the Cosmos - Quantum Leap  (NOVA)  by  Brian Greene et al.
Quantum Mechanics:   Preliminaries | Birth of the Quantum | Photons
 
Quantum Mechanics   (MudBrick Media) :  Jeeves ("android" voiceover)  Chaucer MacHale, Kevin Williams & Diana Vanderbilt   1a | 1b | 2 | 3 | 4 | 5 | 6
by Denny Wilkins,  with  Todd Hall, Craig Hall, Dennis Gould, Jerrid Jones, Reenie Aldes & Vic Irby.

DMOZ: Quantum Mechanics

Langmuir's movie  of the 1927 Solvay Physics Conference
( voice-over by  Nancy Thorndike Greenspan,  biographer of Max Born )

Fifth Solvay Conference, 1927:
Paul A.M. Dirac (1902-1984),  Louis de Broglie (1892-1987),  Niels Bohr (1885-1962),  Max Planck (1858-1947)

Quantum Physics


(2002-11-01)   The Quantum Substitute for Logic
How is the probability of an outcome computed in quantum theory?
If you're not completely confused by quantum mechanics,  you do not understand it.
John Archibald Wheeler  (1911-2008)

First, let's consider how probabilities are ordinarily computed:  When an event consists of two mutually exclusive events, its probability is the sum of the probabilities of those two events.  Similarly, when an event is the conjunction of two statistically independent events, its probability is the product of the probabilities of those two events.

For example, if you roll a fair die, the probability of obtaining a multiple of 3 is 1/3 = 1/6+1/6; it's the sum of the probabilities (1/6 each) of the two mutually exclusive events "3" and "6".  You add probabilities when the component events can't happen together (the outcome of the roll cannot be both "3" and "6").

On the other hand, the probability of rolling two fair dice without obtaining a 6 is  25/36 = (5/6)(5/6); it's the product of the probabilities (5/6 each) of two independent events, each consisting of not rolling a 6 with each throw. 

Quantum Logic and [Complex] Probability Amplitudes :

In the quantum realm, as long as two logical possibilities are not actually observed, they can be neither exclusive nor independent and the above does not apply.  Instead, quantum mechanical  probability amplitudes  are defined as complex numbers whose absolute values squared correspond to ordinary probabilities.  The phases  (the angular directions)  of such complex numbers have no classical equivalents  (although they happen to provide a deep explanation for the existence of the conserved classical quantity known as  electric charge).

To obtain the amplitude of an event with two  unobserved  logical components:

  • For EITHER-OR (exclusive) components, the amplitudes are added.
  • For AND (independent) components, the amplitudes are multiplied.

In practice, "AND components" are successive steps that  could  logically lead to the desired outcome, forming what's called an acceptable history for that outcome.  The "EITHER-OR components", whose amplitudes are to be added, are thus all the possible histories logically leading up to the same outcome.  Following Richard Feynman, the whole thing is therefore called a "sum over histories".

These algebraic manipulations are a mind-boggling substitute for statistical logic, but that's the way the physical universe appears to work.  The above quantum logic normally applies only at the microscopic level, where "observation" of individual components is either impossible or would introduce an unacceptable disturbance.  At the macroscopic level, the observation of a combined outcome usually implies that all relevant components are somehow "observed" as well (and the ordinary algebra of probabilities applies).  For example, in our examples involving dice, you cannot tell if the outcome of a throw is a multiple of 3 unless you actually observe the precise outcome and will thus know if it's a "3" or a "6", or something else.  Similarly, to know that you haven't obtained a "6" in a double throw, you must observe separately the outcome of each throw.  Surprisingly enough, when the logical components of an event are only imperfectly observed (with some remaining uncertainty), the probability of the outcome is somewhere between what the quantum rules say and what the classical rules would predict.
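
As a toy illustration of those two rules  (a hedged sketch, not part of the original article),  here is how the computation could look in Python:  the amplitude of each history is the product of made-up step amplitudes, the amplitudes of the unobserved alternative histories are added, and the squared magnitude of the total is the probability.

import numpy as np

# Hypothetical step amplitudes for two unobserved histories leading to the same outcome.
# AND rule: the amplitude of one history is the PRODUCT of its step amplitudes.
history_1 = 0.8 * (0.6 * np.exp(1j * 0.3))
history_2 = 0.5 * (0.7 * np.exp(1j * 2.1))

# EITHER-OR rule: unobserved alternatives ADD as amplitudes; the probability is |sum|^2.
quantum_probability = abs(history_1 + history_2) ** 2

# If the intermediate step were actually observed, ordinary statistics would apply instead:
classical_probability = abs(history_1) ** 2 + abs(history_2) ** 2

print(quantum_probability, classical_probability)   # they differ by the interference term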


(2007-07-19)   On the "Statistics" of Elementary Particles
A direct consequence of quantum logic:  Pauli's Exclusion Principle

In very general terms, you may call "particle" some part of a quantum system.  Swapping (or switching) a pair of particles is making one particle take the place of the other and vice versa, while leaving everything else unchanged.  Although swapping particles may deeply affect a quantum system, swapping twice will certainly not change anything since, by definition, this is like doing nothing at all.

"Swapping" can be defined as something that does nothing if you do it twice.  Particles are defined to be "identical" if they can be swapped.

So, according to the above quantum logic, the amplitude associated with one swapping must have a square of 1.  Therefore (assuming that amplitudes are ordinary complex numbers) the swapping amplitude is either +1 or -1.

In the mathematical description of quantum states,  swapping  is well-defined only for particles of the same "nature".  Whether swapping involves a multiplicative factor of +1 or -1 depends on that "nature".  Particles for which swapping leaves the quantum state unchanged are called  bosons, those for which swapping negates the quantum state are called  fermions.

A deep consequence of Special Relativity is that spin determines which "statistics" a given type of particle obeys  (Bose-Einstein statistics for bosons, Fermi-Dirac statistics for fermions).  Part of the angular momentum of a fermion can't be explained in classical terms  (it must include a nonorbital "pointlike" component).  The spin of a boson is a whole multiple of the quantum of angular momentum  h/2π,  whereas the spin of a fermion is an odd multiple of the "half quantum"  h/4π.

With the concepts so defined, let's consider a quantum state where two fermions would be absolutely indistinguishable.  Not only would they be particles of the same kind (e.g., two electrons) but they would have the same position, the same state of motion, etc.  So, the quantum state is clearly unchanged by swapping.  Yet, swapping  fermions  must negate the quantum state...  Therefore, it's equal to its own opposite and can only be zero!  The probability associated with a zero quantum state is zero; this corresponds to something  impossible.  In other words,  two different fermions can't "occupy" the exact same state.

This result is called  Pauli's exclusion principle.  It's the reason why all the electrons around a nucleus don't collapse to the single state of lowest energy.  Instead, they occupy successively different "orbitals", according to rules which explain the entire periodic table of chemical elements.
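
A minimal numerical sketch of that argument  (not in the original text):  represent a two-fermion ket as an antisymmetrized tensor product, so that swapping the particles negates it;  when the two single-particle states coincide, the ket is identically zero.

import numpy as np

def two_fermion_ket(phi, chi):
    """Antisymmetrized two-particle ket:  swapping the particles negates it."""
    return np.kron(phi, chi) - np.kron(chi, phi)

up   = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

print(two_fermion_ket(up, down))   # nonzero: two distinct states can be occupied together
print(two_fermion_ket(up, up))     # identically zero: Pauli's exclusion principle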


(2002-11-01)   The Infamous Measurement Problem
What does a quantum observation entail?
There are no things, only processes.
David Bohm  (1917-1992)

This is arguably the most fundamental  unsolved  question in quantum mechanics.

According to the above, one should deal  strictly  with amplitudes between observations (or measurements), but another recipe holds when measurements are made.  That would be fine if we knew exactly what a measurement entails, but we don't...  Should we really assume that a system can only be measured by some outside agency (the observer)?  If we do, nothing prevents us from considering a larger system that includes this observer as well, and that system's evolution would involve only measurement-free quantum rules.  If we don't assume that, we can't avoid the conclusion that a system can observe itself, in some obscure sense.  Either way, the simple quantum rules outlined above would have to be smoothly modified to account for a behavior which can be nearly classical for a large enough system.  In other words, current quantum ideas must be incomplete, because they fail to describe any bridge between a quantum system waiting only to be observed, and an entity capable of observation.

Our current quantum description of the world has proven its worth and reigns supreme, just like Newtonian mechanics reigned supreme before the advent of Relativity Theory.  Relativity consistently bridged the gap between the slow and the fast, the massive and the massless (while retaining the full applicability of Newtonian theories to the domain of ordinary speeds).  Likewise, the gap must ultimately be bridged between observer and observed, between the large and the small, between the classical world and the quantum realm, for there is but one single physical reality in which everything is immersed...

WANTED, Dead or Alive:  Schrödinger's Cat

This bothers, or should bother, everybody who deals with quantum mechanics:  The so-called  Schrödinger's Cat  theme is often used to discuss the problem, in the guise of a system that includes a cat (a "qualified" observer) in the presence of a quantum device which could trigger a lethal device.  It seems silly to view the whole thing as a single quantum system, which would only exist (until observed) in some superposition of states, where the cat would be neither dead nor alive, but both at once.  Something must exist which collapses the quantum state of a large enough system frequently enough to make it appear "classical".  It stands to reason that Schrödinger's Cat must be dead very shortly after being killed...  Doesn't it?


(2005-07-03)   Matrix Mechanics  (1925)
Physical quantities are multiplied like matrices...  Order matters.

In 1925, Werner Heisenberg (1901-1976; Nobel 1932) discovered that  observable  physical quantities obey noncommutative rules similar to those governing the multiplication of algebraic matrices.

If the measurement of one physical quantity would disturb the measurement of another, then a noncommutative circumstance exists which disallows even the possibility of two separate sets of experiments yielding the values of these two quantities with arbitrary precision  (read this again).  This delicate connection between  noncommutativity and uncertainty  is now known as  Heisenberg's uncertainty principle.  In particular, the position and momentum of a particle can only be measured with respective uncertainties  (i.e., standard deviations in repeated experiments)  Δx  and  Δpx  satisfying the following inequality :

Δx Δpx   ≥   h/4π       [ where  h  is Planck's constant ]

The early development of Heisenberg's  Matrix Mechanics  was undertaken by M. Born and P. Jordan.

In March 1926, Erwin Schrödinger showed that Heisenberg's viewpoint was equivalent to his own "undulatory" approach  (Wave Mechanics, January 1926)  for which he would share the 1933 Nobel prize with Paul Dirac, who gave basic  Quantum Theory  its current form.

Heisenberg's Viewpoint  [skip on first reading]

Here is a terse summary of Heisenberg's approach in terms of the Schrödinger viewpoint which we adopt here, following Dirac and almost all modern scholars:

In the modern Schrödinger-Dirac perspective, a  ket  |ψ>  is introduced which describes a quantum state  varying with time.  Since it remains of unit length, its value at time t is obtained from its value at time 0 via a unitary operator  Û.

| ψt >   =   Û (t,0)  | ψ0 >

The unitary operator Û so defined is called the  evolution operator.

Heisenberg's viewpoint consists in considering that a given system is represented by the  constant  ket  Û* |ψ>.  Operators are modified accordingly...

A physical quantity which is associated with the operator  Â  in the Schrödinger viewpoint  (possibly constant with time)  is then associated with the following time-dependent operator in the Heisenberg viewpoint.

Û* Â Û   =   Û⁻¹ (t,0)  Â  Û (t,0)
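
Here is a small numerical sketch of that equivalence  (not part of the original article),  using an arbitrary 4-dimensional Hamiltonian and natural units where  h/2π = 1;  the evolution operator is built from the eigendecomposition of H:

import numpy as np

rng = np.random.default_rng(0)
n = 4
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2                                   # an arbitrary hermitian Hamiltonian

def evolution_operator(H, t):
    """U(t,0) = exp(-iHt), computed from the eigendecomposition of the hermitian H."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

A = rng.normal(size=(n, n)); A = (A + A.T) / 2             # some observable
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)                               # the state at time 0

t = 1.7
U = evolution_operator(H, t)
schrodinger = (U @ psi0).conj() @ A @ (U @ psi0)           # <psi_t| A |psi_t>
heisenberg  = psi0.conj() @ (U.conj().T @ A @ U) @ psi0    # <psi_0| U* A U |psi_0>
print(np.allclose(schrodinger, heisenberg))                # True: both viewpoints agree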


(2002-11-02)   The Schrödinger Equation  (1926)
The dance of a  single  nonrelativistic particle in a classical force field.

The Schrödinger equation  governs the probability amplitude  ψ  of a particle of mass  m  and energy  E  in a space-dependent potential energy  V.

Strictly speaking,  E  is the total  relativistic  mechanical energy  (starting at  mc²  for the particle at rest).  However, the final  stationary  Schrödinger equation (below)  features only the difference  E-V  with respect to the potential  V,  which may thus be shifted to incorporate the rest energy of a  single  particle.

For several particles, the issue cannot be skirted so easily  (in fact, it's partially unresolved)  and it's one of several reasons why the quantum study of multiple particles takes the form of an inherently relativistic theory  (Quantum Field Theory)  which also accounts for the creation and annihilation of particles.

In 1926, when the Austrian physicist Erwin Schrödinger (1887-1961; Nobel 1933) worked out the equation now named after him, he thought that the relevant quantity  y  was something like a density of electric charge...

Instead,  ψ  is now understood to be a  probability amplitude,  as defined in the above article, namely a complex number whose squared length is proportional to the probability of actually finding the electron at a particular position in space.  That interpretation of  ψ  was proposed by Max Born (1882-1970; Nobel 1954),  the very person who actually coined the term  quantum mechanics  (Max Born also happens to be the maternal grandfather of Olivia Newton-John).

The controversy about the meaning of  ψ  hindered neither the early development of Schrödinger's theory of "Wave Mechanics", nor the derivation of the nonrelativistic equation at its core:

A Derivation of Schrödinger's Equation :

We may start with the expression of the phase-speed, or celerity   u = E/p   of a  matter wave, which comes directly from de Broglie's principle, or less directly from other more complicated analogies between particles and waves.

The nonrelativistic (defining) relations  E = V + ½ mv²   and   p = mv   imply:

p   =   √( 2m (E-V) )

Therefore, the wave celerity  u = E/p  is simply:

u   =   E / √( 2m (E-V) )

Now, the general 3-dimensional wave equation of some quantity  φ  propagating at celerity  u  is:

(1/u²) ∂²φ/∂t²   =   ∂²φ/∂x²  +  ∂²φ/∂y²  +  ∂²φ/∂z²   =   Δφ       [ Δ is the Laplacian operator ]

The standard way to solve this (mathematically) is to first obtain solutions  φ  which are products of a time-independent space function  ψ  by a sinusoidal function of the time (t) alone.  The general solution is simply a linear superposition of these  stationary waves :

φ   =   ψ exp ( -2πiν t )

For a frequency  ν,  the stationary amplitude  ψ  thus defined must satisfy:

Δψ  +  ( 4π²ν² / u² ) ψ   =   0

Using   ν = E/h   (Planck's formula) and the above for   u = E/p,   we obtain...

The Schrödinger Equation :

Δψ   +   ( 8π²m / h² ) (E - V) ψ     =     0

This equation is best kept in its  nonrelativistic  context, where it determines allowed levels of energy  up to an additive constant.

A frequency may only be associated with a Schrödinger solution at energy E if E is the total relativistic energy  (including rest energy)  and V has been adjusted accordingly, against the usual nonrelativistic freedom, as discussed in this article's introduction.

In the above particular stationary case, we have:   E φ   =   ( i h / 2π )  ∂φ/∂t
This relation turns the previous equation into a more general linear equation :

( i h / 2π )  ∂φ/∂t     =     V φ   -   ( h² / 8π²m )  Δφ
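
As a hedged numerical illustration  (not part of the original derivation),  the stationary equation can be solved by discretizing the Laplacian on a grid and diagonalizing the resulting matrix.  The sketch below uses a one-dimensional harmonic potential and units where  h/2π = 1  and  m = 1,  so the kinetic coefficient  h²/8π²m  reduces to  ½ :

import numpy as np

N, L = 400, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# Second-difference approximation of the Laplacian (with psi = 0 at both ends of the box).
laplacian = (np.diag(np.ones(N-1), -1) - 2*np.eye(N) + np.diag(np.ones(N-1), 1)) / dx**2
V = 0.5 * x**2                                  # harmonic well  V = (1/2) m w^2 x^2  with  w = 1
H = -0.5 * laplacian + np.diag(V)               # H psi = E psi  is the stationary equation

E, psi = np.linalg.eigh(H)                      # allowed energy levels and stationary amplitudes
print(E[:4])                                    # approximately 0.5, 1.5, 2.5, 3.5  =  (n + 1/2)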

Signed Energy and the Arrow of Time

Historically, Erwin Schrödinger associated an  equally valid  stationary function with the positive (relativistic) energy  E = hν  and obtained a different equation :

φ   =   ψ exp ( 2πiν t )
( -i h / 2π )  ∂φ/∂t     =     V φ   -   ( h² / 8π²m )  Δφ

Formally, a reversal of the direction of time turns one equation into the other.  We may also allow negative energies and/or frequencies in Planck's formula  E = hν  and observe that a particle may be described by the same wave function whether it carries energy  E  in one direction of time, or energy  -E  in the other.

To retain only one version of the Schrödinger equation and one  arrow of time  (the term was coined by Eddington)  we must formally allow particles to carry a  signed energy  (typically,  E = ± mc² ).

If the wave function  φ  is a solution of one version of the Schrödinger equation, then its conjugate  φ*  is a solution of the other.  However, time-reversal and conjugation need not result in the same wave function whenever Schrödinger's equation has more than one solution at a given energy.

Two Simultaneous Lives of a Cat

Principle of Superposition :

The linearity of Schrödinger's equation means that any sum of satisfactory solutions is also a solution.  This  principle of superposition  justifies the general Hilbert space formalism introduced by Dirac:

Until it is actually measured, a quantum state may contain
(as a linear superposition)  several acceptable realities at once.

This is, of course, mind-boggling.  Schrödinger and many others have argued that this cannot be  entirely  true:  Something in the ultimate quantum rules must escape any linear description to defeat this  principle of superposition, which is unacceptable as an overall rule for everything observed and anything  observing.


(2003-05-26)   Noether's Theorem   (1915)

The German mathematician  Emmy Noether  (1882-1935) established this deep result  (Noether's Theorem) in 1915:

For every continuous symmetry of the laws of physics,
there's a conservation law, and vice versa.

This result was first established in the context of classical rational mechanics but it remains true  (and even more meaningful)  in the quantum realm.

 Come back later, we're
 still working on this one...

E. Noether's 1918 paper, translated by M.A. Tavel (1971)   |   Noether's biography
MathPages   |   Theorem of the Day   |   Wikipedia   |   Noether's Discovery
Noether's Theorem in a Nutshell by John Baez


(2005-06-27)   Hilbert Spaces:  Dirac's  <bras|  and  |kets>
A nice notation with built-in simplification features.

The standard vocabulary for the Hilbert spaces used in quantum mechanics started out as a pun:  P.A.M. Dirac (1902-1984; Nobel 1933) decided to call  < φ |  a  bra  and  | ψ >  a  ket,  because  < φ | ψ >  is clearly a  bracket...

Hilbert Space and "Hilbertian Basis" :

Hilbert space  is a  vector space  over the field of complex numbers (its elements are called  kets )  endowed with an inner hermitian product  (Dirac's "bracket", of which the left half is a "bra").  That's to say that the following properties hold  (z* being the complex conjugate of z):

  • Hermitian symmetry:   < ψ | φ >   =   < φ | ψ >*   is a complex scalar.
  • Linearity in the ket  (the bracket is thus sesquilinear):   < φ |  ( x | χ > + y | ψ > )   =   x < φ | χ >  +  y < φ | ψ >
  • For any nonzero ket   | ψ >,  the real number  < ψ | ψ >  is positive  (= ||ψ||² ).

Hilbert space  is also required to be separable and complete, which implies that its dimension is either finite or countably infinite.  It's customary to use raw indices for the kets of an agreed-upon  Hilbertian basis :

| 1 >,   | 2 >,   | 3 >,   | 4 >   ...

Such a "basis" is a  maximal  set of unit kets which are pairwise orthogonal :

< i | i > = 1     and     < i | j > = 0     if   i ≠ j

The so-called  closure relation   Î   =   Σn  | n > < n |   is a nice way to state that any ket is a  generalized linear combination  of kets from the "basis":

| ψ >   =   Î | ψ >   =   Σn  | n > < n | ψ >   =   Σn  < n | ψ >  | n >

This need not be a  proper  linear combination, since infinitely many of the coefficients  < n | ψ >  could be nonzero:  An  Hilbertian basis  is not a  proper  linear basis unless it's finite  (cf. Hamel basis).
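
A tiny finite-dimensional sketch  (not in the original)  of the bracket, the hermitian symmetry and the closure relation, using numpy:

import numpy as np

n = 3
basis = [np.eye(n)[:, k] for k in range(n)]               # an orthonormal basis |1>, |2>, |3>

rng = np.random.default_rng(1)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                                # a unit ket |psi>
phi = basis[0] + 1j * basis[2]

bracket = lambda a, b: np.vdot(a, b)                      # <a|b>  (np.vdot conjugates the bra)

# Hermitian symmetry:  <psi|phi> = <phi|psi>*
print(np.isclose(bracket(psi, phi), np.conj(bracket(phi, psi))))

# Closure relation:  |psi> = sum_n |n><n|psi>
reconstructed = sum(bracket(e, psi) * e for e in basis)
print(np.allclose(reconstructed, psi))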

Operators :

A linear  operator  is a square matrix  Â = [ aij ]  which we may express as:

Â   =   Σij  aij | i > < j |       alternately,       aij   =   < i | Â | j >

Applied to the left of a  ket  or to the right of a  bra,  it yields another vector of the same kind.

Hermitian Conjugation  (Conjugates, Duals, Adjoints) :

Hermitian conjugation generalizes to vectors and operators the complex conjugation of scalars.  We prefer to use the same notation  X*  for the hermitian conjugate of any object  X,  regardless of its dimension.  We use interchangeably the terms which other authors prefer to use for specific dimensions, namely "conjugate" for scalars, "dual" for vectors (bras and kets) and "adjoint" for operators  (the adjugate of a matrix is something entirely different).  Many authors  (especially in quantum theory)  use an overbar for the  conjugate  of a scalar and an obelisk  ("dagger")  for the  adjoint  A†  of an operator  A.  In other words,  A† ≡ A*.

Loosely speaking, conjugation consists in replacing all coordinates by their complex conjugates and  transposing  (i.e., flipping about the main diagonal).  The conjugate transpose is also called adjoint, Hermitian adjoint, Hermitian transpose, Hermitian conjugate, etc.  The word  conjugate  can also be used by itself, since conjugation of the complex coordinates of a vector or matrix is rarely used, if ever, without a simultaneous transposition.

| ψ >*   =   < ψ |       and       < ψ |*   =   | ψ >
< φ | Â* | ψ >   =   ( < ψ | Â | φ > )*

The adjoint of a product is the product of the adjoints in reverse order.  For an  inner  product, this restates the axiomatic hermitian symmetry.

( X Y )*   =   Y* X*                 < ψ | φ >*   =   < φ | ψ >

An operator  Â  is  self-adjoint  or  hermitian  if  Â = Â*.  All eigenvalues of an hermitian operator are  real.

That key theorem was established in 1855 by  Charles Hermite (1822-1901, X1842)  when he introduced the relevant concepts now named after him:  hermitian conjugation, hermitian symmetry, etc.

Two eigenvectors of an hermitian operator for  distinct  eigenvalues are necessarily orthogonal  (see proof below).  In finitely many dimensions, such operators are  diagonalizable.

An hermitian operator multiplied by a  real  scalar is hermitian.  So is a sum of hermitian operators,  or the product of two  commuting  hermitian operators.  The following combinations of two hermitian operators are always hermitian:

½ ( Â Ê + Ê Â )             (1/2i) ( Â Ê - Ê Â )

Unitary Transformations Preserve Length :

A  unitary  operator  Û  is a Hilbert isomorphism:   Û Û* = Û* Û = Î.  It transforms an Hilbertian basis into another Hilbertian basis and turns  | ψ >,   < φ |   and   Â   (respectively) into   Û | ψ >,   < φ | Û*   and   Û Â Û*.

For an infinitesimal  ε,   Û = Î + iεÊ   is unitary (only) when  Ê  is hermitian.

State Vectors, Observables and the Measurement Postulate :

A quantum state, state vector, or microstate is a ket   | ψ >   of unit length :

< ψ | ψ >   =   1

Such a ket  | ψ >  is associated with the density operator   | ψ > < ψ |   (whose entropy is zero)  which determines it back, within some physically irrelevant phase factor  exp(iθ).

An observable physical quantity corresponds to an hermitian operator  Â  whose eigenvalues are the possible values of a measurement.  The  average value  of a measurement of  Â  from a pure microstate  | ψ >  is:

< ψ | Â | ψ >

This is a corollary of the following  measurement postulate  (von Neumann's projection postulate)  which states the consequence of a measurement, in terms of the eigenspace projector matching each possible outcome  (necessarily an eigenvalue  a  of   Â = Σa a Pa ).

| ψ >   becomes     Pa | ψ >  /  || Pa | ψ > ||     with probability   < ψ | Pa | ψ >

The above is also often called the  principle of spectral decomposition.  Note that, since  P² = P = P*,  we have:

||  P | ψ > ||²   =   < ψ | P | ψ >

Vocabulary:  The  principle of quantization  limits the observed values of a physical quantity to the eigenvalues of its associated operator.  The  principle of superposition  asserts that a pure quantum state is represented by a ket...  A quantum state represented by an eigenvector of an observable is called an  eigenstate.  It  always  yields the same measurement of that observable.
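
The following numpy sketch  (not in the original)  illustrates the measurement postulate for a 3-level system:  the probabilities  < ψ | Pa | ψ >  add up to 1,  their weighted average reproduces  < ψ | Â | ψ >,  and the recorded outcome projects the state.

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)); A = (A + A.T) / 2             # an observable (hermitian matrix)
a, vectors = np.linalg.eigh(A)                             # its eigenvalues and eigenvectors

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)                                 # a pure microstate

projectors = [np.outer(v, v.conj()) for v in vectors.T]    # P_a = |a><a|  (nondegenerate case)
p = [np.real(psi.conj() @ P @ psi) for P in projectors]    # probabilities  <psi|P_a|psi>
print(np.isclose(sum(p), 1.0))                             # True

average = sum(ak * pk for ak, pk in zip(a, p))
print(np.isclose(average, np.real(psi.conj() @ A @ psi)))  # True:  average = <psi|A|psi>

post = projectors[0] @ psi                                 # state after recording the outcome a[0]
post /= np.linalg.norm(post)                               # P_a|psi> / ||P_a|psi>||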

Two kets  |ψ>  and  |φ>  that are eigenstates of an hermitian operator  Â  associated with  distinct  eigenvalues  a  and  b  are necessarily  orthogonal.

Proof :   If   Â |ψ> = a |ψ>  and  <φ| Â  = b <φ|   with  a ≠ b,  then we have:  <φ| Â |ψ>  =  a <φ|ψ>  =  b <φ|ψ>.  Therefore, <φ|ψ>  =  0   QED

Nonrelativistic Postulate of Evolution with Time :

In nonrelativistic quantum theory, time (t) is not an observable in the above sense, but a  parameter  with which things evolve between measurements, according to the following substitute for Schrödinger's equation, involving the  hamiltonian operator  H  (associated with the system's total energy) :

( i h / 2π )  d | ψ > / dt     =     H | ψ >

This is  completely wrong  unless Hamiltonians are properly adjusted to incorporate  rest energies  (see our discussion of Schrödinger's equation).


(2005-07-03)   Operators Corresponding to Physical Quantities
Building on 6 operators for the coordinates of position and momentum.

Only scalar physical quantities correspond to basic observables  (hermitian square matrices)  within the relevant Hilbert space L.  Physical vectors may also be considered, which correspond to operators mapping a ket into a vector of kets  (an element of some cartesian power of  L ).

The following table embodies the so-called  principle of correspondence, for those physical quantities which have a classical equivalent...  The  orbital  angular momentum of a pointlike particle does;  its  spin  doesn't.

Operators are specified by what they turn  ψ  into.

Physical Quantity             Symbol        Corresponding Operator
generic quantity              a             A (ψ)   or   A | ψ >
                              b             B (ψ)   or   B | ψ >
sum                           a + b         (A + B) | ψ >
product                       a b           ½ (A B + B A) | ψ >
space coordinate              x             x ψ
momentum along x              px            ( h / 2πi )  ∂ψ/∂x
momentum squared              || p ||²      ( -h²/4π² )  Δψ
potential energy              V(r)          V(r) ψ
total energy                  E             H | ψ >   (Hamiltonian)
y pz - z py                   Lx            ( h / 2πi )  ( y ∂ψ/∂z - z ∂ψ/∂y )
z px - x pz                   Ly            ( h / 2πi )  ( z ∂ψ/∂x - x ∂ψ/∂z )
x py - y px                   Lz            ( h / 2πi )  ( x ∂ψ/∂y - y ∂ψ/∂x )
angular momentum squared      || L ||²      -(hr/2π)²  [ Δψ - ∂²ψ/∂r² - (2/r) ∂ψ/∂r ]

Physical Vector               Symbol        Vectorial Operator
position                      r             r | ψ >
momentum                      p             ( h / 2πi )  ∇ | ψ >
angular momentum              L = r × p     ( h / 2πi )  r × ∇  | ψ >

The historical equation of Schrödinger is retrieved from the postulated evolution of kets involving the Hamiltonian  H, in the following special case :

E   =   V(r)  +  ||p||² / 2m       and       H(φ)   =   V φ  -  ( h²/8π²m )  Δφ


(2005-07-03)   Commutators
The commutator of two operators  A  and  B  is :   [A,B] = AB - BA.
It's worth noting that if A and B are hermitian, then so is  i[A,B].

According to the above expressions,  the commutator of the two operators respectively associated with the position  x  and the momentum  p along the same axis is the operator for which the image of  y  is:

x ( h / 2πi )  ∂ψ/∂x  -  ( h / 2πi )  ∂(xψ)/∂x     =     ( i h / 2π ) ψ

That commutator is thus  ( i h / 2π ) Î   (where  Î  is the identity operator).
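
A quick numerical check  (not in the original):  discretize the position and momentum operators on a grid, in units where  h/2π = 1,  and verify that their commutator acts as multiplication by  i  on a smooth function, up to discretization error.

import numpy as np

N, dx = 400, 0.05
x = np.arange(N) * dx
X = np.diag(x)                                                        # position operator
D = (np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1)) / (2*dx)   # central-difference d/dx
P = D / 1j                                                            # p = (h/2pi i) d/dx, with h/2pi = 1

commutator = X @ P - P @ X
psi = np.exp(-0.5 * (x - 10)**2)                  # a smooth test function, far from the edges
error = np.abs((commutator @ psi) - 1j * psi)[10:-10]
print(error.max())                                # small:  [x, p] psi  is approximately  i psi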


Similarly, we obtain the following expression for the commutator of the operators  Âx  and  Ây  associated with the components  Lx  and  Ly  of the orbital angular momentum:

[ Âx , Ây ]   =   ( i h / 2π ) Âz

Proof :   Let's evaluate  Âx ( Ây (ψ) ) :

( h / 2πi )²   (  y ∂/∂z [ z ∂ψ/∂x - x ∂ψ/∂z ]   -   z ∂/∂y [ z ∂ψ/∂x - x ∂ψ/∂z ]  )

     =     ( h / 2πi )²   (  y ∂ψ/∂x  +  yz ∂²ψ/∂z∂x  -  yx ∂²ψ/∂z²  -  z² ∂²ψ/∂y∂x  +  zx ∂²ψ/∂y∂z  )

All the second-order terms also appear in the like expression for  Ây ( Âx (ψ) )  (which is obtained by swapping x and y).  So, they cancel in the difference:

[ Âx Ây - Ây Âx ] (ψ)   =   ( h / 2πi )²   ( y ∂ψ/∂x - x ∂ψ/∂y )
                 =   ( i h / 2π )  Âz (ψ)     QED

For the 3-component  column  operator   Â  =  ( Âx , Ây , Âz )   associated with the ("orbital") angular momentum  L,  this can be summarized thusly:

Â  ×  Â   =   ( i h / 2π )  Â

Algebraic Rules for Commutators :

A few general relations hold about commutators, which are easily verified :

[B,A]   =   - [A,B]
[A,B]*   =   [B*, A*]
[A,B+C]   =   [A,B] + [A,C]
[A,BC]   =   [A,B] C  +  B [A,C]
Ô   =   [A,[B,C]] + [B,[C,A]] + [C,[A,B]]

This last relation is known as the  Jacobi identity.  It's one of two relations a  bilinear map  must satisfy to be called a  Lie bracket  (the other, which is also satisfied here, is simply:  [A,A] = Ô ).  The commutator bracket thus endows the vector space of quantum operators with the structure of a  Lie algebra.

The following relation holds for two operators whose commutator is a scalar multiple of  Î  (or, at least, if that commutator commutes with the operator  B ).

[ A, f (B) ]   =   [A,B]  f ' (B)

Proof:  As usual,  f  is an analytic function, of derivative  f '.  The relation being linear with respect to  f,  it holds generally if it holds for  f (z) = zⁿ...  The case n = 0  is trivial  (zero on both sides)  and an induction on n completes the proof:

[A,Bⁿ⁺¹]  =  [A,Bⁿ] B + Bⁿ [A,B]  =  [A,B] n Bⁿ + [A,B] Bⁿ  =  [A,B] (n+1) Bⁿ


(2005-07-03)   Noncommutativity and Uncertainty Relations
The link between commutators and expected  standard deviations.

When two observables  A  and  B  are repeatedly measured from the same quantum state  | ψ >,  the expected  standard deviations  are  Δa  and  Δb.

( Δa )²     =     < ψ | A² | ψ >  -  < ψ | A | ψ >²
( Δb )²     =     < ψ | B² | ψ >  -  < ψ | B | ψ >²

The following inequality then holds  ( Heisenberg's uncertainty relation ).

Δa Δb   ≥   ½  | < ψ | [A,B] | ψ > |

Proof:  Assuming, without loss of generality, that both observables have zero averages  (so the trailing terms vanish in the above defining equations)  this may be identified as a type of  Schwarz inequality, which may be proved with the remark that the following quantity is nonnegative for  any  real number  x :

|| ( A + i x B ) | ψ > ||²   =   < ψ | ( A - i x B ) ( A + i x B ) | ψ >
     =   < ψ | ( x² B²  +  i x AB  -  i x BA  +  A² ) | ψ >
     =   x² ( Δb )²   +   x < ψ | i[A,B] | ψ >   +   ( Δa )²

So, the discriminant of this  real  quadratic function of  x  can't be positive.  QED

As  we have established  that the observables for the position and momentum along the  same  axis yield a commutator equal to  ( i h / 2π ) Î,  we have:

Δx Δpx   ≥   h/4π

Contrary to popular belief, the above doesn't simply state that two quantities can't be pinpointed simultaneously  (supposedly because "measuring one would disturb the other").  Instead, it expounds that no experiments can be made on identically prepared systems to determine  separately both quantities with arbitrary precision...  At least whenever the following noncommutative condition holds:

< ψ | AB | ψ >     ≠     < ψ | BA | ψ >


For a given quantum state, the uncertainty in the measurement of the momentum along x always has some definite nonzero value.  No experiment can be devised which could achieve a better precision, even if the experimenter does not care  at all  about estimating the position along x.  Conversely, for that same quantum state, there's a definite limit on the precision with which the position x can be determined, even if we do not care at all about the momentum along x.

What Heisenberg's uncertainty relation specifies is that  no quantum state exists  for which the product of those two separate uncertainties is below  h/4π.  This has absolutely nothing to do with one type of measurement "disturbing" the other...

It's true that several measurements disturb each other, but it's a completely  different  issue  (e.g., a precise momentum measurement may leave the system in a new quantum state where the inherent uncertainty in position may very well be much greater than originally).

The uncertainty principle goes much deeper than that.  In particular, it says that there's no way to create a perfectly focused beam of identical particles with the same lateral velocity.  Even if you measure only  either  the lateral position  or  the lateral momentum of any given particle from the beam, your many measurements of both quantities will feature standard deviations which cannot be better than what's imposed by the above uncertainty relation.  That's the way it is.
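
As a hedged numerical check  (not in the original),  one can verify the inequality on a large sample of random states, using the spin observables of a later section  (the Pauli matrices σx and σy, whose commutator is  2iσz):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
rng = np.random.default_rng(3)

def spread(A, psi):
    """Standard deviation of repeated measurements of A in the state psi."""
    mean  = np.real(psi.conj() @ A @ psi)
    mean2 = np.real(psi.conj() @ A @ A @ psi)
    return np.sqrt(max(mean2 - mean**2, 0.0))

for _ in range(1000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    lhs = spread(sx, psi) * spread(sy, psi)
    rhs = 0.5 * abs(psi.conj() @ (sx @ sy - sy @ sx) @ psi)
    assert lhs >= rhs - 1e-12                   # the uncertainty relation holds for every state
print("inequality verified on 1000 random states")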


(2012-07-10)   Transverse Certainties

Physical quantities whose commutator is a  scalar  (i.e., the identity operator multiplied into some complex number)  are said to be  conjugate  of each other and the dispersion in the measurement of one is inversely proportional to the dispersion in the measurement of the other.  This is illustrated by the position and the momentum of a particle  along the same axis.

Conversely, when the observables commute, the eigenstate of one is an eigenstate of the other and both quantities can be measured simultaneously, without any dispersion, for all possible values of either quantity.

Otherwise,  some  quantum states are eigenstates of one observable but not the other, while others  may  be eigenstates of both.  For example, the magnitude of the momentum  (but not its direction)  can be measured with zero dispersion if the particle is found to be at a location where the magnitude  |ψ|  of the wave function is either zero or maximal:

ψ* ∂ψ/∂x   =   ψ* ∂ψ/∂y   =   ψ* ∂ψ/∂z   =   0

That's because the commutator between the operators associated with the position coordinate  x  and  ||p||²  vanishes at such positions  (the same being true for the other coordinates):

[ x, ||p||² ] | ψ >   =   x (-h²/4π² ) Δψ  -  (-h²/4π² ) Δ(xψ)   =   (h²/2π² )  ∂ψ/∂x
< ψ | [ x, ||p||² ] | ψ >   =   (h²/2π² )  ψ* ∂ψ/∂x


(2007-07-16)   Orbital Angular Momentum and Spin
Spin is a form of angular momentum without a classical equivalent.
Élie Cartan  (1869-1951)

The following argument was fully developed by Élie Cartan (1869-1951) in 1913 from a purely geometrical standpoint  (not involving Planck's constant as such)  as he investigated the  Lie algebra of the group of three-dimensional rotations.  Cartan thus demonstrated, ahead of his time, how the idea of quantized spin is a consequence of three-dimensional geometry.  The pioneers of quantum mechanics rediscovered those things in the 1920's.  In 1935, Cartan himself published a remarkable textbook on his  Theory of Spinors.

Let's investigate the properties of a  vectorial  observable  Â  which satisfies the fundamental property previously established in the case of the quantum operator associated with a classical  (orbital)  angular momentum, namely:

Â  ×  Â   =   ( i h / 2π )  Â

This pretty equation is  merely  a mnemonic for  3  commutation relations:

Â   =   ( Âx , Ây , Âz )

[ Ây , Âz ]   =   ( i h / 2π ) Âx
[ Âz , Âx ]   =   ( i h / 2π ) Ây
[ Âx , Ây ]   =   ( i h / 2π ) Âz

The 3 components  Âx ,  Ây  and  Âz  are scalar observables (i.e., square matrices with hermitian symmetry).  We introduce another such observable:

Â²   =   Âx Âx + Ây Ây + Âz Âz         [ Note that:   Â²   =   (Â*) Â   ]

Unlike  Â,  Â²  is a  square  matrix.  It's classically associated with the squared length of a classical 3-vector associated with  Â  (if there's one).

Key remark :   Â²  is an observable which  commutes  with  Âz .
Proof :   Since the commutator  [ Âz Âz , Âz ]  is clearly zero, we have:

[ Â² , Âz ]   =   [ Âx Âx + Ây Ây , Âz ]   =   [ Âx Âx , Âz ]  +  [ Ây Ây , Âz ]

Both terms can be evaluated using the above commutation relations:

[ Âx Âx , Âz ]   =   Âx [ Âx , Âz ]  +  [ Âx , Âz ] Âx   =   - (ih/2π) ( Âx Ây + Ây Âx )
[ Ây Ây , Âz ]   =   Ây [ Ây , Âz ]  +  [ Ây , Âz ] Ây   =     (ih/2π) ( Ây Âx + Âx Ây )

Therefore, those two things add up to zero and we obtain:   [ Â² , Âz ]  =  0   QED

The above definition of  Â²  ensures that  < ψ | Â² | ψ >  is  nonnegative  for any ket  |ψ>  (HINT:  this is the sum of 3 real squares).  Therefore, this operator can only have nonnegative eigenvalues, which  (for the sake of  future  simplicity)  we may as well put in the following form, for some nonnegative number j.

j (j+1)  (h/2π)²

The punch line  will be  that  j  is restricted to integer or half-integer values.  For now however, we may just accept this expression because it spans all nonnegative values  once and only once  when  j  goes from zero to infinity.

So,  j  can be used to index every eigenvalue of  Â².   Similarly, we may use another index  m  to identify the eigenvalue  m (h/2π)  of  Âz .  For now, nothing is assumed about  m  (we'll show  later  that  2m  is an integer).

Since those two observables commute, there's an orthonormal  Hilbertian basis  consisting entirely of eigenvectors common to both of them.  We may specify it by introducing a third index  n  (needed to distinguish between kets having identical eigenvalues for both of our observables).  Those conventions are summarized by the following relations, which clarify the notation used for base kets:

Â²    | n, j, m >    =     j (j+1)  (h/2π)²    | n, j, m >
Âz    | n, j, m >    =     m (h/2π)    | n, j, m >

To determine the restrictions that  j  and  m  must obey, we introduce two  non-hermitian  operators, conjugate of each other.  They are collectively known as  ladder operators  and are respectively called  lowering operator  (or annihilation operator) and  raising operator (or creation operator) because it turns out that each transforms an eigenvector into another eigenvector corresponding to a lesser or greater eigenvalue, respectively.

Â-   =   Âx  -  i Ây         and         Â+   =   Âx  +  i Ây

Both commute with  Â2  (because Âx and Ây do).  The following holds:

||  Â+  | n, j, m >  ||²     =     < n, j, m |  Â-  Â+  | n, j, m >

Where     Â-  Â+   =   Âx Âx  +  Ây Ây  +  i [ Âx , Ây ]
                    =   Â²  -  Âz Âz  -  ( h / 2π ) Âz

So,   ||  Â+  | n, j, m >  ||²   =   [ j(j+1) - m² - m ]  ( h / 2π )²

As the nonnegative square bracket is equal to  j (j+1) - m(m+1)  we see that  m  cannot exceed  j.  We would find that (-m) cannot exceed  j  by performing the same computation for  ||  Â-  | n, j, m >  ||.  All told:

-j   ≤   m   ≤   j

Note that the above also proves that the ket  Â+  | n, j, m >  vanishes only when  m = j.  Likewise,  Â-  | n, j, m >  is nonzero unless  m = -j.

Except in the cases where they vanish, such kets are eigenvectors of  Âz  associated with the eigenvalue of index  m ± 1.  Let's prove that:

Âz Â+  -  Â+ Âz   =   [ Âz , Âx ]  +  i [ Âz , Ây ]   =   (ih/2π)  ( Ây  -  i Âx )
Therefore,     Âz Â+   =   Â+ Âz  +  (h/2π) Â+

So, if  | ψ >  is an eigenvector of  Âz  for the eigenvalue  m (h/2π),  then:

Âz  Â+  | ψ >   =   (m+1) (h/2π)   Â+  | ψ >

Thus,  Â+ | ψ >  is either zero or an eigenvector of  Âz  associated with the value  (m+1) (h/2π).  The same is true of  Â- | ψ >  with  (m-1) (h/2π).

Since we know that  m  is between  -j  and  +j ,  we see that  both  j-m  and  j+m  must be integers  (or else iterating one of the two constructions above would yield a nonzero eigenvector with a value of  m  outside of the allowed range).  Thus, 2j and 2m must be integers  (they are the sum and the difference of the integers j+m and j-m).  If  j  is an integer, so is  m.  If  j  is an half-integer, so is  m  (by definition, an "half-integer" is half the value of an  odd  integer).

The above demonstration is quite remarkable:  It shows how a  3-component observable is quantized whenever it obeys the same commutation relation as an  orbital  angular momentum.  Although half-integer values of the numbers  j  and  m  are allowed, those  do not  correspond to an orbital momentum.  Indeed, let's show that  orbital momenta  can only lead to  whole  values of j and m.
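
Here is a numerical sketch of the key steps  (not in the original),  for  j = 3/2  and units where  h/2π = 1:  the ladder operator  Â+ = Âx + iÂy  raises the  Âz  eigenvalue  m  by one unit and annihilates the top state  m = j.

import numpy as np

j = 1.5
m = np.arange(j, -j - 1, -1)                              # 3/2, 1/2, -1/2, -3/2
Az = np.diag(m).astype(complex)
# Standard ladder coefficients:  <j,m+1| A+ |j,m> = sqrt( j(j+1) - m(m+1) )
Ap = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), 1).astype(complex)
Am = Ap.conj().T
Ax, Ay = (Ap + Am) / 2, (Ap - Am) / (2 * 1j)

print(np.allclose(Ax @ Ay - Ay @ Ax, 1j * Az))            # [Ax,Ay] = i Az  is satisfied

top = np.eye(int(2*j + 1))[:, 0]                          # the eigenvector with m = j
print(np.allclose(Ap @ top, 0))                           # A+ kills it:  m cannot exceed j
below = np.eye(int(2*j + 1))[:, 1]                        # the eigenvector with m = j - 1
print(np.allclose(Az @ (Ap @ below), j * (Ap @ below)))   # A+ raised its Az eigenvalue to j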

 Come back later, we're
 still working on this one...

Wikipedia:   Spin quantum number   |   Spin


(2008-08-24)   Pauli Matrices  (1927)  &  Spin of an Electron
Three special 2-dimensional Hermitian matrices with unit squares.

In 1927, Wolfgang Pauli (1900-1958)  introduced three matrices for use in the theory of  electron spin.  Their  eigenvalues  are  +1  and  -1.

σx   =   [  0   1  ]        σy   =   [  0  -i  ]        σz   =   [  1   0  ]
         [  1   0  ]                 [  i   0  ]                 [  0  -1  ]

They combine into a 3-vector of matrices verifying the crucial equation:

σ × σ   =   2i σ

Therefore, they provide an explicit representation of the above type of "angular momentum" observables in the  simplest case  of only two values (eigenvalues).  This is meant to describe a lone fermion of spin ½,  of which the  electron  is the primary example.  The above discussion and notations apply directly to:

Â   =   (h/4π) σ     (i.e.,  Âx  =  (h/4π) σx ,  etc. )

In this simple case, we have   Â²   =   Âx²  +  Ây²  +  Âz²   =   3 (h/4π)² Î
The square of the spin of any electron is thus  always  equal to  3 (h/4π)².

The observable corresponding to the projection of the electron spin along the direction of the  unit  vector  u  of cartesian coordinates  (x,y,z)  is

Âu   =   x Âx  +  y Ây  +  z Âz   =   (h/4π)  [  z      x-iy  ]
                                              [  x+iy   -z    ]

Since  x² + y² + z²  =  1,  the eigenvalues of  Âu  are indeed always  ± (h/4π).

Note that any Hermitian matrix with such opposite eigenvalues can be put in this form.  Thus, any quantum state is associated with an observable which will confirm its orientation with certainty (probability 1).
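
The statements above are easily checked numerically  (a sketch, not in the original),  in units where  h/4π = 1 :

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The crucial relation  sigma x sigma = 2i sigma,  component by component:
print(np.allclose(sx @ sy - sy @ sx, 2j * sz))
print(np.allclose(sy @ sz - sz @ sy, 2j * sx))
print(np.allclose(sz @ sx - sx @ sz, 2j * sy))

# The square of the spin vector is 3 times the identity (i.e., 3 (h/4pi)^2 ):
print(np.allclose(sx @ sx + sy @ sy + sz @ sz, 3 * np.eye(2)))

# The spin along any unit vector (x, y, z) has eigenvalues +1 and -1:
theta, phi = 0.7, 1.9                                     # an arbitrary direction
x, y, z = np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)
print(np.linalg.eigvalsh(x*sx + y*sy + z*sz))             # approximately [-1, 1]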

In 1924, Pauli had identified a "two-valued quantum degree of freedom" associated, in particular, with the valence electron of an alkali metal.  The introduction of this new quantum number allowed him to state his famous exclusion principle (i.e., two electrons orbiting the same atom have different quantum numbers).  However, he strongly rejected the idea of the young Ralph Kronig (1904-1995) that this might be due to some intrinsic rotation of the electron.  Pauli discouraged Kronig from pursuing a "clever" idea which he pronounced to have "nothing to do with reality".  Instead, the proposal was duly published by Uhlenbeck and Goudsmit, who thus got the credit for the concept of electron spin.  Indeed, no classical rotation of something as small as an electron could produce the required magnetic moment unless the "surface of the electron" [sic] moved faster than light  (this objection was raised by H.A. Lorentz).  However, a pointlike object can be endowed with something similar to rotation.  One should simply refrain from dubious explanations relying on moving subparts...

Wikipedia:   Spin   |   Pauli matrices   |   Generalizations of Pauli matrices


(2008-08-26)   Quantum Entanglement
The  singlet  and  triplet  states of two entangled electrons.

According to the previous article,  a  pure quantum state  for the spin of a lone electron is represented by a  ket  which is a linear combination of the two eigenvectors of  σz ,  which we shall henceforth call "up" and "down":

| u >   =   [ 1 ]          | d >   =   [ 0 ]
            [ 0 ]                      [ 1 ]

This involves  a priori  two complex coefficients.  However, two kets that are complex multiples of each other represent the same quantum state, so the specification of a state actually depends on just two  real  numbers.

Another way to look at this is to remark that such a quantum state is represented by a  normalized  ket of unit length corresponding to either of two diametrically opposed points on a unit sphere.  Such a thing is indeed specified by two real numbers:  latitude and longitude  (although the global topology is not that of a sphere because diametrically opposite points are considered to be equivalent).

The juxtaposition of two such spins is represented by a linear combination of four  pairwise orthogonal  unit kets in a 4-dimensional  Hilbert space :

| u,u >       | u,d >       | d,u >       | d,d >

In that space, a quantum state is described by  6  independent real numbers  (4 complex coefficients modulo one complex scalar)  which is  2  more  "degrees of freedom"  than what might be expected for the  separate  description of two spins.  The extra possibilities are called  entangled  states.

Consider the same observables as before for the measurement of  the first spin only.  Those operators do not change at all the components of the ket which describe the second spin.

With a single spin, we saw that any given pure quantum state was always a  +1  eigenstate of a certain linear combination of  σx ,  σy  and  σz .

In particular, as all measurements of the corresponding quantity were always equal to +1 so was their average.  Surprisingly, this no longer holds for the measurement of a single spin in a two-spin system.  In particular, the following two states both yield a zero average for the measurement of the first spin  along any direction :

  •   (  | u,d > - | d,u >  ) / √2   =   | s >     (Singlet State)
  •   (  | u,d > + | d,u >  ) / √2   =   | t >     (Triplet State)

Similarly, in either of those two quantum states, the average measurement of the second spin along any direction is also zero.

We may also consider a combined observable which gives the  sum  of the two spins along some direction.  The result can only be  +2, 0 or -2  and the  average  is zero for  both  the singlet and triplet states.

However,  much more  is true for the  singlet  state, since any measurement of the sum of the spins along any direction always gives zero for the singlet state.  Not just a zero average but an actual zero measurement  every time !

Thus, if you measure the spin of one of the two electrons entangled in a  singlet  state, you will know  for sure  that a measurement of the spin of the other electron along the same direction will give the opposite result.  Always.
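
A short numpy sketch of those claims  (not in the original):  in the singlet state, the sum of the two spins measured along an arbitrary direction is exactly zero  (the state is annihilated by the corresponding operator),  while each individual spin merely has a zero average.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

u, d = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
singlet = (np.kron(u, d) - np.kron(d, u)) / np.sqrt(2)
triplet = (np.kron(u, d) + np.kron(d, u)) / np.sqrt(2)

n = np.array([0.36, 0.48, 0.80])                          # an arbitrary unit direction
spin_n = n[0]*sx + n[1]*sy + n[2]*sz
S1 = np.kron(spin_n, I2)                                  # first spin along n
S2 = np.kron(I2, spin_n)                                  # second spin along n

print(np.allclose((S1 + S2) @ singlet, 0))                # True:  the sum is always exactly zero
print(np.allclose((S1 + S2) @ triplet, 0))                # False: not so for the triplet state
print(np.round(np.real(singlet.conj() @ S1 @ singlet), 12))   # 0:  average of the first spin alone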

Noise of Entangled Electrons


(2008-08-31)   Bell's Inequalities   (1964)
Statistical relations which are  violated  in quantum mechanics.

Classically, the probabilities of events can be broken down as sums of  mutually exclusive  events.  Such a decomposition implies the following inequality between various joint probabilities of three events  A, B and C:

P ( A & [not B] )   +   P ( B & [not C] )     ≥     P ( A & [not C] )

 Bell's Inequality Pictorially

The picture shows that the event of the right-hand-side is composed of the two  mutually exclusive  events shaded in red, which also appear as components of the two events from the left-hand-side.  So, their probabilities add up to something no greater than the left-hand-side sum.  This is known as  Bell's Inequality.

In quantum mechanics, there are no such things as mutually exclusive events  (unless actual observations take place which turn the quantum logic of virtual possibilities into the more familiar statistics of observed realities).  Thus, there's no reason why Bell's inequality should apply to the calculus of virtual quantum possibilities.  Indeed, it doesn't in the  above case of a  singlet  state.
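
To make the violation concrete, here is a hedged numerical sketch  (not in the original):  for the singlet, the probability that the first spin is "up" along one direction while the second is "up" along another is  ½ sin²(θ/2),  where  θ  is the angle between the two directions;  thanks to the perfect anticorrelation noted above,  "not B"  can be read off the second particle, and the three joint probabilities fail to satisfy Bell's inequality.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

u, d = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
singlet = (np.kron(u, d) - np.kron(d, u)) / np.sqrt(2)

def up_projector(angle):
    """Projector on 'spin up' along a direction making the given angle with the z-axis."""
    return (np.eye(2) + np.cos(angle)*sz + np.sin(angle)*sx) / 2

def joint(angle1, angle2):
    """P( first spin up along angle1  AND  second spin up along angle2 )  in the singlet."""
    P = np.kron(up_projector(angle1), up_projector(angle2))
    return np.real(singlet.conj() @ P @ singlet)

a, b, c = 0.0, np.pi/6, np.pi/3                           # three coplanar directions, 30 degrees apart
lhs = joint(a, b) + joint(b, c)                           # plays the role of P(A & not B) + P(B & not C)
rhs = joint(a, c)                                         # plays the role of P(A & not C)
print(lhs, rhs, lhs >= rhs)                               # about 0.067,  0.125,  False:  violated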

 Come back later, we're
 still working on this one...

EPR paradox (1935)   |   Bell's theorem (1969)   |   John S. Bell (1928-1990)
CHSH inequality (1969)   |   Abner Shimony (1928-)   |   John Clauser (1942-)
Anton Zeilinger (1945-)   |   Alain Aspect (1947-)


(2008-08-25)   Higher-Order Spin
Equivalents of the Pauli matrices beyond spin ½.

A  particle of spin  j  is something which allows a measurement of its spin along any direction to have  2j+1  values  (according to Cartan's argument).

Two measurement values are allowed for  j=1/2,  3 values for j=1,  4 values for j=3/2,  5 values for j=2.  The relativistic case of massless particles is beyond the scope of this discussion:  The measured spin of a massless particle can only be clockwise or counterclockwise, at full magnitude, in the direction of motion  (that would translate into only two possible measurement results, for any nonzero value of  j ).

The relevant Hilbert space has dimension 2j+1 and the observables for the three projections of angular momentum on three orthogonal directions  (in a right-handed configuration)  can be expressed as in the above special case  (j=½)  using the counterparts of Pauli matrices in a Hilbert space of  2j+1  dimensions, namely three 2j+1 by 2j+1 matrices which combine into a "vector" verifying the following compact commutation relation:

σ × σ   =   2i σ

Actual  observables  of angular momentum are simply obtained by multiplying such matrices into the half-quantum of spin  (h/4π).

Here's how we may construct such a thing:  First we impose  wlg  that  σz  is diagonal  (that simply means we decide to use eigenvectors of  σz  to form a basis for our Hilbert space).  We do know the eigenvalues of  σz  from Cartan's argument, so  σz  is entirely specified up to the ordering of the (real) elements in the diagonal.  We choose (arbitrarily) to order our base kets so that those (distinct) eigenvalues appear in decreasing order on the diagonal of  σz .  So,  σz  is simply the diagonal matrix whose  2j+1  elements are  (2j, 2j-2, 2j-4, ... -2j).

We are looking for the  hermitian  matrix  σx  in terms of  (j+1)(2j+1)  scalar unknowns  (including  2j+1  real ones).  In the case  j = 3/2  this would mean a total of  10  unknowns  (6 complex and 4 real ones)  in a  4 by 4  matrix :

σx   =   [  a   b*  c*  d*  ]
         [  b   e   f*  g*  ]
         [  c   f   h   k*  ]
         [  d   g   k   m   ]

The reader is encouraged to use that explicit example, with index-free notations, to embody the following outline of a general derivation.

Now,  σy  is obtained directly from the equation:

[ σz , σx ]   =   2 i σy

This yields  an expression of  σy  where each entry is proportional to the corresponding (unknown) entry of  σx .  Next, we may use the relation:

[ σy , σz ]   =   2 i σx

This tells us that all terms of  σx  (and, therefore, also those of  σy )  must vanish  except  at positions adjacent to the main diagonal.  Now, we're faced with only  2j  unknown complex coefficients  (which are unconstrained, at this point)  and just  one more  commutation relation to satisfy, namely:

[ σx , σy ]   =   2 i σz

It turns out that this final equation gives us the  squares of the absolute values  of the aforementioned remaining  2j  unknowns.  Each of them is thus determined up to an arbitrary  phase factor  (for a total of  2j  arbitrary multipliers of unit length).  In the following tabulation, we have chosen a "standard" convention for those phase factors which makes all the coefficients of  σx  real and  positive.

Spin  j = 1/2  :

σx   =   [  0   1  ]        σy   =   [  0  -i  ]        σz   =   [  1   0  ]
         [  1   0  ]                 [  i   0  ]                 [  0  -1  ]

Spin  j = 1  :

σx   =   √2 [  0   1   0  ]      σy   =   √2 [  0  -i   0  ]      σz   =   [  2   0   0  ]
            [  1   0   1  ]                  [  i   0  -i  ]               [  0   0   0  ]
            [  0   1   0  ]                  [  0   i   0  ]               [  0   0  -2  ]

Spin  j = 3/2  :

σx   =   [  0    √3   0    0   ]      σy   =   [  0     -i√3    0      0    ]      σz   =   [  3   0   0   0  ]
         [  √3   0    2    0   ]               [  i√3    0     -2i     0    ]               [  0   1   0   0  ]
         [  0    2    0    √3  ]               [  0      2i     0     -i√3  ]               [  0   0  -1   0  ]
         [  0    0    √3   0   ]               [  0      0      i√3    0    ]               [  0   0   0  -3  ]

Spin  j = 2  :

σx   =   [  0   2   0   0   0  ]      σy   =   [  0    -2i    0      0     0   ]      σz   =   [  4   0   0   0   0  ]
         [  2   0   √6  0   0  ]               [  2i    0    -i√6    0     0   ]               [  0   2   0   0   0  ]
         [  0   √6  0   √6  0  ]               [  0    i√6    0     -i√6   0   ]               [  0   0   0   0   0  ]
         [  0   0   √6  0   2  ]               [  0    0     i√6     0    -2i  ]               [  0   0   0  -2   0  ]
         [  0   0   0   2   0  ]               [  0    0      0     2i     0   ]               [  0   0   0   0  -4  ]

Relativistic arguments  (beyond the scope of this discussion)  do not allow  elementary  particles beyond spin 2.  Composite  objects with higher spins do not have a fixed value of  j.  However, if their possible decay into things of lower spin is ignored, they would behave like  fictional  high-spin objects, starting with:

Spin  j = 5/2  :

σx   =   [  0    √5   0    0    0    0   ]      σy   =   [  0     -i√5    0      0      0      0    ]
         [  √5   0    √8   0    0    0   ]               [  i√5    0     -i√8    0      0      0    ]
         [  0    √8   0    3    0    0   ]               [  0      i√8    0     -3i     0      0    ]
         [  0    0    3    0    √8   0   ]               [  0      0      3i     0     -i√8    0    ]
         [  0    0    0    √8   0    √5  ]               [  0      0      0      i√8    0     -i√5  ]
         [  0    0    0    0    √5   0   ]               [  0      0      0      0      i√5    0    ]

σz   =   [  5   0   0   0   0   0  ]
         [  0   3   0   0   0   0  ]
         [  0   0   1   0   0   0  ]
         [  0   0   0  -1   0   0  ]
         [  0   0   0   0  -3   0  ]
         [  0   0   0   0   0  -5  ]

By  Cartan's argument, we have:   σx² + σy² + σz²   =   4 j (j+1)  Î       [ Î being the identity in 2j+1 dimensions ]

The same pattern holds for any spin  j :   The  nth  coefficient down the upper subdiagonal of  σx  for spin  j  is simply given by the expression:

(σx )n,n+1   =   √( n ( 2j + 1 - n ) )   exp ( i φn )       [ e.g.,  φn = 0 ]
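
The construction is easy to automate.  The following sketch  (not in the original)  builds the three matrices for any spin  j  from that rule  (with the standard choice  φn = 0,  or any supplied phases),  derives  σy  from  [σz , σx] = 2i σy ,  and checks the two identities stated above:

import numpy as np

def spin_matrices(j, phases=None):
    """(2j+1)-dimensional analogues of the Pauli matrices, from the subdiagonal rule
    (sx)_{n,n+1} = sqrt( n (2j+1-n) ) exp(i phi_n)."""
    d = int(round(2*j + 1))
    n = np.arange(1, d)
    phases = np.zeros(d - 1) if phases is None else np.asarray(phases, dtype=float)
    upper = np.sqrt(n * (2*j + 1 - n)) * np.exp(1j * phases)
    sx = np.diag(upper, 1) + np.diag(upper.conj(), -1)
    sz = np.diag(np.arange(2*j, -2*j - 1, -2.0)).astype(complex)
    sy = (sz @ sx - sx @ sz) / (2 * 1j)                   # from  [sz, sx] = 2i sy
    return sx, sy, sz

sx, sy, sz = spin_matrices(2)                             # the spin-2 matrices tabulated above
print(np.allclose(sx @ sy - sy @ sx, 2 * 1j * sz))        # sigma x sigma = 2i sigma
print(np.allclose(sx @ sx + sy @ sy + sz @ sz, 4 * 2 * (2+1) * np.eye(5)))   # 4 j (j+1) times identity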

If used, each phase factor applies to two matching elements in  σx  and  σy  which are above the diagonal.  The conjugate phase applies to the transposed elements (below the diagonal).  This would turn the ordinary (2x2) Pauli matrices into:

σx   =   [  0          exp(iφ)  ]      σy   =   [  0            -i exp(iφ)  ]      σz   =   [  1   0  ]
         [  exp(-iφ)   0        ]               [  i exp(-iφ)    0          ]               [  0  -1  ]

The eigenvectors of those three matrices are respectively  proportional  to:

[  1         ]  and  [  -1        ]        [  1           ]  and  [  -1          ]        [  1  ]  and  [  0  ]
[  exp(-iφ)  ]       [  exp(-iφ)  ]        [  i exp(-iφ)  ]       [  i exp(-iφ)  ]        [  0  ]       [  1  ]

We may call  twists  the  2j  such phase factors which are part of  σx  and  σy .  For spin  ½,  the single twist can be eliminated by redefining which axis (perpendicular to the z-axis) is associated with the twist-free version of  σx .  This works for a single spin but cannot be done simultaneously for several spins...  It's as if a spin possessed an internal phase which indicates, so to speak, the actual  angular position  in a "rotation" around a given axis.

The same trick can always be used to make the sum of the twists vanish in a single higher spin, but what is the physical significance of the 2j-1 remaining degrees of freedom?  They seem to determine, in a nontrivial way, the relative positional phases in the "rotations" around each direction of space.  In particular, what does  φ  mean in the following observables for spin 1 ?

σx   =   √2 [  0          exp(iφ)   0         ]      σy   =   √2 [  0            -i exp(iφ)   0            ]
            [  exp(-iφ)   0         exp(-iφ)  ]                  [  i exp(-iφ)    0           -i exp(-iφ)  ]
            [  0          exp(iφ)   0         ]                  [  0             i exp(iφ)    0           ]

σz   =   [  2   0   0  ]
         [  0   0   0  ]
         [  0   0  -2  ]

The columns in the following [unitary and hermitian] matrices are eigenvectors:

(1/2) [  1             √2 exp(iφ)    1             ]      (1/2) [  1              -i√2 exp(iφ)   -1             ]      [  1   0   0  ]
      [  √2 exp(-iφ)   0             -√2 exp(-iφ)  ]            [  i√2 exp(-iφ)    0              i√2 exp(-iφ)  ]      [  0   1   0  ]
      [  1             -√2 exp(iφ)   1             ]            [  -1              -i√2 exp(iφ)   1             ]      [  0   0   1  ]


(2005-06-30)   Density operators = macrostates   (von Neumann, 1927)
Quantum representation of systems in  imperfectly  known  mixed states.

A  microstate  (or pure quantum state)  is represented by a normed ket from the relevant Hilbert space, up to an irrelevant phase factor.  A more realistic  macrostate  is a statistical mixture  (called  mixed state  or  Gemischt)  which can be represented by a unique [hermitian]  density operator  ρ  with positive eigenvalues that add up to 1.

ρ   =   Σn   pn  | n > < n |

In particular, the unique density operator representing the pure quantum state associated with the normed ket  | ψ >  is given by the following expression, which is unaffected by phase factors  (since multiplying  | ψ >  by a complex number of unit norm will multiply  < ψ |  by the reciprocal).

ρ   =    | ψ >  < ψ |

A statistical mixture consisting of a proportion  u  of the macrostate represented by  ρ1  and a proportion  1-u  of the macrostate represented by  ρ2  is represented by the following density operator:

ρ   =    u ρ1  +  (1-u) ρ2

The trace of an operator is the sum of the elements in its main diagonal  (this doesn't depend on the base).  All density operators have a trace equal to 1.  Conversely, all operators of trace 1 can be construed as density operators.

Tr ( Â )   =   Σn   < n | Â | n >

The measurement of any observable  Â  yields the eigenvalue  a  with the following probability, involving the projector onto the relevant eigenspace:

p ( a )   =   Tr ( ρ Pa )

Thus, systems are experimentally different if and only if they have different density operators.  We may as well talk about  ρ  as  being  a macrostate.

The  average value  resulting from a measurement of   Â  =  Σa a Pa   is:

< Â >   =   Σa  a p(a)   =   Σa  Tr ( ρ a Pa )   =   Tr ( ρ Â )
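
A brief numerical sketch  (not in the original)  of a mixed state of one spin ½,  its measurement statistics and its statistical entropy  (with  k = 1):

import numpy as np

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
rho = 0.75 * np.outer(up, up.conj()) + 0.25 * np.outer(down, down.conj())   # a Gemischt
print(np.isclose(np.trace(rho).real, 1.0))                # density operators have unit trace

sz = np.array([[1, 0], [0, -1]], dtype=complex)
P_up = np.outer(up, up.conj())                            # projector on the +1 eigenspace of sz
print(np.real(np.trace(rho @ P_up)))                      # p(+1) = Tr(rho P) = 0.75
print(np.real(np.trace(rho @ sz)))                        # average value  Tr(rho sz) = 0.5

p = np.linalg.eigvalsh(rho)
entropy = -sum(pk * np.log(pk) for pk in p if pk > 0)
print(entropy)                                            # about 0.562 ;  zero only for a pure state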

Mere interaction with a measuring instrument turns the macrostate  ρ  into
Σa Pa ρ Pa .    Recording the measure  a  makes it    Pa ρ Pa / Tr ( ρ Pa )

This is known as  Lüders' rule  or  Lüders' projection postulate.  It was first discussed in 1951 by Gerhart Lüders, in  "Über die Zustandsänderung durch den Messprozess" (On the state-change due to the measurement process)  which appeared in  Annalen der Physik, 8 (6) 322-328.

In 1957, Gerhart Lüders (1920-1995) proved the CPT theorem  (discovered independently by Wolfgang Pauli and John S. Bell)  using Lorentz symmetry.  In 1958 Lüders also helped establish the relativistic connection between spin and statistics.  He received the Max Planck Medal in 1966.

An [analytic] function of an operator, like the logarithm of an operator, is defined in a standard way:  In a base where the operator is diagonal, its image is the diagonal operator whose eigenvalues are the images of its eigenvalues.

The  statistical entropy  S ( ρ )  is defined in units of a positive constant  k :

S ( ρ )   =   -k  Tr (  ρ Log ( ρ )  )

S  is positive, except for a pure state  ρ = | ψ > < ψ |,  for which  S = 0.  Algebraically, the following  strict  inequality holds, unless   ρ = ρ'.

S ( ρ )   <   -k  Tr (  ρ Log ( ρ' )  )


An isolated nonrelativistic system evolves according to the Schrödinger-Liouville equation, involving its hamiltonian  H :

( ih / 2π )  dρ/dt   =   H ρ - ρ H

With thermal contacts, a quasistatic evolution has different rules (T and H vary).  Introducing the  partition function  (Z) :

Z   =   Tr exp ( - H / kT )       and       ρ   =   exp ( - H / kT ) / Z

The variation of the internal energy   U  =  Tr ( ρ H )   may be expressed as

dU   =   Tr ( dρ H )  +  Tr ( ρ dH )   =   dQ  +  dW

U - TS   =   -kT Log Z

 Come back later, we're
 still working on this one...

Density matrix
