
Final Answers
© 2000-2014   Gérard P. Michon, Ph.D.

(Involutive) Fourier Transform
&  Tempered Distributions

If you want to find the secrets of the Universe,
think in terms of energy, frequency and vibration.

Nikola Tesla  (1856-1943)
 

Related articles on this site:

Related Links (Outside this Site)

Fourier Transform  by  Eric Weisstein.
Self-Characteristic Distributions  by  Aria Nosratinia.

Wikipedia :   Tempered distributions and Fourier transform   |   Fourier transform   |   Deconvolution

Mathematics for the Physical Sciences (1966)  by  Laurent Schwartz  (Dover)

Video:  The Fourier Transform and its Applications   by  Brad Osgood
(Stanford University, EE261)   [ Playlist of 30 lectures ]

 

Fourier Transform  &  Tempered Distributions


(2008-10-21)   Convolution Product
Tight restrictions on one factor allow great leeway on the others.

Whenever it makes sense, the following integral  (from −∞ to +∞)  is known as the value at point  x  of the  convolution product  of  f  and  g.

$$(f * g)(x) \;=\; \int_{-\infty}^{+\infty} f(u)\; g(x-u)\; du$$

So defined, the convolution operator is  commutative  (HINT:  Change the variable of integration from  u  to  w = x-u ).  It's also  associative :

$$(f * g * h)(x) \;=\; \iint f(u)\; g(v)\; h(x-u-v)\; du\, dv$$

More generally, a convolution product of  several functions  at a certain value  x  is the integral of their ordinary (pointwise) product over the (oriented) hyperplane where the sum of their arguments is equal to the constant  x.  This viewpoint makes the commutativity and associativity of convolution obvious.
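For a quick numerical sanity check of commutativity and associativity, discrete convolutions on a uniform grid (here with NumPy's  np.convolve,  scaled by the step size)  approximate the above integrals.  A minimal sketch, with test functions chosen only for illustration:

```python
import numpy as np

# Discrete sanity check:  on a uniform grid of step h, the convolution
# integral (f*g)(x) is approximated by  h * np.convolve(f_samples, g_samples).
h = 0.01
x = np.arange(-5, 5, h)

f = np.exp(-x**2)                        # a Gaussian
g = np.where(np.abs(x) < 1, 1.0, 0.0)    # a square pulse
k = np.exp(-np.abs(x))                   # a two-sided exponential

fg = h * np.convolve(f, g)               # samples of f*g
gf = h * np.convolve(g, f)               # samples of g*f
print(np.max(np.abs(fg - gf)))           # tiny (rounding only):  commutativity

lhs = h * np.convolve(fg, k)                       # (f*g)*k
rhs = h * np.convolve(f, h * np.convolve(g, k))    # f*(g*k)
print(np.max(np.abs(lhs - rhs)))         # tiny (rounding only):  associativity
```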

Loosely speaking, a key feature of convolution is that a  convolution product  of two functions is  at least  as nice a function as  either  of its factors.  We shall soon discover in what sense a convolution product with a nice enough function can be a well-defined function even when the  other factor  is not a function at all...

This happens, for example, when that "other factor" is Dirac's  δ  distribution  (a unit spike at point zero)  which is, almost by definition, the neutral element for the convolution operation:

$$\delta * f \;=\; f * \delta \;=\; f$$

To be rigorous, such equations require the extension of our domain of discourse from numerical functions to  distributions  (sometimes improperly called "generalized functions")  using the approach we are about to discuss.
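Numerically, Dirac's  δ  can be mimicked by a very narrow normalized spike;  convolving it with a smooth function then returns (approximately) that function.  A minimal sketch, keeping in mind that  δ  itself is not a function:

```python
import numpy as np

# Mimic Dirac's delta by a narrow normalized Gaussian spike and check that
# convolving it with a smooth f gives back (approximately) f itself.
h = 0.001
x = np.arange(-10, 10, h)

eps = 0.01
spike = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
f = np.cos(x) * np.exp(-x**2 / 8)

conv = h * np.convolve(spike, f, mode='same')
print(np.max(np.abs(conv - f)))      # small:  the spike acts as a neutral element
```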

The basic idea occurred to my late teacher  Laurent Schwartz (1915-2002)  one night in 1944.  (He got a  Fields Medal  for that, in 1950.)


(2008-10-23)   Duality between (some) functions and functionals
An  Hermitian product  defined over dual spaces ("bras" and "kets").

Let's consider a pair  f  and  g  of complex functions of a real variable.

In the same spirit as the above convolution product, we may define an  inner product  (endowed with Hermitian symmetry)  for  some  such pairs of functions  via the following definite integral  (from −∞ to +∞)  whenever it makes sense:

$$\langle\, f \,|\, g \,\rangle \;=\; \int_{-\infty}^{+\infty} f(u)^*\; g(u)\; du$$

In the above,   f (u)*  is the  complex conjugate  of  f (u).

Other introductions to the  Theory of distributions  usually forgo that complex conjugation and present the above as a mere  pairing  of two functions rather than a full-fledged  Hermitian product.  They also use a comma rather than a vertical bar as a separator.  We use the latter notation  (Dirac's notation)  which is firmly linked with  Hermitian symmetry  in the minds of all physicists familiar with Dirac's  bra-ket  notation  (pun intended).  Here,  kets  are well-behaved  test functions  and  bras,  which we shall define as the duals of  kets,  are the new mathematical animals called  distributions,  presented in the next article.
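As a sketch, the Hermitian symmetry  ⟨f|g⟩ = ⟨g|f⟩*  can be checked numerically on a pair of rapidly decreasing complex functions (chosen here only for illustration;  plain Riemann sums stand in for the integrals):

```python
import numpy as np

# Hermitian symmetry of the bracket:  <f|g> should equal the conjugate of <g|f>.
h = 0.001
u = np.arange(-12, 12, h)

f = np.exp(-u**2) * np.exp(2j * u)       # a complex Gaussian wave packet
g = (u + 1j) * np.exp(-u**2 / 2)

braket_fg = h * np.sum(np.conj(f) * g)   # <f|g>
braket_gf = h * np.sum(np.conj(g) * f)   # <g|f>
print(braket_fg)
print(np.conj(braket_gf))                # same complex number, up to rounding
```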


(2008-10-23)   Theory of Distributions
The set of  distributions  is the  dual  of the set of  test functions.

A  linear form  is a linear function which associates a  scalar  to a vector.

For a vector space of finite dimension, the linear forms constitute another vector space of the same dimension dubbed the  dual  of the original one.

On the other hand, the dual of a space of infinitely many dimensions need not be isomorphic to it.  Actually, something strange happens among spaces of infinitely many dimensions:  The smaller the space, the larger its dual...

Thus, loosely speaking, the  dual  of a very restricted space of  test functions  is a very large space of new mathematical objects called  distributions.

The name "distribution" comes from the fact that those objects provide a rigorous basis for classical 3-dimensional distributions of electric charges, which need not be ordinary volume densities but may also be "concentrated" on surfaces, lines or points.  The alternate term of "generalized functions" is best shunned, because distributions do generalize measure densities rather than pointwise functions.  (Two functions which differ in finitely many points correspond to the same distribution, because they are "physically" indistinguishable.)

The most restricted space of  test functions  conceived by  Laurent Schwartz  (in 1944)  is that of the  smooth functions of compact support.  It is thus the space which yields, by duality, the most general type of  distributions.

Such distributions turn out to be  too  general in the context of Fourier analysis because the Fourier transform of a function of compact support is never itself a function of compact support.  So, Schwartz introduced a larger space of test functions, stable under Fourier transform, which yields a dual space of so-called "tempered distributions" for which the Fourier transform is well-defined by duality, as explained below.

The  support  of a function is the closure of the set of all points for which it's nonzero.  Compactness is a very general topological concept  (a subset of a topological space is compact when every open cover contains a finite subcover).  In Euclidean spaces of  finitely many  dimensions, a set is compact if and only if it's both closed and bounded  (that's the  Heine-Borel Theorem ).  Thus, the support of a function of a real variable is compact when that function is zero outside of a finite interval.  Examples of  smooth functions  (i.e., infinitely differentiable functions)  of compact support are not immediately obvious.  Here is one:

$$\xi(x) \;=\; \exp\!\left(\frac{1}{x^2-1}\right) \;\;\text{if } -1 < x < 1, \qquad\qquad \xi(x) \;=\; 0 \;\;\text{elsewhere}$$
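A quick numerical look at this bump function shows how violently it flattens out near the edges of its support  (all of its derivatives vanish at  x = ±1).  A small sketch, using a crude finite difference for the derivative:

```python
import numpy as np

# The bump function:  zero outside (-1, 1), yet smooth everywhere.
def bump(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(1.0 / (x[inside]**2 - 1.0))
    return out

print(bump([0.0, 0.9, 0.99, 0.999, 1.0, 2.0]))   # values collapse to 0 near the edge

h = 1e-4                                          # crude derivative near x = 1
print((bump(0.999 + h) - bump(0.999 - h)) / (2 * h))   # also vanishingly small
```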


(2008-10-22)   Schwartz Functions
The smooth functions whose derivatives are all  rapidly decreasing.

A function of a vector  x  is called  rapidly decreasing  when its product by  any  power of  ||x||  vanishes at infinity  (i.e., it tends to zero as  ||x||  tends to infinity).

Schwartz functions  are smooth functions whose partial derivatives of any order are all  rapidly decreasing  in the above sense.  The set of all  Schwartz functions  is called the  Schwartz Space.
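For instance, the Gaussian  e^(−x²)  is a Schwartz function:  multiplying it by any power of  x  still gives something that dies off at infinity, as this small numerical table suggests:

```python
import numpy as np

# Rapid decrease:  x**n * exp(-x**2) still tends to 0 as |x| grows, for every n.
x = np.array([5.0, 10.0, 20.0, 40.0])
for n in (1, 5, 20):
    print(n, x**n * np.exp(-x**2))    # each row dies off along x, despite the power
```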


(2008-10-22)   Tempered Distributions
Using  Schwartz functions  as  test functions.

The dual of the  Schwartz Space  is a set of distributions known as  tempered distributions.

Not all distributions have a Fourier transform but  tempered  ones do.  The Fourier transform of a tempered distribution is a tempered distribution.

Two functions which differ in only finitely many points  (or, more generally, over any set of vanishing Lebesgue measure)  represent the  same  tempered distribution.  However, the explicit pointwise formulas giving the inverse transform of the Fourier transform of a function, if they yield a function at all, can only yield a function satisfying the following relation:

$$f(x) \;=\; \tfrac{1}{2}\,\big[\, f(x^-) \,+\, f(x^+) \,\big]$$

If it's not continuous, such a function only has discrete  jump discontinuities,  where it takes a value equal to the average of its left-hand and right-hand limits.

When a distribution can be represented by a function, it's wise to equate it to the representative function which has the above property, because it's the only one which can be retrieved pointwise from its Fourier transform without using dubious ad hoc methods.
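As a numerical illustration of that midpoint property, truncating the inverse Fourier integral of the square function  (defined below, with transform  sinc πs)  and evaluating it right at the jump  x = ½  gives values approaching  ½,  not  0  or  1.  A sketch, using Riemann sums:

```python
import numpy as np

# Truncated inverse Fourier integral of the square function, evaluated at the
# jump x = 1/2.  As the cutoff S grows, the value approaches 1/2, the average
# of the left-hand limit (1) and the right-hand limit (0).
def inverse_at(x, S, n=400001):
    s = np.linspace(-S, S, n)
    ds = s[1] - s[0]
    integrand = np.sinc(s) * np.exp(2j * np.pi * s * x)   # np.sinc(s) = sin(pi s)/(pi s)
    return ds * np.sum(integrand).real

for S in (10, 100, 1000):
    print(S, inverse_at(0.5, S))    # creeps toward 0.5
```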


(2008-10-24)   The Involutive Definition of the Fourier Transform
The basic definition for functions extends to  tempered distributions.

I am introducing a viewpoint  (the involutive convention)  which makes the Fourier transform its own inverse  (i.e., the Fourier transform so defined is an  involution).

The following definition eliminates all the reciprocal coefficients and sign changes which have hampered the style of generations of scholars.  (The complex conjugation which is part of our definition was shunned in the many competing historical definitions of the Fourier transform.)  It is consistent with the Hermitian symmetry imposed above on the "pairing" of distributions and test functions to blur the unnecessary distinction between that "pairing" and a clean Hermitian product.

The Involutive Fourier Transform  F
$$(F f\,)(s) \;=\; \int_{-\infty}^{+\infty} e^{\,2\pi i\, s x}\; f(x)^*\; dx$$

As usual, the integral is understood to be a  definite  integral from  −∞  to  +∞.   f (x)*  is the complex conjugate of  f (x).
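A discrete analogue may help make the involution concrete:  on  N  samples, the unitary map  x ↦ conj(FFT(x))/√N  plays the role of  F,  and applying it twice returns the original samples.  (A sketch only;  NumPy's FFT convention is assumed.)

```python
import numpy as np

# Discrete analogue of the involutive transform:  (F x) = conj(FFT(x)) / sqrt(N).
# Applying it twice recovers the original samples:  F is its own inverse.
def F(x):
    return np.conj(np.fft.fft(x)) / np.sqrt(len(x))

rng = np.random.default_rng(0)
x = rng.normal(size=256) + 1j * rng.normal(size=256)   # arbitrary complex samples

print(np.allclose(F(F(x)), x))    # True
```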

Example:  Square Function and Sine Cardinal (sinc)

The  square function  Π(x)  =  ½ [ sgn(½+x) + sgn(½−x) ]  and the  sampling function   f (s)  =  sinc(πs)   are  Fourier transforms of each other.   Proof:

As the  square function  Π  vanishes outside the interval  [−½, ½]  and is equal to  1  on the interior of that interval,  the Fourier transform  f  of  Π  is given by:

$$f(s) \;=\; \int_{-1/2}^{\,1/2} e^{\,2\pi i\, s x}\, dx \;=\; \frac{e^{\,\pi i s} - e^{-\pi i s}}{2\pi i\, s} \;=\; \frac{\sin \pi s}{\pi s} \;=\; \operatorname{sinc}\, \pi s \qquad \text{QED}$$
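The pair can also be checked numerically;  the sketch below approximates the defining integral by a Riemann sum  (the conjugation in the definition of  F  is immaterial here, since  Π  is real):

```python
import numpy as np

# The transform of the square function (1 on (-1/2, 1/2), 0 elsewhere),
# computed by a Riemann sum, compared against sinc(pi*s) = sin(pi*s)/(pi*s).
dx = 1e-4
x = np.arange(-0.5 + dx / 2, 0.5, dx)       # midpoints covering the support of the pulse
s = np.linspace(-4, 4, 17)

transform = np.array([dx * np.sum(np.exp(2j * np.pi * si * x)) for si in s])
print(np.round(transform.real, 6))
print(np.round(np.sinc(s), 6))              # essentially the same values
```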
 


(2008-10-23)   Parseval's Theorem  (1799)
In modern terms:  The Fourier transform is  unitary.

The Swiss mathematician Michel Plancherel (1885-1967) is credited with the modern form of the theorem.  The core idea occurs in a statement about series published in 1799 by Antoine Parseval (1755-1836).

 Come back later, we're
 still working on this one...
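In the meantime, here is a numerical glimpse of the unitarity claim, using the discrete analogue of  F  sketched above:  the "energy"  Σ |xₙ|²  of a sampled signal is preserved by the transform.

```python
import numpy as np

# Discrete glimpse of Parseval / Plancherel:  the map  F x = conj(FFT(x))/sqrt(N)
# preserves the "energy"  sum |x_n|^2  of the samples (it is unitary).
def F(x):
    return np.conj(np.fft.fft(x)) / np.sqrt(len(x))

t = np.linspace(-8, 8, 1024, endpoint=False)
f = np.exp(-t**2) * np.exp(3j * t)          # some square-summable signal

print(np.sum(np.abs(f)**2))                 # energy before ...
print(np.sum(np.abs(F(f))**2))              # ... equals energy after, up to rounding
```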


(2008-10-24)   Some distributions and their Fourier transforms
The unit Gaussian distribution and Dirac's comb (shah) are fixed-points.

$f(x) \,=\, e^{-\pi x^2}$   is its own Fourier transform.

Proof :   Let  g  be the Fourier transform of  f.   We have:

$$g(s) \;=\; \int e^{\,2\pi i\, s x}\; e^{-\pi x^2}\, dx$$     Differentiating both sides, we obtain:
$$g'(s) \;=\; -i \int e^{\,2\pi i\, s x}\; \left(-2\pi x\, e^{-\pi x^2}\right) dx$$     which we integrate by parts:
$$g'(s) \;=\; +i \int \left(2\pi i\, s\; e^{\,2\pi i\, s x}\right) e^{-\pi x^2}\, dx \;=\; -2\pi s\; g(s)$$

So,  g  satisfies the differential equation  $dg = -2\pi s\; g\; ds$,  whose solution is:

$$g(s) \;=\; g(0)\; e^{-\pi s^2}$$

Because of a  well-known miracle,   $g(0) = \int e^{-\pi x^2}\, dx = 1$.   So   $g(s) \,=\, e^{-\pi s^2}$.      QED
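The same fixed-point property can be confirmed by brute-force quadrature:

```python
import numpy as np

# Brute-force quadrature:  the transform of exp(-pi*x**2) is exp(-pi*s**2).
dx = 1e-3
x = np.arange(-8, 8, dx)
f = np.exp(-np.pi * x**2)                   # real, so the conjugation is harmless

s = np.array([0.0, 0.3, 0.7, 1.0, 1.5])
g = np.array([dx * np.sum(np.exp(2j * np.pi * si * x) * f) for si in s])
print(np.round(g.real, 8))
print(np.round(np.exp(-np.pi * s**2), 8))   # same values
```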

 Come back later, we're
 still working on this one...


(2008-10-24)   Poisson summation formula.  Sampling formula.
The unit Dirac comb (shah function) is its own Fourier transform.

 Come back later, we're
 still working on this one...

Poisson summation formula
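In the meantime, the Poisson summation formula  Σₙ f(n) = Σₖ f̂(k)  (stated here with the conventional, non-conjugating transform, for a rapidly decreasing  f)  can be checked numerically on a dilated Gaussian, whose transform is known in closed form:

```python
import numpy as np

# Poisson summation, checked on the dilated Gaussian f(t) = exp(-pi*t**2/a**2),
# whose conventional transform is  fhat(k) = a * exp(-pi*a**2*k**2):
#       sum over n of f(n)   =   sum over k of fhat(k)
a = 1.7
n = np.arange(-50, 51)

lhs = np.sum(np.exp(-np.pi * n**2 / a**2))
rhs = np.sum(a * np.exp(-np.pi * a**2 * n**2))
print(lhs, rhs)      # equal to machine precision
```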


(2008-10-24)   Two-dimensional Fourier transform.
Under coherent monochromatic light, a translucent film produces a far-field (Fraunhofer) diffraction pattern whose amplitude is the two-dimensional Fourier transform of the film's amplitude transmittance  (the observed intensity being the squared modulus of that amplitude).

The Huyghens principle...

 Come back later, we're
 still working on this one...

One practical way to observe the "distant" monochromatic image of a translucent plane is to place it in the front focal plane of a lens:  the Fourier image then appears in the back focal plane of that lens.  From that image, an identical lens can be used to reconstruct the original light in its own focal plane.

Interestingly, that type of setup provides an easy way to observe the convolution of two images...  Just take a photographic picture of the Fourier transform of the first image by putting it in the focal plane of your camera and shining a broad laser beam through it.  Make a transparent slide from that picture.  This slide may then be used as a sort of  spatial filter... 
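That spatial-filter remark is just the convolution theorem in two dimensions:  multiplying Fourier transforms pointwise and transforming back amounts to convolving the original images.  A minimal discrete sketch  (with NumPy's 2D FFT and a deliberately slow direct convolution for comparison;  the random "images" are placeholders):

```python
import numpy as np

# Convolution theorem in 2D:  multiplying Fourier transforms pointwise, then
# transforming back, is the same as (circularly) convolving the two images.
def circular_conv(a, b):
    # Direct, slow circular convolution:  c[m,n] = sum over (p,q) of a[p,q]*b[m-p,n-q]
    c = np.zeros(a.shape, dtype=complex)
    for p in range(a.shape[0]):
        for q in range(a.shape[1]):
            c += a[p, q] * np.roll(np.roll(b, p, axis=0), q, axis=1)
    return c

rng = np.random.default_rng(1)
img = rng.random((32, 32))       # stand-in for the first image
psf = rng.random((32, 32))       # stand-in for the filtering mask

via_fft = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf))
print(np.allclose(via_fft, circular_conv(img, psf)))    # True
```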

 Come back later, we're
 still working on this one...


(2009-07-12)   Radon transform   (1917)
The relation between a tomographic scan and two-dimensional density.

This transform and its inverse were introduced in 1917 by the Austrian mathematician  Johann Radon (1887-1956).

In the plane, a  ray  (i.e., a nonoriented straight line)  is uniquely specified by:

  • The inclination  φ  of the [upward] normal unit vector  n   (0 ≤ φ < π).
  • The value  r  =  n·OM   (which is the same for any point M on the line).

The cartesian equation of such a line depends on the parameters  φ  and  r :

$$x \cos\varphi \;+\; y \sin\varphi \;=\; r$$

In the approximation of geometrical optics, the  optical density  of a (bounded) transmission medium along the path of a given "ray of light" is the  logarithm  of the ratio of the incoming light intensity to the outgoing intensity.

This is to say that  density = log (opacity)  since the opacity of a substrate is defined as the aforementioned ratio.  In that context, logarithms are usually understood to be decimal  (base 10)  logarithms.  Opacities are multiplicative whereas densities are simply additive along a path.  Because of that property, we use the latter exclusively.

This traditional vocabulary forces us to state that the light-blocking capability of a substrate around a given point is its  optical density per unit of length  which we denote by the symbol  μ.  It varies with location:

$$\mu \;=\; \mu(x,y) \;\ge\; 0$$

The (total) density along a straight ray specified by the parameters  r  and  φ  defined above can be expressed by a two-dimensional integral using a one-dimensional  δ  distribution:

$$R\mu\,(r,\varphi) \;=\; \iint \mu(x,y)\; \delta\big(r - x\cos\varphi - y\sin\varphi\big)\; dx\, dy$$
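On a pixel grid, that integral can be approximated by rotating the density map so that the rays of a given inclination become vertical, then summing along columns.  A rough sketch  (using SciPy's image rotation;  sign and orientation conventions are glossed over):

```python
import numpy as np
from scipy.ndimage import rotate

# Rough discrete Radon transform:  for each angle, rotate the density map mu so
# that the corresponding rays become vertical, then sum along each column.
def radon_sketch(mu, angles_deg, pixel_size=1.0):
    sinogram = []
    for phi in angles_deg:
        rotated = rotate(mu, phi, reshape=False, order=1)
        sinogram.append(rotated.sum(axis=0) * pixel_size)
    return np.array(sinogram)

mu = np.zeros((101, 101))
mu[30:50, 60:80] = 1.0                      # a small off-center square of unit density
sino = radon_sketch(mu, range(0, 180, 15))
print(sino.shape)                           # (12, 101) :  one projection per angle
```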

 Come back later, we're
 still working on this one...


(2008-11-02)   Competing Definitions of the Fourier Transform
Several  definitions of the Fourier transform have been used.

Only the above definition  makes the Fourier transform its own inverse.  (Well, technically, you could replace  i  by  -i  in that definition and still obtain an involution, but this amounts to switching right and left in the orientation of the complex plane.)

A few competing definitions are tabulated below as pairs of transforms which are inverses of each other.  The first of each pair is usually called the  direct  Fourier transform and the other one is the matching  inverse  Fourier transform, but the opposite convention can also be used.  The last column gives expressions in terms of the involutive Fourier transform  F  introduced above  (and listed first).

Competing Definitions of the Fourier Transform and its Inverse

1.   $g(s) \,=\, \int e^{\,2\pi i\, s x}\, f(x)^*\, dx$    (self-inverse)
      $g \,=\, F(f\,)$,     $f \,=\, F(g)$

2.   $g(\nu) \,=\, \int e^{-2\pi i\, \nu t}\, f(t)\, dt$,     $f(t) \,=\, \int e^{\,2\pi i\, \nu t}\, g(\nu)\, d\nu$
      $g \,=\, F(f\,)^*$,     $f \,=\, F(g^*)$

3.   $g(\omega) \,=\, \int e^{-i \omega t}\, f(t)\, dt$,     $f(t) \,=\, \dfrac{1}{2\pi} \int e^{\,i \omega t}\, g(\omega)\, d\omega$
      $g(\omega) \,=\, F(f\,)(\omega/2\pi)^*$,     $f \,=\, F(G)$  where  $G(s) = g(2\pi s)^*$

 Come back later, we're
 still working on this one...
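In the meantime, the second convention of the table can be exercised numerically:  compute  g  by the "direct" transform, then recover  f  from  g  through the inverse formula  (equivalently,  f = F(g*)).  A sketch by plain quadrature, with an arbitrary rapidly decreasing test function:

```python
import numpy as np

# Convention 2 of the table:  compute g by the "direct" transform, then recover f
# from g through the inverse formula (equivalently, f = F(g*)).
dt = 0.01
t = np.arange(-10, 10, dt)
f = np.exp(-np.pi * t**2) * (1 + np.sin(3 * t))       # a smooth, rapidly decreasing f

dnu = 0.01
nu = np.arange(-10, 10, dnu)
g = np.array([dt * np.sum(np.exp(-2j * np.pi * v * t) * f) for v in nu])

t_check = np.array([-1.0, 0.0, 0.5, 2.0])
f_back = np.array([dnu * np.sum(np.exp(2j * np.pi * tc * nu) * g) for tc in t_check])
print(np.round(f_back.real, 6))
print(np.round(np.exp(-np.pi * t_check**2) * (1 + np.sin(3 * t_check)), 6))   # matches
```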
