(2008-10-21)   Convolution Product
Tight restrictions on one factor allow great leeway for the other.
Whenever it makes sense, the following integral
(from −∞ to +∞)
is known as the value at point x of the convolution product
of f and g:

    f * g (x)  =  ∫ f (u) g (x−u) du
So defined, the convolution operator is commutative
(HINT: change the variable of integration
from u to w = x−u).  It's also associative:

    f * g * h (x)  =  ∫∫ f (u) g (v) h (x−u−v) du dv
More generally, a convolution product of several functions
at a certain value
x is the integral of their ordinary (pointwise) product over the (oriented)
hyperplane where the sum of their arguments is equal to the constant x.
This viewpoint makes the commutativity and associativity of convolution obvious.
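This algebra is easy to check numerically in the discrete setting, where np.convolve plays the role of the convolution integral (a small sketch; the three sample sequences are arbitrary):

```python
import numpy as np

# Discrete analog of the convolution integral: np.convolve computes
# (f * g)[n] = sum over u of f[u] g[n-u], on finite sequences ("full" mode).
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 4.0, 2.0])
h = np.array([2.0, 0.0, 1.0])

# Commutativity:  f * g == g * f
assert np.allclose(np.convolve(f, g), np.convolve(g, f))

# Associativity:  (f * g) * h == f * (g * h)
assert np.allclose(np.convolve(np.convolve(f, g), h),
                   np.convolve(f, np.convolve(g, h)))
```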
Loosely speaking, a key feature of convolution is that a
convolution product
of two functions is at least as nice a function
as either of its factors.
We shall soon discover in what sense a convolution product with a nice enough function
can be a well-defined function even when the other factor
is not a function at all...
This happens, for example, when that "other factor" is Dirac's
δ distribution (a unit spike at point zero)
which is, almost by definition,
the neutral element for the convolution operation:
    δ * f  =  f * δ  =  f
To be rigorous,
such equations require the extension of our domain of discourse from
numerical functions to distributions
(sometimes improperly called "generalized functions")
using the approach we are about to discuss.
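In the discrete analog, the neutral element is simply the unit spike at index zero (a sketch; the sample sequence is arbitrary):

```python
import numpy as np

f = np.array([3.0, 1.0, 4.0, 1.0, 5.0])

# The unit spike at index 0 is the discrete analog of Dirac's delta:
delta = np.array([1.0])
assert np.allclose(np.convolve(delta, f), f)   # delta * f = f
assert np.allclose(np.convolve(f, delta), f)   # f * delta = f

# A spike shifted to index k translates the sequence by k places:
shifted = np.array([0.0, 0.0, 1.0])
assert np.allclose(np.convolve(f, shifted)[2:], f)
```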
The basic idea occurred to my late teacher
Laurent Schwartz
(1915-2002) one night in 1944.
(He got a Fields Medal for that, in 1950.)
(2008-10-23)   Duality between (some) functions and functionals
An Hermitian product defined over dual spaces ("bras" and "kets").
Let's consider a pair f and g of
complex functions of a real variable.
In the same spirit as the above convolution product,
we may define an inner product (endowed with
Hermitian symmetry)
for some such pairs of functions
via the following definite integral (from −∞ to
+∞) whenever it makes sense:

    < f | g >  =  ∫ f (u)* g (u) du
In the above, f (u)*
is the complex conjugate of f (u).
Other introductions to the Theory of distributions usually
forgo that complex conjugation and present the above as a mere pairing
of two functions rather than a fullfledged Hermitian product.
They also use a comma rather than a vertical bar as a separator.
We use the latter notation
(Dirac's notation)
which is firmly linked with
Hermitian symmetry in the minds of all physicists
familiar with Dirac's bra-ket notation (pun intended).
Here, kets are well-behaved
test functions and bras,
which we shall define as the duals of kets,
are the new mathematical animals called distributions,
presented in the next article.
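Numerically, this Hermitian product and its symmetry can be sketched with a Riemann sum (the grid bounds and the two sample functions are arbitrary choices):

```python
import numpy as np

# Riemann-sum approximation of  <f|g> = ∫ f(u)* g(u) du  on a finite grid.
u = np.linspace(-10.0, 10.0, 20001)
du = u[1] - u[0]

f = np.exp(-u**2) * np.exp(1j * u)        # a complex-valued function
g = np.exp(-u**2 / 2) * np.exp(-2j * u)   # another one

def bracket(f, g):
    return np.sum(np.conj(f) * g) * du

# Hermitian symmetry:  <f|g> = <g|f>*
assert np.allclose(bracket(f, g), np.conj(bracket(g, f)))

# <f|f> is real and nonnegative (a squared norm):
assert abs(bracket(f, f).imag) < 1e-12 and bracket(f, f).real > 0
```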
(2008-10-23)   Theory of Distributions
The set of distributions is the
dual of the set of test functions.
A linear form is a linear function which associates
a scalar to a vector.
For a vector space of finite dimension,
the linear forms constitute another vector space of the same dimension
dubbed the dual of the original one.
On the other hand, the dual of a space of infinitely many dimensions need
not be isomorphic to it.
Actually, something strange happens among spaces of infinitely
many dimensions:  the smaller the space, the larger its dual...
Thus, loosely speaking, the dual of a very restricted
space of test functions is a very large space
of new mathematical objects called
distributions.
The name "distribution" comes from the fact that those objects provide
a rigorous basis for classical three-dimensional distributions of
electric charges, which need not be ordinary volume densities
but may also be "concentrated" on surfaces, lines or points.
The alternate term "generalized functions" is best shunned,
because distributions generalize measure densities rather than
pointwise functions.
(Two functions which differ in finitely many points
correspond to the same distribution, because they are
"physically" indistinguishable.)
The most restricted space of test functions
conceived by Laurent Schwartz
(in 1944) is that of the
smooth functions of compact support.
It is thus the space which yields, by duality, the most general type of
distributions.
Such distributions turn out to be
too general in the context of Fourier analysis,
because the Fourier transform of a function of compact support
is never itself a function of compact support (unless the function is identically zero).
So, Schwartz introduced a larger space of test functions,
stable under Fourier transform, whose dual consists of the so-called "tempered distributions",
for which the Fourier transform is well-defined by duality, as explained below.
The support of a function is the
closure of the set of all points for which it's nonzero.
Compactness is a very general topological concept
(a subset of a topological space is compact when every
open cover contains a finite subcover).
In Euclidean spaces of finitely many dimensions,
a set is compact if and only if it's both closed and bounded
(that's the Heine-Borel Theorem).
Thus, the support of a function of a real variable is compact
when that function is zero outside of a finite
interval.
Examples of smooth functions (i.e.,
infinitely differentiable functions) of compact support are
not immediately obvious.  Here is one:

    ξ (x)  =  exp ( −1 / (1−x²) )    if x is between −1 and +1
    ξ (x)  =  0                      elsewhere
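A quick numerical sanity check of this bump function (a sketch; the helper name `bump` is mine):

```python
import math

def bump(x):
    """Smooth function supported on [-1, 1]:  exp(-1/(1-x^2)) inside, 0 outside."""
    if abs(x) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

# Maximum value, reached at x = 0:
assert abs(bump(0.0) - math.exp(-1.0)) < 1e-15

# The function flattens out to 0 at the edges of its support...
assert bump(0.999) < 1e-200
# ...and vanishes identically outside of it:
assert bump(1.0) == 0.0 and bump(2.5) == 0.0
```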
(2008-10-22)   Schwartz Functions
The smooth functions whose derivatives are all rapidly decreasing.
A function of a vector x is said to be
rapidly decreasing when its product with any
power of ||x||
tends to zero as ||x|| tends to infinity.
Schwartz functions are smooth functions whose
partial derivatives of any order are all rapidly decreasing
in the above sense.
The set of all Schwartz functions is called the
Schwartz Space.
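As an illustration (a sketch, with loosely chosen thresholds): the Gaussian e^{−x²} is rapidly decreasing, while the smooth Lorentzian 1/(1+x²) is not, so only the former is a Schwartz function:

```python
import math

gaussian   = lambda x: math.exp(-x * x)       # rapidly decreasing (Schwartz)
lorentzian = lambda x: 1.0 / (1.0 + x * x)    # smooth, but not rapidly decreasing

# x^8 exp(-x^2) is already negligible at x = 20 ...
assert (20.0 ** 8) * gaussian(20.0) < 1e-100

# ... whereas x^4 / (1 + x^2) keeps growing with x:
assert (20.0 ** 4) * lorentzian(20.0) > 100.0
```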
(2008-10-22)   Tempered Distributions
Using Schwartz functions as test functions.
Not all distributions have a Fourier transform
but tempered ones do.
The Fourier transform of a tempered distribution is a tempered distribution.
Two functions which differ in only finitely many points
(or, more generally, over any set of vanishing
Lebesgue measure)
represent the same tempered distribution.
However, the explicit pointwise formulas giving the
inverse transform of the Fourier transform of a function,
if they yield a function at all, can only yield a function
verifying the following relation:

    f (x)  =  ½ [ f^{−}(x) + f^{+}(x) ]
If it's not continuous,
such a function only has discrete jump discontinuities
where its value is equal to the average of its lower and upper limits.
When a distribution can be represented by a function, it's wise
to equate it to the representative function which has the above property,
because it's the only one which can be retrieved pointwise
from its Fourier transform without using dubious ad hoc methods.
(2008-10-24)   The Involutive Definition of the Fourier Transform
The basic definition for functions extends to tempered distributions.
I am introducing a viewpoint
(the involutive convention) which makes the Fourier transform, so defined, its own
inverse (i.e., it's an involution).
This eliminates all the reciprocal coefficients and sign changes which
have hampered the style of generations of scholars.
(The complex conjugation which is part of our definition was
shunned in the many competing historical definitions of the Fourier transform.)
It is consistent with the Hermitian symmetry
imposed above on the "pairing"
of distributions and test functions
to blur the unnecessary distinction between that "pairing" and
a clean Hermitian product.
The Involutive Fourier Transform  F

    F ( f ) (s)  =  ∫ e^{2πisx} f (x)* dx

As usual,
the integral is understood to be a definite integral
from −∞ to +∞.
f (x)* is the complex conjugate of f (x).
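The claimed involution can be verified numerically by discretizing that integral on a finite grid (a rough Riemann-sum sketch; grid size and bounds are arbitrary choices):

```python
import numpy as np

# Shared grid for x and s, wide enough that the test function is negligible outside.
N = 512
x = np.linspace(-8.0, 8.0, N, endpoint=False)
dx = x[1] - x[0]

def F(f):
    """Discretized involutive transform:  F(f)(s) = sum of e^{2 pi i s x} f(x)* dx."""
    kernel = np.exp(2j * np.pi * np.outer(x, x))   # rows indexed by s, columns by x
    return (kernel @ np.conj(f)) * dx

f = np.exp(-np.pi * (x - 0.3) ** 2)   # an off-center Gaussian (not symmetric)

# Applying F twice recovers the original function:
assert np.max(np.abs(F(F(f)) - f)) < 1e-6
```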
Example:  Square function ( Π ) and sine cardinal (sinc)

The square function  Π (x) = ½ [ sgn(½+x) + sgn(½−x) ]
and the sampling function  f (s) = sinc(πs)
are Fourier transforms of each other.
Proof:
As the square function
Π vanishes outside the
interval
[−½, ½] and is equal to 1 on the interior
of that interval, the Fourier transform f
of Π is given by:

    f (s)  =  ∫_{−½}^{+½} e^{2πisx} dx
           =  ( e^{πis} − e^{−πis} ) / ( 2πi s )
           =  sin (πs) / (πs)
           =  sinc (πs)
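This computation is easy to confirm numerically (a sketch; np.sinc is NumPy's normalized sinc, sin(πs)/(πs)):

```python
import numpy as np

# Midpoint-rule approximation of  f(s) = integral over [-1/2, 1/2] of e^{2 pi i s x} dx.
n = 20000
x = (np.arange(n) + 0.5) / n - 0.5      # midpoints of the interval [-1/2, 1/2]
dx = 1.0 / n

s = np.array([0.0, 0.5, 1.0, 1.7, 3.0])
f = np.array([np.sum(np.exp(2j * np.pi * si * x)) * dx for si in s])

# np.sinc(s) = sin(pi s) / (pi s), which is the claimed transform:
assert np.max(np.abs(f - np.sinc(s))) < 1e-6
```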
(2008-11-02)   Competing Definitions of the Fourier Transform
Several definitions of the Fourier transform have been used.
Only the above definition makes the Fourier transform
its own inverse.  (Well, technically, you could replace
i by −i in that definition
and still obtain an involution, but this
amounts to switching right and left in the orientation of the
complex plane.)
A few competing definitions are tabulated below as pairs of transforms
which are inverses of each other.
The first of each pair is usually called the direct
Fourier transform and the other one is the matching inverse
Fourier transform, but the opposite convention can also be used.
The last column gives expressions in terms of the
involutive Fourier transform F
introduced above (and listed first).
Competing Definitions of the Fourier Transform and its Inverse

  1.   g (s)  =  ∫ e^{2πisx} f (x)* dx
       [ g = F ( f ) ,  f = F ( g ) ]

  2.   g (ν)  =  ∫ e^{−2πiνt} f (t) dt
       f (t)  =  ∫ e^{+2πiνt} g (ν) dν
       [ g = F ( f )* ,  f = F ( g* ) ]

  3.   g (ω)  =  ∫ e^{−iωt} f (t) dt
       f (t)  =  (1/2π) ∫ e^{+iωt} g (ω) dω
(2008-10-23)   Parseval's Theorem (1799)
In modern terms: The Fourier transform is unitary.
The Swiss mathematician Michel
Plancherel (1885-1967) is credited with the modern formulation of the theorem.
The core idea occurs in a statement about series published in 1799
by Antoine Parseval
(1755-1836).
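In the discrete setting, the unitarity asserted by the theorem can be checked directly with an orthonormal DFT (a sketch using NumPy's norm="ortho" convention):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=128) + 1j * rng.normal(size=128)

# With norm="ortho" the DFT matrix is unitary, so norms are preserved:
g = np.fft.fft(f, norm="ortho")
assert np.isclose(np.linalg.norm(f), np.linalg.norm(g))

# Inner products are preserved as well (Parseval/Plancherel):
h = rng.normal(size=128)
assert np.isclose(np.vdot(f, h), np.vdot(g, np.fft.fft(h, norm="ortho")))
```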
(2016-10-10)   Fourier transform of a delayed signal:
A time-delay corresponds to a phase shift in the frequency domain.
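For the discrete Fourier transform, the corresponding statement is the circular shift theorem: delaying a signal by k samples multiplies its n-th DFT coefficient by e^{−2πikn/N}. A sketch:

```python
import numpy as np

N, k = 64, 5
rng = np.random.default_rng(1)
f = rng.normal(size=N)

# Delay the signal by k samples (circularly), then compare spectra:
delayed = np.roll(f, k)
n = np.arange(N)
phase = np.exp(-2j * np.pi * k * n / N)

assert np.allclose(np.fft.fft(delayed), phase * np.fft.fft(f))
```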
(2008-10-24)   Some distributions and their Fourier transforms
(2008-10-24)   The Fourier transform of a Gaussian curve is Gaussian.
The unit Gaussian distribution is a fixed point of the Fourier involution.

    f (x)  =  e^{−πx²}   is its own Fourier transform.
Proof:
Let g be the Fourier transform of f.
We have:

    g (s)  =  ∫ e^{2πisx} e^{−πx²} dx

Differentiating both sides with respect to s, then integrating by parts, we obtain:

    g' (s)  =  ∫ 2πi x e^{2πisx} e^{−πx²} dx  =  −2πs g (s)

Since  g(0) = ∫ e^{−πx²} dx = 1  (the classical Gaussian integral),
the only solution of that differential equation is
g (s) = e^{−πs²},  which proves the claim.
(2008-10-24)   Two-dimensional Fourier transform.
Under coherent monochromatic light, a translucent film produces a distant light
whose intensity is the Fourier transform of the film's opacity.
One practical way to observe the "distant" monochromatic image of a translucent plane
is to put it at the focal point of a lens.
The Fourier image is observed at any chosen distance past the lens.
From that image, an identical lens can be used to reconstruct the original light
in its own focal plane.
Interestingly, that type of setup provides an easy way to observe the convolution
of two images...
Just take a photographic picture of the Fourier transform of the
first image by putting it in the focal plane of your camera and shining a broad
laser beam through it.  Make a transparent slide from that picture.
This slide may then be used as a sort of spatial filter...
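The principle behind such optical filtering is the convolution theorem: multiplying two Fourier transforms amounts to convolving the original images. A one-dimensional numerical sketch:

```python
import numpy as np

a = np.array([1.0, 2.0, 0.5, -1.0])
b = np.array([0.3, 0.7, 1.1])

# Zero-pad to the full convolution length, multiply the spectra,
# and transform back: this reproduces the direct convolution.
n = len(a) + len(b) - 1
via_fft = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real

assert np.allclose(via_fft, np.convolve(a, b))
```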
(2009-07-12)   Radon transform (1917)
The relation between a tomographic scan and a two-dimensional density.

This transform and its inverse were introduced in 1917 by the Austrian mathematician
Johann Radon (1887-1956).
In the plane, a ray
(a nonoriented straight line)
is uniquely specified by:
 - the inclination φ of the [upward] normal unit vector n  (0 ≤ φ < π);
 - the value r = n · OM  (the same for any point M on the line).
The cartesian equation of such a ray depends on
the parameters φ and r :

    x cos φ + y sin φ  =  r
In the approximation of geometrical optics,
the optical density of a bounded transmission
medium along a given "ray of light" is the logarithm
of the ratio of the incoming light intensity to the outgoing intensity.
That's to say that density = log (opacity) since the opacity of
a substrate is defined as the aforementioned ratio.
In that context, logarithms are usually understood to be decimal
(base 10) logarithms.
Opacities are multiplicative whereas densities are simply additive
along a path. We use the latter exclusively.
This traditional vocabulary forces us to state that
the light-blocking capability of a substrate around a given point is its
optical density per unit of length, which we denote by the symbol
μ.
It varies with location:

    μ  =  μ(x,y)  ≥  0
The (total) density along a straight ray specified by the parameters
r and
φ defined above can be expressed by a two-dimensional
integral using
a one-dimensional δ distribution:

    R_{μ} (r,φ)  =  ∫∫ μ(x,y) δ( r − x cos φ − y sin φ ) dx dy
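Equivalently, one may integrate μ along the ray itself, parametrized as (r cos φ − t sin φ, r sin φ + t cos φ). A numerical sketch for the indicator of the unit disk, whose Radon transform is the chord length 2√(1−r²) for |r| < 1 (grid sizes and the sample density are arbitrary choices):

```python
import numpy as np

def mu(x, y):
    """Sample density: the indicator function of the unit disk."""
    return np.where(x * x + y * y < 1.0, 1.0, 0.0)

def radon(r, phi, half_length=2.0, n=200000):
    """Integrate mu along the ray  x cos(phi) + y sin(phi) = r."""
    t = np.linspace(-half_length, half_length, n)
    dt = t[1] - t[0]
    x = r * np.cos(phi) - t * np.sin(phi)
    y = r * np.sin(phi) + t * np.cos(phi)
    return np.sum(mu(x, y)) * dt

# For the unit disk, R(r, phi) = 2 sqrt(1 - r^2), independently of phi:
for r, phi in [(0.0, 0.3), (0.5, 1.0), (0.9, 2.5)]:
    assert abs(radon(r, phi) - 2.0 * np.sqrt(1.0 - r * r)) < 1e-3
```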