Vocabulary :
For an objective function of several variables, a "stationary point"
is a set of values of those variables where all partial derivatives
of the function vanish.
As illustrated below, a local extremum must be
a stationary point but the converse need not hold.
Because of the inclusive meaning
assigned by default to the simplest mathematical terms
(which is the exact opposite of the exclusive meaning often
attributed to simple words in everyday language)
most mathematicians consider "stationary point" and "saddlepoint"
to be synonymous:
At a saddlepoint, the relevant quantity may
rise in some directions and fall in others
but it's not required to do so...
There might very well be an extremum there!
Some authors reserve the term "saddlepoint"
for non-extremal stationary points.
We prefer to call those proper
saddlepoints
(thus following normal mathematical nomenclature).
(2015-01-30) Optimizing without resorting to Calculus
A few beautiful special cases are best handled with ad hoc methods.
Without the general methods presented in the rest of this page,
every optimization problem appears to call for a brilliant insightful solution.
Here are a few classical such solutions, which ought to be remembered not only for
the sheer beauty of the arguments leading to them but also for the
simplicity with which the final results can be described.
Arguably, ad hoc methods are best suited for problems whose
solutions are simple.
Once guessed, such solutions may have beautiful justifications.
In general, however, correctly guessing the solution of a complicated optimization problem
is all but impossible and calculus is required for both the discovery and the justification
of the solutions. When calculus itself becomes difficult to apply,
the latest trend is to use numerical methods instead of symbolic manipulation
(this should only be a last resort, though).
Single-variable optimization: Laws of classical optics.
Fermat's principle.
Heron's problem.
Two-variable optimization: Torricelli points.
Connecting roads.
Can be viewed as a two-dimensional specialization of Plateau's laws.
Line optimization
The isoperimetric inequality states that
the circle is the planar figure whose enclosed area is maximal for a given
perimeter (and/or whose perimeter is minimal for a given area).
The following rigorous justification of this fact was given
in 1846 by the Swiss geometer Jakob Steiner (1796-1863).
(2007-09-23) Optimizing a smooth function f
of a single variable.
'Tis little more than finding where the function's
derivative vanishes.
For a point x in the interior
of the domain of f and a small enough value
of ε,
both points x-ε
and x+ε are in the allowed domain and
yield values of f which lie on both sides of
f (x) whenever f ' (x)
is nonzero. Therefore, there can be an extremum at point x
only when f ' (x) vanishes.
On the other hand, there's no such requirement for a point x
at the border of the allowed domain, because small displacements
of only one sign are allowed.
Away from the border, a saddlepoint x
(let's use that general term to indicate that
f ' (x) vanishes) will be
the location of a
minimum (resp. a
maximum)
when f '' (x)
is positive (resp. negative). If it's zero, further
analysis is needed to determine whether x
is the location of an extremum or not.
That discussion involving second-order
(or higher) derivatives is typical of what happens with
several variables when it comes to deciding whether a saddlepoint
is an actual extremum, as illustrated in the
next article.
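As a quick numeric illustration of the second-derivative test (using f(x) = x^3 - 3x, an example of ours, not the text's):

```python
# Classify the stationary points of f(x) = x**3 - 3*x (a made-up example).
# f'(x) = 3x^2 - 3 vanishes at x = -1 and x = +1; the sign of f'' decides.

def fprime(x):
    return 3 * x**2 - 3

def fsecond(x):
    return 6 * x

def classify(x, eps=1e-9):
    """Second-derivative test at a stationary point x (where f'(x) = 0)."""
    assert abs(fprime(x)) < eps
    s = fsecond(x)
    if s > eps:
        return "minimum"
    if s < -eps:
        return "maximum"
    return "inconclusive"   # further analysis needed, as for f(x) = x**4 at 0

print(classify(-1.0))   # maximum
print(classify(+1.0))   # minimum
```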
(E. M. of Wisconsin Rapids, WI. 2000-11-21)
Saddlepoints & Extrema
Determine the points where the [objective] function
z = 3x^3 + 3y^3 - 9xy
is maximized or minimized. [ Check for the
second-order condition. ]
A necessary (but not sufficient) condition for a smooth function
of two variables to be extremal
( "minimized" or "maximized" ) on an open domain
is to be at a saddlepoint
where both partial derivatives vanish.
In this case that means:
9x^2 - 9y  =  0     and     9y^2 - 9x  =  0
So, extrema of z can only occur when
(x,y) is either (0,0) or (1,1).
To see whether a local extremum actually occurs,
we must examine the secondorder behavior of the
function about each such saddlepoint
(in the rare case where the secondorder
variation vanishes, further analysis would be required).
Well, if the second-order partial derivatives are L, M and N, then
the second-order variation at point (x+dx, y+dy) is the quantity:
½ [ L (dx)^2 + 2 M dx dy + N (dy)^2 ]
We recognize the bracket as a quadratic expression whose sign remains the same
(regardless of the ratio dy/dx) if and only if
its (reduced) discriminant (M^2 - LN) is negative.
If it's positive, the point in question is not an extremum.
In our example, we have L = 18x, M = -9 and N = 18y.
Therefore:
M^2 - LN  =  81 (1 - 4xy)
At (0,0) this quantity
is positive (+81).
Thus, there's no extremum there.
On the other hand, at the point (1,1)
this quantity is negative (-243) so the point (1,1) does
correspond to the only local extremum of z.
Is this a maximum or a minimum? Well, just look at the sign of L
(which is always the same as the sign of N for an extremum).
If it's positive, surrounding points yield higher values and,
therefore, you've got a minimum. If it's negative you've got a maximum.
Here, L = 18, so a minimum is achieved at (1,1).
To summarize: z has only one local extremum;
it's a minimum of -3, reached at x=1 and y=1.
Does this mean we have an absolute minimum?
Certainly not!
(When x and y are negative,
a large enough magnitude of either will make
z fall below any preset threshold.)
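The arithmetic of the example can be double-checked mechanically; this short sketch recomputes the gradient, the discriminant M^2 - LN and the minimal value:

```python
# Double-check the example z = 3x^3 + 3y^3 - 9xy at its two stationary points.

def z(x, y):
    return 3 * x**3 + 3 * y**3 - 9 * x * y

def grad(x, y):
    return (9 * x**2 - 9 * y, 9 * y**2 - 9 * x)

def discriminant(x, y):
    L, M, N = 18 * x, -9, 18 * y        # second-order partial derivatives
    return M**2 - L * N                 # equals 81 * (1 - 4*x*y)

for p in [(0.0, 0.0), (1.0, 1.0)]:
    gx, gy = grad(*p)
    assert abs(gx) < 1e-12 and abs(gy) < 1e-12   # both points are stationary

print(discriminant(0, 0))   # 81   -> no extremum at (0,0)
print(discriminant(1, 1))   # -243 -> extremum at (1,1); L = 18 > 0: a minimum
print(z(1, 1))              # -3
```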
(2007-09-22) Unconstrained (or "absolute") saddlepoints
Saddlepoints of a function of several independent variables.
We're seeking saddlepoints (stationary points) of a
scalar objective function
M of n variables x_1 , ... , x_n
which we view as components of a vector x.
M ( x_1 , ... , x_n )  =  M ( x )
If those n variables are independent,
then a saddlepoint is obtained only at a point where
the differential form dM vanishes.
This is to say:
0  =  dM  =  grad M · dx
As this relation must hold for any
infinitesimal vectorial displacement
dx, such absolute saddlepoints of M
are thus characterized by:
grad M = 0
That vectorial relation is shorthand for
n separate scalar equations:
∂M / ∂x_i  =  0       ( for  i = 1 ... n )
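When stationary points cannot be found in closed form, a crude numerical search for a zero of the gradient will do. Here is a minimal gradient-descent sketch (step size and iteration count chosen arbitrarily) applied to the example of the previous article:

```python
# Locate a stationary point of z = 3x^3 + 3y^3 - 9xy by following -grad z.
# (Descent only finds the local minimum; it would move away from a
# non-extremal stationary point such as (0,0).)

def grad(x, y):
    return (9 * x**2 - 9 * y, 9 * y**2 - 9 * x)

def descend(x, y, rate=0.02, steps=500):
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - rate * gx, y - rate * gy
    return x, y

x, y = descend(1.5, 1.5)
print(round(x, 6), round(y, 6))   # converges to (1, 1), the minimum found above
```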
(2007-09-22) Lagrange Multipliers
Constrained optimization entails one Lagrange multiplier per constraint.
Instead of a set of independent variables
(as discussed above) we may have to deal with several
variables tied to each other by several constraints.
The method of choice was originally introduced by
Lagrange (before 1788)
as an elegant way to deal with mechanical constraints:
For example, the variables may be subject to a single constraint
(e.g., the Cartesian equation of a surface on which a point-mass is free to move):
E ( x_1 , ... , x_n )  =  constant
Whereas an unconstrained saddlepoint
of M was obtained when dM = 0,
a constrained saddlepoint is obtained when dM is proportional to dE.
More generally, when k functions
E_1 ... E_k are given to be constant,
a constrained saddlepoint of M is achieved when the
differential form dM is a linear
combination of the differentials of E_1 ... E_k.
In other words, there are k constants λ_i
(each known as the Lagrange multiplier
associated with the corresponding constraint) such that the following
function has an unconstrained (or absolute) saddlepoint:
M  -  λ_1 E_1  -  ...  -  λ_k E_k
(2007-09-23) The fattest cone of given base...
What apex minimizes the lateral area of a cone of given base and volume?
The volume of a cone is one third of the product of its base area by its
height. So, by imposing the cone's volume, we're actually imposing
the height of the cone and looking for an optimal
apex somewhere in a fixed plane parallel to the base...
(2007-09-23) Calculus of Variations
(cf. Lagrangian mechanics)
Euler-Lagrange equations hold whenever a path integral is stationary.
Historically,
the first problem of the type described below
was the brachistochrone problem
(find the shape of the curve of fastest descent)
which had been considered by Galileo
in 1638 and solved by Johann Bernoulli in 1696
(Bernoulli withheld his solution to turn the problem into a public challenge,
which was quickly met by several prestigious mathematicians).
The subject was investigated by Euler in 1744 and
by Lagrange in 1760.
It was named calculus of variations by Euler in 1766
and made fully rigorous by Weierstrass
before 1890.
Let L be a smooth enough function
of 2n+1 variables:
L  =  L ( q_1 , q_2 ... q_n , v_1 , v_2 ... v_n , t )
We assume that the first n variables (q) are actually functions of the
last one (t) and also that the subsequent variables (v)
are simply their respective derivatives with respect to t:
v_i (t)  =  d q_i (t) / dt
This makes L a function of t alone and we may consider
the following integral S (called the "action" of L )
for prescribed configurations at both extremities.
In this context, a configuration is defined as a complete set
of values for the q-variables only (irrespective of what the v-variables may be)
so, what we are really considering now are prescribed fixed values of
all 2n quantities q_i (a) and q_i (b).
S  =  ∫_a^b  L dt
The fundamental problem of the calculus of variations is to
find what local conditions make S stationary,
for optimal functions q_i.
Namely:
Euler-Lagrange equations
∂L / ∂q_i   =   d/dt ( ∂L / ∂v_i )
Proof :
From a set of optimal functions q_i
and arbitrary (sufficiently smooth) functions
h_i which vanish at both extremities
a and b, let's define the following family of
functions, depending on one parameter ε :
Q_i (t)  =  q_i (t)  +  ε h_i (t)
V_i (t)  =  v_i (t)  +  ε d h_i (t) / dt
Those yield a value S(ε) of the action which must be
stationary at ε = 0
(since the functions q_i
are optimal). Thus, the derivative of S(ε)
with respect to ε vanishes
at ε = 0, which is to say:
0  =  ∫_a^b  Σ_i  [ h_i ∂L/∂q_i  +  (d h_i /dt) ∂L/∂v_i ]  dt
Since every h_i vanishes at both extremities
a and b, we may
integrate by parts
the second term of the square bracket to obtain:
0  =  ∫_a^b  Σ_i  h_i [ ∂L/∂q_i  -  d/dt ( ∂L/∂v_i ) ]  dt
As the h_i are arbitrary, the square bracket must vanish
everywhere, for every i.
(If it didn't, the equality would be violated by some choice
of h_i vanishing wherever the square bracket
doesn't have a prescribed sign.)
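As a concrete check (using the textbook harmonic-oscillator Lagrangian L = v^2/2 - q^2/2, our own example), finite differences confirm that q(t) = sin t satisfies the Euler-Lagrange equation, which here reads -q = dv/dt:

```python
import math

# Euler-Lagrange check for L(q, v) = v**2/2 - q**2/2 along q(t) = sin(t).
# Here dL/dq = -q and d/dt(dL/dv) = dv/dt = q''(t), so the equation is -q = q''.

h = 1e-5
for t in [0.3, 1.0, 2.5]:
    q = math.sin(t)
    qddot = (math.sin(t + h) - 2 * math.sin(t) + math.sin(t - h)) / h**2
    assert abs(-q - qddot) < 1e-4        # the Euler-Lagrange equation holds
print("Euler-Lagrange equation verified along q(t) = sin t")
```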
Theoretical and Practical Examples :
The above is most commonly used as the basis for
variational mechanics and
related aspects of theoretical physics based on a
principle of least action.
However, it's also the correct answer to more practical concerns:
On 2008-10-27,
Bill Swearer
wrote: [edited summary]
As a pilot, I've always been interested in writing a proper piece of
flight-planning software to optimize the plane's path with regard to time, fuel,
or any combination thereof. [...]
I've always felt the professionally-provided solutions were crude approximations that do
not take into account the full range of possibilities, especially over long distances,
as one crosses varying jet streams at different altitudes, etc. [...]
What is the correct mathematical approach?
Thanks a lot. Bill Swearer
Well, just express carefully
the local cost function
(L) as a function of the position
of the aircraft and of the related derivatives
(to a good approximation,
the latter are only needed to compute horizontal speed).
Predicted meteorological conditions (changing with time
throughout space) can be used for best planning.
The Euler-Lagrange equations then tell the pilot what
to do at all times.
(2009-07-05) A Proof of Noether's Theorem (1915)
Proving Noether's theorem for continuous symmetries.
A slight modification of the above proof yields a straight
derivation of one of the
greatest statements of mathematical physics. Let's just do it...
Suppose that, for specific functions
h_i , a symmetry exists which leaves L
unchanged (to first-order variations of
ε about 0)
when q_i is replaced by:
Q_i  =  q_i  +  ε h_i
Formally, this leads to a situation similar to the previous one,
since we still know that the derivative of
S(ε) vanishes at
ε = 0
(albeit for very different reasons).
However, the h functions need not vanish at the extremities
a and b.
So, an extra "integrated term" appears
in the following expression of the derivative of
S(ε),
which results from our integration by parts:
0  =  [ Σ_i  h_i ∂L/∂v_i ]_a^b   +   ∫_a^b  Σ_i  h_i [ ∂L/∂q_i  -  d/dt ( ∂L/∂v_i ) ]  dt
Now, as the integral still vanishes (because
the previously established Euler-Lagrange equations make every
square bracket vanish) the extra term must be zero as well.
This means that the following quantity doesn't change over time:
Σ_i  h_i ∂L/∂v_i   =   constant
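As a sanity check (with the two-dimensional harmonic oscillator L = (v1^2 + v2^2)/2 - (q1^2 + q2^2)/2, an example of ours): rotations leave L unchanged, with h = (-q2, q1), and the conserved quantity Σ h_i ∂L/∂v_i = -q2 v1 + q1 v2 is the angular momentum, indeed constant along the solution q = (cos t, sin t):

```python
import math

# Noether check: for L = (v1**2 + v2**2)/2 - (q1**2 + q2**2)/2, the rotation
# symmetry h = (-q2, q1) yields the conserved quantity
#   sum_i h_i * dL/dv_i  =  -q2*v1 + q1*v2   (the angular momentum).

def conserved(t):
    q1, q2 = math.cos(t), math.sin(t)     # a solution of the equations of motion
    v1, v2 = -math.sin(t), math.cos(t)    # its time derivative
    return -q2 * v1 + q1 * v2

values = [conserved(t) for t in [0.0, 0.7, 1.9, 3.1]]
print(values)   # each value is (numerically) equal to 1
```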
(2012-04-02) The geodesics of a two-dimensional surface.
Paths of least length.
On a 2D surface M (u,v) the infinitesimal
distance corresponding to a displacement (du,dv) is given by
the first fundamental quadratic form:
F_1 (du,dv)  =  (dM)^2  =  E (du)^2 + 2 F du dv + G (dv)^2
The problem of minimizing the length of the path between two points on that surface is
a standard example of the calculus of variations.
Let's introduce notations compatible with the above:
q_1 = u       q_2 = v       v_1 = du/dt       v_2 = dv/dt
L  =  [ F_1 (v_1 , v_2 ) ]^½  =  [ E v_1^2 + 2 F v_1 v_2 + G v_2^2 ]^½
(2010-07-06) The brachistochrone curve is a cycloid (1696)
The curve of fastest descent in a uniform gravitational field.
We are now presenting the brachistochrone problem as a simple application
of the calculus of variations. Historically, the relation is reversed,
as this problem actually helped define the need for the latter, which was formalized
many years after the brachistochrone problem was solved.
In June 1696,
Johann Bernoulli (1667-1748) challenged
the readers of Acta Eruditorum
with his famous brachistochrone problem:
Along what curve would a pointmass fall from one prescribed point to another lower
one (not directly underneath) in the least possible time?
Johann Bernoulli's own solution was published in the same journal
a year later, along with 4 of the 5 solutions sent by famous contributors
(the solution of Guillaume
de l'Hôpital was only published in 1988).
We shall use a coordinate system where the origin is the starting point.
We choose to orient the y-axis downward so that y is positive for
all target points. Let's call u = dy/dx the slope of the trajectory.
In a gravitational field g, the conservation of mechanical energy for a mass
m dropped from the origin at zero speed tells us that:
m g y  =  ½ m [ (dx/dt)^2 + (dy/dt)^2 ]
Therefore,   (2 g y)^½ dt  =  (1 + u^2)^½ dx
To minimize the descent time, we seek to minimize the path integral of dt or, equivalently,
to minimize the integral of  (2g)^½ dt  =  L dx  where:
L  =  L (y,u,x)  =  (1 + u^2)^½ / y^½
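Since this L does not depend explicitly on x, the Beltrami identity L - u ∂L/∂u = constant applies, and it boils down to y (1 + u^2) = constant. The sketch below verifies numerically that the cycloid x = R(θ - sin θ), y = R(1 - cos θ) satisfies that invariant (the constant being 2R):

```python
import math

# Check that the cycloid x = R(t - sin t), y = R(1 - cos t) satisfies
# y * (1 + u**2) = constant (= 2R), where u = dy/dx = sin t / (1 - cos t).

R = 1.0

def invariant(t):
    y = R * (1 - math.cos(t))
    u = math.sin(t) / (1 - math.cos(t))
    return y * (1 + u**2)

values = [invariant(t) for t in [0.5, 1.0, 2.0, 3.0]]
print(values)   # each value equals 2R = 2.0 (up to rounding)
```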
(2008-11-10) The Isoperimetric Inequality
Among planar loops of unit length, the circle encloses the largest area.
The surface area S enclosed by a closed planar curve
of given perimeter P is largest in the case of a circle.
This ancient result can be expressed by the following relation,
known as the isoperimetric inequality:
S  ≤  P^2 / 4π
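The inequality is easily tested against regular polygons, whose enclosed area at perimeter P is A = P^2 / (4 n tan(π/n)):

```python
import math

# Isoperimetric check: a regular n-gon of perimeter P has area
# A = P**2 / (4 * n * tan(pi/n)), always below the circle's bound P**2 / (4*pi).

P = 1.0
bound = P**2 / (4 * math.pi)
for n in (3, 4, 6, 12, 100, 10000):
    A = P**2 / (4 * n * math.tan(math.pi / n))
    assert A < bound                     # strict inequality for every polygon
    print(n, round(A, 8))                # approaches 1/(4*pi) = 0.07957747... as n grows
```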
The three-dimensional equivalent of the isoperimetric inequality says that the
closed surface of area S which encloses the largest volume
V is a sphere:
V^2  ≤  S^3 / 36π
The above can be generalized to n dimensions:
No hypersurface of hyperarea S encloses a larger
hypervolume V than the hypersphere.
This makes the relations given in the
oldest Numericana article yield the corresponding n-dimensional bound.
(2008-11-10) Plateau's Problem
The surface of least area with a given border.
The Plateau rules (1873)
state that the solutions are smooth surfaces of constant mean curvature with only
two possible types of singularities:
lines where 3 such smooth surfaces meet at
120° angles and isolated points
where 4 of those lines meet in a regular tetrahedral configuration.
The first published proof of those rules is due to Jean Taylor (1976).
(2008-11-18) Borderless embedded surfaces of minimal area
Minimal surfaces without edges or self-intersections in ordinary space.
Such surfaces are technically known as
complete embedded minimal surfaces.
Before 1982, only four examples of these were known:
The plane.
The catenoid (Euler, 1741).
In 1776, Meusnier remarked that the catenoid has zero
mean curvature.
The helicoid
(Meusnier, 1776).
Catalan remarked that the plane and the helicoid are the only minimal
surfaces which are ruled.
A fourth example, found by
Riemann in 1860, consists of
infinitely many horizontal planes with slanted tunnels between adjacent pairs.
In 1982,
Celso
J. Costa
(then a graduate student in Brazil) stumbled upon a
minimal surface topologically equivalent to a torus with three holes.
Costa suspected that his surface had no self-intersections but,
at first, couldn't prove it...
David Hoffman teamed up with
William H. Meeks III
and Jim Hoffman
(then a graduate student) to produce a computer visualization of the
(then a graduate student) to produce a computer visualization of the
stunning symmetries of the strange surface described by Costa
(which contains two straight lines meeting at a right angle).
Dividing Costa's surface into 8 congruent pieces,
they proved, indeed, that it never intersects itself!
Loosely speaking, Costa's surface features two complementary
pairs of "tunnels" through the
equatorial plane which connect the inside of the catenoid's northern half
to the outside of its southern half, or vice versa.
Subsequently, Dave Hoffman and Bill Meeks discovered that Costa's surface is just the simplest
member of a whole family of complete minimal
embedded surfaces constructed in the same way but with more "tunnels"...
Applied to helicoids, the idea yields yet another family of complete minimal
embedded surfaces where tunnels provide shortcuts between
slices of space which are otherwise connected only by circling around the helicoid's axis.
Denis Viala (2008-02-28) Connect the dots, without crossings...
Given n blue dots and n red dots in the plane (no three aligned)
there are always n disjoint segments with blue and red extremities.
HINT: This little puzzle appears among
optimization problems... Proof.
(2008-11-09) Shortest Way to Connect Three Points
How to connect three dots with lines of least total length?
The three vertices of a triangle ABC
can be connected using just two of its three sides.
If the angle between those two sides is 120°
or larger, this is the most economical way to do so.
On the other hand, for triangles where the largest inside angle is
less than 120°, the most economical connecting network consists of
three straight lines (OA, OB, OC)
connecting the vertices to some optimal center (O)
where the three lines
OA, OB and OC
meet at 120° angles.
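The optimal center can be found numerically by minimizing the total road length; the sketch below (with an arbitrary sample triangle of ours) descends the gradient and checks the three 120° angles:

```python
import math

# Find the Fermat (Torricelli) point of a sample triangle by gradient descent
# on the total distance |OA| + |OB| + |OC|, then verify the 120-degree angles.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def grad(o):
    """Gradient of |OA|+|OB|+|OC|: sum of the unit vectors from A, B, C to O."""
    gx = gy = 0.0
    for p in (A, B, C):
        d = math.dist(o, p)
        gx += (o[0] - p[0]) / d
        gy += (o[1] - p[1]) / d
    return gx, gy

o = (1.5, 1.0)                          # starting guess inside the triangle
for _ in range(20000):
    gx, gy = grad(o)
    o = (o[0] - 0.001 * gx, o[1] - 0.001 * gy)

# At the optimum the three unit vectors toward A, B, C sum to zero,
# which forces pairwise 120-degree angles between OA, OB and OC.
units = [((p[0] - o[0]) / math.dist(o, p), (p[1] - o[1]) / math.dist(o, p))
         for p in (A, B, C)]
angles = [math.degrees(math.acos(u[0] * v[0] + u[1] * v[1]))
          for u, v in [(units[0], units[1]), (units[1], units[2]), (units[2], units[0])]]
print([round(a, 1) for a in angles])    # each angle is 120.0 (up to rounding)
```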
(2008-11-09) The Honeycomb Theorem
In 1999, Thomas Hales finally proved an "obvious" fact.
Nobody questioned
the celebrated Honeycomb conjecture
before 1993, when Phelan and Weaire showed its
3D counterpart (Kelvin's conjecture)
to be false!
The issue was settled in
1999
with a proper proof by Thomas C. Hales :
Thus, the 2D Honeycomb conjecture
is now a theorem! Here it is:
Honeycomb Theorem
In any partition of the plane into tiles of unit area,
the average perimeter per tile is at least that of the regular hexagon
of unit area.
(Loosely speaking, the regular honeycomb tiling
is the most economical one.)
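For unit area, a regular n-gon has perimeter 2 √(n tan(π/n)); among the three regular polygons which tile the plane, the hexagon is indeed the cheapest:

```python
import math

# Perimeter of a regular n-gon enclosing unit area: P = 2 * sqrt(n * tan(pi/n)).
def perimeter(n):
    return 2 * math.sqrt(n * math.tan(math.pi / n))

for n in (3, 4, 6):                      # the only regular polygons that tile the plane
    print(n, round(perimeter(n), 4))     # 3 -> 4.559,  4 -> 4.0,  6 -> 3.7224

print(round(2 * math.sqrt(math.pi), 4))  # 3.5449 for the circle, which cannot tile
```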
The first proof by
Thomas C. Hales
(1999)
was surprisingly intricate.
It was crucial to get rid of the
convexity restriction which earlier authors had reluctantly imposed.
(The isoperimetric inequality
implies that the boundaries between
optimal shapes are either circular arcs or straight lines.
Both sides of such a boundary are convex only in the latter case.)
The reason curved boundaries are ruled out
is not at all obvious; it is false
in the 3D case.
(2008-11-09) On Minimal Foams
Kelvin's problem & counterexamples to Kelvin's conjecture.
The Kelvin problem asks for a partition of space into cells of equal volume and minimal area.
It has been described by
Simon Lhuilier
(1750-1840) and others as one of the most difficult problems in geometry.
The uniform truncated octahedron
can tile space without voids.
Lord Kelvin (1824-1907)
observed that a partition of space into
uniform truncated octahedra
was quite economical.
In 1887, he conjectured that the optimal foam would be obtained by curving the faces
of that tetradecahedron in accordance with Plateau's laws.
Kelvin's foam thus consists of a
single type of cell whose vertices are
exactly those of a uniform truncated octahedron.
The square faces are planar figures with curved edges.
The hexagonal faces are
monkey saddles
with straight [great] diagonals.
In 1993, Denis Weaire and Robert Phelan disproved Kelvin's conjecture
by describing a structure
which is more economical than Kelvin's:
The A15 Weaire-Phelan
structure
is one example of the so-called Frank-Kasper foams.
It consists of two distinct types of cells having
either pentagonal or hexagonal faces;
one is a squashed dodecahedron, the other is a tetradecahedron.
In a draft
(2008-10-25)
of a paper published on
2009-06-02,
Ruggero Gabbrielli presents
a new type of unit-cell foam (featuring some quadrilateral
faces) whose cost (5.303) is intermediate between
the Weaire-Phelan foam (5.288) and Kelvin's foam (5.306).
Gabbrielli has dubbed this structure "P42a".
Its repeating pattern consists of 14 curved
"polyhedra" of 4 different types
(10 tetradecahedra and
4 tridecahedra, for
an average
of 96/7 = 13.714285... faces per cell).
The above pictures for the A15 and P42a foams were
obtained by Ruggero Gabbrielli using the
GAVROG 3dt visualization software.
Gabbrielli has also posted several related
interactive
3D visualizations
under Java.
The 1993 optimization on A15 and Gabbrielli's recent research both used Ken Brakke's
Surface Evolver
(with specific code provided by John M. Sullivan, in the latter case at least).
At this writing, the A15 foam described by Weaire and Phelan in
1993 is still conjectured to be the answer to Kelvin's problem.
It is about 0.3% less costly than Kelvin's original foam.
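The flat (uncurved) truncated octahedron is easy to cost directly: with edge a, its volume is 8√2 a^3 and its surface area is (6 + 12√3) a^2, so its area at unit volume is about 5.3147, slightly above the 5.306 quoted above for Kelvin's relaxed (curved) foam:

```python
import math

# Surface area at unit volume for the flat truncated octahedron:
# volume = 8*sqrt(2)*a**3 and area = (6 + 12*sqrt(3))*a**2 for edge length a.
area   = 6 + 12 * math.sqrt(3)
volume = 8 * math.sqrt(2)
cost   = area / volume ** (2 / 3)      # scale-invariant: area at unit volume
print(round(cost, 4))                  # 5.3147 (vs 5.306 for Kelvin's curved foam)
```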
The WeairePhelan foam was used as the basis for the design of the celebrated
Watercube
of the 2008 Olympics
(Beijing National Aquatics Center) which is the largest structure
covered with ETFE film.