It is not because things are difficult that we do not dare,
it is because we do not dare that they are difficult.
Lucius Annæus
Seneca (5 BC - AD 65)
The famous problems discussed below are either still unsolved
or remained open for quite a while,
in spite of considerable scrutiny.
Some entail substantial cash rewards.
Open Problems (or recently resolved
long-standing ones)
(2002-12-07) RH = Riemann's Hypothesis (August 1859)
The real part of all nontrivial zeroes of Riemann's
zeta function ζ is ½.
Without doubt it would be desirable to have a rigorous proof of this proposition.
However, I have left this research aside for the time being after some quick unsuccessful attempts,
because it appears to be unnecessary for the immediate goal of my study...
Bernhard Riemann, 1859.
As he solved the Basel problem
in 1735, Leonhard Euler (1707-1783)
introduced the following series, now universally called Zeta,
focusing on integral values of s > 1 (for which the series converges):
ζ(s)   =   1 + 1/2^{s} + 1/3^{s} + 1/4^{s} + ...
Using the
fundamental theorem of arithmetic
(which states that every positive integer has a
unique factorization into primes)
Euler observed (in 1737) that
the term of rank n in the above series is obtained once and only once in the expansion
of the product of the geometric series
1 + p^{-s} + p^{-2s} + p^{-3s} + ...   =   ( 1 - p^{-s} )^{-1}
for any set of primes containing, at least, every prime divisor p of n.
So, the sum of the whole series is equal to the product of all such geometric series for all
primes p. This yields the following celebrated
Euler product formula :
ζ(s)   =   ∏_{ p prime }  ( 1 - p^{-s} )^{-1}
That formula characterizes the set of all prime numbers
(it isn't true for any other set of integers).
It's been the usual starting point of modern attacks on the set of primes,
including the simultaneous proofs (1896) of the
prime number theorem (PNT)
by Jacques Hadamard (1865-1963) and
Charles de la Vallée-Poussin (1866-1962).
Both the series and the infinite product converge
for any complex number s
in the half-plane where the real part of s is greater than 1
(Re(s) > 1).
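As an illustrative sketch (not part of the original text), the truncated series and the truncated Euler product can be compared numerically at s = 2, where Euler famously found the sum to be π²/6. The helper name `primes_upto` is a hypothetical label for a plain sieve:

```python
from math import pi

def primes_upto(n):
    """Sieve of Eratosthenes: all primes not exceeding n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(n + 1) if sieve[p]]

s = 2
partial_sum = sum(n ** -s for n in range(1, 100001))   # truncated Zeta series
partial_prod = 1.0
for p in primes_upto(1000):                            # truncated Euler product
    partial_prod *= 1 / (1 - p ** -s)

exact = pi ** 2 / 6   # zeta(2), Euler's solution of the Basel problem
```

Both truncations approach π²/6 ≈ 1.6449 from below, since each omits positive contributions.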
Consider now the related Dirichlet
eta function, defined by the following alternating series,
which converges on the right half-plane (Re(s) > 0):
η(s)   =   1 - 1/2^{s} + 1/3^{s} - 1/4^{s} + ...
This is clearly equal to
ζ(s) - 2 ζ(s) / 2^{s}.  Therefore:
ζ(s)   =   η(s) / (1 - 2^{1-s})
Except where 2^{1-s} = 1 (in particular at s = 1), this provides directly an analytic expression for
ζ(s) in terms of an alternating series
which converges when Re(s) > 0. Nice...
Dirichlet
used the above to show that ζ
has a simple pole of residue
1 at s = 1. (HINT: The numerator and the
denominator are respectively asymptotic to
Log 2 and
(s-1) Log 2 as s goes to 1.)
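As a quick numerical sketch (an illustration added here, with hypothetical helper names), the eta series and the conversion factor above do recover Zeta inside the half-plane of convergence, e.g. at s = 2 where ζ(2) = π²/6:

```python
from math import pi

def eta(s, terms=100000):
    """Dirichlet eta: alternating series, convergent for Re(s) > 0."""
    return sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))

def zeta_via_eta(s):
    """zeta(s) = eta(s) / (1 - 2^(1-s)), valid where 2^(1-s) != 1."""
    return eta(s) / (1 - 2 ** (1 - s))

approx = zeta_via_eta(2)   # should be very close to pi^2 / 6
```

The truncation error of an alternating series is bounded by its first omitted term, so 100000 terms give about 10 correct digits here.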
In 1859,
Bernhard Riemann (1826-1866) showed that
the Zeta function can actually be extended to the entire complex plane,
except at the pole s = 1,
by analytic continuation
(a concept invented by Weierstrass
in 1842), establishing in the process a relation between values at (1-s) and
s (which had been conjectured by Euler in 1749, in an equivalent form):
π^{-½ s}  Γ( s/2 )  ζ(s)   =   π^{-½(1-s)}  Γ( (1-s)/2 )  ζ(1-s)
Using the known properties of the Gamma function
(whose reciprocal 1/Γ
is an entire function
with zeroes at all the nonpositive integers) this relation confirms the existence of
a simple pole for the Zeta function at
s = 1 and reveals its so-called
trivial zeroes at -2, -4, -6, etc.
Using the regularity of the Zeta function for Re(s) > 1
(due to the aforementioned convergence of the defining series in that domain)
this same relation shows that nontrivial zeroes can only exist
in the so-called critical strip (0 ≤ Re(s) ≤1).
They could thus a priori be of two different types:
Pairs of zeroes { s, s* } on the critical line
(i.e., Re(s) = ½).
Quadruplets of zeroes { s, s*, 1-s, 1-s* } off that critical line.
The famous Riemann Hypothesis (RH) is
the conjecture, formulated by
Riemann in 1859,
that there are no nontrivial zeroes of the latter type:
"RH" The Riemann Hypothesis
(1859)
All nontrivial zeroes of the Zeta function are on the critical line :
Re(s) = ½
Zeta shares its nontrivial zeroes with the above convergent eta series (Re(s) > 0).
The imaginary parts of the smallest zeroes of the Zeta function
located in the upper half-plane begin
14.134725, 21.022040, 25.010858, 30.424876, 32.935062...
(conjugate zeroes exist in the lower half-plane,
whose imaginary parts are simply the opposites of these).
The gigantic ZetaGrid distributed project
of Sebastian Wedeniwski
managed to compute more than 10 trillion
zeros over the course of its lifetime (2001-2005) but they were scooped
by Xavier
Gourdon and Patrick Demichel who achieved that same goal earlier with modest means by using
superior software based on an
algorithm
devised in 1988 by
Andrew M. Odlyzko (1949-) and
Arnold Schönhage (1934-)...
Many statements have been shown to hold if RH is assumed to be true, and
a number of them are known to imply RH, so they are actually equivalent to it.
Several nice statements have been made which seem
true and are simpler and stronger than
RH (each implies RH but the converse need not be true).
This includes a conjecture made in the doctoral dissertation of
Sebastian Wedeniwski (the aforementioned mastermind of
ZetaGrid, 2001-2005).
The following song, "Where are the zeros of zeta of s?" (attributed to Tom Apostol, to the tune of "Sweet Betsy from Pike"), summarizes the state of the art:
Where are the zeros of zeta of s?
G.F.B. Riemann has made a good guess:
" They're all on the critical line," said he.
" And their density's one over two pi Log t. "
This statement of Riemann's has been like a trigger
And many good men, with vim and with vigor,
Have attempted to find, with mathematical rigor,
what happens to zeta as mod t gets bigger.
The names of Landau and Bohr and of Cramer,
Hardy and Littlewood, and Titchmarsh are there.
In spite of their efforts and skill and finesse,
in locating the zeros there's been no success.
In 1914, G.H. Hardy did find,
An infinite number that lay on the line,
His theorem however won't rule out the case,
That there might be a zero at some other place.
Let P be the function pi minus Li,
The order of P is not known for x high,
If square root of x times log x we could show,
Then Riemann's conjecture would surely be so.
Related to this is another enigma,
Concerning the Lindelöf function mu(sigma)
Which measures the growth in the critical strip,
On the number of zeros it gives us a grip.
But nobody knows how this function behaves.
Convexity tells us it can have no waves,
Lindelöf said that the shape of its graph,
Is constant when sigma is more than one-half.
Oh, where are the zeros of zeta of s?
We must know exactly, we cannot just guess,
In order to strengthen the prime number theorem,
The integral's contour must not get too near 'em.
André Weil has bettered old Riemann's fine guess,
by using a fancier zeta of s.
He proves that the zeros are where they should be,
provided the characteristic is p.
There's a moral to draw from this sad tale of woe.
That every young genius among you should know:
If you tackle a problem and seem to get stuck,
just take it mod p and you'll have better luck.
What fraction of zeros on the line will be found,
when mod t is kept below some given bound?
Does the fraction, whatever, stay bounded below,
as the bound on mod t is permitted to grow?
The efforts of Selberg did finally banish,
All fears that the fraction might possibly vanish.
It stays bounded below, which is just as it should,
But the bound he determined was not very good.
Norm Levinson managed to show, better yet.
At two-to-one odds it would be a good bet,
If over a zero you happen to trip,
it would be on the line and not just in the strip.
Levinson tried in a classical way,
Weil brought modular means into play.
Atiyah then left and Paul Cohen quit,
So now there's no proof at all that will fit.
But now we must study this matter anew,
Serre points out manifold things it makes true.
A medal might be the reward in this quest,
For Riemann's conjecture is surely the best.
Nobody knows for sure whether the sequence of twin primes is infinite,
although everybody is guessing that it is.
This statement is one of the two oldest
unsolved problems in mathematics (the other one pertains to
odd perfect numbers).
In 1966, Chen Jingrun (1933-1996)
proved that there are infinitely many primes p
such that p+2 is either prime or semiprime (in which case p
is called a Chen prime). A semiprime is defined as
the product of two primes.
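The definitions above can be sketched in a few lines of code; this is an illustration added here (the helper names `is_semiprime` and `chen` are ours), listing the Chen primes below 50:

```python
def is_prime(n):
    """Trial division, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_semiprime(n):
    """True when n is a product of exactly two primes (not necessarily distinct)."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return is_prime(d) and is_prime(n // d)
    return False

# Chen primes p: p + 2 is prime or semiprime
chen = [p for p in range(2, 50)
        if is_prime(p) and (is_prime(p + 2) or is_semiprime(p + 2))]
```

Note that 43 is absent from the list: 45 = 3 × 3 × 5 has three prime factors, so it is neither prime nor semiprime.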
The Twin Primes Conjecture says that the
following is true for K = 2:
There are infinitely many pairs of primes whose difference is K.
It's widely believed that the above statement holds for
any even integer K.
Polignac's conjecture (1849)
is the belief that there are infinitely many pairs of
consecutive primes differing by K, for any even K.
The weaker statement that the above holds for at least one nonzero
value of K is equivalent to saying that the difference
between consecutive primes doesn't tend to infinity.
This was only proved recently:
In April 2013,
Yitang Zhang
established an upper-bound of
70 million for the least K verifying the above.
Zhang was quoted as saying that the methods in
his 55-page paper
could be perfected to pull this upper-bound downward.
Quickly, Terry Tao initiated a
polymath
project which reduced the upper-bound to 4680,
using Zhang's own methods.
In November 2013,
James Maynard
(b. 1987) proposed a streamlined approach which gave directly
an upper-bound of 600.
He joined forces with Tao's group and they finally managed to reduce the bound to 246
(as of 2014-04-14).
Assuming a generalized version of the
Elliott-Halberstam
conjecture (1968) the above upper-bound would be reduced down to 6
[Polymath, August 2014].
However, new methods seem needed for the ultimate reduction to 2.
"Bounded Gaps between Primes" by Yitang Zhang (Annals of Mathematics, 179, pp.1121-1174)
(2016-01-29) Goldbach Conjectures (1742)
Every even number greater than 2 is the sum of two primes.
That conjecture was first formulated by the talented recreational mathematician
Christian Goldbach
(1690-1764) who wrote to Euler
about it in 1742.
An equivalent statement is obtained with odd primes by excluding the number
4 = 2+2 (which isn't the sum of two odd primes).
The weaker statement that odd numbers are sums of three primes can be construed
as a corollary, from the remark that an odd number above 5 is 3 plus an even number above 2.
That weaker statement is less formidable; it was shown to hold for sufficiently
large odd numbers in 1923 by
Hardy &
Littlewood, assuming
the Riemann Hypothesis.
Even Goldbach Conjecture : (Goldbach's conjecture)
All even numbers above 2 are sums of two primes.
All even numbers above 4 are sums of two odd primes.
Odd Goldbach Conjecture : (Goldbach's weak conjecture)
All odd numbers above 5 are sums of three primes.
All odd numbers above 7 are sums of three odd primes.
In 2012 and 2013, Harald Helfgott (b. 1977)
published two papers which provide a complete proof of the weak conjecture.
The strong conjecture remains an open question which has only been
checked by computer
for even numbers up to 4×10^{18} or so.
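Such a computer check is conceptually simple; here is a small illustrative sketch (the helper name `goldbach_pair` is ours, and the range is tiny compared to the published verifications):

```python
def is_prime(n):
    """Trial division, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """First pair of primes (p, n-p) summing to even n > 2, else None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 to 10000 has a Goldbach partition:
all_checked = all(goldbach_pair(n) for n in range(4, 10001, 2))
```

In practice the first prime p of a partition tends to be very small, which is why such searches are fast.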
On 1752-11-18, Goldbach also formulated the lesser-known
conjecture that any odd number is twice a square plus a prime.
Two counterexamples (5777 and 5993) were discovered in 1856 by
Moritz A. Stern (of Stern-Brocot tree fame).
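Stern's counterexamples are easy to rediscover by brute force. The sketch below (an added illustration; `is_twice_square_plus_prime` is our own name) allows k = 0, so odd primes are trivially represented:

```python
def is_prime(n):
    """Trial division, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_twice_square_plus_prime(n):
    """Can odd n be written as 2*k*k + p with p prime (k >= 0)?"""
    k = 0
    while 2 * k * k < n:
        if is_prime(n - 2 * k * k):
            return True
        k += 1
    return False

# Odd numbers below 6000 that defeat Goldbach's lesser conjecture:
exceptions = [n for n in range(3, 6001, 2) if not is_twice_square_plus_prime(n)]
```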
Asymptotically,
there should be as many primes between n^{2} and (n+1)^{2}
as between 1 and n (roughly n / ln n,
by the prime-number theorem).
So, we're very confident that Legendre's conjecture (which asserts that there's
always at least one prime between n^{2} and (n+1)^{2}) won't fail
in the long run. That's a good heuristic argument,
but it doesn't constitute a proof.
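The heuristic can be observed numerically; this added sketch (helper names are ours) counts the primes between consecutive squares and compares with n / ln n:

```python
from math import log

def is_prime(n):
    """Trial division, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def primes_between_squares(n):
    """Count primes strictly between n^2 and (n+1)^2."""
    return sum(is_prime(k) for k in range(n * n + 1, (n + 1) ** 2))

count_10 = primes_between_squares(10)   # the primes 101, 103, 107, 109, 113
estimate_10 = 10 / log(10)              # roughly 4.3
```

For every n in any range one cares to test, the count stays well above zero, exactly as the heuristic predicts.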
It seems likely that there are infinitely many primes of the form n^{2}+1, but this
is not known for sure. This old question
was popular enough in 1912 for Landau to include it in his
list of four unsolved problems related to primes.
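Primes of the form n^{2}+1 are easy to tabulate; a small added sketch (the list name `landau` is ours):

```python
def is_prime(n):
    """Trial division, fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# Values of n below 100 for which n^2 + 1 is prime:
landau = [n for n in range(1, 100) if is_prime(n * n + 1)]
```

Apart from n = 1, only even n can qualify, since an odd n makes n²+1 even and greater than 2.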
(2002-11-17) P = NP (?)
What's an NP-complete problem?
A computational problem is said to be in the class P
of polynomial time problems whenever there's
an algorithm which can find a valid solution in
a number of elementary steps which is always less than a certain
polynomial function
of the size of the input data (one measure of this
size could be the number of digits used in a reasonably thrifty encoding of
the input data).
The class NP (nondeterministic polynomial time problems)
consists of those problems which could be so solved nondeterministically,
which is a fancy way to say that an unlimited number of lucky guesses are allowed
in the process which arrives at a solution.
Such a nondeterministic process must still be such that only valid solutions
are produced...
To put it in simpler words, a problem is in NP if and only if a solution of
it can be checked in polynomial time
(an explicit nondeterministic algorithm would then be to guess a correct solution
and check it).
In 1972, Richard Karp (1935-)
discovered that there are problems in NP which he dubbed "NP-complete"
because they are at least as tough to solve as any other problem in NP, in the
following sense:
Any NP problem can be reduced in polynomial time by a deterministic algorithm
to the solution of an NP-complete problem whose data size is no more than
a polynomial function of the original input data.
Therefore, if any NP-complete problem could be solved deterministically
in polynomial time (i.e., if it were a P problem) then all NP problems
would be in P and we would thus have P = NP.
Karp's original NP-complete problem (dubbed SAT)
was the satisfiability of boolean expressions:
Is there a way to satisfy a given
boolean expression (i.e., make it "true") by assigning true/false values
to the variables in it?
The SAT problem is clearly in NP (just guess
a correct set of values and compute the boolean
expression to make sure it's true).
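The "guess and check" structure can be made concrete with a toy sketch (added here; the function name `brute_force_sat` and the sample formula are ours). Deterministically we only know how to replace the lucky guess by exhaustive enumeration of all 2^n assignments, while each individual check stays cheap:

```python
from itertools import product

def brute_force_sat(formula, n_vars):
    """Try all 2^n assignments; each individual check is polynomial-time."""
    for assignment in product([False, True], repeat=n_vars):
        if formula(*assignment):        # the cheap "verification" step
            return assignment
    return None

# A small CNF-style formula: (x or y) and (not x or z) and (not y or not z)
f = lambda x, y, z: (x or y) and (not x or z) and (not y or not z)
satisfying = brute_force_sat(f, 3)
```

The exponential outer loop is precisely what a proof of P = NP would have to eliminate.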
Conversely,
Stephen Cook (1939-)
proved from scratch in 1971 that any problem in NP can be reduced in polynomial
time to a commensurable boolean satisfiability problem, thus establishing
SAT to be NP-complete. This result is now known as the
Cook-Levin theorem
because it was also obtained independently by
Leonid Levin (1948-).
If a known NP-complete problem like SAT can be reduced polynomially to some NP problem Q,
the problem Q is then established to be NP-complete.
This way, from Karp's original NP-complete problem,
the list of known NP-complete problems has grown to include
literally hundreds of "classical" examples.
The tantalizing thing is that many such NP-complete problems are very practical
problems which, at first, look like they could be solved in polynomial time.
Yet, nobody has ever "cracked" one of these in polynomial time
or proved that such a thing could not be done...
Therefore we still don't know whether P=NP or not.
(2002-12-21) The Collatz Problem
Collatz sequences are sequences of integers in which an even N is followed by N/2
and an odd N by 3N+1.
Do they all end up in the cycle 4, 2, 1, 4 ... ?
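The iteration is a one-liner; this added sketch (the name `collatz` is ours) produces the trajectory down to the first 1:

```python
def collatz(n):
    """Collatz trajectory of n, stopping at the first 1
    (which the conjecture asserts is always reached)."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

# The trajectory of 27 is famously long: 112 terms, peaking at 9232.
length_27 = len(collatz(27))
```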
The problem is most commonly named after the German mathematician
Lothar
Collatz (1910-1990) who formulated the conjecture in 1937.
It's also called the hailstone problem (because of its dynamics)
and has also been popularized under several other names at times
when the thing became more prominently associated with a particular
individual or a specific institution:
The Syracuse problem (Syracuse University).
Kakutani's problem (Yale University, early 1960's).
Ulam's problem.
The sequence
A087003
may be defined as the Dirichlet inverse of the
character modulo 2.
It was first defined by Labos E. (2003-10-02)
as the sum of all the Möbius values found at the points of the Collatz trajectory of n,
until a "4" is found.
However, Marc Lebrun (2004-02-19) has shown that either definition simply
means that A087003 is equal to the Möbius function
at odd points and vanishes at even points...
All told, the Collatz trajectories turn out to be irrelevant!
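Lebrun's reduction is easy to check numerically; this added sketch (the names `mobius` and `labos` are ours) computes the trajectory sum and compares it with the Möbius function directly:

```python
def mobius(n):
    """Moebius function via trial factorization."""
    if n == 1:
        return 1
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # repeated prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def labos(n):
    """Sum of Moebius values along the Collatz trajectory of n, up to the first 4."""
    total = 0
    while True:
        total += mobius(n)
        if n == 4:
            return total
        n = n // 2 if n % 2 == 0 else 3 * n + 1

# Lebrun: labos(n) equals mobius(n) for odd n and vanishes for even n.
```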
(2006-08-29) The Poincaré Conjecture (1904)
This was proved in 2002 by Grigori Yakovlevich Perelman,
in the generalized form proposed by
Thurston (geometrization conjecture).
(2010-10-15) Fermat's Last Theorem (Fermat 1637, Wiles 1995)
x^{n} + y^{n} = z^{n}
has no solutions in positive integers for n > 2.
Hanc marginis exiguitas non caperet. This margin is too small to contain [my proof].
Pierre de Fermat (1601-1665)
Die Gleichung a^{n} = b^{n} + c^{n}
für n > 2 [ist] in ganzen Zahlen nicht auflösbar. The equation a^{n} = b^{n} + c^{n} is not solvable in integers for n > 2. A. Ernest Wendt (1894)
Solutions for n = 2 are called Pythagorean
triples. They are fairly easy to enumerate systematically, starting with
x=3, y=4, z=5. Many special cases known in ancient times were recorded on
Chaldean clay tablets.
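The systematic enumeration alluded to above is usually done with Euclid's classical parametrization; a small added sketch (the function name `primitive_triples` is ours):

```python
from math import gcd

def primitive_triples(limit):
    """Euclid's parametrization: for coprime m > n >= 1 of opposite parity,
    (m^2 - n^2, 2mn, m^2 + n^2) is a primitive Pythagorean triple."""
    found = []
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    found.append((min(a, b), max(a, b), c))
        m += 1
    return sorted(found)

triples = primitive_triples(30)   # starts with (3, 4, 5)
```

All remaining triples are integer multiples of these primitive ones.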
In the Middle Ages, Leonardo Fibonacci proved that there was no solutions for
n = 4 (Liber
Quadratorum, 1225).
(2013-01-09) The Oesterlé-Masser ABC Conjecture
Shinichi Mochizuki has proposed a 500-page proof whose philosophy is obscure.
The radical (rad) of a positive integer n is the
integer whose prime factorization consists of the
same primes as that of n, each with multiplicity 1.
The function rad is a multiplicative function.
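The radical is straightforward to compute from a factorization; an added sketch (the name `rad` is ours):

```python
def rad(n):
    """Radical of n: product of the distinct primes dividing n (rad(1) = 1)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            result *= d             # record the prime once...
            while n % d == 0:
                n //= d             # ...then strip all its powers
        d += 1
    return result * n if n > 1 else result

# Multiplicative on coprime arguments: rad(8 * 9) = rad(8) * rad(9) = 2 * 3
```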
The ABC conjecture says that, for coprime positive integers a and b, the inequality
rad ( a b (a+b) )^{1+e}  >  (a+b)
has infinitely many exceptions when e = 0
but only finitely many when e > 0.
The conjecture was formulated in 1985 by the French Bourbakist Joseph Oesterlé (b. 1954) and the British mathematician David Masser
(b. 1948).
(2016-05-30) Union-closed sets conjecture (Péter Frankl, 1979)
In a nontrivial finite union-closed family of finite sets,
is there always an element that belongs to at least half of the sets?
The "nontrivial" qualifier indicates that we're considering only families containing at least one
nonempty set. A family of sets is said to be union-closed when it
contains any union of its members.
The conjecture clearly holds for families containing at least one singleton.
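For any concrete family, both the union-closure condition and the conjectured property are mechanically checkable; an added sketch (the names `is_union_closed` and `frankl_property` are ours):

```python
from itertools import combinations

def is_union_closed(family):
    """Every union of two members must already be a member."""
    return all(a | b in family for a, b in combinations(family, 2))

def frankl_property(family):
    """Does some element belong to at least half of the sets?"""
    sets = list(family)
    universe = set().union(*sets)
    return any(2 * sum(x in s for s in sets) >= len(sets) for x in universe)

family = {frozenset({1}), frozenset({2}),
          frozenset({1, 2}), frozenset({1, 2, 3})}
```

Here the element 1 belongs to 3 of the 4 sets, so the conjectured property holds.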
(2016-05-30) The Egyptian conjecture (Erdős & Straus, 1948)
For n≥2, is 4/n always the sum of three unit fractions?
Besides 2/3 and 3/4, the ancient Egyptians only allowed fractions with numerator 1
(unit fractions).
They represented other fractions as sums of distinct unit fractions.
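Decompositions of 4/n into three unit fractions can be found by a bounded brute-force search; an added sketch (the name `erdos_straus` is ours; repeated denominators are allowed, unlike in strict Egyptian practice):

```python
from fractions import Fraction

def erdos_straus(n):
    """Brute-force search for x <= y <= z with 4/n = 1/x + 1/y + 1/z."""
    target = Fraction(4, n)
    for x in range(1, 3 * n // 4 + 2):      # need 1/x >= (4/n)/3
        r1 = target - Fraction(1, x)
        if r1 <= 0:
            continue
        y_lo = max(x, int(1 / r1))          # need 1/y <= r1 and 2/y >= r1
        y_hi = int(2 / r1) + 1
        for y in range(y_lo, y_hi + 1):
            r2 = r1 - Fraction(1, y)
            if r2 > 0 and r2.numerator == 1 and r2.denominator >= y:
                return (x, y, r2.denominator)
    return None
```

For example, 4/5 = 1/2 + 1/4 + 1/20.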