With our order of presentation, fundamental concepts aren't used before
they're introduced. The edifice rests on
greatest common divisors,
Euclid's algorithm and Bézout's lemma.

The basic structure of the prime numbers can then be freely examined
in the second part.

(2016-11-07) Divisors
(or Factors ) and divisibility relation :
Divisors are well-defined within any additive monoid.

In an additive monoid, an element x is said to be a divisor
(or factor ) of another element y when the latter is the
sum of finitely many like terms equal to the former.
In that case, we also say that "x divides y".

x + x + ... + x = y

The divisibility relation "x divides y" is a preorder relation, since:

It's reflexive
(any element, including zero, divides itself ).

It's transitive (i.e., if x divides y and y divides z, then x divides z)
courtesy of the associativity of addition.

For the monoid of nonnegative integers, a third property is true,
which makes divisibility a full-fledged partial ordering relation :

It's antisymmetric (i.e., if x divides y and y divides x, then x = y).

It's easy to prove this by remarking first that, among nonnegative integers, a number is always
greater than or equal to any of its divisors.
Recall that zero (the neutral element for addition) divides only itself.

The above is all we need for the rest of this page, starting with the
next section. The following discussion is not relevant to integers.

Generalization :

For the above to make sense, we don't really need an associative addition.
The additive counterpart of power associativity
is enough.

(2015-01-25) Relatively prime
(or coprime ) integers.
Two integers are coprime when their greatest common divisor is 1.

The greatest common divisor (GCD) of two
integers is defined as the largest positive integer that divides
both of them evenly. The GCD
is easily computed using Euclid's algorithm
(and/or its subtractive version).

Likewise, several integers are coprime when their only common positive divisor is 1
(French: nombres premiers entre eux,
nombres étrangers ).
Such numbers need not be pairwise coprime (e.g., 6, 10 and 15).

Theorem :
Arguably, the most important fact about coprime integers is:

If the number d divides m n and is coprime with n,
then d divides m.

Proof :
Using Bézout's lemma,
(itself a consequence of Euclid's algorithm,
quoted in the above definition of coprime numbers)
we know that there exist two integers u and v such that:

u d + v n = 1

Since d divides m n it also divides the following quantity:

u d m + v m n =
m ( u d + v n ) = m

Note that this result is crucial in the
proof of the fundamental theorem of arithmetic given below.
(In other words, the above establishes the uniqueness of prime factorizations;
it's not the other way around.)
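The Bézout argument can be checked numerically. Here is a minimal sketch (the helper name ext_gcd is mine, not from the text):

```python
def ext_gcd(a, b):
    """Extended Euclid: return (g, u, v) with u*a + v*b == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = ext_gcd(b, a % b)
    return (g, v, u - (a // b) * v)

# d divides m*n and is coprime with n; the theorem says d must divide m.
d, n, m = 6, 35, 12
assert (m * n) % d == 0                   # 6 divides 420
g, u, v = ext_gcd(d, n)
assert g == 1 and u * d + v * n == 1      # Bezout: u*d + v*n = 1
# m = u*d*m + v*(m*n): both terms are multiples of d, hence so is m.
assert m % d == 0
```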

A positive integer p is said to be prime
when it has just two distinct divisors
among positive integers (1 and p).

Besides the number 1 (one) itself, which is not considered prime,
any positive integer is thus either a prime or a composite
number (namely, the product of two lesser factors).

Two distinct prime numbers p and q are clearly
coprime
(HINT: the set of their common divisors is the
intersection of {1,p} and {1,q}).
That fact, known as the coprimality of primes,
implies that a prime which divides a product of distinct primes must be equal
to one of them (HINT: Prove this by
induction on the number of factors.)
That's needed to prove the fundamental theorem of arithmetic.

(2006-11-25) Lemma
Any integer greater than 1 has at least one prime factor.

This humble statement is often overlooked.
Yet, it's a required preliminary to two otherwise unrelated proofs
for two fundamental theorems :

We proceed by induction, using the fact that any
integer greater than 1 is either prime or composite.

If such an integer is prime, we're done
(the prime divisor we're after is the number itself).
Otherwise, it has a smaller divisor greater than 1. Using the induction
hypothesis, that divisor has at least one prime factor.
By transitivity, that prime factor also divides the original number.
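The lemma also suggests an algorithm: the least divisor greater than 1 is necessarily prime (otherwise a proper factor of it would be a smaller such divisor). A small sketch, with a function name of my own choosing:

```python
def least_prime_factor(n):
    """Smallest prime factor of an integer n > 1 (naive trial division)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d     # the least divisor greater than 1 is always prime
        d += 1
    return n             # no divisor up to sqrt(n): n itself is prime

assert least_prime_factor(97) == 97          # a prime is its own least factor
assert least_prime_factor(91) == 7           # 91 = 7 * 13
assert least_prime_factor(2**11 - 1) == 23   # 2047 = 23 * 89
```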

(2006-11-25) Fundamental Theorem of Arithmetic
Every positive integer has a unique
factorization into primes.

For example:
168  =  2^{3} · 3 · 7

Such a factorization can be specified by the
sequence of the exponents to which the successive primes should be
raised. In the example of 168, this is:

( 3, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, ... )

We say that "2 has multiplicity 3
in 168". The multiplicities of 3 and 7
in 168 are equal to 1.
Any other prime has multiplicity 0.

When discussing the prime divisors of a certain integer, it's
sometimes more elegant to consider the multiplicity of a prime
without assuming it's nonzero.
(On 2015-12-31, I found it convenient
to formulate this way a conjecture about the prime divisors of the form
6n+1 of the Wendt determinant
of odd order n).

Conversely, any sequence
of nonnegative integers with finitely many nonzero elements is associated with
a unique positive integer.
In the factorization of the integer 1, all exponents are zero:

( 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... )

The sequence of exponents associated with the product of two positive integers is the
termwise sum of the sequences associated with those factors.

Such sequences can be represented by lists ending at the
last nonzero term.
That would be the list (3,1,0,1) for 168
and the empty list ( ) for 1.
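This encoding is easy to make concrete. A hedged sketch (function name mine) which returns just such a trimmed list of exponents:

```python
def exponent_list(n):
    """Exponent sequence of n over the successive primes 2, 3, 5, 7, ...
    trimmed after the last nonzero term (so 1 maps to the empty list)."""
    exps, p = [], 2
    while n > 1:
        while any(p % q == 0 for q in range(2, p)):
            p += 1                       # advance to the next prime
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
        p += 1
    while exps and exps[-1] == 0:
        exps.pop()                       # drop trailing zeros
    return exps

assert exponent_list(168) == [3, 1, 0, 1]    # 168 = 2^3 * 3 * 7
assert exponent_list(1) == []                # empty factorization of 1

# Exponent lists add termwise under multiplication: 12 * 14 = 168.
a, b = exponent_list(12), exponent_list(14)  # [2, 1] and [1, 0, 0, 1]
a += [0] * (len(b) - len(a))
assert [x + y for x, y in zip(a, b)] == exponent_list(168)
```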

Proof of the Fundamental Theorem of Arithmetic :

Using the above lemma, we first prove (by induction)
that any positive integer n has a prime factorization
(i.e., it's a product of prime numbers). Gauss
(1798)
didn't think that proof was needed. I beg to differ:

If n = 1, the factorization we seek is simply empty.

Otherwise, the lemma says that n has a prime divisor p.
The induction hypothesis tells us that the positive integer n/p has a
prime factorization.
Combining p with that factorization, we obtain a prime factorization of n.

To establish that the prime factorization of an integer is unique, we remark that
repeatedly dividing two such factorizations by any prime they have in common must yield
an empty product.

Otherwise, we'd have a prime dividing a product of primes without being equal to any of them
(which contradicts the coprimality of distinct primes).

(2006-05-25) Euclid's Theorem (c. 300 BC)
There are infinitely many prime numbers.

All we have to prove is the existence of a prime above any given prime P.
This can be established in just two lines (one of the greatest proofs ever):

If Q is the product of all primes less than or equal to P, then any prime
factor of Q+1 can't divide Q and must, therefore,
be a prime greater than P.
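The two-line proof can be replayed numerically for a small P (helper name mine; here P = 13, Q + 1 = 30031 = 59 × 509):

```python
def least_prime_factor(n):
    """Smallest prime factor of n > 1, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

P = 13
primes = [2, 3, 5, 7, 11, 13]                  # all primes up to P
Q = 1
for p in primes:
    Q *= p                                     # Q = 30030
assert all((Q + 1) % p != 0 for p in primes)   # no prime <= P divides Q+1
q = least_prime_factor(Q + 1)                  # hence a prime beyond P
assert q == 59 and q > P                       # indeed 30031 = 59 * 509
```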

This great proof is often needlessly presented as a
proof by contradiction.
It's also often obscured by preliminary discussions about such things as
the nature of infinity, which aren't needed either.
Finally, the need for a not-so-obvious lemma is rarely stated, if ever.

The Set of Prime Numbers

(2007-04-30) Dirichlet's theorem on primes in arithmetic progressions:
If a and N are coprime, infinitely many primes are of the form kN+a.

This statement was conjectured by Gauss
(Euler had previously stated the special case a = 1).
It was proved by Dirichlet in 1837, using the
Dirichlet characters
and related L-series which he introduced
for that very purpose.

(2007-04-30) Green-Tao Theorem (2004)
There are arbitrarily long prime arithmetic progressions (PAP).

In 1975, Endre Szemerédi
(Abel prize 2012) proved that any set of integers of positive upper density
contains arbitrarily long
arithmetic progressions
(Szemerédi's theorem).
This had been conjectured by
Erdös (1913-1996) and
Turán (1910-1976) in 1936.
That result doesn't apply to the set of primes (which has zero upper density)
but Erdös strengthened the Erdös-Turán conjecture
in 1973 to state that arbitrarily long arithmetic progressions exist
in any set of positive integers when the series of its reciprocals diverges.
That's a very strong statement which would imply the Green-Tao theorem of 2004.

In 2006,
Tao and Tamar Ziegler
generalized that, by showing that there are
arbitrarily long polynomial progressions of primes.
More precisely:
More precisely:

For any sequence of k integer-valued polynomials
(Q_{1}, Q_{2}, ... Q_{k})
and any positive ε,
there are infinitely many choices of integers
x and m < x^{ε}
which make all the expressions x + m Q_{i}(m) prime.

The original Green-Tao theorem corresponds to the
special case Q_{i}(m) = i.

The existence of infinitely many arithmetic progressions of length 3
among primes was established in 1939, by the Dutch mathematician
Johannes van der Corput
(1890-1971). Originally, Green and Tao wanted to prove that there are
infinitely many equally spaced sequences of 4 primes, but they saw that their
methods proved the existence of such sequences of any length...

Smallest Prime Arithmetic Progressions (PAP) of Given Length

If an arithmetic progression (AP) of k primes
starts above k,
then its common difference (N) must be a multiple
of all the primes less than or equal to k (or else, one such prime would
be a proper factor of a term in the progression).
This is made explicit in the above, using the
(fairly standard) notation
p# to denote the primorial of p, namely the
product of all primes between 2 and p
(A002110).
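That divisibility constraint is easy to verify on the record progressions quoted below. A short sketch of the primorial p# (function name mine):

```python
def primorial(p):
    """p# : the product of all primes from 2 to p (A002110)."""
    result = 1
    for q in range(2, p + 1):
        if all(q % r != 0 for r in range(2, q)):
            result *= q
    return result

assert primorial(23) == 223092870
assert primorial(19) == 9699690

# The common differences of the record progressions quoted on this page
# are indeed multiples of the relevant primorials:
assert 4609098694200 % primorial(19) == 0    # 22-term AP (Moran et al., 1993)
assert 44546738095860 % primorial(23) == 0   # 23-term AP (Frind et al., 2004)
```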

Recent Results and New Records :

Since the record
breakers
are rarely found in strict order of size, we can't
reliably extend the above table. Instead, we'll just mention some newer results,
starting with the first known example of 22 primes in arithmetic progression,
discovered by Andrew Moran, Paul A. Pritchard and Anthony Thyssen on
1993-03-17:

11410337850553 + 4609098694200 k (k = 0 to 21).

On 2004-07-24,
Markus Frind, Paul Jobling and Paul Underwood found
an arithmetic progression of 23 primes
(ending with 449924511422857) :

56211383760397 + 44546738095860 k
(k = 0 to 22).

A smaller instance
(ending with 1036239621869317) was later found by Frind
(2006-04-01)
featuring a common difference
N  =  9523 · 23#.

403185216600637 + 2124513401010 k
(k = 0 to 22).

The first arithmetic progression of 24 primes was discovered by
Jaroslaw "Jarek"
Wróblewski
on 2007-01-18:

468395662504823 + k · 205619 · 23#   (k = 0 to 23).

Wróblewski and Raanan Chermoni found a PAP of length 25 on 2008-05-17:

6171054912832631 + k · 366384 · 23#   (k = 0 to 24).

In a distributed PrimeGrid project using software written
by Jaroslaw Wroblewski and Geoff Reynolds, the
Playstation 3 of Benoît Périchon found
26 primes in arithmetic progression on 2010-04-12:

43142746595714191 + k · 23681770 · 23#   (k = 0 to 25).
A204189

The second example, discovered on 2012-03-16 by James Fry, is smaller:

3486107472997423 + k · 1666981 · 23#   (k = 0 to 25).
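Both 26-term progressions can be verified directly with a deterministic Miller-Rabin test (the code below is my sketch; the base set used is known to give exact answers for all n below 3.3 × 10^24, far beyond the numbers involved here):

```python
def is_prime(n):
    """Deterministic Miller-Rabin; exact for all n < 3.3e24 with these bases."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

P23 = 223092870                              # 23#
assert all(is_prime(43142746595714191 + k * 23681770 * P23) for k in range(26))
assert all(is_prime(3486107472997423 + k * 1666981 * P23) for k in range(26))
```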

Consecutive Primes in Arithmetic Progression :

It's much more difficult to find consecutive primes
in arithmetic progression.
A sequence of length 10 was first found on
1998-03-02.
Namely,
P + 210 k (for k = 0 to 9)
with the following 93-digit value for P.

It's highly unlikely that a longer sequence (length 11) will be found any time soon,
although it's conjectured that there are infinitely many
instances of k consecutive primes in arithmetic progression,
for any k...

(2009-06-26) The von Mangoldt function Λ(n)
Λ(n) = Log p if n is a power of the prime p. Otherwise, Λ(n) = 0.

The fundamental theorem of arithmetic is essentially equivalent
to the following relation, which states that the logarithm of an
integer is equal to the sum of the values of the
von Mangoldt function over all its divisors:

Log n   =   ∑_{d | n}  Λ(d)

The function Λ is named after
Hans von
Mangoldt (1854-1925) who got his doctorate at
Berlin under
Weierstrass and
Kummer.
He paved the way for the two proofs of the Prime number theorem (1896)
by providing rigorous proofs for two statements which had only partial proofs
in Riemann's 1859 seminal paper on the Zeta function.

In 1895, von Mangoldt proved a
conjecture stated by
Bernhard Riemann (1826-1866)
in
1859.
The number of roots of the zeta function with a positive imaginary part less than
2kπ is indeed:

k Log k  −  k  +  O(Log k)

(2006-11-25) PNT: The Prime Number Theorem (1896)
A random integer N is prime with a probability roughly equal to 1/ln(N).

In 1792, at age 15,
Gauss
made the above statement as a private conjecture in
his notebook.
This is usually recast in terms of various (equivalent)
approximations to the prime counting function
which lets π(x)
denote the number of primes below x :

There's no controversy about the value of π(x)
between prime values of x.
At those points of discontinuity however,
the modern convention is used (consistent with the results of
Fourier analysis) which makes the value
at a jump discontinuity equal to the half-sum of the left and right limits.
Bizarre as it may look at first,
this normalization is also adopted for other counting functions,
including the prime-power counting function (J)
discussed below.

This flavor of the prime counting function is what
Riemann described when he
introduced the concept in 1859 (in his paper, he used the symbol F for this).
That's sometimes called the normalized
prime counting function, using a nought subscript to single it out.
However, since there's no agreement on what the unnormalized
version should be, I recommend dropping both the qualifier and the subscript !

In 1808, Legendre
proposed the following approximation, involving
a parameter B (about -1.08366) sometimes known as Legendre's constant.
(For extremely large values of x, the best value of B would be -1.)

π(x)  ~  x / (B + ln x)

The aforementioned conjectural remark of the young Gauss is equivalent to equating
π(x) and
either flavor of the
logarithmic integral, li(x) or Li(x).
Gauss put it in this form in 1849 (although his remark
appeared in print only posthumously, in 1863).
The resulting statement became known as the prime number theorem (PNT).

π(x)   ~   li(x)  ~  Li(x)   ~   x/ln(x)  +  x/(ln x)^{2}  +  2x/(ln x)^{3}  +  ...  +  k! x/(ln x)^{k+1}  +  ...

Although it would be better to retain the first 3 terms in the above
asymptotic expansion
of the logarithmic integral,
the PNT is usually stated by retaining only the leading term x/ln(x). Namely:

Prime-Number Theorem (PNT)

lim_{x → ∞}   π(x) / ( x / Log x )   =   1

In 1848, well before this relation was unconditionally proved,
Chebyshev showed that
if the limit exists, then it must be 1.
(video
by Ram Murty).

Two independent proofs of this famous theorem were given in 1896,
by Hadamard and
de la Vallée Poussin.
In 1951, Wiener made clear that both of those
proofs rely on the established fact that the Riemann Zeta Function
ζ
does not have any zeroes of the form 1+it.

In 1949, a beautiful elementary proof of the PNT was
again found by two mathematicians simultaneously:
Paul Erdös and
Atle Selberg.

The celebrated Riemann Hypothesis
(which states that the nontrivial zeroes of
ζ are all of the form
½ + it )
is equivalent to the following statement:

π(x)   =   Li(x) + O(√x ln x)   =   li(x) + O(√x ln x)

Against all numerical evidence, which never shows
π(x) above li(x),
John E. Littlewood
proved, in 1914, that the sign of
π(x) − li(x) changes infinitely many times.
It's now known that the first such reversal of sign
must happen for some number x with 370 digits or fewer.

(2014-07-24) Riemann's weighted prime power counting function J
Giving partial credit to the powers of primes.

It turns out that Euler's Zeta function,
the undisputed Queen of analytic number theory, is directly tied with a special
counting function closely related to the prime-counting function:

Instead of just counting the primes below x, like the function π does,
the function J also gives a fractional score of 1/n
to the n-th power of a prime.
Since there are clearly π(x^{1/n})
such numbers below x, we have:

J(x)   =   π(x)  +  π(x^{1/2})/2  +  π(x^{1/3})/3  +  π(x^{1/4})/4  +  ...

J has jump discontinuities at prime powers,
where its value is defined as the half-sum of its
lower and upper limits (to be consistent with
Fourier analysis).
The above formula can be inverted using the Möbius
function μ :

π(x)   =   J(x)  −  J(x^{1/2})/2  −  J(x^{1/3})/3  +  ...  +  μ(n) J(x^{1/n})/n  +  ...

Proof :
To check this, use the above definition of J
to expand the right-hand side and then,
for every integer k, regroup all the terms where
x is raised to the power 1/k :

The bracketed sum, ∑_{d | k} μ(d), is a known function of k which
vanishes unless k = 1.
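The inversion can also be checked numerically with exact rational arithmetic. A minimal sketch (all function names are mine), using x = 100, where π(100) = 25 and J(100) = 428/15:

```python
from fractions import Fraction

def prime_count(x):
    """pi(x) for small x, by trial division."""
    return sum(all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))
               for n in range(2, x + 1))

def iroot(x, n):
    """Integer part of the n-th root of x."""
    r = int(round(x ** (1.0 / n)))
    while r ** n > x:
        r -= 1
    while (r + 1) ** n <= x:
        r += 1
    return r

def J(x):
    """Riemann's weighted prime-power count, as an exact fraction."""
    total, n = Fraction(0), 1
    while 2 ** n <= x:
        total += Fraction(prime_count(iroot(x, n)), n)
        n += 1
    return total

def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0                 # squared factor: mu = 0
            result = -result
        d += 1
    return -result if n > 1 else result

x = 100
assert J(x) == Fraction(428, 15)         # 25 + 4/2 + 2/3 + 2/4 + 1/5 + 1/6
# Moebius inversion recovers pi(x); n stops at 6 since 2^7 > 100:
recovered = sum(Fraction(mobius(n), n) * J(iroot(x, n))
                for n in range(1, 7) if mobius(n) != 0)
assert recovered == prime_count(x) == 25
```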

J is sometimes termed Riemann's Jump Function.
Bernhard Riemann introduced it
(using the symbol f at the time) in his famous 1859 paper
on the Zeta function, as part of the following construction...

The paper entitled
"Über die Anzahl der Primzahlen unter
einer gegebenen Grösse" (Nov. 1859)
is the only one Riemann ever wrote on the subject of the
Zeta function (z) and/or Number Theory.
In it, he formulates the wonderful conjecture
now known as the Riemann hypothesis (RH)
but the rest of the paper doesn't depend on that.

When Re(z) > 1, the series are absolutely convergent and we may write:

Log ζ(z)   =   ∑_{p}  ∑_{k ≥ 1}   p^{−kz} / k

Riemann then substitutes every occurrence of p^{−kz}
with a definite integral:

p^{−kz}   =   z ∫_{p^k}^{∞}  x^{−z−1} dx

This turns the whole sum into an integral involving the above function J :

Log ζ(z)   =   z ∫_{0}^{∞}  J(x) x^{−z−1} dx

Riemann's Explicit Formula (1859) :

Now comes the great part:
The above integral transformation would later be called a
Mellin transform
(Hjalmar Mellin was only 5 years old at the time).
Riemann realized that it could be inverted to retrieve J from z.
As we already know how the prime-counting function is derived from J, this will give us an explicit
way to obtain it from z...

The sum is understood to be performed over all the zeroes of the Zeta function, taken
in increasing order of magnitude
(the value of the sum depends on this order
because the series isn't absolutely convergent).

(2009-06-26) Number of divisors of an integer N
A large number N is expected to have about Log N divisors.

In 1838, Dirichlet
evaluated the average number of divisors of
all positive numbers not exceeding N.
This involves the Euler-Mascheroni constant
γ :

Log N + 2γ − 1   ≈   Log N + 0.15443...
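Dirichlet's estimate is easy to test empirically, using the identity that the total number of divisors of all n ≤ N equals the sum of floor(N/d) over d (each d contributes one divisor to every multiple of d below N). A sketch:

```python
import math

N = 10 ** 5
# Sum of d(n) for n <= N, counted divisor-by-divisor:
total = sum(N // d for d in range(1, N + 1))
average = total / N                       # average number of divisors

gamma = 0.5772156649015329                # Euler-Mascheroni constant
# Dirichlet: the average is Log N + 2*gamma - 1, up to O(1/sqrt(N)):
assert abs(average - (math.log(N) + 2 * gamma - 1)) < 0.01
```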

(2009-06-26) On the number ω(N) of prime factors of an integer N
A large number N has about Log Log N prime factors (1917)

The number of distinct prime factors of n is traditionally
denoted ω(n).
The function ω
(see A001221)
is an additive function. That's to say:

ω(ab)  =  ω(a) + ω(b)   whenever a and b are coprime.
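The additivity property is immediate to check on small examples (function name mine):

```python
def omega(n):
    """Number of distinct prime factors of n (A001221)."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

assert omega(12) == 2 and omega(35) == 2
assert omega(12 * 35) == omega(12) + omega(35)    # additive: 12 and 35 coprime
assert omega(6) + omega(10) != omega(60)          # fails: 6 and 10 not coprime
```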

In 1917, G.H. Hardy
and S. Ramanujan proved
ω(n) to be
asymptotically equal
to Log Log n .
In 1933,
Paul Turán
(1910-1976) gave an innovative one-page proof of that fact,
also featured in his doctoral dissertation (1934).

(2006-05-24) The Largest Known Prime
Until a fast formula is found, the record will be broken again and again.

The largest known prime has very often been of the
form 2^{n}-1. Such numbers are called Mersenne numbers
and their prime values are known as Mersenne primes
(we discuss elsewhere their history,
the special form of their factors
and the connection with
perfect numbers).
It's easy to see that a Mersenne number can't be prime unless the
exponent (n) is itself prime. (This happens to be also a consequence of
a nice general property of integer sequences which start with 0 and 1 and obey
a second-order recurrence,
as we demonstrate elsewhere.)

The primality of the exponent is not sufficient. For example,
the 11th Mersenne number 2047 is the product of 23 and 89, whereas the
23rd is divisible by 47...
Nevertheless, the primality of Mersenne primes is (currently)
significantly easier to establish
than that of all other integers of similar magnitudes.

The gap between the Mersenne primes 2^{127}-1
and 2^{521}-1 was sufficiently large to allow other approaches
to break the record, as documented in the table below.

This happened again between the discovery of the
primality of 2^{216091}-1 (Slowinski, 1985)
and that of 2^{756839}-1 (Slowinski, Gage et al., 1992)
when an "Amdahl 1200" computer
was used to prove the primality of the following number
(J. Brown, C. Noll, B. Parady, G. Smith, J. Smith and S. Zarantonello, 1989).

391581 × 2^{216193} − 1

Since 1996, the scene has been dominated by the "Great Internet Mersenne Prime Search"
(GIMPS) which has harnessed thousands of microcomputers and found
all the latest record primes
(see GIMPS for an update).

The "Largest Known Prime", by Date
(until the dawn of the Computer Era)

Number                                                | Digits | Year        | Discoverer       | Method
k (2^{127} - 1) + 1  for k = 114, 124, 388, 408, 498  | 41     | 1951        | Miller & Wheeler | EDSAC
2^{127} - 1                                           | 39     | 1876        | E. Lucas         | Lucas Test
(2^{59} - 1) / 179951                                 | 13     | 1867        | Fortuné Landry   | Trial Division (Optimized)
10^{12} - 10^{6} + 1                                  | 12     | 1851        | W. Looff         |
2^{31} - 1                                            | 10     | before 1772 | Leonhard Euler   |
2^{19} - 1                                            | 6      | 1588        | Cataldi          | Trial Division
2^{17} - 1                                            | 6      | 1588        | Cataldi          | Trial Division

Le Lasseur's Number :
The 17-digit number 13373763765986881 divides the 360^{th}
Fibonacci number.
It was the second-largest known prime when it was discovered in 1879 by the noted French amateur
Henri Le Lasseur
(Henri Le Lasseur
de Ranzay, 1821-1894; also spelled "de Ranzey").
It has been argued that this number was (briefly) the largest
"known" prime, as the prime status of M127 (39 digits)
was supposedly not firmly established at the time. This ain't so, unfortunately.
The heroic proof of
the primality of a "general" number as large as Le Lasseur's number
would have deserved a bright spot in the record books,
instead of a mere footnote...
Le Lasseur's number was then the largest prime whose primality had been established by
general-purpose methods.
It's more than 4000 times larger than the runner-up
3203431780337 (Landry's number, 1867).

By contrast, the Frenchman Aimé Ferrier
officially reported his 44-digit record-breaking prime on
Bastille day, July 14, 1951. He had been working on this since May
and Jeff Miller may have been aware of Ferrier's ongoing work.
According to the 1997 recollections
of family member David Miller
as reported by Chris Caldwell :
"Jeff Miller went to some length to make sure
Ferrier's result was not overlooked ".
Miller may well have changed his original strategy (leading to his 79-digit record)
specifically to beat Ferrier's upcoming result
which would have overshadowed Miller's other results (the first of which
broke the 75-year old record of Lucas).
However, Miller deliberately reported both
his own 79-digit number and
Ferrier's 44-digit prime as having been discovered "in early July". This may have been a
professional courtesy to Ferrier,
although a deeper enquiry (which probably never took place) may or may not have revealed
that Ferrier's results came a few hours too late to enter the record book.
Giving priority to Ferrier puts both numbers in the record book and still gives
credit to Miller and Wheeler for having broken the long-standing
record of Lucas with the earliest of their 41-digit numbers.
Ferrier's number itself stands out as the largest prime ever discovered
without the help of an electronic computer.

This healthy ambiguity ought to be strictly respected now.
This is just what D.H. Lehmer did when
he summarized the "Recent Discoveries of Large Primes"
very shortly after those events [MTAC, 5, 36, Oct. 1951].

A machine printed the primality of 2^{4253}-1
before that of 2^{4423}-1.
However, a human being (Alexander Hurwitz)
read about the latter before anybody
knew about the former, which was thus never largest among "known primes"
(for more details, see our presentation of Mersenne primes and
perfect numbers).

(2007-05-08) The Lucas-Lehmer Test
A fast way to check the primality of the p^{th }Mersenne number 2^{p}-1.

The Lucas-Lehmer test is a special case of the modern way to check the primality
of n when all the prime factors of n+1 are known.
It boils down to a procedure devised in 1878 by
Edouard Lucas (1842-1891) and streamlined by
D.H. Lehmer (1905-1991) in 1930:

Consider the following recursively-defined
sequence, modulo
2^{p}-1

L_{0} = 4        L_{n+1}  =  L_{n}^{2} − 2    [mod 2^{p} − 1]

For an odd prime p,
2^{p}-1 is prime if and only if
L_{p-2} is zero. That's all!

So, the primality of
2^{p}-1 can be determined with just p-2 multiplications.
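The procedure translates directly into a few lines of code. A sketch (function name mine) which recovers the small Mersenne primes and confirms that the exponents 11, 23 and 29, although prime, fail the test:

```python
def lucas_lehmer(p):
    """True iff the Mersenne number 2^p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1                     # 2^p - 1
    L = 4
    for _ in range(p - 2):               # p-2 squarings, as stated above
        L = (L * L - 2) % m
    return L == 0

mersenne_prime_exponents = [p for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31)
                            if lucas_lehmer(p)]
assert mersenne_prime_exponents == [3, 5, 7, 13, 17, 19, 31]
```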

(2017-08-03) The AKS Primality Test (2002)
Telling whether any number is prime or composite in polynomial time.

The AKS algorithm was a great theoretical breakthrough which proved unconditionally
that the primality of an integer can be checked deterministically in polynomial time.
However, for all practical purposes, the randomized
Rabin-Miller test remains unsurpassed.

PRIMES is in P
by Manindra Agrawal, Neeraj Kayal & Nitin Saxena (2002).

(2006-05-24) Is there a formula which gives only primes?
What's a "formula" anyway?

As of this writing, checking the primality
of 2^{p}-1 for increasingly large values of the exponent p
is the most efficient way to name large primes
(see GIMPS).

Anything faster than that would be major news. Something vastly
faster could essentially end the above record-breaking game
by making it easy to explicitly "name" primes with so many digits
that they could not possibly be written down.
This would not necessarily end the search for primes of
a specific type (like "Mersenne primes") but it would make the title of
"largest known prime" as meaningless as that of "largest known integer"
(whatever integer is named, something like
"two to the power of that" will name something vastly larger).

It's not enough to have a "formula" for larger primes.
It must be an effective formula...
The formula discussed next isn't an effective one!

Mills' Formula :

There are uncountably many values of x
for which x^{3^n}
will always round down to a prime integer.
This was shown in 1947 by a student of
Emil Artin at Princeton:
William H. Mills
(1921-2007?).
Mills used deep results about prime gaps which had been obtained by
Guido Hoheisel
(1894-1968) in 1930 and by
Albert Ingham
(1900-1967) in 1937.

Assuming the
Riemann Hypothesis to be true,
the lowest such x
may be explicitly computed, by bracketing its powers:

x^{1}, rounded down, is the lowest prime, namely 2

x^{3}, rounded down, is the lowest prime between
2^{3} and 3^{3}, namely 11

x^{9}, rounded down, is the lowest prime between
11^{3} and 12^{3}, i.e., 1361

x^{27}, rounded down, is the lowest prime between
1361^{3} and 1362^{3}, etc.
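The bracketing can be replayed numerically, assuming (as discussed below) that a prime always lies between the relevant consecutive cubes. A sketch with function names of my own (primality by plain trial division, so we stop after the fourth term):

```python
def is_prime(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return n > 1

def next_term(p):
    """Least prime strictly between p^3 and (p+1)^3."""
    n = p ** 3 + 1
    while not is_prime(n):
        n += 1
    return n

chain = [2]                  # x^1 rounds down to the lowest prime, 2
for _ in range(3):
    chain.append(next_term(chain[-1]))
assert chain == [2, 11, 1361, 2521008887]
```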

Thus, raising x = 2.229494772491595235722852237656- to the power
of 3^{n} (and rounding down) only yields prime numbers, namely:

2, 11, 1361, 2521008887, 16022236204009818131831320183,
etc.

One fallacy with that "etc." is that we had to prove the primality
of the last value to obtain x with this much precision
(conversely, the precision given is barely enough to obtain this
last prime number correctly).
The whole thing is
merely a way to encode
an infinite sequence of prime values into the infinitely
many decimals of a real number. It doesn't help in
constructing such a sequence.

The other fallacy is that the iteration of the process depends on the "obvious fact"
that there's always a prime between consecutive cubes.
Although the
Hoheisel-Ingham theorem does guarantee that for
sufficiently large cubes,
it doesn't say exactly how large is large.
Therefore, we cannot proceed "by inspection" for the above sequence
until a putative lower bound is reached.
Currently,
our only option is to assume the validity of Riemann's hypothesis
and conclude on that conditional basis
(otherwise no satisfactory value of x can be established).

The above sequence is usually expressed as A^{3}, A^{9}...
A^{3^n}... where n is a positive
integer and A (the cube root of the above x) is now known as
Mills' constant
(cf. A051021).
Thanks to T.D. Noe
for clarifying this, on 2008-09-23
(reminding me that we attended graduate school together).

A = x^{1/3} =
1.306377883863080690468614492602605712916784585156713644368...

Using squares instead of cubes in the above would be fine if we knew
that there's always a prime between consecutive squares. If that's true,
then the lowest number that truncates down to a prime when
raised to the power of a 2-power is 2.3247099696648664983923017-
The first primes so obtained are
2, 5, 29, 853, 727613, 529420677791 and 280286254072681840639693.
See A059784.

(2009-01-23) Ulam's Lucky Numbers and Ludic Numbers
Prime-like sequences obtained by modifying the
Sieve of Eratosthenes.

The prime numbers not exceeding some integer n can be obtained
by crossing out some of the positive integers not exceeding n,
according to the following sieving procedure, known as the
Sieve of Eratosthenes
which was devised by Eratosthenes of Cyrene (276-194 BC).

In the above, we could instead cross out every p^{th} number
among the subsequent numbers not yet crossed out. In this case, we do not obtain
the sequence of primes, but the so-called ludic numbers.

If we start with the odd numbers and repeatedly remove every p^{th }
number (counting from the very beginning of the surviving sequence) we obtain the
[odd] lucky numbers (French: nombres fastes ):
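The lucky-number sieve is short enough to state as code. A sketch (function name mine): starting from the odd numbers, each round takes the next surviving value p and deletes every p-th survivor, counting from the beginning of the surviving sequence:

```python
def lucky_numbers(limit):
    """The odd lucky numbers up to limit, by the sieve described above."""
    seq = list(range(1, limit + 1, 2))   # start with the odd numbers
    i = 1                                # seq[0] = 1 is never used as a step
    while i < len(seq) and seq[i] <= len(seq):
        step = seq[i]
        del seq[step - 1 :: step]        # cross out every step-th survivor
        i += 1
    return seq

# First round (step 3) removes 5, 11, 17, ...; second (step 7) removes 19; etc.
assert lucky_numbers(33) == [1, 3, 7, 9, 13, 15, 21, 25, 31, 33]
```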