
Final Answers
© 2000-2021   Gérard P. Michon, Ph.D.

Elementary and
Special Functions


Elementary Functions  &  Special Functions


(2002-01-25)   Basic Functions
What are some of the common and "special" numerical functions?

Well, people have been inventing special functions ad nauseam.  The list is quite literally endless, but we may attempt the beginning of a classification for those functions which are common enough to have a universally accepted name.  Let's start with the truly elementary functions:

  • Polynomial functions: The value y is obtained from the variable x using only a  finite  number of additions and/or multiplications involving given constants.  The simplest such functions are the  null function  (zero value, infinite degree)  and the other  constant functions  (degree 0; y = a ≠ 0).  Next are linear functions (degree 1;  y = ax+b),  quadratic functions (degree 2; y = ax²+bx+c),  cubic functions, quartic (or biquadratic), quintic, sextic (rarely hexic), etc.  Specific qualifiers are virtually unused for polynomials beyond degree 6: degree 7 is heptic rather than septic [sic!],  degree 8 is octic,  9 is nonic,  10 is decic.  We're told that some people have called degree 100  hectic  (just a joke!).
     
  • Rational functions: The functions you obtain when division is allowed as well.  A rational function is the quotient of two polynomials.  The simplest of these is the reciprocal function y = 1/x.
     
  • Algebraic functions: The term applies to any function for which the value y and the variable x are algebraically related, which is to say that there's a two-variable polynomial P such that P(x,y) = 0.  Only a few of these functions have an explicit name and/or symbol.  The most notable is the square root function y = √x.  Like the square root function, most algebraic functions can only be defined continuously over the complex plane as multivalued functions.  Alternatively, such functions may be construed as univalued (ordinary) functions of a variable whose domain is a so-called Riemann surface for which several points may have the same projection on the complex plane.
     
  • Elementary transcendental functions: The simplest of these is  y = e^x, the (natural) exponential function y = exp(x), a function which is equal to its own derivative (all other such functions are proportional to it).  The exponential function is defined (univalued) over the entire complex plane, and so are the other transcendental functions which may be defined directly in terms of it, including trigonometric functions (circular functions)  or  hyperbolic functions  like:
          y   =   sin(x)   =   ( exp(ix) - exp(-ix) ) / 2i
          y   =   sh(x)    =   ( exp(x) - exp(-x) ) / 2
    Modern usage is to consider only the 3 preferred trigonometric functions (sine, cosine and tangent) whereas their 3 reciprocals (cosecant, secant and cotangent) are being deprecated.  A similar remark applies to the 3 preferred hyperbolic functions (sh, ch, th), whose reciprocals are rarely used, if ever.
     
    Also classified as  elementary transcendental functions  are the inverses of the above, starting with the (natural) logarithm function, y = ln(x), which is the inverse of the exponential (x = exp(y)).  If continuity is required in the realm of complex numbers, the logarithm function may only be defined as a multivalued function.  The same thing is true of the inverse trigonometric functions (arcsin, arccos, arctg) or the inverse hyperbolic functions, which complete the modern list of elementary functions.
     
  • Important named combinations of elementary functions. 
    One example is the Gudermannian or hyperbolic amplitude, named after the German mathematician Christoph Gudermann (1798-1852):
     
    gd(x)   =   2 arctg( e^x ) - π/2   =   2 arctg( th(x/2) )
     
    The fact that  gd is  odd  is clear from the latter expression  (but obfuscated by the former one).  The derivative of gd(x) is 1/ch(x),  and the inverse of the Gudermannian is a primitive of 1/cos(x)...  In a Mercator conformal map, the distance to the equator of a point at latitude gd(u) is proportional to u.  (A quick numerical check of these identities appears right after this list.)
     
    sinc(x) = sin(x)/x  is the so-called  sine cardinal  or sampling function  (which arises when expressing Fourier transforms of rectangular functions).
     
  • Gamma function.  Arguably, the most common special function, or the least "special" of them.  The other transcendental functions listed below are called "special" because you could conceivably avoid some of them by staying away from many specialized mathematical topics.  On the other hand, the Gamma function y = Γ(x) is most difficult to avoid.
     
  • Elliptic functions and elliptic integrals.
     
  • Exponential integral (Ei).
    Ei(x)   =   ∫_{-∞}^{x}  ( e^t / t )  dt
     
  • Logarithmic integral  (li):   li x  =  Ei (ln x)
    Euler's older version is capitalized:   Li x  =  li x - li 2
     
  • sine & cosine integral (si, ci).
     
  • Bessel functions.
     
  • Lambert's W function (multivalued); if y = W(x), then x = y exp(y).
     
  • Airy's functions  (Ai & Bi).  Independent solutions of   y''  =  x y
     
  • Riemann's Zeta function.  A simple function with a rich structure, best known for its nontrivial zeroes  (its trivial zeroes are the even negative integers).  Infinitely many of these have been shown (by G.H. Hardy) to have a real part of ½, and billions of them have actually been found on that critical line, but it's still not known whether all of them are there, as Bernhard Riemann (1826-1866) first conjectured in 1859.  The far-reaching implications of this statement, known as the Riemann Hypothesis, make it the  most important unproved mathematical proposition  of our times.
     
  • ... ...

As advertised, the list is endless...
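
As promised above, here is a quick numerical check (a minimal Python sketch, not part of the original article) of the two expressions given for the Gudermannian and of its derivative:

```python
import math

def gd(x):
    """Gudermannian function:  gd(x) = 2*arctan(exp(x)) - pi/2."""
    return 2 * math.atan(math.exp(x)) - math.pi / 2

x = 0.7
print(gd(x), 2 * math.atan(math.tanh(x / 2)))              # the two expressions agree
h = 1e-6
print((gd(x + h) - gd(x - h)) / (2 * h), 1 / math.cosh(x)) # derivative of gd is 1/ch
```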


(J. S. of Canada. 2000-10-15)
How do you solve these equations to exact values for x?
  1.   ln(x-2) - 3   =   ln(x+1)
  2.   sin(2x) sin(x)  +  cos(x)   =   0

1)   If   ln(x-2) - 3  =  ln(x+1)   then   ln( (x-2)/(x+1) )  =  3  =  ln(e³),  so we must have   (x-2) = (x+1) e³   and x can only be equal to   (2+e³) / (1-e³).
    Now, however, this value of x happens to be negative (it's about -1.157) which makes it unacceptable, since both (x-2) and (x+1) should be positive (or else you can't take their logarithm).  Therefore, the original equation does not have any solutions at all!

2)   Rewrite   sin(2x) sin(x) + cos(x) = 0   as   2 cos(x) sin(x) sin(x) + cos(x) = 0,   or   cos(x) [ 2 sin²(x) + 1 ] = 0.  As the second factor cannot be zero,  this equation boils down to   cos(x) = 0,  which has infinitely many solutions of the form   x = (k+½)π,  where k is any integer  (positive or not).


( John of Garland, TX. 2000-11-19)
How are the values of trigonometric functions calculated?
For example, how do we determine that sin(32°) = 0.52991?

Basically, the following relation is used:
sin(x)   =   x - x³/6 + x⁵/120 - x⁷/5040 + x⁹/362880 - ... + (-1)^k x^{2k+1}/(2k+1)! + ...

To use this for actual computations, you've got to remember that x should be expressed in radians (1° = π/180 rad).  In your example, x = 32° = 0.558505360638... rad.  The series "converges" very rapidly:

After 1 term,  S = 0.55850536063818
After 2 terms, S = 0.52946976180816
After 3 terms, S = 0.52992261296708
After 4 terms, S = 0.52991924970365
After 5 terms, S = 0.52991926427444
After 6 terms, S = 0.52991926423312
After 7 terms, S = 0.52991926423332
(no change at this precision after this)
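
The computation above is easy to reproduce; here is a minimal Python sketch (not part of the original answer) that prints the same partial sums:

```python
import math

def sin_partial_sums(x, terms):
    """Partial sums of  sin(x) = x - x^3/3! + x^5/5! - ..."""
    sums, s, term = [], 0.0, x
    for k in range(terms):
        s += term
        sums.append(s)
        term *= -x * x / ((2*k + 2) * (2*k + 3))   # next term of the alternating series
    return sums

x = 32 * math.pi / 180                  # 32 degrees, converted to radians
for n, s in enumerate(sin_partial_sums(x, 7), 1):
    print(f"After {n} term(s):  {s:.14f}")
print(f"math.sin(x)      =  {math.sin(x):.14f}")
```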

Your computer and/or calculator uses this along with a technique called economization (the most popular of which is the Chebyshev economization) which allows a polynomial of high degree (or any reasonable function) to be very well approximated by a polynomial of lower degree.

In the case of the sine function, the convergence of the above series is so good that economization only saves you a couple of multiplications for a given precision. In some other cases (like the atan function), it is quite indispensable.

Footnote about atan:  The atan function has a nice Chebyshev expansion which allows one to bypass the intermediate step of a so-called Taylor expansion like the above.  This is rather fortunate, because the convergence of atan's Taylor expansion is quite lousy when x is close to 1.  Modern atan routines use an economized polynomial for x between 0 and 1, and reduce the computation of atan(x) to that of atan(1/x) when x is above 1.
See the following article for more details...


(2000-11-19)   What is Chebyshev economization?

Over a finite interval, it is always possible to approximate a continuous function with arbitrary precision by a polynomial of sufficiently high degree.  In  some cases [one example is the sine function in the previous article] truncation of the function's Taylor series works well enough.  In other cases, the Taylor series may either converge too slowly or not at all  (the function may not be analytic or, if it is analytic, the radius of convergence of its Taylor series may be too small to cover comfortably the desired interval).

If a good polynomial approximation of the continuous real function  f (x)  is desired over a finite interval, the following approach may be used and is in fact the most popular one.  We may consider without loss of generality that the desired range of  x  is [-1,1] (if it's not, a linear change of variable will make it so).  Thus, a new variable θ  (whose range is [0,π])  can be introduced via the relation  cos θ = x.  Either variable is a  decreasing  function of the other.

The fundamental remark is that cos(nθ) is a polynomial function of cos(θ).  In fact, either of the following relations defines a polynomial of degree n known as the Chebyshev polynomial [of the first kind] of degree n.  The symbol "T" is conventionally used for these because of alternate transliterations from Russian, like Tchebycheff or Tchebychev, which are a better match for the Russian pronunciation  (the spellings "Chebychev" and "Tchebyshev" also appear).

cos(nθ)  =  T_n(cos θ)     or     ch(nθ)  =  T_n(ch θ)     [ch = hyperbolic cosine]

The trigonometric formula   cos((n+2)x)  =  2 cos(x) cos((n+1)x) - cos(nx)   translates into a simple recurrence relation which makes Chebyshev polynomials very easy to tabulate:   T_{n+2}(x)   =   2x T_{n+1}(x) - T_n(x)

T_0(x)  =   1
T_1(x)  =   x
T_2(x)  =   -1 + 2x²
T_3(x)  =   -3x + 4x³
T_4(x)  =   1 - 8x² + 8x⁴
T_5(x)  =   5x - 20x³ + 16x⁵
T_6(x)  =   -1 + 18x² - 48x⁴ + 32x⁶
T_7(x)  =   -7x + 56x³ - 112x⁵ + 64x⁷
T_8(x)  =   1 - 32x² + 160x⁴ - 256x⁶ + 128x⁸
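
As an illustration (a small Python sketch built on the stated recurrence; not part of the original article), the table above can be generated mechanically:

```python
def chebyshev_polynomials(n_max):
    """Coefficient lists (constant term first) of T_0 .. T_{n_max},
    built from the recurrence  T_{n+2}(x) = 2x T_{n+1}(x) - T_n(x)."""
    T = [[1], [0, 1]]                       # T_0 = 1,  T_1 = x
    while len(T) <= n_max:
        a, b = T[-2], T[-1]
        c = [0] + [2 * coef for coef in b]  # 2x * T_{n+1}
        for i, coef in enumerate(a):        # ... minus T_n
            c[i] -= coef
        T.append(c)
    return T

for n, coeffs in enumerate(chebyshev_polynomials(8)):
    print(f"T_{n}:", coeffs)                # e.g.  T_2: [-1, 0, 2]   means  -1 + 2x²
```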

We must remark prominently that, if   y² = x²-1   (y need not be real),  then:

T_n(x)   =   [ (x+y)^n + (x-y)^n ] / 2

This is a consequence of  de Moivre's relation (with x = cos θ  and y = i sin θ):

[ cos θ + i sin θ ]^n   =   exp(iθ)^n   =   exp(inθ)   =   cos nθ + i sin nθ

Now,  f (cos θ)  is clearly an even function of θ  which is continuous when  f  is.  As such, it has a tame Fourier expansion which contains only cosines and translates into the so-called Chebyshev-Fourier expansion of  f (x):

 f (cos θ)   =   c_0 /2  +  Σ_{n≥1}  c_n cos(nθ)        therefore:        f (x)   =   c_0 /2  +  Σ_{n≥1}  c_n T_n(x)

The last expression is a series which is always convergent.  For "infinitely smooth" functions, it converges exponentially fast (as a function of n, the coefficient has to be smaller than the reciprocal of a polynomial of degree k+1, for any k, or else the Fourier series of the k-th derivative of  f (cos θ) would not converge).  This is much more than what can be said about a Taylor power series...  A truncated Fourier-Chebyshev series is thus expected to give a much better approximation than a Taylor series truncated to the same order.

What is known as Chebyshev economization is often limited to the following dubious technique: Take a good polynomial approximant with many terms (possibly coming from a Taylor expansion) and express it as a linear combination of Chebyshev polynomials (whose coefficients may be obtained from the inversion formula below). This expression may be truncated at some low order to obtain a good approximation as a polynomial of lower degree.

A better approach, whenever possible, is to compute the exact Chebyshev expansion of the target function and to truncate that in order to obtain a good approximation by a polynomial of low degree...  The following inversion formula can be used to obtain the Chebyshev expansion  [watch out for the explicit halving of c_0]  of an  analytic  function given by its Taylor expansion:

 f (x)   =   Σ_{n≥0}  a_n x^n   =   ½ c_0  +  Σ_{n≥1}  c_n T_n(x)

 where       c_n   =   2  Σ_{p≥0}  C(2p+n, p)  a_{2p+n} / 2^{2p+n}

x      =                       T_1(x)
2 x²   =   1                +  T_2(x)
4 x³   =   3 T_1(x)         +  T_3(x)
8 x⁴   =   3  +  4 T_2(x)   +  T_4(x)
16 x⁵  =   10 T_1(x)  +  5 T_3(x)  +  T_5(x)
32 x⁶  =   10  +  15 T_2(x)  +  6 T_4(x)  +  T_6(x)

2^{n-1} x^n   =   Σ_{k=0}^{⌊(n-1)/2⌋}  C(n,k) T_{n-2k}(x)   +   { ½ C(n, n/2)  if n is even ;   0  if n is odd }
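
As a hedged illustration of this inversion formula (a Python sketch, not from the original article; the choice of exp as a test function and of a degree-8 truncation are just examples), one may convert Taylor coefficients into Chebyshev coefficients and check the truncated sum:

```python
import math

def cheb_from_taylor(a, n_max):
    """Chebyshev coefficients from Taylor coefficients a[k], using
       c_n = 2 * sum_{p>=0} C(2p+n, p) * a[2p+n] / 2**(2p+n)."""
    c = []
    for n in range(n_max + 1):
        s, p = 0.0, 0
        while 2 * p + n < len(a):
            k = 2 * p + n
            s += math.comb(k, p) * a[k] / 2**k
            p += 1
        c.append(2 * s)
    return c

# Example (an assumption, not from the article): Taylor coefficients of exp(x).
a = [1 / math.factorial(k) for k in range(30)]
c = cheb_from_taylor(a, 8)

x = 0.3
approx = c[0] / 2 + sum(c[n] * math.cos(n * math.acos(x)) for n in range(1, len(c)))
print(approx, math.exp(x))     # agreement to roughly 8 significant digits
```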

The above complete inversion formula (infinite sum) is occasionally handy, but one may also always obtain the coefficients cn via the Euler formulas, which give:

c_n   =   (2/π)  ∫_{-1}^{1}  f (x) T_n(x) / √(1-x²)  dx   =   (2/π)  ∫_{0}^{π}  f (cos θ) cos(nθ) dθ
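
Here is a minimal Python sketch (not part of the original article) of the second Euler formula above, using a simple midpoint rule in θ; the choice of atan as a test function and the sample count are arbitrary:

```python
import math

def chebyshev_coeffs(f, n_max, samples=2000):
    """c_n = (2/pi) * integral from 0 to pi of f(cos t) cos(n t) dt  (midpoint rule)."""
    c = []
    for n in range(n_max + 1):
        s = sum(f(math.cos(math.pi * (j + 0.5) / samples))
                * math.cos(n * math.pi * (j + 0.5) / samples)
                for j in range(samples))
        c.append(2.0 * s / samples)        # (2/pi) * (pi/samples) * sum
    return c

def cheb_eval(c, x):
    """Evaluate c_0/2 + sum c_n T_n(x), with T_n(x) = cos(n * arccos x)."""
    t = math.acos(x)
    return c[0] / 2 + sum(c[n] * math.cos(n * t) for n in range(1, len(c)))

c = chebyshev_coeffs(math.atan, 8)     # degree-8 truncation of arctan on [-1, 1]
err = max(abs(cheb_eval(c, j / 100) - math.atan(j / 100)) for j in range(-100, 101))
print(err)                             # about 1e-4 for this low truncation order
```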

In at least one (important) case, we may even obtain the Chebyshev expansion directly by algebraic methods...  Consider the arctangent function, which gives the angle in radians between -π/2 and π/2 whose tangent equals its given [real] argument.  That function is variously abbreviated Arctg (Int'l/European), arctan (US), atg or atan (computerese).  The following relation is true for small enough arguments.  [It's  true modulo π for unrestricted arguments, because of the formula giving tg(a+b) as (u+v)/(1-uv) if u and v are the respective tangents of a and b.]  This may thus be considered an algebraic relation between formal power series:

Arctg( (u+v)/(1-uv) )   =   Arctg(u) + Arctg(v)

With this in mind, we may as well use this formal identity for the complex numbers  u = k [x + i√(1-x²)]  and  v = k [x - i√(1-x²)],  so that  2 k^n T_n(x) = u^n + v^n.  This turns the RHS of the above identity directly into a Chebyshev expansion where the coefficient c_n is simply the coefficient of the arctangent power series multiplied by 2k^n.  On the other hand, the LHS becomes  Arctg( 2kx/(1-k²) ).  If we let k be √2-1, this boils down to Arctg(x) and we have:

Arctg(x)   =   Σ_n  [ 2 (√2-1)^{2n+1} (-1)^n / (2n+1) ]  T_{2n+1}(x)
           =   2 (√2-1)  Σ_n  [ (2√2-3)^n / (2n+1) ]  T_{2n+1}(x)

That's [almost] all there is to it:  we got the Chebyshev expansion at very little cost!  How good is the convergence of this series?  Well, we may first remark that it converges even if the magnitude of x exceeds unity.  More precisely, when x is larger than 1, T_n(x) is asymptotically equal to half the n-th power of  x+√(x²-1),  a quantity which equals the reciprocal of  √2-1  when x is √2.  Therefore, the series converges if and only if the magnitude of x is less than (or equal to) √2.

More importantly, when the magnitude of x is not more than 1, a partial sum approximates the whole thing with an error smaller than the coefficient of the first discarded term.  Suppose we want to use this to find a polynomial approximant of the arctangent function at a precision of about 13 significant digits (we need it only over the interval [-1,1], as we may obtain the arctangent of x for x>1 as π/2 minus the arctangent of 1/x).  We find that for 2n+1 = 31, the relevant coefficient is about 0.88×10⁻¹³, so that the corresponding term is just about small enough to be dropped.  The method will thus give the desired precision with an odd polynomial of degree 29, whose value can be computed using 16 multiplications and 14 additions.  A similar accuracy would require about 10 000 000 000 000 operations with the "straight" Taylor series...  Some economization, indeed!
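
As a sketch (Python; not from the original article), the economized arctangent series above can be summed directly, evaluating T_{2n+1}(x) as cos((2n+1) arccos x) for simplicity rather than with the optimized 16-multiplication scheme:

```python
import math

def arctan_cheb(x, terms=15):
    """Sum of the first `terms` terms of the Chebyshev series above (degree 2*terms - 1).
    T_{2n+1}(x) is evaluated as cos((2n+1) * arccos x), valid for |x| <= 1."""
    k = math.sqrt(2) - 1
    t = math.acos(x)
    return sum(2 * (-1)**n * k**(2*n + 1) / (2*n + 1) * math.cos((2*n + 1) * t)
               for n in range(terms))

# 15 terms reach degree 29; the error on [-1, 1] should be around 1e-13, as argued above.
err = max(abs(arctan_cheb(j / 100) - math.atan(j / 100)) for j in range(-100, 101))
print(err)
```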


The above "formal" computation gives the same results as the (unambiguous) relevant Euler formula for the coefficients of the Chebyshev expansion of the arctangent function. This may puzzle a critical reader, since the whole thing seems to work as long as the quantity 2k/(1-k2 ) is equal to unity, and this quadratic condition is true not only when k is Ö2-1, but also for the alternate root -(Ö2+1) as well. This latter value, however, leads to a formal Chebyshev series which diverges for any value of x...

(Mark Barnes, UK. 2000-10-24)
What can you tell me about the Gamma function?  I can work out values for Γ(x) if x is integral or x is an integer plus one half.
How can I calculate values for Γ(x), if x is some other value, like 2.8 or 67/9?  What actually is the function?

The following intimidating definitions of the transcendental Gamma function hide its simple nature:  Γ(z+1) is merely the generalization of the factorial function (z!) to all real or complex values of the number z  [besides negative integers].

  • Euler integral of the 2nd kind (valid only if Re(z) > 0):
    Γ(z)  =  ∫_{0}^{∞}  e^{-t} t^{z-1} dt
    Γ(z)  =  ∫_{1}^{∞}  e^{-t} t^{z-1} dt  +  Σ_{n≥0}  (-1)^n / ( n! (n+z) )
  • Euler's definition (1729):
    Γ(z)  =  lim_{n→∞}  n^z n! / ( z (z+1) ... (z+n) )
  • Weierstrass's definition  (γ being the Euler-Mascheroni constant, namely 0.5772156649015328606065120900824024310421593359399235988... ):
    Γ(z)  =  ( e^{-γz} / z )  Π_{n≥1}  e^{z/n} / (1 + z/n)

Γ(z) has an elementary expression only when z is either a positive integer n, or a positive or negative half-integer  (½+n  or  ½-n):

 
Γ(n)  =  (n-1)!          Γ(½ + n)  =  √π (2n-1)!! / 2^n          Γ(½ - n)  =  (-2)^n √π / (2n-1)!!

In this, k! ("k factorial") is the product of all positive integers less than or equal to k, whereas k!! ("k double-factorial") is the product of all such integers which have the same parity as k, namely k(k-2)(k-4)...  Note that k! is undefined (∞) when k is a negative integer (the Γ function is undefined at z = 0, -1, -2, -3, ... as it has a simple pole at z = -n with a residue of (-1)^n/n!, for any natural integer n).  However, the double factorial k!! may also be defined for negative odd values of k:  The expression  (-2n-1)!!  =  (-1)^n / (2n-1)!!  may be obtained through the recurrence relation  (k-2)!! = k!! / k,  starting with k=1.  In particular (-1)!! = 1, so that either of the above formulas does give Γ(½) = √π, with n=0.  (You may also notice that either relation holds for positive or negative values of n.)

When the real 2x is not an integer, we do not know any expression of Γ(x) in terms of elementary functions:

Γ(1/3) = 2.67893853470774763365569294097467764412868937795730...
Γ(1/4) = 3.62560990822190831193068515586767200299516768288006...
Γ(1/5) = 4.59084371199880305320475827592915200343410999829340...

The real [little known] gem which I have to offer about numerical values of the Gamma function is the so-called "Lanczos approximation formula" [pronounced "LAHN-tsosh" and named after the Hungarian mathematician Cornelius Lanczos (1893-1974), who published it in 1964]. Its form is quite specific to the Gamma function whose values it gives with superb precision, even for complex numbers. The formula is valid as long as Re(z) [the real part of z] is positive. The nominal accuracy, as I recall, is stated for  Re(z) > ½, but it's a simple application of the "reflection formula" (given below) to obtain the value for the rest of the complex plane with a similar accuracy. The Lanczos formula makes the Gamma function almost as straightforward to compute as a sine or a cosine.  Here it is:

Γ(z)   =   [ 1 + C_1/z + C_2/(z+1) + ... + C_n/(z+n-1) + ε(z) ]  ×  √(2π)  (z+p-½)^{z-½} / e^{z+p-½}

ε(z) is a small error term whose value is bounded over the half-plane described above.  The values of the coefficients C_i depend on the choice of the integers p and n.  For p=5 and n=6, the formula gives a relative error less than 2.2×10⁻¹⁰ with the following choice of coefficients:  C_1 = 76.18009173,  C_2 = -86.50532033,  C_3 = 24.01409822,  C_4 = -1.231739516,  C_5 = 0.00120858003,  and  C_6 = -0.00000536382.
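
As a minimal sketch (Python, not part of the original article), here is the formula with the p=5, n=6 coefficients quoted above; for Re(z) < ½ it falls back on the reflection formula mentioned in the text (and listed below):

```python
import cmath, math

# The p = 5, n = 6 coefficients quoted above (relative error below 2.2e-10).
C = [76.18009173, -86.50532033, 24.01409822,
     -1.231739516, 0.00120858003, -0.00000536382]

def gamma_lanczos(z):
    """Lanczos approximation of Gamma(z); results are returned as complex numbers."""
    z = complex(z)
    if z.real < 0.5:
        # Reflection formula:  Gamma(z) Gamma(1-z) = pi / sin(pi z)
        return cmath.pi / (cmath.sin(cmath.pi * z) * gamma_lanczos(1 - z))
    series = 1.0
    for i, c in enumerate(C):
        series += c / (z + i)               # 1 + C1/z + C2/(z+1) + ... + C6/(z+5)
    t = z + 4.5                             # z + p - 1/2   with  p = 5
    return math.sqrt(2 * math.pi) * t ** (z - 0.5) * cmath.exp(-t) * series

print(gamma_lanczos(5))        # ~24        (Gamma(5) = 4!)
print(gamma_lanczos(0.5))      # ~1.772454  (Gamma(1/2) = sqrt(pi))
print(gamma_lanczos(2.8))      # ~1.676491
```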

I used this particular set of coefficients extensively for years (other sources may be used for confirmation) and stated so in my original article here.  This prompted Paul Godfrey of Intersil Corp. to share a more precise set and his own method to compute any such sets (without the fear of uncontrolled rounding errors). Paul has kindly agreed to let us post his (copyrighted) notes on the subject here.

Some of the fundamental properties of the Gamma function are:

  • Reflection formula:   Γ(z) Γ(1-z)  =  π / sin(πz)
     
  • Recursion formula:   Γ(1+z)  =  z Γ(z)
     
  • Exact values (when n is an integer; see above when n is negative):
    Γ(n) = (n-1)!     and     Γ(n+½) = √π (2n)! / (n! 4^n)
     
  • Gauss multiplication formula:
    Γ(nz)  =  (2π)^{(1-n)/2}  n^{nz-½}  [ Γ(z)  Γ(z+1/n)  ...  Γ(z+(n-1)/n) ]
     
  • Legendre duplication formula  (i.e., multiplication formula  with  n = 2):
    Γ(2z)  =  (2π)^{-½}  2^{2z-½}  Γ(z)  Γ(z+½)

Other interesting remarks about the Gamma function include:

  • | Γ(ix) |²   =   π / ( x sinh πx )     for x real

The Gamma Function (38 pages)  by  Emil Artin (1931; English translation by Michael Butler, 1964)


Louis Vlemincq  (Belgium.  2004-02-19; e-mail)   Lambert's W function
How is the equation   t + ln(t) = T ln( I / i )   solved for t and i ?

Taking the exponential of both sides makes it easy to solve for i:

t e^t   =   [ I / i ]^T

i   =   I / ( t e^t )^{1/T}

To solve for t, you must use Lambert's W function, one of the more common "special" functions presented above:  Apply W to both sides of the first of the above equations.  By definition,  W(t exp(t))  is equal to t.  Therefore:

t   =   W( [ I / i ]^T )

This solution is valid for positive values of t  (the original equation does not make sense for negative ones).  By itself, the equation  x = t exp(t)  has 2 real solutions for t when x is between -1/e and 0 and no real solution when x is less than -1/e.
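
As a minimal sketch (Python, not part of the original answer), the principal branch of W can be computed by Newton's iteration on  w e^w = x,  which then solves the correspondent's equation for t; the values of I, i and T below are made up for illustration only:

```python
import math

def lambert_w(x, tol=1e-14):
    """Principal branch W(x) for x >= 0, by Newton's iteration on  w * exp(w) = x."""
    w = math.log(1 + x)                       # rough starting guess
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

# Hypothetical values of I, i and T, for illustration only:
I, i, T = 10.0, 2.0, 3.0
t = lambert_w((I / i) ** T)                   # t = W([I/i]^T)
print(t + math.log(t), T * math.log(I / i))   # both sides of  t + ln(t) = T ln(I/i)
```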

The radius of convergence of the  Taylor series  of W is 1/e (0.36787944...)

 
W(z)   =   Σ_{n≥1}  [ (-n)^{n-1} / n! ]  z^n        [ for |z| < 1/e ]

On 2004-02-20, Louis Vlemincq wrote:
Thanks a lot for your kind, quick and learned answer.
It will be most useful to me.
 Best regards,
Louis Vlemincq,   Transmission Specialist,  BelcomLab.
BELGACOM   /   2, rue Carli, 1140 Evere   /   Belgium

This can be used to define W everywhere except at the singular point  z = -1/e  by  analytic continuation,  as a multivalued function whose branch cuts are not trivially related.

Johann Heinrich Lambert (1728-1777)   |   MathWorld   |   Wikipedia
Omega constant   |   Gompertz-Makeham law
 
Branch differences and Lambert W  by  D.J. Jeffrey & J.E. Jankowski  (2014).
 
The Lambert W Function (11:57)  Mathoma  (2015-02-12).
The Famous Equation x^2=2^x (11:57)  by  Steve Chow  (blackpenredpen, 2019-10-29).


(2020-03-17)   Dilogarithm, trilogarithm & other polylogarithms
Jonquière's function  (Alfred Jonquière, 1888).

The polylogarithm of order  s  is the  analytic function  of  z  defined by

Li_s(z)   =   Li(s,z)   =   Σ_{n≥1}  z^n / n^s

Riemann's Zeta function  is a special case:   Li_s(1)  =  ζ(s)

For  s=2  and  s=3,  the names  dilogarithm  and  trilogarithm  are used.  Other standard  numerical prefixes  are available if the need arises...  Historically,  the dilogarithm function was studied well before other polylogarithms,  starting with the following formula due to Euler:

Dilogarithm Reflection Formula   (Euler, 1768)
Li_2(1-x)  +  Li_2(x)    =    π²/6  -  Log(1-x) Log(x)

The dilogarithm is still sometimes called  Spence's function  to recognize the work published in 1809 by the Scottish mathematician  William Spence (1777-1815).  Spence was concerned with polylogarithms of all orders  (integers only)  which he called  Logarithmic Transcendents.

Legendre posed some properties of dilogarithms as exercises  (1811).  Niels Abel (1802-1829)  discussed dilogarithms at greater length in 1826.  The name itself  (German;  bilogarithmische Function)  was coined in 1828 by the Swedish mathematician  Carl Johan Hill  (né  Rudelius, 1793-1875).

In December 1888,  the Swiss mathematician  Alfred Jonquière (1862-1899)  presented the general case  (for integer values of s)  to the  Royal Swedish Academy of Sciences,  under the title:  Ueber einige Transcendente welche bei der wiederholten Integration rationaler Funktionen auftreten.  Shortly thereafter,  Jonquière extended his definition to allow all complex values  of  s  in a  note  published in French in the  Bulletin de la Société Mathématique de France,  17,  pp. 142-152  (1889).

Besides polylogarithms,  Alfred Jonquière  is best remembered for a musical treatise:  Grundriss der musikalischen Akustik  (1898).  He is unrelated to the family of the French geometer and naval officer  Ernest de Jonquières (1820-1901)  whose name is spelled with a trailing "s".

Thus,  Jonquière's function  (the general polylogarithm)  can be considered to be a differentiable function of two complex variables,  s  and  z,  verifying the following recurrence relation:

Li_0(z)   =   z / (1-z)
    Li_1(z)   =   - Log(1-z)
(d/dz) Li_s(z)   =   Li_{s-1}(z) / z

The latter relation can be integrated unambiguously,  using   Li_s(1) = ζ(s) :

Li_s(z)   =   ζ(s)  +  ∫_{1}^{z}  ( Li_{s-1}(t) / t )  dt

This isn't applicable to s=1  because both terms diverge.
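
A small Python sketch (not from the original article) of the defining series, used here to check Euler's reflection formula numerically; the sample point x = 0.3 and the number of terms are arbitrary:

```python
import math

def polylog(s, z, terms=1000):
    """Li_s(z) = sum_{n>=1} z^n / n^s  by direct summation (fine for |z| < 1; slow at |z| = 1)."""
    return sum(z**n / n**s for n in range(1, terms + 1))

# Euler's reflection formula:  Li_2(1-x) + Li_2(x) = pi^2/6 - log(1-x) log(x)
x = 0.3
lhs = polylog(2, 1 - x) + polylog(2, x)
rhs = math.pi**2 / 6 - math.log(1 - x) * math.log(x)
print(lhs, rhs)        # the two values agree to full double precision
```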

 Come back later, we're
 still working on this one...

Wikipedia   |   Brilliant   |   John D. Cook   |   Polylogarithmic identities
 
Kummer's function   |   Eduard Kummer (1810-1893)   |   Lerch transcendent   |   Mathias Lerch (1860-1922)
 
Note sur la série  Σ x^n / n^s   by  Niels Abel  (1826, posthumously published).
Note sur la série  Σ x^n / n^s   by  Alfred Jonquière.  Bulletin de la S.M.F. 17, pp. 142-152 (1889).
 
Video :   Dilogarithm (8:28)  by  Steve Chow  (blackpenredpen, 2019-12-04).


(2021-07-21)   Deformed Exponential Function   exp_q   [ exp_q(0) = 1 ]
Only  entire  solution to   y'(x) = y(qx)   [q  being complex  with  |q| ≤ 1]

Power series with recursively-defined coefficients:   a_0 = 1 ;   a_{n+1}  =  a_n q^n / (n+1)

From a certain viewpoint,  the simplest 
entire function after the  exponential function.

It was studied by  Georges Valiron (1884-1955)  in 1938.

exp_q(z)   =   Σ_{n≥0}  ( z^n / n! )  q^{n(n-1)/2}        where  |q| ≤ 1
exp_0(z)    =   1 + z
exp_1(z)    =   exp(z)
exp_{-1}(z)  =   cos z  +  sin z   =   √2 sin(z + π/4)

For less obvious expressions when  q  is a  root of unity,  see the technique presented  elsewhere on this site.  Such results include expressions like:

exp_i(z)    =    ½ [ e^{(7z+1)iπ/4}  +  e^{5ziπ/4}  -  e^{(3z+1)iπ/4}  +  e^{ziπ/4} ]

When the parameter is real,  all the zeros of  exp_q  are real.

Smallest zeros of  exp_q   [not corrected for serious rounding errors]

   q        x_0(q)               x_1(q)              x_2(q)              x_3(q)
   1.0      (none)
   0.5      -1.4880785455997     -4.881140894897     -13.560408528       -34.77531624774
   0.3      -1.2126027949723     -7.2451809863719    -34.92945202954     -152.66086135776
   0.1      -1.0555090266525     -20.399676871004    -303.09758881324    -4025.2405581538
   0.0      -1   (single zero)
  -0.1      -0.95458352354998    +19.723663966193    -298.05399119731    +
  -0.3      -0.88552109420781    +6.4875164979152    -32.944238048438    +
  -0.5      -0.83727893410698    +3.9111921204854    -11.936455757853    +31.964496107034
  -1.0      -0.78539816339745    +2.3561944901924    (k-¼)π   for any integer  k

When  q  is a real between  0  and  1,  a theorem due to Edmond Laguerre (1834-1886; X1853)  says that all zeros are simple,  real and negative.
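
As a numerical sketch (Python, not part of the original article), the series definition can be summed term by term and the zero x_0(q) located by bisection; the bracket [-2, 0] is an assumption that happens to work for the value tested:

```python
import math

def exp_q(z, q, terms=200):
    """Deformed exponential:  sum_{n>=0}  z^n/n! * q^(n(n-1)/2)."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= z * q**n / (n + 1)          # a_{n+1} = a_n q^n / (n+1)
    return s

def smallest_zero(q, lo=-2.0, hi=0.0):
    """Bisection for x_0(q), assuming exp_q changes sign on [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if exp_q(lo, q) * exp_q(mid, q) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(smallest_zero(0.5))     # close to -1.4880785455997, as in the table above
```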

The first column in the above table,  x_0(q),  has been the object of considerable attention.  Alan Sokal  (2011)  found that  1 + 1/x_0(q)  has an expansion consisting only of  positive  terms  (he checked up to order 899):

q/2  +  q²/4  +  q³/12  +  q⁴/16  +  q⁵/48  +  7q⁶/288  +  q⁷/96  +  7q⁸/968  +  49q⁹/6912  +  113q¹⁰/23040  +  17q¹¹/4608  +  293q¹²/92160  +  ...

Sur le déplacement des zéros des fonctions entières par leur dérivation (Uppsala PhD)  Martin Ålander  (1914).
 
Lecture #1  and  Lecture #2  by  Alan Sokal (1955-)  at  Queen Mary  (March 2011).
The deformed exponential function  by  Alan Sokal  (Marc Kac seminar, Utrecht, 2011-06-10).
 
An asymptotic formula for the zeros of the deformed exponential function  Cheng Zhang  (arXiv, 2015-01-12).
 
Zeros of the deformed exponential function  by  Liuquan Wang  & Cheng Zhang  (2018).

Tsallis functions  (Box-Cox 1964, Tsallis 1988, etc.)

The name  deformed exponential  has often been  hijacked  in recent years.  What follows coincides with the above only up to second order:

e_q(x)   =   1 + x + q x²/2 + O(x³)

Constantino Tsallis  presented the following in 1988 and 1994.  It had been analyzed in 1964  (using the real parameter  λ = 1-q)  by the statisticians  George E.P. Box (1919-2013)  and  David Cox (1924-)  and also,  in 1967,  by  Jan Havrda  and  Frantisek Charvát,  as they named the concept of  structural α-entropy.  They all refer to:

e_q(x)   =   [ 1 + (1-q) x ]^{1/(1-q)}

That expression is  well-defined  only when 1+(1-q)x  is a positive real,  although it does reduce to the ordinary exponential as  q  tends to  1.

That's  e_{1-q}(x,1),  using the  deformed exponential function of two variables  of  Miomir Stankovic,  Sladjana Marinkovic  &  Predrag Rajkovic  (2011):

e_h(x,y)   =   [ 1 + h x ]^{y/h}

Extending to   e_0(x,y)  =  e^{xy}   by continuity in the neighborhood of  h = 0.
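
A tiny Python sketch (not from the original article) of these two definitions, checking that the Tsallis q-exponential approaches the ordinary exponential as q → 1:

```python
import math

def e_two_var(h, x, y):
    """Two-variable deformed exponential:  e_h(x,y) = (1 + h x)^(y/h),  e_0(x,y) = exp(xy)."""
    return math.exp(x * y) if h == 0 else (1 + h * x) ** (y / h)

def tsallis_exp(q, x):
    """Tsallis q-exponential:  e_q(x) = [1 + (1-q) x]^(1/(1-q))  =  e_{1-q}(x, 1)."""
    return e_two_var(1 - q, x, 1)

for q in (0.5, 0.9, 0.99, 0.999):
    print(q, tsallis_exp(q, 1.0))   # tends to e = 2.71828... as q -> 1
```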

Tsallis statistics   |   Tsallis entropy   |   Tsallis distribution   |   q-Gaussian   |   Constantino Tsallis (1943-)


(2017-06-30)   Painlevé Transcendents
Solving the simplest non-linear differential equations.

A solution of a  linear  differential equation  can only have  fixed singularities  at points where the coefficients of the equation are singular.

On the other hand,  the solutions of a  nonlinear  differential equation may present other singularities which depend on the initial conditions.  Those are called  movable singularities  (also known as  spontaneous  or  internal ;  French:  singularité mobile).

A nonlinear ordinary differential equation is said to have the  Painlevé property  when the only movable singularities of its solutions are  poles  (no spontaneous  branch points or other essential singularities are allowed).  This class of equations was first investigated by  Sofia Kovalevskaya (1850-1891)  in 1888, ahead of the work of  Paul Painlevé.
Before he embarked on a high-profile political career  (serving twice as prime minister of France),  Paul Painlevé (1863-1933; ENS 1883)  was known as one of the best mathematicians of his generation.

 Come back later, we're
 still working on this one...

Painlevé-type equations   |   Movable singularity   |   Painlevé transcendents
 
Emile Picard (1856-1941)   |   Paul Painlevé (1863-1933)   |   Bertrand Gambier (1879-1954)
