Well, if the value 0.0200 comes from rounding, it's actually between 0.01995 and 0.02005. So the result is between 1.047008 and 1.047249. Stating the result as 1.0471 gives the impression that the true value is between 1.04705 and 1.04715. This is slightly too precise (by a factor of 2) but that's not grossly misleading (so, it's OK in my book). The alternative would be to state the result as 1.047, which is too coarse (by a factor of 5).
If you only rely on significant figures to express the precision of your results, you're always faced with a similar choice between two different levels of precision that differ from each other by a factor of 10. Just choose the lesser of two evils, knowing that you will occasionally have to misrepresent the precision of your result by a factor of 3 (or a bit more).
Such unsatisfying limitations can't be circumvented within the "significant figures" scheme. When the precision of a result has to be stated more rigorously, it's best to give either its upper and lower bounds (at a 99% confidence level) or to indicate an estimate of the standard deviation (as a two-digit number between parentheses after the least significant digit, as discussed in the next article).
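The half-unit interval that a rounded decimal stands for can be computed mechanically. This is a minimal Python sketch (the helper name implied_interval is our own):

```python
# A rounded decimal like "0.0200" stands for anything within half
# a unit of its last digit.  Decimal keeps trailing zeros, so the
# position of that last digit is recoverable from the exponent.
from decimal import Decimal

def implied_interval(s):
    d = Decimal(s)
    half = Decimal(1).scaleb(d.as_tuple().exponent) / 2
    return d - half, d + half

print(implied_interval("0.0200"))   # from 0.01995 to 0.02005
```

Note that the string "0.0200" must be passed as text: converting it to a float first would destroy the trailing zeros that encode the claimed precision.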
In the second example, -log(0.001178) may denote any value between 2.92867 and 2.92904 (since the rounded input 0.001178 stands for anything between 0.0011775 and 0.0011785).
Interestingly, logarithms are the quintessential example of a case where the number of significant figures in the result is not directly related to the number of significant figures of the input data. In the following pathological example, the input has only 3 significant figures but the result does have 9 significant figures:
log ( 7.89 × 10^123456 )   =   123456.897
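This can be checked with machine arithmetic (a hedged sketch: ordinary floats cannot hold a number like 10^123456, but Python's Decimal type can take its logarithm directly):

```python
# log10 of 7.89e123456:  the exponent supplies the integer part
# exactly, so the 3-digit mantissa only limits the precision of
# the fractional part -- yet the result carries 9 correct digits.
from decimal import Decimal, getcontext

getcontext().prec = 12
result = Decimal("7.89E+123456").log10()
print(result)   # 123456.897077
```

The integer part 123456 is exact; the 3 significant figures of the mantissa 7.89 govern only the digits after the decimal point.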
Use strict inequalities to indicate the rounded value is a true bound.
Strict inequalities are easy: x < 1.5 and x < 3/2 are equivalent.
When non-strict inequalities are used with rounded numbers, they acquire completely different meanings, similar to the meaning acquired by equalities in that case... What such a statement asserts is that a strict inequality holds for the tightest different bound expressible at the same level of precision. For example, x ≤ 1.5 means that x < 1.6.
The former is more intuitive than the latter, as it gives the best acceptable value at the relevant precision. This is familiar to old-school engineers, but others may struggle when first confronted with it, especially in tables.
Standard deviation expresses the uncertainty or precision of a result.
In many cases, the above rules concerning significant digits are too coarse to convey a good indication of the claimed precision.
Professionals state the accuracy of their figures by giving the uncertainty expressed in units of the last figure between parentheses (see examples).
Technically, this uncertainty is expressed either as the relevant standard deviation or as 1/3 of the "firm" bounds you may have on either side of the mean (both definitions are equivalent if we identify "firm bounds" with the 99.73% confidence level in a normal Gaussian distribution).
Straight rounding errors are not at all "normally distributed" along a Gaussian curve. Instead, the error is uniformly distributed over an interval whose width is equal to one unit of the least significant digit retained. This entails a standard deviation of 1/√12 ≈ 0.29 in terms of that unit.
In our previous example of a product of three rounded values, what we have to determine is the standard deviation of the following random variable:
( 2.9 + 0.1 X ) ( 3.5 + 0.1 Y ) ( 10.0 + 0.1 Z )
Where X, Y and Z are independent random variables, each uniformly distributed between -½ and +½. The average (mathematical expectation) of that random variable is 101.5 and its standard deviation is 1.3444711... (HINT: this involves averaging the square of the above inside a cube of unit volume).
Thus, our product can be expressed with standardized precision as
101.50(134) or 101.5(13)
The latter form is the more common one, since standardized precision is most often expressed with 2 significant digits (3 digits is overkill).
A close examination reveals that some authors round uncertainties upward systematically. This practice presumably stems from a dubious concern of never claiming too much precision. That misplaced modesty is unscientific. Uncertainty should be treated like any other quantity and be quoted at its allotted (2-digit) level of precision. It would be a mistake to give the above result as 101.5(14).
Stating a nonzero number as a multiple of a power of 1000.
Engineering notation is superficially similar to scientific notation, except that the exponent of 10 is restricted to a multiple of 3 (thus, the relevant power of 10 is actually a power of 1000). For this to be possible in all cases, the coefficient is allowed to go from 1 (included) to 1000 (excluded).
Because there may be trailing zeros before the decimal point in engineering notation, the number of significant digits is not always clear. This is the main reason why the systematic use of the engineering notation is strongly discouraged in print, unless accuracy is stated with the above convention.
By extension, we also call engineering notation any system resembling scientific notation where the absolute magnitude of the coefficient is not restricted to the 1-10 range (it could, occasionally, be more than 1000 or less than 1). Lists of results spanning several orders of magnitude are sometimes more readable this way, since we can merely compare coefficients when the order of magnitude (a power of 10) remains constant.
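A converter into strict engineering notation (coefficient in [1, 1000), exponent a multiple of 3) can be sketched as follows; the function name eng and the output format are our own choices:

```python
# Engineering notation: mantissa in [1, 1000), exponent a multiple of 3.
# Caveat: rounding at the 1000 boundary (e.g. 999.99 displayed with
# 3 digits) is deliberately not handled in this minimal sketch.
import math

def eng(x, sig=3):
    if x == 0:
        return "0"
    exp3 = 3 * math.floor(math.log10(abs(x)) / 3)
    mantissa = x / 10 ** exp3
    return f"{mantissa:.{sig}g}e{exp3:+d}"

print(eng(0.0314))    # 31.4e-3
print(eng(123456))    # 123e+3
```

The floor division of the decimal exponent by 3 is what forces the power of 10 to be a power of 1000.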
Alternative approaches for robust solutions of quadratic equations.
Because the hyperbolic sine function (sh) of a real argument may take all real values, any normalized quadratic equation with a negative constant term can be recast into the following form:
x² + 2 a x sh q - a² = 0
Its two real solutions are then given by the following robust formulas:
-a exp(q)   and   a exp(-q)
Using the inverse hyperbolic function Argsh to obtain q, if need be, will never entail any loss of floating-point precision...
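In floating-point terms, the recipe can be sketched as follows (the function name is ours): for x² + bx + c = 0 with c < 0, put a = √(-c) and q = Argsh( b / 2a ).

```python
# Robust real roots of  x^2 + b*x + c = 0  when  c < 0,
# via the substitution  c = -a^2,  b = 2*a*sh(q)  from the text.
# The roots  -a*exp(q)  and  a*exp(-q)  involve no subtraction of
# nearly equal quantities, so neither root loses precision.
import math

def roots_neg_c(b, c):
    assert c < 0, "this recipe needs a negative constant term"
    a = math.sqrt(-c)
    q = math.asinh(b / (2 * a))
    return -a * math.exp(q), a * math.exp(-q)

r1, r2 = roots_neg_c(1.0e8, -1.0)
print(r1, r2)   # the tiny root comes out near 1e-08, not 0.0
```

With b = 1e8, the textbook formula (-b + √(b²-4c))/2 returns exactly 0.0 for the small root in double precision, while the hyperbolic form keeps its full accuracy.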
This transformation is also helpful to express with pretty formulas the solutions to some elementary problems in mathematical physics.
Other similar techniques can be found in the following section, including the robust expression for a difference of square roots (which is an alternative way to deal numerically with delicate solutions of some quadratic equations).
How to avoid subtracting nearly equal quantities.
In what follows, the number x need not be small, but it may well be...
In each of the examples below, a floating-point computation of the left-hand side suffers an unacceptable loss of precision when x is small. The substitute given on the right-hand side should be used instead: it is mathematically identical but won't produce nonsensical results in floating-point arithmetic.
Square Root :
√(a+x) - √a  =  x / [ √(a+x) + √a ]
Exponential :
e^(a+x) - e^a  =  2 sh(x/2) e^(a+x/2)
Cosine :
1 - cos x  =  2 sin² (x/2)
Inverse of Hyperbolic Tangent (cf. relativistic rapidity) :
Argth (a+x) - Argth (a) = Argth ( x / [1-a(a+x)] )
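The benefit is easy to observe in double precision. Here is a small sketch for the square-root identity (the values of a and x are chosen only to make x negligible next to a):

```python
# Naive left-hand side vs robust right-hand side of
#   sqrt(a+x) - sqrt(a)  =  x / [ sqrt(a+x) + sqrt(a) ]
# With a = 1e8 and x = 1e-9, the sum a + x rounds back to a in
# double precision, so the naive difference collapses to 0.0,
# while the rewritten form keeps the true value, about 5e-14.
import math

a, x = 1.0e8, 1.0e-9
naive = math.sqrt(a + x) - math.sqrt(a)
robust = x / (math.sqrt(a + x) + math.sqrt(a))
print(naive, robust)   # 0.0 versus ~5e-14
```

The same pattern applies to the other identities above: each robust form trades a subtraction of nearly equal quantities for a benign addition, multiplication, or a well-conditioned function call.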