Thursday, August 1, 2013

Numerical Analysis: Size Matters

A pure mathematician, an applied mathematician and an accountant all apply for the same job on Wall Street, and during their interviews they are all asked the same question: "What is the value of $(1-1/N)\times N + 1 - N$?" The pure mathematician replies "Zero, of course!" The applied mathematician asks "How large is $N$?" The accountant asks "What would you like it to be?" The accountant gets the job ...

... but the applied mathematician is on the right track, especially in an era where much research and virtually all application of mathematics is done on digital computers. I've been meaning to do some posts on numerical analysis for a while now, motivated in part by a large number of questions appearing on optimization support forums that indicate the topic is foreign to a sizable number of people using mathematical programming software. (The poster child for questions of this type: "Why did solver XXX say that the optimal value of my variable is 0.999999823 when it is a 0-1 variable??")

I was a math major end to end. In my first quarter of graduate school, I took a course on numerical analysis -- I can't recall now why, as I was a "pure" mathematician in those days -- and it was a real eye-opener. Identities that hold in symbolic mathematics (including "chalkboard arithmetic") don't necessarily survive the conversion to a digital medium. I have no intention of trying to compress that first course in numerical analysis into a blog post, or even a series of posts, but perhaps an example or two will at least get the attention of some people who previously had not heard of it. The numerical analysis course turned out to be one of the most valuable math courses I took (at any level), and I invariably recommend it whenever someone asks "What courses should I take to prepare for a career in Operations Research?"

I tested the formula in the above joke in a small Java program, using double-precision arithmetic, and the reported answer was zero no matter how large a value of $N$ I tried. This might or might not be a consequence of compiler optimizations -- the compiler doing the arithmetic in a different order from the one I specified. So I'll fall back on an example from that graduate course: computing a partial sum of the (divergent) harmonic series $\sum_{n=1}^\infty 1/n$.
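A minimal sketch of that sort of test (illustrative only; neither the exact program nor the exact values of $N$ that I used):

// Evaluate (1 - 1/N)*N + 1 - N in double precision for increasingly large N.
for (long n = 10; n <= 1000000000000L; n *= 10) {
  double value = (1.0 - 1.0 / n) * n + 1.0 - n;
  System.out.println("N = " + n + ", value = " + value);
}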

One of the first things anyone learns about math, at an early age, is that addition is commutative and associative ... except, as it turns out, when a computer is involved. (As a commenter notes below, it is associativity, not commutativity, that breaks down.) Here is a small Java program that computes $\sum_{n=1}^N 1/n$ twice, in one case summing in ascending order of $n$ ($1 + 1/2 + 1/3 + \dots + 1/N$) and in the other case summing in descending order of $n$ ($1/N + 1/(N-1) + \dots + 1$). I start with $N=1000$ and double $N$ each time through the main loop. To save myself repetition, let me stipulate that henceforth "large" and "small" refer to the magnitudes of numbers, without regard for their sign. If the order in which the terms are added made no difference on the computer, the difference between the ascending and descending sums would be zero.

for (int limit = 1000; limit <= 1024000; limit *= 2) {
  double up = 0;    // partial sum, adding terms in ascending order of n
  double down = 0;  // partial sum, adding terms in descending order of n
  for (int n = 1; n <= limit; n++) {
    up += 1.0/n;
  }
  for (int n = limit; n >= 1; n--) {
    down += 1.0/n;
  }
  // In exact arithmetic the two sums would be identical and diff would be zero.
  double diff = up - down;
  System.out.println("Limit = " + limit + ", up = " + up + ", down = " 
                     + down + ", diff = " + diff);
}

Here is the output it generated:

N        1 + ... + 1/N       1/N + ... + 1        Difference
1000     7.485470860550343   7.485470860550341     2.6645352591003757E-15
2000     8.178368103610284   8.178368103610278     5.329070518200751E-15
4000     8.871390299795198   8.871390299795223    -2.4868995751603507E-14
8000     9.564474984261391   9.564474984261425    -3.375077994860476E-14
16000    10.257590915797937  10.257590915797914    2.3092638912203256E-14
32000    10.95072247160203   10.950722471602038   -8.881784197001252E-15
64000    11.64386183972301   11.643861839723002    8.881784197001252E-15
128000   12.337005114048072  12.337005114048194   -1.2256862191861728E-13
256000   13.030150341487087  13.030150341486952    1.3500311979441904E-13
512000   13.72329654548546   13.723296545485356    1.0480505352461478E-13
1024000  14.41644323776362   14.416443237764337   -7.176481631177012E-13

The first takeaway is that we're not seeing a difference of zero, ever. (Actually, for $N$ up to about 40, the reported difference is zero.) The differences are small, but not zero. There is no obvious pattern to their sign, nor is their size changing monotonically, but they do seem generally to be growing with $N$, which among other things means that with a really large value of $N$ we could get a seriously inaccurate answer (at least in absolute, rather than relative, terms).

Why does this happen? The short answer is that computers store non-integral numbers only to a specific finite precision. That creates "truncation errors" in the representations of the numbers (0.1 as stored in a computer is not exactly 1/10) and "rounding errors" when arithmetic is performed. In particular, adding or subtracting a large number and a small number can produce rounding errors. In the extreme, the small number is smaller than the precision at which the large number is stored, and so the sum or difference of the two is identical to the large number as far as the computer is concerned. That sort of error is more likely to occur when $n$ is ascending (we are adding progressively smaller terms $1/n$ to progressively larger partial sums) than when $n$ is descending (at the outset, we add small numbers to small partial sums, and eventually larger numbers to larger sums); so I am inclined to trust the second set of partial sums more than I do the first. Neither, though, is exactly correct.
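A couple of small Java demonstrations of both effects (the constants are chosen purely for illustration):

// 0.1 and 0.2 are not stored exactly, so even a single addition shows the drift.
System.out.println(0.1 + 0.2);              // prints 0.30000000000000004, not 0.3

// Adding a small number to a sufficiently large one can lose it entirely:
// near 1.0e16, consecutive double values are 2.0 apart.
double big = 1.0e16;
double small = 1.0;
System.out.println((big + small) == big);   // prints true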

You might not be too excited about the differences in the table -- even at $N=1,024,000$ the difference is in the thirteenth decimal place -- but consider the following coding error, probably made by every rookie programmer who tackles a mathematical computation: testing whether the difference of two floating-point (non-integer) numbers equals zero when they should be testing whether it is close enough to zero. Many a program has entered an infinite loop thanks to a test for exact equality and a smidgen of rounding error.
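For the record, the safer pattern looks something like this (the tolerance of 1.0e-9 is an arbitrary illustration and should be scaled to the problem at hand):

// Risky: looping until a floating-point value exactly equals a target.
// Because 0.1 is not exactly representable, x never hits 1.0 and the loop never ends:
// for (double x = 0.0; x != 1.0; x += 0.1) { ... }

// Safer: test for "close enough" rather than exact equality.
double tolerance = 1.0e-9;   // problem-dependent choice
double up = 0.0, down = 0.0;
for (int n = 1; n <= 1000; n++) { up += 1.0 / n; }
for (int n = 1000; n >= 1; n--) { down += 1.0 / n; }
if (Math.abs(up - down) <= tolerance) {
  System.out.println("equal, for practical purposes");
} else {
  System.out.println("genuinely different");
}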

Update: John D. Cook has a related post in a different context (probability and statistics).

10 comments:

  1. Some additional links for readers who may not know what consequences such errors can have:

    http://en.wikipedia.org/wiki/Ariane_5_Flight_501#Launch_failure
    http://en.wikipedia.org/wiki/MIM-104_Patriot#Failure_at_Dhahran

    Best regards,
    Thomas

    1. Thomas, thanks for the links. I was unaware of either example.

  2. One minor error in your posting -- it's not that floating point addition isn't commutative (it is), but rather that it isn't associative. That is, a+b and b+a will always give you the same answer, but (a+b)+c and a+(b+c) may give different answers due to rounding of the intermediate result.
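    For example, in double precision (Java shown, but any IEEE 754 language behaves the same way):

    double left  = (0.1 + 0.2) + 0.3;    // 0.6000000000000001
    double right = 0.1 + (0.2 + 0.3);    // 0.6
    System.out.println(left == right);   // prints false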

    1. Oops! You are of course correct. (Just heard a knock on the door. I think they've come to rescind my math degrees.)

  3. One of the areas of OR where this might matter is concerned with steady state probabilities for queue models. Solving the equations in many cases leads to a recurrence relation p_{n+1}=(some expression)*p_n, and then an explicit formula for p_0.

    However, for multiple-server queues with limited waiting room, p_0 may be very small, and this makes the recurrence subject to numerical errors of the kind discussed. I found that a good way round the problem was to find the N such that p_N is maximal, calculate that term explicitly, and then use the recurrence to find the other steady state probabilities.
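    To illustrate, for an M/M/c/K-type model (c servers, system capacity K) the recurrence and the "start from the largest term" idea look roughly like this -- the rates, server count and capacity below are purely illustrative:

    // Birth-death recurrence for M/M/c/K: p_{n+1} = lambda / (min(n+1, c) * mu) * p_n.
    // Anchor the recurrence at the largest p_n instead of at the possibly tiny p_0.
    double lambda = 90.0, mu = 1.0;      // illustrative arrival and service rates
    int c = 100, K = 500;                // illustrative server count and capacity

    int m = 0;                           // index of the largest (unnormalized) p_n:
    while (m < K && lambda / (Math.min(m + 1, c) * mu) >= 1.0) {
      m++;                               // p grows as long as the one-step ratio >= 1
    }

    double[] p = new double[K + 1];
    p[m] = 1.0;                          // unnormalized anchor at the peak
    for (int n = m; n < K; n++) {        // recur upward from the peak
      p[n + 1] = p[n] * lambda / (Math.min(n + 1, c) * mu);
    }
    for (int n = m; n > 0; n--) {        // recur downward from the peak
      p[n - 1] = p[n] * Math.min(n, c) * mu / lambda;
    }

    double total = 0.0;
    for (double q : p) { total += q; }   // normalize so the probabilities sum to 1
    for (int n = 0; n <= K; n++) { p[n] /= total; }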

    1. David: Thank you for the comment. I taught those recurrence relations in a previous life (not that I recall them precisely), but had very limited experience with applying them, so it had not occurred to me that rounding errors could seriously degrade the results. Working outward from the largest probability is an interesting idea. I'm not sure it would have occurred to me.

  4. There is a technique called CESTAC, based on stochastic arithmetic, for estimating the effect of rounding errors.
    The idea is that the IEEE 754 standard lets you control the rounding mode of modern processors, so you do each computation 3 times (rounding down, rounding to nearest, rounding up), which gives you a confidence interval. Once that's done you can propagate the confidence interval to get information about how precise your computation really is.

    In your example it would give you something like (x - y) = 0 +/- 10^-12.
    Then you can see when computations are unstable and use a better algorithm (or live with it...)

    http://www-pequan.lip6.fr/cadna/Cadna_Dir/What_is_CADNA.php
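    Without the library, the flavor of the method can be mimicked crudely in Java. CADNA actually switches the processor's IEEE 754 rounding mode; the sketch below just perturbs each intermediate result by one unit in the last place, chosen at random, and looks at the spread across a few runs:

    // Crude imitation of stochastic arithmetic (NOT the real CADNA implementation).
    java.util.Random rng = new java.util.Random();
    int samples = 3, limit = 1024000;
    double mean = 0.0, low = Double.POSITIVE_INFINITY, high = Double.NEGATIVE_INFINITY;
    for (int s = 0; s < samples; s++) {
      double sum = 0.0;
      for (int n = 1; n <= limit; n++) {
        double t = sum + 1.0 / n;
        sum = rng.nextBoolean() ? t + Math.ulp(t) : t - Math.ulp(t);  // randomized "rounding"
      }
      mean += sum / samples;
      low = Math.min(low, sum);
      high = Math.max(high, sum);
    }
    System.out.println("sum = " + mean + " +/- " + (high - low) / 2);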

    1. Thanks for the comment (and link!). I vaguely recall some mention of "interval arithmetic" when I took numerical analysis (while basking in the warmth from the mainframe's vacuum tubes). IIRC, propagating the intervals seemed to quickly achieve results along the lines of 0.0 +/- infinity; but precisions were quite a bit lower then and the number of numerical operations in any given chain (from input value to final result) may not have grown commensurately, so propagation may not hemorrhage accuracy quite so fast these days.

  5. That is correct: "exact" interval arithmetic is too pessimistic and very soon reaches a ]-inf..+inf[ confidence interval. The difference with stochastic (interval) arithmetic is that the computations are randomized -- that wasn't obvious from my explanation. As a result you get a genuinely probabilistic confidence interval, meaning the result is in [a,b] with some stated confidence, e.g. 95%. The more times you repeat each computation with randomized rounding modes, the more precise your interval becomes. With 3 computations you can get around 90%, if I remember correctly, and to gain a few percentage points more you have to increase the number of samples very significantly, which makes the method impractical.

    1. I see now. That explains the "stochastic" part of "stochastic arithmetic". I had thought it was a reference to how my former students did computations. :-)

