
It is widely known that users of floating point must watch out for rounding errors. For example, 1.0/3*3 == 1 evaluates to false in pretty much every modern programming language. More surprisingly for nonspecialists, so does 1.0/10*10 == 1.

However, there are floating point systems that at least seem to deal better with these issues. In particular, I tried both of the above tests in emulators of the Apple II and Commodore Vic-20, and each expression evaluated to true in each case. This is counterintuitive: very primitive systems seem to be working better than much more advanced ones.

How did the old floating point systems get the desired answer in the above tests? Assuming modern IEEE floating point has good reason for doing otherwise, what does it gain in exchange? Or put another way, what was the problem that caused the old floating point systems to be abandoned even though they were capable of representing 1/10 and 1/3 without the most troublesome kind of rounding errors?

Edit: Simon Byrne correctly points out that the exact tests I list above do actually pass on IEEE floating point. I've no idea what mistake I made with those; I can't reproduce it. But here's one that fails, tried in Python just now:

>>> 0.1+0.1+0.1 == 0.3
False

That exact test succeeds on the Apple II, so again, how did the old system get that result, and what was the tradeoff?
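
For reference, printing both sides with a few extra digits (CPython floats are IEEE binary64) shows they round to two different values:

>>> format(0.1 + 0.1 + 0.1, ".17g")
'0.30000000000000004'
>>> format(0.3, ".17g")
'0.29999999999999999'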

  • Can't answer the historical portion, but you could store the number in a non-floating-point representation. Clojure has a ratio type that stores rational numbers as a fraction so they aren't rounded until absolutely necessary. Commented Jan 29, 2017 at 22:19
  • @Carcigenicate Right, but storing numbers as fractions really needs to be done in arbitrary precision if it is to be usefully predictable, which leads to CPU and memory demands that rule it out for most of the cases where we use floating point. Commented Jan 29, 2017 at 22:25
  • What language/platform are you using? 1.0/3*3 == 1 and 1.0/10*10 == 1 should both be true in any language/platform using IEEE754 binary64, which is pretty ubiquitous these days. Commented Jan 29, 2017 at 22:46
  • @SimonByrne You're right; I tried those just now and they pass. I can't reproduce the mistake I made, so I've edited the post with a test that definitely fails. Commented Jan 29, 2017 at 23:06
  • BCD floats used to be popular on 8-bit platforms which had no FPU anyway. Commented Jan 29, 2017 at 23:12

2 Answers


My guess is that they probably just got lucky for the particular example you happened to choose. For example, the statement is true in IEEE754 binary32 arithmetic:

>>> import numpy as np
>>> np.float32(0.1) + np.float32(0.1) + np.float32(0.1) == np.float32(0.3)
True

Based on this posting, the Apple II didn't provide hardware floating point, so the exact details were dependent on whatever the software provided (and it sounds as though different software provided different implementations). If they happened to use the same 24-bit significand (or another that gave similar results), then you would see the same answer.

UPDATE: this document seems to indicate that Applesoft BASIC did use a 24-bit significand (not 25, i.e. 24 plus an implicit 1, as the earlier link seemed to suggest), which would explain why you saw the same result as binary32 arithmetic.
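
As a rough cross-check without numpy, the same rounding can be emulated by forcing every intermediate result through a 24-bit significand, here by round-tripping values through the struct module's IEEE binary32 format (a sketch of the rounding behaviour only, not of how Applesoft BASIC actually computed):

import struct

def round_to_binary32(x):
    # Keep only a 24-bit significand by packing to 4-byte IEEE binary32 and back.
    return struct.unpack('<f', struct.pack('<f', x))[0]

tenth = round_to_binary32(0.1)
total = round_to_binary32(round_to_binary32(tenth + tenth) + tenth)
print(total == round_to_binary32(0.3))   # True, matching the binary32 result above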



Mathematically, it is impossible to find a base for a number system that can represent all fractions exactly, because there are infinitely many prime numbers. If you want to store fractions exactly, you have to store the numerator and denominator individually, which makes calculations more complex.
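
Python's fractions module is one readily available example of that numerator/denominator representation (shown purely as an illustration; it is not what the old BASICs did):

>>> from fractions import Fraction
>>> Fraction(1, 3) * 3 == 1
True
>>> Fraction(1, 10) * 10 == 1
True
>>> Fraction(1, 3) + Fraction(1, 7) + Fraction(1, 11)   # denominators grow quickly
Fraction(131, 231)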

If you store fractions as a single value, some operations will introduce small errors. If they are performed repeatedly, the errors will accumulate and could become noticeable.
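
For example (a binary64 sketch), repeatedly adding 0.1 drifts away from the exact answer:

total = 0.0
for _ in range(1000):
    total += 0.1          # each addition rounds to the nearest binary64 value

print(total == 100.0)     # False
print(total)              # close to, but not exactly, 100.0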

There are two ways around this problem:

  • If you can find a common denominator, scale all values by that and use integers. For example, use integer cents instead of floating point dollars (see the sketch after this list).

  • Round the numbers when appropriate. Often it is enough to print floating point numbers with one or two digits less than their full precision.
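
A short sketch of both ideas in Python (the prices are made-up values, used only for illustration):

# First idea: keep money as integer cents, so sums and comparisons are exact.
prices_in_cents = [1999, 549, 75]            # $19.99, $5.49, $0.75
total = sum(prices_in_cents)
print(total == 2623)                         # True: exactly $26.23
print(f"${total // 100}.{total % 100:02d}")  # $26.23

# Second idea: round before printing or comparing.
print(f"{0.1 + 0.1 + 0.1:.15g}")                      # 0.3
print(round(0.1 + 0.1 + 0.1, 10) == round(0.3, 10))   # True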

Testing floating point numbers for equality cannot be done the way it is done with integers. It is a bit like the difference between counting two groups of people to see whether they are the same size, and checking whether two bottles contain the same amount of milk. For the "milk" test you have to say by how much the amounts may differ and still be regarded as "equal".

The Apple II had no floating point hardware; it was its BASIC that provided floating point calculations. I guess they either included such an error bound for equality tests, or used base-10 numbers (BCD, see harold's comment).

4 Comments

And as per my reply to harold, BCD doesn't seem to be the explanation here. An error bound for equality tests might be a possible explanation, or some alternative rounding rule. I'm trying to figure out how to tell those apart.
My Apple II is older than IEEE 754 arithmetic. Even if it is using binary or hexadecimal floating point, the rounding rules are likely to be different, so the error cases will be different.
@PatriciaShanahan Yep, just so. Now in the one error case I know of that's different, the old system actually works better. But IEEE put a lot of thought into the new system. What error cases are there, in which IEEE works better, to justify the change?
In BCD, I guess 1/3 + 1/3 + 1/3 == 1 should fail. Please write an answer to your question if you find out how it worked ;-)
