It is widely known that users of floating point must watch out for rounding errors. For example, 1.0/3*3 == 1 evaluates to false in pretty much every modern programming language. More surprisingly for nonspecialists, so does 1.0/10*10 == 1.
However, there are floating point systems that at least seem to deal better with these issues. In particular, I tried both of the above tests in emulators of the Apple II and Commodore VIC-20, and each expression evaluated to true in each case. This is counterintuitive: very primitive systems seem to handle this better than much more advanced ones.
How did the old floating point systems get the desired answer in the above tests? Assuming modern IEEE floating point has good reason for doing otherwise, what does it gain in exchange? Or put another way, what was the problem that caused the old floating point systems to be abandoned even though they were capable of representing 1/10 and 1/3 without the most troublesome kind of rounding errors?
Edit: Simon Byrne correctly points out that the exact tests I list above do actually pass on IEEE floating point. I have no idea what mistake I made with those; I can't reproduce it. But here's one that does fail, tried in Python just now:
>>> 0.1+0.1+0.1 == 0.3
False
That exact test succeeds on the Apple II, so again, how did the old system get that result, and what was the tradeoff?
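For reference, here is a minimal sketch (assuming CPython, whose float is IEEE 754 binary64, and using the standard decimal module) showing the exact values the two sides of that comparison actually hold:

>>> from decimal import Decimal
>>> Decimal(0.1 + 0.1 + 0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(0.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')

The repeated addition produces the double one ulp above the double nearest to 0.3, so the comparison is between two adjacent but distinct doubles and correctly returns False.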
1.0/3*3 == 1 and 1.0/10*10 == 1 should both be true in any language/platform using IEEE 754 binary64, which is pretty ubiquitous these days.
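A quick sanity check in CPython (where float is IEEE 754 binary64) bears this out; only the repeated-addition test from the edit fails:

>>> 1.0/3*3 == 1
True
>>> 1.0/10*10 == 1
True
>>> 0.1 + 0.1 + 0.1 == 0.3
False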