I keep running into a floating point arithmetic error in Python that I can’t seem to figure out.
Problem: I need to create a set of weights that sums to exactly 1, not, for example, 0.99999999999999.
As an example, the following code:
import numpy

values = numpy.array([9626.40000000034, 0., 0., 0., 0., 0.,
                      0., 0., 36907.300000000000054])
weights = values / values.sum()
weights.sum()
yields:
0.99999999999999989
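For what it's worth, inspecting the exact value the float stores shows it really is just below 1 (this is a diagnostic I added; a Decimal built directly from a float, rather than from a string, shows its full binary expansion):

from decimal import Decimal
# a Decimal constructed from a float displays the exact stored binary value
print(Decimal(float(weights.sum())))   # a long expansion just under 1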
I want it to be exactly 1. I have tried multiplying by 1000, converting to a string (to cut off the extra precision), converting back to float, and dividing by 1000. It doesn't work.
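Roughly, that attempt looked like this (a reconstruction; the number of digits I kept in the string step may have differed):

scaled = weights * 1000
# cut the string representation to a fixed number of digits, then go back to float
truncated = numpy.array([float(f'{x:.10f}') for x in scaled]) / 1000
truncated.sum()   # still comes out slightly off from 1.0

I have also tried using Decimal: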
from decimal import Decimal, getcontext

getcontext().prec = 3          # three significant digits for Decimal arithmetic
string_weight = []
float_weight = []
for number in weights:
    # round-trip each weight through Decimal...
    string_weight.append(Decimal(str(number)))
for string in string_weight:
    # ...and back to float
    float_weight.append(float(string))
fuel_weights = numpy.array(float_weight)
fuel_weights.sum()
The answer is:
1.0009999999999999
That is not what I want. I just want a simple “1.0”.
A sys.version report gives:
3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
I'm working on macOS Catalina.
As @halfer suggested in the comments, I'll add a bounty.