
I need pulp or pyomo to perform ALL arithmetic operations exclusively in float32. It seems that they default to float64.

The following are two examples of the sum operation, one in pulp and one in pyomo.

import pulp
import pyomo.environ as pyo
import numpy as np


def pulp_example(values, target_float):
    variables = pulp.LpVariable.dicts("x", range(len(values)), 0, 1, pulp.LpBinary)
    problem = pulp.LpProblem("Problem", pulp.LpMinimize)
    problem += 0
    problem += pulp.lpSum([variables[i] for i in range(len(values))]) == 2
    sum_exp = pulp.lpSum([variables[i] * float(values[i]) for i in range(len(values))])
    problem += sum_exp == target_float
    status = problem.solve()
    return status


def pyomo_example(values, target_float):
    model = pyo.ConcreteModel()
    model.I = pyo.RangeSet(0, len(values) - 1)
    model.x = pyo.Var(model.I, within=pyo.Binary)
    model.selection_constraint = pyo.Constraint(expr=sum(model.x[i] for i in model.I) == 2)

    def sum_expr_rule(model):
        return sum(model.x[i] * values[i] for i in model.I)
    model.eq_expr = pyo.Expression(rule=sum_expr_rule)
    model.constraint = pyo.Constraint(expr=model.eq_expr == target_float)
    model.objective = pyo.Objective(expr=0, sense=pyo.minimize)
    solver = pyo.SolverFactory('cbc')
    result = solver.solve(model, tee=False)
    return result.solver.status


input_values = [90.2743351459503174, 4.8144315292034]

# As expected, summation of same input values results in different outputs depending on float data type
float32_target = 95.0887680053711  # float32 arithmetic
float64_target = 95.08876667515372  # float64 arithmetic
assert np.sum([input_values], dtype=np.float32) == float32_target
assert np.sum([input_values], dtype=np.float64) == float64_target
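The float32 target can also be reproduced outside np.sum by rounding after every single addition, which is exactly the per-operation behavior the question asks the modeling layers to adopt. A minimal sketch using numpy scalar casts:

```python
import numpy as np

vals = [90.2743351459503174, 4.8144315292034]

# Accumulate in float32: each input is cast to float32 and every
# intermediate sum is rounded back to float32.
s32 = np.float32(0.0)
for v in vals:
    s32 = np.float32(s32 + np.float32(v))

s64 = sum(vals)  # plain Python accumulation in float64

# The two accumulations disagree around the 8th significant digit.
assert float(s32) != s64
```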


pulp_res = pulp_example(input_values, float32_target)  # 'Infeasible'
pyomo_res = pyomo_example(input_values, float32_target)  # 'warning'

pulp_res = pulp_example(input_values, float64_target)  # 'Optimal'
pyomo_res = pyomo_example(input_values, float64_target)  # 'ok'
  • This seems a misapplication of linear programming. Neither pulp nor pyomo "does the math". They formulate a problem and turn it over to a solver, in this case CBC. Solvers have tolerances for all kinds of things, and you appear to be seeking floating-point accuracy beyond 7 significant digits, which isn't really the forte of this approach.
  • "Seeking floating point accuracy beyond 7 significant digits": where did I say that? As the question says, I want exactly the float32 data type to be used for the calculation. This is not a misapplication; it literally solves a problem very efficiently that would otherwise have taken years to solve combinatorially if len(input_values) >= 100.
  • Perhaps I misread on accuracy; 32-bit is typically ~7 significant figures. I'm not sure you can control the numerical precision of any solver as you intend to. I'd suggest using a small delta around the summation, maybe ±1e-5, if that is acceptable.
  • And if you are considering using a solver to pick an arbitrary number of floating-point numbers from a large bag to make an exact summation with a MIP, that is a misapplication and isn't going to work well unless there are some clear "winning moves" the solve process can capitalize on, lest it revert to somewhat random sampling.
  • If it reduces the calculation time from a million years to 0.1 second, it's not a misapplication; in fact it IS a very good application. It works exactly right on float64, as you can see in the example code. There's no delta needed; it must be exact, as it is for float64, just for float32 instead.
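The ±delta workaround suggested in the comments can be sketched as a pair of inequality constraints replacing the exact equality. This is a hedged sketch, not the asker's method; the delta value and the PULP_CBC_CMD options are illustrative assumptions:

```python
import pulp


def pulp_with_tolerance(values, target, delta=1e-5):
    """Accept any selection whose sum lies in [target - delta, target + delta]."""
    x = pulp.LpVariable.dicts("x", range(len(values)), 0, 1, pulp.LpBinary)
    problem = pulp.LpProblem("Problem", pulp.LpMinimize)
    problem += 0  # feasibility problem: constant objective
    problem += pulp.lpSum(x[i] for i in range(len(values))) == 2
    total = pulp.lpSum(x[i] * float(values[i]) for i in range(len(values)))
    problem += total >= target - delta  # lower edge of the band
    problem += total <= target + delta  # upper edge of the band
    problem.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.LpStatus[problem.status]
```

With this relaxation, the float32 target above (about 1.3e-6 away from the float64 sum) becomes feasible for the float64-based solver.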

1 Answer

There is no objective, so this is an exactly-determined linear system with this solution:

import numpy as np

input_values = np.array((90.2743351459503174, 4.8144315292034), dtype=np.float32)
target = np.float32(95.08876667515372)
a = np.array(
    [
        [1, 1],
        input_values,
    ],
    dtype=np.float32,
)
b = np.array((2, target), dtype=np.float32)
sol = np.linalg.solve(a, b)

print(a, '@', sol)
print('=', b)
print('with', sol.dtype, 'residuals of', a@sol - b)
residual_64 = a.astype(np.float64) @ sol.astype(np.float64) - b.astype(np.float64)
print('and', residual_64.dtype, 'residuals of', residual_64)

if not (residual_64 == 0).all():
    print('IT IS PHYSICALLY IMPOSSIBLE FOR THERE TO BE AN EXACT SOLUTION')

Output:
[[ 1.         1.       ]
 [90.27434    4.8144317]] @ [1. 1.]
= [ 2.      95.08877]
with float32 residuals of [0. 0.]
and float64 residuals of [0.00000000e+00 1.43051147e-06]
IT IS PHYSICALLY IMPOSSIBLE FOR THERE TO BE AN EXACT SOLUTION

2 Comments

This code uses numpy instead of pulp or pyomo, but more importantly: how would it scale with a list of 100+ values? Not easily, right? In my example code, input_values can have as many values as needed and the code would work exactly the same.
Any system (such as the one that you wrote) that is solvable exactly via solve() or under/over-determined via lstsq() will be more scalable than a linear programming approach.
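The lstsq suggestion can be sketched as follows. Note the caveat: np.linalg.lstsq drops the binary restriction entirely and returns a continuous minimum-norm solution, so it answers a related but different question. The sizes and random data here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.uniform(1.0, 100.0, size=100).astype(np.float32)

# 2 equations, 100 unknowns: weights must sum to 2 and the weighted
# total must hit the target. Under-determined, so solve() no longer applies.
a = np.vstack([np.ones_like(values), values])
b = np.array([2.0, values[0] + values[1]], dtype=np.float32)

# Minimum-norm least-squares solution; the entries of sol are
# continuous, not binary 0/1 selections.
sol, residuals, rank, sv = np.linalg.lstsq(a, b, rcond=None)
assert np.allclose(a @ sol, b, rtol=1e-4)
```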
