I'm having the following numerical issue while using SCIP solver as a callable library via PySCIPOpt:
two equivalent and almost identical models for the same problem yield different optimal values and optimal solutions, with optimal values having a relative difference of order 1e-6
An independent piece of software verified that the solutions yielded by both models are feasible for the original problem and that their true objective values agree with the optimal values reported by SCIP for each model. Before this instance appeared, many instances of the same problem had been solved, with both models always agreeing on their optimal solutions and values.
Is it possible to modify the numerical precision SCIP uses when comparing primal solution values among themselves and against dual bounds? Which parameters should be modified to overcome this numerical difficulty?
What I tried
I've tried the following things and the problem persisted:
- Turning presolving off with `model.setPresolve(SCIP_PARAMSETTING.OFF)`.
- Setting `model.setParam('numerics/feastol', epsilon)` with different values of `epsilon`.
Since the feasible sets and objective functions agree (see the description below), I've also checked that the actual coefficients of the objective functions agree, by calling `model.getObjective()` on each model and comparing the coefficient of every variable appearing in the objective function for equality.
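The comparison itself was essentially this (a sketch in plain Python, assuming the coefficients have already been extracted into dictionaries keyed by variable name; `tol=0.0` makes it an exact comparison):

```python
import math

def coefficients_match(obj_a, obj_b, tol=0.0):
    """Compare two objective-coefficient dicts keyed by variable name.

    With tol=0.0 this is exact equality; a positive tol allows a
    relative/absolute tolerance via math.isclose.
    """
    if obj_a.keys() != obj_b.keys():
        return False
    return all(
        math.isclose(obj_a[v], obj_b[v], rel_tol=tol, abs_tol=tol)
        for v in obj_a
    )

# Identical coefficient dicts compare equal
a = {"X0": 1.5, "X1": 2.0}
b = {"X0": 1.5, "X1": 2.0}
print(coefficients_match(a, b))  # True
```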
The only thing that seemed to help was adding some noise to the coefficients in the objective function of the model yielding the worse solution (multiplying them by numbers of the form `1 + eps` with small `eps`). This makes SCIP yield the same (and the better) solution for both models, provided `eps` is within a certain range.
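The perturbation was along these lines (a sketch; the `eps` value and the dict representation of the coefficients are illustrative):

```python
import random

def perturb(coeffs, eps=1e-7, seed=0):
    """Multiply each objective coefficient by 1 + r*eps with r in [-1, 1],
    i.e. a relative perturbation of magnitude at most eps."""
    rng = random.Random(seed)
    return {
        var: c * (1.0 + rng.uniform(-1.0, 1.0) * eps)
        for var, c in coeffs.items()
    }
```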
Just in case, this is what I get with `scip -v` in the terminal:
SCIP version 6.0.2 [precision: 8 byte] [memory: block] [mode: optimized] [LP solver: SoPlex 4.0.2] [GitHash: e639a0059d]
Description of the models
Model (I) has approximately 30K binary variables, say `X[i]` for `i` in some index set `I`. It's feasible and not a hard MIP.
Model (II) has the same variables as model (I) plus ~100 continuous variables, say `Y[j]` for `j` in some index set `J`. Model (II) also has some constraints of the form `X[i_1] + X[i_2] + ... + X[i_n] <= Y[j]`.
Both objective functions agree and depend only on the `X[i]` variables; the sense is minimization. Note that the `Y[j]` variables are essentially free in model (II), since they are continuous and do not appear in the objective function. Obviously, there is no point in including the `Y[j]` variables, but the optimal values shouldn't differ.
Model (II) is the one yielding the better (i.e. smaller) value.