
I'm trying to minimize a function of N parameters (e.g. x[1], x[2], x[3], ..., x[N]) where the bounds for the minimization depend on the minimized parameters themselves. For instance, assume that all values of x can vary between 0 and 1 in such a way that summing them all gives 1. Then I have the following inequalities for the bounds:

0 <= x[1] <= 1
x[1] <= x[2] <= 1 - x[1]
x[2] <= x[3] <= 1-x[1]-x[2]
...
x[N-1] <= x[N] <= 1-x[1]-x[2]-x[3]-...-x[N-1]

Does anyone have an idea of how I can construct an algorithm like that in Python? Or maybe I can adapt an existing method from SciPy, for example?

1 Answer


As a rule of thumb: as soon as your boundaries depend on the optimization variables, they are inequality constraints rather than simple bounds. Using 0-based indices, your inequalities can be written as

# left-hand sides
        -x[0] <= 0
x[i] - x[i+1] <= 0 for all i = 0, ..., N-2

# right-hand sides
sum(x[i], i = 0, ..., j) - 1 <= 0 for all j = 0, ..., N-1

Both can be expressed by a simple matrix-vector product:

import numpy as np

# N is the number of optimization variables
# D_lhs @ x yields [-x[0], x[0]-x[1], ..., x[N-2]-x[N-1]]
D_lhs = np.diag(np.ones(N-1), k=-1) - np.diag(np.ones(N))
# D_rhs @ x yields the partial sums x[0], x[0]+x[1], ..., x[0]+...+x[N-1]
D_rhs = np.tril(np.ones(N))

def lhs(x):
    return D_lhs @ x

def rhs(x):
    return D_rhs @ x - np.ones(x.size)
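
For a quick sanity check, this is what the two matrices give for a small example (N = 3 is just an illustrative choice):

import numpy as np

N = 3
D_lhs = np.diag(np.ones(N-1), k=-1) - np.diag(np.ones(N))
D_rhs = np.tril(np.ones(N))

print(D_lhs)
# [[-1.  0.  0.]
#  [ 1. -1.  0.]
#  [ 0.  1. -1.]]
# so D_lhs @ x = [-x0, x0 - x1, x1 - x2]

print(D_rhs)
# [[1. 0. 0.]
#  [1. 1. 0.]
#  [1. 1. 1.]]
# so D_rhs @ x - 1 = [x0 - 1, x0 + x1 - 1, x0 + x1 + x2 - 1]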

As a result, you can use scipy.optimize.minimize to minimize your objective function subject to lhs(x) <= 0 and rhs(x) <= 0 like this:

from scipy.optimize import minimize

# minimize expects each inequality constraint in the form con(x) >= 0,
# so lhs(x) <= 0 is the same as -1.0*lhs(x) >= 0
con1 = {'type': 'ineq', 'fun': lambda x: -1.0*lhs(x)}
con2 = {'type': 'ineq', 'fun': lambda x: -1.0*rhs(x)}

result = minimize(your_obj_fun, x0=initial_guess, constraints=(con1, con2))
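
For completeness, here is a minimal end-to-end sketch with a made-up objective (the target vector and N are placeholders just so the example runs); since constraints are passed, minimize defaults to the SLSQP method:

import numpy as np
from scipy.optimize import minimize

N = 4
D_lhs = np.diag(np.ones(N-1), k=-1) - np.diag(np.ones(N))
D_rhs = np.tril(np.ones(N))

# toy objective (placeholder): squared distance to an arbitrary feasible target
target = np.array([0.1, 0.2, 0.3, 0.4])
def obj(x):
    return np.sum((x - target)**2)

con1 = {'type': 'ineq', 'fun': lambda x: -1.0*(D_lhs @ x)}
con2 = {'type': 'ineq', 'fun': lambda x: -1.0*(D_rhs @ x - 1.0)}

x0 = np.full(N, 1.0/N)  # feasible starting point: all components equal
result = minimize(obj, x0=x0, constraints=(con1, con2))
print(result.x, result.fun)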


Comments

Thank you @joni. Reviewing the question, I noticed that the second condition is wrong, since when summing all the values I NEED to get 1; to do this I have to impose sum(x[i], i=0,...,j) = 1 for all j = 0,..., N-1. This will actually reduce the number of minimization parameters, since the last one can be found directly from this condition. Can you help me with how to overcome this? Thank you in advance!
@MateusForcelini Then you only need to change con2 from 'ineq' to 'eq'. In case you only want to impose x[0]+...+x[N-1] = 1, you can define def rhs(x): return np.sum(x) - 1
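
A short sketch of that second suggestion (enforcing only the full sum; your_obj_fun, initial_guess and con1 are the placeholders/definitions from the answer above):

import numpy as np
from scipy.optimize import minimize

# equality constraint: np.sum(x) - 1 must equal 0
con_sum = {'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0}
result = minimize(your_obj_fun, x0=initial_guess, constraints=(con1, con_sum))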
Thank you @joni, I'm still having some issues with the code, since I have two more arrays of variables that I want to minimize together with their respective constraints. More specifically, I don't know how to specify them in the function and in scipy.minimize. For instance, I have three N-dimensional arrays, say x, a and b, and each of them has 2 constraints like the ones you showed how to address. How can I specify in the minimize function that I want to minimize the parameters in those 3 arrays? And in the function, should I specify them in some way?
I'd love to edit the question but I can't, since there is an edit suggestion pending and I cannot review it.
@MateusForcelini You can set z = (x, a, b) such that z is an array with 3*N variables. Then you only need to write your constraints as functions of z and use x, a, b = np.split(z, 3) inside your constraints in order to express them only in terms of x, a or b. However, I think it would be better to ask a separate question instead of discussing it in the comments.
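
A rough, runnable sketch of that stacking idea (the objective, the size N and the single constraint shown are illustrative placeholders; the remaining constraints on x, a and b would be added the same way):

import numpy as np
from scipy.optimize import minimize

N = 4                                                       # illustrative size
D_lhs = np.diag(np.ones(N-1), k=-1) - np.diag(np.ones(N))   # as in the answer

def obj(z):
    # placeholder objective: replace with the real function of x, a and b
    x, a, b = np.split(z, 3)
    return np.sum(x**2) + np.sum(a**2) + np.sum(b**2)

def x_lhs(z):
    # a constraint on x only, written as a function of the stacked vector z
    x, a, b = np.split(z, 3)
    return -1.0 * (D_lhs @ x)

con_x_lhs = {'type': 'ineq', 'fun': x_lhs}
# ...analogous dicts for the remaining constraints on x, a and b...

z0 = np.full(3*N, 1.0/N)  # stacked initial guess for (x, a, b)
result = minimize(obj, x0=z0, constraints=(con_x_lhs,))
print(result.x)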
