I'm trying out Python's scipy.optimize to minimize a function with the SLSQP algorithm. The optimization works fine unconstrained and with a single matrix constraint, but I get an error when I add a second matrix constraint.

import numpy as np
from scipy.optimize import minimize

n = 3
n_sim = 10000
risk_aversion = 5

Y = np.random.normal(loc=0, scale=1, size=(n_sim, n))

mu_Y = np.mean(Y, axis=0).reshape(1, n)
sigma_Y = np.cov(Y, rowvar=0, bias=1)

x0 = np.ones((n, 1)) / n

Aeq = np.ones((1, n))
beq = 1
A =  np.array([[1, 1, 0], [-1, -1, 0]])
b = np.array([[0.25], [-0.5]])

def func(x, mu, sigma, risk_aversion):
    return -np.dot(mu, x) + 0.5 * risk_aversion * np.dot(np.dot(x.T, sigma), x)
def func_deriv(x, mu, sigma, risk_aversion):
    # gradient of func: -mu' + risk_aversion * sigma * x
    return -mu.flatten() + risk_aversion * np.dot(sigma, x)

c_ = {'type': 'eq', 'fun' : lambda x: np.dot(Aeq, x) - beq, 'jac' : lambda x: Aeq}
b_ = [(0,1) for i in range(n)]
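
(For reference, an unconstrained run would simply drop the constraints and bounds arguments; the res_unc name below is only for illustration.)

res_unc = minimize(lambda x: func(x, mu_Y, sigma_Y, risk_aversion), x0,
                   jac=lambda x: func_deriv(x, mu_Y, sigma_Y, risk_aversion),
                   method='SLSQP', options={'disp': True})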

This version, with one constraint and bounds, works fine:

res = minimize(lambda x: func(x, mu_Y, sigma_Y, risk_aversion), x0,
               jac=lambda x: func_deriv(x, mu_Y, sigma_Y, risk_aversion),
               constraints=c_, bounds=b_, method='SLSQP', options={'disp': True})

However, the two-constraint version gives me an error:

d_ = (c_,
      {'type': 'ineq', 'fun': lambda x: np.dot(A, x) - b, 'jac': lambda x: A})

res2 = minimize(lambda x: func(x, mu_Y, sigma_Y, risk_aversion), x0,
                jac=lambda x: func_deriv(x, mu_Y, sigma_Y, risk_aversion),
                constraints=d_, bounds=b_, method='SLSQP', options={'disp': True})

1 Answer


If I use

A =  np.array([-1, -1, 0])
b = -0.5

instead of what I had originally, then the optimization works. This suggests that I can't pass a whole matrix of constraints in a single dictionary (as I would with fmincon in MATLAB), unless someone can confirm otherwise. However, I can just keep adding constraints one row at a time until it works. The constraints have to be passed as a tuple (or list) of dictionaries, so I put together

d = []
for i in range(b.size):
    # bind the loop variable as a default argument so each constraint
    # keeps its own row of A and b (avoids the late-binding closure issue)
    d.append({'type': 'ineq',
              'fun': lambda x, i=i: np.dot(A[i, :], x) - b[i],
              'jac': lambda x, i=i: A[i, :]})
d_ = tuple(d) + (c_,)

which seems to be working properly.
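
To double-check, the combined tuple can be passed straight to minimize as before; the res3 name is just for illustration, and the objective and Jacobian lambdas are the same ones as in the question:

res3 = minimize(lambda x: func(x, mu_Y, sigma_Y, risk_aversion), x0,
                jac=lambda x: func_deriv(x, mu_Y, sigma_Y, risk_aversion),
                constraints=d_, bounds=b_, method='SLSQP', options={'disp': True})
print(res3.x)       # optimal weights
print(res3.success) # True if SLSQP reports successful termination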


2 Comments

When appending stuff to the constraint dict in a for loop, be careful about this: stackoverflow.com/questions/233673/lexical-closures-in-python
@pv My goal was more to get it to work well enough so that I could compare the basic performance to nlopt. nlopt was about 3x faster, so I'm going to focus more on using that (which does not require this syntax).
