
I am running an optimization with scipy.optimize.minimize:

sig_init = 2
b_init = np.array([0.2,0.01,0.5,-0.02])
params_init = np.array([b_init, sig_init])
mle_args = (y,x)
results = opt.minimize(crit, params_init, args=(mle_args))

The problem is that I need to set a bound on sig_init, but opt.minimize() requires that I specify bounds for each of the input parameters, and one of my inputs is a NumPy array.

How can I specify the bounds given that one of my inputs is a numpy array?

  • What is the definition of crit? Commented May 9, 2018 at 20:52

1 Answer


First of all, scipy.optimize.minimize expects a flat array as its second argument x0 (see the documentation), which means the function it optimizes must also take a flat array plus optional additional arguments. Therefore, it is my understanding that you would have to give it something like:

b_init = [0.2, 0.01, 0.5, -0.02]
sig_init = [2]
params_init = np.array(b_init + sig_init)

for the optimization to work. Then you will have to give bounds for each scalar in your flat array. A rudimentary example, if you wanted [-1, 1] bounds on sig and no bounds on b:

bounds = [(-np.inf, np.inf) for _ in b_init] + [(-1, 1)]
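Putting it together, here is a minimal runnable sketch. Since the definition of `crit` is not shown in the question, the objective below is a hypothetical stand-in (a Gaussian negative log-likelihood), and the bound on sig is chosen to keep it positive rather than the [-1, 1] example above:

```python
import numpy as np
import scipy.optimize as opt

def crit(params, *args):
    """Hypothetical criterion: Gaussian negative log-likelihood.
    The real crit isn't shown in the question; this is a stand-in."""
    b, sig = params[:-1], params[-1]   # unpack the flat parameter vector
    y, x = args
    resid = y - x @ b
    return np.sum(resid**2) / (2 * sig**2) + len(y) * np.log(sig)

# Synthetic data for illustration
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 4))
y = x @ np.array([0.3, 0.0, 0.6, -0.1]) + rng.normal(scale=0.5, size=50)

b_init = [0.2, 0.01, 0.5, -0.02]
sig_init = [2]
params_init = np.array(b_init + sig_init)

# No bounds on b; keep sig in [1e-6, 10] so it stays positive
bounds = [(None, None) for _ in b_init] + [(1e-6, 10)]

results = opt.minimize(crit, params_init, args=(y, x), bounds=bounds)
print(results.x)
```

Note that scipy accepts `None` in a bounds pair to mean unbounded in that direction, which is equivalent to using ±np.inf.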

