I want to find local minima of a function of several variables using Python. The set of gradient-based optimization methods described in scipy.optimize.minimize seems like a good place to start.
I can compute both the value of the function and its gradient. In fact, when I evaluate the function, I get the gradient almost for free. Is there a way to exploit this property to minimize the number of function calls with scipy.optimize.minimize?
I'm only referring to methods that use gradient-based optimization (BFGS, for instance).
More precisely: how can I plug a single Python function that computes both the value of my mathematical function and the value of its gradient into scipy.optimize.minimize?
Instead of this:
res = minimize(fun, x0, method='BFGS', jac=grad_fun, options={'disp': True})
I would like something like this:
res = minimize(fun_and_grad, x0, method='BFGS', options={'disp': True})
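For concreteness, here is a sketch of what I mean by fun_and_grad, using the 2-D Rosenbrock function as a stand-in for my real function (the test function is just illustrative):

import numpy as np

def fun_and_grad(x):
    # Stand-in example: the 2-D Rosenbrock function.
    # The intermediate terms a and b are shared between the value and
    # the gradient, which is why the gradient comes almost for free.
    a = 1.0 - x[0]
    b = x[1] - x[0] ** 2
    value = a ** 2 + 100.0 * b ** 2
    grad = np.array([-2.0 * a - 400.0 * x[0] * b, 200.0 * b])
    return value, grad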
Thank you!