I am looking to speed up a slow-running loop, but I don't think I am approaching it the best way. I would like to parallelize code that calls a function I have written, and I am having trouble figuring out how to structure the input parameters for Python's multiprocessing module. The code I have is essentially of the following form:
import numpy as np

a = some_value
c = some_value
d = some_value
for i in range(1, 101):
    for j in range(1, 101):
        b = np.array([i * 0.001, j * 0.001]).reshape((2, 1))
        (A, B, C, D) = function(a, b, c, d)
My function takes a variety of parameters, but for this particular use I need to vary only one of them (an array of two values) over a grid of values; all the other inputs are integers that are fixed beforehand. I am familiar with very simple examples of parallelizing such loops using a pool of workers, along the lines of:
import multiprocessing as mp

pool = mp.Pool(processes=4)
input_parameters = *list of iterables for multiprocessing*
result = pool.map(paramest.parameter_estimate_ND, input_parameters)
where the list of iterables is created using the itertools module. Since only one input variable of the function changes while all the others are declared beforehand, I am having trouble structuring the input parameters. What I would really like is to use multiprocessing to evaluate different inputs at the same time and speed up the execution of the nested for loops.
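For concreteness, here is a sketch of how I imagine this might fit together, using `functools.parti­al` to freeze the fixed arguments so that only the `(i, j)` pair varies (the `function` body below is just a stand-in for my real one, and the names `run_one`/`run_grid` are mine):

```python
import itertools
import multiprocessing as mp
from functools import partial

import numpy as np

def function(a, b, c, d):
    # stand-in for the real function; only its signature matters here
    return float(b.sum() + a + c + d)

def run_one(ij, a, c, d):
    # build the varying 2x1 array from the grid indices, then call the function
    i, j = ij
    b = np.array([i * 0.001, j * 0.001]).reshape((2, 1))
    return function(a, b, c, d)

def run_grid(a, c, d, n=100, processes=4):
    # evaluate the function over the whole (i, j) grid in parallel
    worker = partial(run_one, a=a, c=c, d=d)
    grid = itertools.product(range(1, n + 1), range(1, n + 1))
    with mp.Pool(processes=processes) as pool:
        return pool.map(worker, grid)

if __name__ == "__main__":
    results = run_grid(1, 2, 3)
```

My understanding is that `pool.map` only accepts a single-argument callable, which is why the fixed values `a`, `c`, `d` are bound with `partial` and only the grid pair is iterated over.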
My question, then: how would one structure multiprocessing to parallelize code that calls a function while varying only specific inputs?
Am I approaching this in the best manner? Is there a better way to do such a thing?
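One alternative I have considered (again a sketch, with `function` a stand-in and `build_args` a helper name I made up) is to build all the argument tuples up front and use `pool.starmap`, which unpacks each tuple into the call, so the fixed integers are simply repeated in every tuple:

```python
import itertools
import multiprocessing as mp

import numpy as np

def function(a, b, c, d):
    # stand-in for the real function
    return float(b.sum() + a + c + d)

def build_args(a, c, d, n=100):
    # one (a, b, c, d) tuple per grid point; only b varies
    return [
        (a, np.array([i * 0.001, j * 0.001]).reshape((2, 1)), c, d)
        for i, j in itertools.product(range(1, n + 1), range(1, n + 1))
    ]

if __name__ == "__main__":
    args = build_args(1, 2, 3)
    with mp.Pool(processes=4) as pool:
        results = pool.starmap(function, args)
```

I am not sure whether materializing all 10,000 tuples in memory first is wasteful compared with binding the fixed arguments and mapping over the indices.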
Thank you!