Hi there, I'm trying to run a big for loop with 239,500 iterations. From some tests I've found that 200 iterations take about 1 hour, which works out to roughly 2 months of CPU time.
This is the loop:
for i in range(0, MonteCarlo):
    print('Performing Monte Carlo ' + str(i) + '/' + str(MonteCarlo))
    MCR = scramble(YearPos)
    NewPos = reduce(operator.add, YearPos)
    C = np.cov(VAR[NewPos, :], rowvar=0)
    s, eof = eigs(C, k=neof, which='LR')
    sc = (s.real / np.sum(s) * 100)**2
    tcs = np.sum(sc)
    MCH = sc/tcs
    Hits[(MCH >= pcvar)] += 1
    if (Hits >= CL).all():
        print("Number of Hits is greater than 5 !!!")
        break
Here np stands for numpy and scramble stands for random.shuffle. The calculations performed within the for loop are not dependent on each other.
Is there any way to run the loop in parallel? I have 12 cores and only 1 is running. In MATLAB I would use parfor; is there anything similar in Python?
Thanks in advance
Is `np` numpy? Maybe provide the relevant import lines to bring in more context. :) However, you're going to run into trouble if you want to write to the global `Hits` array. Also, if `YearPos` contains the same data between iterations, you use the same `NewPos` each time, because your `reduce` statement is just a sum. Please correct me if I'm wrong.