I have the following for loop, which contains a private function call inside of it:
for (i = 0; i < N; ++i)
dates[i] = to_time_t(&string_dates[i][0]);
to_time_t simply converts a string (e.g. "18/03/2007") into a timestamp, and it does so with the help of mktime(), which is really slow. In fact, that loop alone accounts for more of the run time than any other part of the program. To remedy this, I am trying to apply OpenMP to the loop, like this:
#pragma omp parallel for private(i)
for (i = 0; i < N; ++i)
dates[i] = to_time_t(&string_dates[i][0]);
My OpenMP knowledge is limited, but I'm assuming that no element of the dates array is ever accessed by two threads simultaneously, since i is private; the same should apply to string_dates. Yet when I run this code, performance is actually worse than the serial version, so I must be doing something wrong. I just don't see it. Any help is appreciated!
Edit: I should have included the to_time_t code from the start.
#include <ctime>
#include <iomanip>
#include <sstream>
#include <string>
using namespace std;

time_t to_time_t(const string * date) {
    struct std::tm tm = {};                  // zero-initialize all fields
    istringstream ss_tm(*date);
    ss_tm >> get_time(&tm, "%d/%m/%Y");      // "%d/%m/%Y" matches dates like "18/03/2007"
    return mktime(&tm);
}
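(One thing worth knowing about mktime() is that it consults the process-global timezone state, as if by tzset(), which C libraries typically guard with a lock, so concurrent calls serialize. A minimal sketch of a variant that sidesteps that state, assuming the dates can be treated as UTC; timegm() is a nonstandard but widely available glibc/BSD extension, and the name to_time_t_utc is mine:)

```cpp
#include <ctime>
#include <iomanip>
#include <sstream>
#include <string>

// Variant of to_time_t that avoids mktime()'s hidden global timezone state.
// timegm() interprets the struct tm as UTC and does not consult the timezone
// database, so calls from multiple threads do not contend on a shared lock.
time_t to_time_t_utc(const std::string& date) {
    std::tm tm = {};                          // zero-initialize all fields
    std::istringstream ss(date);
    ss >> std::get_time(&tm, "%d/%m/%Y");     // e.g. "18/03/2007"
    return timegm(&tm);
}
```

(If local-time semantics matter, this shifts the result by the UTC offset; for date-only keys that is often acceptable.)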
Comments:

"What is N? And how do you time the execution?"

"With MPI_Wtime() (since I'm adopting MPI as well) before and after the loop."

"My suspect is to_time_t(). It may be touching some global variables or calling into library functions that keep some hidden state and are not really thread-safe. I doubt anyone could guess the actual reason without some insight into to_time_t()."
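(Following up on that last comment: the hidden state is very likely inside mktime() itself, which reads the process-global timezone under a lock, so the OpenMP threads spend their time contending on it rather than computing. A sketch of a lock-free replacement, assuming the strings are strictly "dd/mm/yyyy" and a UTC-midnight timestamp is acceptable; the names days_from_civil and to_time_t_fast are mine, and the date arithmetic is Howard Hinnant's civil-date algorithm:)

```cpp
#include <ctime>
#include <string>

// Days since 1970-01-01 for a civil (y, m, d) date; pure arithmetic,
// no locale, no timezone database, therefore fully thread-safe.
long long days_from_civil(int y, unsigned m, unsigned d) {
    y -= m <= 2;                                                          // shift year for Jan/Feb
    const int era = (y >= 0 ? y : y - 399) / 400;                         // 400-year era
    const unsigned yoe = static_cast<unsigned>(y - era * 400);            // year of era [0, 399]
    const unsigned doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;  // day of year [0, 365]
    const unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;           // day of era
    return era * 146097LL + static_cast<long long>(doe) - 719468;         // 719468 = days to 1970-01-01
}

// Parses "dd/mm/yyyy" and returns the UTC-midnight timestamp without
// touching any shared state, so it scales across OpenMP threads.
time_t to_time_t_fast(const std::string& date) {
    const int d = std::stoi(date.substr(0, 2));
    const int m = std::stoi(date.substr(3, 2));
    const int y = std::stoi(date.substr(6, 4));
    return static_cast<time_t>(days_from_civil(y, m, d) * 86400LL);
}
```

(Dropping the istringstream/get_time machinery also saves a per-call stream construction, but the decisive change is removing mktime() from the parallel region.)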