
I am trying to parallelize a for loop in my code base that should be embarrassingly parallel. However, OpenMP is not parallelizing it; instead, everything executes in sequential order. The program is compiled using g++ with -std=c++11. I ran a small test program beforehand to check whether OpenMP works, and it worked just fine.

The code block I am trying to parallelize is given below

    // parameter types anonymized as A..F
    void class_tmv::activate(class_tmv &result, const A &a, const B &b, const C &c, const D &d, const E &e, F f) const
    {
        result.clear();
        #pragma omp parallel for
        for (unsigned int i = 0; i < tms.size(); ++i)
        {
            Class_TM tmTemp;
            tms[i].activate(tmTemp, a, b, c, d, e, f);
            result.tms[i] = tmTemp;
        }
    }

class_tmv has a member variable tms, which is essentially a vector of Class_TM objects. Class_TM also has a method named activate, which is the one called above; it is defined as

    // parameter types anonymized as A..F
    inline void Class_TM::activate(Class_TM &result, const A &a, const B &b, const C &c, const D &d, const E &e, F f) const
    {
        result.clear();
        Class_TM tmTemp;

        if (condition_1)
        {
            this->S_T(tmTemp, a, b, c, d, e, f);
        }
        else if (condition_2)
        {
            this->T_T(tmTemp, a, b, c, d, e, f);
        }
        else
        {
            cout << "The activation function cannot be parsed." << endl;
        }
        result = tmTemp;
    }

S_T and T_T are other methods of Class_TM.

The issue I'm having is that the overall execution of the program is completely sequential: the loop I'm trying to parallelize isn't running in parallel.

Any suggestions on what may be going wrong would be extremely helpful. Solutions not based on OpenMP are also welcome.

(This is my first time working on parallel applications)

2 Comments

  • Can you please share the command used to compile the code? Commented Feb 14, 2022 at 23:04
  • Please provide a minimal reproducible example. Commented Feb 15, 2022 at 13:03

3 Answers


Did you use the -fopenmp flag when compiling your code? Without it, g++ silently ignores the #pragma omp directives and the program runs sequentially.




Your full code, or at least the command used to compile it, would be helpful, but here are some pointers in the meantime:

  • As the other answer mentioned, you should compile with -fopenmp or a similar flag (depending on the compiler). However, since you say you ran a test and verified that OpenMP works, you most likely did include that option, as well as the necessary headers in your source files.

  • Even when you compile with OpenMP pragmas, the compiler will not parallelize a for loop unless it is in OpenMP's "canonical form". Among other things, canonical form requires that the stopping criterion is fixed and does not change across iterations (in this case, tms.size()). If the call to tms[i].activate() (or the subsequent calls to S_T and T_T) modifies the size of tms, the loop is not in canonical form and will not be parallelized. Check whether this applies in your case.

3 Comments

Thanks for the suggestion; I read a bit more about that and edited that part out of the answer :) @paleonix
The for loop is in canonical form for sure. There is a definite stopping condition, and tms.size() doesn't change during execution. I saw possible race conditions in the function calls: all the values were being passed by reference. I changed that so no race condition occurs on those variables, but the loop still doesn't parallelize.
@Harshang can you share the command used to compile the program, as well as your complete code, so we can test?

This adds to the canonical-form point made by @DarkCygnus. Apart from checking the stopping criterion, check for thread safety: the calls to S_T and T_T must not have race conditions. Any shared variables they manipulate must be accessed in a way that still produces correct behavior when the loop runs in parallel.

