
I basically have three questions about OpenMP.

Q1. Does OpenMP provide mutual exclusion for shared variables? Consider the following simple matrix-multiplication code with three nested loops, parallelised with OpenMP in C++. Here A, B, and C are dynamically allocated variables of type double**, and threadCount is appropriately set to the number of threads.

#pragma omp parallel
{
    int tid = omp_get_thread_num();
    int fraction = N / threadCount;    // rows per thread (assumes threadCount divides N)
    int start = tid * fraction;
    int end = (tid + 1) * fraction;

    for (int i = start; i < end; i++)  // each thread works on its own block of rows
    {
        for (int j = 0; j < N; j++)
        {
            C[i][j] = 0;

            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
    }
}

The point here is that mutual exclusion for the reads from A and B and the writes to C is unnecessary, since each thread writes to a distinct block of rows of C. But if extra overhead is incurred by a mutex on A, B, and C, it would be favourable to relieve A, B, and C of it. How can that be achieved?

Q2. Consider introducing two private variables tempA and tempB into the above code as follows.

double **tempA, **tempB;

#pragma omp parallel private(tempA, tempB)
{
    int tid = omp_get_thread_num();
    int fraction = N / threadCount;    // rows per thread (assumes threadCount divides N)
    int start = tid * fraction;
    int end = (tid + 1) * fraction;
    tempA = A;                         // private pointers, but to the same shared matrices
    tempB = B;

    for (int i = start; i < end; i++)
    {
        for (int j = 0; j < N; j++)
        {
            C[i][j] = 0;

            for (int k = 0; k < N; k++)
                C[i][j] += tempA[i][k] * tempB[k][j];
        }
    }
}

Would this strategy relieve A and B of the mutex in the calculations? I mean, although the same locations (referred to by A and tempA, and by B and tempB) are accessed by all threads, the threads refer to them through different local variables.

Q3. I would also like to know the difference between declaring the variables tempA and tempB inside the parallel code segment and declaring them outside. Of course, in the former case we won't need the private clause in the directive. Is there any other significant difference?
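
For concreteness, declaring them inside would look like this (a sketch; only the declarations change, the loops stay as above):

#pragma omp parallel
{
    // Block-scoped inside the parallel region, so each thread automatically
    // gets its own copy; no private(...) clause is needed in the directive.
    double **tempA = A;
    double **tempB = B;

    // ... same index computation and loops as above ...
}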

1 Comment
(1) Why do you do the worksharing by hand instead of using #pragma omp for? (2) To synchronize reads and writes to shared variables there is #pragma omp atomic. (3) Don't reinvent the wheel, use Blaze. This linear algebra library implements OpenMP-parallelized and SIMD-vectorized matrix operations. It can't get any faster than this. Commented Jun 22, 2017 at 6:10
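
A minimal sketch of the worksharing approach the comment suggests, using the same A, B, C, and N as in the question (assuming, as before, square N-by-N matrices); OpenMP divides the rows among the threads itself:

#pragma omp parallel for
for (int i = 0; i < N; i++)           // OpenMP splits the i-iterations across threads
{
    for (int j = 0; j < N; j++)
    {
        C[i][j] = 0;
        for (int k = 0; k < N; k++)
            C[i][j] += A[i][k] * B[k][j];
    }
}

Each thread still owns a distinct set of rows of C, so no synchronization is required.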

1 Answer

  1. By default, no synchronization mechanisms are provided, but OpenMP gives you the possibility to use such mechanisms explicitly. Use #pragma omp atomic, #pragma omp atomic read, or #pragma omp atomic write for such purposes. Another option is a critical section, #pragma omp critical: a more generic and powerful construct, but not always required (see the sketch after this list).

  2. Accessing the same memory location through different variables does not change anything regarding concurrent access. You should use atomics to provide the guarantee.

  3. If you declare variables inside #pragma omp parallel, they will be private to each thread. See this and this posts for more information.
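
A minimal, self-contained sketch of point 1 (the per-thread value local is just a hypothetical stand-in for real work):

#include <omp.h>
#include <cstdio>

int main()
{
    double sum = 0.0;                 // shared accumulator

    #pragma omp parallel
    {
        double local = omp_get_thread_num() + 1.0;  // hypothetical per-thread result

        #pragma omp atomic            // single synchronized read-modify-write on a scalar
        sum += local;

        #pragma omp critical          // more general: the whole block runs one thread at a time
        printf("thread %d contributed %.1f\n", omp_get_thread_num(), local);
    }

    printf("sum = %.1f\n", sum);
    return 0;
}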

Also, if you are using C++11, you can use std::atomic variables.
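
A minimal illustration of that alternative (but note the compatibility caveat raised in the comments below):

#include <atomic>
#include <cstdio>
#include <omp.h>

int main()
{
    std::atomic<int> counter{0};

    #pragma omp parallel
    {
        counter.fetch_add(1);   // race-free increment without any OpenMP pragma
    }

    // Note: fetch_add on std::atomic<double> is only standard since C++20;
    // in C++11 a compare_exchange loop would be needed for double sums.
    printf("threads seen: %d\n", counter.load());
    return 0;
}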


7 Comments

1. What I want to know is whether, in the sample code I posted, the reads from A and B and the writes to C are protected by a mutex by default. If yes, is there any way to relieve them of it?
@Melanka No, a mutex is not provided by default; you have to provide one explicitly. It would be unreasonable from a performance point of view to impose synchronization mechanisms by default.
3. Yes, it would be private. But is there any difference between making it private by declaring it inside, compared to declaring it outside and adding private(varName) to the directive?
@Melanka 3. This and this posts are probably what you are looking for.
Take care with C11/C++11 atomics. If I remember correctly, mixing them with OpenMP's atomics was not supported by some compilers. Related question.
