
In OpenMP, when using omp sections, will the threads be distributed to the blocks inside the sections, or will each thread be assigned to each section?

When nthreads == 3:

#pragma omp sections
{
    #pragma omp section
    { 
        printf ("id = %d, \n", omp_get_thread_num());
    }

    #pragma omp section
    { 
        printf ("id = %d, \n", omp_get_thread_num());
    }
}

Output:

id=1
id=1

But when I execute the following code:

#pragma omp sections
{
    #pragma omp section
    { 
        printf ("id = %d, \n", omp_get_thread_num());
    }

    #pragma omp section
    { 
        printf ("id = %d, \n", omp_get_thread_num());
    }
}

#pragma omp sections
{
    #pragma omp section
    { 
        printf ("id = %d, \n", omp_get_thread_num());
    }

    #pragma omp section
    { 
        printf ("id = %d, \n", omp_get_thread_num());
    }
}

Output:

id=1
id=1

id=2
id=2

From this output I can't understand what the concept of sections in OpenMP is.


9 Answers


The code posted by the OP will never execute in parallel, because the parallel keyword does not appear. The fact that the OP got ids different from 0 suggests that the code was probably embedded in a parallel directive; however, this is not clear from the post, and it might confuse beginners.

The minimum sensible example is (for the first example posted by the OP):

#pragma omp parallel sections
{
    #pragma omp section
    { 
        printf ("id = %d, \n", omp_get_thread_num());
    }

    #pragma omp section
    { 
        printf ("id = %d, \n", omp_get_thread_num());
    }
}

On my machine, this prints

id = 0,
id = 1,

showing that the two sections are being executed by different threads.

It's worth noting, however, that this code cannot extract more parallelism than two threads: if it is executed with more threads, the remaining threads have no work to do and will simply sit idle.
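For reference, here is one way to turn the snippet above into a self-contained program; the file name and compiler flag mentioned afterwards are just an illustration, and any OpenMP-aware compiler will do:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    // Combined construct: creates the team of threads and
    // distributes the two sections among them.
    #pragma omp parallel sections
    {
        #pragma omp section
        {
            printf("id = %d,\n", omp_get_thread_num());
        }

        #pragma omp section
        {
            printf("id = %d,\n", omp_get_thread_num());
        }
    }
    return 0;
}

With GCC this can be compiled as, for example, gcc -fopenmp sections.c.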


1 Comment

This answer does not explain why the OP shows id=1 and id=2. It's very likely the program the OP posted was running in parallel.

The idea of parallel sections is to give the compiler a hint that the various (inner) sections can be performed in parallel, for example:

#pragma omp parallel sections
{
   #pragma omp section
   {
      /* Executes in thread 1 */
   } 
   #pragma omp section
   {
      /* Executes in thread 2 */
   } 
   #pragma omp section
   {
      /* Executes in thread 3 */
   } 
   /* ... */
}

This is a hint to the compiler and not guaranteed to happen, though it usually does. Your output is roughly what is expected: it shows sections being executed by thread 1 and by thread 2. The output order is non-deterministic, as you don't know which thread will run first.

1 Comment

-1 Your answer contains a lot of inaccuracies. You can't be sure that different sections are assigned to different threads. The output order is non-deterministic only inside a single sections construct, not between two different sections constructs (there is an implicit barrier at the end of the construct).

Change the first line from

#pragma omp sections

into

#pragma omp parallel sections

The parallel directive ensures that the two sections are assigned to two threads. Then you will receive the following output:

id = 0,
id = 1,



You are missing the parallel keyword. The parallel keyword is what makes OpenMP actually run the code in parallel.



According to the OpenMP 3.1 standard, section 2.5.2 (emphasis mine):

The sections construct is a noniterative worksharing construct that contains a set of structured blocks that are to be distributed among and executed by the threads in a team. Each structured block is executed once by one of the threads in the team in the context of its implicit task.

...

Each structured block in the sections construct is preceded by a section directive except possibly the first block, for which a preceding section directive is optional. The method of scheduling the structured blocks among the threads in the team is implementation defined. There is an implicit barrier at the end of a sections construct unless a nowait clause is specified.

So, applying these rules to your case, we can argue that:

  1. each of the structured blocks inside a sections directive is executed exactly once, by one thread. In other words, you always get four prints, whatever the number of threads
  2. the blocks in the first sections construct will be executed (in a non-deterministic order) before the blocks in the second one (also executed in a non-deterministic order). This is because of the implicit barrier at the end of the worksharing construct
  3. the scheduling is implementation defined, so that you can't possibly control which thread has been assigned a given section

Your output is thus due to the way your scheduler decided to assign the different blocks to the threads in the team.
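To make point 2 concrete, here is a sketch (mine, not taken from the question) of the OP's second example wrapped in an explicit parallel region. Because of the implicit barrier at the end of the first sections construct, both of its prints always appear before any print from the second construct, while the thread assignment within each construct remains implementation defined:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel num_threads(3)
    {
        // First worksharing construct: its two blocks are distributed
        // among the team in an implementation-defined way.
        #pragma omp sections
        {
            #pragma omp section
            { printf("first,  id = %d\n", omp_get_thread_num()); }

            #pragma omp section
            { printf("first,  id = %d\n", omp_get_thread_num()); }
        }   // implicit barrier: every thread waits here

        // Second worksharing construct: starts only after the barrier,
        // so its output never interleaves with the first one.
        #pragma omp sections
        {
            #pragma omp section
            { printf("second, id = %d\n", omp_get_thread_num()); }

            #pragma omp section
            { printf("second, id = %d\n", omp_get_thread_num()); }
        }
    }
    return 0;
}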

2 Comments

Sir, in the comment on the accepted answer, you wrote "You can't be sure that different sections are assigned to different threads." But the first paragraph of section 2.5.2 that you've quoted says a similar thing. What's the difference?
@jos The difference is that the standard does not prescribe how blocks are distributed: "The method of scheduling the structured blocks among the threads in the team is implementation defined." The OP shows that, in a particular run, the two blocks of the first sections construct are both assigned to thread 1, and likewise for thread 2 with the blocks of the second one.

It may be helpful to add more information to the output line, and to add more sections (if you have enough threads):

#pragma omp parallel sections
{
    #pragma omp section
    {
        printf ("section 1 id = %d, \n", omp_get_thread_num()); 
    }
    #pragma omp section
    {
        printf ("section 2 id = %d, \n", omp_get_thread_num());
    }
    #pragma omp section
    {
        printf ("section 3 id = %d, \n", omp_get_thread_num());
    }
}

Then you may get more interesting output like this:

section 1 id = 4,
section 3 id = 3,
section 2 id = 1,

which shows how the sections may be executed in any order, by any available thread.
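If the program is not already running with enough threads, you can request them explicitly. The num_threads(3) clause below is just one option; setting the OMP_NUM_THREADS environment variable or calling omp_set_num_threads() before the region works equally well:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    // Ask for at least as many threads as there are sections, so that
    // each section can potentially be executed by a different thread.
    #pragma omp parallel sections num_threads(3)
    {
        #pragma omp section
        { printf("section 1 id = %d\n", omp_get_thread_num()); }

        #pragma omp section
        { printf("section 2 id = %d\n", omp_get_thread_num()); }

        #pragma omp section
        { printf("section 3 id = %d\n", omp_get_thread_num()); }
    }
    return 0;
}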



Note that 'nowait' tells the compiler that threads do not need to wait to exit the sections construct. In Fortran, 'nowait' goes at the end of the loop or sections, which makes this more obvious.
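For reference, in C the nowait clause goes on the sections worksharing construct inside a parallel region (it is not accepted on the combined parallel sections form). A minimal sketch:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        // nowait removes the implicit barrier at the end of the
        // sections construct, so a thread that finishes its section
        // (or has no section to run) moves on without waiting.
        #pragma omp sections nowait
        {
            #pragma omp section
            { printf("section A, id = %d\n", omp_get_thread_num()); }

            #pragma omp section
            { printf("section B, id = %d\n", omp_get_thread_num()); }
        }
        printf("past the sections, id = %d\n", omp_get_thread_num());
    }
    return 0;
}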



The #pragma omp parallel is what creates (forks) the threads initially. Only once the threads have been created do the other OpenMP constructs have any significance.

Hence, Method 1:

// this creates the threads
#pragma omp parallel
{
   #pragma omp sections
   {
     #pragma omp section
     {
        // code here
     }
     #pragma omp section
     {
        // code here
     }
   }
}

or

Method 2:

// this creates the threads and the sections in one combined construct
#pragma omp parallel sections
{
   #pragma omp section
   {
      // code here
   }
   #pragma omp section
   {
      // code here
   }
}



If you really want to start different threads in different sections, the nowait clause tells the compiler that threads do not need to wait to enter a section.

#pragma omp parallel sections nowait
{
   ...
}

2 Comments

This is just plain wrong. nowait means removing the implied barrier at the end of a worksharing construct. There is no barrier on entry.
I agree with Massimiliano; moreover, if you try to compile nowait together with parallel, the compiler says that 'nowait' is not valid for 'omp parallel sections'.
