Using OpenMP directives

OpenMP directives exploit shared memory parallelism by defining various types of parallel regions. Parallel regions can include both iterative and non-iterative segments of program code.

The #pragma omp pragmas fall into these general categories:

  1. The #pragma omp pragmas that let you define parallel regions in which work is done by threads in parallel (#pragma omp parallel). Most of the OpenMP directives either statically or dynamically bind to an enclosing parallel region.
  2. The #pragma omp pragmas that let you define how work is distributed or shared across the threads in a parallel region (#pragma omp sections, #pragma omp for, #pragma omp single, #pragma omp task).
  3. The #pragma omp pragmas that let you control synchronization among threads (#pragma omp atomic, #pragma omp master, #pragma omp barrier, #pragma omp critical, #pragma omp flush, #pragma omp ordered).
  4. The #pragma omp pragmas that let you define the scope of data visibility across parallel regions within the same thread (#pragma omp threadprivate).
  5. The #pragma omp pragmas that let you synchronize the completion of tasks (#pragma omp taskwait); see the example that follows this list.
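For example, the following minimal sketch (not taken from the compiler documentation; the variable names and values are illustrative) creates two tasks inside a single region and synchronizes them with #pragma omp taskwait:
#include <stdio.h>

int main(void)
{
  int x = 0, y = 0;

  #pragma omp parallel
  {
    #pragma omp single
    {
      /* Each task can be executed by any thread in the team. */
      #pragma omp task shared(x)
      x = 10;

      #pragma omp task shared(y)
      y = 20;

      /* Wait until both child tasks have completed. */
      #pragma omp taskwait
      printf("x + y = %d\n", x + y);
    }
  }
  return 0;
}
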
OpenMP directive syntax

#pragma omp pragma_name [clause[,clause] ...]
   statement_block

Adding certain clauses to the #pragma omp pragmas fine-tunes the behavior of the parallel or work-sharing regions. For example, the num_threads clause on the #pragma omp parallel pragma specifies how many threads to use in the team that executes the parallel region.
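For instance, a minimal sketch of a parallel region that requests a team of four threads might look like this (the thread count and the printed message are illustrative assumptions):
#include <stdio.h>
#include <omp.h>

int main(void)
{
  /* Request a team of four threads for this parallel region. */
  #pragma omp parallel num_threads(4)
  {
    printf("Hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());
  }
  return 0;
}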

The #pragma omp pragmas generally appear immediately before the section of code to which they apply. The following example defines a parallel region in which iterations of a for loop can run in parallel:
#pragma omp parallel
{
  #pragma omp for
    for (i=0; i<n; i++)
      ...
}
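A complete, compilable form of this fragment might look like the following sketch (the array name, size, and computation are illustrative assumptions):
#include <stdio.h>

#define N 1000

int main(void)
{
  int a[N];
  int i;

  #pragma omp parallel
  {
    /* The loop iterations are divided among the threads in the team;
       the loop variable i is private to each thread. */
    #pragma omp for
    for (i = 0; i < N; i++)
      a[i] = 2 * i;
  }

  printf("a[N-1] = %d\n", a[N - 1]);
  return 0;
}
When the parallel region contains only the loop, the two pragmas can also be combined into a single #pragma omp parallel for directive.
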
This example defines a parallel region in which two or more non-iterative sections of program code can run in parallel:
#pragma omp parallel
{
  #pragma omp sections
  {
    #pragma omp section
       structured_block_1
       ...
    #pragma omp section
       structured_block_2
       ...
    ...
  }
}
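As a concrete illustration (the function names are hypothetical), each structured block could be a call to an independent function; when at least two threads are available, the calls can run in parallel:
#include <stdio.h>

/* Hypothetical, independent pieces of work. */
void work_one(void) { printf("section 1\n"); }
void work_two(void) { printf("section 2\n"); }

int main(void)
{
  #pragma omp parallel
  {
    #pragma omp sections
    {
      #pragma omp section
      work_one();     /* may run on one thread...              */

      #pragma omp section
      work_two();     /* ...while this runs on another thread  */
    }
  }
  return 0;
}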

For a pragma-by-pragma description of the OpenMP directives, refer to Pragma directives for parallel processing in the XL C/C++ Compiler Reference.


