OpenMP+MPI - Start

Combining shared memory and distributed memory computation.

OpenMP: quick reference, home page, tutorial (LLNL).
MPI: quick reference (C++), documentation home page, tutorial (LLNL), MPI book.

The PGI compilers have to be used; a trial version is available.

Compiling the code:

Changing the underlying compiler for OpenMPI (briefly):
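A minimal sketch of both steps, assuming OpenMPI's mpicc wrapper and the PGI OpenMP flag -mp (the source file name scalar.c is a placeholder; adjust flags to your installation):

```shell
# Tell the OpenMPI wrapper which underlying C compiler to use
# (OMPI_CC / OMPI_CXX are standard OpenMPI environment variables).
export OMPI_CC=pgcc

# Compile a hybrid OpenMP+MPI program; PGI enables OpenMP with -mp
# (GCC would use -fopenmp instead).
mpicc -mp -O2 scalar.c -o scalar
```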

How to OpenMP+MPI-parallelize the inner product:

Original code for the inner product:
double scalar(const int N, const double x[], const double y[])
{
  double sum = 0.0;
  for (int i = 0; i < N; ++i) {
    sum += x[i] * y[i];
  }
  return sum;
}

int main()
{
...
double s = scalar(n,a,b);
...
}


OpenMP+MPI code for the inner product:
#include <mpi.h>

// local OpenMP-parallel inner product
double scalar(const int N, const double x[], const double y[])
{
  double sum = 0.0;
  // the loop variable is private automatically; sum is combined via reduction
#pragma omp parallel for shared(x,y) schedule(static) reduction(+:sum)
  for (int i = 0; i < N; ++i) {
    sum += x[i] * y[i];
  }
  return sum;
}

// MPI inner product: each process computes its local part,
// MPI_Allreduce sums the partial results over all processes
double scalar(const int n, const double x[], const double y[], const MPI_Comm icomm)
{
  const double s = scalar(n, x, y);  // call local inner product
  double sg;
  MPI_Allreduce(&s, &sg, 1, MPI_DOUBLE, MPI_SUM, icomm);
  return sg;
}

int main(int argc, char* argv[])
{
...
MPI_Init(&argc,&argv);
...
double s = scalar(n,a,b,MPI_COMM_WORLD);
...
MPI_Finalize();
...
}

Compile the code with one of the available compilers.

Each MPI process spawns OMP_NUM_THREADS OpenMP threads.
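For example, with OpenMPI one might launch 2 MPI processes with 4 OpenMP threads each (the program name ./scalar and the counts are placeholders):

```shell
export OMP_NUM_THREADS=4   # OpenMP threads per MPI process
mpirun -np 2 ./scalar      # 2 MPI processes -> 8 threads in total
```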