next up previous
Next: 2 Your first parallel Up: Course on Parallelization Previous: Course on Parallelization

Subsections

1 Preliminaries

1.1 MPI on the operating systems LINUX, SOLARIS, AIX, etc.

The Message Passing Interface comprises about 140 functions, available in F77, C and C++. Just six of these functions suffice to write parallel codes; most of the other functions are built on these six. We will be concerned with the following functions.

Basic functions:       MPI_Init
                       MPI_Finalize
                       MPI_Send
                       MPI_Recv
                       MPI_Comm_rank
                       MPI_Comm_size
Additional functions:  MPI_Barrier
                       MPI_Bcast
                       MPI_Gather
                       MPI_Scatter
                       MPI_Reduce
                       MPI_Allreduce

1.2 Online help

MPI homepage        http://www.mcs.anl.gov/mpi/index.html
MPI calls           http://www.mpi-forum.org/docs/mpi-11-html/mpi-report.html
LAM implementation  http://www.lam-mpi.org
        with download at http://www.lam-mpi.org/download

1.3 Getting started on a pool of workstations

1.4 Installing the example code

Copy the files install and course.tar.gz into your working directory, and call ./install.

1.5 Getting started: LAM-MPI

□
Define the environment variable archi (bash shell) as one of the systems {LINUX, IRIX, SOLARIS, AIX}:
export archi=LINUX
or add this line to your $HOME/.bashrc file. Check it with env | grep archi .
□
Initialize LAM-MPI:
lamboot
This lets you run several processes on your machine (or on a predefined set of machines) in parallel.
□
If you have your own host description file (like Example/mynode) listing the machines that can run your code, then you can start MPI with
lamboot -v mynode
If lamboot reports an error message, try lamboot -vd mynode to get a detailed report of the booting in progress.
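The contents of the file Example/mynode are not shown in these notes; a hypothetical LAM boot schema file could look like the following, with one machine per line, an optional CPU count, and # starting a comment (the hostnames here are placeholders):

```
# hypothetical boot schema for lamboot
node01.example.org
node02.example.org cpu=2
node03.example.org
```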

1.6 Terminating LAM-MPI

□
Kill all of your MPI processes that did not terminate properly:
lamclean
□
Terminate your MPI session:
wipe

Gundolf Haase 2003-05-19