
2 Your first parallel code




E1: Compile the program in Example/firstc (Example/firstf) for archi = LINUX via make.
Start the program!

The following MPI functions require a communicator as a parameter. The communicator describes the group of processes that are covered by the corresponding MPI function. By default, all processes are collected in MPI_COMM_WORLD, which is one of the constants supplied by MPI. We restrict the examples to operations on this global communicator. For this purpose, declare a variable of the special MPI type via MPI_Comm icomm = MPI_COMM_WORLD; and pass it as the communicator parameter.
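As a minimal sketch, the declaration reads as follows; icomm is then passed wherever an MPI function expects a communicator in the exercises below:

MPI_Comm icomm = MPI_COMM_WORLD;   /* the group of all processes */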




E2: Write your first parallel program by implementing

MPI_Init and MPI_Finalize,

then compile the program and start 4 processes via

mpirun -c 4 -lamd first.LINUX
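A minimal sketch of what first.c might contain; only the calls to MPI_Init and MPI_Finalize are prescribed by the exercise, the surrounding main() skeleton is an assumption:

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);   /* start the MPI environment */
    /* parallel part of the program goes here */
    MPI_Finalize();           /* shut down MPI; no MPI calls after this */
    return 0;
}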


E3: Implement the routines

MPI_Comm_rank and MPI_Comm_size

in order to determine the local process id and the number of running processes. Let the master process (id 0) write the number of running processes. Start several processes.
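A sketch extending the skeleton from E2; the variable names myid and numprocs are only assumptions, taken from the Greetings signature in E4:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, numprocs;
    MPI_Comm icomm = MPI_COMM_WORLD;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(icomm, &myid);      /* local process id: 0 .. numprocs-1 */
    MPI_Comm_size(icomm, &numprocs);  /* number of running processes */
    if (myid == 0)                    /* only the master process writes */
        printf("Running %d processes.\n", numprocs);
    MPI_Finalize();
    return 0;
}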


E4: Implement the routine

Greetings(myid, numprocs, icomm)

given in greetings.c (greetings.f). Study the routines

MPI_Send and MPI_Recv!
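The actual routine is supplied in greetings.c; the following is only a sketch of how Greetings could combine the two calls (the buffer size and the message tag are assumptions):

#include <stdio.h>
#include <string.h>
#include <mpi.h>

void Greetings(int myid, int numprocs, MPI_Comm icomm)
{
    char message[100];
    const int tag = 0;           /* arbitrary message tag (assumption) */
    MPI_Status status;

    if (myid != 0) {             /* every other process greets the master ... */
        sprintf(message, "Greetings from process %d!", myid);
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, 0, tag, icomm);
    }
    else {                       /* ... the master receives and prints in rank order */
        int source;
        for (source = 1; source < numprocs; source++) {
            MPI_Recv(message, 100, MPI_CHAR, source, tag, icomm, &status);
            printf("%s\n", message);
        }
    }
}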


Gundolf Haase 2003-05-19