Tutorial High Performance Computing

Geneva, Sept. 5, 2014

Prof. Dr. DI Gundolf Haase, University of Graz, Institute for Mathematics and Scientific Computing


Schedule

Pool computers:
Graz: 143.50.47.166 to server 143.50.47.128

Supplementary material

Goals

This tutorial introduces the field of High Performance Computing (HPC) in mathematics and the engineering sciences, with special focus on MPI, OpenMP, OpenACC and CUDA. The students will become familiar with recent and future hardware concepts as well as with the supporting software standards. The course work is organized such that all course topics are implemented on the appropriate hardware, ranging from a single CPU via multiple CPUs to clusters of CPUs and GPUs. The students will be able to adapt research-specific code so that it takes advantage of the available computer resources. The main goals of the course are:

  I. Knowledge of algorithms and data structures for HPC, and active use of this knowledge.

  II. The students become familiar with HPC-related concepts and architectures and are able to apply new developments in this area to the problem under consideration.

  III. Standard compiler and software support for parallel computer architectures is known to the students and used for solving mathematical problems on HPC hardware.

  IV. The students are able to write and adapt parallel programs on various parallel platforms.

Prerequisites for the students

  1. Basic knowledge in numerical linear algebra

  2. Programming skills in C and/or C++

  3. English language skills

Topics of the extended course with further links and further reading

 1. Introduction to recent processor development
 2. Practical work with sequential HPC programming (ex1)
    Seminar talk on "HPC and Mathematics in Applications"
 3. Classification of parallel programming; shared resources
 4. Introduction to OpenMP, with practical work (see the minimal OpenMP sketch after this list)
 5. Practical work with OpenMP (ex2)
 6. Introduction to MPI (see the minimal MPI sketch after this list)
 7. Practical work with MPI (ex3)
 8. Practical work with MPI and/or OpenMP
 9. Continuation of the practical work with MPI and/or OpenMP; comparison of the results
10. Introduction to GPU programming: hardware and software (see the minimal CUDA sketch after this list)
11. First steps with OpenACC (see the minimal OpenACC sketch after this list)
12. Practical work with OpenACC
13. Performance tools available in CUDA and PGI-OpenACC
14. Practical work with GPU code and performance tools
15. Multiple-GPU programming
16. Practical work mixing MPI and GPU programming
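
As a foretaste of the OpenMP units, the following minimal sketch (an illustration added here, not taken from the course material) parallelizes a daxpy-style loop over the available CPU cores with a single directive; the array size and values are arbitrary.

/* Minimal OpenMP sketch: one pragma distributes the loop iterations
   over the available CPU threads. Compile e.g. with gcc -fopenmp. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double x[N], y[N];

int main(void)
{
    const double alpha = 2.0;

    for (int i = 0; i < N; ++i) { x[i] = 1.0; y[i] = 2.0; }

    /* Each thread processes a contiguous chunk of the index range. */
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        y[i] = alpha * x[i] + y[i];

    printf("y[0] = %f, max threads = %d\n", y[0], omp_get_max_threads());
    return 0;
}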
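
For the MPI units, a correspondingly minimal sketch shows the initialization, rank/size query and finalization that every MPI program contains; it assumes an MPI installation providing mpicc and mpirun.

/* Minimal MPI sketch: each process reports its rank within MPI_COMM_WORLD.
   Build with mpicc, run e.g. with: mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}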
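
The OpenACC units can be previewed with the sketch below, assuming a compiler with OpenACC support such as PGI's pgcc -acc; a single directive offloads the loop to an accelerator if one is present and otherwise runs it on the host.

/* Minimal OpenACC sketch: the directive offloads the loop to the GPU;
   copyin/copy describe the host-device data movement. */
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

int main(void)
{
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; ++i)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}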
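
Finally, the GPU units build on CUDA; the sketch below is again only illustrative and uses unified memory (cudaMallocManaged) to stay short, launching a simple vector-add kernel and synchronizing before the result is read on the host.

/* Minimal CUDA sketch: a vector-add kernel launched with 256 threads per
   block. Build with nvcc. Unified memory keeps the example short;
   explicit cudaMemcpy is the more general pattern. */
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void add(const float *x, const float *y, float *z, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        z[i] = x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x, *y, *z;

    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    cudaMallocManaged(&z, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    add<<<(n + 255) / 256, 256>>>(x, y, z, n);
    cudaDeviceSynchronize();

    printf("z[0] = %f\n", z[0]);
    cudaFree(x); cudaFree(y); cudaFree(z);
    return 0;
}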

Books