This advanced MPI/OpenMP course covers challenges that developers of parallel code face in everyday work and presents working solutions for them. You will learn how to profile parallel code and explore the knobs and dials that let your code achieve the best possible performance, using techniques such as domain decomposition and parallel I/O. Each session includes hands-on exercises to reinforce the different constructs. You will also gain insight into useful parallel libraries and routines for scientific code development.
In this course you will:
- Understand how to work with MPI and OpenMP with many examples from scientific applications
- Learn when and how to apply different parallelization strategies
- Experience how to develop and optimize code step by step for use on a supercomputer
This course is aimed at everyone interested in learning how to make efficient use of MPI and OpenMP in scientific applications.
Prerequisites:
- Basic knowledge of Linux
- Basic knowledge of programming, particularly with C/C++ or Fortran
- Basic knowledge of parallel computing. No specific experience with supercomputing systems is necessary.
- Basic knowledge of MPI and OpenMP constructs (provided in the basic course)
You should have:
Your own laptop with an up-to-date browser and a terminal emulator. Linux or macOS is preferred as the operating system, but not mandatory. For Windows users, we recommend downloading MobaXterm (portable version) as a terminal emulator.
If you are not yet familiar with MPI/OpenMP, the basic course will provide you with the necessary background beforehand.