Parallel Programming with MPI

Content

The efficient use of modern parallel computers depends on exploiting parallelism at all levels: hardware, programming, and algorithms. After a brief overview of basic concepts of parallel processing, the course presents in detail the specific concepts and language features of the Message Passing Interface (MPI) for programming parallel applications. The most important parallelization constructs of MPI are explained and applied in hands-on exercises. The parallelization of algorithms is demonstrated with simple examples, and their implementation as MPI programs is studied in practical exercises.

Contents:

  • Fundamentals of parallel processing (computer architectures and programming models)
  • Introduction to the Message Passing Interface (MPI)
  • The main language constructs of MPI-1 and MPI-2 (point-to-point communication, collective communication including synchronization, parallel operations, data structures, parallel I/O, process management)
  • Demonstrations and practical exercises with Fortran, C and Python source codes for all topics; practice in parallelizing sample programs; analysis and optimization of parallel efficiency (a minimal point-to-point sketch in C follows this list)
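
The following is a minimal point-to-point sketch in C, assuming a standard MPI installation; it is compiled with an MPI wrapper such as mpicc and started with mpirun or mpiexec on at least two processes. The transferred value and the message tag are illustrative choices, not taken from the course material.

  /* Rank 0 sends one integer to rank 1 with blocking point-to-point calls. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[]) {
      int rank, size, value;

      MPI_Init(&argc, &argv);                 /* start the MPI runtime      */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank        */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes  */

      if (size < 2) {
          if (rank == 0) printf("Run with at least 2 processes.\n");
          MPI_Finalize();
          return 0;
      }

      if (rank == 0) {
          value = 42;
          /* blocking send of one int to rank 1, message tag 0 */
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          /* blocking receive from rank 0, matching tag 0 */
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("Rank 1 received %d from rank 0\n", value);
      }

      MPI_Finalize();                         /* shut down the MPI runtime  */
      return 0;
  }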

Requirements

  • Using the GWDG Scientific Compute Cluster - An Introduction, or equivalent knowledge
  • Practical experience with Fortran, C or Python
  • For the practical exercises: GWDG account (preferable) or course account (available upon request), and your own notebook

Learning goal

  • Using MPI to parallelize algorithms so that parallel calculations can run on several compute nodes (a small worked example with a collective reduction is sketched below)
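
As a small worked example of this goal, the sketch below distributes a midpoint-rule integration of 4/(1+x^2) over [0,1] across all ranks and combines the partial sums with the collective operation MPI_Reduce to approximate pi on rank 0. The number of intervals is an arbitrary illustrative choice, not taken from the course material.

  /* Each rank integrates every size-th interval; rank 0 collects the sum. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[]) {
      const long n = 1000000;                 /* number of integration intervals */
      int rank, size;
      double h, local_sum = 0.0, pi = 0.0;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      h = 1.0 / (double)n;
      for (long i = rank; i < n; i += size) { /* round-robin distribution of intervals */
          double x = h * ((double)i + 0.5);   /* midpoint of interval i */
          local_sum += 4.0 / (1.0 + x * x);
      }
      local_sum *= h;

      /* combine the partial sums on rank 0 */
      MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

      if (rank == 0)
          printf("pi is approximately %.12f\n", pi);

      MPI_Finalize();
      return 0;
  }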

Skills

Trainer

  • Prof. Dr. Oswald Haan

Next appointment

Date: 06.05.2025
Link: https://academy.gwdg.de/p/event.xhtml?id=673315795d441669671bc616