PIConGPU

PIConGPU is a relativistic Particle-in-Cell (PIC) code running on graphics processing units as well as regular multi-core processors. It is open source and freely available for download. It can be used to study plasmas with relativistic dynamics by solving the Maxwell-Vlasov system of equations.
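
For reference, the Vlasov equation solved for each particle species $s$ with distribution function $f_s(\mathbf{x}, \mathbf{p}, t)$ reads, in standard relativistic form,

$$
\frac{\partial f_s}{\partial t}
+ \frac{\mathbf{p}}{\gamma m_s} \cdot \nabla_{\mathbf{x}} f_s
+ q_s \left( \mathbf{E} + \frac{\mathbf{p}}{\gamma m_s} \times \mathbf{B} \right) \cdot \nabla_{\mathbf{p}} f_s = 0,
\qquad
\gamma = \sqrt{1 + \frac{|\mathbf{p}|^2}{(m_s c)^2}},
$$

coupled to Maxwell's equations for $\mathbf{E}$ and $\mathbf{B}$, whose charge and current densities are moments of the $f_s$. The particle-in-cell method samples $f_s$ with macroparticles and discretizes the fields on a grid.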


What PIConGPU can do for you

PIConGPU is an extremely scalable and platform-portable application for particle-in-cell simulations. While we mainly use it to study laser-plasma interactions, it has also found use in astrophysical studies and in simulations of matter under extreme conditions.

PIConGPU was a finalist for the prestigious Gordon Bell Award in 2013 and has since been a flagship application for a number of leading-edge high performance computing (HPC) systems (Titan, JUWELS Booster, Frontier). Through this work, PIConGPU has established strong ties with many national and international partners; in particular, its underlying hardware-agnostic libraries alpaka and LLAMA have also been adopted in the CERN LHC software stack. Another collaborative effort driven by PIConGPU is the standardization of data formats for plasma physics via openPMD, which is becoming one of the leading data standards in the community.

Video

The video is available on YouTube.
A snapshot from a simulation of an ultrashort, high-intensity laser pulse (orange-striped sphere) driving a plasma wave in ionized helium gas on the Oak Ridge Leadership Computing Facility’s (OLCF) Summit supercomputer. Purple areas highlight the electron density. Streams depict the stronger (red) and weaker (green and blue) electric fields.
This image was generated using ISAAC, a tool for visualizing simulations in real time, while the Frontier supercomputer was being built at OLCF. Image courtesy of Felix Meyer/Helmholtz-Zentrum Dresden-Rossendorf.

Awards

Gordon Bell finalist 2013

Our pioneering astrophysics research combined particle-in-cell simulations with spectrally and directionally resolved radiation simulations and achieved remarkable performance on Titan, the world's largest computer system at the time. As a result, PIConGPU became one of six Gordon Bell Award finalists.

HZDR Technology and Innovation Award 2013

For its innovative use of GPU clusters, the PIConGPU development team received the 2013 Helmholtz-Zentrum Dresden-Rossendorf Technology and Innovation Award.

Helmholtz Best Scientific Image Contest 2022

In 2022, an in situ, live rendering of a PIConGPU laser-wakefield simulation, created with ISAAC, took second place.

Project involvement

PIConGPU participates in the following projects:

Plasma-PEPSC

Plasma-PEPSC drives plasma science using HPC and data analytics to tackle Grand Challenges. It optimizes plasma simulation codes for exascale computing, focusing on plasma-material interfaces, fusion, accelerators, and space plasma dynamics. The initiative unites experienced scientists and collaborates closely with other EuroHPC projects.

CAAR - Center for Accelerated Application Readiness

CAAR, the Center for Accelerated Application Readiness led by OLCF, prepared scientific applications for the Frontier supercomputer, enhancing simulations, data tasks, and machine learning for exascale performance. With vendor support and early hardware access, CAAR readied these applications for large-scale scientific work on Frontier in 2022.

JUWELS Booster Early Access

Performance scaling

Through continuous engagement in various HPC initiatives, we systematically evaluate the performance of PIConGPU on a range of computing clusters. Below, we present the latest weak-scaling results for LUMI, currently the largest supercomputer in Europe.
Figure: PIConGPU weak scaling on LUMI.

Publications

A current list of all publications in which PIConGPU has been used can be found in our documentation. Our users also regularly submit their own publications for inclusion in the list.

Community map

PIConGPU has users (and developers) on almost every continent. To learn who and where they are, we offer voluntary registration in our community map (static version here; an interactive version is available online).
Figure: PIConGPU community map.

Related projects

alpaka

PIConGPU achieves hardware parallelization through the alpaka library, a header-only C++17 abstraction library for accelerator development. alpaka offers performance portability across accelerators, supports CPUs as well as GPUs, and provides multiple backends for concurrent execution, so no CUDA- or threading-specific code has to be written. Its approach mirrors CUDA's grid-blocks-threads model for optimal hardware adaptation.
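
As a minimal sketch of what an alpaka kernel looks like (kernel only; host-side setup of devices, queues, and buffers is omitted, and details may differ between alpaka versions):

```cpp
#include <alpaka/alpaka.hpp>
#include <cstddef>

// Sketch of an alpaka kernel: a functor templated on the accelerator
// type, so the same source compiles for CPU and GPU backends alike.
struct AxpyKernel
{
    template<typename TAcc>
    ALPAKA_FN_ACC void operator()(
        TAcc const& acc,
        float a,
        float const* x,
        float* y,
        std::size_t n) const
    {
        // Global thread index, following the grid-blocks-threads model.
        auto const i = alpaka::getIdx<alpaka::Grid, alpaka::Threads>(acc)[0];
        if(i < n)
            y[i] = a * x[i] + y[i];
    }
};
```

Because the accelerator is a template parameter, the backend (e.g. CUDA or OpenMP) is chosen at compile time without changing the kernel itself.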

LLAMA

In PIConGPU, we utilize LLAMA, a cross-platform C++ library that abstracts data layout and memory access. LLAMA decouples the algorithm's view of memory from the actual layout, ensuring performance portability across diverse hardware, simplifying development, and enabling efficient use of heterogeneous systems.
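
To illustrate the problem LLAMA solves, here is a plain C++ sketch of the two classic layouts it can map between (this shows the concept, not the LLAMA API itself):

```cpp
#include <cstddef>
#include <vector>

// The layout problem LLAMA abstracts, shown in plain C++: the same logical
// record (a particle with position and momentum) stored two different ways.

// Array of structs (AoS): all fields of one particle are contiguous.
struct ParticleAoS { float x, y, z, px, py, pz; };
using ParticlesAoS = std::vector<ParticleAoS>;  // access: aos[i].x

// Struct of arrays (SoA): each field is contiguous across particles,
// which typically vectorizes and coalesces better on GPUs.
struct ParticlesSoA
{
    std::vector<float> x, y, z, px, py, pz;     // access: soa.x[i]
};

// With LLAMA, the algorithm is written once against a record dimension and
// accesses elements through a view, e.g. view(i)(tag::X{}); whether the
// bytes are laid out as AoS, SoA, or a blocked hybrid is selected by a
// mapping parameter, without changing the physics code.
```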

ISAAC

In order to investigate and visualize simulations directly as they are running, PIConGPU relies on ISAAC. This library visualizes data in real time by leveraging the computational power of accelerators such as GPUs, without transferring vast amounts of data to the host. With a server, an in situ library, and an adaptable HTML5 client, ISAAC integrates seamlessly into simulations, allowing users to observe and even interact with computations as they happen, which makes data analysis and simulation optimization more efficient and flexible.

openPMD

PIConGPU initiated the openPMD standard for parallel data output together with LBNL. By adhering to openPMD, PIConGPU ensures portability between various applications and algorithms, facilitates open-access data description, and streamlines post-processing, visualization, and analysis. Users benefit from simplified data exchange and visualization tools, and from the flexibility to choose their preferred data format while still adhering to openPMD conventions.
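
As an illustration, output that follows openPMD can be read back with the openPMD-api library. A minimal C++ sketch, where the file name pattern and record names are assumptions for illustration only:

```cpp
#include <openPMD/openPMD.hpp>
#include <iostream>

// Sketch: open an openPMD series and print the extent of the E_x field
// record at each output step. Path and record names are assumptions.
int main()
{
    auto series = openPMD::Series(
        "simData_%T.bp", openPMD::Access::READ_ONLY);

    for(auto& [step, iteration] : series.iterations)
    {
        auto E_x = iteration.meshes["E"]["x"];
        std::cout << "step " << step << ": extent";
        for(auto const e : E_x.getExtent())
            std::cout << ' ' << e;
        std::cout << '\n';
    }
    return 0;
}
```

The same code works regardless of whether the series was written as ADIOS2 or HDF5, which is the portability openPMD is meant to provide.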

PICMI

PIConGPU has adopted the Particle-In-Cell Modeling Interface (PICMI) standard to establish a common language for naming and structuring input files. By adhering to PICMI, PIConGPU ensures compatibility and consistency among various PIC codes, facilitating shared syntax for common definitions and tasks such as defining grids and field solvers. This standardization enhances accessibility, eases code comparisons, and paves the way for future automation and language-agnostic implementations, promoting a unified approach to PIC simulations within the community.

PMacc

PMacc, an acronym for 'Particle Mesh Accelerated', provides fundamental parallel algorithms and data structures critical to the core operations of PIConGPU. It serves as the foundation upon which all physics methods are built. The library remains an integral part of the PIConGPU framework.

Information for users

PIConGPU offers a range of valuable resources for new users. The official documentation serves as a comprehensive guide covering installation, setup, and usage. The GitHub repository is the central hub for development and collaboration, housing the source code and additional documentation. User mailing lists provide timely, essential information when breaking changes are introduced, while our list of academic papers showcases real-world applications. We also offer training workshops, and you can always contact the developers directly for personalized support.

Information for developers

PIConGPU is completely open source on GitHub. Anyone can open issues or submit pull requests. Guidance on how to get involved as a developer can be found in our documentation and on our wiki page.

Programming languages
  • C++ 76%
  • Python 16%
  • Shell 6%
  • CMake 1%
  • Mustache 1%
License
  • GPL-3.0-only
Source code
Packages
hub.docker.com

Participating organisations

Center for Advanced Systems Understanding
Helmholtz-Zentrum Dresden-Rossendorf

Contributors

  • René Widera, Maintainer, Helmholtz-Zentrum Dresden-Rossendorf
  • Klaus Steiniger, Maintainer, Helmholtz-Zentrum Dresden-Rossendorf
  • Richard Pausch, Maintainer, Helmholtz-Zentrum Dresden-Rossendorf
  • Michael Bussmann, Scientific Supervisor, Center for Advanced Systems Understanding
  • Finn-Ole Carstens, Developer, Helmholtz-Zentrum Dresden-Rossendorf
  • Franz Pöschel, Developer, Center for Advanced Systems Understanding
  • Marco Garten, Maintainer, Lawrence Berkeley National Laboratory
  • Paweł Ordyna, Developer, Helmholtz-Zentrum Dresden-Rossendorf
  • Brian Edward Marré, Developer, Helmholtz-Zentrum Dresden-Rossendorf