FairMQ is designed to help implement large-scale data processing workflows needed in next-generation particle physics experiments.
Next-generation particle physics experiments at GSI/FAIR and CERN face unprecedented data processing challenges. Expected data rates require a non-trivial amount of high performance computing (HPC) resources, on the order of thousands of CPU/GPU cores per experiment. Online (synchronous) data processing (compression) is crucial to stay within storage capacity limits. The complexity of the tasks that must be performed during online data processing is significantly higher than ever before: tasks like calibration and track finding, which classically run in an offline (asynchronous) environment, now have to run online in a high-performance, high-throughput environment.
The FairMQ C++ library is designed to aid the implementation of such large-scale online data processing workflows by providing an asynchronous message passing abstraction over multiple data transports (such as ZeroMQ and shared memory), together with basic building blocks like a device state machine, while remaining agnostic to the data format being transported.
FairMQ is not an end-user application, but a library and framework used by software experts to implement higher-level experiment-specific applications.
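To give a flavor of what implementing on top of FairMQ looks like, here is a minimal sketch of a device following the pattern of the examples shipped with the library. The device name `Sampler`, the channel name `data`, and the payload are illustrative assumptions; channel endpoints and transports are configured externally at launch time, not in code.

```cpp
#include <fairmq/Device.h>
#include <fairmq/runDevice.h>

#include <boost/program_options.hpp>
#include <cstring> // std::memcpy
#include <memory>

// Illustrative device that repeatedly publishes a small message.
// The channel name "data" is an assumption; channels and transports
// (e.g. zeromq, shmem) are supplied via external configuration.
struct Sampler : fair::mq::Device
{
    bool ConditionalRun() override
    {
        // Allocate a message from the transport bound to the "data" channel.
        auto msg(NewMessage(5));
        std::memcpy(msg->GetData(), "hello", 5);

        // Queue the message for sending; a negative return value signals a
        // transfer error, and returning false ends the Running state.
        return Send(msg, "data") >= 0;
    }
};

// Hooks expected by the runDevice.h launcher used in the FairMQ examples.
void addCustomOptions(boost::program_options::options_description& /*options*/) {}

std::unique_ptr<fair::mq::Device> getDevice(fair::mq::ProgramOptions& /*config*/)
{
    return std::make_unique<Sampler>();
}
```

Because the channel configuration is supplied at startup rather than hard-coded, the same device code can run over different transports without modification.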
Screenshot of AliceO2 Debug GUI showing the data processing workflow of a single event processing node.
The screenshot visualizes the data processing workflow on a single ALICE event processing node (the "O2 Framework debug GUI" tool shown is part of the AliceO2 project). Data flows logically along the yellow edges (in this case via the FairMQ shmem data transport) through the various processing stages, some of which are implemented as GPU algorithms and others as CPU algorithms.
Although initially designed with online data processing in mind, FairMQ has also been used successfully to parallelize offline simulation and analysis workloads.