DL4PuDe

DL4PuDe is an AI-based framework that automatically detects pushing behavior in crowd videos, supporting a better understanding of pushing dynamics. This knowledge is crucial for developing effective crowd management strategies and for designing public places that are safer and more comfortable.

What DL4PuDe can do for you

Requirements: Python 3.7 or 3.8, a GPU, and 16 GB of RAM.

DL4PuDe is an AI-based framework that automatically detects and annotates pushing behavior in crowd videos.

DL4PuDe was published in a peer-reviewed journal; the paper is:

Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2022. "A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics." Sensors 22, no. 11: 4040.

A list of papers citing this work is linked from the repository.

Abstract

Crowded event entrances could threaten the comfort and safety of pedestrians, especially when some pedestrians push others or use gaps in crowds to gain faster access to an event. Studying and understanding pushing dynamics leads to designing and building more comfortable and safe entrances. To understand pushing dynamics, researchers observe and analyze recorded videos to manually identify when and where pushing behavior occurs. Although the manual method is accurate, it is time-consuming and tedious, and identifying pushing behavior can be difficult in some scenarios. In this article, we propose a hybrid deep learning and visualization framework that aims to assist researchers in automatically identifying pushing behavior in videos. The proposed framework comprises two main components: (i) deep optical flow and wheel visualization to generate motion information maps, and (ii) a combination of an EfficientNet-B0-based classifier and a false reduction algorithm to detect pushing behavior at the video patch level. In addition to the framework, we present a new patch-based approach to enlarge the training data and alleviate the class imbalance problem in small-scale pushing behavior datasets. Experimental results (using real-world ground truth of pushing behavior videos) demonstrate that the proposed framework achieves an 86% accuracy rate. Moreover, the EfficientNet-B0-based classifier outperforms baseline CNN-based classifiers in terms of accuracy.
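The motion information maps come from deep optical flow (the framework uses RAFT, acknowledged below) rendered with the standard color-wheel encoding: flow direction maps to hue and flow magnitude to brightness. As a rough, self-contained illustration of that idea only, the sketch below substitutes OpenCV's Farneback optical flow for RAFT; the file names are hypothetical, and this is not the authors' pipeline.

# Minimal sketch of color-wheel motion maps (illustrative only).
# DL4PuDe uses RAFT deep optical flow; OpenCV's Farneback flow
# stands in here so the example stays self-contained.
import cv2
import numpy as np

def motion_map(prev_gray, curr_gray):
    # Dense optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*prev_gray.shape, 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2   # direction -> hue (color wheel)
    hsv[..., 1] = 255                     # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # speed -> brightness
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

cap = cv2.VideoCapture("crowd.mp4")       # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(f"motion_map_{frame_idx:05d}.png", motion_map(prev_gray, gray))
    prev_gray = gray
    frame_idx += 1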

Examples

Example: input video and annotated output video* (the media files are available in the repository).

*The framework detects pushing patches every 12 frames (12/25 s, i.e., 0.48 s at 25 fps); the red boxes mark the pushing patches.
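The per-patch decision is made by the EfficientNet-B0-based classifier described in the abstract. The sketch below shows roughly what such patch-level classification could look like; the model here is untrained with hypothetical preprocessing (the released trained classifiers are in the repository), so treat it as an outline, not the framework's code.

# Hedged sketch: patch-level classification with EfficientNet-B0.
# Names and preprocessing are illustrative, not the authors' pipeline.
import numpy as np
import tensorflow as tf

# A binary head on top of EfficientNet-B0 (pushing vs. non-pushing patches).
# Untrained here; DL4PuDe ships trained classifiers in its repository.
base = tf.keras.applications.EfficientNetB0(include_top=False, weights="imagenet",
                                            input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(1, activation="sigmoid")])

def classify_patch(motion_map_patch: np.ndarray) -> bool:
    # Resize the motion-map patch to the network input and threshold the score.
    x = tf.image.resize(motion_map_patch, (224, 224))[tf.newaxis]
    return float(model(x, training=False)[0, 0]) > 0.5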

Installation

  1. Clone the repository in your directory.
git clone https://github.com/PedestrianDynamics/DL4PuDe.git
  2. Install the required libraries.
cd DL4PuDe
pip install -r libraries.txt
  3. Run the framework.
python3 run.py --video [input video path]
               --roi [x coordinate of top-left ROI corner] [y coordinate of top-left ROI corner]
                     [x coordinate of bottom-right ROI corner] [y coordinate of bottom-right ROI corner]
               --patch [rows] [cols]
               --ratio [scale of the video]
               --angle [angle in degrees for rotating the input video so that the crowd flows
                        from left to right --->]
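For orientation, --roi takes the two corners of the region of interest and --patch divides that region into a rows x cols grid of patches. A minimal sketch of how such patch boxes can be derived from the ROI corners (the helper name is hypothetical and not part of run.py):

def patch_grid(x1, y1, x2, y2, rows, cols):
    # Split the ROI spanning (x1, y1) to (x2, y2) into rows x cols patch boxes.
    w, h = (x2 - x1) / cols, (y2 - y1) / rows
    return [(int(x1 + c * w), int(y1 + r * h),
             int(x1 + (c + 1) * w), int(y1 + (r + 1) * h))
            for r in range(rows) for c in range(cols)]

# With the demo's arguments (--roi 380 128 1356 1294 --patch 3 3),
# this yields nine patch boxes covering the 976 x 1166 pixel ROI.
for box in patch_grid(380, 128, 1356, 1294, 3, 3):
    print(box)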

Demo

Run the following command:

python3 run.py --video ./videos/150.mp4  --roi 380 128 1356 1294 --patch 3 3 --ratio 0.5  --angle 0

The framework will then display its progress details.

When the framework finishes, it saves the annotated video in the framework directory. Note that the annotated version of video 150 is available in the repository root as "150-demo.mp4".
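To annotate several videos with the same settings, the command line can be wrapped in a small loop. A sketch, reusing the demo's arguments (the folder and file pattern are hypothetical):

import subprocess
from pathlib import Path

# Run the framework on every .mp4 in ./videos with the demo's settings.
for video in sorted(Path("./videos").glob("*.mp4")):
    subprocess.run(["python3", "run.py",
                    "--video", str(video),
                    "--roi", "380", "128", "1356", "1294",
                    "--patch", "3", "3",
                    "--ratio", "0.5",
                    "--angle", "0"],
                   check=True)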

More information is available in the corresponding GitHub repository:

  1. Source code for building, training, and evaluating CNN-based classifiers.
  2. Test set.
  3. Trained CNN-based classifiers.
  4. Video experiments.

Citation

If you utilize this framework or the generated dataset in your work, please cite the paper and the software as follows:

Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2022. "A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics." Sensors 22, no. 11: 4040.
Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2023. "DL4PuDe: A Hybrid Framework of Deep Learning and Visualization for Pushing Behavior Detection in Pedestrian Dynamics." Zenodo. https://doi.org/10.5281/zenodo.6433908

Acknowledgments

  • This work was funded by the German Federal Ministry of Education and Research (BMBF: funding number 01DH16027) within the Palestinian-German Science Bridge project framework, and partially by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—491111487.

  • Thanks to the Forschungszentrum Juelich, Institute for Advanced Simulation-7, for making the Pedestrian Dynamics Data Archive publicly accessible under the CC Attribution 4.0 International license.

  • Thanks to Anna Sieben, Helena Lügering, and Ezel Üsten for developing the rating system and annotating the pushing behavior in the video experiments.

  • Thanks to the authors of the paper "RAFT: Recurrent All-Pairs Field Transforms for Optical Flow" for making the RAFT source code available.

Programming languages
  • Jupyter Notebook 98%
  • Python 2%
License
  • BSD-3-Clause
Source code: https://github.com/PedestrianDynamics/DL4PuDe

Participating organisations

Forschungszentrum Jülich

Contributors

  • Ahmed Alia, Researcher and PhD Student, Forschungszentrum Jülich
  • Mohcine Chraibi, Supervisor, Forschungszentrum Jülich