
Incremental Audio-Visual Fusion for Person Recognition in Earthquake Scene

ACM Transactions on Multimedia Computing, Communications, and Applications (2024)

Abstract
Earthquakes have a profound impact on social harmony and property, resulting in damage to buildings and infrastructure. Effective earthquake rescue efforts require rapid and accurate determination of whether any survivors are trapped in the rubble of collapsed buildings. While deep learning algorithms can enhance the speed of rescue operations using single-modal data (either visual or audio), they are confronted with two primary challenges: insufficient information provided by single-modal data and catastrophic forgetting. In particular, the complexity of earthquake scenes means that single-modal features may not provide adequate information. Additionally, catastrophic forgetting occurs when the model loses the information learned in a previous task after training on subsequent tasks, due to non-stationary data distributions in changing earthquake scenes. To address these challenges, we propose an innovative approach that utilizes an incremental audio-visual fusion model for person recognition in earthquake rescue scenarios. Firstly, we leverage a cross-modal hybrid attention network to capture discriminative temporal context embedding, which uses self-attention and cross-modal attention mechanisms to combine multi-modality information, enhancing the accuracy and reliability of person recognition. Secondly, an incremental learning model is proposed to overcome catastrophic forgetting, which includes elastic weight consolidation and feature replay modules. Specifically, the elastic weight consolidation module slows down learning on certain weights based on their importance to previously learned tasks. The feature replay module reviews the learned knowledge by reusing the features conserved from the previous task, thus preventing catastrophic forgetting in dynamic environments. To validate the proposed algorithm, we collected the Audio-Visual Earthquake Person Recognition (AVEPR) dataset from earthquake films and real scenes. 
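The cross-modal hybrid attention described above combines audio and visual streams by letting queries from one modality attend over features of the other. A minimal sketch of that cross-modal attention step is below; this is an illustration of standard scaled dot-product attention, not the authors' exact network, and all shapes, names, and dimensions are toy assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(queries, keys_values, d_model):
    # Queries from one modality (e.g. audio timesteps) attend over the
    # features of the other modality (e.g. visual frames); the output is
    # a fused embedding for each query position.
    scores = queries @ keys_values.T / np.sqrt(d_model)
    weights = softmax(scores, axis=-1)  # each query row sums to 1
    return weights @ keys_values

audio = np.random.rand(4, 8)    # 4 audio timesteps, feature dim 8 (toy sizes)
visual = np.random.rand(6, 8)   # 6 visual frames, feature dim 8
fused = cross_modal_attention(audio, visual, 8)
```

Self-attention within a single modality is the same operation with `queries` and `keys_values` drawn from the same feature sequence.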
Furthermore, the proposed method achieves 85.41% accuracy while learning the 10th new task, which demonstrates its effectiveness and highlights its potential to significantly improve earthquake rescue efforts.
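The elastic weight consolidation module mentioned in the abstract slows learning on weights that were important for earlier tasks by adding a quadratic penalty weighted by a per-parameter importance estimate (the Fisher information). A minimal sketch of that penalty term follows; the function name, the scalar `lam`, and the toy values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=100.0):
    # Elastic weight consolidation regularizer: parameters with high Fisher
    # importance for the previous task are pulled back toward their old
    # values theta_star, which slows learning on those weights.
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)
```

During training on a new task, this penalty is simply added to the new task's loss, so gradient descent trades off new-task fit against drift on weights the old task depended on.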
Keywords
Cross-modal audio-visual fusion, incremental learning, person recognition, elastic weight consolidation, feature replay