ACF: An Adaptive Compression Framework for Multimodal Network in Embedded Devices

IEEE Trans. Mob. Comput. (2024)

Ubiquitous Internet-of-Things (IoT) devices generate vast amounts of multimodal data, and the deep multimodal fusion network (DMFN) is a promising technology for processing it. Deploying DMFNs locally on embedded IoT devices is an attractive way to provide privacy-preserving and robust sensing services. However, current compression methods suffer from two limitations. First, they are designed for unimodal networks or specific model structures, so they are hard to extend to diverse DMFNs. Second, existing works ignore the disparate computational demands of different samples and modalities: easy samples and redundant modalities consume the same computational resources as complex samples and informative modalities. We propose an Adaptive Compression Framework (ACF) for DMFNs to address these challenges. It enables input-dependent runtime compression locally on resource-constrained embedded devices. Specifically, we propose an offline model-transformation module that upgrades a static network with two kinds of dynamic components to support online structural adjustment. We then design a lightweight policy network that generates multi-granularity, data-dependent compression strategies for different model parts. Finally, we evaluate ACF on four DMFNs across three embedded platforms. Compared with the best results of existing schemes, ACF achieves up to 2.61× latency reduction and 2.30× energy-consumption reduction, with up to 3.57% accuracy improvement.
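To make the abstract's idea of input-dependent compression concrete, the following is a minimal, self-contained sketch of a per-modality compression policy. It is an illustrative assumption, not the paper's implementation: the difficulty heuristic (feature variance), the thresholds, and all function names are hypothetical, and the real ACF uses a learned lightweight policy network rather than a hand-written rule.

```python
# Hypothetical sketch of an ACF-style input-dependent compression policy.
# All names, thresholds, and the variance heuristic are illustrative
# assumptions; the paper's actual policy is a learned lightweight network.

def sample_difficulty(features):
    """Proxy difficulty score: population variance of the input features."""
    n = len(features)
    mean = sum(features) / n
    return sum((x - mean) ** 2 for x in features) / n

def compression_policy(modality_inputs, skip_thresh=0.1, prune_thresh=1.0):
    """Map each modality to a strategy: 'skip', 'prune', or 'full'."""
    strategies = {}
    for name, feats in modality_inputs.items():
        score = sample_difficulty(feats)
        if score < skip_thresh:
            strategies[name] = "skip"   # redundant modality: drop its branch
        elif score < prune_thresh:
            strategies[name] = "prune"  # easy input: run a slimmed sub-network
        else:
            strategies[name] = "full"   # complex input: run the full branch
    return strategies

# Toy inputs for three modalities of one sample.
inputs = {
    "audio": [0.01, 0.02, 0.01, 0.02],  # near-constant -> likely redundant
    "video": [0.1, 0.9, 0.2, 0.8],      # moderate variation
    "imu":   [5.0, -4.0, 6.0, -5.0],    # high variation -> complex
}
print(compression_policy(inputs))
# → {'audio': 'skip', 'video': 'prune', 'imu': 'full'}
```

The design point the abstract makes is that such a policy runs per input at inference time, so cheap samples and uninformative modalities stop consuming the same resources as hard ones.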
Key words
Multimodal learning, resource-constrained embedded device, dynamic neural network