Lightweight deep convolutional neural network for background sound classification in speech signals

JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA (2022)

Abstract
Recognizing background information in human speech signals is extremely useful in a wide range of practical applications, and many articles on background sound classification have been published. The task has not, however, been addressed with background sounds embedded in real-world human speech signals. This work therefore proposes a lightweight deep convolutional neural network (CNN), operating on spectrograms, for efficient background sound classification in practical human speech signals. The proposed model classifies 11 different background sounds embedded in human speech signals: airplane, airport, babble, car, drone, exhibition, helicopter, restaurant, station, street, and train. The deep CNN consists of four convolution layers, four max-pooling layers, and one fully connected layer. The model is tested on human speech signals with varying signal-to-noise ratios (SNRs). The proposed deep CNN utilizing spectrograms achieves an overall background sound classification accuracy of 95.2% on human speech signals spanning a wide range of SNRs. The proposed model also outperforms the benchmark models in both accuracy and inference time when evaluated on edge computing devices. (C) 2022 Acoustical Society of America.
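The architecture described in the abstract (four convolution layers, four max-pooling layers, and one fully connected layer producing 11 class scores) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the filter counts, kernel sizes, and the 128x128 spectrogram input size are assumptions, since those hyperparameters are not given in the abstract.

```python
# Hedged sketch of a lightweight CNN matching the abstract's description:
# 4 conv layers, 4 max-pooling layers, 1 fully connected layer, 11 classes.
# Channel widths and the 128x128 input size are illustrative assumptions.
import torch
import torch.nn as nn


class BackgroundSoundCNN(nn.Module):
    def __init__(self, num_classes: int = 11):
        super().__init__()
        # Each conv layer is followed by ReLU and 2x2 max pooling,
        # halving the spatial resolution four times: 128 -> 8.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Single fully connected layer mapping pooled features to 11 classes.
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # (N, 64, 8, 8) for a 128x128 input
        return self.classifier(x.flatten(1))


model = BackgroundSoundCNN()
# A batch of two single-channel (log-magnitude) spectrograms.
logits = model(torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 11])
```

In practice each speech segment would be converted to a spectrogram (e.g. via a short-time Fourier transform) before being fed to the network; keeping the channel widths small is what makes such a model cheap enough for the edge devices mentioned in the abstract.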