Small-scale Linguistic Steganalysis for Multi-concealed Scenarios

IEEE Signal Processing Letters (2022)

Abstract
Recently, owing to the considerable feature expression ability of neural networks, deep linguistic steganalysis methods have developed rapidly. However, two issues remain to be addressed. First, prevailing linguistic steganalysis methods rely heavily on massive training data, which are labor-intensive and time-consuming to collect. Second, these methods perform steganalysis only in separate weak-concealed scenarios, where the stego texts in each scenario share a single language style and payload. In practice, however, intercepted network samples are probably a mixture of stego texts with different language styles and payloads, whose semantic spatial distribution may be more chaotic than in weak-concealed scenarios, making steganalysis more difficult. To address these issues, a novel linguistic steganalysis method is proposed in this letter. First, the pre-trained BERT language model is used as an embedder to compensate for the shortage of data. Then, in addition to learning local and global semantic features, a feature interaction module is designed to explore the mutual effects between them. Furthermore, besides the typical cross-entropy loss, a triplet loss is introduced for model training. In this way, the proposed method can refine more comprehensive and discriminative deep features in the intricate semantic space. The performance of the proposed method is compared with representative linguistic steganalysis methods on datasets of different scales, and the experimental results demonstrate its superiority.
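The training objective described in the abstract, a typical cross-entropy loss combined with a triplet loss over deep features, might look like the following PyTorch sketch. The loss weighting, margin, and choice of feature inputs are illustrative assumptions, not details taken from the letter.

```python
import torch.nn as nn

# Sketch of a combined classification + metric-learning objective,
# assuming the model exposes both class logits and a deep feature
# vector per text. Weighting and margin values are placeholders.
class SteganalysisLoss(nn.Module):
    def __init__(self, margin: float = 1.0, triplet_weight: float = 0.5):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.triplet = nn.TripletMarginLoss(margin=margin)
        self.triplet_weight = triplet_weight

    def forward(self, logits, labels, anchor_feat, positive_feat, negative_feat):
        # Classification term on cover/stego logits.
        ce_loss = self.ce(logits, labels)
        # Metric term pulling same-class features together and pushing
        # different-class features apart in the learned semantic space.
        tri_loss = self.triplet(anchor_feat, positive_feat, negative_feat)
        return ce_loss + self.triplet_weight * tri_loss
```

In this kind of setup, the triplet term encourages compact, well-separated clusters of cover and stego features even when the stego texts mix several language styles and payloads, while the cross-entropy term drives the final classification.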
Keywords
Feature extraction, Linguistics, Semantics, Training, Payloads, Steganography, Neural networks, Linguistic steganalysis, small-scale concealed scenarios, feature interaction