Fuzzy Fingerprinting Large Pre-trained Models

EUSFLAT/AGOP (2023)

Abstract
Large pre-trained models like BERT and RoBERTa have gained massive popularity, as they have surpassed previous state-of-the-art models in various Natural Language Processing (NLP) tasks. Nevertheless, interpreting their behavior remains an ongoing challenge, since these models comprise millions of parameters. The Fuzzy Fingerprint (FFP) framework introduced a straightforward classification technique capable of delivering interpretable results; however, it was outperformed by these large pre-trained models. In this work, we introduce a novel method that combines the simplicity of FFPs with the ability of large pre-trained models to detect complex patterns, in order to build a more interpretable classification framework. Furthermore, we show that it is feasible to obtain a unique FFP for each label, which enables the examination of incorrect classifications. We evaluate our new method on four text classification benchmark datasets and show that it is possible to gain interpretability without any noticeable loss in performance.
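To make the Fuzzy Fingerprint idea concrete, here is a minimal sketch of a classic FFP-style classifier over raw token frequencies (not the paper's method, which pairs fingerprints with pre-trained model features; all function names, the top-k size, and the linear fuzzifying membership are illustrative assumptions). Each class gets a fingerprint: its top-k features ranked by frequency, each assigned a fuzzy membership that decays with rank; a new text is assigned to the class whose fingerprint is most similar to its own.

```python
from collections import Counter

def build_fingerprint(texts, k=5):
    """Rank tokens of one class by frequency and keep the top-k.

    Assumed linear fuzzifying function: rank 0 -> 1.0, rank k-1 -> 1/k.
    """
    counts = Counter(tok for text in texts for tok in text.split())
    ranked = [tok for tok, _ in counts.most_common(k)]
    return {tok: (k - i) / k for i, tok in enumerate(ranked)}

def similarity(fp_a, fp_b):
    """Fuzzy overlap: sum of min memberships over shared features."""
    shared = set(fp_a) & set(fp_b)
    return sum(min(fp_a[tok], fp_b[tok]) for tok in shared)

def classify(text, class_fingerprints, k=5):
    """Pick the label whose fingerprint best matches the text's own."""
    fp = build_fingerprint([text], k)
    return max(class_fingerprints,
               key=lambda label: similarity(fp, class_fingerprints[label]))

# Toy usage: two classes with distinctive vocabularies.
fingerprints = {
    "sports": build_fingerprint(["ball goal match", "match ball team"]),
    "tech": build_fingerprint(["code model data", "model data gpu"]),
}
print(classify("model gpu code", fingerprints))  # -> tech
```

Because each fingerprint is just a short ranked feature list, a misclassification can be inspected directly by comparing which shared features pulled the text toward the wrong label, which is the interpretability property the abstract refers to.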
Keywords
models, fuzzy, pre-trained