
Enhancing the Explainability of Deep Learning Based Malware Detection System

2023 9th Annual International Conference on Network and Information Systems for Computers (ICNISC)

Abstract
With the rapid development of deep learning, deep learning-based malware detection has received increasing attention because it does not rely on domain knowledge. The research community has proposed some rudimentary methods to enhance the explainability of machine learning-based malware classifiers, but these studies lack an analysis of the explainability of deep neural networks (DNNs), which is more challenging because of their complexity. In this work, we first propose a feature extraction method for code data that allows deep learning models to capture the logical structure of the code. We then train multiple deep learning models on source code data, including an MLP, a CNN, and an RNN. Finally, we perform feature attribution analysis for these models based on local linear approximation and feature attribution, respectively, to identify the important features on which the inference decisions of the deep neural network malware detection models rely, thereby enhancing their interpretability. The experimental results of the explainability analysis show that the proposed method captures the important features and helps users analyze malware more effectively.
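The abstract mentions feature attribution via local linear approximation. The paper's own code is not given here, so the following is a minimal, hypothetical sketch of that general idea: a local linear surrogate is fitted around one input, and its coefficients are read as feature importances. The `detector` MLP with random weights is a stand-in for the trained malware detection model, and the function and variable names are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code) of LIME-style local linear
# approximation for feature attribution on a malware-detection classifier.
import numpy as np

rng = np.random.default_rng(0)

# --- Stand-in "trained" detector: a tiny MLP over code features ---
n_features = 20
W1 = rng.normal(size=(32, n_features))
b1 = rng.normal(size=32)
w2 = rng.normal(size=32)
b2 = 0.0

def detector(X):
    """Return P(malicious) for each row of X (stand-in for the real DNN)."""
    h = np.maximum(X @ W1.T + b1, 0.0)        # ReLU hidden layer
    logits = h @ w2 + b2
    return 1.0 / (1.0 + np.exp(-logits))      # sigmoid probability

def local_linear_attribution(x0, n_samples=2000, sigma=0.5):
    """Fit a weighted linear surrogate around x0; coefficients = importances."""
    # Sample perturbations in the neighbourhood of x0.
    Z = x0 + rng.normal(scale=sigma, size=(n_samples, x0.size))
    y = detector(Z)
    # Proximity kernel: perturbations closer to x0 get larger weight.
    dist2 = np.sum((Z - x0) ** 2, axis=1)
    w = np.exp(-dist2 / (2 * sigma ** 2 * x0.size))
    # Weighted least squares with an intercept column.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[:-1]                          # drop the intercept term

x0 = rng.normal(size=n_features)              # one sample's code feature vector
importances = local_linear_attribution(x0)
top = np.argsort(-np.abs(importances))[:5]
print("Top-5 influential feature indices:", top)
print("Their local linear weights:", importances[top])
```

In practice the stand-in `detector` would be replaced by the trained MLP, CNN, or RNN, and the perturbation scheme would respect the code-feature encoding used by the paper's feature extraction method.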