Explainable Deep Learning Models With Gradient-Weighted Class Activation Mapping for Smart Agriculture.

IEEE Access (2023)

Abstract
Explainable Artificial Intelligence is a recent research direction that aims to explain the results of deep learning models. However, much recent research does not evaluate in depth how effectively deep learning models classify image objects. For that reason, this research proposes two stages in applying Explainable Artificial Intelligence: (1) assessing the accuracy of the deep learning model through standard evaluation metrics, and (2) using Grad-CAM for model interpretation, to evaluate which image features the model relies on when recognizing an object. The deep learning models included in the evaluation are VGG16, ResNet50, ResNet50V2, Xception, EfficientNetV2, InceptionV3, DenseNet201, MobileNetV2, MobileNet, NASNetMobile, RegNetX002, and InceptionResNetV2, tested on our updated VegNet dataset, available at: https://www.kaggle.com/datasets/enalis/tomatoes-dataset. The results show that the MobileNet model has high accuracy but is less reliable than EfficientNetV2S and Xception. However, MobileNetV2's accuracy is the highest when considering the match rate. The research results contribute to the construction of intelligent agricultural support systems (e.g., automatic fruit-picking robots and removal of poor-quality fruits) by using the Explainable AI results to select the optimal deep learning model for processing.
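To illustrate stage (2), the following is a minimal Grad-CAM sketch in TensorFlow/Keras, not the authors' exact code: it assumes a MobileNetV2 classifier, a 224x224 RGB input, and the standard Keras layer name "Conv_1" for the last convolutional block; the ImageNet weights and the random dummy image are placeholders for the paper's fine-tuned models and VegNet tomato images.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a Grad-CAM heatmap (feature-map resolution, values in [0, 1])."""
    # Model mapping the input image to (last conv feature maps, predictions).
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # top predicted class
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients: one importance weight per channel.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + tf.keras.backend.epsilon())
    return cam.numpy()

# Example usage (hypothetical input; in the paper the models are evaluated
# on the VegNet tomato dataset instead of a random image).
model = tf.keras.applications.MobileNetV2(weights="imagenet")
img = tf.keras.applications.mobilenet_v2.preprocess_input(
    np.random.rand(224, 224, 3).astype("float32") * 255.0
)
heatmap = grad_cam(model, img, last_conv_layer_name="Conv_1")
print(heatmap.shape)  # (7, 7) heatmap, upsampled onto the image for display

The heatmap highlights the image regions that most influenced the predicted class, which is how the paper judges whether a model with high accuracy is also attending to the correct fruit regions.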
Keywords
explainable deep learning models, deep learning, agriculture, gradient-weighted class activation mapping