An interpretable image classification model combining a fuzzy neural network with a variational autoencoder inspired by the human brain

Information Sciences (2024)

Abstract
Fuzzy neural networks (FNNs) have gained attention for their interpretability and self-learning ability. However, they struggle to interpret high-dimensional unstructured data and suffer from the problem of "rule explosion". To address this, a model called VAE-FNN is proposed, which combines an FNN with a variational autoencoder (VAE). The VAE-FNN simulates the image perception, feature extraction, inductive reasoning, and adjustment learning processes of the human brain. An encoder simulates the visual cortex, extracting features from complex images, reducing their dimensionality, and mitigating the rule-explosion problem. The fuzzy neural network classifier (FNNC) simulates the reasoning functions of the parietal and prefrontal cortex and achieves interpretable classification based on the encoder's output features. A training algorithm is designed to improve the stability of the FNNC. The VAE-FNN's training method adjusts the feature extraction process according to both reconstruction and classification performance, enabling the model to obtain high-level, semantic classification features. Detailed experimental results on two image datasets demonstrate that the proposed model can extract high-level classification features and provide explanations consistent with human intuition while achieving high-precision classification. Experimental results on two further datasets additionally validate the effectiveness of the proposed model.
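The architecture described in the abstract can be pictured as a two-stage pipeline: an encoder compresses the image into a few latent features, and a fuzzy rule layer reasons over them. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; all names and sizes (Encoder, FuzzyClassifier, latent_dim, n_rules) are illustrative assumptions, and the decoder, reconstruction loss, and KL term of the full VAE training procedure are omitted.

```python
# Minimal sketch of the VAE-FNN idea (illustrative, not the paper's code):
# a VAE-style encoder maps an image to a low-dimensional latent vector,
# and a fuzzy classifier with Gaussian membership functions reasons over it.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """VAE encoder: maps a flattened image to a latent mean and log-variance."""
    def __init__(self, in_dim=784, latent_dim=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class FuzzyClassifier(nn.Module):
    """Fuzzy rule layer: each rule is a product (fuzzy AND) of Gaussian
    memberships over the latent features; normalized firing strengths
    are combined linearly into class logits."""
    def __init__(self, latent_dim=8, n_rules=16, n_classes=10):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, latent_dim))
        self.log_sigma = nn.Parameter(torch.zeros(n_rules, latent_dim))
        self.head = nn.Linear(n_rules, n_classes)

    def forward(self, z):
        # Gaussian membership of each latent feature to each rule.
        diff = (z.unsqueeze(1) - self.centers) / self.log_sigma.exp()
        firing = torch.exp(-0.5 * diff.pow(2)).prod(dim=-1)  # (batch, n_rules)
        firing = firing / (firing.sum(dim=-1, keepdim=True) + 1e-9)
        return self.head(firing)

# Usage: encode an image batch, classify from the latent mean.
enc, clf = Encoder(), FuzzyClassifier()
x = torch.rand(4, 784)
mu, logvar = enc(x)
logits = clf(mu)
print(logits.shape)  # torch.Size([4, 10])
```

In a full model of this kind, the rule firing strengths are what make the classification inspectable: each rule covers a fuzzy region of the low-dimensional latent space, so the rules that fire for a given image can be read as an explanation of its predicted class.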
Key words
Fuzzy neural network, Image classification, Interpretability, Variational autoencoder