
Interpretable Deep Image Classification using Rationally Inattentive Utility Maximization

IEEE Journal of Selected Topics in Signal Processing (2024)

Abstract
Can deep convolutional neural networks (CNNs) for image classification be interpreted as utility maximizers with information costs? By performing set-valued system identification for Bayesian decision systems, we demonstrate that deep CNNs behave equivalently (in terms of necessary and sufficient conditions) to rationally inattentive Bayesian utility maximizers, a generative model used extensively in economics for human decision-making. Our claim is based on approximately 500 numerical experiments on 5 widely used neural network architectures. The parameters of the resulting interpretable model are computed efficiently via convex feasibility algorithms. As a practical application, we also illustrate how the reconstructed interpretable model can predict the classification performance of deep CNNs with high accuracy. The theoretical foundation of our approach lies in Bayesian revealed preference studied in micro-economics. All our results are on GitHub and completely reproducible.
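The abstract's "convex feasibility" step can be illustrated with the NIAS (No Improving Action Switches) condition from the Bayesian revealed preference literature the paper builds on: a candidate utility is consistent with observed decisions if every chosen action maximizes expected utility under its own decoded posterior. The sketch below is a hypothetical, minimal illustration of that linear inequality check; the joint distribution and utilities are invented for illustration and are not data from the paper.

```python
# Hypothetical sketch of the NIAS check from Bayesian revealed preference:
# given empirical state-action frequencies, a candidate utility u[x][a] is
# consistent if each chosen action maximizes expected utility under the
# posterior it reveals. Numbers are illustrative, not from the paper.

def posteriors(joint):
    """joint[a][x] = empirical P(action a, state x); returns P(x | a) per action."""
    post = []
    for row in joint:
        total = sum(row)
        post.append([p / total for p in row] if total > 0 else None)
    return post

def nias_holds(joint, u, tol=1e-9):
    """Check the linear NIAS inequalities:
    sum_x P(x|a) * u[x][a] >= sum_x P(x|a) * u[x][b] for all actions a, b."""
    post = posteriors(joint)
    for a in range(len(joint)):
        if post[a] is None:  # action never chosen: no constraint
            continue
        own = sum(post[a][x] * u[x][a] for x in range(len(post[a])))
        for b in range(len(joint)):
            alt = sum(post[a][x] * u[x][b] for x in range(len(post[a])))
            if own < alt - tol:
                return False
    return True

# Two states, two actions; the classifier mostly matches action to state.
joint = [[0.45, 0.05],    # action 0 chosen: mostly in state 0
         [0.05, 0.45]]    # action 1 chosen: mostly in state 1
u_match = [[1.0, 0.0],    # u[x][a]: reward 1 when the action matches the state
           [0.0, 1.0]]
u_inverted = [[0.0, 1.0],
              [1.0, 0.0]]
print(nias_holds(joint, u_match))     # True: decisions are rationalized
print(nias_holds(joint, u_inverted))  # False: no posterior makes these optimal
```

Because the NIAS inequalities are linear in the utility values, the set of all consistent utilities is a convex polytope, which is why set-valued identification reduces to convex feasibility as the abstract states.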
Key words
Interpretable Machine Learning, Bayesian Revealed Preference, Rational Inattention, Deep Neural Networks, Image Classification