Explaining Explainability: Understanding Concept Activation Vectors
arXiv (2024)
Abstract
Recent interpretability methods propose using concept-based explanations to
translate the internal representations of deep learning models into a language
that humans are familiar with: concepts. This requires understanding which
concepts are present in the representation space of a neural network. One
popular method for finding concepts is Concept Activation Vectors (CAVs), which
are learnt using a probe dataset of concept exemplars. In this work, we
investigate three properties of CAVs. CAVs may be: (1) inconsistent between
layers, (2) entangled with different concepts, and (3) spatially dependent.
Each property provides both challenges and opportunities in interpreting
models. We introduce tools designed to detect the presence of these properties,
provide insight into how they affect the derived explanations, and provide
recommendations to minimise their impact. Understanding these properties can be
used to our advantage. For example, we introduce spatially dependent CAVs to
test if a model is translation invariant with respect to a specific concept and
class. Our experiments are performed on ImageNet and a new synthetic dataset,
Elements. Elements is designed to capture a known ground truth relationship
between concepts and classes. We release this dataset to facilitate further
research in understanding and evaluating interpretability methods.
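To make the CAV idea concrete, the following is a minimal sketch of how such a vector is typically learnt: a linear classifier separates activations of concept exemplars from activations of random counterexamples at a chosen layer, and the normal to its decision boundary serves as the CAV. The array names, dimensions, and the use of logistic regression here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder activations: in practice these would be flattened activations
# from a chosen network layer for images of the concept (e.g. "striped")
# and for random counterexample images.
concept_acts = np.random.randn(100, 512)  # hypothetical concept exemplar activations
random_acts = np.random.randn(100, 512)   # hypothetical random activations

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])

# Fit a linear probe; the vector orthogonal to its decision boundary,
# normalised to unit length, is taken as the concept activation vector.
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Concept sensitivity of a class logit can then be estimated as the
# directional derivative of that logit along the CAV (not shown here).
```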