A Formal Explanation Space for the Simultaneous Clustering of Neurology Phenotypes

Research Square (2023)

Abstract

Objective: Clustering is applied to biomedical datasets to identify meaningful subgroups of patients, proteins, genes, and diseases. Explainable AI (XAI) brings transparency and interpretability to the formation, composition, and quality of these clusters. This study creates a formal explanation space to enhance the interpretability of clusters of neurology phenotypes.

Methods: Subjects with dementia, movement disorders, and multiple sclerosis were clustered by neurological phenotype using spectral methods. To improve the interpretability of the clusters, we created an explanation space that described the data, explained the algorithm, evaluated cluster separation and quality, identified influential features, visualized cluster composition, and assessed biological plausibility.

Results: Text and equations were used to explain the clustering algorithms. Cluster quality was evaluated with validity indices. t-SNE plots illustrated cluster separation. Influential features were identified from SHAP plots. Cluster composition was visualized with heat maps and word clouds. Expert opinion assessed biological relevance. Spectral coclustering yielded clusters with higher validity indices and greater biological plausibility than spectral biclustering.

Conclusions: When biomedical data undergo simultaneous clustering, a formal explanation space can improve the transparency of the methods and the interpretability of the results.
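A minimal sketch of the kind of workflow the abstract describes — simultaneous (co-)clustering of a subject-by-phenotype matrix, one internal validity index, and a 2-D embedding for visual inspection — assuming scikit-learn. The block-structured random matrix below is a synthetic placeholder, not the study's neurology phenotype data, and the parameter choices (three clusters, perplexity 10) are illustrative assumptions only:

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Toy 60-subject x 12-feature binary phenotype matrix with three planted
# subject groups (placeholders standing in for the three disease cohorts).
patterns = ([.8] * 4 + [.1] * 8,
            [.1] * 4 + [.8] * 4 + [.1] * 4,
            [.1] * 8 + [.8] * 4)
X = np.vstack([(rng.random((20, 12)) < p) for p in patterns]).astype(float)

# Spectral coclustering assigns row (subject) and column (feature)
# labels jointly, i.e. it clusters both dimensions simultaneously.
model = SpectralCoclustering(n_clusters=3, random_state=0).fit(X)
rows, cols = model.row_labels_, model.column_labels_

# One internal validity index for the subject partition.
sil = silhouette_score(X, rows)

# 2-D t-SNE embedding of subjects, for plotting cluster separation.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)

print(f"silhouette={sil:.2f}, embedding shape={emb.shape}")
```

The same row labels could then be fed to a surrogate classifier for SHAP-style feature attributions, and the reordered matrix `X[np.argsort(rows)][:, np.argsort(cols)]` shown as a heat map to visualize cluster composition.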
Keywords
neurology phenotypes, formal explanation space, simultaneous clustering