Interpretability vs. Explainability: The Black Box of Machine Learning

2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE), 2023

Abstract
To understand the complex nature of an Artificial Intelligence (AI) model, the model needs to be trustworthy, transparent, scalable, understandable, and explainable. Trust in an AI model rests on the decisions the model makes inside its black-box environment. Explainable AI (XAI) therefore helps developers understand how an AI model behaves when making a particular decision. As AI models grow more complex, scientists find it increasingly difficult to understand model outcomes; hence, XAI is required to explain an AI model's decision-making process. Moreover, to build trust-based AI models, organizations embed ethical principles in their AI processes. In this paper, we study the case of the banking sector, where an inefficient onboarding process fails to establish a customer relationship. Because of this inefficiency, banks lose users' faith, which widens the gap in the customer relationship and further hampers onboarding. To bridge this gap, we explain the AI model's decision-making process through XAI.
Keywords
Explainable artificial intelligence (XAI), Machine Learning (ML), Software Development Life Cycle (SDLC), black box, neural networks (NN), deep learning (DNN)
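The abstract describes XAI as a way to explain a black-box model's decisions. As an illustration only (not the paper's method), a minimal sketch of one common model-agnostic technique, permutation feature importance, applied to a toy "onboarding" classifier; the feature names, toy data, and stand-in model are all assumptions:

```python
# Sketch of permutation feature importance: measure how much a
# black-box model's accuracy drops when one feature is shuffled.
# Illustrative only; the data, feature names, and model are invented.
import numpy as np

rng = np.random.default_rng(0)

# Toy onboarding data: hypothetical columns [income, credit_score, account_age].
X = rng.normal(size=(200, 3))
y = (2.0 * X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)  # credit_score dominates


def black_box(X):
    """Stand-in for an opaque model: we only observe its predictions."""
    return (2.0 * X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)


def permutation_importance(predict, X, y, n_repeats=10):
    """Accuracy drop per feature when that feature's column is permuted."""
    base = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            scores.append((predict(Xp) == y).mean())
        drops.append(base - np.mean(scores))
    return drops


importances = permutation_importance(black_box, X, y)
print(importances)  # credit_score (index 1) shows the largest drop
```

A large accuracy drop flags a feature the black box relies on; a near-zero drop (account_age here) flags one it ignores, giving developers a first, model-agnostic view into the decision process.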