Just in Time: Personal Temporal Insights for Altering Model Decisions

2019 IEEE 35th International Conference on Data Engineering (ICDE)(2019)

Abstract
The interpretability of complex Machine Learning models is becoming a critical social concern, as they are increasingly used in human-related decision-making processes such as resume filtering or loan applications. Individuals receiving an undesired classification are likely to call for an explanation - preferably one that specifies what they should do in order to alter that decision when they reapply in the future. Existing work focuses on a single ML model at a single point in time, whereas in practice both models and data evolve: an explanation for an application rejection in 2018 may be irrelevant in 2019, since in the meantime both the model and the applicant's data can change. To this end, we propose a novel framework that provides users with insights and plans for changing their classification at particular future time points. The solution combines state-of-the-art algorithms for (single) model explanations with algorithms for predicting future models, and supports database-style querying of the obtained explanations. We propose to demonstrate the usefulness of our solution in the context of loan applications, and to interactively engage the audience in computing and viewing suggestions tailored to applicants based on their unique characteristics.
Keywords
Data models,Generators,Databases,Machine learning,Adaptation models,Decision making,Computational modeling