Quantifying the Uncertainty of Next-Place Predictions

MobiCASE 2016

Abstract
Context-aware systems use predictions about a user's future state (e.g., next movements, actions, or needs) in order to seamlessly trigger the display of information or the execution of services. Predictions, however, always have an associated uncertainty that, when above a certain threshold, should prevent a system from taking action due to the risk of "getting it wrong". In this work, we present a context-dependent "level of trust" estimator that is able to determine whether a prediction should be trusted -- and thus used to trigger an action -- or not. Our estimator relies on ensemble learning to adapt across different users and application scenarios. We demonstrate its performance in the context of a popular problem -- next-place prediction -- and show how it outperforms existing approaches. We also report on the results of a survey that investigated user attitudes towards mobile-phone-based personal assistants and their ability to trigger actions in response to predictions. While users appreciated such assistants, they had substantially different tolerance thresholds with respect to prediction errors depending on the use case. This further motivates the need for a context-dependent level of trust estimator.
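The abstract does not specify the paper's exact model, but the core idea (an ensemble-derived confidence score, compared against a use-case-dependent threshold, gating whether the system acts on a next-place prediction) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the toy features, the random-forest ensemble, the `predict_with_trust` helper, and the threshold value are hypothetical and are not the authors' implementation.

```python
# Minimal sketch (NOT the paper's method): use the vote agreement of an
# ensemble as a "level of trust" and only act when it clears a threshold.
# Data, model choice, and threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy mobility data: context features (e.g., hour of day, day of week,
# current place id) and the next place the user actually visited.
X = rng.integers(0, 10, size=(500, 3)).astype(float)
y = rng.integers(0, 5, size=500)  # 5 candidate "next places"

# A random forest is itself an ensemble; the fraction of trees voting for
# the predicted place serves as a simple confidence proxy.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

def predict_with_trust(context, threshold=0.6):
    """Return (next_place, trust, should_act).

    `threshold` stands in for the use-case-dependent error tolerance the
    paper's survey found to vary across application scenarios.
    """
    proba = model.predict_proba(context.reshape(1, -1))[0]
    next_place = int(np.argmax(proba))
    trust = float(proba[next_place])
    return next_place, trust, trust >= threshold

place, trust, act = predict_with_trust(X[0])
print(f"predicted place {place} with trust {trust:.2f}; act: {act}")
```

A context-dependent estimator in the paper's sense would additionally adapt the trust score (or the threshold) per user and per application scenario, rather than using a single fixed cutoff as in this sketch.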