When generalized eating detection machine learning models fail in the field.

UbiComp '17: The 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Maui, Hawaii, September 2017

Abstract
Problematic eating behaviors are a major cause of obesity. To improve our understanding of these eating behaviors, we must first be able to detect them reliably. In this paper we use a wrist-worn sensor to test a generalized machine learning model's reliability in detecting eating episodes. We process data from a 6-axis inertial sensor. Since most eating episodes do not occur while moving, we filter out periods of physical activity and then use an advanced motif-based time-point fusion technique to detect feeding gestures. We also cluster the false alarms into four categories in an effort to identify the main behaviors that confound feeding gesture detection. We tested our system on eight participants performing various activities in the wild while wearing a sensing suite: a neck-worn and a wrist-worn sensor, along with a wearable video camera recording continuously to capture ground truth. Trained annotators further validated the algorithms by identifying feeding gestures and categorizing the false alarms. All eating episodes were detected; however, many false alarms were also produced, yielding a 61% average F-measure in detecting feeding gestures. This result shows clear challenges in characterizing eating episodes with a single inertial wrist-worn sensor.
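The physical-activity filtering step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, window length, and variance threshold are assumed values chosen for the example, and the paper's actual filter may use different features or parameters.

```python
import numpy as np

def filter_activity(accel, fs=50, win_s=2.0, var_thresh=0.5):
    """Return a per-sample boolean mask that is False in windows whose
    accelerometer-magnitude variance suggests ambulatory activity.
    All thresholds here are illustrative assumptions."""
    mag = np.linalg.norm(accel, axis=1)        # |a| per sample (m/s^2)
    win = int(fs * win_s)                      # samples per window
    keep = np.ones(len(mag), dtype=bool)
    for i in range(len(mag) // win):
        seg = mag[i * win:(i + 1) * win]
        if np.var(seg) > var_thresh:           # high variance -> walking, etc.
            keep[i * win:(i + 1) * win] = False
    return keep

# Synthetic example: 10 s of quiet wear, then 10 s of vigorous motion (50 Hz)
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.05, (500, 3)) + [0.0, 0.0, 9.81]
active = rng.normal(0, 2.0, (500, 3)) + [0.0, 0.0, 9.81]
mask = filter_activity(np.vstack([quiet, active]))
print(mask[:500].all(), mask[500:].any())  # quiet kept, active removed
```

Only the windows retained by such a mask would then be passed to the motif-based feeding-gesture detector.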
Keywords
Wrist-worn sensors, wearables, hand-to-mouth gestures, in-the-field test, overeating, inertial sensors, motif-based segmentation, K-Spectral Centroid clustering, fusion, classification, feeding gesture