Action Recognition From Skeleton Data via Analogical Generalization Over Qualitative Representations.

Thirty-Second AAAI Conference on Artificial Intelligence / Thirtieth Innovative Applications of Artificial Intelligence Conference / Eighth AAAI Symposium on Educational Advances in Artificial Intelligence (2018)

Abstract
Human action recognition remains a difficult problem for AI. Traditional machine learning techniques can achieve high recognition accuracy, but they are typically black boxes whose internal models are not inspectable and whose results are not explainable. This paper describes a new pipeline for recognizing human actions from skeleton data via analogical generalization. Specifically, starting with Kinect data, we segment each human action into temporal regions where the motion is qualitatively uniform, creating a sketch graph that provides a form of qualitative representation of the behavior that is easy to visualize. Models are learned from these sketch graphs via analogical generalization and are then used for classification via analogical retrieval. The retrieval process also produces links between the new example and components of the model, which serve as explanations. To improve recognition accuracy, we implement dynamic feature selection to pick informative relational features. We show the explanatory advantage of our approach by example, and results on three public datasets illustrate its utility.
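
To make the segmentation step concrete, the following Python sketch splits a single joint's trajectory into intervals whose qualitative direction of motion (the sign of the velocity along each axis) stays constant. This is only an illustration of the idea of qualitatively uniform temporal regions; the function names, the eps threshold, and the synthetic hand trajectory are assumptions of this sketch, not the paper's pipeline, which builds full sketch graphs over such regions and learns models by analogical generalization.

# Illustrative sketch only: segment one joint's trajectory into temporal
# regions of qualitatively uniform motion (constant sign of velocity per axis).
# Thresholds and data shapes are assumptions, not the paper's implementation.
import numpy as np

def qualitative_direction(velocity, eps=1e-3):
    # Map a 3D velocity vector to a tuple of signs: -1, 0, or +1 per axis.
    return tuple(0 if abs(v) < eps else (1 if v > 0 else -1) for v in velocity)

def segment_uniform_motion(positions, eps=1e-3):
    # positions: (T, 3) array of one joint's coordinates over T frames.
    # Returns a list of (start_frame, end_frame, direction) triples.
    velocities = np.diff(positions, axis=0)      # frame-to-frame motion, shape (T-1, 3)
    segments = []
    start = 0
    current = qualitative_direction(velocities[0], eps)
    for t in range(1, len(velocities)):
        direction = qualitative_direction(velocities[t], eps)
        if direction != current:                 # qualitative change: close the segment
            segments.append((start, t, current))
            start, current = t, direction
    segments.append((start, len(velocities), current))
    return segments

if __name__ == "__main__":
    # Synthetic hand height: rise, hold, fall -- three qualitatively uniform regions.
    t = np.linspace(0.0, 1.0, 30)
    hand_y = np.concatenate([t[:10], np.full(10, t[9]), t[9] - t[:10]])
    positions = np.stack([np.zeros(30), hand_y, np.zeros(30)], axis=1)
    for start, end, direction in segment_uniform_motion(positions):
        print(f"frames {start:2d}-{end:2d}: qualitative direction {direction}")

On the synthetic trajectory above, the sketch reports a rising, a stationary, and a falling interval along the y axis, which is the kind of qualitative decomposition over which a sketch graph could be built.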