Learning 3D-aware Egocentric Spatial-Temporal Interaction via Graph Convolutional Networks
ICRA 2020
Abstract
To enable intelligent automated driving systems, a promising strategy is to understand how humans drive and interact with road users in complicated driving situations. In this paper, we propose a 3D-aware egocentric spatial-temporal interaction framework for automated driving applications. Graph convolutional networks (GCNs) are devised for interaction modeling, and we introduce three novel concepts into them. First, we decompose egocentric interactions into ego-thing and ego-stuff interactions, modeled by two GCNs. In both GCNs, ego nodes are introduced to encode the ego vehicle's interactions with thing objects (e.g., cars and pedestrians) and with stuff objects (e.g., lane markings and traffic lights). Second, objects' 3D locations are explicitly incorporated into the GCNs to better model egocentric interactions. Third, to implement ego-stuff interaction in a GCN, we propose a MaskAlign operation to extract features for irregular objects. We validate the proposed framework on tactical driver behavior recognition. Extensive experiments are conducted on the Honda Research Institute Driving Dataset, the largest dataset with diverse tactical driver behavior annotations. Our framework delivers substantial performance gains over baselines in the two experimental settings, by 3.9% and 6.0%, respectively. Furthermore, we visualize the learned affinity matrices, which encode ego-thing and ego-stuff interactions, to show that the proposed framework captures interactions effectively.
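To make the interaction-modeling idea concrete, the following is a minimal, hypothetical sketch of one graph-convolution step over an ego node and several object nodes, with a learned pairwise affinity matrix of the kind the abstract says is visualized. The layer shapes, the softmax-normalized affinity, and the feature layout (row 0 as the ego node, 3D position appended to appearance features) are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ego_gcn_layer(x, w_a, w):
    """One graph-convolution step over ego + object nodes (illustrative).

    x:   (N, D) node features; row 0 is assumed to be the ego node,
         rows 1..N-1 thing/stuff objects (layout is an assumption).
    w_a: (D, D) weights parameterizing the pairwise affinity.
    w:   (D, D) weights for the feature update.
    Returns updated features and the row-normalized affinity matrix.
    """
    # Affinity matrix: softmax-normalized pairwise similarity, analogous
    # in spirit to the learned affinities the framework visualizes.
    logits = x @ w_a @ x.T                   # (N, N)
    affinity = softmax(logits, axis=1)       # each row sums to 1
    # Message passing: every node aggregates its neighbors' features,
    # so the ego row mixes in information from all object nodes.
    out = np.maximum(affinity @ x @ w, 0.0)  # ReLU activation
    return out, affinity

# Toy example: ego node plus three objects, with features that could
# concatenate appearance dims and 3D location (a common way to make
# the graph "3D-aware"; hypothetical here).
rng = np.random.default_rng(0)
n, d = 4, 8
x = rng.standard_normal((n, d))
w_a = rng.standard_normal((d, d)) * 0.1
w = rng.standard_normal((d, d)) * 0.1
out, aff = ego_gcn_layer(x, w_a, w)
```

In a full model, one such GCN would operate on detected thing objects and another on MaskAlign-pooled stuff features, with the two ego-node outputs fused for behavior recognition.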
Keywords
graph convolutional networks, intelligent automated driving systems, complicated driving situations, spatial-temporal interaction framework, GCN, interaction modeling, ego-thing interaction, ego-stuff interaction, Honda Research Institute Driving Dataset, 3D-aware egocentric spatial-temporal interaction learning, tactical driver behavior annotations, feature extraction