Style transformed synthetic images for real world gaze estimation by using residual neural network with embedded personal identities

Applied Intelligence (2022)

Abstract
Gaze interaction is essential for social communication in many scenarios; therefore, interpreting people's gaze direction is helpful for natural human-robot and human-virtual-character interactions. In this study, we first adopt a residual neural network (ResNet) structure with an embedding layer of personal identity (ID-ResNet), which outperformed the current best result of 2.51° on MPIIGaze, a benchmark dataset for gaze estimation. To avoid using manually labelled data, we used UnityEyes synthetic images, with and without style transformation, as the training data. We exceeded the previously reported best results on MPIIGaze (from 2.76° to 2.55°) and UT-Multiview (from 4.01° to 3.40°). Moreover, the model needs only fine-tuning with a few "calibration" examples for a new person to yield significant performance gains. Finally, we present the KLBS-eye dataset, which contains 15,350 images collected from 12 participants looking in nine known directions, and on which we achieved a state-of-the-art result of 0.59 ± 1.69°.
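The core architectural idea, a regression head conditioned on a learned per-person embedding, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the array sizes, the linear head, and the function names (`predict_gaze`, `id_embeddings`) are hypothetical stand-ins, and the ResNet backbone is replaced by a random feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 12 participants (as in KLBS-eye), small embedding,
# and a 128-d backbone feature in place of real ResNet output.
N_IDS, EMB_DIM, FEAT_DIM = 12, 8, 128

# Lookup table of per-person embeddings; in the paper's setup these
# would be learned jointly with the network weights.
id_embeddings = rng.normal(size=(N_IDS, EMB_DIM))

# Linear regression head mapping [image feature ; ID embedding] -> (yaw, pitch).
W = rng.normal(size=(FEAT_DIM + EMB_DIM, 2)) * 0.01

def predict_gaze(backbone_feat: np.ndarray, person_id: int) -> np.ndarray:
    """Concatenate the image feature with the person's embedding, then regress."""
    x = np.concatenate([backbone_feat, id_embeddings[person_id]])
    return x @ W  # (yaw, pitch), e.g. in radians

feat = rng.normal(size=FEAT_DIM)  # stand-in for a ResNet feature vector
gaze = predict_gaze(feat, person_id=3)
print(gaze.shape)  # (2,)
```

Under this scheme, adapting to a new person ("calibration") amounts to fitting only a new row of `id_embeddings` from a handful of labelled examples while the backbone stays fixed, which is consistent with the few-shot fine-tuning the abstract describes.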
Keywords
Appearance-based, ID-ResNet, Style transfer, Fine-tuning, Learning by synthesis