Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives

Kristen Grauman, Andrew Westbury, Lorenzo Torresani, Kris Kitani, Jitendra Malik, Triantafyllos Afouras, Kumar Ashutosh, Vijay Baiyya, Siddhant Bansal, Bikram Boote, Eugene Byrne, Zach Chavis, Joya Chen, Feng Cheng, Fu-Jen Chu, Sean Crane, Avijit Dasgupta, Jing Dong, Maria Escobar, Cristhian Forigua, Abrham Gebreselasie, Sanjay Haresh, Jing Huang, Md Mohaiminul Islam, Suyog Jain, Rawal Khirodkar, Devansh Kukreja, Kevin J Liang, Jia-Wei Liu, Sagnik Majumder, Yongsen Mao, Miguel Martin, Effrosyni Mavroudi, Tushar Nagarajan, Francesco Ragusa, Santhosh Kumar Ramakrishnan, Luigi Seminara, Arjun Somayazulu, Yale Song, Shan Su, Zihui Xue, Edward Zhang, Jinxu Zhang, Angela Castillo, Changan Chen, Xinzhu Fu, Ryosuke Furuta, Cristina Gonzalez, Prince Gupta, Jiabo Hu, Yifei Huang, Yiming Huang, Weslie Khoo, Anush Kumar, Robert Kuo, Sach Lakhavani, Miao Liu, Mi Luo, Zhengyi Luo, Brighid Meredith, Austin Miller, Oluwatumininu Oguntola, Xiaqing Pan, Penny Peng, Shraman Pramanick, Merey Ramazanova, Fiona Ryan, Wei Shan, Kiran Somasundaram, Chenan Song, Audrey Southerland, Masatoshi Tateno, Huiyu Wang, Yuchen Wang, Takuma Yagi, Mingfei Yan, Xitong Yang, Zecheng Yu, Shengxin Cindy Zha, Chen Zhao, Ziwei Zhao, Zhifan Zhu, Jeff Zhuo, Pablo Arbelaez, Gedas Bertasius, David Crandall, Dima Damen, Jakob Engel, Giovanni Maria Farinella, Antonino Furnari, Bernard Ghanem, Judy Hoffman, C. V. Jawahar, Richard Newcombe, Hyun Soo Park, James M. Rehg, Yoichi Sato, Manolis Savva, Jianbo Shi, Mike Zheng Shou, Michael Wray

CVPR 2024

Abstract
We present Ego-Exo4D, a diverse, large-scale multimodal multiview video dataset and benchmark challenge. Ego-Exo4D centers around simultaneously-captured egocentric and exocentric video of skilled human activities (e.g., sports, music, dance, bike repair). More than 800 participants from 13 cities worldwide performed these activities in 131 different natural scene contexts, yielding long-form captures from 1 to 42 minutes each and 1,422 hours of video combined. The multimodal nature of the dataset is unprecedented: the video is accompanied by multichannel audio, eye gaze, 3D point clouds, camera poses, IMU, and multiple paired language descriptions -- including a novel "expert commentary" done by coaches and teachers and tailored to the skilled-activity domain. To push the frontier of first-person video understanding of skilled human activity, we also present a suite of benchmark tasks and their annotations, including fine-grained activity understanding, proficiency estimation, cross-view translation, and 3D hand/body pose. All resources will be open sourced to fuel new research in the community.
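To make the dataset's multimodal composition concrete, the sketch below models one "take" (a single long-form capture) as a plain data record. This is only an illustration derived from the modalities listed in the abstract; the field names, types, and `Take` class are hypothetical and are not the dataset's actual schema or API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record for one Ego-Exo4D "take" (a single long-form capture).
# All names here are illustrative assumptions; consult the official release
# for the real schema and loaders.

@dataclass
class Take:
    take_id: str
    activity: str                       # e.g., "bike repair", "dance"
    city: str                           # one of the 13 collection sites
    duration_min: float                 # long-form captures run 1 to 42 minutes
    ego_video: str                      # path to the egocentric (first-person) video
    exo_videos: List[str]               # paths to the exocentric (third-person) videos
    audio: Optional[str] = None         # multichannel audio track
    eye_gaze: Optional[str] = None      # camera wearer's gaze stream
    point_cloud: Optional[str] = None   # 3D point cloud of the scene
    camera_poses: Optional[str] = None  # per-frame camera pose trajectories
    imu: Optional[str] = None           # inertial measurements
    narrations: List[str] = field(default_factory=list)         # paired language descriptions
    expert_commentary: List[str] = field(default_factory=list)  # coach/teacher commentary
```

Under this framing, the benchmark annotations (fine-grained activity labels, proficiency estimates, cross-view correspondences, 3D hand/body pose) would attach per-take alongside these raw streams.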