Academic Article

Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives
Document Type
Working Paper
Author
Grauman, Kristen; Westbury, Andrew; Torresani, Lorenzo; Kitani, Kris; Malik, Jitendra; Afouras, Triantafyllos; Ashutosh, Kumar; Baiyya, Vijay; Bansal, Siddhant; Boote, Bikram; Byrne, Eugene; Chavis, Zach; Chen, Joya; Cheng, Feng; Chu, Fu-Jen; Crane, Sean; Dasgupta, Avijit; Dong, Jing; Escobar, Maria; Forigua, Cristhian; Gebreselasie, Abrham; Haresh, Sanjay; Huang, Jing; Islam, Md Mohaiminul; Jain, Suyog; Khirodkar, Rawal; Kukreja, Devansh; Liang, Kevin J; Liu, Jia-Wei; Majumder, Sagnik; Mao, Yongsen; Martin, Miguel; Mavroudi, Effrosyni; Nagarajan, Tushar; Ragusa, Francesco; Ramakrishnan, Santhosh Kumar; Seminara, Luigi; Somayazulu, Arjun; Song, Yale; Su, Shan; Xue, Zihui; Zhang, Edward; Zhang, Jinxu; Castillo, Angela; Chen, Changan; Fu, Xinzhu; Furuta, Ryosuke; Gonzalez, Cristina; Gupta, Prince; Hu, Jiabo; Huang, Yifei; Huang, Yiming; Khoo, Weslie; Kumar, Anush; Kuo, Robert; Lakhavani, Sach; Liu, Miao; Luo, Mi; Luo, Zhengyi; Meredith, Brighid; Miller, Austin; Oguntola, Oluwatumininu; Pan, Xiaqing; Peng, Penny; Pramanick, Shraman; Ramazanova, Merey; Ryan, Fiona; Shan, Wei; Somasundaram, Kiran; Song, Chenan; Southerland, Audrey; Tateno, Masatoshi; Wang, Huiyu; Wang, Yuchen; Yagi, Takuma; Yan, Mingfei; Yang, Xitong; Yu, Zecheng; Zha, Shengxin Cindy; Zhao, Chen; Zhao, Ziwei; Zhu, Zhifan; Zhuo, Jeff; Arbelaez, Pablo; Bertasius, Gedas; Crandall, David; Damen, Dima; Engel, Jakob; Farinella, Giovanni Maria; Furnari, Antonino; Ghanem, Bernard; Hoffman, Judy; Jawahar, C. V.; Newcombe, Richard; Park, Hyun Soo; Rehg, James M.; Sato, Yoichi; Savva, Manolis; Shi, Jianbo; Shou, Mike Zheng; Wray, Michael
Source
Subject
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Artificial Intelligence
Language
English
Abstract
We present Ego-Exo4D, a diverse, large-scale multimodal multiview video dataset and benchmark challenge. Ego-Exo4D centers around simultaneously captured egocentric and exocentric video of skilled human activities (e.g., sports, music, dance, bike repair). 740 participants from 13 cities worldwide performed these activities in 123 different natural scene contexts, yielding long-form captures from 1 to 42 minutes each and 1,286 hours of video combined. The multimodal nature of the dataset is unprecedented: the video is accompanied by multichannel audio, eye gaze, 3D point clouds, camera poses, IMU, and multiple paired language descriptions, including a novel "expert commentary" done by coaches and teachers and tailored to the skilled-activity domain. To push the frontier of first-person video understanding of skilled human activity, we also present a suite of benchmark tasks and their annotations, including fine-grained activity understanding, proficiency estimation, cross-view translation, and 3D hand/body pose. All resources are open sourced to fuel new research in the community. Project page: http://ego-exo4d-data.org/
Comment: updated baseline results and dataset statistics to match the released v2 data; added table to appendix comparing stats of Ego-Exo4D alongside other datasets
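The abstract enumerates the paired modalities released with each capture (ego/exo video, multichannel audio, eye gaze, camera poses, IMU, 3D point clouds, and language descriptions). The following is a minimal sketch, assuming a simple per-take record layout; the class and field names are hypothetical illustrations and do not reflect the official Ego-Exo4D release format or API.

```python
# Hypothetical sketch (not the official Ego-Exo4D API) of how one might model
# a single "take" and its paired modalities as listed in the abstract.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CameraStream:
    """One synchronized video stream (egocentric or exocentric)."""
    view: str                                # "ego" or "exo"
    video_path: str                          # path to the RGB video file
    camera_pose_path: Optional[str] = None   # per-frame camera poses, if available


@dataclass
class Take:
    """One long-form capture (1 to 42 minutes) of a skilled activity."""
    take_id: str
    activity: str                            # e.g., "bike repair", "dance"
    city: str                                # one of the 13 collection sites
    duration_s: float
    streams: List[CameraStream] = field(default_factory=list)
    # Paired non-visual modalities described in the abstract:
    audio_path: Optional[str] = None         # multichannel audio
    gaze_path: Optional[str] = None          # eye gaze from the ego device
    imu_path: Optional[str] = None           # inertial measurements
    point_cloud_path: Optional[str] = None   # 3D point cloud of the scene
    # Language descriptions, including the "expert commentary":
    narrations: List[str] = field(default_factory=list)
    expert_commentary: List[str] = field(default_factory=list)


def total_hours(takes: List[Take]) -> float:
    """Aggregate capture time across takes (the released v2 data totals ~1,286 hours)."""
    return sum(t.duration_s for t in takes) / 3600.0
```

The flat per-take layout above is only one plausible way to organize the paired ego/exo streams; the actual release may group metadata differently.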