Academic Paper

Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Document Type
Working Paper
Author
Collaboration, Open X-Embodiment; O'Neill, Abby; Rehman, Abdul; Maddukuri, Abhiram; Gupta, Abhishek; Padalkar, Abhishek; Lee, Abraham; Pooley, Acorn; Gupta, Agrim; Mandlekar, Ajay; Jain, Ajinkya; Tung, Albert; Bewley, Alex; Herzog, Alex; Irpan, Alex; Khazatsky, Alexander; Rai, Anant; Gupta, Anchit; Wang, Andrew; Kolobov, Andrey; Singh, Anikait; Garg, Animesh; Kembhavi, Aniruddha; Xie, Annie; Brohan, Anthony; Raffin, Antonin; Sharma, Archit; Yavary, Arefeh; Jain, Arhan; Balakrishna, Ashwin; Wahid, Ayzaan; Burgess-Limerick, Ben; Kim, Beomjoon; Schölkopf, Bernhard; Wulfe, Blake; Ichter, Brian; Lu, Cewu; Xu, Charles; Le, Charlotte; Finn, Chelsea; Wang, Chen; Xu, Chenfeng; Chi, Cheng; Huang, Chenguang; Chan, Christine; Agia, Christopher; Pan, Chuer; Fu, Chuyuan; Devin, Coline; Xu, Danfei; Morton, Daniel; Driess, Danny; Chen, Daphne; Pathak, Deepak; Shah, Dhruv; Büchler, Dieter; Jayaraman, Dinesh; Kalashnikov, Dmitry; Sadigh, Dorsa; Johns, Edward; Foster, Ethan; Liu, Fangchen; Ceola, Federico; Xia, Fei; Zhao, Feiyu; Frujeri, Felipe Vieira; Stulp, Freek; Zhou, Gaoyue; Sukhatme, Gaurav S.; Salhotra, Gautam; Yan, Ge; Feng, Gilbert; Schiavi, Giulio; Berseth, Glen; Kahn, Gregory; Wang, Guanzhi; Su, Hao; Fang, Hao-Shu; Shi, Haochen; Bao, Henghui; Amor, Heni Ben; Christensen, Henrik I; Furuta, Hiroki; Walke, Homer; Fang, Hongjie; Ha, Huy; Mordatch, Igor; Radosavovic, Ilija; Leal, Isabel; Liang, Jacky; Abou-Chakra, Jad; Kim, Jaehyung; Drake, Jaimyn; Peters, Jan; Schneider, Jan; Hsu, Jasmine; Bohg, Jeannette; Bingham, Jeffrey; Wu, Jeffrey; Gao, Jensen; Hu, Jiaheng; Wu, Jiajun; Wu, Jialin; Sun, Jiankai; Luo, Jianlan; Gu, Jiayuan; Tan, Jie; Oh, Jihoon; Wu, Jimmy; Lu, Jingpei; Yang, Jingyun; Malik, Jitendra; Silvério, João; Hejna, Joey; Booher, Jonathan; Tompson, Jonathan; Yang, Jonathan; Salvador, Jordi; Lim, Joseph J.; Han, Junhyek; Wang, Kaiyuan; Rao, Kanishka; Pertsch, Karl; Hausman, Karol; Go, Keegan; Gopalakrishnan, Keerthana; Goldberg, Ken; Byrne, Kendra; Oslund, Kenneth; Kawaharazuka, Kento; Black, Kevin; Lin, Kevin; Zhang, Kevin; Ehsani, Kiana; Lekkala, Kiran; Ellis, Kirsty; Rana, Krishan; Srinivasan, Krishnan; Fang, Kuan; Singh, Kunal Pratap; Zeng, Kuo-Hao; Hatch, Kyle; Hsu, Kyle; Itti, Laurent; Chen, Lawrence Yunliang; Pinto, Lerrel; Fei-Fei, Li; Tan, Liam; Fan, Linxi "Jim"; Ott, Lionel; Lee, Lisa; Weihs, Luca; Chen, Magnum; Lepert, Marion; Memmel, Marius; Tomizuka, Masayoshi; Itkina, Masha; Castro, Mateo Guaman; Spero, Max; Du, Maximilian; Ahn, Michael; Yip, Michael C.; Zhang, Mingtong; Ding, Mingyu; Heo, Minho; Srirama, Mohan Kumar; Sharma, Mohit; Kim, Moo Jin; Kanazawa, Naoaki; Hansen, Nicklas; Heess, Nicolas; Joshi, Nikhil J; Suenderhauf, Niko; Liu, Ning; Di Palo, Norman; Shafiullah, Nur Muhammad Mahi; Mees, Oier; Kroemer, Oliver; Bastani, Osbert; Sanketi, Pannag R; Miller, Patrick "Tree"; Yin, Patrick; Wohlhart, Paul; Xu, Peng; Fagan, Peter David; Mitrano, Peter; Sermanet, Pierre; Abbeel, Pieter; Sundaresan, Priya; Chen, Qiuyu; Vuong, Quan; Rafailov, Rafael; Tian, Ran; Doshi, Ria; Martín-Martín, Roberto; Baijal, Rohan; Scalise, Rosario; Hendrix, Rose; Lin, Roy; Qian, Runjia; Zhang, Ruohan; Mendonca, Russell; Shah, Rutav; Hoque, Ryan; Julian, Ryan; Bustamante, Samuel; Kirmani, Sean; Levine, Sergey; Lin, Shan; Moore, Sherry; Bahl, Shikhar; Dass, Shivin; Sonawani, Shubham; Song, Shuran; Xu, Sichun; Haldar, Siddhant; Karamcheti, Siddharth; Adebola, Simeon; Guist, Simon; Nasiriany, Soroush; Schaal, Stefan; Welker, Stefan; Tian, Stephen; Ramamoorthy, Subramanian; Dasari, Sudeep; Belkhale, Suneel; Park, Sungjae; Nair, Suraj; Mirchandani, Suvir; Osa, Takayuki; Gupta, Tanmay; Harada, Tatsuya; Matsushima, Tatsuya; Xiao, Ted; Kollar, Thomas; Yu, Tianhe; Ding, Tianli; Davchev, Todor; Zhao, Tony Z.; Armstrong, Travis; Darrell, Trevor; Chung, Trinity; Jain, Vidhi; Vanhoucke, Vincent; Zhan, Wei; Zhou, Wenxuan; Burgard, Wolfram; Chen, Xi; Chen, Xiangyu; Wang, Xiaolong; Zhu, Xinghao; Geng, Xinyang; Liu, Xiyuan; Liangwei, Xu; Li, Xuanlin; Pang, Yansong; Lu, Yao; Ma, Yecheng Jason; Kim, Yejin; Chebotar, Yevgen; Zhou, Yifan; Zhu, Yifeng; Wu, Yilin; Xu, Ying; Wang, Yixuan; Bisk, Yonatan; Cho, Yoonyoung; Lee, Youngwoon; Cui, Yuchen; Cao, Yue; Wu, Yueh-Hua; Tang, Yujin; Zhu, Yuke; Zhang, Yunchu; Jiang, Yunfan; Li, Yunshuang; Li, Yunzhu; Iwasawa, Yusuke; Matsuo, Yutaka; Ma, Zehan; Xu, Zhuo; Cui, Zichen Jeff; Zhang, Zichen; Fu, Zipeng; Lin, Zipeng
Source
Subject
Computer Science - Robotics
Language
Abstract
Large, high-capacity models trained on diverse datasets have shown remarkable success in efficiently tackling downstream applications. In domains from NLP to computer vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a generalist X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots, collected through a collaboration between 21 institutions, demonstrating 527 skills (160,266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. More details can be found on the project website https://robotics-transformer-x.github.io.
Comment: Project website: https://robotics-transformer-x.github.io
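The "standardized data formats" the abstract refers to are, in the released datasets, episodic records in the RLDS style distributed via TensorFlow Datasets: each episode is a sequence of steps pairing an observation (camera frames, a language instruction) with an action. The sketch below mocks that structure in plain Python; the field names (`observation`, `action`, `is_last`) follow RLDS conventions, but the exact schema of each released dataset may differ, and `make_mock_episode` is a hypothetical helper, not part of the release.

```python
# A minimal sketch of the RLDS-style episodic structure used by the
# Open X-Embodiment datasets, mocked with plain Python containers.
# Field names are illustrative; consult each dataset's feature spec
# for the actual schema.

def make_mock_episode(num_steps=3):
    """Build a toy episode resembling an RLDS record."""
    steps = []
    for t in range(num_steps):
        steps.append({
            "observation": {
                "image": [[0] * 4] * 4,  # stand-in for a camera frame
                "natural_language_instruction": "pick up the block",
            },
            "action": [0.0] * 7,  # e.g. end-effector deltas + gripper command
            "is_last": t == num_steps - 1,  # marks the final step
        })
    return {"steps": steps, "episode_metadata": {"robot": "mock_arm"}}

def iterate_transitions(episode):
    """Yield (observation, action) pairs, the unit a policy trains on."""
    for step in episode["steps"]:
        yield step["observation"], step["action"]

episode = make_mock_episode()
transitions = list(iterate_transitions(episode))
```

Keeping every robot's data in one episodic layout like this is what lets a single model consume demonstrations from many platforms without per-robot loaders.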