Academic Paper

Scaling Instructable Agents Across Many Simulated Worlds
Document Type
Working Paper
Author
SIMA Team: Raad, Maria Abi; Ahuja, Arun; Barros, Catarina; Besse, Frederic; Bolt, Andrew; Bolton, Adrian; Brownfield, Bethanie; Buttimore, Gavin; Cant, Max; Chakera, Sarah; Chan, Stephanie C. Y.; Clune, Jeff; Collister, Adrian; Copeman, Vikki; Cullum, Alex; Dasgupta, Ishita; de Cesare, Dario; Di Trapani, Julia; Donchev, Yani; Dunleavy, Emma; Engelcke, Martin; Faulkner, Ryan; Garcia, Frankie; Gbadamosi, Charles; Gong, Zhitao; Gonzales, Lucy; Gupta, Kshitij; Gregor, Karol; Hallingstad, Arne Olav; Harley, Tim; Haves, Sam; Hill, Felix; Hirst, Ed; Hudson, Drew A.; Hudson, Jony; Hughes-Fitt, Steph; Rezende, Danilo J.; Jasarevic, Mimi; Kampis, Laura; Ke, Rosemary; Keck, Thomas; Kim, Junkyung; Knagg, Oscar; Kopparapu, Kavya; Lampinen, Andrew; Legg, Shane; Lerchner, Alexander; Limont, Marjorie; Liu, Yulan; Loks-Thompson, Maria; Marino, Joseph; Cussons, Kathryn Martin; Matthey, Loic; Mcloughlin, Siobhan; Mendolicchio, Piermaria; Merzic, Hamza; Mitenkova, Anna; Moufarek, Alexandre; Oliveira, Valeria; Oliveira, Yanko; Openshaw, Hannah; Pan, Renke; Pappu, Aneesh; Platonov, Alex; Purkiss, Ollie; Reichert, David; Reid, John; Richemond, Pierre Harvey; Roberts, Tyson; Ruscoe, Giles; Elias, Jaume Sanchez; Sandars, Tasha; Sawyer, Daniel P.; Scholtes, Tim; Simmons, Guy; Slater, Daniel; Soyer, Hubert; Strathmann, Heiko; Stys, Peter; Tam, Allison C.; Teplyashin, Denis; Terzi, Tayfun; Vercelli, Davide; Vujatovic, Bojan; Wainwright, Marcus; Wang, Jane X.; Wang, Zhengdong; Wierstra, Daan; Williams, Duncan; Wong, Nathaniel; York, Sarah; Young, Nick
Subject
Computer Science - Robotics
Computer Science - Artificial Intelligence
Computer Science - Human-Computer Interaction
Computer Science - Machine Learning
Abstract
Building embodied AI systems that can follow arbitrary language instructions in any 3D environment is a key challenge for creating general AI. Accomplishing this goal requires learning to ground language in perception and embodied actions in order to carry out complex tasks. The Scalable, Instructable, Multiworld Agent (SIMA) project tackles this by training agents to follow free-form instructions across a diverse range of virtual 3D environments, including curated research environments as well as open-ended, commercial video games. Our goal is to develop an instructable agent that can accomplish anything a human can do in any simulated 3D environment. Our approach focuses on language-driven generality while imposing minimal assumptions. Our agents interact with environments in real time using a generic, human-like interface: the inputs are image observations and language instructions, and the outputs are keyboard-and-mouse actions. This general approach is challenging, but it allows agents to ground language across many visually complex and semantically rich environments while also allowing us to readily run agents in new environments. In this paper we describe our motivation and goal, the initial progress we have made, and promising preliminary results on several diverse research environments and a variety of commercial video games.
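
The following is a minimal Python sketch of the generic, human-like interface the abstract describes: the agent receives only a screen image plus a free-form text instruction and emits keyboard-and-mouse actions. All names here (Observation, Action, InstructableAgent, run_episode, and the env methods) are hypothetical illustrations, not the SIMA implementation.

    import numpy as np
    from dataclasses import dataclass, field
    from typing import Protocol

    @dataclass
    class Observation:
        image: np.ndarray    # RGB screen capture, shape (height, width, 3)
        instruction: str     # free-form language instruction, e.g. "chop down the tree"

    @dataclass
    class Action:
        keys: list = field(default_factory=list)            # keyboard keys held this step, e.g. ["w"]
        mouse_delta: tuple = (0, 0)                         # relative cursor movement in pixels
        mouse_buttons: list = field(default_factory=list)   # mouse buttons held, e.g. ["left"]

    class InstructableAgent(Protocol):
        def act(self, obs: Observation) -> Action:
            """Map the current frame and instruction to a low-level action."""
            ...

    def run_episode(agent: InstructableAgent, env, instruction: str, max_steps: int = 1000) -> None:
        # 'env' is assumed to expose screenshot() and apply(action); both are
        # placeholders, since the environments are commercial games driven
        # through OS-level keyboard and mouse rather than a research API.
        for _ in range(max_steps):
            obs = Observation(image=env.screenshot(), instruction=instruction)
            env.apply(agent.act(obs))

Because the action space stays at the keyboard-and-mouse level rather than exposing any per-game internals, the same agent loop can, in principle, be pointed at a new 3D environment without modification, which is the property the abstract emphasizes.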