Academic Paper

Gemma: Open Models Based on Gemini Research and Technology
Document Type
Working Paper
Author
Gemma Team: Mesnard, Thomas; Hardin, Cassidy; Dadashi, Robert; Bhupatiraju, Surya; Pathak, Shreya; Sifre, Laurent; Rivière, Morgane; Kale, Mihir Sanjay; Love, Juliette; Tafti, Pouya; Hussenot, Léonard; Sessa, Pier Giuseppe; Chowdhery, Aakanksha; Roberts, Adam; Barua, Aditya; Botev, Alex; Castro-Ros, Alex; Slone, Ambrose; Héliou, Amélie; Tacchetti, Andrea; Bulanova, Anna; Paterson, Antonia; Tsai, Beth; Shahriari, Bobak; Lan, Charline Le; Choquette-Choo, Christopher A.; Crepy, Clément; Cer, Daniel; Ippolito, Daphne; Reid, David; Buchatskaya, Elena; Ni, Eric; Noland, Eric; Yan, Geng; Tucker, George; Muraru, George-Christian; Rozhdestvenskiy, Grigory; Michalewski, Henryk; Tenney, Ian; Grishchenko, Ivan; Austin, Jacob; Keeling, James; Labanowski, Jane; Lespiau, Jean-Baptiste; Stanway, Jeff; Brennan, Jenny; Chen, Jeremy; Ferret, Johan; Chiu, Justin; Mao-Jones, Justin; Lee, Katherine; Yu, Kathy; Millican, Katie; Sjoesund, Lars Lowe; Lee, Lisa; Dixon, Lucas; Reid, Machel; Mikuła, Maciej; Wirth, Mateo; Sharman, Michael; Chinaev, Nikolai; Thain, Nithum; Bachem, Olivier; Chang, Oscar; Wahltinez, Oscar; Bailey, Paige; Michel, Paul; Yotov, Petko; Chaabouni, Rahma; Comanescu, Ramona; Jana, Reena; Anil, Rohan; McIlroy, Ross; Liu, Ruibo; Mullins, Ryan; Smith, Samuel L; Borgeaud, Sebastian; Girgin, Sertan; Douglas, Sholto; Pandya, Shree; Shakeri, Siamak; De, Soham; Klimenko, Ted; Hennigan, Tom; Feinberg, Vlad; Stokowiec, Wojciech; Chen, Yu-hui; Ahmed, Zafarali; Gong, Zhitao; Warkentin, Tris; Peran, Ludovic; Giang, Minh; Farabet, Clément; Vinyals, Oriol; Dean, Jeff; Kavukcuoglu, Koray; Hassabis, Demis; Ghahramani, Zoubin; Eck, Douglas; Barral, Joelle; Pereira, Fernando; Collins, Eli; Joulin, Armand; Fiedel, Noah; Senter, Evan; Andreev, Alek; Kenealy, Kathleen
Source
Subject
Computer Science - Computation and Language
Computer Science - Artificial Intelligence
Language
English
Abstract
This work introduces Gemma, a family of lightweight, state-of-the-art open models built from the research and technology used to create Gemini models. Gemma models demonstrate strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of the safety and responsibility aspects of the models, alongside a detailed description of model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations.