Academic Paper

DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
Document Type
Working Paper
Author
DeepSeek-AI; Liu, Aixin; Feng, Bei; Wang, Bin; Wang, Bingxuan; Liu, Bo; Zhao, Chenggang; Deng, Chengqi; Ruan, Chong; Dai, Damai; Guo, Daya; Yang, Dejian; Chen, Deli; Ji, Dongjie; Li, Erhang; Lin, Fangyun; Luo, Fuli; Hao, Guangbo; Chen, Guanting; Li, Guowei; Zhang, H.; Xu, Hanwei; Yang, Hao; Zhang, Haowei; Ding, Honghui; Xin, Huajian; Gao, Huazuo; Li, Hui; Qu, Hui; Cai, J. L.; Liang, Jian; Guo, Jianzhong; Ni, Jiaqi; Li, Jiashi; Chen, Jin; Yuan, Jingyang; Qiu, Junjie; Song, Junxiao; Dong, Kai; Gao, Kaige; Guan, Kang; Wang, Lean; Zhang, Lecong; Xu, Lei; Xia, Leyi; Zhao, Liang; Zhang, Liyue; Li, Meng; Wang, Miaojun; Zhang, Mingchuan; Zhang, Minghua; Tang, Minghui; Li, Mingming; Tian, Ning; Huang, Panpan; Wang, Peiyi; Zhang, Peng; Zhu, Qihao; Chen, Qinyu; Du, Qiushi; Chen, R. J.; Jin, R. L.; Ge, Ruiqi; Pan, Ruizhe; Xu, Runxin; Chen, Ruyi; Li, S. S.; Lu, Shanghao; Zhou, Shangyan; Chen, Shanhuang; Wu, Shaoqing; Ye, Shengfeng; Ma, Shirong; Wang, Shiyu; Zhou, Shuang; Yu, Shuiping; Zhou, Shunfeng; Zheng, Size; Wang, T.; Pei, Tian; Yuan, Tian; Sun, Tianyu; Xiao, W. L.; Zeng, Wangding; An, Wei; Liu, Wen; Liang, Wenfeng; Gao, Wenjun; Zhang, Wentao; Li, X. Q.; Jin, Xiangyue; Wang, Xianzu; Bi, Xiao; Liu, Xiaodong; Wang, Xiaohan; Shen, Xiaojin; Chen, Xiaokang; Chen, Xiaosha; Nie, Xiaotao; Sun, Xiaowen; Wang, Xiaoxiang; Liu, Xin; Xie, Xin; Yu, Xingkai; Song, Xinnan; Zhou, Xinyi; Yang, Xinyu; Lu, Xuan; Su, Xuecheng; Wu, Y.; Li, Y. K.; Wei, Y. X.; Zhu, Y. X.; Xu, Yanhong; Huang, Yanping; Li, Yao; Zhao, Yao; Sun, Yaofeng; Li, Yaohui; Wang, Yaohui; Zheng, Yi; Zhang, Yichao; Xiong, Yiliang; Zhao, Yilong; He, Ying; Tang, Ying; Piao, Yishi; Dong, Yixin; Tan, Yixuan; Liu, Yiyuan; Wang, Yongji; Guo, Yongqiang; Zhu, Yuchen; Wang, Yuduan; Zou, Yuheng; Zha, Yukun; Ma, Yunxian; Yan, Yuting; You, Yuxiang; Liu, Yuxuan; Ren, Z. Z.; Ren, Zehui; Sha, Zhangli; Fu, Zhe; Huang, Zhen; Zhang, Zhen; Xie, Zhenda; Hao, Zhewen; Shao, Zhihong; Wen, Zhiniu; Xu, Zhipeng; Zhang, Zhongyu; Li, Zhuoshu; Wang, Zihan; Gu, Zihui; Li, Zilin; Xie, Ziwei
Source
Subject
Computer Science - Computation and Language
Computer Science - Artificial Intelligence
Language
English
Abstract
We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236B total parameters, of which 21B are activated for each token, and supports a context length of 128K tokens. DeepSeek-V2 adopts innovative architectures including Multi-head Latent Attention (MLA) and DeepSeekMoE. MLA guarantees efficient inference by significantly compressing the Key-Value (KV) cache into a latent vector, while DeepSeekMoE enables training strong models at an economical cost through sparse computation. Compared with DeepSeek 67B, DeepSeek-V2 achieves significantly stronger performance while saving 42.5% of training costs, reducing the KV cache by 93.3%, and boosting the maximum generation throughput to 5.76 times that of DeepSeek 67B. We pretrain DeepSeek-V2 on a high-quality and multi-source corpus consisting of 8.1T tokens, and further perform Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) to fully unlock its potential. Evaluation results show that, even with only 21B activated parameters, DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models.
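The abstract's KV-cache savings come from caching a small latent vector per token instead of full per-head keys and values. The following is a minimal NumPy sketch of that idea only, not DeepSeek-V2's actual implementation: the dimensions (d_model, d_latent, d_head), weight names (W_dkv, W_uk, W_uv, W_q), and the omission of details such as rotary embeddings and the output projection are all illustrative assumptions.

```python
# Illustrative sketch (assumed toy setup, not the paper's architecture or sizes):
# cache one low-dimensional latent per token, then up-project it into keys and
# values at attention time, instead of caching full per-head K and V.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_heads, d_head, d_latent = 512, 8, 64, 64  # assumed toy sizes

# Down-projection to a shared KV latent, plus per-head up- and query projections.
W_dkv = rng.standard_normal((d_model, d_latent)) / np.sqrt(d_model)
W_uk = rng.standard_normal((n_heads, d_latent, d_head)) / np.sqrt(d_latent)
W_uv = rng.standard_normal((n_heads, d_latent, d_head)) / np.sqrt(d_latent)
W_q = rng.standard_normal((n_heads, d_model, d_head)) / np.sqrt(d_model)

def decode_step(x_t, latent_cache):
    """Attend one new token x_t (d_model,) against the cached latents (t, d_latent)."""
    # Only the compressed latent is appended to the cache: d_latent floats per
    # token, versus n_heads * d_head * 2 for a conventional KV cache.
    c_t = x_t @ W_dkv                                  # (d_latent,)
    latent_cache = np.vstack([latent_cache, c_t])      # (t+1, d_latent)

    outputs = []
    for h in range(n_heads):
        q = x_t @ W_q[h]                               # (d_head,)
        K = latent_cache @ W_uk[h]                     # keys reconstructed from latents
        V = latent_cache @ W_uv[h]                     # values reconstructed from latents
        scores = K @ q / np.sqrt(d_head)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        outputs.append(weights @ V)                    # (d_head,)
    return np.concatenate(outputs), latent_cache

latent_cache = np.empty((0, d_latent))
for _ in range(4):                                     # a few decoding steps
    out, latent_cache = decode_step(rng.standard_normal(d_model), latent_cache)

print("cached floats per token (latent):", d_latent)
print("cached floats per token (full KV):", n_heads * d_head * 2)
```

With these made-up sizes the per-token cache shrinks by a factor of n_heads * d_head * 2 / d_latent = 16; the paper's reported 93.3% reduction reflects its own, different configuration.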