Academic Paper

Scaling Laws of Dataset Size for VideoGPT / VideoGPTのデータセットサイズに関するスケーリング則
Document Type
Journal Article
Source
Proceedings of the Annual Conference of JSAI. 2023, :2
Subject
Dataset Size
Scaling Laws
World Models
Language
Japanese
ISSN
2758-7347
Abstract
Over the past decade, deep learning has made significant strides across various domains by training large models with large-scale computational resources. Recent studies have shown that large-scale transformer models perform well in diverse generative tasks, including language modeling and image modeling. Efficient training of such large-scale models requires a vast amount of data, and many fields are working on building large-scale datasets. However, despite the development of simulator environments such as CARLA and large-scale datasets such as RoboNet, how the performance of world models, which aim to acquire the temporal and spatial structure of environments, scales with dataset size has yet to be sufficiently studied. This work therefore experimentally demonstrates the scaling law of a world model with respect to dataset size, using VideoGPT and a dataset generated by the CARLA simulator. We also show that, when the number of model parameters is on the order of 10^7 or larger and the computational budget is limited, the budget should mainly be spent on scaling up dataset size.
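The abstract claims a scaling law of world-model performance with dataset size but does not state its functional form. The following is a minimal sketch, assuming the power-law form L(D) = (D_c / D)^alpha commonly used in the scaling-law literature; the dataset sizes, loss values, and fitted quantities are purely illustrative and are not results from the paper.

```python
# Hedged sketch: assumes a power-law scaling L(D) = (D_c / D)**alpha,
# which is a common assumption in scaling-law studies, not a form stated
# in this record. All numbers below are illustrative placeholders.
import numpy as np

# Hypothetical (dataset size, validation loss) pairs from a world-model training run.
dataset_sizes = np.array([1e4, 3e4, 1e5, 3e5, 1e6])  # number of training frames (illustrative)
val_losses = np.array([4.1, 3.5, 2.9, 2.5, 2.1])      # per-frame loss (illustrative)

# A power law is linear in log-log space:
#   log L = alpha * log D_c - alpha * log D,
# so a least-squares line fit of log L against log D recovers alpha and D_c.
slope, intercept = np.polyfit(np.log(dataset_sizes), np.log(val_losses), deg=1)
alpha = -slope
D_c = np.exp(intercept / alpha)

print(f"fitted exponent alpha = {alpha:.3f}, critical dataset size D_c = {D_c:.3e}")
```

Under this assumed form, a larger fitted alpha means the validation loss falls faster as the dataset grows, which is the kind of relationship the abstract's compute-allocation claim (spend a limited budget on more data once the model exceeds roughly 10^7 parameters) would rest on.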
