Academic article

Mastering the game of Go without human knowledge
Document Type
Report
Source
Nature. October 19, 2017, Vol. 550 Issue 7676, p354, 6 p.
Subject
Artificial intelligence -- Forecasts and trends
Go (Game) -- Technology application
Computer games -- Forecasts and trends
Computer game
Market trend/market analysis
Artificial intelligence
Technology application
Science and technology
AlphaGo (Computer program) -- Forecasts and trends
Language
English
ISSN
0028-0836
Abstract
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher-quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
Author(s): David Silver (corresponding author) [1]; Julian Schrittwieser [1]; Karen Simonyan [1]; Ioannis Antonoglou [1]; Aja Huang [1]; Arthur Guez [1]; Thomas Hubert [1]; Lucas Baker [1]; Matthew Lai [1]; [...]
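
Note: The abstract describes a self-play reinforcement-learning loop in which one network predicts both move probabilities and the game winner, and search-improved self-play results are fed back as training targets. The Python sketch below is only a conceptual illustration of that loop under loose assumptions; every name (PolicyValueNet, run_mcts, play_game) and every detail (network, search, game representation) is a hypothetical placeholder, not the authors' implementation.

    # Conceptual sketch of the self-play training loop described in the abstract.
    # All classes and functions here are illustrative stand-ins.
    import random

    class PolicyValueNet:
        """Stand-in for the combined policy/value network."""
        def predict(self, state):
            # Uniform policy over 362 Go moves (361 points + pass), neutral value.
            return [1.0 / 362] * 362, 0.0

        def train(self, examples):
            # Placeholder: a real implementation would fit the network to
            # (state, search probabilities, winner) examples.
            pass

    def run_mcts(net, state):
        """Stand-in for tree search guided by the network; returns improved move probabilities."""
        policy, _ = net.predict(state)
        return policy  # a real search would sharpen these probabilities

    def play_game(net):
        """Self-play one toy game, recording (state, search probabilities) pairs."""
        history, state = [], "empty board"
        for _ in range(10):  # toy game length
            pi = run_mcts(net, state)
            history.append((state, pi))
            move = random.choices(range(362), weights=pi)[0]
            state = f"after move {move}"
        winner = random.choice([1, -1])  # placeholder game outcome
        return [(s, pi, winner) for s, pi in history]

    def self_play_training(iterations=3, games_per_iteration=5):
        net = PolicyValueNet()
        for _ in range(iterations):
            examples = []
            for _ in range(games_per_iteration):
                examples.extend(play_game(net))
            net.train(examples)  # a stronger net yields stronger search and self-play next iteration
        return net

    if __name__ == "__main__":
        self_play_training()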