Academic Article

Visual Recognition for ZELDA Content Generation via Generative Adversarial Network
Document Type
Conference
Source
2023 3rd International Conference on Artificial Intelligence (ICAI), pp. 76-81, Feb. 2023
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Video games
Visualization
Training data
Games
Machine learning
Generative adversarial networks
Stability analysis
Procedural content generation
Deep convolutional Generative adversarial network
Wasserstein Generative Adversarial Network
Procedural content generation via machine learning
Language
English
Abstract
In video games, procedural content generation has a long history. Current procedural content generation strategies, such as search-based, solver-based, rule-based, and language-based techniques, have been used to create levels, maps, character models, and surfaces in games, and game content generation has become a dedicated research area. More recently, generative models have been responsible for a wide range of game-related content creation. Some front-line Generative Adversarial Networks (GANs) are used independently, while others are combined with more traditional techniques or an intelligent environment. GAN models suffer from a problem known as mode collapse, in which duplicate content is generated. In this article, we apply a simple Generative Adversarial Network, a Deep Convolutional Generative Adversarial Network (DCGAN), and a Wasserstein Generative Adversarial Network (WGAN) to the ZELDA data set for level content generation and evaluate the results on the basis of visual recognition. Results show that WGAN generates visually good content.
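To make the kind of setup the abstract describes concrete, the sketch below shows a minimal WGAN-style generator/critic pair for tile-based level generation in PyTorch. This is not the authors' implementation: the number of tile types, the room dimensions, the layer sizes, and the training constants are all assumptions chosen for illustration, loosely modeled on one-hot tile grids such as those in the VGLC "The Legend of Zelda" data.

```python
# Minimal WGAN sketch for tile-map generation (illustrative assumptions only).
# Rooms are assumed to be one-hot tile grids of shape (NUM_TILE_TYPES, 12, 16);
# the real Zelda rooms would be padded/encoded to this shape beforehand.
import torch
import torch.nn as nn

NUM_TILE_TYPES = 8       # assumed number of distinct tile symbols
LATENT_DIM = 32          # assumed size of the noise vector
ROOM_H, ROOM_W = 12, 16  # assumed room size, padded to be divisible by 4

class Generator(nn.Module):
    """Maps a latent vector to a per-cell distribution over tile types."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 128, kernel_size=(3, 4)),        # 1x1 -> 3x4
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),            # 3x4 -> 6x8
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, NUM_TILE_TYPES, 4, stride=2, padding=1), # 6x8 -> 12x16
            nn.Softmax(dim=1),  # per-cell probabilities over tile types
        )

    def forward(self, z):
        return self.net(z)

class Critic(nn.Module):
    """WGAN critic: outputs an unbounded real-valued score (no sigmoid)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_TILE_TYPES, 64, 4, stride=2, padding=1),  # 12x16 -> 6x8
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),             # 6x8 -> 3x4
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 3 * 4, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, critic, real, opt_g, opt_c, clip=0.01, n_critic=5):
    """One WGAN update with weight clipping, following Arjovsky et al. (2017)."""
    for _ in range(n_critic):
        z = torch.randn(real.size(0), LATENT_DIM, 1, 1)
        fake = gen(z).detach()
        # Critic minimizes E[f(fake)] - E[f(real)], i.e. maximizes the score gap.
        loss_c = critic(fake).mean() - critic(real).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in critic.parameters():        # crude Lipschitz constraint
            p.data.clamp_(-clip, clip)
    z = torch.randn(real.size(0), LATENT_DIM, 1, 1)
    loss_g = -critic(gen(z)).mean()          # generator pushes its score up
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_c.item(), loss_g.item()
```

The critic's unbounded score and the weight-clipping step are what distinguish this WGAN objective from the standard GAN and DCGAN losses compared in the paper, and they are the usual reasons given for WGAN's more stable training and reduced mode collapse.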