Academic Journal Article

Reinforcement Learning Algorithm and FDTD-Based Simulation Applied to Schroeder Diffuser Design Optimization
Document Type
Periodical
Source
IEEE Access, vol. 9, pp. 136004-136017, 2021
Subject
Acoustics
Mathematical models
Optimization
Finite difference methods
Acoustic waves
Time-domain analysis
Computational modeling
Schroeder diffuser optimization
acoustic simulation
finite-difference time-domain (FDTD)
reinforcement learning
Language
English
ISSN
2169-3536
Abstract
This paper proposes a novel approach to the algorithmic design of Schroeder acoustic diffusers, employing a deep reinforcement learning optimization algorithm and a fitness function based on a computer simulation of acoustic wave propagation. The method employed is a deep policy gradient algorithm, used as a tool for a sequential optimization process that seeks to maximize a fitness function derived from the autocorrelation diffusion coefficient of the designed acoustic diffuser. Because the autocorrelation diffusion coefficient is calculated from the polar response of a diffuser, the finite-difference time-domain (FDTD) simulation method is used to obtain the set of impulse responses needed to compute the polar responses of the optimized Schroeder diffusers. The results of the deep reinforcement learning optimization were compared with the outcomes of two baseline approaches: a genetic algorithm and random selection of the Schroeder diffuser well-depth pattern. The best results were achieved by the deep policy gradient algorithm, whose designs were statistically better, in terms of the obtained autocorrelation diffusion coefficient, than those supplied by the two baseline approaches.
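For context on what is being optimized: a classical Schroeder quadratic-residue diffuser fixes its well depths by the sequence d_n = (n^2 mod N) * lambda0 / (2N) for a prime N and design wavelength lambda0, and it is this depth pattern that the optimization replaces with a learned sequence. A minimal sketch of the classical formula; the design frequency, N, and speed of sound below are illustrative assumptions, not values from the paper:

```python
# Classical quadratic-residue (Schroeder) well-depth pattern: the structure
# whose depth sequence the paper's optimization replaces with a learned one.
# The design frequency f0, prime N, and speed of sound are illustrative.

def qrd_depths(N: int, f0: float, c: float = 343.0) -> list[float]:
    """Well depths d_n = (n^2 mod N) * lambda0 / (2N) for one period."""
    lam0 = c / f0                                   # design wavelength [m]
    return [(n * n % N) * lam0 / (2 * N) for n in range(N)]

print(qrd_depths(7, 500.0))   # depths in metres of a 7-well QRD period
```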
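The impulse responses feeding the polar-response calculation come from an FDTD simulation of acoustic wave propagation. A hedged sketch of a standard staggered-grid 2-D acoustic FDTD kernel of this general kind; the grid size, source pulse, and source position are assumptions, and the paper's actual grid, boundaries, and diffuser geometry are not reproduced here:

```python
import numpy as np

# Minimal 2-D acoustic FDTD leapfrog kernel (pressure + staggered velocity).
# Grid size, source pulse, and positions are assumptions for illustration.
c, rho = 343.0, 1.21             # speed of sound [m/s], air density [kg/m^3]
dx = 0.01                        # spatial step [m]
dt = dx / (c * np.sqrt(2))       # 2-D Courant stability limit
nx = ny = 200
p = np.zeros((nx, ny))           # pressure field
ux = np.zeros((nx + 1, ny))      # x-velocity on staggered grid
uy = np.zeros((nx, ny + 1))      # y-velocity on staggered grid

for t in range(1000):
    # velocity update from the pressure gradient
    ux[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    uy[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    # pressure update from the velocity divergence
    p -= rho * c**2 * dt / dx * (ux[1:, :] - ux[:-1, :] + uy[:, 1:] - uy[:, :-1])
    # soft source: assumed Gaussian pulse approximating an impulse
    p[100, 20] += np.exp(-((t - 30) / 10.0) ** 2)

# Sampling p at receiver positions over time yields the impulse responses
# from which polar responses are computed.
```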
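The fitness function is built on the autocorrelation diffusion coefficient computed from the polar response. Assuming the paper follows the common AES-4id-2001 definition of this coefficient (an assumption, since the abstract does not name the standard), a sketch:

```python
import numpy as np

# Autocorrelation (directional) diffusion coefficient per AES-4id-2001,
# computed from a polar response given as sound pressure levels L_i in dB.
def diffusion_coefficient(levels_db: np.ndarray) -> float:
    e = 10.0 ** (levels_db / 10.0)   # energies per receiver angle
    n = e.size
    return (e.sum() ** 2 - (e ** 2).sum()) / ((n - 1) * (e ** 2).sum())

# A perfectly uniform polar response gives a coefficient of 1.0:
print(diffusion_coefficient(np.full(37, -20.0)))   # -> 1.0
```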
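The sequential optimization itself can be pictured as a REINFORCE-style deep policy gradient loop that picks one quantized well depth per step and is rewarded with the fitness of the finished design. The network architecture, the number of depth levels, and the placeholder fitness() below are illustrative assumptions, not the paper's actual settings:

```python
import torch

# REINFORCE-style deep policy gradient that chooses one quantized well depth
# per step. Network size, the 8 depth levels, and fitness() are placeholders.
N_WELLS, N_LEVELS = 7, 8
policy = torch.nn.Sequential(
    torch.nn.Linear(N_WELLS, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, N_LEVELS),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def fitness(depths):                  # placeholder reward; in the paper this
    return -abs(sum(depths) - 20)     # role is played by the diffusion coeff.

for episode in range(500):
    state = torch.zeros(N_WELLS)      # depth indices chosen so far
    log_probs, depths = [], []
    for i in range(N_WELLS):          # sequential well-depth selection
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        depths.append(int(action))
        state = state.clone()
        state[i] = float(action)
    reward = fitness(depths)
    loss = -torch.stack(log_probs).sum() * reward   # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's pipeline, the placeholder reward would instead be the FDTD-derived polar response of the candidate design fed through the diffusion-coefficient computation sketched above.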