Academic Paper

Reinforcement Learning-Based Fed-Batch Optimization with Reaction Surrogate Model
Document Type
Conference
Source
2021 American Control Conference (ACC), pp. 2581-2586, May 2021
Subject
Aerospace
Bioengineering
Components, Circuits, Devices and Systems
Computing and Processing
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Computational modeling
Simulation
Reinforcement learning
Data models
Stability analysis
Real-time systems
Computational efficiency
Deep Reinforcement Learning
Surrogate Modeling
fed-batch optimization
Proximal Policy Optimization
LSTM modeling
Language
English
ISSN
2378-5861
Abstract
In this paper, we implement a framework which combines Reinforcement Learning (RL)-based reaction optimization with a first-principles model and historical plant data of the reaction system. Here we employ a Long Short-Term Memory (LSTM) network for reaction surrogate modeling and the Proximal Policy Optimization (PPO) algorithm for the fed-batch optimization. The proposed reaction surrogate model combines simulation data with real plant data for an accurate and computationally efficient reaction simulation. Based on the surrogate model, the RL optimization result suggests maintaining an increased temperature setpoint and a high reactant feed flow to maximize product profits. Simulation results following the RL profile suggest an estimated 6.4% improvement in product profits.
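
Note: no code accompanies this record. As a rough, non-authoritative sketch of the kind of pipeline the abstract describes, the Python snippet below pairs a small LSTM surrogate (PyTorch) with PPO (stable-baselines3) on a toy fed-batch environment. Every class name (LSTMSurrogate, FedBatchSurrogateEnv), state/action dimension, initial condition, and the profit-style reward is an illustrative assumption, not a detail taken from the paper.

    # Illustrative sketch only -- not the authors' code. Assumes PyTorch,
    # gymnasium, and stable-baselines3 are installed; all names, dimensions,
    # and the toy reward are hypothetical stand-ins for the paper's setup.
    import numpy as np
    import torch
    import torch.nn as nn
    import gymnasium as gym
    from gymnasium import spaces
    from stable_baselines3 import PPO


    class LSTMSurrogate(nn.Module):
        """LSTM mapping a history of (state, action) pairs to the next state."""

        def __init__(self, state_dim=3, action_dim=2, hidden_dim=64):
            super().__init__()
            self.lstm = nn.LSTM(state_dim + action_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, state_dim)

        def forward(self, seq):                  # seq: (batch, time, state+action)
            out, _ = self.lstm(seq)
            return self.head(out[:, -1, :])      # predicted next state


    class FedBatchSurrogateEnv(gym.Env):
        """Toy fed-batch environment whose dynamics come from the LSTM surrogate.

        State: [product conc., volume, temperature]; action: [feed rate, temp. setpoint].
        Reward: a hypothetical profit proxy, not the paper's objective.
        """

        def __init__(self, surrogate, horizon=50, history=10):
            super().__init__()
            self.surrogate = surrogate.eval()
            self.horizon, self.history = horizon, history
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)
            self.action_space = spaces.Box(0.0, 1.0, shape=(2,), dtype=np.float32)

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            self.t = 0
            self.state = np.array([0.1, 0.5, 0.3], dtype=np.float32)  # arbitrary start
            self.buffer = []                                          # rolling history
            return self.state, {}

        def step(self, action):
            self.buffer.append(np.concatenate([self.state, action]).astype(np.float32))
            seq = torch.tensor(np.stack(self.buffer[-self.history:]))[None]  # (1, T, 5)
            with torch.no_grad():
                self.state = self.surrogate(seq).numpy().squeeze(0).astype(np.float32)
            self.t += 1
            reward = float(self.state[0]) - 0.1 * float(action[0])   # product minus feed cost
            terminated = self.t >= self.horizon
            return self.state, reward, terminated, False, {}


    if __name__ == "__main__":
        surrogate = LSTMSurrogate()          # in practice, fit to simulation + plant data first
        env = FedBatchSurrogateEnv(surrogate)
        agent = PPO("MlpPolicy", env, verbose=0)
        agent.learn(total_timesteps=10_000)  # PPO learns feed/temperature profiles on the surrogate

In the framework the abstract describes, the surrogate would first be trained on a blend of first-principles simulation runs and historical plant batches, and only then would the PPO agent be optimized against it to propose temperature-setpoint and reactant feed-flow profiles.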