Academic Paper
Q-Learning-Driven Framework for High-Dimensional Optimization Problems
Document Type
Conference
Author
Source
2024 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, Jun. 2024
Subject
Language
Abstract
High-dimensional optimization problems present a significant challenge across scientific and engineering domains due to their complexity and the exponential growth of the search space. Traditional optimization algorithms often struggle to balance exploration and exploitation efficiently in such settings. To address this challenge, this paper integrates Reinforcement Learning (RL) with metaheuristic algorithms. The proposed framework, QL-H(GDPA), dynamically selects among Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), and Artificial Bee Colony (ABC) algorithms based on their performance history. The RL agent is trained via a Q-Learning algorithm with dynamically allocated rewards to ensure a fair evaluation of improvements in objective values; it determines the most suitable algorithm to apply in each iteration, adapting its strategy as the optimization progresses. QL-H(GDPA) is evaluated on five widely recognized high-dimensional benchmark functions using statistical analyses such as the Friedman test and the Nemenyi post hoc test. The experimental results demonstrate the superior performance of QL-H(GDPA) over the individual algorithms, highlighting its effectiveness in high-dimensional optimization. The adaptive algorithm-selection process enables more effective navigation of complex solution spaces, particularly in high-dimensional contexts. The study underscores the potential of RL to improve optimization strategies and opens avenues for more intelligent and adaptable optimization frameworks in high-dimensional scenarios.
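The selection mechanism the abstract describes — a Q-Learning agent choosing one of GA, PSO, DE, or ABC each iteration, with rewards derived from the improvement in the objective value — can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's actual QL-H(GDPA) implementation: the single-state Q-table, ε-greedy policy, hyperparameter values, and relative-improvement reward are all assumptions made for the example.

```python
import random


class QLearningSelector:
    """Illustrative Q-Learning selector over metaheuristic algorithms.

    Assumptions (not from the paper): one shared state, epsilon-greedy
    action selection, and reward = relative improvement of the best
    objective value (minimization).
    """

    def __init__(self, algorithms, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.algorithms = list(algorithms)
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        self.q = {a: 0.0 for a in self.algorithms}  # single-state Q-table

    def select(self):
        # Epsilon-greedy: explore occasionally, otherwise pick the
        # algorithm with the highest learned Q-value.
        if random.random() < self.epsilon:
            return random.choice(self.algorithms)
        return max(self.algorithms, key=lambda a: self.q[a])

    def update(self, algorithm, old_best, new_best):
        # Reward is the relative improvement in the best objective value;
        # no improvement (or worsening) yields a non-positive reward.
        reward = (old_best - new_best) / (abs(old_best) + 1e-12)
        best_next = max(self.q.values())
        self.q[algorithm] += self.alpha * (
            reward + self.gamma * best_next - self.q[algorithm]
        )


# Toy usage: pretend "DE" keeps improving the objective, so its Q-value
# grows and a greedy selector (epsilon=0) learns to prefer it.
selector = QLearningSelector(["GA", "PSO", "DE", "ABC"], epsilon=0.0)
best = 100.0
for _ in range(10):
    chosen = "DE"               # stand-in for running the chosen algorithm
    new_best = best * 0.9       # assume a 10% improvement per iteration
    selector.update(chosen, best, new_best)
    best = new_best
print(selector.select())
```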