Academic Paper

Guiding FPGA Detailed Placement via Reinforcement Learning
Document Type
Conference
Source
2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration (VLSI-SoC), pp. 1-6, Oct. 2022
Subject
Components, Circuits, Devices and Systems
Runtime
Q-learning
Heuristic algorithms
Benchmark testing
Very large scale integration
Feature extraction
Routing
Reinforcement Learning
Placement
FPGAs
Language
ISSN
2324-8440
Abstract
Detailed Placement (DP) is an important, but time-consuming, optimization step within the Field Programmable Gate Array (FPGA) design flow. Given a global placement, DP seeks to refine it to improve the success of the subsequent routing step. In this paper, we show how Reinforcement Learning (RL) can be used to significantly reduce DP runtimes while maintaining Quality-of-Result (QoR). We develop three RL models based on Tabular Q-Learning, Deep Q-Learning, and Actor-Critic. These models are evaluated by integrating them into GPlace3.0 – a state-of-the-art analytic FPGA placement tool – and tested using the 12 ISPD contest benchmarks. Our results show the models achieve total runtime improvements between 2x and 3.5x with similar QoR compared to GPlace3.0’s algorithmic-based detailed placer.
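As background for the Tabular Q-Learning approach named in the abstract, the sketch below shows the standard tabular Q-learning update on a toy problem. This is illustrative only: the paper's actual state, action, and reward formulation for guiding detailed placement is not given in the abstract, so the chain MDP, epsilon value, and transition table here are all assumptions made for the example.

```python
import random

def q_learning(transitions, n_states, n_actions,
               alpha=0.5, gamma=0.9, episodes=200, seed=0):
    """Train a Q-table on a fixed, deterministic toy environment.

    transitions: dict mapping (state, action) -> (next_state, reward, done)
    Returns the learned Q-table as a list of per-state action-value lists.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        done = False
        while not done:
            # Epsilon-greedy action selection (epsilon = 0.1).
            if rng.random() < 0.1:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = transitions[(s, a)]
            # Tabular Q-learning update:
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy chain MDP (hypothetical): states 0 -> 1 -> 2 (terminal).
# Action 1 advances toward the goal; action 0 stays put with a small penalty.
T = {
    (0, 0): (0, -0.1, False), (0, 1): (1, 0.0, False),
    (1, 0): (1, -0.1, False), (1, 1): (2, 1.0, True),
}
Q = q_learning(T, n_states=3, n_actions=2)
```

After training, the greedy policy derived from `Q` advances through the chain in every state, which is the behavior one would want before coupling such an agent to a larger optimization loop such as a detailed placer.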