Academic Paper
LLM-TG: Towards Automated Test Case Generation for Processors Using Large Language Models
Document Type
Conference
Source
2024 IEEE 42nd International Conference on Computer Design (ICCD), pp. 389-396, Nov. 2024
Language
English
ISSN
2576-6996
Abstract
Design verification (DV) has existed for decades and is crucial for identifying potential bugs before chip tape-out. Hand-crafting test cases is time-consuming and error-prone, even for experienced verification engineers. Prior work has attempted to lighten this burden through rule-guided random test case generation. However, this approach does not eliminate the manual effort required to write rules that describe detailed hardware behavior. Motivated by advances in large language models (LLMs), we explore their potential to capture register transfer level (RTL) behavior and to construct prompts for test case generation based on that behavior. First, we introduce a prompt framework, LLM-Driven Test Generation (LLM-TG), to generate test cases, thereby enhancing LLMs' test generation capabilities. Additionally, we provide an open-source prompt library that offers a set of standardized prompts for processor verification, aiming to improve test generation efficiency. Lastly, we use an LLM to verify a 12-stage, multi-issue, out-of-order RV64GC processor, achieving at least an 8.34% increase in block coverage and at least a 5.8% increase in expression coverage compared to the state-of-the-art (SOTA) methods, LLM4DV and RISCV-DV. The prompt library is available at https://github.com/LLM-TGIPrompt_Library.
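The abstract summarizes, but does not specify, how prompts are assembled from RTL behavior. Below is a minimal, hypothetical sketch of what such prompt construction might look like; all template text, field names, and the build_prompt helper are assumptions for illustration, not the paper's actual framework or the contents of its prompt library.

```python
# Illustrative sketch only: assembling an LLM prompt for processor test
# generation from an RTL behavior description. All names and template
# wording here are hypothetical, not taken from the LLM-TG paper.

# A short natural-language description of the RTL behavior under test
# (in practice this would be derived from the design's RTL).
RTL_BEHAVIOR = """\
Module: simple ALU (RV64 subset)
- ADD/SUB on 64-bit operands, result written back in the same cycle
- Overflow is ignored (wrap-around semantics)
"""

# Hypothetical standardized prompt template, in the spirit of a reusable
# prompt library for processor verification.
PROMPT_TEMPLATE = """\
You are a processor verification engineer.
Target ISA: {isa}
RTL behavior under test:
{behavior}
Generate {n} RISC-V assembly test cases that exercise corner cases
(e.g., operand extremes, back-to-back dependent instructions).
Output only valid assembly, one test per labeled block.
"""

def build_prompt(isa: str, behavior: str, n: int) -> str:
    """Fill the template with the target ISA and RTL behavior description."""
    return PROMPT_TEMPLATE.format(isa=isa, behavior=behavior, n=n)

if __name__ == "__main__":
    prompt = build_prompt("RV64GC", RTL_BEHAVIOR, 3)
    print(prompt)  # this string would then be sent to an LLM endpoint
```

The sketch only shows prompt assembly; the LLM call itself, and the simulation loop that measures block and expression coverage on the generated tests, are omitted.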