Academic Journal Article

(Security) Assertions by Large Language Models
Document Type
Periodical
Source
IEEE Transactions on Information Forensics and Security, vol. 19, pp. 4374-4389, 2024
Subject
Signal Processing and Analysis
Computing and Processing
Communication, Networking and Broadcast Technologies
Hardware
Security
Benchmark testing
Codes
Writing
Task analysis
Source coding
Keywords
LLM
AI
hardware
assertion generation
hardware security
vulnerability detection
ChatGPT
Codex
Language
English
ISSN
1556-6013 (print)
1556-6021 (electronic)
Abstract
The security of computer systems typically relies on a hardware root of trust. As vulnerabilities in hardware can have severe implications for a system, there is a need for techniques to support security verification activities. Assertion-based verification is a popular verification technique that involves capturing design intent in a set of assertions that can be used in formal verification or testing-based checking. However, writing security-centric assertions is a challenging task. In this work, we investigate the use of emerging large language models (LLMs) to generate hardware security assertions, where primarily natural-language prompts, such as those one would see as code comments in assertion files, are used to produce SystemVerilog assertions. We focus our attention on a popular LLM and characterize its ability to write assertions out of the box, given varying levels of detail in the prompt. We design an evaluation framework that generates a variety of prompts, and we create a benchmark suite comprising real-world hardware designs and corresponding golden reference assertions that we aim to generate with the LLM.
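To make the prompting setup concrete, the sketch below illustrates the style of input the abstract describes: a natural-language prompt written as a code comment in a SystemVerilog assertion file, followed by the kind of concurrent assertion an LLM might complete from it. This is a hypothetical example, not one of the paper's benchmark designs or golden reference assertions; the module, the signal names (clk, rst_n, locked, write_en), and the property itself are assumptions made for illustration.

// Illustrative sketch only -- not drawn from the paper's benchmark suite.
// In the setup the abstract describes, the comment below acts as the
// prompt, and the LLM completes the assertion that follows it.
module lock_checker (
  input logic clk,
  input logic rst_n,     // active-low reset
  input logic locked,    // register is locked against writes
  input logic write_en   // write strobe to the protected register
);

  // Prompt (as a code comment): assert that the protected register
  // is never written while it is locked.
  no_write_when_locked: assert property (
    @(posedge clk) disable iff (!rst_n)
    locked |-> !write_en
  ) else $error("write attempted while register locked");

endmodule

In the paper's framework, many such prompts at varying levels of detail are generated for each benchmark design, and the LLM's completions are compared against the golden reference assertions.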