VerilogEval: Evaluating Large Language Models for Verilog Code Generation

Mingjie Liu, N. Pinckney, Brucek Khailany, Haoxing Ren

2023 · DOI: 10.48550/arXiv.2309.07544
arXiv.org · 186 Citations

TLDR

A benchmarking framework tailored specifically for evaluating LLM performance in the context of Verilog code generation for hardware design and verification; the authors also demonstrate that the Verilog code generation capability of pretrained language models can be improved with supervised fine-tuning by bootstrapping with LLM-generated synthetic problem-code pairs.

Abstract

The increasing popularity of large language models (LLMs) has paved the way for their application in diverse domains. This paper proposes a benchmarking framework tailored specifically for evaluating LLM performance in the context of Verilog code generation for hardware design and verification. We present a comprehensive evaluation dataset consisting of 156 problems from the Verilog instructional website HDLBits. The evaluation set covers a diverse range of Verilog code generation tasks, from simple combinational circuits to complex finite state machines. The Verilog code completions can be automatically tested for functional correctness by comparing the transient simulation outputs of the generated design with a golden solution. We also demonstrate that the Verilog code generation capability of pretrained language models can be improved with supervised fine-tuning by bootstrapping with LLM-generated synthetic problem-code pairs.
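The abstract describes checking functional correctness by simulating a generated design against a golden solution. Below is a minimal sketch of such a check, assuming Icarus Verilog (`iverilog`/`vvp`) as the simulator and a reference testbench that prints a "Mismatches: 0" line on success; the file names, testbench convention, and output format are illustrative assumptions, not the paper's exact harness.

```python
import os
import subprocess
import tempfile

def check_functional_correctness(completion: str, testbench_path: str,
                                 timeout_s: int = 30) -> bool:
    """Compile a generated Verilog module together with a reference
    testbench and report whether simulation finds zero mismatches.

    Illustrative sketch only: assumes Icarus Verilog (iverilog/vvp) is
    installed and that the testbench reports 'Mismatches: 0' on success.
    """
    with tempfile.TemporaryDirectory() as tmp:
        dut_path = os.path.join(tmp, "dut.sv")
        sim_path = os.path.join(tmp, "sim.out")
        with open(dut_path, "w") as f:
            f.write(completion)

        # Compile the generated design together with the golden testbench.
        compile_cmd = ["iverilog", "-g2012", "-o", sim_path,
                       dut_path, testbench_path]
        if subprocess.run(compile_cmd, capture_output=True).returncode != 0:
            return False  # completion does not compile

        # Run the simulation and scan its output for mismatches.
        try:
            sim = subprocess.run(["vvp", sim_path], capture_output=True,
                                 text=True, timeout=timeout_s)
        except subprocess.TimeoutExpired:
            return False
        return "Mismatches: 0" in sim.stdout

# Example usage (paths and variable names are placeholders):
# passed = check_functional_correctness(llm_output, "prob001_tb.sv")
```

Running every sampled completion through a check of this kind is what makes the benchmark fully automatic: correctness is decided by simulation behavior rather than by textual comparison with a reference implementation.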
