Journal Article

Enhancement and Expansion of the Neural Network-Based Compact Model Using a Binning Method
Document Type
Periodical
Source
IEEE Journal of the Electron Devices Society, vol. 12, pp. 65-73, 2024
Subject
Components, Circuits, Devices and Systems
Engineered Materials, Dielectrics and Plasmas
Mathematical models
Modeling
Data models
Training
Predictive models
Computational modeling
Analytical models
Artificial neural network (ANN)
machine learning (ML)
device modeling
compact model
binning
emerging device
SPICE
Language
English
ISSN
2168-6734
Abstract
The artificial neural network (ANN)-based compact model has significant advantages over physics-based standard compact models such as BSIM-CMG because it can achieve higher accuracy over a wide range of geometric parameters. This makes it particularly suitable for design space exploration and optimization. However, an ANN-based compact model using only one set of model parameters (global-ANN) requires a larger model size to achieve wider coverage and higher accuracy when capturing the unpredictable nonlinearities of emerging devices. This reduces simulation speed, and the resulting trade-off between simulation accuracy, model coverage, and simulation speed makes it difficult to utilize ANN-based compact models in a variety of applications. To solve this problem, we propose the first ANN-based compact modeling flow using a binning method (binning-ANN) and address the training requirements and data-sparsity issues that the binning method can introduce in ANNs. In addition, we develop a bin size optimization guideline for the binning-ANN. As a result, the binning-ANN achieves not only higher accuracy but also much better expandability than existing methods.
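To make the binning idea in the abstract concrete, the sketch below illustrates one plausible reading of a binning-ANN: the geometric design space is partitioned into bins, each bin is served by its own small local network, and inference selects the network whose bin contains the device geometry. All names, bin edges, and the placeholder models are illustrative assumptions, not details taken from the paper.

```python
# Minimal, hypothetical sketch of a binning-ANN lookup. The bin edges over
# gate length L and width W (nm) and the per-bin models are placeholders.
import bisect
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

L_EDGES = [10.0, 20.0, 40.0, 80.0]   # assumed bin boundaries over L
W_EDGES = [50.0, 100.0, 200.0]       # assumed bin boundaries over W


def find_bin(L: float, W: float) -> Tuple[int, int]:
    """Return the (row, col) index of the geometry bin containing (L, W)."""
    i = bisect.bisect_right(L_EDGES, L) - 1
    j = bisect.bisect_right(W_EDGES, W) - 1
    # Clamp to valid bin indices so out-of-range geometries reuse edge bins.
    return (min(max(i, 0), len(L_EDGES) - 2),
            min(max(j, 0), len(W_EDGES) - 2))


@dataclass
class BinningANN:
    """Maps a geometry bin to a small per-bin model.

    Each entry is a callable (Vgs, Vds, L, W) -> Ids that stands in for a
    trained local neural network.
    """
    models: Dict[Tuple[int, int], Callable[[float, float, float, float], float]]

    def ids(self, Vgs: float, Vds: float, L: float, W: float) -> float:
        # Pick the local model for this geometry and evaluate it.
        return self.models[find_bin(L, W)](Vgs, Vds, L, W)


# Usage example with a trivial placeholder model in every bin.
dummy = {(i, j): (lambda Vgs, Vds, L, W: 1e-6 * Vgs * Vds)
         for i in range(len(L_EDGES) - 1) for j in range(len(W_EDGES) - 1)}
model = BinningANN(models=dummy)
print(model.ids(Vgs=0.8, Vds=0.5, L=15.0, W=120.0))
```

Under this reading, each per-bin network only needs to capture the local device behavior, so it can stay much smaller than a single global-ANN covering the whole geometry range, which is consistent with the speed-versus-coverage trade-off the abstract describes.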