Academic Article

Analog RF Circuit Sizing by a Cascade of Shallow Neural Networks
Document Type
Periodical
Source
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 42(12):4391-4401, Dec. 2023
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Radio frequency
Genetic algorithms
Topology
Behavioral sciences
Neural networks
Training
Circuit synthesis
Analog circuits
Deep learning
Design automation
Microelectronics
circuit sizing
deep learning
design automation
genetic algorithms (GAs)
microelectronics
neural networks
radiofrequency (RF)
Language
English
ISSN
0278-0070
1937-4151
Abstract
A deep neural network architecture for the automatic sizing of analog circuit components is proposed, with a focus on radio-frequency (RF) applications in the 2-5-GHz region. It addresses the challenges of the typically small number of examples available for network training and the existence of multiple solutions, some of which have impractical values for integrated-circuit implementation. We address these issues by restricting the learning to one component size at a time, using a cascade of dedicated shallow neural networks (SNNs) in which each network constrains the predictions of the next ones. Moreover, the SNNs are individually tuned by a genetic algorithm for prediction order and accuracy. This reduction of the solution space at each step allows the use of small training sets, and the constraints introduced between SNNs handle component interdependencies. The method is successfully validated on three different types of RF microcircuits: 1) a low-noise amplifier (LNA); 2) a voltage-controlled oscillator (VCO); and 3) a mixer, using 180- and 130-nm CMOS implementations. All the predictions were within 5% of the true values, both at the component and performance levels, and all the responses were obtained in less than 5 s after 4 to 47 min of training on a regular PC. The results show that the proposed method is fast and applicable to arbitrary analog circuit topologies, with no need to retrain the neural networks for each new set of desired circuit performances.
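To illustrate the cascade idea described in the abstract, the following is a minimal sketch, not the authors' implementation: each shallow network predicts one component value from the target performance specifications plus all previously predicted components, so earlier predictions constrain later ones. All names, dimensions, and training data here are hypothetical placeholders, and the fixed prediction order stands in for the genetic-algorithm search the paper describes.

```python
# Hedged sketch of a cascade of shallow neural networks for component sizing.
# Data, dimensions, and the prediction order are hypothetical; the paper tunes
# the order and each network with a genetic algorithm.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: performance specs (e.g., gain, noise figure,
# power) mapped to component values (e.g., widths, bias currents, inductors).
n_samples, n_specs, n_components = 200, 3, 5
X_specs = rng.uniform(0.0, 1.0, size=(n_samples, n_specs))
Y_components = rng.uniform(0.0, 1.0, size=(n_samples, n_components))

# One shallow (single-hidden-layer) network per component, trained in order;
# each later network also sees the components predicted before it.
order = list(range(n_components))
cascade = []
inputs = X_specs
for idx in order:
    snn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    snn.fit(inputs, Y_components[:, idx])
    cascade.append(snn)
    # Append this component's value to the inputs of the following networks.
    inputs = np.hstack([inputs, Y_components[:, [idx]]])

def predict_components(specs):
    """Predict all component values for one set of target specs."""
    feats = np.asarray(specs, dtype=float).reshape(1, -1)
    preds = []
    for snn in cascade:
        y = snn.predict(feats)[0]
        preds.append(y)
        # Each prediction becomes an input (a constraint) for the next network.
        feats = np.hstack([feats, [[y]]])
    return preds

print(predict_components([0.5, 0.2, 0.8]))
```

Because each network solves a one-dimensional regression over an already constrained input space, small training sets suffice, which is the motivation given in the abstract for the cascaded structure.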