Journal Article

A Weight Mapping Strategy for More Fully Exploiting Data in CIM-Based CNN Accelerator
Document Type
Periodical
Source
IEEE Transactions on Circuits and Systems II: Express Briefs, 71(4):2324-2328, Apr. 2024
Subject
Components, Circuits, Devices and Systems
Kernel
Information filters
Convolutional neural networks
Computer architecture
Arrays
Convolution
Energy efficiency
Convolutional neural network
weight mapping method
compute-in-memory
data reuse
array utilization
Language
English
ISSN
1549-7747
1558-3791
Abstract
Compute-in-memory accelerators have been extensively researched to overcome the limitations of the von Neumann architecture. However, the current mapping strategy and dataflow result in inefficient utilization of the array and input data. In this brief, we propose a new mapping method named Squeezemapping that leverages spare space in each array and optimizes the utilization of the input dataset. We employed NeuroSim to simulate the inference of networks of various scales. Experimental results demonstrate that our method achieves 36.51% higher energy efficiency and a 48.15% higher speedup when applied to the large-scale VGG16 model under area constraints.