Academic Paper

XNOR-VSH: A Valley-Spin Hall Effect-Based Compact and Energy-Efficient Synaptic Crossbar Array for Binary Neural Networks
Document Type
Periodical
Source
IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, 9(2):99-107, Dec. 2023
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Magnetic tunneling
Nonvolatile memory
In-memory computing
Energy efficiency
Artificial intelligence
Hall effect
Neural networks
Binary neural networks (BNNs)
edge artificial intelligence (AI)
in-memory computing (IMC)
magnetic tunnel junction (MTJ)
monolayer transition metal dichalcogenide (TMD)
nonvolatile memories (NVMs)
valley-spin Hall (VSH) effect
Language
English
ISSN
2329-9231
Abstract
Binary neural networks (BNNs) have shown immense promise for resource-constrained edge artificial intelligence (AI) platforms. However, prior designs typically either require two bit-cells to encode signed weights, incurring an area overhead, or demand complex peripheral circuitry. In this article, we address this issue by proposing compact and low-power in-memory computing (IMC) of XNOR-based dot products that encodes a signed weight in a single bit-cell. Our approach utilizes the valley-spin Hall (VSH) effect in monolayer tungsten diselenide to design an XNOR bit-cell (named "XNOR-VSH") with differential storage and an access-transistor-less topology. We co-optimize the proposed VSH device and memory array to enable robust in-memory dot-product computations between signed binary inputs and signed binary weights with a sense margin (SM) $> 1~\mu\text{A}$. Our results show that the proposed XNOR-VSH array achieves 4.8%-9.0% and 37%-63% lower IMC latency and energy, respectively, with 49%-64% smaller area, compared to XNOR arrays based on spin-transfer-torque (STT) and spin-orbit-torque (SOT) magnetic random access memory (MRAM). We also present the impact of hardware non-idealities and process variations in XNOR-VSH on system-level accuracy for trained ResNet-18 BNNs using the CIFAR-10 dataset.
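As a quick illustration of the XNOR-based dot product the abstract refers to, the following sketch (Python, not from the paper) shows the standard BNN identity it relies on: when ±1 values are encoded as bits 1/0, a signed binary dot product reduces to a bitwise XNOR followed by a popcount. The function and variable names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the XNOR/popcount identity used in BNN in-memory
# computing. Signed binary values {+1, -1} are packed as bits {1, 0};
# names (xnor_dot, x_bits, w_bits) are hypothetical, for illustration only.

def xnor_dot(x_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {+1, -1} vectors packed as n-bit ints."""
    matches = (~(x_bits ^ w_bits)) & ((1 << n) - 1)  # bitwise XNOR, masked to n bits
    popcount = bin(matches).count("1")               # positions where x and w agree
    return 2 * popcount - n                          # agreements minus disagreements

# Example: x = [+1, -1, +1, +1] -> 0b1011, w = [+1, +1, -1, +1] -> 0b1101
# True dot product: (+1) + (-1) + (-1) + (+1) = 0
assert xnor_dot(0b1011, 0b1101, 4) == 0
```

In the proposed hardware, this XNOR-and-accumulate is performed inside the crossbar array itself (each differential bit-cell contributes a current proportional to one XNOR term, summed along the column), which is what makes the reported sense margin between accumulated current levels the key robustness metric.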