Journal Article

Enabling ImageNet-Scale Deep Learning on MCUs for Accurate and Efficient Inference
Document Type
Periodical
Source
IEEE Internet of Things Journal, 11(7):11471-11479, Apr. 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Memory management
Computational modeling
Internet of Things
Analytical models
Microcontrollers
Brain modeling
Performance evaluation
Artificial Intelligence of Things (AIoT)
convolutional neural networks (CNNs)
deep learning
edge AI
embedded systems
Internet of Things (IoT)
machine learning
microcontrollers (MCUs)
neural networks (NNs)
Tiny Machine Learning (TinyML)
Language
English
ISSN
2327-4662 (Electronic)
2372-2541 (Print)
Abstract
Conventional approaches to Tiny Machine Learning (TinyML) achieve high accuracy by deploying the largest deep learning model, at the highest input resolution, that fits within the size constraints imposed by a microcontroller's (MCU's) fast internal storage and memory. In this article, we perform an in-depth analysis of prior works to show that models derived under these constraints suffer from low accuracy and, surprisingly, high latency. We propose an alternative approach that enables the deployment of efficient models with low inference latency, free from the constraints of internal memory. We take a holistic view of typical MCU architectures and utilize plentiful but slower external memories to relax internal storage and memory constraints. To prevent the lower speed of external memory from impacting inference latency, we build on the TinyOps inference framework, which performs operation partitioning and uses overlays via DMA to hide the cost of external memory accesses. Using insights from our study, we deploy efficient models from the TinyOps design space onto a range of embedded MCUs, achieving record performance on TinyML ImageNet classification with up to 6.7% higher accuracy and 1.4× lower latency compared to state-of-the-art internal memory approaches.
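The core mechanism the abstract alludes to, overlapping computation on one operation partition with a DMA prefetch of the next partition from external memory, can be illustrated with a short double-buffering sketch. This is a minimal illustration only: the names dma_start_read, dma_wait, and compute_partition are hypothetical placeholders, not the actual TinyOps API, and the "DMA" is simulated with memcpy so the sketch compiles and runs on a host machine.

```c
/*
 * Sketch of the DMA-overlay pattern: layer weights live in slow
 * external memory; while the MCU computes on one partition ("tile"),
 * the next tile is prefetched into a second internal-SRAM buffer.
 * All names are hypothetical placeholders, not the TinyOps API.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define TILE_BYTES (16 * 1024)  /* size of one weight partition */
#define NUM_TILES  8            /* partitions in this layer     */

static uint8_t sram_buf[2][TILE_BYTES];              /* double buffer in fast SRAM     */
static uint8_t ext_weights[NUM_TILES * TILE_BYTES];  /* stands in for external memory  */

/* Stand-ins for a real asynchronous DMA engine and a real kernel. */
static void dma_start_read(uint8_t *dst, const uint8_t *src, size_t n) {
    memcpy(dst, src, n);  /* a real DMA would return immediately */
}
static void dma_wait(void) {
    /* a real implementation would block until the transfer completes */
}
static uint32_t compute_partition(const uint8_t *w, size_t n) {
    uint32_t acc = 0;     /* trivial "compute" over the current tile */
    for (size_t i = 0; i < n; i++) acc += w[i];
    return acc;
}

int main(void) {
    uint32_t acc = 0;
    int cur = 0;

    /* Prime the pipeline: fetch the first partition. */
    dma_start_read(sram_buf[cur], &ext_weights[0], TILE_BYTES);
    dma_wait();

    for (int t = 0; t < NUM_TILES; t++) {
        int nxt = cur ^ 1;

        /* Overlap: start fetching tile t+1 before computing on tile t. */
        if (t + 1 < NUM_TILES)
            dma_start_read(sram_buf[nxt],
                           &ext_weights[(size_t)(t + 1) * TILE_BYTES],
                           TILE_BYTES);

        acc += compute_partition(sram_buf[cur], TILE_BYTES);

        if (t + 1 < NUM_TILES)
            dma_wait();   /* ensure the prefetched tile is ready */
        cur = nxt;
    }

    printf("checksum: %u\n", acc);
    return 0;
}
```

On a real MCU the transfer and the compute proceed concurrently, so the external-memory access time is hidden behind useful work; in this host-side sketch memcpy is synchronous, so the overlap is structural rather than temporal.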