Academic paper

A Genetic Tuner for Fixed-skeleton TinyML Models
Document Type
Conference
Source
2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), pp. 556-561, Mar. 2024
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
General Topics for Engineers
Robotics and Control Systems
Signal Processing and Analysis
Computer vision
Tuners
Computational modeling
Conferences
Pipelines
Computer architecture
Transformers
genetic
evolutionary
tinyml
automl
edge
Language
English
ISSN
2766-8576
Abstract
TinyML is a new paradigm of machine learning and inference on tiny devices. In many new-age applications such as the Internet of Things (IoT), robotics, and automotive embedded systems, it is important to process data close to the source for low-latency response, reduced data transfer, and privacy preservation. Despite the growing popularity of TinyML, such projects are difficult to carry out at scale because they depend critically on skilled practitioners who can design accurate and efficient models for very small devices, e.g. microcontrollers. In this paper, we address this shortage of TinyML skills by introducing a framework that automatically generates a tiny machine-learning inference pipeline in a short time. The framework is evaluated on diverse computer vision datasets, for which models with state-of-the-art accuracy and less than 60 KB in size are generated within 20 minutes on average.
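The abstract does not detail the search procedure, but the title's idea of a genetic tuner over a fixed model skeleton under a size budget can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the gene layout (per-layer channel widths and kernel sizes), the estimate_size_kb cost model, and the evaluate_accuracy placeholder are all hypothetical names introduced here for exposition.

# Minimal, illustrative sketch of a genetic tuner over a fixed CNN skeleton.
# Not the paper's implementation: the gene layout, size model, and accuracy
# stub below are assumptions made only for exposition.
import random

SIZE_BUDGET_KB = 60            # target from the abstract: models under 60 KB
CHOICES = {
    "width": [4, 8, 16, 24, 32],   # per-layer channel widths (assumed genes)
    "kernel": [1, 3, 5],           # per-layer kernel sizes (assumed genes)
}
NUM_LAYERS = 4                 # fixed skeleton depth (assumed)

def random_gene():
    """One candidate = per-layer (width, kernel) choices on the fixed skeleton."""
    return [(random.choice(CHOICES["width"]), random.choice(CHOICES["kernel"]))
            for _ in range(NUM_LAYERS)]

def estimate_size_kb(gene):
    """Crude parameter-count proxy for model size (int8 weights assumed)."""
    params, in_ch = 0, 3
    for width, kernel in gene:
        params += in_ch * width * kernel * kernel + width
        in_ch = width
    return params / 1024.0     # 1 byte per int8 parameter

def evaluate_accuracy(gene):
    """Placeholder: in practice, train and evaluate the candidate on the dataset."""
    return random.random()

def fitness(gene):
    """Reward accuracy, penalize candidates that exceed the size budget."""
    size = estimate_size_kb(gene)
    acc = evaluate_accuracy(gene)
    return acc if size <= SIZE_BUDGET_KB else acc - (size - SIZE_BUDGET_KB)

def crossover(a, b):
    cut = random.randrange(1, NUM_LAYERS)
    return a[:cut] + b[cut:]

def mutate(gene, rate=0.2):
    return [(random.choice(CHOICES["width"]), random.choice(CHOICES["kernel"]))
            if random.random() < rate else layer for layer in gene]

def tune(pop_size=20, generations=10):
    population = [random_gene() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = tune()
    print("best gene:", best, f"~{estimate_size_kb(best):.1f} KB")

In practice the accuracy stub would be replaced by (proxy) training on the target dataset, which is where the reported 20-minute average generation time would be spent.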