Academic Paper

ALERT-Transformer: Bridging Asynchronous and Synchronous Machine Learning for Real-Time Event-based Spatio-Temporal Data
Document Type
Working Paper
Source
Proceedings of the 41st International Conference on Machine Learning (ICML 2024), Proceedings of Machine Learning Research 235:48837-48854
Subject
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Machine Learning
Computer Science - Neural and Evolutionary Computing
68T05
I.2.6
I.2.10
I.4.8
I.4.10
D.2.2
D.1.4
Language
English
Abstract
We seek to enable dense machine learning models to perform classic processing of the continuous, ultra-sparse spatiotemporal data generated by event-based sensors. We propose a novel hybrid pipeline composed of asynchronous sensing and synchronous processing that combines several ideas: (1) an embedding based on PointNet models -- the ALERT module -- that can continuously integrate new events and dismiss old ones thanks to a leakage mechanism, (2) a flexible readout of the embedded data that can feed any downstream model with always-up-to-date features at any sampling rate, (3) a patch-based approach inspired by Vision Transformers that exploits input sparsity to optimize the efficiency of the method. These embeddings are then processed by a transformer model trained for object and gesture recognition. Using this approach, we achieve state-of-the-art performance with lower latency than competing methods. We also demonstrate that our asynchronous model can operate at any desired sampling rate.
Comment: Originally published in the Proceedings of Machine Learning Research ICML 2024
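
To make the pipeline concrete, the following is a minimal sketch of the kind of leaky, PointNet-style event embedding with a decoupled readout that the abstract above describes. It is an illustration under stated assumptions, not the authors' implementation: the class name LeakyEventEmbedding, the exponential leak with time constant tau, and the per-patch max pooling are all hypothetical choices introduced here for clarity.

    import torch
    import torch.nn as nn

    class LeakyEventEmbedding(nn.Module):
        """Illustrative ALERT-style module (not the authors' code): a shared
        point-wise MLP embeds each event, and a per-patch feature vector
        integrates new events while older contributions decay over time."""

        def __init__(self, dim: int = 64, tau: float = 0.1, num_patches: int = 16):
            super().__init__()
            # PointNet-style shared MLP over (x, y, t, polarity) events
            self.mlp = nn.Sequential(nn.Linear(4, dim), nn.ReLU(), nn.Linear(dim, dim))
            self.tau = tau  # leak time constant (hypothetical value)
            self.register_buffer("state", torch.zeros(num_patches, dim))
            self.register_buffer("last_t", torch.zeros(num_patches))

        @torch.no_grad()
        def integrate(self, events: torch.Tensor, patch_ids: torch.Tensor) -> None:
            # events: (N, 4) float tensor of (x, y, t, polarity), sorted by t
            # patch_ids: (N,) long tensor mapping each event to a spatial patch
            feats = self.mlp(events)
            for f, p, t in zip(feats, patch_ids, events[:, 2]):
                # Exponentially leak the old patch feature, then fold in the
                # new event with an elementwise max (PointNet-style pooling).
                decay = torch.exp(-(t - self.last_t[p]) / self.tau)
                self.state[p] = torch.maximum(self.state[p] * decay, f)
                self.last_t[p] = t

        def readout(self) -> torch.Tensor:
            # Features stay up to date and can be sampled at any rate,
            # independently of when events arrive.
            return self.state.clone()

A downstream classifier, such as a transformer over the per-patch features, would call readout() on its own clock while integrate() runs as events stream in; this is the asynchronous-sensing / synchronous-processing split the abstract describes.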