Academic Paper

Towards resource-efficient detection-driven processing of multi-stream videos
Document Type
Conference
Source
Proceedings of the 27th Annual International Conference on Mobile Computing and Networking, pp. 843-845
Subject
frame filtering
multi-streaming videos
object detection
real-time processing
Language
English
Abstract
Detection-driven video analytics is resource-hungry, as it depends on running object detectors on video frames. Running an object detection engine (i.e., a deep learning model such as YOLO or EfficientDet) on every frame makes it difficult for video analytics pipelines to achieve real-time processing. In this paper, we leverage selective processing and batching of frames to reduce the overall cost of running detection models on live videos. We discuss several factors that hinder the real-time processing of detection-driven video analytics. We propose a system with configurable knobs and show how to achieve system stability using a Lyapunov-based control strategy. In our setup, heterogeneous edge devices (e.g., mobile phones, cameras) stream videos to a low-resource edge server, where frames are selectively processed in batches, and the detection results are sent to the cloud or back to the edge device for further application-aware processing. Preliminary results on controlling different knobs, such as frame skipping, frame size, and batch size, provide insights into achieving real-time processing of multiple video streams with low overhead and low overall information loss.
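The following is a minimal sketch, not the authors' implementation, of the knob-driven selective processing described in the abstract: a per-stream filter keeps every (skip+1)-th frame and groups the kept frames into fixed-size batches for a single detector invocation per batch. The knob names (`skip`, `scale`, `batch_size`) and the structure of the code are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Any, List


@dataclass
class Knobs:
    skip: int        # drop `skip` frames between processed frames
    scale: float     # resize factor applied before detection (placeholder, not used here)
    batch_size: int  # number of frames per detector invocation


def select_and_batch(frames: List[Any], knobs: Knobs) -> List[List[Any]]:
    """Keep every (skip+1)-th frame and group kept frames into batches."""
    # Frame selection: a real pipeline would also downscale each kept frame by `scale`.
    kept = [f for i, f in enumerate(frames) if i % (knobs.skip + 1) == 0]
    # Batching: the detector runs once per batch instead of once per frame.
    return [kept[i:i + knobs.batch_size] for i in range(0, len(kept), knobs.batch_size)]


if __name__ == "__main__":
    batches = select_and_batch(list(range(30)), Knobs(skip=2, scale=0.5, batch_size=4))
    print(len(batches), "batches from 30 frames")  # 10 kept frames -> 3 batches
```

In the system described in the abstract, a Lyapunov-based controller would adjust these knob values online per stream; the fixed values above are placeholders for illustration.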