Academic paper

ConvNets and AGMM based real-time human detection under fisheye camera for embedded surveillance
Document Type
Conference
Source
2016 International Conference on Information and Communication Technology Convergence (ICTC), pp. 840-845, Oct. 2016
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Fields, Waves and Electromagnetics
Power, Energy and Industry Applications
Signal Processing and Analysis
Transportation
Surveillance
Cameras
Object detection
Training
Adaptation models
Visualization
Real-time systems
Human Detection
Fisheye Camera
Convolutional Neural Networks (ConvNets)
AGMM (Adaptive Gaussian Mixture Model)
Embedded Surveillance
Background Subtraction
Language
English
Abstract
Human detection is an essential task in many applications, especially surveillance systems. Recently, the ConvNets (Convolutional Neural Networks)-based YOLO model has been applied successfully to object detection, including human detection. It is one of the fastest methods for detecting objects directly from the input image. However, compared with state-of-the-art ConvNets-based object detection methods, YOLO-based detection achieves lower accuracy. In this paper, we propose a new YOLO-based real-time human detection method for fisheye cameras in surveillance applications. To improve precision, we replace the original 3-D color input channels of the ConvNets-based YOLO model with 2-D input channels consisting of a grey-level image channel and foreground-background context information extracted by an AGMM (Adaptive Gaussian Mixture Model). Experiments show that, compared with YOLO-based human detection, the proposed method achieves better accuracy and greater robustness to background scene changes without degrading processing speed, so it can be successfully employed in embedded surveillance applications.
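The abstract's central idea is to feed the detector a grey-level channel plus an AGMM foreground mask instead of three color channels. As a rough illustration of that preprocessing step, the sketch below implements a simplified per-pixel adaptive Gaussian mixture in plain NumPy (a Stauffer-Grimson-style model, not the paper's exact AGMM) and stacks its foreground mask with the grayscale frame into a 2-channel array; all class names, parameters, and thresholds here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class SimpleAGMM:
    """Simplified per-pixel adaptive Gaussian mixture background model.
    Illustrative sketch only; not the paper's exact AGMM formulation."""

    def __init__(self, shape, k=3, lr=0.05, var_init=225.0, match_sigma=2.5):
        h, w = shape
        self.k, self.lr = k, lr
        self.var_init, self.match_sigma = var_init, match_sigma
        self.means = np.zeros((h, w, k))
        self.vars = np.full((h, w, k), var_init)
        self.weights = np.full((h, w, k), 1.0 / k)
        self._initialized = False

    def apply(self, gray):
        """Update the model with one grayscale frame and return a binary
        foreground mask (1 = foreground)."""
        gray = gray.astype(np.float64)
        if not self._initialized:
            # Seed the first component with the first frame.
            self.means[..., 0] = gray
            self.weights[...] = 0.0
            self.weights[..., 0] = 1.0
            self._initialized = True
            return np.zeros(gray.shape, dtype=np.uint8)

        diff = np.abs(gray[..., None] - self.means)            # (H, W, K)
        matched = diff < self.match_sigma * np.sqrt(self.vars)
        any_match = matched.any(axis=-1)

        # Best match = the matched component with the highest weight.
        masked_w = np.where(matched, self.weights, -1.0)
        sel = np.eye(self.k, dtype=bool)[masked_w.argmax(axis=-1)] & matched

        # Move weights toward the matched component, then renormalize.
        self.weights += self.lr * (sel.astype(np.float64) - self.weights)
        self.weights /= self.weights.sum(axis=-1, keepdims=True)

        # Adapt mean/variance of the matched component only.
        d = gray[..., None] - self.means
        self.means += np.where(sel, self.lr * d, 0.0)
        self.vars += np.where(sel, self.lr * (d * d - self.vars), 0.0)

        # Unmatched pixels: replace the weakest component with a new Gaussian.
        weakest = np.eye(self.k, dtype=bool)[self.weights.argmin(axis=-1)]
        repl = weakest & ~any_match[..., None]
        self.means = np.where(repl, gray[..., None], self.means)
        self.vars = np.where(repl, self.var_init, self.vars)
        self.weights = np.where(repl, self.lr, self.weights)

        # A pixel with no matching background component is foreground.
        return (~any_match).astype(np.uint8)


def make_two_channel_input(gray, fg_mask):
    """Stack the grey-level image and the foreground mask into the kind of
    2-channel detector input the abstract describes."""
    return np.stack([gray / 255.0, fg_mask.astype(np.float64)], axis=-1)
```

In this sketch a sudden intensity change beyond `match_sigma` standard deviations of every modeled Gaussian marks a pixel as foreground, while the per-pixel mixture slowly adapts to gradual background changes, which is the property the abstract credits for the method's robustness to scene changes.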