Academic Paper

Improved action recognition by combining multiple 2D views in the bag-of-words model
Document Type
Conference
Source
2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 250-255, Aug. 2013
Subject
Computing and Processing
Cameras
Histograms
Support vector machines
Radio frequency
Accuracy
Training
Surveillance
Language
English
Abstract
Action recognition is a hard problem due to the many degrees of freedom of the human body and the movement of its limbs. The problem is especially difficult when only one camera viewpoint is available and when actions involve subtle movements: viewed from the side, for instance, checking one's watch may look very similar to crossing one's arms. In this paper, we investigate how much recognition can be improved when multiple views are available. The novelty is that we explore various combination schemes within the robust and simple bag-of-words (BoW) framework, ranging from early fusion of features to late fusion of multiple classifiers. In new experiments on the publicly available IXMAS dataset, we find that action recognition can already be improved significantly by adding a single extra viewpoint. We demonstrate that the state of the art on this dataset can be improved by 5%, reaching 96.4% accuracy, when multiple views are combined. Cross-view invariance of the BoW pipeline can be improved by 32% with intermediate-level fusion.
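To make the fusion schemes mentioned in the abstract concrete, the sketch below illustrates the two ends of the spectrum in a BoW pipeline: early fusion, which concatenates per-view BoW histograms into one feature vector before classification, and late fusion, which combines the decision scores of per-view classifiers. This is a minimal illustration on synthetic data with an assumed linear SVM; the variable names, codebook size, and classifier settings are assumptions for demonstration, not the authors' exact pipeline (which also considers intermediate-level fusion).

```python
# Illustrative sketch only: early vs. late fusion of per-view bag-of-words
# histograms. Codebook size, classifier, and data are placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_clips, n_views, codebook_size, n_classes = 60, 2, 100, 3

# Synthetic per-view BoW histograms (rows: clips, cols: visual-word counts)
# and synthetic action labels.
X_views = [rng.random((n_clips, codebook_size)) for _ in range(n_views)]
y = rng.integers(0, n_classes, n_clips)

# Early fusion: concatenate the histograms of all views into a single
# feature vector per clip, then train one classifier.
X_early = np.hstack(X_views)
clf_early = LinearSVC().fit(X_early, y)
pred_early = clf_early.predict(X_early)

# Late fusion: train one classifier per view, then average their
# per-class decision scores and pick the highest-scoring class.
clfs = [LinearSVC().fit(Xv, y) for Xv in X_views]
scores = np.mean([clf.decision_function(Xv) for clf, Xv in zip(clfs, X_views)],
                 axis=0)
pred_late = scores.argmax(axis=1)

print(pred_early[:5], pred_late[:5])
```

The design trade-off the abstract explores lies between these extremes: early fusion lets a single classifier exploit correlations across views, while late fusion keeps per-view models independent and only merges their outputs.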