Academic Paper

Weakly Supervised Action Localization by Sparse Temporal Pooling Network
Document Type
Conference
Source
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6752-6761, Jun. 2018
Subject
Computing and Processing
Videos
Proposals
Feature extraction
Task analysis
Convolutional neural networks
Prediction algorithms
Language
English
ISSN
2575-7075
Abstract
We propose a weakly supervised temporal action localization algorithm for untrimmed videos using convolutional neural networks. Our algorithm learns from video-level class labels and predicts temporal intervals of human actions without requiring temporal localization annotations. We design our network to identify a sparse subset of key segments associated with target actions in a video using an attention module, and fuse the key segments through adaptive temporal pooling. Our loss function comprises two terms that minimize the video-level action classification error and enforce sparsity of the segment selection. At inference time, we extract and score temporal proposals using temporal class activations and class-agnostic attentions to estimate the time intervals that correspond to target actions. The proposed algorithm attains state-of-the-art results on the THUMOS14 dataset and outstanding performance on ActivityNet1.3 despite its weak supervision.
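The abstract describes two key ideas: attention-weighted (adaptive) temporal pooling of segment features, and a loss with a classification term plus a sparsity term on the attention weights. The following NumPy sketch illustrates that objective under stated assumptions; the function names, the linear attention/classifier parameters `w_att` and `w_cls`, and the sparsity weight `beta` are all hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over class scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def stpn_loss(features, w_att, w_cls, label, beta=0.1):
    """Hypothetical sketch of the two-term objective described in the
    abstract: a video-level classification loss plus an L1 sparsity
    penalty on class-agnostic per-segment attention weights.

    features : (T, D) array of per-segment features
    w_att    : (D,)   linear attention parameters (assumed form)
    w_cls    : (D, C) linear classifier parameters (assumed form)
    label    : int    video-level class label
    beta     : float  sparsity weight (hypothetical hyperparameter)
    """
    # Class-agnostic attention in [0, 1] for each temporal segment.
    att = 1.0 / (1.0 + np.exp(-(features @ w_att)))            # (T,)
    # Adaptive temporal pooling: attention-weighted feature average.
    pooled = (att[:, None] * features).sum(axis=0) / (att.sum() + 1e-8)
    # Video-level classification loss (cross-entropy).
    probs = softmax(pooled @ w_cls)                            # (C,)
    cls_loss = -np.log(probs[label] + 1e-8)
    # Sparsity term: L1 norm of the attention weights encourages
    # selecting only a sparse subset of key segments.
    sparsity_loss = np.abs(att).sum()
    return cls_loss + beta * sparsity_loss, att
```

Minimizing the sparsity term drives most attention weights toward zero, so only a few "key" segments contribute to the pooled video representation, which is what enables localization from video-level labels alone.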