Journal Article

Multi-Moments in Time: Learning and Interpreting Models for Multi-Action Video Understanding
Document Type
Periodical
Source
IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12):9434-9445, Dec. 2022
Subject
Computing and Processing
Bioengineering
Visualization
Annotations
Training
Analytical models
Three-dimensional displays
Semantics
Convolutional neural networks
Computer vision
machine learning
video
vision and scene understanding
benchmarking
multi-modal recognition
modeling from video
methods of data collection
neural nets
Language
English
ISSN
0162-8828 (print)
1939-3539 (electronic)
2160-9292 (CD-ROM)
Abstract
Videos capture events that typically contain multiple sequential and simultaneous actions, even within a span of only a few seconds. However, most large-scale datasets built to train models for action recognition in video provide only a single label per video. Consequently, models can be incorrectly penalized for classifying actions that are present in a video but not explicitly labeled, and they fail to learn the full spectrum of information present in each video during training. To address this limitation, we present the Multi-Moments in Time dataset (M-MiT), which includes over two million action labels for over one million three-second videos. This multi-label dataset introduces novel challenges for training and analyzing models for multi-action detection. Here, we present baseline results for multi-action recognition using loss functions adapted for long-tail multi-label learning, provide improved methods for visualizing and interpreting models trained for multi-label action detection, and show the strength of transferring models trained on M-MiT to smaller datasets.
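The abstract's mention of loss functions adapted for long-tail multi-label learning can be made concrete with a small sketch. The following is a minimal illustration, not the paper's actual formulation: it assumes a PyTorch setup and an inverse-frequency reweighted binary cross-entropy, with the function name, weighting scheme, and class count all chosen for illustration only.

```python
import torch
import torch.nn as nn

# Multi-label action recognition: each video can have several of C action
# labels active at once, so each class gets an independent sigmoid score
# rather than competing in a single softmax.

def long_tail_bce_loss(logits, targets, class_counts, eps=1e-6):
    """Binary cross-entropy with per-class weights that upweight rare
    (long-tail) classes. `class_counts` holds the number of positive
    labels per class over the training set. Illustrative scheme only;
    the paper's exact loss functions may differ.
    """
    # Inverse-frequency weights, normalized to mean 1 so the overall
    # loss scale stays comparable to unweighted BCE.
    weights = 1.0 / (class_counts.float() + eps)
    weights = weights * (len(weights) / weights.sum())
    bce = nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    )
    return (bce * weights).mean()

# Example: a batch of 4 videos over C action classes (C is illustrative).
C = 100
logits = torch.randn(4, C)                      # raw model outputs
targets = torch.randint(0, 2, (4, C)).float()   # multi-hot labels
class_counts = torch.randint(1, 10_000, (C,))   # positives per class
loss = long_tail_bce_loss(logits, targets, class_counts)
print(loss.item())
```

The per-class sigmoid is what makes the setup multi-label: each action is scored independently, so several actions can be marked present in the same three-second clip without penalizing one another.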