Learning to detect, associate, and recognize human actions and surrounding scenes in untrimmed videos
- Authors
- Park, J.; Lee, J.; Jeon, S.; Kim, S.; Kim, S.; Sohn, K.
- Issue Date
- Oct-2018
- Publisher
- Association for Computing Machinery, Inc
- Keywords
- Action Classification; Scene Classification; Semantic Feature Fusion; Video Classification
- Citation
- CoVieW 2018 - Proceedings of the 1st Workshop and Challenge on Comprehensive Video Understanding in the Wild, co-located with MM 2018, pp. 21-26
- Pages
- 6
- Journal Title
- CoVieW 2018 - Proceedings of the 1st Workshop and Challenge on Comprehensive Video Understanding in the Wild, co-located with MM 2018
- Start Page
- 21
- End Page
- 26
- URI
- https://yscholarhub.yonsei.ac.kr/handle/2021.sw.yonsei/6639
- DOI
- 10.1145/3265987.3265989
- Abstract
- While recognizing human actions and recognizing the surrounding scene address different aspects of video understanding, the two tasks are strongly correlated and can complement each other. In this paper, we propose an approach for joint action and scene recognition, formulated as an end-to-end learning framework based on temporal attention and the fusion of the resulting features. By applying temporal attention modules to a generic feature network, action and scene features are extracted efficiently and then combined into a single feature vector by the proposed fusion module. Our experiments on the CoVieW18 dataset show that our model learns temporal attention with only weak supervision and markedly improves multi-task action and scene classification accuracy. © 2018 Association for Computing Machinery.
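The abstract's pipeline of temporally attending over frame features and then fusing the task-specific results can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the actual attention modules and fusion module are learned end-to-end inside a deep network, whereas here the scoring vectors and the concatenation-based fusion are hypothetical stand-ins chosen only to show the data flow.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention_pool(frame_features, score_vector):
    """Weight frames by relevance and pool over time.

    frame_features: (T, D) array of per-frame features.
    score_vector:   (D,) hypothetical learned scoring vector.
    Returns the (D,) attention-pooled feature and the (T,) weights.
    """
    scores = frame_features @ score_vector      # (T,) per-frame relevance
    alpha = softmax(scores)                     # attention weights sum to 1
    return alpha @ frame_features, alpha        # weighted temporal average

def fuse(action_feature, scene_feature):
    # Stand-in fusion module: concatenate the two pooled features
    # into a single vector for multi-task classification.
    return np.concatenate([action_feature, scene_feature])

# Toy usage: 8 frames with 16-dim features, separate attention per task.
frames = np.random.rand(8, 16)
w_action, w_scene = np.random.rand(16), np.random.rand(16)
act_feat, _ = temporal_attention_pool(frames, w_action)
scn_feat, _ = temporal_attention_pool(frames, w_scene)
joint = fuse(act_feat, scn_feat)                # (32,) fused feature vector
```

The key point the sketch mirrors is that each task gets its own temporal weighting of a shared frame-level representation before the features are merged.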
- Appears in Collections
- College of Engineering > Electrical and Electronic Engineering > 1. Journal Articles