Academic Paper

On summarising the ‘here and now’ of social videos for smart mobile browsing
Document Type
Conference
Source
2014 International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM), pp. 1-5, Nov. 2014
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Signal Processing and Analysis
Videos
Transform coding
Media
Visualization
Social network services
Pipelines
Streaming media
Social Media
Web Harvesting
Video Summarisation
Blur Detection
MPEG Codec
Language
English
Abstract
The amount of media being uploaded to social sites (such as Twitter, Facebook and Instagram) provides a wealth of visual data (images and videos) augmented with additional information such as keywords, timestamps and GPS coordinates. Tapastreet provides access in real time to this visual content by harvesting social networks for visual media associated with particular locations, times and hashtags [1]. Browsing efficiently through harvested videos requires smart processing to give users a quick overview of their content, particularly on mobile platforms with limited bandwidth. This paper presents an architecture for testing several strategies for producing summaries of videos collected on social networks to tackle this issue.