Video redundancy detection in rushes collection

Ren, R., Punitha, P. and Jose, J. (2008) Video redundancy detection in rushes collection. In: 2nd ACM TRECVid Video Summarization Workshop, Vancouver, British Columbia, Canada, 31 Oct 2008, pp. 65-69. ISBN 9781605583099 (doi: 10.1145/1463563.1463574)

Full text not currently available from Enlighten.

Publisher's URL: http://dx.doi.org/10.1145/1463563.1463574

Abstract

A rushes collection consists of raw, unedited video material. It contains various redundancies, such as rainbow screens, clipboard shots, white/black frames, and unnecessary re-takes. This paper develops a set of techniques to remove these video redundancies, together with an effective system for video summarisation. We regard manual editing effects, e.g. clipboard shots, as differentiators in the visual language. A rushes video is therefore divided into a group of subsequences, each of which represents a re-take instance. A graph matching algorithm is proposed to estimate the similarity between re-takes and to suggest the best instance for content presentation. Experiments on the Rushes 2008 collection show that redundancy detection can shorten a video to 4%-16% of its original size. This significantly reduces the complexity of content selection and leads to an effective and efficient video summarisation system.
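The record does not spell out the graph matching formulation, but a minimal sketch of the underlying idea, scoring two re-take instances by an optimal one-to-one matching between their key-frame features, might look like the following. The function name retake_similarity, the use of per-frame feature vectors (e.g. colour histograms), and the cosine-similarity edge weights are illustrative assumptions, not the paper's exact method.

import numpy as np
from scipy.optimize import linear_sum_assignment

def retake_similarity(frames_a, frames_b):
    """Score the similarity of two re-take instances (a sketch, not the paper's method).

    frames_a, frames_b: arrays of shape (n, d) and (m, d), one feature
    vector per key frame. The key frames of the two re-takes form the
    two node sets of a bipartite graph; edge weights are cosine
    similarities, and an optimal one-to-one matching gives the score.
    """
    # Normalise rows so the dot product below is a cosine similarity.
    a = frames_a / np.linalg.norm(frames_a, axis=1, keepdims=True)
    b = frames_b / np.linalg.norm(frames_b, axis=1, keepdims=True)
    sim = a @ b.T                               # pairwise frame similarities
    row, col = linear_sum_assignment(-sim)      # maximise total matched similarity
    return sim[row, col].mean()

Under this sketch, each detected re-take instance would be scored against the others and the instance with the highest aggregate similarity (i.e. the most representative take) retained for the summary, while near-duplicate takes are discarded.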

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Jose, Professor Joemon and Ren, Dr R
Authors: Ren, R., Punitha, P., and Jose, J.
Subjects: Z Bibliography. Library Science. Information Resources > Z665 Library Science. Information Science
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
College/School: College of Science and Engineering > School of Computing Science
ISBN: 9781605583099
