Study of context influence on classifiers trained under different video-document representations

Bermejo, P., Joho, H., Jose, J.M. and Villa, R. (2011) Study of context influence on classifiers trained under different video-document representations. Information Processing and Management, 47(2), pp. 215-226. (doi: 10.1016/j.ipm.2010.05.003)


Abstract

The problem of content-based video retrieval continues to pose a challenge to the research community, with the performance of video retrieval systems remaining low due to the semantic gap. In this paper we consider whether taking advantage of context can aid the video retrieval process by making the prediction of relevance easier: if it is easier for a classification system to predict the relevance of a video shot under a given context, then that context also has potential to improve retrieval, since the underlying features better differentiate relevant from non-relevant video shots. We use an operational definition of context, in which datasets can be split into disjoint sub-collections that each reflect a particular context. The contexts considered include task difficulty and user expertise, among others. In the classification process, four types of features are used to represent video shots: conventional low-level visual features representing physical properties of the shots, behavioral features based on user interaction with the shots, and two different bag-of-words features obtained from Automatic Speech Recognition (ASR) transcripts of the videos' audio. We measure how well each kind of video representation performs and, for each representation, split our datasets into different contexts in order to discover which contexts affect the performance of a number of trained classifiers. We thus aim to discover contexts which improve classifier performance, and whether this improvement is consistent regardless of the kind of representation. Experimental results show which of the tested document representations works best for the different features; following on from this, we identify the contexts under which the classifiers perform better.
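To make the evaluation setup concrete, the following is a minimal sketch of the per-context classification experiment outlined in the abstract, written in Python with scikit-learn. The feature matrices, the context labels ("easy_task"/"hard_task"), the naive Bayes classifier and the accuracy metric are all illustrative assumptions rather than details taken from the paper; the sketch only shows the idea of scoring a classifier per shot representation, first on the whole collection and then within each disjoint context sub-collection.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four shot representations (hypothetical names and dimensions).
n_shots = 400
representations = {
    "visual": rng.normal(size=(n_shots, 64)),      # low-level visual features
    "behavioral": rng.normal(size=(n_shots, 12)),  # user-interaction features
    "asr_bow_a": rng.normal(size=(n_shots, 200)),  # first ASR bag-of-words variant
    "asr_bow_b": rng.normal(size=(n_shots, 200)),  # second ASR bag-of-words variant
}
relevance = rng.integers(0, 2, size=n_shots)                 # relevant / non-relevant labels
context = rng.choice(["easy_task", "hard_task"], n_shots)    # one operational context split

for name, X in representations.items():
    # Baseline: classifier trained and scored on the whole collection.
    base = cross_val_score(GaussianNB(), X, relevance, cv=5).mean()
    print(f"{name:12s} all shots : {base:.3f}")
    # Context-specific: train and score within each disjoint sub-collection.
    for ctx in np.unique(context):
        mask = context == ctx
        score = cross_val_score(GaussianNB(), X[mask], relevance[mask], cv=5).mean()
        print(f"{name:12s} {ctx:10s}: {score:.3f}")

A context would then be considered helpful for a given representation if the within-context scores consistently exceed the whole-collection baseline; with the random data above the scores are, of course, near chance.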

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Jose, Professor Joemon and Joho, Mr Hideo and Villa, Dr Robert
Authors: Bermejo, P., Joho, H., Jose, J.M., and Villa, R.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
College/School: College of Science and Engineering > School of Computing Science
Journal Name: Information Processing and Management
ISSN: 0306-4573
Published Online: 14 July 2010
