A framework for evaluating automatic image annotation algorithms

Athanasakos, K., Stathopoulos, V. and Jose, J. (2010) A framework for evaluating automatic image annotation algorithms. Lecture Notes in Computer Science, 5993, pp. 217-228. (doi:10.1007/978-3-642-12275-0_21)


Publisher's URL: http://dx.doi.org/10.1007/978-3-642-12275-0_21

Abstract

Several Automatic Image Annotation (AIA) algorithms have been introduced recently and reported to outperform previous models. However, each has been evaluated using different descriptors, different collections or parts of collections, or deliberately "easy" settings. This renders their results non-comparable, and we show that collection-specific properties, rather than the models themselves, account for the high reported performance measures. In this paper we introduce a framework for the evaluation of image annotation models, which we use to evaluate two state-of-the-art AIA algorithms. Our findings reveal that a simple Support Vector Machine (SVM) approach using Global MPEG-7 Features outperforms state-of-the-art AIA models across several collection settings. These models appear to depend heavily on the set of features and the data used, and it is easy to exploit collection-specific properties, such as tag popularity, particularly in the commonly used Corel 5K dataset, and still achieve good performance.
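The SVM baseline mentioned in the abstract can be pictured as one binary classifier per tag over global feature vectors. The sketch below is a minimal illustration on synthetic data; the feature dimensionality, tag names, and one-vs-rest setup are assumptions for the example, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical setup: each image is a fixed-length global feature vector
# (standing in for an MPEG-7-style descriptor); each tag gets its own
# binary SVM, a simple one-vs-rest annotation baseline.
rng = np.random.default_rng(0)
n_images, dim = 200, 32
X = rng.normal(size=(n_images, dim))

# Synthetic ground truth: each tag's presence correlates with one feature.
tags = ["sky", "water", "grass"]
Y = {t: (X[:, i] > 0).astype(int) for i, t in enumerate(tags)}

# Train one SVM per tag.
models = {t: SVC(kernel="rbf", probability=True).fit(X, Y[t]) for t in tags}

def annotate(x, top_k=2):
    """Return the top_k tags ranked by SVM confidence for one image."""
    scores = {t: m.predict_proba(x.reshape(1, -1))[0, 1]
              for t, m in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(annotate(X[0]))
```

Ranking tags by per-classifier confidence and keeping the top few mirrors the usual AIA evaluation protocol, where a fixed number of annotations per image is compared against the ground truth.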

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Jose, Professor Joemon and Stathopoulos, Mr Vasileios
Authors: Athanasakos, K., Stathopoulos, V., and Jose, J.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
College/School: College of Science and Engineering > School of Computing Science
Journal Name: Lecture Notes in Computer Science
Publisher: Springer
ISSN: 0302-9743
ISSN (Online): 1611-3349
Copyright Holders: Copyright © 2010 Springer
First Published: First published in Lecture Notes in Computer Science 5993: 217-228
Publisher Policy: Reproduced in accordance with the copyright policy of the publisher. The original publication is available at www.springerlink.com
