Towards a unified visual framework in a binocular active robot vision system

Aragon Camarasa, G., Fattah, H. and Siebert, J.P. (2010) Towards a unified visual framework in a binocular active robot vision system. Robotics and Autonomous Systems, 58(3), pp. 276-286. (doi: 10.1016/j.robot.2009.08.005)

Full text not currently available from Enlighten.

Publisher's URL: http://dx.doi.org/10.1016/j.robot.2009.08.005

Abstract

This paper presents the results of an investigation and pilot study into an active binocular vision system that combines binocular vergence, object recognition and attention control in a unified framework. The prototype developed is capable of identifying, targeting, verging on and recognising objects in a cluttered scene without the need for calibration or other knowledge of the camera geometry. This is achieved by implementing all image analysis in a symbolic space without creating explicit pixel-space maps. The system structure is based on the ‘searchlight metaphor’ of biological systems. We present results of an investigation that yield a maximum vergence error of ~6.5 pixels, while 85% of known objects were recognised in five different cluttered scenes. Finally, a ‘stepping-stone’ visual search strategy was demonstrated, taking a total of 40 saccades to find two known objects in the workspace, neither of which appeared simultaneously within the field of view resulting from any individual saccade.
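The record does not reproduce the paper's control law or feature representation, so the following is only a minimal illustrative sketch of the vergence idea described in the abstract: driving the cameras until the horizontal disparity of features matched between the left and right views falls within a small pixel tolerance (the abstract reports a maximum residual vergence error of ~6.5 pixels). The helpers match_features and move_vergence_axis are hypothetical placeholders for the robot head's feature matcher and motor interface, not functions from the paper.

    import numpy as np

    def vergence_error(left_pts, right_pts):
        """Mean horizontal disparity (in pixels) of features matched
        between the left and right camera images; near zero when the
        cameras are verged on the target."""
        left_pts = np.asarray(left_pts, dtype=float)
        right_pts = np.asarray(right_pts, dtype=float)
        return float(np.mean(left_pts[:, 0] - right_pts[:, 0]))

    def verge(match_features, move_vergence_axis, tolerance=6.5, max_steps=20):
        """Iteratively correct the vergence axis until the residual
        disparity is below `tolerance` pixels or the step budget runs out.

        match_features() -> (left_pts, right_pts) and
        move_vergence_axis(error) are assumed interfaces to the
        stereo feature matcher and the camera vergence motors.
        """
        error = float("inf")
        for _ in range(max_steps):
            left_pts, right_pts = match_features()
            error = vergence_error(left_pts, right_pts)
            if abs(error) < tolerance:
                break
            move_vergence_axis(error)  # proportional correction of the camera angles
        return error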

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Siebert, Dr Paul and Fattah, Mr Haitham and Aragon Camarasa, Dr Gerardo
Authors: Aragon Camarasa, G., Fattah, H. and Siebert, J.P.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
College/School: College of Science and Engineering > School of Computing Science
Journal Name: Robotics and Autonomous Systems
ISSN: 0921-8890