A Biologically Motivated Software Retina for Robotic Sensors Based on Smartphone Technology

Siebert, J. P., Schmidt, A., Aragon-Camarasa, G., Hockings, N. and Wang, X. (2016) A Biologically Motivated Software Retina for Robotic Sensors Based on Smartphone Technology. Workshop on the Architecture of Smart Cameras, Dijon, France, 04-05 Jul 2016.

Publisher's URL: http://eunevis.org/wasc2016/

Abstract

A key issue in designing robotic systems is the cost of an integrated camera sensor that meets the bandwidth/processing requirements of many advanced robotics applications, especially lightweight applications such as visual surveillance or SLAM in autonomous aerial vehicles. Much work is currently under way to adapt smartphones to provide complete robot vision systems, since the phone is so highly integrated, combining camera(s), inertial sensing, sound I/O and excellent wireless connectivity. Mass-market production makes it a very low-cost platform, and manufacturers of products ranging from quadrotor drones to children's toys, such as the Meccanoid robot, employ a smartphone to provide the vision/control system. Accordingly, many research groups are attempting to optimise image analysis, computer vision and machine learning libraries for the smartphone platform. However, current approaches to robot vision remain highly demanding for mobile processors such as the ARM, and while a number of algorithms have been developed, these are very stripped down, i.e. highly compromised in function or performance. For example, the semi-dense visual odometry implementation of [1] operates on images of only 320×240 pixels.

In our research we have been developing biologically motivated foveated vision algorithms, potentially some 100 times more efficient than their conventional counterparts, based on a model of the mammalian retina we have developed. Vision systems based on the foveated architectures found in mammals have the potential to reduce bandwidth and processing requirements by a factor of about 100: it has been estimated that our brains would weigh ~60 kg if we were to process all our visual input at uniform high resolution. We have reported a foveated visual architecture that implements a functional model of the retina-visual cortex to produce feature vectors that can be matched/classified using conventional methods, or could be adapted to employ Deep Convolutional Neural Nets for the classification/interpretation stage [2,3,4].

We are now at the early stages of investigating how best to port our foveated architecture onto a smartphone platform. To achieve the required levels of performance we propose to optimise our retina model for the ARM processors used in smartphones, in conjunction with their integrated GPUs, to provide a foveated smart vision system on a smartphone. Our current goal is a foveated system running in real time to serve as a front-end robot sensor for tasks such as general-purpose object recognition and reliable dense SLAM, using a commercial off-the-shelf smartphone that communicates with conventional hardware performing back-end visual classification/interpretation. We believe that, as in Nature, space-variance is the key to achieving the data reduction needed to implement the complete visual processing chain on the smartphone itself.
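To make the space-variance principle concrete, the minimal Python/NumPy sketch below shows one way a software retina can subsample a conventional image: sampling nodes are laid out on exponentially spaced rings so that density falls off with eccentricity, as in a log-polar tessellation, and each node reads one value from the image. The ring/spoke counts and the function names (make_retina_nodes, sample_retina) are illustrative assumptions for this sketch, not the retina model reported in [2,3,4].

import numpy as np

def make_retina_nodes(n_rings=32, n_spokes=64, r_min=2.0, r_max=120.0):
    """Place nodes on exponentially spaced rings: a constant node count
    per ring gives sampling density that falls off ~1/r, as in a
    log-polar tessellation (layout parameters are illustrative)."""
    radii = np.geomspace(r_min, r_max, n_rings)
    angles = np.linspace(0.0, 2.0 * np.pi, n_spokes, endpoint=False)
    r, a = np.meshgrid(radii, angles)
    return np.stack([r * np.cos(a), r * np.sin(a)], axis=-1).reshape(-1, 2)

def sample_retina(image, cx, cy, nodes):
    """Read one intensity per retina node centred on the fixation point
    (cx, cy). Nearest-pixel lookup is used for simplicity; a fuller
    retina model would average over Gaussian receptive fields."""
    xs = np.clip(np.round(cx + nodes[:, 0]).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(cy + nodes[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]

if __name__ == "__main__":
    img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
    nodes = make_retina_nodes()
    imagette = sample_retina(img, cx=320, cy=240, nodes=nodes)
    print(f"pixels: {img.size}, retina samples: {imagette.size}, "
          f"reduction: ~{img.size / imagette.size:.0f}x")

With these illustrative parameters, a 640×480 frame collapses to 2,048 node responses, a roughly 150-fold reduction, i.e. of the same order as the ~100x figure quoted in the abstract.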

Item Type: Conference or Workshop Item
Status: Published
Refereed: No
Glasgow Author(s) Enlighten ID: Schmidt, Dr Adam and Siebert, Dr Paul and Hockings, Mr Nick and Aragon Camarasa, Dr Gerardo and Wang, Xiaomeng
Authors: Siebert, J. P., Schmidt, A., Aragon-Camarasa, G., Hockings, N. and Wang, X.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
College/School: College of Science and Engineering > School of Computing Science
Research Group: CVAS in IDA
Copyright Holders: Copyright © 2016 The Authors
First Published: First published in Workshop on the Architecture of Smart Cameras 2016
Publisher Policy: Reproduced with the permission of the Authors
