Integrating a Non-Uniformly Sampled Software Retina with a Deep CNN Model

Ozimek, P. and Siebert, J. P. (2017) Integrating a Non-Uniformly Sampled Software Retina with a Deep CNN Model. BMVC 2017 Workshop on Deep Learning on Irregular Domains, London, UK, 07 Sep 2017.

148797.pdf - Published Version



We present a biologically inspired method for pre-processing images applied to CNNs that reduces their memory requirements while increasing their invariance to scale and rotation changes. Our method is based on the mammalian retino-cortical transform: a mapping between a pseudo-randomly tessellated retina model (used to sample an input image) and a CNN. The aim of this first pilot study is to demonstrate a functional retina-integrated CNN implementation, which produced the following results: a network using the full retino-cortical transform yielded an F1 score of 0.80 on a test set during a 4-way classification task, while an identical network not using the proposed method yielded an F1 score of 0.86 on the same task. The method reduced the visual data by ×7, the input data to the CNN by 40% and the number of CNN training epochs by 64%. These results demonstrate the viability of our method and hint at the potential of exploiting functional traits of natural vision systems in CNNs.
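As a rough illustration of the kind of non-uniform, fovea-centred sampling the abstract describes, the sketch below generates pseudo-random retinal nodes whose density falls off with eccentricity and uses them to sample an image. This is a hypothetical minimal stand-in, not the authors' retina model: the node layout (log-uniform radii), the nearest-neighbour sampling, and all function names (`retina_nodes`, `retina_sample`) are assumptions for illustration only.

```python
import numpy as np

def retina_nodes(n_nodes, r_min, r_max, seed=0):
    """Pseudo-random retinal node positions: log-uniform radii give a
    dense 'fovea' near the centre and a sparse periphery."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_nodes)
    r = r_min * (r_max / r_min) ** u          # log-uniform eccentricity
    theta = rng.random(n_nodes) * 2.0 * np.pi  # uniform angle
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

def retina_sample(img, nodes, centre):
    """Nearest-neighbour sample of `img` at each node, offset by `centre`
    (x, y); positions are clipped to the image bounds."""
    h, w = img.shape[:2]
    xs = np.clip(np.round(centre[0] + nodes[:, 0]).astype(int), 0, w - 1)
    ys = np.clip(np.round(centre[1] + nodes[:, 1]).astype(int), 0, h - 1)
    return img[ys, xs]
```

Because far fewer nodes cover the periphery than a uniform grid would, the sampled vector is much smaller than the original image, consistent with the data-reduction figures reported in the abstract.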

Item Type: Conference or Workshop Item
Glasgow Author(s) Enlighten ID: Ozimek, Peter and Siebert, Dr Paul
Authors: Ozimek, P., and Siebert, J. P.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
College/School: College of Science and Engineering > School of Computing Science
Research Group: Computer Vision for Autonomous Systems within IDA
Copyright Holders: Copyright © 2017 The Authors
First Published: First published in BMVC 2017 Workshop on Deep Learning on Irregular Domains
Publisher Policy: Reproduced in accordance with the publisher copyright policy