Assessing Robustness of Image Recognition Models to Changes in the Computational Environment

Louloudakis, N., Gibson, P., Cano, J. and Rajan, A. (2022) Assessing Robustness of Image Recognition Models to Changes in the Computational Environment. NeurIPS ML Safety Workshop (MLSW), New Orleans, USA, 28 Nov - 9 Dec 2022.

312875.pdf - Published Version (890kB)

Abstract

Image recognition tasks typically use deep learning and require enormous processing power, thus relying on hardware accelerators like GPUs and TPUs for fast, timely processing. Failure in real-time image recognition tasks can occur due to incorrect mapping onto hardware accelerators, which may lead to timing uncertainty and incorrect behavior. In addition, the increasing demand for optimal performance has driven progress in optimizing neural network operations, such as operator fusion. Given the increasing use of image recognition in safety-critical applications like autonomous driving and medical imaging, it is imperative to assess the performance impact and effectiveness of such optimizations. In this paper, we conduct a robustness analysis of four popular image recognition models on the ImageNet dataset, assessing the impact of applying compiler optimizations, of converting between deep learning frameworks, and of executing on hardware devices of varying capabilities. Our results indicate output label discrepancies of up to 37% across deep learning framework conversions, and up to 81.8% unexpected performance degradation when compiler optimizations are applied.
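To illustrate the kind of framework-conversion comparison the abstract describes (not the authors' actual evaluation harness), the following minimal sketch converts a PyTorch model to ONNX and counts top-1 label disagreements between the two runtimes. The choice of ResNet-50, ONNX Runtime, the file name, and the helper function are all illustrative assumptions; a real evaluation would use preprocessed ImageNet images rather than a random batch.

```python
import torch
import torchvision.models as models
import onnxruntime as ort
import numpy as np

# Illustrative model choice: a pretrained ResNet-50 in eval mode.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Export to ONNX so a second framework (ONNX Runtime) can execute the same network.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet50.onnx",
                  input_names=["input"], output_names=["output"])

session = ort.InferenceSession("resnet50.onnx", providers=["CPUExecutionProvider"])

def label_discrepancy_rate(images: torch.Tensor) -> float:
    """Fraction of inputs whose top-1 label differs between PyTorch and ONNX Runtime."""
    with torch.no_grad():
        torch_labels = model(images).argmax(dim=1).numpy()
    onnx_logits = session.run(None, {"input": images.numpy()})[0]
    onnx_labels = np.argmax(onnx_logits, axis=1)
    return float(np.mean(torch_labels != onnx_labels))

# Hypothetical batch standing in for preprocessed ImageNet images.
batch = torch.randn(8, 3, 224, 224)
print(f"top-1 label discrepancy: {label_discrepancy_rate(batch):.1%}")
```

The same pattern extends to the paper's other axes of variation: swapping the execution provider (e.g., a GPU provider) or exporting through a compiler stack probes device- and optimization-induced discrepancies rather than conversion-induced ones.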

Item Type: Conference or Workshop Item
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Cano Reyes, Dr Jose and Louloudakis, Mr Nick and Gibson, Perry
Authors: Louloudakis, N., Gibson, P., Cano, J., and Rajan, A.
College/School: College of Science and Engineering > School of Computing Science
Research Group: Glasgow Intelligent Computing Laboratory
Copyright Holders: Copyright © The Author(s) 2022
First Published: First published in ML Safety Workshop, 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Publisher Policy: Reproduced in accordance with the publisher copyright policy
