Hardware Acceleration of Deep Neural Networks on Edge Devices with FPGAs

Haris, J. and Cano, J. (2020) Hardware Acceleration of Deep Neural Networks on Edge Devices with FPGAs. 16th International Summer School on Advanced Computer Architecture and Compilation for High-Performance and Embedded Systems (ACACES), Online, 06-17 Jul 2020.




Deep Neural Networks (DNNs) provide excellent performance in the field of machine learning. With the current trend of technology moving towards more mobile and decentralised processing of data, many industries face the challenge of performing DNN inference on constrained edge devices. Field Programmable Gate Arrays (FPGAs) are reconfigurable semiconductor circuits that are well suited to processing DNNs efficiently through hardware acceleration, as developers can adapt and redesign specialized DNN accelerators for new emergent DNN models. In this work, we design and implement hardware accelerators within the PYNQ Z1 board. Our designs outperform CPU-only inference of MobileNetV1 by 40% for single-thread and 25.4% for dual-thread execution.
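One common reading of the reported gains is a relative reduction in inference latency versus the CPU-only baseline. A minimal sketch of that calculation, using hypothetical latency figures (not taken from the paper) chosen to reproduce the 40% single-thread figure:

```python
def improvement_pct(t_baseline: float, t_accelerated: float) -> float:
    """Relative improvement (%) of an accelerated run over a baseline run,
    interpreting the figures as latency reductions."""
    return (t_baseline - t_accelerated) / t_baseline * 100.0

# Hypothetical example: if CPU-only inference took 100 ms and the FPGA-accelerated
# design took 60 ms, that would correspond to the 40% single-thread improvement.
print(improvement_pct(100.0, 60.0))  # → 40.0
```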

Item Type: Conference or Workshop Item
Glasgow Author(s) Enlighten ID: Cano Reyes, Dr Jose and Haris, Jude
Authors: Haris, J., and Cano, J.
College/School: College of Science and Engineering > School of Computing Science
Copyright Holders: Copyright © 2020 The Authors
Publisher Policy: Reproduced with the permission of the publisher
