Towards Robust 3D Object Recognition with Dense-to-Sparse Deep Domain Adaptation

Murali, P. K., Wang, C., Dahiya, R. and Kaboli, M. (2022) Towards Robust 3D Object Recognition with Dense-to-Sparse Deep Domain Adaptation. In: 2022 IEEE International Conference on Flexible and Printable Sensors and Systems (FLEPS), Vienna, Austria, 10-13 Jul 2022, ISBN 9781665442732 (doi: 10.1109/FLEPS53764.2022.9781490)

268682.pdf - Accepted Version (3MB)

Abstract

Three-dimensional (3D) object recognition is crucial for intelligent autonomous agents, such as autonomous vehicles and robots, to operate effectively in unstructured environments. Most state-of-the-art approaches rely on relatively dense point clouds, and their performance drops significantly on sparse point clouds. Unsupervised domain adaptation makes it possible to minimise the discrepancy between dense and sparse point clouds using only a small amount of unlabelled sparse data, thereby avoiding additional sparse data collection, annotation and retraining costs. In this work, we propose a novel method for point-cloud-based object recognition that achieves performance competitive with state-of-the-art methods on both dense and sparse point clouds while being trained only with dense point clouds.
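
To illustrate the general idea described in the abstract, the following is a minimal sketch of unsupervised dense-to-sparse domain adaptation for point cloud classification: a shared encoder is trained with labels on dense clouds only, while an unsupervised feature-alignment loss reduces the gap to unlabelled sparse clouds. The PointNet-style encoder, the linear-kernel MMD loss, the class count and all hyper-parameters below are illustrative assumptions and do not reproduce the paper's actual architecture or training objective.

# Minimal sketch (assumed components, not the authors' implementation).
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Tiny PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, pts):
        # pts: (B, N, 3); N may differ between dense and sparse domains.
        return self.mlp(pts).max(dim=1).values  # (B, feat_dim)

def mmd_loss(a, b):
    """Simple linear-kernel MMD between two batches of features."""
    return (a.mean(dim=0) - b.mean(dim=0)).pow(2).sum()

encoder = PointEncoder()
classifier = nn.Linear(256, 10)  # 10 object classes, assumed for illustration
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)
ce = nn.CrossEntropyLoss()

# Dummy batches: labelled dense clouds (2048 points) and unlabelled sparse clouds (128 points).
dense_pts, dense_lbl = torch.randn(8, 2048, 3), torch.randint(0, 10, (8,))
sparse_pts = torch.randn(8, 128, 3)

for step in range(5):
    f_dense = encoder(dense_pts)
    f_sparse = encoder(sparse_pts)
    # Supervised loss on dense data only, plus unsupervised feature alignment.
    loss = ce(classifier(f_dense), dense_lbl) + 0.1 * mmd_loss(f_dense, f_sparse)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: loss={loss.item():.3f}")

The alignment weight (0.1 here) trades off classification accuracy on the labelled dense domain against feature-distribution matching with the unlabelled sparse domain; in practice it would be tuned on a validation set.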

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Dahiya, Professor Ravinder and Murali, Prajval Kumar
Authors: Murali, P. K., Wang, C., Dahiya, R., and Kaboli, M.
College/School: College of Science and Engineering > School of Engineering > Electronics and Nanoscale Engineering
ISBN: 9781665442732
Copyright Holders: Copyright © 2022 IEEE
First Published: First published in 2022 IEEE International Conference on Flexible and Printable Sensors and Systems (FLEPS)
Publisher Policy: Reproduced in accordance with the publisher copyright policy
Related URLs:
