Duan, L. and Aragon-Camarasa, G. (2022) A continuous robot vision approach for predicting shapes and visually perceived weights of garments. IEEE Robotics and Automation Letters, 7(3), pp. 7950-7957. (doi: 10.1109/LRA.2022.3186747)
Text: 275088.pdf - Accepted Version (1MB)
Abstract
We present a continuous perception approach that learns geometric and physical similarities between garments by continuously observing a garment while a robot picks it up from a table. The aim is to capture and encode the geometric and physical characteristics of a garment into a manifold where a decision can be carried out, such as predicting the garment’s shape class and its visually perceived weight. Our approach features an early-stop strategy, meaning that a robot does not need to observe a full video sequence of a garment being picked up from a crumpled to a hanging state before making a prediction, taking 8 seconds on average to classify garment shapes. In our experiments, we find that our approach achieves prediction accuracies of 93% for shape classification and 98.5% for weight prediction, improving on state-of-the-art approaches to similar robotic perception tasks by 22% for shape classification.
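The early-stop strategy in the abstract suggests a simple control flow: classify each frame of the pick-up video as it arrives, accumulate the per-frame class probabilities, and stop as soon as the running prediction is confident enough. The sketch below is a hypothetical illustration of that idea, not the authors' implementation; the placeholder class labels, the 0.9 threshold, and the `predict_fn` callable are all assumptions introduced here.

```python
import numpy as np

# Minimal sketch (assumed, not the paper's code) of an early-stop
# classification loop: average per-frame class probabilities and stop
# observing once the running prediction is sufficiently confident.

SHAPE_CLASSES = ["class_0", "class_1", "class_2"]  # placeholder labels
CONFIDENCE_THRESHOLD = 0.9                         # assumed stopping criterion


def classify_early_stop(frames, predict_fn):
    """Predict a garment shape, stopping early once confidence is high.

    frames:     iterable of video frames (e.g. numpy arrays)
    predict_fn: callable mapping a frame to per-class probabilities
    """
    accumulated = np.zeros(len(SHAPE_CLASSES))
    seen = 0
    for frame in frames:
        seen += 1
        accumulated += predict_fn(frame)   # per-frame class probabilities
        running = accumulated / seen       # average over frames seen so far
        if running.max() >= CONFIDENCE_THRESHOLD:
            break                          # confident: stop watching early
    return SHAPE_CLASSES[int(accumulated.argmax())], seen
```

Averaging probabilities over the frames seen so far, rather than trusting any single frame, is one plausible way to exploit the continuous observation the paper describes: early crumpled-state frames are ambiguous, and the running average only crosses the threshold once later, more informative frames agree.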
| Item Type: | Articles |
|---|---|
| Keywords: | Artificial Intelligence, Control and Optimization, Computer Science Applications, Computer Vision and Pattern Recognition, Mechanical Engineering, Human-Computer Interaction, Biomedical Engineering, Control and Systems Engineering |
| Status: | Published |
| Refereed: | Yes |
| Glasgow Author(s) Enlighten ID: | Duan, Mr Li and Aragon-Camarasa, Dr Gerardo |
| Authors: | Duan, L. and Aragon-Camarasa, G. |
| College/School: | College of Science and Engineering > School of Computing Science |
| Journal Name: | IEEE Robotics and Automation Letters |
| Publisher: | IEEE |
| ISSN: | 2377-3766 |
| ISSN (Online): | 2377-3766 |
| Published Online: | 27 June 2022 |
| Copyright Holders: | Copyright © 2022 IEEE |
| First Published: | First published in IEEE Robotics and Automation Letters 7(3): 7950-7957 |
| Publisher Policy: | Reproduced in accordance with the publisher's copyright policy |