RoboLLM: Robotic Vision Tasks Grounded on Multimodal Large Language Models

Long, Z., Killick, G., Mccreadie, R. and Aragon-Camarasa, G. (2024) RoboLLM: Robotic Vision Tasks Grounded on Multimodal Large Language Models. In: 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), 13-17 May 2024, Yokohama, Japan. (Accepted for Publication)

320032.pdf - Accepted Version (1MB). Restricted to Repository staff only. Available under a Creative Commons Attribution license.

Item Type: Conference Proceedings
Additional Information: This research has been supported by EPSRC Grant No. EP/S019472/1.
Status: Accepted for Publication
Refereed: Yes
Glasgow Author(s) Enlighten ID: Mccreadie, Dr Richard; Long, Zijun; Aragon Camarasa, Dr Gerardo; Killick, George
Authors: Long, Z., Killick, G., Mccreadie, R., and Aragon-Camarasa, G.
College/School: College of Science and Engineering > School of Computing Science
Related URLs:


Project Code: 303747
Award No:
Project Name: Digital-Chemical-Robotics for Translation of Code to Molecules and Complex Chemical Systems
Principal Investigator: Leroy Cronin
Funder's Name: Engineering and Physical Sciences Research Council (EPSRC)
Funder Ref: EP/S019472/1
Lead Dept: Chemistry