Hong, Y., Dai, H. and Ding, Y. (2022) Cross-Modality Knowledge Distillation network for monocular 3D object detection. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G. M. and Hassner, T. (eds.) Computer Vision – ECCV 2022. Series: Lecture notes in computer science, 13670. Springer, pp. 87-104. ISBN 9783031200809 (doi: 10.1007/978-3-031-20080-9_6)
Abstract
Leveraging LiDAR-based detectors or real LiDAR point data to guide monocular 3D detection has brought significant improvements, e.g., in Pseudo-LiDAR methods. However, existing methods usually apply non-end-to-end training strategies and under-exploit the LiDAR information, leaving the rich potential of the LiDAR data untapped. In this paper, we propose the Cross-Modality Knowledge Distillation (CMKD) network for monocular 3D detection to efficiently and directly transfer knowledge from the LiDAR modality to the image modality on both features and responses. Moreover, we extend CMKD into a semi-supervised training framework by distilling knowledge from large-scale unlabeled data, which significantly boosts performance. As of submission, CMKD ranks first among published monocular 3D detectors on both the KITTI test set and the Waymo val set, with significant performance gains over previous state-of-the-art methods. Our code will be released at https://github.com/Cc-Hy/CMKD.
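The abstract describes distillation at two levels: features (the student imitates the teacher's intermediate representations) and responses (the student matches the teacher's softened output distribution). As a rough, hedged NumPy sketch of what such a two-part objective can look like in general knowledge distillation — function names, shapes, and the temperature value are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, t=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = x / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def feature_distill_loss(student_feat, teacher_feat):
    # Feature-level term: MSE between the image-branch student features
    # and the LiDAR-branch teacher features (hypothetical shapes).
    return float(np.mean((student_feat - teacher_feat) ** 2))

def response_distill_loss(student_logits, teacher_logits, t=2.0):
    # Response-level term: KL divergence between softened teacher and
    # student class distributions, scaled by t^2 as is conventional.
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    kl = np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8)), axis=-1)
    return float(np.mean(kl) * t * t)
```

The total training loss would combine both terms with the supervised detection loss, with weights tuned per dataset; the unlabeled-data extension applies the same two distillation terms without the supervised term.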
| Item Type: | Book Sections |
|---|---|
| Additional Information: | Print ISBN: 9783031200793 |
| Status: | Published |
| Refereed: | Yes |
| Glasgow Author(s) Enlighten ID: | Dai, Dr Hang |
| Authors: | Hong, Y., Dai, H., and Ding, Y. |
| College/School: | College of Science and Engineering > School of Computing Science |
| Publisher: | Springer |
| ISSN: | 0302-9743 |
| ISBN: | 9783031200809 |
| Related URLs: | |