Self-distillation for robust LiDAR semantic segmentation in autonomous driving

Li, J., Dai, H. and Ding, Y. (2022) Self-distillation for robust LiDAR semantic segmentation in autonomous driving. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G. M. and Hassner, T. (eds.) Computer Vision – ECCV 2022. Series: Lecture notes in computer science, 13688. Springer, pp. 659-676. ISBN 9783031198151 (doi: 10.1007/978-3-031-19815-1_38)

Full text not currently available from Enlighten.


We propose a new and effective self-distillation framework with our new Test-Time Augmentation (TTA) and Transformer-based Voxel Feature Encoder (TransVFE) for robust LiDAR semantic segmentation in autonomous driving, where robustness is mission-critical but usually neglected. The proposed framework distils knowledge from a teacher model instance to a student model instance, where the two instances share the same network architecture and learn and evolve jointly. This requires the teacher to remain strong as training progresses. Our TTA strategy effectively reduces the uncertainty in the teacher's inference stage, so we equip the teacher with TTA to provide privileged guidance, while the student continuously updates the teacher with the better network parameters it learns. To further strengthen the teacher, we propose TransVFE, which improves point cloud encoding by modelling and preserving the local relationships among the points inside each voxel via multi-head attention. The proposed modules are designed to be instantiated with different backbones. Evaluations on the SemanticKITTI and nuScenes datasets show that our method achieves state-of-the-art performance. Our code is publicly available at
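The teacher–student interplay described in the abstract can be sketched minimally as follows. This is not the authors' implementation; the linear "model", the flip-based augmentations, and the direct weight copy-back are illustrative assumptions standing in for the paper's actual network, TTA strategy, and update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict(W, points):
    # Hypothetical one-layer "segmentation head": per-point class probabilities.
    return softmax(points @ W)

def tta_predict(W, points, flips=((1, 1), (-1, 1), (1, -1), (-1, -1))):
    # Average teacher predictions over simple x/y flip augmentations,
    # reducing inference-time uncertainty (the role TTA plays for the teacher).
    probs = [predict(W, points * np.array(f)) for f in flips]
    return np.mean(probs, axis=0)

# Teacher and student share the same architecture (here, one weight matrix).
student_W = rng.normal(size=(2, 3))
teacher_W = student_W.copy()

points = rng.normal(size=(5, 2))            # 5 points, 2 features each
teacher_soft = tta_predict(teacher_W, points)  # privileged TTA guidance
student_soft = predict(student_W, points)

# Distillation loss: KL(teacher || student), driving the student toward
# the teacher's TTA-smoothed predictions.
kl = np.sum(teacher_soft * (np.log(teacher_soft) - np.log(student_soft)),
            axis=1).mean()

# In turn, the student hands its improved parameters back to the teacher.
teacher_W = student_W.copy()
```

In a real training loop the KL term would be combined with the usual segmentation loss and minimised by gradient descent; the sketch only shows the direction of knowledge flow.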
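The TransVFE idea of modelling local relationships among the points inside a voxel via multi-head attention can likewise be sketched. The dimensions, projection matrices, and max-pooling readout below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """Plain multi-head self-attention over the points of one voxel."""
    n, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    out = np.empty_like(X)
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        # Scaled dot-product attention per head: each point attends to
        # every other point in the same voxel.
        att = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh), axis=-1)
        out[:, s] = att @ V[:, s]
    return out

# Hypothetical voxel containing 6 points with 8-d features.
X = rng.normal(size=(6, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))

attended = multi_head_self_attention(X, Wq, Wk, Wv, n_heads=2)
voxel_feature = attended.max(axis=0)  # pool per-point features into one voxel feature
```

The attention step is what preserves point-to-point relationships inside the voxel before pooling, in contrast to plain per-point MLP encoders that pool immediately.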

Item Type: Book Section
Additional Information: Print ISBN: 9783031198144
Glasgow Author(s) Enlighten ID: Dai, Dr Hang
Authors: Li, J., Dai, H., and Ding, Y.
College/School: College of Science and Engineering > School of Computing Science
