Revisiting Parameter-Efficient Tuning: Are We Really There Yet?

Chen, G., Liu, F., Meng, Z. and Liang, S. (2022) Revisiting Parameter-Efficient Tuning: Are We Really There Yet? In: 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022), Abu Dhabi, 7–11 Dec 2022, pp. 2612–2626.

281955.pdf - Published Version
Available under License Creative Commons Attribution.


Parameter-Efficient Tuning (PETuning) methods have been deemed by many as the new paradigm for using pretrained language models (PLMs). By tuning only a fraction of the parameters compared to full-model finetuning, PETuning methods claim to achieve performance on par with, or even better than, finetuning. In this work, we take a step back and re-examine these PETuning methods by conducting the first comprehensive investigation into their training and evaluation. We find that the problematic validation and testing practices in current studies, combined with the inherent instability of PETuning methods, have led to unreliable conclusions. When compared under a truly fair evaluation protocol, PETuning does not yield consistently competitive performance, while finetuning remains the best-performing method in medium- and high-resource settings. We delve deeper into the causes of this instability and observe that the number of trainable parameters and the number of training iterations are two main factors: reducing trainable parameters and prolonging training may lead to higher stability in PETuning methods.
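
To make the two central notions concrete, the sketch below (illustrative, not the paper's released code) shows what the abstract contrasts: a small bottleneck adapter trained on top of a frozen backbone, and a multi-seed loop of the kind a fair evaluation protocol requires. The tiny linear "backbone" stands in for a real PLM block, and all names are our own.

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
        def __init__(self, d_model: int, bottleneck: int = 16):
            super().__init__()
            self.down = nn.Linear(d_model, bottleneck)
            self.up = nn.Linear(bottleneck, d_model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.up(torch.relu(self.down(x)))

    class PETuningLayer(nn.Module):
        """A frozen block (stand-in for a PLM layer) with a trainable adapter."""
        def __init__(self, d_model: int = 768):
            super().__init__()
            self.backbone = nn.Linear(d_model, d_model)
            for p in self.backbone.parameters():
                p.requires_grad = False          # freeze the pretrained weights
            self.adapter = Adapter(d_model)      # the only trainable parameters

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.adapter(self.backbone(x))

    def trainable_fraction(model: nn.Module) -> float:
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        total = sum(p.numel() for p in model.parameters())
        return trainable / total

    # A fair protocol trains over several random seeds, selects checkpoints on
    # a held-out dev set, and reports the mean and spread on test, rather than
    # a single (possibly lucky) run.
    for seed in (13, 42, 87):
        torch.manual_seed(seed)
        layer = PETuningLayer()
        print(f"seed={seed}: {100 * trainable_fraction(layer):.2f}% of parameters trainable")
        # ... train on the train split, select on dev, report on test ...

With these dimensions, only about 4% of the layer's parameters are trainable, which is the regime in which the paper observes seed-to-seed instability.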

Item Type: Conference Proceedings
Glasgow Author(s) Enlighten ID: Chen, Guanzheng and Meng, Dr Zaiqiao
Authors: Chen, G., Liu, F., Meng, Z., and Liang, S.
College/School: College of Science and Engineering
College of Science and Engineering > School of Computing Science
Copyright Holders: Copyright © 2022 Association for Computational Linguistics
First Published: First published in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2612–2626, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Publisher Policy: Reproduced under a Creative Commons license
