A two-stage fine-tuning method for low-resource cross-lingual summarization.
Zhang, Kaixiong; Zhang, Yongbing; Yu, Zhengtao; Huang, Yuxin; Tan, Kaiwen.
Affiliation
  • Zhang K; Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.
  • Zhang Y; Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China.
  • Yu Z; Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.
  • Huang Y; Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China.
  • Tan K; Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China.
Math Biosci Eng ; 21(1): 1125-1143, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38303457
ABSTRACT
Cross-lingual summarization (CLS) is the task of condensing lengthy source-language text into a concise summary in a target language. This presents a dual challenge, demanding both cross-language semantic understanding (i.e., semantic alignment) and effective information compression. Traditionally, researchers have tackled these challenges with two types of methods: pipeline methods (e.g., translate-then-summarize) and end-to-end methods. The former is intuitive but prone to error propagation, particularly for low-resource languages. The latter has shown impressive performance, owing to multilingual pre-trained models (mPTMs). However, mPTMs (e.g., mBART) are primarily trained on resource-rich languages, which limits their semantic alignment capabilities for low-resource languages. To address these issues, this paper integrates the intuitiveness of pipeline methods with the effectiveness of mPTMs and proposes a two-stage fine-tuning method for low-resource cross-lingual summarization (TFLCLS). In the first stage, recognizing the deficiency of mPTMs in semantic alignment for low-resource languages, a semantic alignment fine-tuning method is employed to enhance the mPTMs' understanding of such languages. In the second stage, considering that mPTMs are not originally tailored for information compression and that CLS requires the model to align and compress simultaneously, an adaptive joint fine-tuning method is introduced; it further enhances the semantic alignment and information compression abilities of the mPTMs trained in the first stage. To evaluate TFLCLS, a low-resource CLS dataset named Vi2ZhLow is constructed from scratch; moreover, two additional low-resource CLS datasets, En2ZhLow and Zh2EnLow, are synthesized from widely used large-scale CLS datasets. Experimental results show that TFLCLS outperforms state-of-the-art methods by 18.88%, 12.71% and 16.91% in ROUGE-2 on the three datasets, respectively, even when limited to only 5,000 training samples.
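The two-stage recipe described above can be illustrated with a rough sketch using mBART from HuggingFace Transformers: stage 1 fine-tunes on sentence-level translation pairs for semantic alignment, and stage 2 mixes alignment batches with cross-lingual summarization batches. The toy Vietnamese-Chinese examples, the decaying mixing probability, and the hyperparameters below are illustrative assumptions, not the authors' exact TFLCLS implementation.

    import torch
    from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

    # mBART-50 is used here as a stand-in for the mPTM described in the paper.
    model_name = "facebook/mbart-large-50"
    tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="vi_VN", tgt_lang="zh_CN")
    model = MBartForConditionalGeneration.from_pretrained(model_name)
    model.train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

    # Toy stand-ins for the real corpora: parallel sentences for alignment,
    # and document/summary pairs for cross-lingual summarization.
    translation_pairs = [("Xin chào thế giới.", "你好，世界。")]
    cls_pairs = [("Một bài báo dài về học máy ...", "一篇关于机器学习的简短摘要。")]

    def step(src_text, tgt_text):
        """One gradient step on a single (source, target) text pair."""
        batch = tokenizer(src_text, text_target=tgt_text,
                          truncation=True, padding=True, return_tensors="pt")
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return loss.item()

    # Stage 1: semantic-alignment fine-tuning on parallel sentences, so the
    # model learns Vi->Zh alignment before it ever sees a summary.
    for src, tgt in translation_pairs:
        step(src, tgt)

    # Stage 2: adaptive joint fine-tuning. Mix alignment batches with CLS
    # batches; the decaying probability is an assumed stand-in for the
    # paper's adaptive schedule, not its exact formulation.
    for epoch in range(3):
        align_prob = max(0.0, 0.5 - 0.2 * epoch)
        for (doc, summary), (src, tgt) in zip(cls_pairs, translation_pairs):
            if torch.rand(1).item() < align_prob:
                step(src, tgt)          # keep reinforcing cross-lingual alignment
            else:
                step(doc, summary)      # compress document into target-language summary

The key design point the sketch tries to capture is that alignment supervision is not dropped after stage 1; it is gradually phased out during joint fine-tuning so the summarization objective does not erase the alignment learned for the low-resource language.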
Full text: 1 Database: MEDLINE Language: English Journal: Math Biosci Eng Year: 2024 Document type: Article Country of affiliation: China