HC2L: Hybrid and Cooperative Contrastive Learning for Cross-lingual Spoken Language Understanding.
IEEE Trans Pattern Anal Mach Intell; PP, 2024 May 20.
Article | En | MEDLINE | ID: mdl-38768000
ABSTRACT
The state-of-the-art model for zero-shot cross-lingual spoken language understanding performs cross-lingual unsupervised contrastive learning to achieve label-agnostic semantic alignment between each utterance and its code-switched counterpart. However, it ignores the valuable intent/slot labels, whose information could help capture label-aware semantic structure and enable supervised contrastive learning to improve the semantics of both source and target languages. In this paper, we propose Hybrid and Cooperative Contrastive Learning to address this problem. Besides cross-lingual unsupervised contrastive learning, we design a holistic approach that exploits source-language supervised contrastive learning, cross-lingual supervised contrastive learning, and multilingual supervised contrastive learning to perform label-aware semantic alignment in a comprehensive manner. Each supervised contrastive learning mechanism covers both single-task and joint-task scenarios. In our model, the input of each contrastive learning mechanism is enhanced by the others; thus the four contrastive learning mechanisms cooperate to learn increasingly consistent and discriminative representations in a virtuous cycle during training. Experiments show that our model achieves consistent improvements across 9 languages, establishing new state-of-the-art performance.
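The label-aware alignment described in the abstract builds on supervised contrastive learning, where samples sharing an intent/slot label are pulled together and others pushed apart. The following is a minimal illustrative sketch of a generic supervised contrastive loss; the function name, the temperature value, and the NumPy formulation are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Generic label-aware (SupCon-style) contrastive loss sketch.

    For each anchor, samples with the same label act as positives;
    all other samples in the batch act as the softmax denominator.
    """
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # positives: other samples sharing the anchor's label
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        others = [j for j in range(n) if j != i]
        # log of the softmax denominator over all non-anchor samples
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        loss += -np.mean([sim[i, j] - log_denom for j in pos])
        count += 1
    return loss / max(count, 1)

# A batch where same-label utterances are close should score a lower
# loss than the same batch with mismatched labels.
emb = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
aligned = supervised_contrastive_loss(emb, [0, 0, 1, 1])
shuffled = supervised_contrastive_loss(emb, [0, 1, 0, 1])
```

In the paper's hybrid setup, variants of such a loss would be applied over source-language, cross-lingual, and multilingual positive pairs; the sketch above shows only the shared core mechanism.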
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Language: En
Journal: IEEE Trans Pattern Anal Mach Intell / IEEE transactions on pattern analysis and machine intelligence (Online)
Journal subject: MEDICAL INFORMATICS
Year: 2024
Document type: Article
Country of publication: United States