Description-Enhanced Label Embedding Contrastive Learning for Text Classification.
IEEE Trans Neural Netw Learn Syst; PP, 2023 Jun 16.
Article in En | MEDLINE | ID: mdl-37327102
Text classification is one of the fundamental tasks in natural language processing: an agent must determine the most appropriate category for an input sentence. Recently, deep neural networks, especially pretrained language models (PLMs), have achieved impressive performance in this area. These methods usually concentrate on input sentences and the generation of the corresponding semantic embeddings. For another essential component, the labels, however, most existing works either treat them as meaningless one-hot vectors or learn label representations with vanilla embedding methods during training, underestimating the semantic information and guidance that labels carry. To alleviate this problem and better exploit label information, in this article we employ self-supervised learning (SSL) in the model learning process and design a novel self-supervised relation-of-relation (R²) classification task that exploits labels from a one-hot perspective. We then propose a novel model for text classification in which text classification and R² classification are joint optimization targets, and a triplet loss is employed to enhance the analysis of the differences and connections among labels. Moreover, since one-hot usage still falls short of fully exploiting label information, we incorporate external knowledge from WordNet to obtain multi-aspect descriptions for label semantic learning, and extend the model with a description-enhanced label embedding. One step further, since these fine-grained descriptions may introduce unexpected noise, we develop a mutual interaction module that, based on contrastive learning (CL), simultaneously selects appropriate parts from input sentences and labels to mitigate this noise. Extensive experiments on different text classification tasks show that the proposed model effectively improves classification performance and that the description-enhanced extension makes better use of label information, improving performance further.
As a byproduct, we have released the code to facilitate further research.
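The abstract mentions two loss terms: a triplet loss that structures the label space, and a contrastive (CL) objective that aligns sentence representations with label-description representations while suppressing noise. The record does not give the paper's actual formulation, so the following is a minimal, hypothetical sketch in plain Python of what such loss terms typically look like; all function names, vectors, and hyperparameters (margin, temperature) are illustrative assumptions, not the authors' implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two plain-list vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def triplet_loss(anchor, positive, negative, margin=0.5):
    # Pull the anchor label embedding closer to a related label
    # (positive) than to an unrelated one (negative), by at least
    # the margin. Distances are cosine distances (1 - similarity).
    d_pos = 1.0 - cosine(anchor, positive)
    d_neg = 1.0 - cosine(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

def contrastive_loss(sentence, label, distractors, temperature=0.1):
    # InfoNCE-style objective: treat the label-description embedding
    # as the positive and other labels as negatives, so the sentence
    # embedding is pulled toward its own label and pushed from the rest.
    sims = [cosine(sentence, label)] + [cosine(sentence, d) for d in distractors]
    logits = [s / temperature for s in sims]
    m = max(logits)  # stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)
```

In a real training loop these two terms would be summed (possibly weighted) with the cross-entropy losses of the text classification and R² classification heads; the weighting scheme is not specified in this record.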
Full text:
1
Collections:
01-internacional
Database:
MEDLINE
Language:
En
Year of publication:
2023
Document type:
Article