Results 1 - 4 of 4
1.
Entropy (Basel) ; 26(4)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38667879

ABSTRACT

In social networks, unexpected events rapidly catalyze the widespread dissemination and further evolution of public opinion. Zero-shot stance detection closely matches the conditions of stance detection in today's digital age, where the absence of training examples for specific targets poses significant challenges. The task requires models with robust generalization that can discern target-related, transferable stance features in the training data. Recent advances in prompt-based learning have shown notable efficacy in few-shot text classification. However, such methods typically apply a uniform prompt pattern to all instances, overlooking the intricate relationship between prompts and instances and thereby failing to direct the model toward task-relevant knowledge. This paper argues for dynamically strengthening the relevance between specific instances and their prompts. We therefore introduce a stance detection model built on a gated multilayer perceptron (gMLP) and a prompt learning strategy, tailored to zero-shot stance detection. Specifically, the gMLP captures the semantic features of each instance, and a control gate mechanism modulates the influence of the prompt tokens according to that instance's semantic context, dynamically reinforcing the instance-prompt connection. Moreover, we integrate contrastive learning to give the model more discriminative feature representations. Experimental evaluations on the VAST and SEM16 benchmark datasets substantiate the method's effectiveness, yielding a 1.3% improvement over the JointCL model on the VAST dataset.
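The instance-conditioned gating described in this abstract can be illustrated in a few lines. This is a minimal NumPy sketch, not the authors' implementation: the projection `W` and bias `b`, the feature dimension, and the use of a single pooled instance vector are all assumptions. It only shows the core idea of deriving a sigmoid gate from an instance's semantics and using it to scale each prompt token embedding.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_prompt_tokens(instance_feats, prompt_tokens, W, b):
    """Modulate prompt token embeddings with an instance-conditioned gate.

    instance_feats: (d,)   pooled semantic features of one instance
    prompt_tokens:  (p, d) prompt token embeddings
    W, b:           projection producing one gate value per prompt token
    """
    gate = sigmoid(W @ instance_feats + b)   # (p,), each value in (0, 1)
    return gate[:, None] * prompt_tokens     # scale each prompt token

rng = np.random.default_rng(0)
d, p = 8, 4                                  # hypothetical dimensions
inst = rng.normal(size=d)
prompts = rng.normal(size=(p, d))
W, b = rng.normal(size=(p, d)), np.zeros(p)
gated = gate_prompt_tokens(inst, prompts, W, b)
```

Because the gate lies in (0, 1), each prompt token is attenuated by an amount that depends on the instance, which is the sense in which the prompt-instance connection becomes dynamic.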

2.
Entropy (Basel) ; 26(2)2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38392417

ABSTRACT

Joint entity and relation extraction methods have attracted increasing attention recently due to their capacity to extract relational triples from intricate texts. However, most existing methods ignore the association and difference between the features of the Named Entity Recognition (NER) subtask and those of the Relation Extraction (RE) subtask, which leads to an imbalance in the interaction between the two subtasks. To address this, we propose FSN, a new joint entity and relation extraction method. It contains a Filter Separator Network module that employs a bidirectional LSTM to filter and separate the information in a sentence and merges similar features through a concatenation operation, thus resolving the interaction imbalance between subtasks. To better extract local feature information for each subtask, we designed a Named Entity Recognition Generation (NERG) module and a Relation Extraction Generation (REG) module, adopting the design idea of the Transformer decoder together with average pooling, to better capture the entity boundary information in the sentence and the entity-pair boundary information for each relation in the relational triple, respectively. Additionally, we propose a dynamic loss function that adjusts the learning weight of each subtask in every epoch according to each subtask's share of the total loss, narrowing the gap between the ideal and actual results. We thoroughly evaluated our model on the SciERC and ACE2005 datasets. The experimental results demonstrate that our model achieves satisfactory results compared to the baseline models.
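The dynamic loss weighting in this abstract is only described in outline; the sketch below assumes the simplest reading, where each subtask's weight in an epoch is its share of the combined loss, so the currently harder subtask receives more emphasis. The function names and the proportional scheme are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def dynamic_task_weights(subtask_losses):
    """Weight each subtask in proportion to its share of the total loss
    (assumed scheme): a lagging subtask gets a larger weight next epoch."""
    losses = np.asarray(subtask_losses, dtype=float)
    return losses / losses.sum()

def joint_loss(ner_loss, re_loss):
    """Combine NER and RE losses with the dynamic weights."""
    w = dynamic_task_weights([ner_loss, re_loss])
    return w[0] * ner_loss + w[1] * re_loss

# Hypothetical epoch where NER (2.0) lags behind RE (0.5):
w = dynamic_task_weights([2.0, 0.5])   # NER gets the larger weight
total = joint_loss(2.0, 0.5)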

3.
Entropy (Basel) ; 26(5)2024 May 20.
Article in English | MEDLINE | ID: mdl-38785680

ABSTRACT

Traditional pest recognition methods have limitations in addressing the challenges posed by diverse pest species, varying sizes, diverse morphologies, and complex field backgrounds, resulting in lower recognition accuracy. To overcome these limitations, this paper proposes a novel pest recognition method based on an attention mechanism and multi-scale feature fusion (AM-MSFF). By combining the advantages of both, the method significantly improves the accuracy of pest recognition. First, we introduce the relation-aware global attention (RGA) module to adaptively adjust the feature weights of each position, focusing more on pest-relevant regions and reducing background interference. Then, we propose the multi-scale feature fusion (MSFF) module to fuse feature maps from different scales, which better captures both subtle differences and overall shape features in pest images. Moreover, we introduce generalized-mean pooling (GeMP) to extract feature information from pest images more accurately and to better distinguish pest categories. For the loss function, this study proposes an improved focal loss (FL), termed balanced focal loss (BFL), as a replacement for cross-entropy loss; the improvement addresses the common class-imbalance problem in pest datasets and thereby enhances the recognition accuracy of pest identification models. To evaluate the AM-MSFF model, we conduct experiments on two publicly available pest datasets (IP102 and D0). Extensive experiments demonstrate that AM-MSFF outperforms most state-of-the-art methods: accuracy reaches 72.64% on IP102 and 99.05% on D0.
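The abstract does not give the exact form of the balanced focal loss, so the sketch below assumes one common way to balance focal loss: a per-class alpha derived from inverse class frequency on top of the standard focal term -(1-p)^gamma * log(p). The normalization of alpha and the gamma value are illustrative assumptions.

```python
import numpy as np

def balanced_focal_loss(probs, labels, class_counts, gamma=2.0):
    """Focal loss with per-class alpha from inverse class frequency
    (assumed form of BFL, not the paper's exact definition).

    probs:        (n, c) predicted class probabilities
    labels:       (n,)   integer class labels
    class_counts: (c,)   training-set frequency of each class
    """
    counts = np.asarray(class_counts, dtype=float)
    alpha = 1.0 / counts
    alpha = alpha / alpha.sum() * len(counts)   # normalize around 1
    p_t = probs[np.arange(len(labels)), labels] # prob of the true class
    fl = -alpha[labels] * (1.0 - p_t) ** gamma * np.log(p_t)
    return fl.mean()

# Hypothetical imbalanced setting: 90 vs 10 training samples per class.
# A miss on the rare class (p=0.3) costs more than the same miss on the
# common class, which is the point of the balancing term.
loss_rare = balanced_focal_loss(np.array([[0.7, 0.3]]), np.array([1]), [90, 10])
loss_common = balanced_focal_loss(np.array([[0.3, 0.7]]), np.array([0]), [90, 10])
```

The (1-p)^gamma factor down-weights well-classified examples, while alpha shifts the remaining emphasis toward under-represented classes.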

4.
Entropy (Basel) ; 25(7)2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37510012

ABSTRACT

Micro-expressions are the small, brief facial-expression changes that humans show during emotional experiences; annotating them is complicated, which makes micro-expression data scarce. To extract salient and distinguishing features from a limited dataset, we propose an attention-based multi-scale, multi-modal, multi-branch flow network that thoroughly learns the motion information of micro-expressions by exploiting the attention mechanism and the complementary properties of different optical-flow representations. First, we extract optical-flow information (horizontal optical flow, vertical optical flow, and optical strain) from the onset and apex frames of micro-expression videos, and each branch learns one kind of optical-flow information separately. Second, we propose a multi-scale fusion module that extracts richer and more stable feature representations, using spatial attention to focus on locally important information at each scale. Then, we design a multi-optical-flow feature reweighting module that adaptively selects features for each optical flow via channel attention. Finally, to better integrate the information of the three branches and to alleviate the uneven distribution of micro-expression samples, we introduce a logarithmically adjusted prior-knowledge weighting loss. This loss function weights the prediction scores of samples from different categories to mitigate the negative impact of class imbalance during classification. Extensive experiments and feature visualizations on three benchmark datasets (CASMEII, SAMM, and SMIC) demonstrate the effectiveness of the proposed model, with performance comparable to that of state-of-the-art methods.
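The abstract says the loss weights prediction scores using a logarithmically adjusted prior, but does not give the formula. The sketch below assumes a logit-adjustment-style reweighting, where tau * log(prior) is subtracted from the logits so that rare categories are not drowned out by frequent ones; the temperature `tau` and this exact form are assumptions for illustration.

```python
import numpy as np

def log_prior_adjusted_logits(logits, class_counts, tau=1.0):
    """Subtract tau * log(prior) from the logits (assumed scheme), boosting
    rare classes: a small prior means a large -log(prior) bonus."""
    counts = np.asarray(class_counts, dtype=float)
    prior = counts / counts.sum()
    return logits - tau * np.log(prior)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical 3-class setting with counts 80/15/5. With perfectly
# uninformative (equal) raw logits, the adjustment tilts the prediction
# toward the rarest class instead of the majority class.
logits = np.zeros((1, 3))
probs = softmax(log_prior_adjusted_logits(logits, [80, 15, 5]))
```

Feeding the adjusted logits into a standard cross-entropy loss then penalizes majority-class shortcuts during training, which is one way to realize the category reweighting the abstract describes.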
