Results 1 - 2 of 2
1.
Neural Netw; 169: 20-31, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857170

ABSTRACT

The development of telecom technology not only facilitates social interaction but also inevitably provides a breeding ground for telecom fraud. Detecting such fraud is challenging because fraudsters tend to collude and disguise themselves among the mass of benign users. Previous approaches work by unearthing differences in sequential calling patterns between independent fraudsters, but they may overlook synergistic fraud patterns and oversimplify fraudulent behavior. Fortunately, the graph-like data formed by traceable telecom interactions creates opportunities for graph neural network (GNN)-based telecom fraud detection. We therefore develop a latent synergy graph (LSG) learning-based telecom fraud detector, named LSG-FD, to model both sequential and interactive fraudulent behaviors. Specifically, LSG-FD introduces (1) a multi-view LSG extractor that reconstructs synergy-relationship-oriented graphs from the meta-interaction graph under a second-order proximity assumption; (2) an LSTM-based calling-behavior encoder that captures sequential patterns from the perspective of individual callers; (3) a dual-channel graph learning module that alleviates the disassortativity issue (caused by fraudsters' camouflage) by combining dual-channel frequency filters with a learnable controller that adaptively aggregates high- and low-frequency information from neighbors; and (4) an imbalance-resistant model trainer that remedies the graph imbalance issue via a label-aware sampler. Experimental results on the telecom fraud dataset and two other widely used fraud datasets verify the effectiveness of our model.


Subjects
Fraud; Learning; Humans; Neural Networks, Computer
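Note: the abstract above describes a dual-channel graph learning module that mixes high- and low-frequency neighborhood information through a learnable controller. The following is a minimal sketch of that general idea in PyTorch, not the authors' LSG-FD implementation; the class name, dimensions, and sigmoid gate are assumptions made for illustration.

# Minimal sketch (assumed, not the authors' code): a low-pass channel smooths
# each node toward its neighbors, a high-pass channel keeps what differs from
# the neighbors, and a learnable per-node gate mixes the two channels.
import torch
import torch.nn as nn


class DualChannelLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin_low = nn.Linear(in_dim, out_dim)   # transform for the low-frequency channel
        self.lin_high = nn.Linear(in_dim, out_dim)  # transform for the high-frequency channel
        self.gate = nn.Linear(in_dim, 1)            # learnable controller (one weight per node)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Row-normalized adjacency acts as a simple low-pass (averaging) filter.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        a_norm = adj / deg
        low = a_norm @ x                       # smooth each node toward its neighbors
        high = x - low                         # keep the part that differs from neighbors
        alpha = torch.sigmoid(self.gate(x))    # per-node mixing weight in (0, 1)
        return torch.relu(alpha * self.lin_low(low) + (1 - alpha) * self.lin_high(high))


if __name__ == "__main__":
    # Toy graph: 6 nodes, 8-dim features, symmetric random adjacency.
    x = torch.randn(6, 8)
    adj = (torch.rand(6, 6) > 0.6).float()
    adj = ((adj + adj.t()) > 0).float()
    out = DualChannelLayer(8, 16)(x, adj)
    print(out.shape)  # torch.Size([6, 16])

The node-wise gate is what makes the aggregation adaptive: nodes whose neighborhoods look dissimilar (as with camouflaged fraudsters) can lean on the high-frequency channel instead of being averaged away.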
2.
Sensors (Basel); 23(4), 2023 Feb 11.
Article in English | MEDLINE | ID: mdl-36850648

ABSTRACT

Current speech recognition accuracy can exceed 97% on various datasets, but it drops sharply in noisy environments, and improving recognition performance under noise remains a challenging task. Because visual information is unaffected by acoustic noise, researchers often use lip information to help improve speech recognition, which makes lip-reading performance and the effectiveness of cross-modal fusion particularly important. In this paper, we try to improve speech recognition accuracy in noisy environments by improving both lip-reading performance and cross-modal fusion. First, because the same lip movement can correspond to multiple meanings, we construct a one-to-many mapping model between lip movements and speech, allowing the lip-reading model to consider which articulations the input lip movements represent. Audio representations are also preserved by modeling the inter-relationships between paired audio-visual representations; at inference time, the preserved audio representations can be retrieved from memory via the learned inter-relationships using only video input. Second, a joint cross-fusion model based on the attention mechanism effectively exploits complementary inter-modal relationships: the model computes cross-attention weights from the correlations between the joint feature representation and the individual modalities. Lastly, our proposed model achieved a 4.0% reduction in word error rate (WER) in a -15 dB SNR environment compared with the baseline method, and a 10.1% reduction in WER compared with audio-only speech recognition. The experimental results show that our method achieves a significant improvement over speech recognition models in different noise environments.


Subjects
Lip Reading; Speech Perception; Humans; Speech; Learning; Lip
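Note: the abstract above describes attention-based cross-modal fusion in which cross-attention weights come from correlations between a joint audio-visual representation and each individual modality. Below is a minimal sketch of that general pattern in PyTorch, not the paper's implementation; the module layout, dimensions, and fusion head are assumptions for illustration.

# Minimal sketch (assumed, not the paper's code): project concatenated audio
# and video features into a joint representation, let it attend back to each
# modality via cross-attention, then fuse the two attended streams.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.joint_proj = nn.Linear(2 * dim, dim)                         # joint audio-visual representation
        self.attn_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_video = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)                                # combine the attended streams

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # audio, video: (batch, time, dim) feature sequences of equal length.
        joint = self.joint_proj(torch.cat([audio, video], dim=-1))
        a_ctx, _ = self.attn_audio(joint, audio, audio)  # joint queries attend to the audio stream
        v_ctx, _ = self.attn_video(joint, video, video)  # joint queries attend to the video stream
        return self.out(torch.cat([a_ctx, v_ctx], dim=-1))


if __name__ == "__main__":
    aud = torch.randn(2, 50, 256)   # toy audio features
    vid = torch.randn(2, 50, 256)   # toy lip (video) features
    fused = CrossModalFusion()(aud, vid)
    print(fused.shape)  # torch.Size([2, 50, 256])

Using the joint representation as the attention query means each modality contributes context weighted by how strongly it correlates with the fused features, which is the complementary-relationship idea the abstract describes.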