Results 1 - 2 of 2
1.
Comput Biol Med; 164: 107255, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37499296

ABSTRACT

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has shown high sensitivity for diagnosing breast cancer. However, few computer-aided algorithms employ DCE-MR images for breast cancer diagnosis, owing to the lack of publicly available DCE-MRI datasets. To address this issue, our work releases a new DCE-MRI dataset, called BreastDM, for breast tumor segmentation and classification. In particular, a dataset of DCE-MR images from 232 patients with benign and malignant cases is established. Each case consists of three types of sequences: pre-contrast, post-contrast, and subtraction sequences. To show the difficulty of breast DCE-MRI tumor segmentation and classification tasks, benchmarks are established with state-of-the-art image segmentation and classification algorithms, including conventional hand-crafted feature-based methods and recently emerged deep learning-based methods. More importantly, a local-global cross-attention fusion network (LG-CAFN) is proposed to further improve the performance of breast tumor image classification. Specifically, LG-CAFN achieved the highest accuracy (88.20%, 83.93%) and AUC values (0.9154, 0.8826) in both groups of experiments. Extensive experiments are conducted to present strong baselines based on various typical image segmentation and classification algorithms. The experimental results also demonstrate the superiority of the proposed LG-CAFN over other breast tumor image classification methods. The related dataset and evaluation codes are publicly available at smallboy-code/Breast-cancer-dataset.
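
For illustration only: the abstract does not detail LG-CAFN's layers, so the following is a minimal PyTorch sketch of the general local-global cross-attention fusion idea for binary (benign vs. malignant) classification. All class names, dimensions, pooling choices, and the shared LayerNorm are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class LocalGlobalCrossAttentionFusion(nn.Module):
    """Sketch: fuse a local (e.g., tumor-region) token sequence with a
    global (whole-image) token sequence via bidirectional cross-attention.
    Sizes are illustrative, not taken from the paper."""

    def __init__(self, dim: int = 256, heads: int = 8, num_classes: int = 2):
        super().__init__()
        # Local tokens attend to global tokens, and vice versa.
        self.local_to_global = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_to_local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(2 * dim, num_classes)  # benign vs. malignant

    def forward(self, local_feats: torch.Tensor, global_feats: torch.Tensor):
        # local_feats: (B, N_local, dim); global_feats: (B, N_global, dim)
        l, _ = self.local_to_global(local_feats, global_feats, global_feats)
        g, _ = self.global_to_local(global_feats, local_feats, local_feats)
        l = self.norm(l + local_feats).mean(dim=1)   # residual + token pooling
        g = self.norm(g + global_feats).mean(dim=1)
        return self.head(torch.cat([l, g], dim=-1))  # class logits

model = LocalGlobalCrossAttentionFusion()
logits = model(torch.randn(4, 49, 256), torch.randn(4, 49, 256))
print(logits.shape)  # torch.Size([4, 2])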


Subject(s)
Breast Neoplasms, Mammary Neoplasms, Animal, Humans, Animals, Female, Contrast Media, Magnetic Resonance Imaging/methods, Breast/diagnostic imaging, Breast/pathology, Breast Neoplasms/pathology, Algorithms
2.
Front Neurosci; 16: 1107284, 2022.
Article in English | MEDLINE | ID: mdl-36685221

ABSTRACT

Recently, personality trait recognition, which aims to analyze people's psychological characteristics from first-impression behavioral data, has become an interesting and active topic in psychology, affective neuroscience, and artificial intelligence. To effectively exploit spatio-temporal cues in audio-visual modalities, this paper proposes a new method for multimodal personality trait recognition that integrates audio-visual modalities in a hybrid deep learning framework comprising convolutional neural networks (CNN), a bi-directional long short-term memory network (Bi-LSTM), and a Transformer network. In particular, a pre-trained deep audio CNN model is used to learn high-level segment-level audio features. A pre-trained deep face CNN model is leveraged to separately learn high-level frame-level global scene features and local face features from each frame of the dynamic video sequences. These extracted deep audio-visual features are then fed into a Bi-LSTM and a Transformer network, which individually capture long-term temporal dependencies, producing the final global audio and visual features for downstream tasks. Finally, a linear regression method is employed to perform the audio-only and visual-only personality trait recognition tasks, followed by a decision-level fusion strategy that produces the final Big-Five personality scores and interview scores. Experimental results on the public ChaLearn First Impression-V2 personality dataset show the effectiveness of our method, which outperforms the other methods evaluated.
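
For illustration only: a minimal PyTorch sketch of the pipeline stages the abstract describes, temporal modeling of pre-extracted CNN features with a Bi-LSTM and a Transformer encoder, followed by decision-level fusion of per-modality trait predictions. All names, dimensions, depths, and the simple averaging fusion rule are assumptions, since the abstract gives no implementation details.

import torch
import torch.nn as nn

class TemporalBranch(nn.Module):
    """Sketch of one modality branch: pre-extracted segment/frame-level CNN
    features -> Bi-LSTM and Transformer encoders -> pooled trait scores.
    Dimensions and depths are illustrative assumptions."""

    def __init__(self, feat_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        enc_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.regress = nn.Linear(2 * hidden + feat_dim, 5)  # Big-Five scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, feat_dim) pre-extracted CNN features
        lstm_out, _ = self.bilstm(x)     # (B, T, 2*hidden)
        trans_out = self.transformer(x)  # (B, T, feat_dim)
        pooled = torch.cat([lstm_out.mean(1), trans_out.mean(1)], dim=-1)
        return torch.sigmoid(self.regress(pooled))  # traits in [0, 1]

audio_branch, visual_branch = TemporalBranch(), TemporalBranch()
audio_feats = torch.randn(2, 30, 512)   # e.g., 30 audio segments
visual_feats = torch.randn(2, 30, 512)  # e.g., 30 video frames
# Decision-level fusion: average the per-modality trait predictions.
scores = 0.5 * audio_branch(audio_feats) + 0.5 * visual_branch(visual_feats)
print(scores.shape)  # torch.Size([2, 5])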
