Results 1 - 4 of 4
1.
Comput Biol Med ; 166: 107522, 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37820559

ABSTRACT

Automated radiology report generation is gaining popularity as a means to alleviate the workload of radiologists and prevent misdiagnoses and missed diagnoses. By imitating the working patterns of radiologists, previous report generation approaches have achieved remarkable performance. However, these approaches suffer from two significant problems: (1) lack of visual prior: medical observations in radiology images are interdependent and exhibit certain patterns, and the lack of such a visual prior can reduce accuracy in identifying abnormal regions; (2) lack of alignment between images and texts: the absence of annotations and alignments for regions of interest in the radiology images and reports can lead to inconsistent visual and textual features for the abnormal regions generated by the model. To address these issues, we propose a Visual Prior-based Cross-modal Alignment Network for radiology report generation. First, we propose a novel Contrastive Attention that compares the input image with normal images to extract difference information, namely the visual prior, which helps to identify abnormalities quickly. Then, to facilitate the alignment of images and texts, we propose a Cross-modal Alignment Network that leverages a cross-modal matrix, initialized with features generated by pre-trained models, to compute cross-modal responses for visual and textual features. Finally, a Visual Prior-guided Multi-Head Attention is proposed to incorporate the visual prior into the generation process. Extensive experimental results on two benchmark datasets, IU-Xray and MIMIC-CXR, show that our proposed model outperforms state-of-the-art models on almost all metrics, achieving BLEU-4 scores of 0.188 and 0.116 and CIDEr scores of 0.409 and 0.240, respectively.
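To make the two named components concrete, the following minimal PyTorch sketch shows one way a contrastive attention could subtract information shared with normal images to obtain a visual prior, and how a decoder attention layer could inject that prior. Tensor shapes, hidden sizes, and the injection mechanism (adding the projected prior to keys/values) are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: contrastive attention producing a "visual prior",
# followed by a prior-guided cross-attention layer. Shapes and the
# injection scheme are assumptions, not the authors' specification.
import torch
import torch.nn as nn


class ContrastiveAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, img_feats, normal_feats):
        # img_feats:    (B, N, D) patch features of the input image
        # normal_feats: (B, M, D) pooled features of reference normal images
        attn = torch.softmax(
            self.query(img_feats) @ self.key(normal_feats).transpose(-2, -1)
            / img_feats.size(-1) ** 0.5,
            dim=-1,
        )                                    # (B, N, M)
        common = attn @ normal_feats         # information shared with normal images
        return img_feats - common            # difference features = visual prior


class PriorGuidedDecoderLayer(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.prior_proj = nn.Linear(dim, dim)

    def forward(self, text_hidden, img_feats, visual_prior):
        # Inject the prior by adding its projection to the keys/values,
        # one simple way to let abnormal regions dominate the attention.
        kv = img_feats + self.prior_proj(visual_prior)
        out, _ = self.cross_attn(text_hidden, kv, kv)
        return out


if __name__ == "__main__":
    B, N, M, T, D = 2, 49, 16, 20, 512
    prior = ContrastiveAttention(D)(torch.randn(B, N, D), torch.randn(B, M, D))
    out = PriorGuidedDecoderLayer(D)(torch.randn(B, T, D), torch.randn(B, N, D), prior)
    print(out.shape)  # torch.Size([2, 20, 512])
```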

2.
Front Psychiatry ; 13: 1075564, 2022.
Article in English | MEDLINE | ID: mdl-36704734

ABSTRACT

Introduction: Recent efforts have been made to apply machine learning and deep learning approaches to the automated classification of schizophrenia using structural magnetic resonance imaging (sMRI) at the individual level. However, these approaches are less accurate for early psychosis (EP), since structural brain changes are mild at the early stage. As cognitive impairment is a main feature of psychosis, in this study we apply a multi-task deep learning framework that uses sMRI together with cognitive assessment to facilitate the classification of patients with EP from healthy individuals. Method: Unlike previous studies, we used sMRI as the direct input to perform EP classification and cognitive estimation. The proposed deep learning model does not require time-consuming volumetric or surface-based analysis and can additionally provide cognition predictions. Experiments were conducted on an in-house data set with 77 subjects and a public ABCD HCP-EP data set with 164 subjects. Results: We achieved 74.9 ± 4.3% five-fold cross-validated accuracy and an area under the curve of 71.1 ± 4.1% on EP classification with the inclusion of cognitive estimation. Discussion: We show the feasibility of automated cognitive estimation from sMRI with deep learning models, and demonstrate that implicitly incorporating cognitive measures as additional information facilitates the classification of EP patients from healthy controls.
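As a concrete illustration of this kind of multi-task setup, the PyTorch sketch below pairs a shared 3D CNN trunk over an sMRI volume with a classification head (EP vs. control) and a regression head for a cognitive score. The backbone depth, head sizes, loss weighting, and input resolution are illustrative assumptions, not the architecture used in the study.

```python
# Hedged sketch of a multi-task sMRI model: shared 3D CNN trunk,
# one classification head and one cognitive-score regression head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskSMRINet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(64, 2)   # EP vs. healthy control
        self.cog_head = nn.Linear(64, 1)   # cognitive score estimate

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.cog_head(h).squeeze(-1)


def multitask_loss(logits, cog_pred, labels, cog_true, alpha=0.5):
    # Joint objective: cross-entropy for diagnosis plus MSE for cognition,
    # blended with a hand-picked weight alpha (an assumption).
    return F.cross_entropy(logits, labels) + alpha * F.mse_loss(cog_pred, cog_true)


if __name__ == "__main__":
    model = MultiTaskSMRINet()
    vol = torch.randn(2, 1, 64, 64, 64)          # toy sMRI volumes
    logits, cog = model(vol)
    loss = multitask_loss(logits, cog,
                          torch.tensor([0, 1]), torch.tensor([0.2, -0.5]))
    print(loss.item())
```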

3.
Comput Math Methods Med ; 2015: 262819, 2015.
Article in English | MEDLINE | ID: mdl-26557871

ABSTRACT

We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is compactly fitted with a generalized Gaussian density (GGD), and the similarity between two subbands is computed as the Jensen-Shannon divergence of the two fitted GGDs. To preserve more useful information from the source images, new fusion rules are developed for subbands of different frequencies: the low-frequency subbands are fused using two activity measures based on regional standard deviation and Shannon entropy, while the high-frequency subbands are merged via weight maps determined by pixel saliency values. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual quality and objective evaluation indices.
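The two statistical ingredients of the scheme, fitting a zero-mean GGD to a subband and scoring subband similarity with the Jensen-Shannon divergence of two fitted GGDs, can be sketched as follows in NumPy/SciPy. Moment matching as the estimator, the grid limits, and the numerical integration are assumptions for illustration; the paper's exact procedure may differ.

```python
# Hedged sketch: moment-matched GGD fit and Jensen-Shannon divergence
# between two fitted GGDs, evaluated numerically on a uniform grid.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq


def fit_ggd(coeffs):
    """Fit a zero-mean GGD p(x) ~ exp(-(|x|/alpha)**beta) by moment matching."""
    m1, m2 = np.mean(np.abs(coeffs)), np.mean(coeffs ** 2)
    ratio = m1 ** 2 / m2

    def gap(beta):
        return gamma(2 / beta) ** 2 / (gamma(1 / beta) * gamma(3 / beta)) - ratio

    beta = brentq(gap, 0.1, 10.0)                     # shape parameter
    alpha = m1 * gamma(1 / beta) / gamma(2 / beta)    # scale parameter
    return alpha, beta


def ggd_pdf(x, alpha, beta):
    return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)


def js_divergence(p, q, dx):
    """Jensen-Shannon divergence of two densities sampled on a uniform grid."""
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) * dx

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


if __name__ == "__main__":
    band1 = np.random.laplace(scale=1.0, size=50_000)   # stand-ins for two subbands
    band2 = np.random.normal(scale=1.5, size=50_000)
    x = np.linspace(-10, 10, 4001)
    p = ggd_pdf(x, *fit_ggd(band1))
    q = ggd_pdf(x, *fit_ggd(band2))
    print("JS divergence:", js_divergence(p, q, x[1] - x[0]))
```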


Subjects
Algorithms, Image Interpretation, Computer-Assisted/methods, Brain/diagnostic imaging, Brain/pathology, Computational Biology, Humans, Magnetic Resonance Imaging/statistics & numerical data, Multimodal Imaging/methods, Multimodal Imaging/statistics & numerical data, Normal Distribution, Positron-Emission Tomography/statistics & numerical data, Tomography, Emission-Computed, Single-Photon/statistics & numerical data, Tomography, X-Ray Computed/statistics & numerical data
4.
J Zhejiang Univ Sci B ; 6(7): 611-6, 2005 Jul.
Article in English | MEDLINE | ID: mdl-15973760

ABSTRACT

This research studies 3D reconstruction and dynamic concision based on 2D medical digital images using the Virtual Reality Modeling Language (VRML) and JavaScript, with a focus on how to realize dynamic concision of a 3D medical model with Script and Sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be built with higher quality than that obtained by traditional methods. With the function of dynamic concision, the VRML browser offers better windows for human-computer interaction in a real-time environment than before. 3D reconstruction and dynamic concision with VRML meet the requirements of medical observation of 3D reconstructions and hold promising prospects in the field of medical imaging.
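The authors' pipeline is VRML plus JavaScript; purely as a loose stand-in for the same two steps (reconstructing a 3D model from stacked 2D slices, then cutting it interactively along a plane), the Python sketch below uses scikit-image's marching cubes. The synthetic volume, threshold, and axis-aligned clipping plane are assumptions for demonstration only, not the paper's method.

```python
# Hedged stand-in for 3D reconstruction and plane cutting, not the VRML pipeline.
import numpy as np
from skimage import measure

# A stack of 2D "slices": a synthetic sphere standing in for segmented organ data.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = (np.sqrt(x ** 2 + y ** 2 + z ** 2) < 0.8).astype(np.float32)

# 3D reconstruction: extract an isosurface mesh from the voxel volume.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"full mesh: {len(verts)} vertices, {len(faces)} faces")

# Stand-in for dynamic concision: clip the volume at a plane and re-mesh,
# roughly the operation the VRML Script/Sensor nodes drive interactively.
cut = volume.copy()
cut[:, :, :32] = 0.0                      # drop everything on one side of the plane
verts_cut, faces_cut, _, _ = measure.marching_cubes(cut, level=0.5)
print(f"cut mesh:  {len(verts_cut)} vertices, {len(faces_cut)} faces")
```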


Subjects
Artificial Intelligence, Imaging, Three-Dimensional/methods, Programming Languages, Radiographic Image Interpretation, Computer-Assisted/methods, Solitary Pulmonary Nodule/diagnostic imaging, Surgery, Computer-Assisted/methods, User-Computer Interface, Algorithms, Computer Graphics, Computer Simulation, Humans, Models, Biological, Radiographic Image Enhancement/methods, Software, Solitary Pulmonary Nodule/physiopathology