Results 1 - 3 of 3
1.
Digit Health ; 10: 20552076241260557, 2024.
Article in English | MEDLINE | ID: mdl-38882253

ABSTRACT

Background: Left ventricular opacification (LVO) improves the accuracy of left ventricular ejection fraction (LVEF) measurement by enhancing visualization of the endocardium. Manual delineation of the endocardium by sonographers is subject to observer variability. Artificial intelligence (AI) has the potential to improve the reproducibility of LVO-based LVEF assessment.
Objectives: The aim was to develop an AI model and to evaluate the feasibility and reproducibility of LVO in the assessment of LVEF.
Methods: This retrospective study included 1305 echocardiograms from 797 patients who underwent LVO at the Department of Ultrasound Medicine, Union Hospital, Huazhong University of Science and Technology, from 2013 to 2021. The AI model was developed with 5-fold cross-validation. The validation datasets comprised 50 patients prospectively collected at our center and 42 patients retrospectively collected at an external institution. To evaluate differences between LV function determined by AI and by sonographers, the median absolute error (MAE), Spearman correlation coefficient, and intraclass correlation coefficient (ICC) were calculated.
Results: In LVO, the MAE of LVEF between AI and manual measurements was 2.6% in the development cohort, 2.5% in the internal validation cohort, and 2.7% in the external validation cohort. As with two-dimensional echocardiography (2DE), the left ventricular (LV) volumes and LVEF measured by AI in LVO correlated significantly with manual measurements. The AI model showed excellent reliability for the LV parameters of LVO (ICC > 0.95).
Conclusions: AI-assisted LVO enables more accurate identification of the LV endocardium and reduces observer variability, providing a more reliable means of assessing LV function.
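
An illustrative sketch (not the authors' code) of how the three agreement statistics named in this abstract can be computed from paired AI and manual LVEF readings; the data values, and the choice of a two-way absolute-agreement ICC form, are assumptions made for the example:

    # Illustrative only: agreement between AI and manual LVEF readings.
    import numpy as np
    from scipy.stats import spearmanr

    def median_absolute_error(a, b):
        # "MAE" here is the median of |AI - manual|, in LVEF percentage points.
        return np.median(np.abs(np.asarray(a, float) - np.asarray(b, float)))

    def icc_agreement(a, b):
        # Two-way, absolute-agreement, single-measurement ICC; the abstract
        # does not state which ICC form was used, so this form is an assumption.
        x = np.column_stack([a, b]).astype(float)
        n, k = x.shape
        ms_r = k * x.mean(axis=1).var(ddof=1)   # between-subject mean square
        ms_c = n * x.mean(axis=0).var(ddof=1)   # between-rater mean square
        resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + x.mean()
        ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    ai     = np.array([55.2, 62.1, 40.8, 58.9, 35.4])  # hypothetical AI LVEF (%)
    manual = np.array([57.0, 60.5, 42.0, 59.3, 33.9])  # hypothetical manual LVEF (%)
    print(median_absolute_error(ai, manual))           # 1.5 percentage points
    rho, _ = spearmanr(ai, manual)
    print(rho, icc_agreement(ai, manual))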

2.
Med Image Anal ; 69: 101975, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33550007

ABSTRACT

The outbreak of COVID-19 around the world has placed great pressure on health care systems, and many efforts have been devoted to artificial intelligence (AI)-based analysis of CT and chest X-ray images to help alleviate the shortage of radiologists and improve diagnostic efficiency. However, only a few works focus on AI-based lung ultrasound (LUS) analysis, despite its significant role in COVID-19. In this work, we propose a novel method for severity assessment of COVID-19 patients from LUS and clinical information. Great challenges exist regarding the heterogeneous data, multi-modality information, and highly nonlinear mapping. To overcome these challenges, we first propose a dual-level supervised multiple instance learning module (DSA-MIL) to effectively combine zone-level representations into patient-level representations. A novel modality alignment contrastive learning module (MA-CLR) is then presented to combine the representations of the two modalities, LUS and clinical information, by matching the two spaces while preserving their discriminative features. To train the nonlinear mapping, a staged representation transfer (SRT) strategy is introduced to maximally leverage the semantic and discriminative information in the training data. We trained the model with LUS data from 233 patients and validated it with 80 patients. The method effectively combines the two modalities, achieving an accuracy of 75.0% for four-level patient severity assessment and 87.5% for binary severe/non-severe identification. In addition, the method provides interpretation of the severity assessment by grading each lung zone (with an accuracy of 85.28%) and identifying the pathological patterns of each zone. It has great potential in real clinical practice for COVID-19 patients, especially pregnant women and children, for progress monitoring, prognosis stratification, and patient management.


Subjects
COVID-19/diagnostic imaging; Lung/diagnostic imaging; Adolescent; Adult; Aged; Aged, 80 and over; Female; Humans; Machine Learning; Male; Middle Aged; SARS-CoV-2; Severity of Illness Index; Tomography, X-Ray Computed; Ultrasonography; Young Adult
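
As an illustrative aside (the paper's DSA-MIL is not reproduced here), the basic step of pooling zone-level LUS representations into a single patient-level representation can be sketched with a small attention-based MIL layer; the feature dimensions and the class name below are assumptions:

    # Illustrative only: attention-based MIL pooling of lung-zone features.
    import torch
    import torch.nn as nn

    class AttentionMILPooling(nn.Module):
        # Hypothetical module: scores each lung zone, then takes a weighted
        # average to form one patient-level representation.
        def __init__(self, feat_dim=512, attn_dim=128):
            super().__init__()
            self.score = nn.Sequential(
                nn.Linear(feat_dim, attn_dim),
                nn.Tanh(),
                nn.Linear(attn_dim, 1),
            )

        def forward(self, zone_feats):
            # zone_feats: (num_zones, feat_dim), one row per scanned zone.
            weights = torch.softmax(self.score(zone_feats), dim=0)  # (num_zones, 1)
            patient_feat = (weights * zone_feats).sum(dim=0)        # (feat_dim,)
            return patient_feat, weights.squeeze(-1)

    pool = AttentionMILPooling()
    zones = torch.randn(12, 512)             # e.g. 12 zones, 512-dim features each
    patient_vec, zone_weights = pool(zones)  # weights hint at influential zones

The paper's dual-level supervision (zone and patient labels), the MA-CLR alignment with clinical variables, and the SRT training strategy would sit on top of a pooling step of this kind.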
3.
IEEE Trans Image Process ; 23(11): 4850-62, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25216482

ABSTRACT

Blind image quality assessment (BIQA) aims to evaluate the perceptual quality of a distorted image without any information about its reference image. Existing BIQA models usually predict image quality by analyzing image statistics in some transformed domain, e.g., the discrete cosine transform or wavelet domain. Though great progress has been made in recent years, BIQA remains very challenging due to the lack of a reference image. Considering that local contrast features convey important structural information closely related to perceptual quality, we propose a novel BIQA model that exploits the joint statistics of two commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian (LOG) response. We employ an adaptive procedure to jointly normalize the GM and LOG features, and show that the joint statistics of the normalized features have desirable properties for the BIQA task. The proposed model is extensively evaluated on three large-scale benchmark databases and delivers performance highly competitive with state-of-the-art BIQA models, as well as with some well-known full-reference image quality assessment models.
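
As an illustrative aside (not the authors' implementation), the two contrast maps and a joint normalization of the kind described can be sketched with standard Gaussian-derivative filters; the kernel scales, the shared-energy normalizer, and the histogram summary below are assumptions:

    # Illustrative only: GM and LOG maps with a shared local normalization.
    import numpy as np
    from scipy import ndimage

    def gm_log_features(img, sigma=0.5, eps=1e-3):
        img = np.asarray(img, float)
        # Gradient magnitude from Gaussian-derivative filters.
        gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
        gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
        gm = np.hypot(gx, gy)
        # Laplacian-of-Gaussian response at the same scale.
        log = ndimage.gaussian_laplace(img, sigma)
        # Joint normalization (assumed form): divide both maps by a shared
        # local energy estimate so their joint statistics become comparable.
        energy = np.sqrt(ndimage.gaussian_filter(gm ** 2 + log ** 2, 2 * sigma))
        return gm / (energy + eps), log / (energy + eps)

    img = np.random.rand(64, 64)  # placeholder image
    gm_n, log_n = gm_log_features(img)
    # A simple stand-in for the joint statistics such a model would summarize:
    joint_hist, _, _ = np.histogram2d(gm_n.ravel(), log_n.ravel(), bins=10, density=True)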
