Results 1 - 3 of 3
1.
Eur Radiol; 31(12): 9664-9674, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34089072

ABSTRACT

OBJECTIVE: To assess whether a deep learning-based artificial intelligence (AI) algorithm improves reader performance for lung cancer detection on chest X-rays (CXRs).

METHODS: This reader study included 173 images from cancer-positive patients (n = 98) and 346 images from cancer-negative patients (n = 196) selected from the National Lung Screening Trial (NLST). Eight readers, three radiology residents and five board-certified radiologists, participated in the observer performance test. The AI algorithm provided an image-level probability of pulmonary nodule or mass on CXRs and a heatmap of detected lesions. Reader performance was compared using AUC, sensitivity, specificity, false positives per image (FPPI), and rates of chest CT recommendations.

RESULTS: With AI, the average sensitivity of readers for the detection of visible lung cancer increased for residents but was similar for radiologists compared to that without AI (0.61 [95% CI, 0.55-0.67] vs. 0.72 [95% CI, 0.66-0.77], p = 0.016 for residents; 0.76 [95% CI, 0.72-0.81] vs. 0.76 [95% CI, 0.72-0.81], p = 1.00 for radiologists), while FPPI was similar for residents but decreased for radiologists (0.15 [95% CI, 0.11-0.18] vs. 0.12 [95% CI, 0.09-0.16], p = 0.13 for residents; 0.24 [95% CI, 0.20-0.29] vs. 0.17 [95% CI, 0.13-0.20], p < 0.001 for radiologists). With AI, the average rate of chest CT recommendation in patients positive for visible cancer increased for residents but was similar for radiologists (54.7% [95% CI, 48.2-61.2%] vs. 70.2% [95% CI, 64.2-76.2%], p < 0.001 for residents; 72.5% [95% CI, 68.0-77.1%] vs. 73.9% [95% CI, 69.4-78.3%], p = 0.68 for radiologists), while the rate in cancer-negative patients was similar for residents but decreased for radiologists (11.2% [95% CI, 9.6-13.1%] vs. 9.8% [95% CI, 8.0-11.6%], p = 0.32 for residents; 16.4% [95% CI, 14.7-18.2%] vs. 11.7% [95% CI, 10.2-13.3%], p < 0.001 for radiologists).

CONCLUSIONS: The AI algorithm can enhance reader performance for the detection of lung cancers on chest radiographs when used as a second reader.

KEY POINTS:
• This reader study on the NLST dataset shows that the AI algorithm provided a sensitivity benefit for residents and a specificity benefit for radiologists in the detection of visible lung cancer.
• With AI, radiology residents recommended more chest CT examinations (54.7% vs. 70.2%, p < 0.001) for patients with visible lung cancer.
• With AI, radiologists recommended a significantly lower proportion of unnecessary chest CT examinations (16.4% vs. 11.7%, p < 0.001) in cancer-negative patients.
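For context on how the reported reader metrics relate to per-image calls, the following is a minimal sketch of computing sensitivity, specificity, and FPPI from image-level ground truth and reader decisions. It is illustrative only, not the study's evaluation code; the function name, arguments, and toy data are assumptions.

```python
import numpy as np

def reader_metrics(truth, calls, fp_counts):
    """Compute per-reader detection metrics for a CXR reader study.

    truth     : 1 = image from a cancer-positive patient, 0 = cancer-negative
    calls     : 1 = reader flagged a suspicious nodule/mass, 0 = called normal
    fp_counts : number of false-positive lesion marks on each image
    """
    truth, calls, fp_counts = map(np.asarray, (truth, calls, fp_counts))

    tp = np.sum((truth == 1) & (calls == 1))
    fn = np.sum((truth == 1) & (calls == 0))
    tn = np.sum((truth == 0) & (calls == 0))
    fp = np.sum((truth == 0) & (calls == 1))

    sensitivity = tp / (tp + fn)   # fraction of visible cancers detected
    specificity = tn / (tn + fp)   # fraction of negative images called normal
    fppi = fp_counts.mean()        # average false-positive marks per image

    return {"sensitivity": sensitivity, "specificity": specificity, "fppi": fppi}

# Toy example with 6 images (2 cancer-positive, 4 cancer-negative).
print(reader_metrics(truth=[1, 1, 0, 0, 0, 0],
                     calls=[1, 0, 0, 1, 0, 0],
                     fp_counts=[0, 0, 0, 1, 0, 0]))
```

In the study, these quantities would be averaged across readers within each group (residents vs. radiologists), with and without AI assistance.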


Subjects
Artificial Intelligence , Lung Neoplasms , Algorithms , Humans , Lung , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted , Radiography , Radiography, Thoracic , Sensitivity and Specificity
2.
Eur J Radiol; 139: 109583, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33846041

ABSTRACT

PURPOSE: As of August 30th, there were a total of 25.1 million confirmed cases and 845 thousand deaths caused by coronavirus disease 2019 (COVID-19) worldwide. With overwhelming demands on medical resources, stratifying patients by risk is essential. In this multi-center study, we built prognosis models to predict severity outcomes, combining patients' electronic health records (EHR), which included vital signs and laboratory data, with deep learning- and CT-based severity prediction.

METHOD: We first developed a CT segmentation network using datasets from multiple institutions worldwide. Two biomarkers were extracted from the CT images: total opacity ratio (TOR) and consolidation ratio (CR). After obtaining TOR and CR, further prognosis analysis was conducted on datasets from INSTITUTE-1, INSTITUTE-2, and INSTITUTE-3. For each data cohort, a generalized linear model (GLM) was applied for prognosis prediction.

RESULTS: For the deep learning model, the correlation coefficient between the network prediction and manual segmentation was 0.755, 0.919, and 0.824 for the three cohorts, respectively. The AUC (95% CI) of the final prognosis models was 0.85 (0.77, 0.92), 0.93 (0.87, 0.98), and 0.86 (0.75, 0.94) for the INSTITUTE-1, INSTITUTE-2, and INSTITUTE-3 cohorts, respectively. Either TOR or CR was retained in all three final prognosis models. Age, white blood cell count (WBC), and platelet count (PLT) were selected as predictors in two cohorts. Oxygen saturation (SpO2) was selected as a predictor in one cohort.

CONCLUSION: The developed deep learning method can segment lung infection regions. The prognosis results indicated that age, SpO2, the CT biomarkers, PLT, and WBC were the most important prognostic predictors of COVID-19 in our prognosis model.
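The abstract does not spell out how TOR and CR are defined or which GLM family was used; a common convention is to express each biomarker as a fraction of the segmented lung volume and to use logistic regression (a binomial GLM) for a binary severity outcome. The sketch below assumes that convention; the function names, feature set, and cohort numbers are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ct_biomarkers(lung_mask, opacity_mask, consolidation_mask):
    """Total opacity ratio (TOR) and consolidation ratio (CR) as
    fractions of the segmented lung volume (assumed definition)."""
    lung_voxels = lung_mask.sum()
    tor = opacity_mask[lung_mask > 0].sum() / lung_voxels
    cr = consolidation_mask[lung_mask > 0].sum() / lung_voxels
    return tor, cr

# Hypothetical cohort table: one row per patient,
# columns = [age, WBC, PLT, SpO2, TOR, CR]; y = 1 for a severe outcome.
X = np.array([[62,  7.1, 210, 0.94, 0.12, 0.03],
              [48,  5.4, 260, 0.97, 0.04, 0.01],
              [71, 11.3, 150, 0.89, 0.35, 0.14],
              [55,  6.8, 230, 0.95, 0.08, 0.02]])
y = np.array([1, 0, 1, 0])

# Logistic regression is one instance of a generalized linear model.
glm = LogisticRegression(max_iter=1000).fit(X, y)
print(glm.predict_proba(X)[:, 1])  # predicted probability of a severe outcome
```

In practice, predictor selection (which EHR variables and which of TOR/CR enter the final model) would be performed per cohort, as described in the abstract.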


Subjects
COVID-19 , Deep Learning , Electronic Health Records , Humans , Lung , Prognosis , SARS-CoV-2 , Tomography, X-Ray Computed
3.
IEEE J Biomed Health Inform; 24(12): 3529-3538, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33044938

ABSTRACT

Early and accurate diagnosis of coronavirus disease (COVID-19) is essential for patient isolation and contact tracing so that the spread of infection can be limited. Computed tomography (CT) can provide important information in COVID-19, especially for patients with moderate to severe disease as well as those with worsening cardiopulmonary status. As an automatic tool, deep learning methods can be used to perform semantic segmentation of affected lung regions, which is important for establishing disease severity and predicting prognosis. Both the extent and type of pulmonary opacities help assess disease severity. However, manual pixel-level multi-class labelling is time-consuming, subjective, and non-quantitative. In this article, we propose a hybrid weak label-based deep learning method that utilizes both the manually annotated pulmonary opacities from COVID-19 pneumonia and the patient-level disease-type information available from the clinical report. A UNet was first trained with semantic labels to segment the total infected region. It was then used to initialize another UNet, which was trained to segment the consolidations using patient-level information via the Expectation-Maximization (EM) algorithm. To demonstrate the performance of the proposed method, multi-institutional CT datasets from Iran, Italy, South Korea, and the United States were used. The results show that the proposed method can predict the infected regions as well as the consolidation regions with good correlation to human annotation.
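The EM-style weak-label training described above can be pictured as alternating pseudo-label estimation (E-step) and segmentation-network updates (M-step), constrained by the patient-level consolidation label. The outline below is a schematic sketch under assumed tensor shapes and a stand-in network; it is not the authors' implementation, and the function name, threshold, and training details are assumptions.

```python
import torch
import torch.nn.functional as F

def em_weak_label_step(unet, optimizer, ct_slices, infection_masks, has_consolidation):
    """One EM-style update for weakly supervised consolidation segmentation.

    ct_slices         : (B, 1, H, W) CT slices
    infection_masks   : (B, 1, H, W) total-infection masks from the first UNet
    has_consolidation : (B,) patient-level labels from the clinical report
    """
    # E-step: derive pseudo-labels from the current network, restricted to
    # the already-segmented infection region.
    with torch.no_grad():
        probs = torch.sigmoid(unet(ct_slices)) * infection_masks
        pseudo = (probs > 0.5).float()
        # If the report says "no consolidation", force the pseudo-label to empty.
        pseudo[~has_consolidation.bool()] = 0.0

    # M-step: update the network toward the current pseudo-labels.
    optimizer.zero_grad()
    logits = unet(ct_slices)
    loss = F.binary_cross_entropy_with_logits(logits, pseudo)
    loss.backward()
    optimizer.step()
    return loss.item()

# Minimal usage with a stand-in model (a real UNet would replace this conv layer).
unet = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
opt = torch.optim.Adam(unet.parameters(), lr=1e-4)
x = torch.randn(2, 1, 64, 64)
infection = (torch.rand(2, 1, 64, 64) > 0.5).float()
labels = torch.tensor([1, 0])
print(em_weak_label_step(unet, opt, x, infection, labels))
```

The key design point is that the patient-level label only constrains the pseudo-labels (forcing them empty when the report indicates no consolidation), so pixel-level supervision for consolidation is never required.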


Subjects
COVID-19/diagnostic imaging , Deep Learning , Tomography, X-Ray Computed/methods , Algorithms , COVID-19/virology , Female , Humans , Male , Retrospective Studies , SARS-CoV-2/isolation & purification , Severity of Illness Index