Results 1 - 20 of 64
1.
Crit Care Med; 52(2): 237-247, 2024 Feb 1.
Article in English | MEDLINE | ID: mdl-38095506

ABSTRACT

OBJECTIVES: We aimed to develop a computer-aided detection (CAD) system to localize and detect the malposition of endotracheal tubes (ETTs) on portable supine chest radiographs (CXRs). DESIGN: This was a retrospective diagnostic study. DeepLabv3+ with ResNeSt50 backbone and DenseNet121 served as the model architecture for segmentation and classification tasks, respectively. SETTING: Multicenter study. PATIENTS: For the training dataset, images meeting the following inclusion criteria were included: 1) patient age greater than or equal to 20 years; 2) portable supine CXR; 3) examination in emergency departments or ICUs; and 4) examination between 2015 and 2019 at National Taiwan University Hospital (NTUH) (NTUH-1519 dataset: 5,767 images). The derived CAD system was tested on images from chronologically (examination during 2020 at NTUH, NTUH-20 dataset: 955 images) or geographically (examination between 2015 and 2020 at NTUH Yunlin Branch [YB], NTUH-YB dataset: 656 images) different datasets. All CXRs were annotated with pixel-level labels of ETT and with image-level labels of ETT presence and malposition. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: For the segmentation model, the Dice coefficients indicated that the ETT was delineated accurately (NTUH-20: 0.854; 95% CI, 0.824-0.881 and NTUH-YB: 0.839; 95% CI, 0.820-0.857). For the classification model, the presence of ETT was detected with high accuracy (area under the receiver operating characteristic curve [AUC]: NTUH-20, 1.000; 95% CI, 0.999-1.000 and NTUH-YB, 0.994; 95% CI, 0.984-1.000). Furthermore, among those images with ETT, ETT malposition could be detected with high accuracy (AUC: NTUH-20, 0.847; 95% CI, 0.671-0.980 and NTUH-YB, 0.734; 95% CI, 0.630-0.833), especially for endobronchial intubation (AUC: NTUH-20, 0.991; 95% CI, 0.969-1.000 and NTUH-YB, 0.966; 95% CI, 0.933-0.991). CONCLUSIONS: The derived CAD system could localize ETT and detect ETT malposition with excellent performance, especially for endobronchial intubation, and with favorable potential for external generalizability.
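The Dice coefficient reported above compares predicted and reference ETT masks pixel by pixel. A minimal NumPy sketch, with randomly generated masks standing in for model outputs and pixel-level annotations:

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks of the same shape."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Illustrative use: per-image Dice averaged over a small test batch.
rng = np.random.default_rng(0)
pred_batch = rng.random((4, 512, 512)) > 0.5   # stand-ins for segmentation outputs
gt_batch = rng.random((4, 512, 512)) > 0.5     # stand-ins for pixel-level labels
scores = [dice_coefficient(p, g) for p, g in zip(pred_batch, gt_batch)]
print(f"mean Dice: {np.mean(scores):.3f}")
```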


Subjects
Deep Learning, Emergency Medicine, Humans, Retrospective Studies, Intratracheal Intubation/adverse effects, Intratracheal Intubation/methods, University Hospitals
2.
Radiology; 306(1): 172-182, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36098642

ABSTRACT

Background Approximately 40% of pancreatic tumors smaller than 2 cm are missed at abdominal CT. Purpose To develop and to validate a deep learning (DL)-based tool able to detect pancreatic cancer at CT. Materials and Methods Retrospectively collected contrast-enhanced CT studies in patients diagnosed with pancreatic cancer between January 2006 and July 2018 were compared with CT studies of individuals with a normal pancreas (control group) obtained between January 2004 and December 2019. An end-to-end tool comprising a segmentation convolutional neural network (CNN) and a classifier ensembling five CNNs was developed and validated in the internal test set and a nationwide real-world validation set. The sensitivities of the computer-aided detection (CAD) tool and radiologist interpretation were compared using the McNemar test. Results A total of 546 patients with pancreatic cancer (mean age, 65 years ± 12 [SD], 297 men) and 733 control subjects were randomly divided into training, validation, and test sets. In the internal test set, the DL tool achieved 89.9% (98 of 109; 95% CI: 82.7, 94.9) sensitivity and 95.9% (141 of 147; 95% CI: 91.3, 98.5) specificity (area under the receiver operating characteristic curve [AUC], 0.96; 95% CI: 0.94, 0.99), without a significant difference (P = .11) in sensitivity compared with the original radiologist report (96.1% [98 of 102]; 95% CI: 90.3, 98.9). In a test set of 1473 real-world CT studies (669 malignant, 804 control) from institutions throughout Taiwan, the DL tool distinguished between malignant and control CT studies with 89.7% (600 of 669; 95% CI: 87.1, 91.9) sensitivity and 92.8% (746 of 804; 95% CI: 90.8, 94.5) specificity (AUC, 0.95; 95% CI: 0.94, 0.96), with 74.7% (68 of 91; 95% CI: 64.5, 83.3) sensitivity for malignancies smaller than 2 cm. Conclusion The deep learning-based tool enabled accurate detection of pancreatic cancer on CT scans, with reasonable sensitivity for tumors smaller than 2 cm. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Aisen and Rodrigues in this issue.
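The sensitivity comparison between the CAD tool and the radiologist report relies on the McNemar test for paired binary outcomes. A small illustration with statsmodels, using synthetic per-case detection results rather than the study data:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired per-case detection outcomes (1 = cancer detected) for the two readers;
# the arrays below are synthetic placeholders, not the study data.
cad_detected = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
radiologist_detected = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 1])

# 2x2 table of agreement/disagreement between the paired readings.
table = np.array([
    [np.sum((cad_detected == 1) & (radiologist_detected == 1)),
     np.sum((cad_detected == 1) & (radiologist_detected == 0))],
    [np.sum((cad_detected == 0) & (radiologist_detected == 1)),
     np.sum((cad_detected == 0) & (radiologist_detected == 0))],
])
result = mcnemar(table, exact=True)  # exact binomial test for small discordant counts
print(f"McNemar p-value: {result.pvalue:.3f}")
```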


Subjects
Deep Learning, Pancreatic Neoplasms, Male, Humans, Aged, Retrospective Studies, Sensitivity and Specificity, X-Ray Computed Tomography/methods, Pancreas
3.
BMC Cancer; 23(1): 58, 2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36650440

ABSTRACT

BACKGROUND: CT is the major detection tool for pancreatic cancer (PC). However, approximately 40% of PCs < 2 cm are missed on CT, underscoring a pressing need for tools to supplement radiologist interpretation. METHODS: Contrast-enhanced CT studies of 546 patients with pancreatic adenocarcinoma diagnosed by histology/cytology between January 2005 and December 2019 and 733 CT studies of controls with normal pancreas obtained during the same period in a tertiary referral center were retrospectively collected for developing an automatic end-to-end computer-aided detection (CAD) tool for PC using two-dimensional (2D) and three-dimensional (3D) radiomic analysis with machine learning. The CAD tool was tested in a nationwide dataset comprising 1,477 CT studies (671 PCs, 806 controls) obtained from institutions throughout Taiwan. RESULTS: The CAD tool achieved 0.918 (95% CI, 0.895-0.938) sensitivity and 0.822 (95% CI, 0.794-0.848) specificity in differentiating between studies with and without PC (area under the curve 0.947, 95% CI, 0.936-0.958), with 0.707 (95% CI, 0.602-0.797) sensitivity for tumors < 2 cm. The positive and negative likelihood ratios of PC were 5.17 (95% CI, 4.45-6.01) and 0.10 (95% CI, 0.08-0.13), respectively. Where high specificity is needed, using 2D and 3D analyses in series yielded 0.952 (95% CI, 0.934-0.965) specificity with a sensitivity of 0.742 (95% CI, 0.707-0.775), whereas using 2D and 3D analyses in parallel to maximize sensitivity yielded 0.915 (95% CI, 0.891-0.935) sensitivity at a specificity of 0.791 (95% CI, 0.762-0.819). CONCLUSIONS: The high accuracy and robustness of the CAD tool supported its potential for enhancing the detection of PC.
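The "in series" and "in parallel" combinations of the 2D and 3D analyses are not spelled out in the abstract; a common reading, consistent with the reported trade-off, is that a series rule requires both classifiers to be positive (favoring specificity) while a parallel rule flags a study if either is positive (favoring sensitivity). A sketch under that assumption:

```python
import numpy as np

# Per-study binary calls from the 2D and 3D radiomic classifiers
# (synthetic placeholders; 1 = pancreatic cancer suspected).
pred_2d = np.array([1, 0, 1, 1, 0, 1, 0, 0])
pred_3d = np.array([1, 1, 0, 1, 0, 1, 0, 1])

# "In series": flag a study only when both analyses agree -> favors specificity.
series_call = np.logical_and(pred_2d, pred_3d).astype(int)

# "In parallel": flag a study when either analysis is positive -> favors sensitivity.
parallel_call = np.logical_or(pred_2d, pred_3d).astype(int)

print("series:  ", series_call)
print("parallel:", parallel_call)
```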


Subjects
Adenocarcinoma, Pancreatic Neoplasms, Humans, Pancreatic Neoplasms/diagnostic imaging, Retrospective Studies, Adenocarcinoma/diagnostic imaging, Taiwan/epidemiology, Sensitivity and Specificity
4.
J Med Syst; 48(1): 1, 2023 Dec 4.
Article in English | MEDLINE | ID: mdl-38048012

ABSTRACT

PURPOSE: To develop two deep learning-based systems for diagnosing and localizing pneumothorax on portable supine chest X-rays (SCXRs). METHODS: For this retrospective study, images meeting the following inclusion criteria were included: (1) patient age ≥ 20 years; (2) portable SCXR; (3) imaging obtained in the emergency department or intensive care unit. Included images were temporally split into training (1571 images, between January 2015 and December 2019) and testing (1071 images, between January 2020 and December 2020) datasets. All images were annotated using pixel-level labels. Object detection and image segmentation were adopted to develop separate systems. For the detection-based system, EfficientNet-B2, DenseNet-121, and Inception-v3 were the architectures for the classification model; Deformable DETR, TOOD, and VFNet were the architectures for the localization model. Both classification and localization models of the segmentation-based system shared the UNet architecture. RESULTS: In diagnosing pneumothorax, performance was excellent for both detection-based (area under the receiver operating characteristic curve [AUC]: 0.940, 95% confidence interval [CI]: 0.907-0.967) and segmentation-based (AUC: 0.979, 95% CI: 0.963-0.991) systems. For images with both predicted and ground-truth pneumothorax, lesion localization was highly accurate (detection-based Dice coefficient: 0.758, 95% CI: 0.707-0.806; segmentation-based Dice coefficient: 0.681, 95% CI: 0.642-0.721). The performance of the two deep learning-based systems declined as pneumothorax size diminished. Nonetheless, both systems performed similarly to or better than human readers in diagnosis or localization across all sizes of pneumothorax. CONCLUSIONS: Both deep learning-based systems excelled when tested in a temporally different dataset with differing patient or image characteristics, showing favourable potential for external generalizability.
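For the classification arm, one of the named backbones (DenseNet-121) can be adapted to a single-logit pneumothorax output. The sketch below uses torchvision with a randomly initialized network and a dummy SCXR tensor, so the weights, input size, and preprocessing are placeholders rather than the study's configuration:

```python
import torch
import torch.nn as nn
import torchvision

# Adapt an ImageNet-style DenseNet-121 to single-logit pneumothorax classification.
model = torchvision.models.densenet121()          # randomly initialized backbone
model.classifier = nn.Linear(model.classifier.in_features, 1)

# A grayscale SCXR replicated to 3 channels to match the expected input layout.
dummy_cxr = torch.randn(1, 1, 512, 512).repeat(1, 3, 1, 1)
with torch.no_grad():
    prob = torch.sigmoid(model(dummy_cxr))        # probability of pneumothorax
print(prob.shape)  # torch.Size([1, 1])
```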


Subjects
Deep Learning, Emergency Medicine, Pneumothorax, Humans, Young Adult, Adult, Retrospective Studies, Pneumothorax/diagnostic imaging, X-Rays
5.
AJR Am J Roentgenol; 215(6): 1403-1410, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33052737

ABSTRACT

OBJECTIVE. Deep learning applications in radiology often suffer from overfitting, limiting generalization to external centers. The objective of this study was to develop a high-quality prostate segmentation model capable of maintaining a high degree of performance across multiple independent datasets using transfer learning and data augmentation. MATERIALS AND METHODS. A retrospective cohort of 648 patients who underwent prostate MRI between February 2015 and November 2018 at a single center was used for training and validation. A deep learning approach combining 2D and 3D architectures was used for training, which incorporated transfer learning. A data augmentation strategy tailored to the deformations, intensity changes, and image-quality alterations seen in radiology images was used. Five independent datasets, four of which were from outside centers, were used for testing, which was conducted with and without fine-tuning of the original model. The Dice similarity coefficient was used to evaluate model performance. RESULTS. When prostate segmentation models utilizing transfer learning were applied to the internal validation cohort, the mean Dice similarity coefficient was 93.1 for whole prostate and 89.0 for transition zone segmentations. When the models were applied to multiple test set cohorts, the improvement in performance achieved using data augmentation alone was 2.2% for the whole prostate models and 3.0% for the transition zone segmentation models. However, the best test-set results were obtained with models fine-tuned on test center data, with mean Dice similarity coefficients of 91.5 for whole prostate segmentation and 89.7 for transition zone segmentation. CONCLUSION. Transfer learning allowed for the development of a high-performing prostate segmentation model, and data augmentation and fine-tuning approaches improved the performance of the prostate segmentation model when applied to datasets from external centers.
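The paper does not name its augmentation toolkit; the sketch below approximates the described deformation, intensity, and image-quality perturbations with MONAI dictionary transforms, an assumed choice, with all probabilities and ranges chosen purely for illustration:

```python
import numpy as np
from monai.transforms import (Compose, RandAffined, RandGaussianNoised,
                              RandBiasFieldd, RandAdjustContrastd)

# Augmentations loosely mirroring the kinds of deformation, intensity, and
# image-quality perturbations described; parameters are illustrative.
augment = Compose([
    RandAffined(keys=["image", "label"], prob=0.5,
                rotate_range=(0.1, 0.1, 0.1), scale_range=(0.1, 0.1, 0.1),
                mode=("bilinear", "nearest")),
    RandBiasFieldd(keys=["image"], prob=0.3),          # MRI bias-field inhomogeneity
    RandGaussianNoised(keys=["image"], prob=0.3, std=0.05),
    RandAdjustContrastd(keys=["image"], prob=0.3, gamma=(0.7, 1.5)),
])

sample = {
    "image": np.random.rand(1, 64, 64, 32).astype(np.float32),   # channel-first T2W stand-in
    "label": (np.random.rand(1, 64, 64, 32) > 0.5).astype(np.float32),
}
augmented = augment(sample)
print(augmented["image"].shape, augmented["label"].shape)
```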


Subjects
Magnetic Resonance Imaging, Automated Pattern Recognition, Prostatic Neoplasms/diagnostic imaging, Datasets as Topic, Deep Learning, Humans, Male, Middle Aged, Retrospective Studies
6.
Eur Radiol; 29(3): 1074-1082, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30116959

ABSTRACT

OBJECTIVE: To develop and evaluate a radiomics nomogram for differentiating the malignant risk of gastrointestinal stromal tumours (GISTs). METHODS: A total of 222 patients (primary cohort: n = 130, our centre; external validation cohort: n = 92, two other centres) with pathologically diagnosed GISTs were enrolled. A Relief algorithm was used to select the feature subset with the best distinguishing characteristics and to establish a radiomics model with a support vector machine (SVM) classifier for malignant risk differentiation. Determinant clinical characteristics and subjective CT features were assessed to separately construct a corresponding model. The models showing statistical significance in a multivariable logistic regression analysis were used to develop a nomogram. The diagnostic performance of these models was evaluated using ROC curves. Further calibration of the nomogram was evaluated by calibration curves. RESULTS: The generated radiomics model had an AUC value of 0.867 (95% CI 0.803-0.932) in the primary cohort and 0.847 (95% CI 0.765-0.930) in the external cohort. In the entire cohort, the AUCs for the radiomics model, subjective CT findings model, clinical index model and radiomics nomogram were 0.858 (95% CI 0.807-0.908), 0.774 (95% CI 0.713-0.835), 0.759 (95% CI 0.697-0.821) and 0.867 (95% CI 0.818-0.915), respectively. The nomogram showed good calibration. CONCLUSIONS: This radiomics nomogram predicted the malignant potential of GISTs with excellent accuracy and may be used as an effective tool to guide preoperative clinical decision-making. KEY POINTS: • CT-based radiomics model can differentiate low- and high-malignant-potential GISTs with satisfactory accuracy compared with subjective CT findings and clinical indexes. • Radiomics nomogram integrated with the radiomics signature, subjective CT findings and clinical indexes can achieve individualised risk prediction with improved diagnostic performance. • This study might provide significant and valuable background information for further studies such as response evaluation of neoadjuvant imatinib and recurrence risk prediction.
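A rough outline of the radiomics-model pipeline (feature subset selection followed by an SVM classifier) using scikit-learn; note that the Relief algorithm used in the study is replaced here by univariate ANOVA selection purely for illustration, and the feature matrix and labels are synthetic:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for a radiomic feature matrix and malignant-risk labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(130, 200))          # 130 tumours x 200 radiomic features
y = rng.integers(0, 2, size=130)         # 0 = low risk, 1 = high risk

# Feature selection followed by an SVM; Relief is swapped for ANOVA selection here.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),
    SVC(kernel="rbf", probability=True),
)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.3f}")
```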


Subjects
Algorithms, Gastrointestinal Stromal Tumors/diagnosis, Three-Dimensional Imaging/methods, Neoplasm Grading/methods, Nomograms, X-Ray Computed Tomography/methods, Differential Diagnosis, Female, Gastrointestinal Stromal Tumors/classification, Gastrointestinal Stromal Tumors/surgery, Humans, Male, Middle Aged, Preoperative Period, ROC Curve, Support Vector Machine
8.
Radiology; 273(2): 417-424, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24991991

ABSTRACT

PURPOSE: To evaluate the accuracy of a method of automatic coregistration of the endoluminal surfaces at computed tomographic (CT) colonography performed on separate occasions to facilitate identification of polyps in patients undergoing polyp surveillance. MATERIALS AND METHODS: Institutional review board and HIPAA approval were obtained. A registration algorithm that was designed to coregister the coordinates of endoluminal colonic surfaces on images from prone and supine CT colonographic acquisitions was used to match polyps in sequential studies in patients undergoing polyp surveillance. Initial and follow-up CT colonographic examinations in 26 patients (35 polyps) were selected and the algorithm was tested by means of two methods, the longitudinal method (polyp coordinates from the initial prone and supine acquisitions were used to identify the expected polyp location automatically at follow-up CT colonography) and the consistency method (polyp coordinates from the initial supine acquisition were used to identify polyp location on images from the initial prone acquisition, then on those for follow-up prone and follow-up supine acquisitions). Two observers measured the Euclidean distance between true and expected polyp locations, and mean per-patient registration accuracy was calculated. Segments with and without collapse were compared by using the Kruskal-Wallis test, and the relationship between registration error and temporal separation was investigated by using the Pearson correlation. RESULTS: Coregistration was achieved for all 35 polyps by using both longitudinal and consistency methods. Mean ± standard deviation Euclidean registration error for the longitudinal method was 17.4 mm ± 12.1 and for the consistency method, 26.9 mm ± 20.8. There was no significant difference between these results and the registration error when prone and supine acquisitions in the same study were compared (16.9 mm ± 17.6; P = .451). CONCLUSION: Automatic endoluminal coregistration by using an algorithm at initial CT colonography allowed prediction of endoluminal polyp location at subsequent CT colonography, thereby facilitating detection of known polyps in patients undergoing CT colonographic surveillance.
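Registration error here is the Euclidean distance between expected and true polyp coordinates, averaged per patient; a short NumPy sketch with made-up coordinates:

```python
import numpy as np

def registration_error(expected_xyz: np.ndarray, true_xyz: np.ndarray) -> np.ndarray:
    """Euclidean distance (same units as the coordinates) for each polyp."""
    return np.linalg.norm(expected_xyz - true_xyz, axis=1)

# Synthetic per-polyp coordinates in millimetres, grouped by patient ID.
expected = np.array([[100.0, 80.0, 40.0], [102.0, 85.0, 42.0], [60.0, 70.0, 30.0]])
true = np.array([[110.0, 86.0, 45.0], [104.0, 88.0, 40.0], [75.0, 78.0, 33.0]])
patient_id = np.array([1, 1, 2])

errors = registration_error(expected, true)
per_patient = [errors[patient_id == p].mean() for p in np.unique(patient_id)]
print(f"per-polyp errors (mm): {np.round(errors, 1)}")
print(f"mean per-patient registration error (mm): {np.round(per_patient, 1)}")
```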


Subjects
Colonic Polyps/diagnostic imaging, Computed Tomographic Colonography/methods, Aged, Aged 80 and over, Algorithms, Contrast Media, Diatrizoate, Follow-Up Studies, Humans, Middle Aged, Population Surveillance, Computer-Assisted Radiographic Image Interpretation
9.
Int J Comput Assist Radiol Surg; 19(4): 655-664, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38498132

ABSTRACT

PURPOSE: Pancreatic duct dilation is associated with an increased risk of pancreatic cancer, the most lethal malignancy with the lowest 5-year relative survival rate. Automatic segmentation of the dilated pancreatic duct from contrast-enhanced CT scans would facilitate early diagnosis. However, pancreatic duct segmentation poses challenges due to its small anatomical structure and poor contrast in abdominal CT. In this work, we investigate an anatomical attention strategy to address this issue. METHODS: Our proposed anatomical attention strategy consists of two steps: pancreas localization and pancreatic duct segmentation. The coarse pancreatic mask segmentation is used to guide the fully convolutional networks (FCNs) to concentrate on the pancreatic anatomy and disregard unnecessary features. We further apply a multi-scale aggregation scheme to leverage the information from different scales. Moreover, we integrate the tubular structure enhancement as an additional input channel of the FCN. RESULTS: We performed extensive experiments on 30 cases of contrast-enhanced abdominal CT volumes. To evaluate the pancreatic duct segmentation performance, we employed four measurements, including the Dice similarity coefficient (DSC), sensitivity, normalized surface distance, and 95th percentile Hausdorff distance. The average DSC reached 55.7%, surpassing other pancreatic duct segmentation methods that use single-phase CT scans only. CONCLUSIONS: We proposed an anatomical attention-based strategy for dilated pancreatic duct segmentation. Our proposed strategy significantly outperforms earlier approaches. The attention mechanism helps to focus on the pancreas region, while the enhancement of the tubular structure enables FCNs to capture the vessel-like structure. The proposed technique might be applied to other tube-like structure segmentation tasks within targeted anatomies.
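The DSC and 95th percentile Hausdorff distance can be computed with MONAI's metric classes (an assumed toolkit, not one named by the paper); random binary volumes stand in for the predicted and reference duct masks:

```python
import torch
from monai.metrics import DiceMetric, HausdorffDistanceMetric

# Binary (B, C, H, W, D) masks as stand-ins for predicted and reference duct segmentations.
pred = (torch.rand(1, 1, 64, 64, 32) > 0.5).float()
gt = (torch.rand(1, 1, 64, 64, 32) > 0.5).float()

dice_metric = DiceMetric(include_background=True, reduction="mean")
hd95_metric = HausdorffDistanceMetric(include_background=True, percentile=95)

dice_metric(y_pred=pred, y=gt)
hd95_metric(y_pred=pred, y=gt)
print(f"DSC: {dice_metric.aggregate().item():.3f}")
print(f"HD95 (voxels): {hd95_metric.aggregate().item():.1f}")
```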


Subjects
Abdomen, Computer-Assisted Image Processing, Humans, Computer-Assisted Image Processing/methods, Pancreas, X-Ray Computed Tomography, Pancreatic Ducts/diagnostic imaging
10.
medRxiv; 2024 Jan 3.
Article in English | MEDLINE | ID: mdl-37961086

ABSTRACT

Background: Diffuse midline gliomas (DMG) are aggressive pediatric brain tumors that are diagnosed and monitored through MRI. We developed an automatic pipeline to segment subregions of DMG and select radiomic features that predict patient overall survival (OS). Methods: We acquired diagnostic and post-radiation therapy (RT) multisequence MRI (T1, T1ce, T2, T2 FLAIR) and manual segmentations from two centers of 53 (internal cohort) and 16 (external cohort) DMG patients. We pretrained a deep learning model on a public adult brain tumor dataset, and finetuned it to automatically segment tumor core (TC) and whole tumor (WT) volumes. PyRadiomics and sequential feature selection were used for feature extraction and selection based on the segmented volumes. Two machine learning models were trained on our internal cohort to predict patient 1-year survival from diagnosis. One model used only diagnostic tumor features and the other used both diagnostic and post-RT features. Results: For segmentation, Dice score (mean [median]±SD) was 0.91 (0.94)±0.12 and 0.74 (0.83)±0.32 for TC, and 0.88 (0.91)±0.07 and 0.86 (0.89)±0.06 for WT for internal and external cohorts, respectively. For OS prediction, accuracy was 77% and 81% at time of diagnosis, and 85% and 78% post-RT for internal and external cohorts, respectively. Homogeneous WT intensity in baseline T2 FLAIR and larger post-RT TC/WT volume ratio indicate shorter OS. Conclusions: Machine learning analysis of MRI radiomics has potential to accurately and non-invasively predict which pediatric patients with DMG will survive less than one year from the time of diagnosis to provide patient stratification and guide therapy.
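A condensed sketch of the radiomics step (PyRadiomics feature extraction followed by scikit-learn sequential forward selection); the image, mask, feature matrix, and survival labels are synthetic placeholders, and the logistic-regression wrapper is illustrative rather than the study's model:

```python
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Synthetic volume and tumor mask, purely to exercise the extraction API.
image = sitk.GetImageFromArray(np.random.rand(32, 64, 64).astype(np.float32))
mask = sitk.GetImageFromArray((np.random.rand(32, 64, 64) > 0.7).astype(np.uint8))

extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute(image, mask)          # dict of radiomic features + metadata
numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(f"extracted {len(numeric)} radiomic features")

# Sequential forward selection of features most predictive of 1-year survival;
# X and y below are random placeholders, not the study cohort.
X = np.random.rand(53, len(numeric))
y = np.random.randint(0, 2, size=53)
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=5, direction="forward")
selector.fit(X, y)
print(f"selected feature indices: {np.flatnonzero(selector.get_support())}")
```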

11.
Med Image Anal; 95: 103207, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776843

ABSTRACT

The lack of annotated datasets is a major bottleneck for training new task-specific supervised machine learning models, considering that manual annotation is extremely expensive and time-consuming. To address this problem, we present MONAI Label, a free and open-source framework that facilitates the development of applications based on artificial intelligence (AI) models that aim at reducing the time required to annotate radiology datasets. Through MONAI Label, researchers can develop AI annotation applications focusing on their domain of expertise. It allows researchers to readily deploy their apps as services, which can be made available to clinicians via their preferred user interface. Currently, MONAI Label readily supports locally installed (3D Slicer) and web-based (OHIF) frontends and offers two active learning strategies to facilitate and speed up the training of segmentation algorithms. MONAI Label allows researchers to make incremental improvements to their AI-based annotation applications by making them available to other researchers and clinicians alike. Additionally, MONAI Label provides sample AI-based interactive and non-interactive labeling applications that can be used directly off the shelf as plug-and-play solutions for any given dataset. Significantly reduced annotation times using the interactive model were observed on two public datasets.


Subjects
Artificial Intelligence, Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods, Algorithms, Software
12.
J Imaging Inform Med; 2024 Jul 9.
Article in English | MEDLINE | ID: mdl-38980623

ABSTRACT

Malposition of a nasogastric tube (NGT) can lead to severe complications. We aimed to develop a computer-aided detection (CAD) system to localize NGTs and detect NGT malposition on portable chest X-rays (CXRs). A total of 7378 portable CXRs were retrospectively retrieved from two hospitals between 2015 and 2020. All CXRs were annotated with pixel-level labels for NGT localization and image-level labels for NGT presence and malposition. In the CAD system, DeepLabv3+ with backbone ResNeSt50 and DenseNet121 served as the model architectures for the segmentation and classification models, respectively. The CAD system was tested on images from a chronologically different dataset (National Taiwan University Hospital (NTUH)-20), a geographically different dataset (NTUH-Yunlin Branch (NTUH-YB)), and the public CLiP dataset. For the segmentation model, the Dice coefficients indicated accurate delineation of the NGT course (NTUH-20: 0.665, 95% confidence interval (CI) 0.630-0.696; NTUH-YB: 0.646, 95% CI 0.614-0.678). The distance between the predicted and ground-truth NGT tips suggested accurate tip localization (NTUH-20: 1.64 cm, 95% CI 0.99-2.41; NTUH-YB: 2.83 cm, 95% CI 1.94-3.76). For the classification model, NGT presence was detected with high accuracy (area under the receiver operating characteristic curve (AUC): NTUH-20: 0.998, 95% CI 0.995-1.000; NTUH-YB: 0.998, 95% CI 0.995-1.000; CLiP dataset: 0.991, 95% CI 0.990-0.992). The CAD system also detected NGT malposition with high accuracy (AUC: NTUH-20: 0.964, 95% CI 0.917-1.000; NTUH-YB: 0.991, 95% CI 0.970-1.000) and detected abnormal nasoenteric tube positions with favorable performance (AUC: 0.839, 95% CI 0.807-0.869). The CAD system accurately localized NGTs and detected NGT malposition, demonstrating excellent potential for external generalizability.
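Tip-localization error in centimetres follows directly from the predicted and annotated tip coordinates and the detector pixel spacing; a minimal sketch with invented values:

```python
import numpy as np

def tip_distance_cm(pred_tip_px, gt_tip_px, pixel_spacing_mm):
    """Euclidean distance between predicted and ground-truth tube tips, in centimetres."""
    delta_mm = (np.asarray(pred_tip_px) - np.asarray(gt_tip_px)) * np.asarray(pixel_spacing_mm)
    return float(np.linalg.norm(delta_mm)) / 10.0

# Illustrative values: (row, col) tip positions on a portable CXR with 0.2 mm pixels.
print(f"{tip_distance_cm((1480, 1012), (1562, 1030), (0.2, 0.2)):.2f} cm")
```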

13.
Neurooncol Adv; 6(1): vdae108, 2024.
Article in English | MEDLINE | ID: mdl-39027132

ABSTRACT

Background: Diffuse midline gliomas (DMG) are aggressive pediatric brain tumors that are diagnosed and monitored through MRI. We developed an automatic pipeline to segment subregions of DMG and select radiomic features that predict patient overall survival (OS). Methods: We acquired diagnostic and post-radiation therapy (RT) multisequence MRI (T1, T1ce, T2, and T2 FLAIR) and manual segmentations from 2 centers: 53 from 1 center formed the internal cohort and 16 from the other center formed the external cohort. We pretrained a deep learning model on a public adult brain tumor data set (BraTS 2021), and finetuned it to automatically segment tumor core (TC) and whole tumor (WT) volumes. PyRadiomics and sequential feature selection were used for feature extraction and selection based on the segmented volumes. Two machine learning models were trained on our internal cohort to predict patient 12-month survival from diagnosis. One model used only data obtained at diagnosis prior to any therapy (baseline study) and the other used data at both diagnosis and post-RT (post-RT study). Results: Overall survival prediction accuracy was 77% and 81% for the baseline study, and 85% and 78% for the post-RT study, for internal and external cohorts, respectively. Homogeneous WT intensity in baseline T2 FLAIR and larger post-RT TC/WT volume ratio indicate shorter OS. Conclusions: Machine learning analysis of MRI radiomics has potential to accurately and noninvasively predict which pediatric patients with DMG will survive less than 12 months from the time of diagnosis to provide patient stratification and guide therapy.

14.
J Imaging Inform Med; 37(2): 589-600, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38343228

ABSTRACT

Prompt and correct detection of pulmonary tuberculosis (PTB) is critical in preventing its spread. We aimed to develop a deep learning-based algorithm for detecting PTB on chest X-rays (CXRs) in the emergency department. This retrospective study included 3498 CXRs acquired from the National Taiwan University Hospital (NTUH). The images were chronologically split into a training dataset, NTUH-1519 (images acquired during the years 2015 to 2019; n = 2144), and a testing dataset, NTUH-20 (images acquired during the year 2020; n = 1354). Public databases, including the NIH ChestX-ray14 dataset (model training; 112,120 images), Montgomery County (model testing; 138 images), and Shenzhen (model testing; 662 images), were also used in model development. EfficientNetV2 was the base architecture of the algorithm. Images from ChestX-ray14 were employed for pseudo-labelling to perform semi-supervised learning. The algorithm demonstrated excellent performance in detecting PTB (area under the receiver operating characteristic curve [AUC] 0.878, 95% confidence interval [CI] 0.854-0.900) in NTUH-20. The algorithm showed significantly better performance in posterior-anterior (PA) CXR (AUC 0.940, 95% CI 0.912-0.965, p-value < 0.001) compared with anterior-posterior (AUC 0.782, 95% CI 0.644-0.897) or portable anterior-posterior (AUC 0.869, 95% CI 0.814-0.918) CXR. The algorithm accurately detected cases of bacteriologically confirmed PTB (AUC 0.854, 95% CI 0.823-0.883). Finally, the algorithm performed favourably in Montgomery County (AUC 0.838, 95% CI 0.765-0.904) and Shenzhen (AUC 0.806, 95% CI 0.771-0.839). A deep learning-based algorithm could detect PTB on CXR with excellent performance, which may help shorten the interval between detection and airborne isolation for patients with PTB.
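Pseudo-labelling for semi-supervised learning generally means letting a partially trained model label the unlabelled pool and keeping only confident predictions. The sketch below illustrates that loop with a torchvision EfficientNetV2-S (a recent torchvision release is assumed) and random tensors standing in for ChestX-ray14 images, with the confidence threshold chosen arbitrarily:

```python
import torch
import torch.nn as nn
import torchvision

# Minimal pseudo-labelling loop: a classifier assigns labels to unlabelled images;
# only confident predictions are kept and mixed back into supervised training.
model = torchvision.models.efficientnet_v2_s()
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 1)

unlabelled = torch.randn(8, 3, 224, 224)          # stand-in for unlabelled CXRs
with torch.no_grad():
    probs = torch.sigmoid(model(unlabelled)).squeeze(1)

confident = (probs > 0.95) | (probs < 0.05)       # keep only high-confidence cases
pseudo_images = unlabelled[confident]
pseudo_labels = (probs[confident] > 0.5).float()
print(f"kept {confident.sum().item()} pseudo-labelled images out of {len(unlabelled)}")
# pseudo_images/pseudo_labels would then be appended to the labelled training set.
```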

15.
Abdom Radiol (NY); 49(5): 1545-1556, 2024 May.
Article in English | MEDLINE | ID: mdl-38512516

ABSTRACT

OBJECTIVE: Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS: A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominal perineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice similarity coefficient (DSC) was calculated against the expert segmentation. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed effects (LMER) model. RESULTS: 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced DL models (p < 0.001) while signal factors influenced all (p < 0.001). CONCLUSION: Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.
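A paired, non-parametric comparison of per-scan DSC between two models is what the Wilcoxon signed-rank test provides; a short SciPy illustration on synthetic scores:

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired per-scan Dice scores for two segmentation models (synthetic placeholders).
rng = np.random.default_rng(7)
dsc_model_1 = np.clip(rng.normal(0.92, 0.04, size=100), 0, 1)
dsc_model_2 = np.clip(dsc_model_1 - rng.normal(0.03, 0.03, size=100), 0, 1)

stat, p_value = wilcoxon(dsc_model_1, dsc_model_2)  # paired, non-parametric comparison
print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.4f}")
```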


Subjects
Algorithms, Artificial Intelligence, Magnetic Resonance Imaging, Prostatic Neoplasms, Humans, Male, Retrospective Studies, Magnetic Resonance Imaging/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery, Prostatic Neoplasms/pathology, Computer-Assisted Image Interpretation/methods, Middle Aged, Aged, Prostate/diagnostic imaging, Deep Learning
16.
Med Image Anal; 95: 103206, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776844

ABSTRACT

The correct interpretation of breast density is important in the assessment of breast cancer risk. AI has been shown to be capable of accurately predicting breast density; however, due to the differences in imaging characteristics across mammography systems, models built using data from one system do not generalize well to other systems. Though federated learning (FL) has emerged as a way to improve the generalizability of AI without the need to share data, the best way to preserve features from all training data during FL is an active area of research. To explore FL methodology, the breast density classification FL challenge was hosted in partnership with the American College of Radiology, Harvard Medical School's Mass General Brigham, University of Colorado, NVIDIA, and the National Institutes of Health National Cancer Institute. Challenge participants were able to submit Docker containers capable of implementing FL on three simulated medical facilities, each containing a unique large mammography dataset. The breast density FL challenge ran from June 15 to September 5, 2022, attracting seven finalists from around the world. The winning FL submission reached a linear kappa score of 0.653 on the challenge test data and 0.413 on an external testing dataset, scoring comparably to a model trained on the same data in a central location.
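The linear kappa score used to rank submissions can be computed with scikit-learn's Cohen's kappa with linear weights; the category vectors below are placeholders, not challenge data:

```python
from sklearn.metrics import cohen_kappa_score

# BI-RADS-style density categories (A-D encoded 0-3) from a reference reader and a
# federated model; the lists are synthetic placeholders.
reference = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
predicted = [0, 1, 1, 3, 2, 2, 0, 3, 1, 1]

kappa = cohen_kappa_score(reference, predicted, weights="linear")
print(f"linear kappa: {kappa:.3f}")
```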


Subjects
Algorithms, Breast Density, Breast Neoplasms, Mammography, Humans, Female, Mammography/methods, Breast Neoplasms/diagnostic imaging, Machine Learning
17.
Radiology; 268(3): 752-760, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23687175

ABSTRACT

PURPOSE: To perform external validation of a computer-assisted registration algorithm for prone and supine computed tomographic (CT) colonography and to compare the results with those of an existing centerline method. MATERIALS AND METHODS: All contributing centers had institutional review board approval; participants provided informed consent. A validation sample of CT colonographic examinations of 51 patients with 68 polyps (6-55 mm) was selected from a publicly available, HIPAA compliant, anonymized archive. No patients were excluded because of poor preparation or inadequate distension. Corresponding prone and supine polyp coordinates were recorded, and endoluminal surfaces were registered automatically by using a computer algorithm. Two observers independently scored three-dimensional endoluminal polyp registration success. Results were compared with those obtained by using the normalized distance along the colonic centerline (NDACC) method. Pairwise Wilcoxon signed rank tests were used to compare gross registration error and McNemar tests were used to compare polyp conspicuity. RESULTS: Registration was possible in all 51 patients, and 136 paired polyp coordinates were generated (68 polyps) to test the algorithm. Overall mean three-dimensional polyp registration error (mean ± standard deviation, 19.9 mm ± 20.4) was significantly less than that for the NDACC method (mean, 27.4 mm ± 15.1; P = .001). Accuracy was unaffected by colonic segment (P = .76) or luminal collapse (P = .066). During endoluminal review by two observers (272 matching tasks, 68 polyps, prone to supine and supine to prone coordinates), 223 (82%) polyp matches were visible (120° field of view) compared with just 129 (47%) when the NDACC method was used (P < .001). By using multiplanar visualization, 48 (70%) polyps were visible after scrolling ± 15 mm in any multiplanar axis compared with 16 (24%) for NDACC (P < .001). CONCLUSION: Computer-assisted registration is more accurate than the NDACC method for mapping the endoluminal surface and matching the location of polyps in corresponding prone and supine CT colonographic acquisitions.


Subjects
Algorithms, Colonic Polyps/diagnostic imaging, Colonic Polyps/epidemiology, Computed Tomographic Colonography/statistics & numerical data, Patient Positioning/statistics & numerical data, Radiographic Image Enhancement/methods, Subtraction Technique/statistics & numerical data, Anatomic Landmarks/diagnostic imaging, Humans, Prevalence, Prone Position, Supine Position, United States/epidemiology
18.
IEEE Trans Med Imaging; 42(7): 2044-2056, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37021996

ABSTRACT

Federated learning (FL) allows the collaborative training of AI models without needing to share raw data. This capability makes it especially interesting for healthcare applications where patient and data privacy is of utmost concern. However, recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data. In this work, we show that these attacks presented in the literature are impractical in FL use-cases where the clients' training involves updating the Batch Normalization (BN) statistics and provide a new baseline attack that works for such scenarios. Furthermore, we present new ways to measure and visualize potential data leakage in FL. Our work is a step towards establishing reproducible methods of measuring data leakage in FL and could help determine the optimal tradeoffs between privacy-preserving techniques, such as differential privacy, and model accuracy based on quantifiable metrics.


Subjects
Neural Networks (Computer), Supervised Machine Learning, Humans, Privacy, Medical Informatics
19.
Article in English | MEDLINE | ID: mdl-38083430

ABSTRACT

Children with optic pathway gliomas (OPGs), a low-grade brain tumor associated with neurofibromatosis type 1 (NF1-OPG), are at risk for permanent vision loss. While OPG size has been associated with vision loss, it is unclear how changes in size, shape, and imaging features of OPGs are associated with the likelihood of vision loss. This paper presents a fully automatic framework for accurate prediction of visual acuity loss using multi-sequence magnetic resonance images (MRIs). Our proposed framework includes a transformer-based segmentation network using transfer learning, statistical analysis of radiomic features, and a machine learning method for predicting vision loss. Our segmentation network was evaluated on multi-sequence MRIs acquired from 75 pediatric subjects with NF1-OPG and obtained an average Dice similarity coefficient of 0.791. The ability to predict vision loss was evaluated on a subset of 25 subjects with ground truth using cross-validation and achieved an average accuracy of 0.8. Multiple MRI features appear to be good indicators of vision loss, potentially permitting early treatment decisions. Clinical relevance: Accurately determining which children with NF1-OPGs are at risk and hence require preventive treatment before vision loss remains challenging; towards this goal, we present a fully automatic deep learning-based framework for vision outcome prediction, potentially permitting early treatment decisions.
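The reported cross-validated accuracy corresponds to the standard scikit-learn pattern below; the feature matrix, labels, and the choice of a random-forest classifier are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Radiomic feature matrix for the 25 subjects with vision ground truth; values here
# are random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(25, 30))
y = rng.integers(0, 2, size=25)      # 1 = visual acuity loss

acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                      X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")
```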


Subjects
Neurofibromatosis 1, Optic Nerve Glioma, Humans, Child, Optic Nerve Glioma/complications, Optic Nerve Glioma/diagnostic imaging, Optic Nerve Glioma/pathology, Neurofibromatosis 1/complications, Neurofibromatosis 1/diagnostic imaging, Neurofibromatosis 1/pathology, Magnetic Resonance Imaging/methods, Vision Disorders, Visual Acuity
20.
Health Informatics J; 29(4): 14604582231207744, 2023.
Article in English | MEDLINE | ID: mdl-37864543

ABSTRACT

Cross-institution collaborations are constrained by data-sharing challenges. These challenges hamper innovation, particularly in artificial intelligence, where models require diverse data to ensure strong performance. Federated learning (FL) solves data-sharing challenges. In typical collaborations, data is sent to a central repository where models are trained. With FL, models are sent to participating sites, trained locally, and the model weights are aggregated to create a master model with improved performance. At the 2021 Radiological Society of North America (RSNA) conference, a panel was conducted titled "Accelerating AI: How Federated Learning Can Protect Privacy, Facilitate Collaboration and Improve Outcomes." Two groups shared insights: researchers from the EXAM study (EMR CXR AI Model) and members of the National Cancer Institute's Early Detection Research Network's (EDRN) pancreatic cancer working group. EXAM brought together 20 institutions to create a model to predict oxygen requirements of patients seen in the emergency department with COVID-19 symptoms. The EDRN collaboration is focused on improving outcomes for pancreatic cancer patients through earlier detection. This paper describes major insights from the panel, including direct quotes. The panelists described the impetus for FL, the long-term potential vision of FL, challenges faced in FL, and the immediate path forward for FL.


Subjects
Artificial Intelligence, Pancreatic Neoplasms, Humans, Privacy, Learning