Results 1 - 20 of 34
1.
Crit Care ; 28(1): 118, 2024 04 09.
Article in English | MEDLINE | ID: mdl-38594772

ABSTRACT

BACKGROUND: This study aimed to develop an automated method to measure the gray-white matter ratio (GWR) from brain computed tomography (CT) scans of patients with out-of-hospital cardiac arrest (OHCA) and assess its significance in predicting early-stage neurological outcomes. METHODS: Patients with OHCA who underwent brain CT imaging within 12 h of return of spontaneous circulation were enrolled in this retrospective study. The primary outcome measure was a favorable neurological outcome, defined as cerebral performance category 1 or 2 at hospital discharge. We proposed an automated method comprising image registration, K-means segmentation, segmentation refinement, and GWR calculation to measure the GWR for each CT scan. K-means segmentation and subsequent refinement were employed to improve the segmentations within regions of interest (ROIs), thereby enhancing the accuracy of the GWR calculation. RESULTS: Overall, 443 patients were divided into derivation (n = 265, 60%) and validation (n = 178, 40%) sets based on age and sex. The ROI Hounsfield unit values derived from the automated method showed a strong correlation with those obtained from the manual method. Regarding outcome prediction, the automated method significantly outperformed the manual method in GWR calculation (AUC 0.79 vs. 0.70) across the entire dataset. The automated method also demonstrated superior sensitivity, specificity, and positive and negative predictive values using the cutoff value determined from the derivation set. Moreover, GWR was an independent predictor of outcomes in logistic regression analysis. Incorporating the GWR with other clinical and resuscitation variables significantly enhanced the performance of prediction models compared with models without the GWR. CONCLUSIONS: Automated measurement of the GWR from non-contrast brain CT images offers valuable insights for predicting neurological outcomes during the early post-cardiac arrest period.
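As a rough illustration of the GWR step described above: a minimal sketch that clusters ROI Hounsfield units into two classes with K-means and takes the ratio of the higher-attenuation (gray matter) to lower-attenuation (white matter) cluster means. This is not the authors' pipeline (registration and refinement are omitted), and `roi_hu` is a hypothetical input array.

```python
import numpy as np
from sklearn.cluster import KMeans

def gray_white_ratio(roi_hu: np.ndarray) -> float:
    """Estimate the gray-white matter ratio (GWR) from ROI Hounsfield units.

    Cluster voxels into two groups with K-means; the cluster with the higher
    mean attenuation is treated as gray matter, the lower as white matter,
    and GWR = mean(gray HU) / mean(white HU).
    """
    hu = roi_hu.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hu)
    means = [hu[labels == k].mean() for k in (0, 1)]
    gray_hu, white_hu = max(means), min(means)
    return gray_hu / white_hu

# Example with synthetic HU values (gray ~38 HU, white ~28 HU).
rng = np.random.default_rng(0)
roi = np.concatenate([rng.normal(38, 2, 500), rng.normal(28, 2, 500)])
print(f"GWR ≈ {gray_white_ratio(roi):.2f}")
```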


Subject(s)
Out-of-Hospital Cardiac Arrest , White Matter , Humans , Retrospective Studies , Gray Matter/diagnostic imaging , Out-of-Hospital Cardiac Arrest/diagnostic imaging , Tomography, X-Ray Computed/methods , Prognosis
2.
J Imaging Inform Med ; 37(2): 589-600, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38343228

ABSTRACT

Prompt and correct detection of pulmonary tuberculosis (PTB) is critical in preventing its spread. We aimed to develop a deep learning-based algorithm for detecting PTB on chest X-rays (CXRs) in the emergency department. This retrospective study included 3498 CXRs acquired from the National Taiwan University Hospital (NTUH). The images were chronologically split into a training dataset, NTUH-1519 (images acquired from 2015 to 2019; n = 2144), and a testing dataset, NTUH-20 (images acquired in 2020; n = 1354). Public databases, including the NIH ChestX-ray14 dataset (model training; 112,120 images), Montgomery County (model testing; 138 images), and Shenzhen (model testing; 662 images), were also used in model development. EfficientNetV2 was the basic architecture of the algorithm. Images from ChestX-ray14 were employed for pseudo-labelling to perform semi-supervised learning. The algorithm demonstrated excellent performance in detecting PTB (area under the receiver operating characteristic curve [AUC] 0.878, 95% confidence interval [CI] 0.854-0.900) in NTUH-20. The algorithm showed significantly better performance on posterior-anterior (PA) CXRs (AUC 0.940, 95% CI 0.912-0.965, p-value < 0.001) compared with anterior-posterior (AUC 0.782, 95% CI 0.644-0.897) or portable anterior-posterior (AUC 0.869, 95% CI 0.814-0.918) CXRs. The algorithm accurately detected cases of bacteriologically confirmed PTB (AUC 0.854, 95% CI 0.823-0.883). Finally, the algorithm tested favourably in Montgomery County (AUC 0.838, 95% CI 0.765-0.904) and Shenzhen (AUC 0.806, 95% CI 0.771-0.839). A deep learning-based algorithm could detect PTB on CXR with excellent performance, which may help shorten the interval between detection and airborne isolation for patients with PTB.
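The pseudo-labelling step can be illustrated with a generic semi-supervised loop: train on labelled data, assign pseudo-labels to confidently predicted unlabelled samples, and retrain on the union. The sketch below uses synthetic data and scikit-learn as stand-ins for the actual CXR images and EfficientNetV2 model; the 0.95 confidence threshold is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for the labelled (NTUH) and unlabelled (ChestX-ray14) pools.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:300], y[:300], X[300:]

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# Pseudo-labelling: keep only unlabelled samples predicted with high confidence,
# then retrain on the union of true labels and pseudo-labels.
proba = model.predict_proba(X_unlab).max(axis=1)
confident = proba >= 0.95
X_pseudo, y_pseudo = X_unlab[confident], model.predict(X_unlab[confident])

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_lab, X_pseudo]), np.concatenate([y_lab, y_pseudo])
)
print(f"{confident.sum()} pseudo-labelled samples added to the training set")
```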

3.
Crit Care Med ; 52(2): 237-247, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38095506

ABSTRACT

OBJECTIVES: We aimed to develop a computer-aided detection (CAD) system to localize and detect the malposition of endotracheal tubes (ETTs) on portable supine chest radiographs (CXRs). DESIGN: This was a retrospective diagnostic study. DeepLabv3+ with a ResNeSt50 backbone and DenseNet121 served as the model architectures for the segmentation and classification tasks, respectively. SETTING: Multicenter study. PATIENTS: For the training dataset, images meeting the following inclusion criteria were included: 1) patient age greater than or equal to 20 years; 2) portable supine CXR; 3) examination in emergency departments or ICUs; and 4) examination between 2015 and 2019 at National Taiwan University Hospital (NTUH) (NTUH-1519 dataset: 5,767 images). The derived CAD system was tested on images from chronologically (examination during 2020 at NTUH, NTUH-20 dataset: 955 images) or geographically (examination between 2015 and 2020 at NTUH Yunlin Branch [YB], NTUH-YB dataset: 656 images) different datasets. All CXRs were annotated with pixel-level labels of the ETT and with image-level labels of ETT presence and malposition. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: For the segmentation model, the Dice coefficients indicated that the ETT was delineated accurately (NTUH-20: 0.854; 95% CI, 0.824-0.881 and NTUH-YB: 0.839; 95% CI, 0.820-0.857). For the classification model, the presence of an ETT was detected with high accuracy (area under the receiver operating characteristic curve [AUC]: NTUH-20, 1.000; 95% CI, 0.999-1.000 and NTUH-YB, 0.994; 95% CI, 0.984-1.000). Furthermore, among images with an ETT, ETT malposition was detected with high accuracy (AUC: NTUH-20, 0.847; 95% CI, 0.671-0.980 and NTUH-YB, 0.734; 95% CI, 0.630-0.833), especially for endobronchial intubation (AUC: NTUH-20, 0.991; 95% CI, 0.969-1.000 and NTUH-YB, 0.966; 95% CI, 0.933-0.991). CONCLUSIONS: The derived CAD system could localize the ETT and detect ETT malposition with excellent performance, especially for endobronchial intubation, and with favorable potential for external generalizability.
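The two-stage design (segment the ETT, then classify presence and, only when present, malposition) can be sketched as a simple inference cascade. The callables below are hypothetical stand-ins for the trained DeepLabv3+ and DenseNet121 models, not the study's code.

```python
import numpy as np

def cascade_inference(image, segment_ett, classify_presence, classify_malposition):
    """Two-stage CAD cascade: delineate the ETT, then classify presence and,
    only when an ETT is present, its malposition. The three callables are
    hypothetical stand-ins for the trained models."""
    mask = segment_ett(image)                  # pixel-level ETT mask (segmentation role)
    if classify_presence(image) < 0.5:         # classification role: is an ETT present?
        return {"ett_present": False, "mask": mask, "malposition": None}
    return {
        "ett_present": True,
        "mask": mask,
        "malposition": classify_malposition(image) >= 0.5,
    }

# Toy stand-ins so the sketch runs end to end.
img = np.zeros((512, 512), dtype=np.float32)
result = cascade_inference(
    img,
    segment_ett=lambda x: np.zeros_like(x, dtype=bool),
    classify_presence=lambda x: 0.9,
    classify_malposition=lambda x: 0.2,
)
print(result["ett_present"], result["malposition"])
```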


Subject(s)
Deep Learning , Emergency Medicine , Humans , Retrospective Studies , Intubation, Intratracheal/adverse effects , Intubation, Intratracheal/methods , Hospitals, University
4.
J Med Syst ; 48(1): 1, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38048012

ABSTRACT

PURPOSE: To develop two deep learning-based systems for diagnosing and localizing pneumothorax on portable supine chest X-rays (SCXRs). METHODS: For this retrospective study, images meeting the following inclusion criteria were included: (1) patient age ≥ 20 years; (2) portable SCXR; (3) imaging obtained in the emergency department or intensive care unit. Included images were temporally split into training (1571 images, between January 2015 and December 2019) and testing (1071 images, between January 2020 and December 2020) datasets. All images were annotated using pixel-level labels. Object detection and image segmentation were adopted to develop separate systems. For the detection-based system, EfficientNet-B2, DenseNet-121, and Inception-v3 were the architectures for the classification model; Deformable DETR, TOOD, and VFNet were the architectures for the localization model. Both the classification and localization models of the segmentation-based system shared the UNet architecture. RESULTS: In diagnosing pneumothorax, performance was excellent for both the detection-based (area under the receiver operating characteristic curve [AUC]: 0.940, 95% confidence interval [CI]: 0.907-0.967) and segmentation-based (AUC: 0.979, 95% CI: 0.963-0.991) systems. For images with both predicted and ground-truth pneumothorax, lesion localization was highly accurate (detection-based Dice coefficient: 0.758, 95% CI: 0.707-0.806; segmentation-based Dice coefficient: 0.681, 95% CI: 0.642-0.721). The performance of the two deep learning-based systems declined as pneumothorax size diminished. Nonetheless, both systems were similar to or better than human readers in diagnosis and localization performance across all sizes of pneumothorax. CONCLUSIONS: Both deep learning-based systems excelled when tested on a temporally different dataset with differing patient and image characteristics, showing favourable potential for external generalizability.
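For reference, the Dice coefficient used above to score lesion localization is the overlap between predicted and ground-truth masks; a minimal implementation with illustrative toy masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between a predicted and a ground-truth binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Toy example: two partially overlapping square "pneumothorax" masks.
a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), dtype=bool); b[15:35, 15:35] = True
print(f"Dice = {dice_coefficient(a, b):.3f}")
```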


Subject(s)
Deep Learning , Emergency Medicine , Pneumothorax , Humans , Young Adult , Adult , Retrospective Studies , Pneumothorax/diagnostic imaging , X-Rays
5.
Artif Intell Med ; 144: 102644, 2023 10.
Article in English | MEDLINE | ID: mdl-37783539

ABSTRACT

The proliferation of wearable devices has enabled the daily collection of electrocardiogram (ECG) recordings to monitor heart rhythm and rate. For example, 24-hour Holter monitors, cardiac patches, and smartwatches are widely used for ECG acquisition. An automatic atrial fibrillation (AF) detector is required for timely ECG interpretation. Deep learning models can accurately identify AF if large amounts of annotated data are available for model training. However, it is impractical to request sufficient labels for the ECG recordings of an individual patient to train a personalized model. We propose a Siamese-network-based approach for transfer learning to address this issue. A pre-trained Siamese convolutional neural network is created by comparing two labeled ECG segments from the same patient. We sampled 30-second ECG segments with a 50% overlapping window from the ECG recordings of patients in the MIT-BIH Atrial Fibrillation Database. Subsequently, we independently detected the occurrence of AF in each patient in the Long-Term AF Database. By fine-tuning the model with 1, 3, 5, 7, 9, or 11 ECG segments (spanning 30 to 180 s), our method achieved macro-F1 scores of 96.84%, 96.91%, 96.97%, 97.02%, 97.05%, and 97.07%, respectively.
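The segment extraction described above (30-second windows with 50% overlap, so that 11 segments span 180 s) can be sketched as follows; the 250 Hz sampling rate and the synthetic recording are assumptions for illustration only.

```python
import numpy as np

def sliding_segments(ecg: np.ndarray, fs: int, seg_sec: int = 30, overlap: float = 0.5):
    """Cut an ECG recording into fixed-length segments with 50% overlap,
    as described in the abstract (11 segments of 30 s span 180 s)."""
    seg_len = int(seg_sec * fs)
    step = int(seg_len * (1.0 - overlap))
    return np.array([ecg[s:s + seg_len]
                     for s in range(0, len(ecg) - seg_len + 1, step)])

fs = 250                                   # assumed sampling rate (Hz)
recording = np.random.randn(fs * 180)      # 180 s of synthetic ECG
segments = sliding_segments(recording, fs)
print(segments.shape)                      # (11, 7500): 11 overlapping 30-s segments
```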


Subject(s)
Atrial Fibrillation , Humans , Atrial Fibrillation/diagnosis , Neural Networks, Computer , Electrocardiography/methods , Machine Learning , Algorithms
6.
BMC Cancer ; 23(1): 58, 2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36650440

ABSTRACT

BACKGROUND: CT is the major detection tool for pancreatic cancer (PC). However, approximately 40% of PCs < 2 cm are missed on CT, underscoring a pressing need for tools to supplement radiologist interpretation. METHODS: Contrast-enhanced CT studies of 546 patients with pancreatic adenocarcinoma diagnosed by histology/cytology between January 2005 and December 2019 and 733 CT studies of controls with a normal pancreas obtained during the same period at a tertiary referral center were retrospectively collected to develop an automatic end-to-end computer-aided detection (CAD) tool for PC using two-dimensional (2D) and three-dimensional (3D) radiomic analysis with machine learning. The CAD tool was tested in a nationwide dataset comprising 1,477 CT studies (671 PCs, 806 controls) obtained from institutions throughout Taiwan. RESULTS: The CAD tool achieved 0.918 (95% CI, 0.895-0.938) sensitivity and 0.822 (95% CI, 0.794-0.848) specificity in differentiating between studies with and without PC (area under the curve 0.947, 95% CI, 0.936-0.958), with 0.707 (95% CI, 0.602-0.797) sensitivity for tumors < 2 cm. The positive and negative likelihood ratios for PC were 5.17 (95% CI, 4.45-6.01) and 0.10 (95% CI, 0.08-0.13), respectively. Where high specificity is needed, using the 2D and 3D analyses in series yielded 0.952 (95% CI, 0.934-0.965) specificity with a sensitivity of 0.742 (95% CI, 0.707-0.775), whereas using the 2D and 3D analyses in parallel to maximize sensitivity yielded 0.915 (95% CI, 0.891-0.935) sensitivity at a specificity of 0.791 (95% CI, 0.762-0.819). CONCLUSIONS: The high accuracy and robustness of the CAD tool support its potential for enhancing the detection of PC.
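The series/parallel combination of the 2D and 3D analyses reduces to simple Boolean logic on the two classifiers' calls: requiring agreement (series) favours specificity, while accepting either positive call (parallel) favours sensitivity. A minimal sketch:

```python
import numpy as np

def combine_predictions(pred_2d: np.ndarray, pred_3d: np.ndarray, mode: str) -> np.ndarray:
    """Combine binary calls from the 2D and 3D radiomic classifiers.

    'series'   -> flag PC only if both analyses agree (favours specificity)
    'parallel' -> flag PC if either analysis is positive (favours sensitivity)
    """
    if mode == "series":
        return np.logical_and(pred_2d, pred_3d)
    if mode == "parallel":
        return np.logical_or(pred_2d, pred_3d)
    raise ValueError("mode must be 'series' or 'parallel'")

p2d = np.array([1, 1, 0, 0], dtype=bool)
p3d = np.array([1, 0, 1, 0], dtype=bool)
print(combine_predictions(p2d, p3d, "series"))    # [ True False False False]
print(combine_predictions(p2d, p3d, "parallel"))  # [ True  True  True False]
```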


Subject(s)
Adenocarcinoma , Pancreatic Neoplasms , Humans , Pancreatic Neoplasms/diagnostic imaging , Retrospective Studies , Adenocarcinoma/diagnostic imaging , Taiwan/epidemiology , Sensitivity and Specificity , Pancreatic Neoplasms
7.
Radiology ; 306(1): 172-182, 2023 01.
Article in English | MEDLINE | ID: mdl-36098642

ABSTRACT

Background Approximately 40% of pancreatic tumors smaller than 2 cm are missed at abdominal CT. Purpose To develop and to validate a deep learning (DL)-based tool able to detect pancreatic cancer at CT. Materials and Methods Retrospectively collected contrast-enhanced CT studies in patients diagnosed with pancreatic cancer between January 2006 and July 2018 were compared with CT studies of individuals with a normal pancreas (control group) obtained between January 2004 and December 2019. An end-to-end tool comprising a segmentation convolutional neural network (CNN) and a classifier ensembling five CNNs was developed and validated in the internal test set and a nationwide real-world validation set. The sensitivities of the computer-aided detection (CAD) tool and radiologist interpretation were compared using the McNemar test. Results A total of 546 patients with pancreatic cancer (mean age, 65 years ± 12 [SD]; 297 men) and 733 control subjects were randomly divided into training, validation, and test sets. In the internal test set, the DL tool achieved 89.9% (98 of 109; 95% CI: 82.7, 94.9) sensitivity and 95.9% (141 of 147; 95% CI: 91.3, 98.5) specificity (area under the receiver operating characteristic curve [AUC], 0.96; 95% CI: 0.94, 0.99), without a significant difference (P = .11) in sensitivity compared with the original radiologist report (96.1% [98 of 102]; 95% CI: 90.3, 98.9). In a test set of 1473 real-world CT studies (669 malignant, 804 control) from institutions throughout Taiwan, the DL tool distinguished between malignant and control CT studies with 89.7% (600 of 669; 95% CI: 87.1, 91.9) sensitivity and 92.8% (746 of 804; 95% CI: 90.8, 94.5) specificity (AUC, 0.95; 95% CI: 0.94, 0.96), with 74.7% (68 of 91; 95% CI: 64.5, 83.3) sensitivity for malignancies smaller than 2 cm. Conclusion The deep learning-based tool enabled accurate detection of pancreatic cancer on CT scans, with reasonable sensitivity for tumors smaller than 2 cm. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Aisen and Rodrigues in this issue.
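The McNemar test used above compares paired detection results (DL tool vs. radiologist report on the same cases) via their discordant pairs. A sketch with statsmodels on illustrative, not actual, counts:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired detection results on the same cancer cases (1 = detected).
# Values here are illustrative, not the study's data.
dl_hit  = np.array([1] * 95 + [0] * 3 + [1] * 3 + [0] * 1)
rad_hit = np.array([1] * 95 + [1] * 3 + [0] * 3 + [0] * 1)

# 2x2 table: rows = DL hit/miss, columns = radiologist hit/miss.
table = np.zeros((2, 2), dtype=int)
for d, r in zip(dl_hit, rad_hit):
    table[1 - d, 1 - r] += 1

result = mcnemar(table, exact=True)   # exact binomial test on discordant pairs
print(f"McNemar p-value = {result.pvalue:.3f}")
```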


Subject(s)
Deep Learning , Pancreatic Neoplasms , Male , Humans , Aged , Retrospective Studies , Sensitivity and Specificity , Tomography, X-Ray Computed/methods , Pancreas
8.
PLoS One ; 17(10): e0273262, 2022.
Article in English | MEDLINE | ID: mdl-36240135

ABSTRACT

The fundamental challenge in machine learning is ensuring that trained models generalize well to unseen data. We developed a general technique for ameliorating the effect of dataset shift using generative adversarial networks (GANs) on a dataset of 149,298 handwritten digits and a dataset of 868,549 chest radiographs obtained from four academic medical centers. Efficacy was assessed by comparing the area under the curve (AUC) pre- and post-adaptation. On the digit recognition task, the baseline CNN achieved an average internal test AUC of 99.87% (95% CI, 99.87-99.87%), which decreased to an average external test AUC of 91.85% (95% CI, 91.82-91.88%), with an average salvage of 35% from baseline upon adaptation. On the lung pathology classification task, the baseline CNN achieved an average internal test AUC of 78.07% (95% CI, 77.97-78.17%) and an average external test AUC of 71.43% (95% CI, 71.32-71.60%), with a salvage of 25% from baseline upon adaptation. Adversarial domain adaptation leads to improved model performance on radiographic data derived from multiple out-of-sample healthcare populations. This work can be applied to other medical imaging domains to help shape the deployment toolkit of machine learning in medicine.
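One common formulation of adversarial domain adaptation is domain-adversarial training with a gradient-reversal layer, shown below as a single PyTorch step on random tensors. This is a generic sketch of the idea, not the GAN-based architecture or data used in the study.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared feature extractor
label_head = nn.Linear(32, 2)       # task: pathology present yes/no
domain_head = nn.Linear(32, 2)      # adversary: source vs. target hospital

opt = torch.optim.Adam(list(feature.parameters()) +
                       list(label_head.parameters()) +
                       list(domain_head.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

# One illustrative step on random tensors standing in for image features.
x_src, y_src = torch.randn(16, 64), torch.randint(0, 2, (16,))
x_tgt = torch.randn(16, 64)

f_src, f_tgt = feature(x_src), feature(x_tgt)
task_loss = ce(label_head(f_src), y_src)
dom_feats = torch.cat([f_src, f_tgt])
dom_labels = torch.cat([torch.zeros(16, dtype=torch.long),
                        torch.ones(16, dtype=torch.long)])
dom_loss = ce(domain_head(GradReverse.apply(dom_feats, 1.0)), dom_labels)

opt.zero_grad()
(task_loss + dom_loss).backward()   # domain gradient is reversed into the features
opt.step()
```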


Subject(s)
Deep Learning , Machine Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography
9.
Sci Rep ; 12(1): 8892, 2022 05 25.
Article in English | MEDLINE | ID: mdl-35614110

ABSTRACT

We performed the present study to investigate the role of computed tomography (CT) radiomics in differentiating nonfunctional adenoma from aldosterone-producing adenoma (APA) and in predicting outcomes in patients with clinically suspected primary aldosteronism (PA). This study included 60 patients diagnosed with essential hypertension (EH) with nonfunctional adenoma on CT and 91 patients with unilateral, surgically proven APA. Each whole nodule on unenhanced and venous-phase CT images was segmented manually, and the data were randomly split into training and test sets at a ratio of 8:2. Radiomic models for nodule discrimination and for outcome prediction of APA after adrenalectomy were established separately on the training set using least absolute shrinkage and selection operator (LASSO) logistic regression, and performance was evaluated on the test sets. The models differentiated adrenal nodules in EH and PA with a sensitivity, specificity, and accuracy of 83.3%, 78.9%, and 80.6% (AUC = 0.91 [0.72, 0.97]) on unenhanced CT and 81.2%, 100%, and 87.5% (AUC = 0.98 [0.77, 1.00]) on venous-phase CT, respectively. For outcomes after adrenalectomy, the models showed a favorable ability to predict biochemical success (unenhanced/venous CT: AUC = 0.67 [0.52, 0.79]/0.62 [0.46, 0.76]) and clinical success (unenhanced/venous CT: AUC = 0.59 [0.47, 0.70]/0.64 [0.51, 0.74]). These results show that CT-based radiomic models hold promise for discriminating APA from nonfunctional adenoma when an adrenal incidentaloma is detected on CT images of hypertensive patients in clinical practice, whereas the role of radiomic analysis in outcome prediction after adrenalectomy needs further investigation.
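The modelling step (LASSO logistic regression on radiomic features with an 8:2 split) can be sketched with scikit-learn on a synthetic feature matrix; the regularization strength and feature counts below are placeholders, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomic feature matrix (rows = nodules).
rng = np.random.default_rng(0)
X = rng.normal(size=(151, 100))
y = rng.integers(0, 2, size=151)          # 0 = nonfunctional adenoma, 1 = APA

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# L1-penalized (LASSO) logistic regression performs feature selection and classification.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X_tr, y_tr)
n_selected = np.count_nonzero(model.named_steps["logisticregression"].coef_)
print(f"{n_selected} features kept; test AUC = "
      f"{roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.2f}")
```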


Subject(s)
Adenoma , Hyperaldosteronism , Adenoma/diagnostic imaging , Adenoma/surgery , Adrenalectomy , Aldosterone , Essential Hypertension/diagnostic imaging , Humans , Hyperaldosteronism/diagnostic imaging , Hyperaldosteronism/surgery , Retrospective Studies
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3535-3538, 2021 11.
Article in English | MEDLINE | ID: mdl-34892002

ABSTRACT

Assessment of cardiovascular disease (CVD) with cine magnetic resonance imaging (MRI) has been used to non-invasively evaluate detailed cardiac structure and function. Accurate segmentation of cardiac structures from cine MRI is a crucial step for early diagnosis and prognosis of CVD and has been greatly improved with convolutional neural networks (CNN). However, CNN models have a number of limitations, such as limited interpretability and high complexity, which restrict their use in clinical practice. In this work, to address these limitations, we propose a lightweight and interpretable machine learning model, successive subspace learning with the subspace approximation with adjusted bias (Saab) transform, for accurate and efficient segmentation from cine MRI. Specifically, our segmentation framework comprises the following steps: (1) sequential expansion of near-to-far neighborhoods at different resolutions; (2) channel-wise subspace approximation using the Saab transform for unsupervised dimension reduction; (3) class-wise entropy-guided feature selection for supervised dimension reduction; (4) concatenation of features and pixel-wise classification with gradient boosting; and (5) a conditional random field for post-processing. Experimental results on the ACDC 2017 segmentation database showed that our framework performed better than state-of-the-art U-Net models with 200× fewer parameters in delineating the left ventricle, right ventricle, and myocardium, thus showing its potential for use in clinical practice. Clinical relevance: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac MR images is a common clinical task to establish diagnosis and prognosis of CVD.
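A heavily simplified stand-in for steps (2)-(4) of this framework: unsupervised channel-wise dimension reduction (plain PCA here; the Saab transform additionally uses an adjusted bias term) followed by pixel-wise gradient-boosting classification. Data, neighbourhood size, and component counts are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
patches = rng.normal(size=(2000, 25))      # 5x5 neighbourhoods around pixels
labels = rng.integers(0, 4, size=2000)     # background, LV, RV, myocardium

pca = PCA(n_components=8).fit(patches)     # unsupervised dimension reduction
feats = pca.transform(patches)

clf = GradientBoostingClassifier(n_estimators=50).fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
```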


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging, Cine , Heart/diagnostic imaging , Heart Ventricles/diagnostic imaging , Neural Networks, Computer
11.
Nat Med ; 27(10): 1735-1743, 2021 10.
Article in English | MEDLINE | ID: mdl-34526699

ABSTRACT

Federated learning (FL) is a method for training artificial intelligence models with data from multiple sources while maintaining data anonymity, thus removing many barriers to data sharing. Here we used data from 20 institutes across the globe to train an FL model, called EXAM (electronic medical record (EMR) chest X-ray AI model), that predicts the future oxygen requirements of symptomatic patients with COVID-19 using inputs of vital signs, laboratory data and chest X-rays. EXAM achieved an average area under the curve (AUC) >0.92 for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room, and it provided a 16% improvement in average AUC measured across all participating sites and an average increase in generalizability of 38% when compared with models trained at a single site using that site's data. For prediction of mechanical ventilation treatment or death at 24 h at the largest independent test site, EXAM achieved a sensitivity of 0.950 and specificity of 0.882. In this study, FL facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.
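The aggregation at the heart of FL can be illustrated with generic federated averaging (FedAvg): each site trains locally and only model weights, weighted by local sample counts, are combined. This is a conceptual sketch, not necessarily the aggregation scheme used for EXAM.

```python
import numpy as np

def fed_avg(site_weights, site_sizes):
    """Federated averaging: aggregate locally trained model weights without
    exchanging any patient data, weighting each site by its sample count."""
    total = float(sum(site_sizes))
    return [
        sum(w[layer] * (n / total) for w, n in zip(site_weights, site_sizes))
        for layer in range(len(site_weights[0]))
    ]

# Toy round with 3 "institutes", each holding a 2-layer model.
rng = np.random.default_rng(0)
weights = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
sizes = [1200, 800, 400]
global_model = fed_avg(weights, sizes)
print(global_model[0].shape, global_model[1].shape)
```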


Subject(s)
COVID-19/physiopathology , Machine Learning , Outcome Assessment, Health Care , COVID-19/therapy , COVID-19/virology , Electronic Health Records , Humans , Prognosis , SARS-CoV-2/isolation & purification
12.
Radiol Imaging Cancer ; 3(4): e210010, 2021 07.
Article in English | MEDLINE | ID: mdl-34241550

ABSTRACT

Purpose To identify distinguishing CT radiomic features of pancreatic ductal adenocarcinoma (PDAC) and to investigate whether radiomic analysis with machine learning can distinguish between patients who have PDAC and those who do not. Materials and Methods This retrospective study included contrast material-enhanced CT images in 436 patients with PDAC and 479 healthy controls from Taiwan, acquired from 2012 to 2018, that were randomly divided for training and testing. Another 100 patients with PDAC (enriched for small PDACs) and 100 controls from Taiwan were identified for testing (from 2004 to 2011). An additional 182 patients with PDAC and 82 healthy controls from the United States were randomly divided for training and testing. Images were processed into patches. An XGBoost (https://xgboost.ai/) model was trained to classify patches as cancerous or noncancerous. Patients were classified as either having or not having PDAC on the basis of the proportion of patches classified as cancerous. For both patch-based and patient-based classification, the models were characterized as either a local model (trained on Taiwanese data only) or a generalized model (trained on both Taiwanese and U.S. data). Sensitivity, specificity, and accuracy were calculated for patch- and patient-based analysis for the models. Results The median tumor size was 2.8 cm (interquartile range, 2.0-4.0 cm) in the 536 Taiwanese patients with PDAC (mean age, 65 years ± 12 [standard deviation]; 289 men). Compared with normal pancreas, PDACs had lower values for radiomic features reflecting intensity and higher values for radiomic features reflecting heterogeneity. The performance metrics for the developed generalized model when tested on the Taiwanese and U.S. test data sets, respectively, were as follows: sensitivity, 94.7% (177 of 187) and 80.6% (29 of 36); specificity, 95.4% (187 of 196) and 100% (16 of 16); accuracy, 95.0% (364 of 383) and 86.5% (45 of 52); and area under the curve, 0.98 and 0.91. Conclusion Radiomic analysis with machine learning enabled accurate detection of PDAC at CT and could identify patients with PDAC. Keywords: CT, Computer Aided Diagnosis (CAD), Pancreas, Computer Applications-Detection/Diagnosis. Supplemental material is available for this article. © RSNA, 2021.
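The patch-to-patient logic described above (classify patches with XGBoost, then call a patient positive when the fraction of cancerous patches exceeds a cutoff) can be sketched as follows; features, labels, and the 0.4 cutoff are synthetic placeholders.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in: radiomic features per CT patch, the patient each patch
# came from, and a patch-level cancer label.
X = rng.normal(size=(3000, 30))
patch_label = rng.integers(0, 2, size=3000)
patient_id = rng.integers(0, 100, size=3000)

clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(X, patch_label)

# Patient-level call: positive when the proportion of patches predicted
# cancerous exceeds a cutoff tuned on the training data (hypothetical here).
cutoff = 0.4
patch_pred = clf.predict(X)
for pid in range(3):
    frac = patch_pred[patient_id == pid].mean()
    print(f"patient {pid}: {frac:.2f} cancerous patches -> "
          f"{'PDAC' if frac > cutoff else 'no PDAC'}")
```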


Subject(s)
Carcinoma, Pancreatic Ductal , Pancreatic Neoplasms , Aged , Humans , Male , Pancreas/diagnostic imaging , Pancreatic Neoplasms/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed
13.
Sci Rep ; 11(1): 13855, 2021 07 05.
Article in English | MEDLINE | ID: mdl-34226598

ABSTRACT

This study aimed to apply a CCTA-derived, territory-based, patient-specific estimation of boundary conditions for coronary artery fractional flow reserve (FFR) and wall shear stress (WSS) simulation. The non-invasive simulation can help diagnose the significance of coronary stenosis and the likelihood of myocardial ischemia. FFR is often regarded as the gold standard for evaluating the functional significance of stenosis in coronary arteries. In addition, proximal wall shear stress can also be an indicator of plaque vulnerability. During the simulation process, the mass flow rate of the blood in the coronary arteries is one of the most important boundary conditions. This study utilized the myocardium territory to estimate and allocate the mass flow rate. Twenty patients were included in this study. From the anatomical information of the coronary arteries and the myocardium, the territory-based FFR and the proximal WSS can both be derived from fluid dynamics simulations. Applying the threshold for distinguishing between significant and non-significant stenosis, the territory-based method reached an accuracy, sensitivity, and specificity of 0.88, 0.90, and 0.80, respectively. For significantly stenotic cases (FFR ≤ 0.80), the vessels usually have higher wall shear stress in the proximal region of the lesion.
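One simple reading of the territory-based boundary condition is to distribute the total coronary flow across outlet vessels in proportion to the myocardial mass each vessel supplies. The sketch below illustrates that allocation with hypothetical vessel names and masses; it is an interpretation, not the study's implementation.

```python
def allocate_flow(total_flow_ml_s: float, territory_mass_g: dict) -> dict:
    """Allocate coronary outlet flow in proportion to the myocardial mass each
    vessel supplies -- one simple reading of a territory-based boundary
    condition (values and vessel names are illustrative)."""
    total_mass = sum(territory_mass_g.values())
    return {vessel: total_flow_ml_s * mass / total_mass
            for vessel, mass in territory_mass_g.items()}

territories = {"LAD": 58.0, "LCx": 34.0, "RCA": 41.0}   # grams (hypothetical)
flows = allocate_flow(4.0, territories)                  # total coronary flow, mL/s
for vessel, q in flows.items():
    print(f"{vessel}: {q:.2f} mL/s")
```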


Subject(s)
Coronary Artery Disease/diagnosis , Coronary Stenosis/diagnosis , Coronary Vessels/physiopathology , Fractional Flow Reserve, Myocardial/physiology , Aged , Computed Tomography Angiography , Coronary Artery Disease/diagnostic imaging , Coronary Artery Disease/pathology , Coronary Stenosis/diagnostic imaging , Coronary Stenosis/pathology , Coronary Vessels/diagnostic imaging , Female , Hemodynamics , Humans , Male , Myocardial Ischemia/diagnosis , Myocardial Ischemia/diagnostic imaging , Myocardial Ischemia/pathology , Plaque, Atherosclerotic/diagnosis , Plaque, Atherosclerotic/diagnostic imaging , Plaque, Atherosclerotic/pathology , Stress, Mechanical
14.
J Gastroenterol Hepatol ; 36(2): 286-294, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33624891

ABSTRACT

The application of artificial intelligence (AI) in medicine has increased rapidly for tasks including disease detection/diagnosis, risk stratification, and prognosis prediction. With recent advances in computing power and algorithms, AI has shown promise in taking advantage of vast electronic health data and imaging studies to supplement clinicians. Machine learning and deep learning are the most widely used AI methodologies for medical research and have been applied in pancreatobiliary diseases, for which diagnosis and treatment selection are often complicated and require joint consideration of data from multiple sources. The aim of this review is to provide a concise introduction to the major AI methodologies and the current landscape of AI research in pancreatobiliary diseases.


Subject(s)
Artificial Intelligence , Biliary Tract Diseases/diagnosis , Biliary Tract Diseases/therapy , Pancreatic Diseases/diagnosis , Pancreatic Diseases/therapy , Deep Learning , Electronic Health Records , Forecasting , Humans , Machine Learning , Prognosis , Risk Assessment
15.
Res Sq ; 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33442676

ABSTRACT

'Federated Learning' (FL) is a method to train Artificial Intelligence (AI) models with data from multiple sources while maintaining the anonymity of the data, thus removing many barriers to data sharing. During the SARS-CoV-2 pandemic, 20 institutes collaborated on a healthcare FL study to predict the future oxygen requirements of infected patients using inputs of vital signs, laboratory data, and chest X-rays, constituting the "EXAM" (EMR CXR AI Model) model. EXAM achieved an average Area Under the Curve (AUC) of over 0.92, an average improvement of 16%, and a 38% increase in generalisability over local models. The FL paradigm was successfully applied to facilitate a rapid data science collaboration without data exchange, resulting in a model that generalised across heterogeneous, unharmonized datasets. This provided the broader healthcare community with a validated model to respond to COVID-19 challenges, as well as setting the stage for broader use of FL in healthcare.

17.
Lancet Digit Health ; 2(6): e303-e313, 2020 06.
Article in English | MEDLINE | ID: mdl-33328124

ABSTRACT

BACKGROUND: The diagnostic performance of CT for pancreatic cancer is interpreter-dependent, and approximately 40% of tumours smaller than 2 cm evade detection. Convolutional neural networks (CNNs) have shown promise in image analysis, but the networks' potential for pancreatic cancer detection and diagnosis is unclear. We aimed to investigate whether CNN could distinguish individuals with and without pancreatic cancer on CT, compared with radiologist interpretation. METHODS: In this retrospective, diagnostic study, contrast-enhanced CT images of 370 patients with pancreatic cancer and 320 controls from a Taiwanese centre were manually labelled and randomly divided for training and validation (295 patients with pancreatic cancer and 256 controls) and testing (75 patients with pancreatic cancer and 64 controls; local test set 1). Images were preprocessed into patches, and a CNN was trained to classify patches as cancerous or non-cancerous. Individuals were classified as with or without pancreatic cancer on the basis of the proportion of patches diagnosed as cancerous by the CNN, using a cutoff determined using the training and validation set. The CNN was further tested with another local test set (101 patients with pancreatic cancers and 88 controls; local test set 2) and a US dataset (281 pancreatic cancers and 82 controls). Radiologist reports of pancreatic cancer images in the local test sets were retrieved for comparison. FINDINGS: Between Jan 1, 2006, and Dec 31, 2018, we obtained CT images. In local test set 1, CNN-based analysis had a sensitivity of 0·973, specificity of 1·000, and accuracy of 0·986 (area under the curve [AUC] 0·997 [95% CI 0·992-1·000]). In local test set 2, CNN-based analysis had a sensitivity of 0·990, specificity of 0·989, and accuracy of 0·989 (AUC 0·999 [0·998-1·000]). In the US test set, CNN-based analysis had a sensitivity of 0·790, specificity of 0·976, and accuracy of 0·832 (AUC 0·920 [0·891-0·948]). CNN-based analysis achieved higher sensitivity than radiologists did (0·983 vs 0·929, difference 0·054 [95% CI 0·011-0·098]; p=0·014) in the two local test sets combined. CNN missed three (1·7%) of 176 pancreatic cancers (1·1-1·2 cm). Radiologists missed 12 (7%) of 168 pancreatic cancers (1·0-3·3 cm), of which 11 (92%) were correctly classified using CNN. The sensitivity of CNN for tumours smaller than 2 cm was 92·1% in the local test sets and 63·1% in the US test set. INTERPRETATION: CNN could accurately distinguish pancreatic cancer on CT, with acceptable generalisability to images of patients from various races and ethnicities. CNN could supplement radiologist interpretation. FUNDING: Taiwan Ministry of Science and Technology.


Subject(s)
Deep Learning , Pancreatic Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Aged , Contrast Media , Diagnosis, Differential , Female , Humans , Male , Middle Aged , Pancreas/diagnostic imaging , Racial Groups , Radiographic Image Enhancement/methods , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Taiwan
18.
Neurooncol Adv ; 2(1): vdaa100, 2020.
Article in English | MEDLINE | ID: mdl-33817641

ABSTRACT

BACKGROUND: Brain metastasis velocity (BMV) predicts outcomes after initial distant brain failure (DBF) following upfront stereotactic radiosurgery (SRS). We developed an integrated model of clinical predictors and pre-SRS MRI-derived radiomic scores (R-scores) to identify high-BMV (BMV-H) patients upon initial identification of brain metastases (BMs). METHODS: In total, 256 patients with BMs treated with upfront SRS alone were retrospectively included. R-scores were built from 1246 radiomic features in 2 target volumes by using the Extreme Gradient Boosting algorithm to predict BMV-H groups, defined by a BMV of at least 4 or leptomeningeal disease at first DBF. Two R-scores and 3 clinical predictors were integrated into a predictive clinico-radiomic (CR) model. RESULTS: The related R-scores showed significant differences between BMV-H and low-BMV (BMV-L) patients, defined by a BMV of less than 4 or no DBF (P < .001). Regression analysis identified the number of BMs, perilesional edema, and extracranial progression as significant predictors. The CR model using these 5 predictors achieved a bootstrapping-corrected C-index of 0.842 and 0.832 in the discovery and test sets, respectively. Overall survival (OS) after first DBF was significantly different between the CR-predicted BMV-L and BMV-H groups (median OS: 26.7 vs 13.0 months, P = .016). Among patients with a diagnosis-specific graded prognostic assessment of 1.5-2 or 2.5-4, the median OS after initial SRS was 33.8 and 67.8 months for CR-predicted BMV-L, compared with 13.5 and 31.0 months for CR-predicted BMV-H (P < .001 and < .001), respectively. CONCLUSION: Our CR model provides a novel approach with good performance for predicting BMV and clinical outcomes.

19.
Clin Transl Radiat Oncol ; 25: 1-9, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33426314

ABSTRACT

BACKGROUND AND PURPOSE: To develop and validate a magnetic resonance imaging (MRI)-derived radiomic signature (RS) for the prediction of 1-year locoregional failure (LRF) in patients with hypopharyngeal squamous cell carcinoma (HPSCC) who received organ preservation therapy (OPT). MATERIAL AND METHODS: A total of 800 MRI-based features of pretreatment tumors were obtained from 116 patients with HPSCC who received OPT in two independent cohorts. A least absolute shrinkage and selection operator (LASSO) regression model was used to select the features used to develop the RS. Harrell's C-index and the corrected C-index were used to evaluate the discriminative ability of the RS. The Youden index was used to select the optimal cut-point for the risk category. RESULTS: The RS yielded 1000-sample bootstrap-corrected C-indices of 0.8036 and 0.78235 in the experimental (n = 82) and validation (n = 34) cohorts, respectively. In the subgroups of patients with stage III/IV and cT4 disease, the RS also showed good predictive performance, with corrected C-indices of 0.760 and 0.754, respectively. The risk category dichotomized using an RS of 0.0326 as the cut-off value yielded 1-year LRF predictive accuracies of 79.27%, 79.41%, 76.74%, and 71.15% in the experimental, validation, stage III/IV, and cT4a cohorts, respectively. The low-risk group was associated with significantly better progression-free, laryngectomy-free, and overall survival in both independent institutional cohorts and in the stage III/IV and cT4a cohorts. CONCLUSION: The RS-based model provides a novel and convenient approach for the prediction of 1-year LRF and survival outcomes in patients with HPSCC who received OPT.
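The Youden-index cut-point selection mentioned above simply maximizes sensitivity + specificity - 1 along the ROC curve; a minimal sketch on a synthetic radiomic signature:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutpoint(y_true: np.ndarray, score: np.ndarray) -> float:
    """Cut-point maximizing the Youden index (sensitivity + specificity - 1)."""
    fpr, tpr, thresholds = roc_curve(y_true, score)
    return float(thresholds[np.argmax(tpr - fpr)])

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=116)                 # 1 = 1-year locoregional failure
rs = y * 0.5 + rng.normal(0, 0.4, size=116)      # synthetic radiomic signature values
print(f"Youden-optimal RS cut-point ≈ {youden_cutpoint(y, rs):.3f}")
```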

20.
ACS Cent Sci ; 4(11): 1485-1494, 2018 Nov 28.
Article in English | MEDLINE | ID: mdl-30555900

ABSTRACT

Rapid and low-cost pathogen diagnostic approaches are critical for clinical decision-making. Cultivating bacteria often takes days to identify pathogens and provide antimicrobial susceptibilities. The delay in diagnosis may result in compromised treatment and inappropriate antibiotic use. Over the past decades, molecular-based techniques have significantly shortened pathogen identification turnaround time with high accuracy. However, these assays often use complex fluorescent labeling and nucleic acid amplification processes, which limit their use in resource-limited settings. In this work, we demonstrate a wash-free molecular agglutination assay with a straightforward mixing and incubation step that significantly simplifies the procedures of molecular testing. By targeting the 16S rRNA gene of pathogens, we perform rapid pathogen identification within 30 min on a dark-field imaging microfluidic cytometry platform. Dark-field images with low background noise can be obtained using a narrow-beam scanning technique with off-the-shelf complementary metal oxide semiconductor (CMOS) imagers such as smartphone cameras. We utilize a machine learning algorithm to deconvolute topological features of agglutinated clusters and thus quantify the abundance of bacteria. Consequently, we unambiguously distinguish Escherichia coli-positive from E. coli-negative samples among 50 clinical urinary tract infection samples with 96% sensitivity and 100% specificity. Furthermore, we also apply this quantitative detection approach to achieve rapid antimicrobial susceptibility testing within 3 h. This work exhibits easy-to-use protocols, high sensitivity, and a short turnaround time for point-of-care testing.
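The cluster-quantification idea (extract morphological features of agglutinated clusters from a dark-field image mask and classify them) can be sketched with scikit-image and scikit-learn; the specific features, masks, and classifier below are illustrative assumptions, not the paper's descriptors.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def cluster_features(binary_img: np.ndarray) -> list:
    """Simple morphological features of agglutinated clusters in an image mask:
    cluster count, mean area, and mean eccentricity (illustrative choices)."""
    props = regionprops(label(binary_img))
    if not props:
        return [0, 0.0, 0.0]
    return [len(props),
            float(np.mean([p.area for p in props])),
            float(np.mean([p.eccentricity for p in props]))]

# Synthetic masks standing in for positive (large clusters) and negative samples.
rng = np.random.default_rng(0)
def fake_mask(n_blobs, size):
    img = np.zeros((128, 128), dtype=bool)
    for _ in range(n_blobs):
        r, c = rng.integers(10, 118, size=2)
        img[r - size:r + size, c - size:c + size] = True
    return img

X = [cluster_features(fake_mask(8, 4)) for _ in range(20)] + \
    [cluster_features(fake_mask(20, 1)) for _ in range(20)]
y = [1] * 20 + [0] * 20
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```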
