Results 1 - 20 of 34
1.
Crit Care Med ; 52(2): 237-247, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38095506

ABSTRACT

OBJECTIVES: We aimed to develop a computer-aided detection (CAD) system to localize and detect the malposition of endotracheal tubes (ETTs) on portable supine chest radiographs (CXRs). DESIGN: This was a retrospective diagnostic study. DeepLabv3+ with a ResNeSt50 backbone and DenseNet121 served as the model architectures for the segmentation and classification tasks, respectively. SETTING: Multicenter study. PATIENTS: For the training dataset, images meeting the following inclusion criteria were included: 1) patient age greater than or equal to 20 years; 2) portable supine CXR; 3) examination in emergency departments or ICUs; and 4) examination between 2015 and 2019 at National Taiwan University Hospital (NTUH) (NTUH-1519 dataset: 5,767 images). The derived CAD system was tested on images from chronologically (examination during 2020 at NTUH, NTUH-20 dataset: 955 images) or geographically (examination between 2015 and 2020 at NTUH Yunlin Branch [YB], NTUH-YB dataset: 656 images) different datasets. All CXRs were annotated with pixel-level labels of the ETT and with image-level labels of ETT presence and malposition. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: For the segmentation model, the Dice coefficients indicated that the ETT was delineated accurately (NTUH-20: 0.854; 95% CI, 0.824-0.881 and NTUH-YB: 0.839; 95% CI, 0.820-0.857). For the classification model, the presence of an ETT could be detected with high accuracy (area under the receiver operating characteristic curve [AUC]: NTUH-20, 1.000; 95% CI, 0.999-1.000 and NTUH-YB: 0.994; 95% CI, 0.984-1.000). Furthermore, among those images with an ETT, ETT malposition could be detected with high accuracy (AUC: NTUH-20, 0.847; 95% CI, 0.671-0.980 and NTUH-YB, 0.734; 95% CI, 0.630-0.833), especially for endobronchial intubation (AUC: NTUH-20, 0.991; 95% CI, 0.969-1.000 and NTUH-YB, 0.966; 95% CI, 0.933-0.991).
CONCLUSIONS: The derived CAD system could localize ETT and detect ETT malposition with excellent performance, especially for endobronchial intubation, and with favorable potential for external generalizability.
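The AUC values reported above can be computed without an explicit ROC sweep: the AUC equals the probability that a randomly chosen positive image outscores a randomly chosen negative one (the Mann-Whitney statistic). A minimal illustrative sketch, not the authors' implementation:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random positive case scores higher than
    a random negative case, counting ties as 1/2."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # all pairwise positive-vs-negative comparisons; fine at small scale
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise form is O(n_pos × n_neg); production code sorts and ranks instead, but the value is identical.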


Subject(s)
Deep Learning , Emergency Medicine , Humans , Retrospective Studies , Intubation, Intratracheal/adverse effects , Intubation, Intratracheal/methods , Hospitals, University
2.
Crit Care ; 28(1): 118, 2024 04 09.
Article in English | MEDLINE | ID: mdl-38594772

ABSTRACT

BACKGROUND: This study aimed to develop an automated method to measure the gray-white matter ratio (GWR) from brain computed tomography (CT) scans of patients with out-of-hospital cardiac arrest (OHCA) and assess its significance in predicting early-stage neurological outcomes. METHODS: Patients with OHCA who underwent brain CT imaging within 12 h of return of spontaneous circulation were enrolled in this retrospective study. The primary outcome measure was a favorable neurological outcome, defined as cerebral performance category 1 or 2 at hospital discharge. We proposed an automated method comprising image registration, K-means segmentation, segmentation refinement, and GWR calculation to measure the GWR for each CT scan. The K-means segmentation and segmentation refinement were employed to refine the segmentations within regions of interest (ROIs), consequently enhancing GWR calculation accuracy through more precise segmentations. RESULTS: Overall, 443 patients were divided into derivation (n = 265, 60%) and validation (n = 178, 40%) sets, based on age and sex. The ROI Hounsfield unit values derived from the automated method showed a strong correlation with those obtained from the manual method. Regarding outcome prediction, the automated method significantly outperformed the manual method in GWR calculation (AUC 0.79 vs. 0.70) across the entire dataset. The automated method also demonstrated superior sensitivity, specificity, and positive and negative predictive values using the cutoff value determined from the derivation set. Moreover, GWR was an independent predictor of outcomes in logistic regression analysis. Incorporating the GWR with other clinical and resuscitation variables significantly enhanced the performance of prediction models compared to those without the GWR.
CONCLUSIONS: Automated measurement of the GWR from non-contrast brain CT images offers valuable insights for predicting neurological outcomes during the early post-cardiac arrest period.
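The core GWR computation described above, a ratio of mean gray-matter to mean white-matter attenuation separated by K-means, can be sketched as a two-cluster K-means on Hounsfield-unit (HU) values. This is an illustrative simplification only; the paper's pipeline adds image registration and segmentation refinement, which are omitted here:

```python
import numpy as np

def kmeans_1d(values, iters=20):
    """Two-cluster K-means on HU values within an ROI; returns the two
    cluster means sorted ascending (white matter is the lower-HU tissue
    on non-contrast CT, gray matter the higher)."""
    values = np.asarray(values, dtype=float)
    centers = np.array([values.min(), values.max()])
    for _ in range(iters):
        assign = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([values[assign == k].mean() for k in (0, 1)])
    return np.sort(centers)

def gray_white_ratio(roi_hu):
    """GWR = mean gray-matter HU / mean white-matter HU."""
    white, gray = kmeans_1d(roi_hu)
    return gray / white
```

A healthy GWR is above 1; diffuse anoxic edema narrows the gray-white attenuation gap and pulls the ratio toward 1, which is what makes it prognostic after cardiac arrest.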


Subject(s)
Out-of-Hospital Cardiac Arrest , White Matter , Humans , Retrospective Studies , Gray Matter/diagnostic imaging , Out-of-Hospital Cardiac Arrest/diagnostic imaging , Tomography, X-Ray Computed/methods , Prognosis
3.
Radiology ; 306(1): 172-182, 2023 01.
Article in English | MEDLINE | ID: mdl-36098642

ABSTRACT

Background Approximately 40% of pancreatic tumors smaller than 2 cm are missed at abdominal CT. Purpose To develop and validate a deep learning (DL)-based tool able to detect pancreatic cancer at CT. Materials and Methods Retrospectively collected contrast-enhanced CT studies in patients diagnosed with pancreatic cancer between January 2006 and July 2018 were compared with CT studies of individuals with a normal pancreas (control group) obtained between January 2004 and December 2019. An end-to-end tool comprising a segmentation convolutional neural network (CNN) and a classifier ensembling five CNNs was developed and validated in the internal test set and a nationwide real-world validation set. The sensitivities of the computer-aided detection (CAD) tool and radiologist interpretation were compared using the McNemar test. Results A total of 546 patients with pancreatic cancer (mean age, 65 years ± 12 [SD], 297 men) and 733 control subjects were randomly divided into training, validation, and test sets. In the internal test set, the DL tool achieved 89.9% (98 of 109; 95% CI: 82.7, 94.9) sensitivity and 95.9% (141 of 147; 95% CI: 91.3, 98.5) specificity (area under the receiver operating characteristic curve [AUC], 0.96; 95% CI: 0.94, 0.99), without a significant difference (P = .11) in sensitivity compared with the original radiologist report (96.1% [98 of 102]; 95% CI: 90.3, 98.9). In a test set of 1473 real-world CT studies (669 malignant, 804 control) from institutions throughout Taiwan, the DL tool distinguished between malignant and control CT studies with 89.7% (600 of 669; 95% CI: 87.1, 91.9) sensitivity and 92.8% (746 of 804; 95% CI: 90.8, 94.5) specificity (AUC, 0.95; 95% CI: 0.94, 0.96), with 74.7% (68 of 91; 95% CI: 64.5, 83.3) sensitivity for malignancies smaller than 2 cm. Conclusion The deep learning-based tool enabled accurate detection of pancreatic cancer on CT scans, with reasonable sensitivity for tumors smaller than 2 cm.
© RSNA, 2022. Online supplemental material is available for this article. See also the editorial by Aisen and Rodrigues in this issue.


Subject(s)
Deep Learning , Pancreatic Neoplasms , Male , Humans , Aged , Retrospective Studies , Sensitivity and Specificity , Tomography, X-Ray Computed/methods , Pancreas
4.
BMC Cancer ; 23(1): 58, 2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36650440

ABSTRACT

BACKGROUND: CT is the major detection tool for pancreatic cancer (PC). However, approximately 40% of PCs < 2 cm are missed on CT, underscoring a pressing need for tools to supplement radiologist interpretation. METHODS: Contrast-enhanced CT studies of 546 patients with pancreatic adenocarcinoma diagnosed by histology/cytology between January 2005 and December 2019 and 733 CT studies of controls with a normal pancreas obtained during the same period in a tertiary referral center were retrospectively collected for developing an automatic end-to-end computer-aided detection (CAD) tool for PC using two-dimensional (2D) and three-dimensional (3D) radiomic analysis with machine learning. The CAD tool was tested on a nationwide dataset comprising 1,477 CT studies (671 PCs, 806 controls) obtained from institutions throughout Taiwan. RESULTS: The CAD tool achieved 0.918 (95% CI, 0.895-0.938) sensitivity and 0.822 (95% CI, 0.794-0.848) specificity in differentiating between studies with and without PC (area under curve 0.947, 95% CI, 0.936-0.958), with 0.707 (95% CI, 0.602-0.797) sensitivity for tumors < 2 cm. The positive and negative likelihood ratios of PC were 5.17 (95% CI, 4.45-6.01) and 0.10 (95% CI, 0.08-0.13), respectively. Where high specificity is needed, using the 2D and 3D analyses in series yielded 0.952 (95% CI, 0.934-0.965) specificity with a sensitivity of 0.742 (95% CI, 0.707-0.775), whereas using the 2D and 3D analyses in parallel to maximize sensitivity yielded 0.915 (95% CI, 0.891-0.935) sensitivity at a specificity of 0.791 (95% CI, 0.762-0.819). CONCLUSIONS: The high accuracy and robustness of the CAD tool support its potential for enhancing the detection of PC.
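The series/parallel combination described above is a simple Boolean fusion of the 2D and 3D classifier outputs. A sketch under the assumption that each classifier emits a binary per-study call (the paper's actual thresholds and calibration are not shown here):

```python
def combine_series(pred_2d, pred_3d):
    """Series (AND) fusion: flag PC only when BOTH the 2D and 3D
    classifiers are positive -- raises specificity at the cost of
    sensitivity."""
    return [a and b for a, b in zip(pred_2d, pred_3d)]

def combine_parallel(pred_2d, pred_3d):
    """Parallel (OR) fusion: flag PC when EITHER classifier is
    positive -- raises sensitivity at the cost of specificity."""
    return [a or b for a, b in zip(pred_2d, pred_3d)]
```

The trade-off follows directly from the Boolean logic: AND can only remove false positives, OR can only remove false negatives, matching the specificity/sensitivity shifts reported in the abstract.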


Subject(s)
Adenocarcinoma , Pancreatic Neoplasms , Humans , Pancreatic Neoplasms/diagnostic imaging , Retrospective Studies , Adenocarcinoma/diagnostic imaging , Taiwan/epidemiology , Sensitivity and Specificity , Pancreatic Neoplasms
5.
J Med Syst ; 48(1): 1, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38048012

ABSTRACT

PURPOSE: To develop two deep learning-based systems for diagnosing and localizing pneumothorax on portable supine chest X-rays (SCXRs). METHODS: For this retrospective study, images meeting the following inclusion criteria were included: (1) patient age ≥ 20 years; (2) portable SCXR; (3) imaging obtained in the emergency department or intensive care unit. Included images were temporally split into training (1571 images, between January 2015 and December 2019) and testing (1071 images, between January 2020 and December 2020) datasets. All images were annotated using pixel-level labels. Object detection and image segmentation were adopted to develop separate systems. For the detection-based system, EfficientNet-B2, DenseNet-121, and Inception-v3 were the architectures for the classification model; Deformable DETR, TOOD, and VFNet were the architectures for the localization model. Both the classification and localization models of the segmentation-based system shared the U-Net architecture. RESULTS: In diagnosing pneumothorax, performance was excellent for both the detection-based (area under the receiver operating characteristic curve [AUC]: 0.940, 95% confidence interval [CI]: 0.907-0.967) and segmentation-based (AUC: 0.979, 95% CI: 0.963-0.991) systems. For images with both predicted and ground-truth pneumothorax, lesion localization was highly accurate (detection-based Dice coefficient: 0.758, 95% CI: 0.707-0.806; segmentation-based Dice coefficient: 0.681, 95% CI: 0.642-0.721). The performance of the two deep learning-based systems declined as pneumothorax size diminished. Nonetheless, both systems were similar to or better than human readers in diagnosis or localization performance across all sizes of pneumothorax. CONCLUSIONS: Both deep learning-based systems excelled when tested on a temporally different dataset with differing patient or image characteristics, showing favourable potential for external generalizability.
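The Dice coefficient used above to score lesion localization measures the overlap between a predicted and a ground-truth binary mask; a minimal illustrative implementation:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|); 1.0 is a perfect match, 0.0 no overlap.
    eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

In practice the masks are 2-D pixel arrays the same shape as the radiograph; the formula is identical after flattening.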


Subject(s)
Deep Learning , Emergency Medicine , Pneumothorax , Humans , Young Adult , Adult , Retrospective Studies , Pneumothorax/diagnostic imaging , X-Rays
6.
J Gastroenterol Hepatol ; 36(2): 286-294, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33624891

ABSTRACT

The application of artificial intelligence (AI) in medicine has increased rapidly for tasks including disease detection/diagnosis, risk stratification, and prognosis prediction. With recent advances in computing power and algorithms, AI has shown promise in taking advantage of vast electronic health data and imaging studies to supplement clinicians. Machine learning and deep learning are the most widely used AI methodologies in medical research and have been applied to pancreatobiliary diseases, for which diagnosis and treatment selection are often complicated and require joint consideration of data from multiple sources. The aim of this review is to provide a concise introduction to the major AI methodologies and the current landscape of AI research in pancreatobiliary diseases.


Subject(s)
Artificial Intelligence , Biliary Tract Diseases/diagnosis , Biliary Tract Diseases/therapy , Pancreatic Diseases/diagnosis , Pancreatic Diseases/therapy , Deep Learning , Electronic Health Records , Forecasting , Humans , Machine Learning , Prognosis , Risk Assessment
7.
Opt Express ; 26(17): 22342-22347, 2018 Aug 20.
Article in English | MEDLINE | ID: mdl-30130928

ABSTRACT

Here, we propose and demonstrate a performance degradation mitigation scheme for a TV-backlight and smartphone-based visible light communication (VLC) system in which the display content on the light panel changes dynamically. To evaluate the influence of dynamic display content on VLC performance, we use the noise ratio (NR) and the noise-ratio standard deviation (NRSD) as figures of merit for the bright-and-dark contrast of the display content and for the dispersal of the changing display content with respect to that contrast, respectively. Performances of four dynamic display contents with different combinations of NR and NRSD are analyzed: low NR and low NRSD (NR = 36.69%; NRSD = 0.0226); low NR and high NRSD (NR = 30.09%; NRSD = 0.2698); high NR and low NRSD (NR = 81.66%; NRSD = 0.0052); and high NR and high NRSD (NR = 73.91%; NRSD = 0.2717). The proposed scheme works well even when the transmission distance is up to 200 cm for both smartphones. Without the proposed scheme, a high success rate is observed only for the low-NR, low-NRSD display content at transmission distances < 100 cm.

8.
Opt Express ; 25(9): 10103-10108, 2017 May 01.
Article in English | MEDLINE | ID: mdl-28468385

ABSTRACT

We propose and demonstrate long-distance non-line-of-sight (NLOS) visible light signal detection based on rolling shutter patterning using a commercial mobile phone camera. By using our improved rolling shutter pattern demodulation algorithm, which includes background compensation (BC), blooming mitigation, extinction-ratio (ER) enhancement, and Bradley adaptive thresholding, a 1.5 m NLOS visible light signal (at a low illumination of 145 lux) can be retrieved.
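Of the demodulation steps listed, Bradley adaptive thresholding is a standard algorithm: each pixel is binarized against the mean of its local window, computed in constant time per pixel with an integral image. An illustrative sketch (the window size and sensitivity `t` here are hypothetical defaults, not the paper's parameters):

```python
import numpy as np

def bradley_threshold(img, window=15, t=0.15):
    """Binarize img: a pixel is foreground when it is brighter than
    (1 - t) times the mean of its surrounding window x window patch."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # Integral image padded with a zero row/column, so any window sum
    # is ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0].
    ii = np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    half = window // 2
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        for x in range(w):
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            area = (y1 - y0) * (x1 - x0)
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            # compare pixel * area against window sum to avoid a division
            out[y, x] = img[y, x] * area > s * (1.0 - t)
    return out
```

Because the threshold adapts to the local mean, the method tolerates the uneven illumination typical of NLOS reception, which is why it suits rolling-shutter stripe decoding.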

9.
Sensors (Basel) ; 17(6)2017 Jun 09.
Article in English | MEDLINE | ID: mdl-28598391

ABSTRACT

Pathogen detection in water samples, without complex and time-consuming procedures such as fluorescent labeling or culture-based incubation, is essential to public safety. We propose an immunoagglutination-based protocol together with a microfluidic device to quantify pathogen levels directly from water samples. Utilizing the ubiquitous complementary metal-oxide-semiconductor (CMOS) imagers of mobile electronics, a low-cost, one-step reaction detection protocol is developed to enable field detection of waterborne pathogens. Ten milliliters of pathogen-containing water sample were processed using the developed protocol, which includes filtration enrichment, immunoreaction detection, and image processing. A limit of detection of 10 E. coli O157:H7 cells/10 mL was demonstrated within a 10 min turnaround time. The protocol can readily be integrated into mobile electronics such as smartphones for rapid and reproducible field detection of waterborne pathogens.


Subject(s)
Electrical Equipment and Supplies , Escherichia coli O157
10.
J Imaging Inform Med ; 37(2): 589-600, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38343228

ABSTRACT

Prompt and correct detection of pulmonary tuberculosis (PTB) is critical in preventing its spread. We aimed to develop a deep learning-based algorithm for detecting PTB on chest X-rays (CXRs) in the emergency department. This retrospective study included 3498 CXRs acquired at National Taiwan University Hospital (NTUH). The images were chronologically split into a training dataset, NTUH-1519 (images acquired during the years 2015 to 2019; n = 2144), and a testing dataset, NTUH-20 (images acquired during the year 2020; n = 1354). Public databases, including the NIH ChestX-ray14 dataset (model training; 112,120 images), Montgomery County (model testing; 138 images), and Shenzhen (model testing; 662 images), were also used in model development. EfficientNetV2 was the basic architecture of the algorithm. Images from ChestX-ray14 were employed for pseudo-labelling to perform semi-supervised learning. The algorithm demonstrated excellent performance in detecting PTB (area under the receiver operating characteristic curve [AUC] 0.878, 95% confidence interval [CI] 0.854-0.900) in NTUH-20. The algorithm showed significantly better performance on posterior-anterior (PA) CXRs (AUC 0.940, 95% CI 0.912-0.965, p-value < 0.001) compared with anterior-posterior (AUC 0.782, 95% CI 0.644-0.897) or portable anterior-posterior (AUC 0.869, 95% CI 0.814-0.918) CXRs. The algorithm accurately detected cases of bacteriologically confirmed PTB (AUC 0.854, 95% CI 0.823-0.883). Finally, the algorithm tested favourably on Montgomery County (AUC 0.838, 95% CI 0.765-0.904) and Shenzhen (AUC 0.806, 95% CI 0.771-0.839). A deep learning-based algorithm could detect PTB on CXRs with excellent performance, which may help shorten the interval between detection and airborne isolation for patients with PTB.
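Pseudo-labelling, as used above for semi-supervised learning on ChestX-ray14, keeps only the unlabeled images the current model predicts confidently and treats the predicted class as a training label. A minimal sketch of the selection step (the 0.95 confidence threshold is an assumed value, not taken from the paper):

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """probs: (n_samples, n_classes) softmax outputs on unlabeled data.
    Returns indices of confidently predicted samples and their argmax
    classes, to be added to the labeled training set for the next
    training round."""
    probs = np.asarray(probs, dtype=float)
    confident = probs.max(axis=1) >= threshold
    return np.flatnonzero(confident), probs.argmax(axis=1)[confident]
```

The threshold trades label noise against coverage: a higher value admits fewer but cleaner pseudo-labels, and the selection is typically repeated as the model improves.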

11.
Artif Intell Med ; 144: 102644, 2023 10.
Article in English | MEDLINE | ID: mdl-37783539

ABSTRACT

The proliferation of wearable devices has allowed daily collection of electrocardiogram (ECG) recordings to monitor heart rhythm and rate. For example, 24-hour Holter monitors, cardiac patches, and smartwatches are widely used for ECG gathering and application. An automatic atrial fibrillation (AF) detector is required for timely ECG interpretation. Deep learning models can accurately identify AF if large amounts of annotated data are available for model training. However, it is impractical to request sufficient labels on an individual patient's ECG recordings to train a personalized model. We propose a Siamese-network-based approach for transfer learning to address this issue. A pre-trained Siamese convolutional neural network is created by comparing two labeled ECG segments from the same patient. We sampled 30-second ECG segments with a 50% overlapping window from the ECG recordings of patients in the MIT-BIH Atrial Fibrillation Database. Subsequently, we independently detected the occurrence of AF in each patient in the Long-Term AF Database. By fine-tuning the model with 1, 3, 5, 7, 9, or 11 ECG segments (corresponding to 30 to 180 s of data), our method achieved macro-F1 scores of 96.84%, 96.91%, 96.97%, 97.02%, 97.05%, and 97.07%, respectively.
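The 30-second, 50%-overlap sampling described above amounts to a sliding-window cut of the recording; an illustrative helper, not the authors' code:

```python
import numpy as np

def segment_ecg(signal, fs, seconds=30, overlap=0.5):
    """Cut a 1-D ECG recording into fixed-length windows.

    fs: sampling rate in Hz.
    overlap: fraction shared by consecutive windows; 0.5 means the
    window advances by half its length at each step."""
    win = int(seconds * fs)
    step = int(win * (1 - overlap))
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])
```

With 50% overlap, 11 windows of 30 s span 30 + 10 × 15 = 180 s of recording, which matches the fine-tuning data amounts quoted in the abstract.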


Subject(s)
Atrial Fibrillation , Humans , Atrial Fibrillation/diagnosis , Neural Networks, Computer , Electrocardiography/methods , Machine Learning , Algorithms
12.
Sci Rep ; 12(1): 8892, 2022 05 25.
Article in English | MEDLINE | ID: mdl-35614110

ABSTRACT

We performed the present study to investigate the role of computed tomography (CT) radiomics in differentiating nonfunctional adenoma from aldosterone-producing adenoma (APA) and in outcome prediction in patients with clinically suspected primary aldosteronism (PA). This study included 60 patients diagnosed with essential hypertension (EH) with nonfunctional adenoma on CT and 91 patients with unilateral surgically proven APA. Each whole nodule on unenhanced and venous phase CT images was segmented manually and randomly split into training and test sets at a ratio of 8:2. Radiomic models for nodule discrimination and outcome prediction of APA after adrenalectomy were established separately using the training set by least absolute shrinkage and selection operator (LASSO) logistic regression, and the performance was evaluated on the test sets. The model differentiated adrenal nodules in EH and PA with a sensitivity, specificity, and accuracy of 83.3%, 78.9% and 80.6% (AUC = 0.91 [0.72, 0.97]) on unenhanced CT and 81.2%, 100% and 87.5% (AUC = 0.98 [0.77, 1.00]) on venous phase CT, respectively. For outcomes after adrenalectomy, the models showed a favorable ability to predict biochemical success (unenhanced/venous CT: AUC = 0.67 [0.52, 0.79]/0.62 [0.46, 0.76]) and clinical success (unenhanced/venous CT: AUC = 0.59 [0.47, 0.70]/0.64 [0.51, 0.74]). The results showed that CT-based radiomic models hold promise for discriminating APA from nonfunctional adenoma when an adrenal incidentaloma is detected on CT images of hypertensive patients in clinical practice, while the role of radiomic analysis in outcome prediction after adrenalectomy needs further investigation.


Subject(s)
Adenoma , Hyperaldosteronism , Adenoma/diagnostic imaging , Adenoma/surgery , Adrenalectomy , Aldosterone , Essential Hypertension/diagnostic imaging , Humans , Hyperaldosteronism/diagnostic imaging , Hyperaldosteronism/surgery , Retrospective Studies
13.
PLoS One ; 17(10): e0273262, 2022.
Article in English | MEDLINE | ID: mdl-36240135

ABSTRACT

The fundamental challenge in machine learning is ensuring that trained models generalize well to unseen data. We developed a general technique for ameliorating the effect of dataset shift using generative adversarial networks (GANs) on a dataset of 149,298 handwritten digits and a dataset of 868,549 chest radiographs obtained from four academic medical centers. Efficacy was assessed by comparing the area under the curve (AUC) pre- and post-adaptation. On the digit recognition task, the baseline CNN achieved an average internal test AUC of 99.87% (95% CI, 99.87-99.87%), which decreased to an average external test AUC of 91.85% (95% CI, 91.82-91.88%), with an average salvage of 35% from baseline upon adaptation. On the lung pathology classification task, the baseline CNN achieved an average internal test AUC of 78.07% (95% CI, 77.97-78.17%) and an average external test AUC of 71.43% (95% CI, 71.32-71.60%), with a salvage of 25% from baseline upon adaptation. Adversarial domain adaptation leads to improved model performance on radiographic data derived from multiple out-of-sample healthcare populations. This work can be applied to other medical imaging domains to help shape the deployment toolkit of machine learning in medicine.


Subject(s)
Deep Learning , Machine Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography
14.
Med Phys ; 38(7): 4052-65, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21859004

ABSTRACT

PURPOSE: Iterative reconstruction techniques hold great potential to mitigate the effects of data noise and/or incompleteness, and hence can facilitate patient dose reduction. However, they are not suitable for routine clinical practice due to their long reconstruction times. In this work, the authors accelerated the computations by fully taking advantage of the highly parallel computational power of single and multiple graphics processing units (GPUs). In particular, the forward projection algorithm, which is not captured by closed-form formulas, is accelerated and optimized using the GPU. METHODS: The main contribution is a novel forward projection algorithm that uses multithreads to handle the computations associated with a group of adjacent rays simultaneously. The proposed algorithm is free of divergence and bank conflicts on the GPU, and benefits from data locality and data reuse. It achieves efficiency particularly by (i) employing a tiled algorithm with three-level parallelization, (ii) optimizing thread block size, (iii) maximizing data reuse on constant memory and shared memory, and (iv) exploiting the built-in texture memory interpolation capability. In addition, to accelerate the iterative algorithms and the Feldkamp-Davis-Kress (FDK) algorithm on the GPU, the authors apply batched fast Fourier transform (FFT) to expedite the filtering process in FDK and utilize projection bundling parallelism during backprojection to shorten the execution times of FDK and expectation-maximization (EM). RESULTS: Numerical experiments conducted on an NVIDIA Tesla C1060 GPU demonstrated the superiority of the proposed algorithms in computational time saving. The forward projection, filtering, and backprojection times for generating a volume image of 512 x 512 x 512 with 360 projections of 512 x 512 using one GPU are about 4.13, 0.65, and 2.47 s (including distance weighting), respectively. In particular, the proposed forward projection algorithm is ray-driven and its parallelization strategy evolves from single-thread-for-single-ray (38.56 s), through multithreads-for-single-ray (26.05 s), to multithreads-for-multirays (4.13 s). For the voxel-driven backprojection, the use of texture memory reduces the reconstruction time from 4.95 to 3.35 s. By applying the projection bundle technique, the computation time is further reduced to 2.47 s. When employing multiple GPUs, near-perfect speedups were observed as the number of GPUs increased. For example, by using four GPUs, the times for the forward projection, filtering, and backprojection are further reduced to 1.11, 0.18, and 0.66 s. The results obtained by the GPU-based algorithms are virtually indistinguishable from those obtained by the CPU. CONCLUSIONS: The authors have proposed a highly optimized GPU-based forward projection algorithm, as well as GPU-based FDK and expectation-maximization reconstruction algorithms. Our compute unified device architecture (CUDA) codes provide exceedingly fast forward projection and backprojection that outperform implementations using shading languages, the Cell Broadband Engine architecture, and previous CUDA implementations. The reconstruction times of the FDK and EM algorithms were considerably shortened, which can facilitate their routine usage in applications such as image quality improvement and dose reduction.
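A ray-driven forward projector, as described above, assigns work per ray and accumulates a discrete line integral per detector bin. A minimal 2-D CPU sketch of the same idea (nearest-neighbor sampling at unit steps along each parallel-beam ray; the paper's CUDA version adds tiling, shared/constant-memory reuse, and hardware texture interpolation, none of which is reproduced here):

```python
import numpy as np

def forward_project(image, angles_deg, n_det=None):
    """Ray-driven parallel-beam forward projection of a square image.

    For each angle and detector bin, sample the image at unit steps
    along the ray (nearest-neighbor) and sum the values, giving a
    discrete line integral (one sinogram row per angle)."""
    n = image.shape[0]
    n_det = n_det or n
    center = (n - 1) / 2.0
    det = np.arange(n_det) - (n_det - 1) / 2.0   # detector offsets
    t = np.arange(n) - center                    # steps along each ray
    sino = np.zeros((len(angles_deg), n_det))
    for ia, ang in enumerate(np.deg2rad(angles_deg)):
        c, s = np.cos(ang), np.sin(ang)
        for idet, d in enumerate(det):
            # ray through (d*c, d*s) with direction (-s, c)
            xs = d * c - t * s + center
            ys = d * s + t * c + center
            xi = np.rint(xs).astype(int)
            yi = np.rint(ys).astype(int)
            ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
            sino[ia, idet] = image[yi[ok], xi[ok]].sum()
    return sino
```

The inner double loop is exactly what the GPU version parallelizes: one thread (or a bundle of threads) per (angle, detector) ray, with adjacent rays grouped so their image reads hit the same cached tiles.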


Subject(s)
Algorithms , Computer Graphics , Cone-Beam Computed Tomography/methods , Imaging, Three-Dimensional/methods , Radiographic Image Enhancement/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Cone-Beam Computed Tomography/instrumentation , Humans , Phantoms, Imaging , Reproducibility of Results , Sensitivity and Specificity
15.
Sci Rep ; 11(1): 13855, 2021 07 05.
Article in English | MEDLINE | ID: mdl-34226598

ABSTRACT

This study aims to apply a CCTA-derived, territory-based, patient-specific estimation of boundary conditions for coronary artery fractional flow reserve (FFR) and wall shear stress (WSS) simulation. The non-invasive simulation can help diagnose the significance of coronary stenosis and the likelihood of myocardial ischemia. FFR is often regarded as the gold standard for evaluating the functional significance of stenosis in coronary arteries. In another aspect, proximal wall shear stress can also be an indicator of plaque vulnerability. During the simulation process, the mass flow rate of blood in the coronary arteries is one of the most important boundary conditions. This study utilized the myocardium territory to estimate and allocate the mass flow rate. Twenty patients were included in this study. From the anatomical information of the coronary arteries and the myocardium, the territory-based FFR and the proximal WSS can both be derived from fluid dynamics simulations. Applying the threshold for distinguishing between significant and non-significant stenosis, the territory-based method reached an accuracy, sensitivity, and specificity of 0.88, 0.90, and 0.80, respectively. For significantly stenotic cases (FFR ≤ 0.80), the vessels usually have higher wall shear stress in the proximal region of the lesion.


Subject(s)
Coronary Artery Disease/diagnosis , Coronary Stenosis/diagnosis , Coronary Vessels/physiopathology , Fractional Flow Reserve, Myocardial/physiology , Aged , Computed Tomography Angiography , Coronary Artery Disease/diagnostic imaging , Coronary Artery Disease/pathology , Coronary Stenosis/diagnostic imaging , Coronary Stenosis/pathology , Coronary Vessels/diagnostic imaging , Female , Hemodynamics , Humans , Male , Myocardial Ischemia/diagnosis , Myocardial Ischemia/diagnostic imaging , Myocardial Ischemia/pathology , Plaque, Atherosclerotic/diagnosis , Plaque, Atherosclerotic/diagnostic imaging , Plaque, Atherosclerotic/pathology , Stress, Mechanical
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3535-3538, 2021 11.
Article in English | MEDLINE | ID: mdl-34892002

ABSTRACT

Assessment of cardiovascular disease (CVD) with cine magnetic resonance imaging (MRI) has been used to non-invasively evaluate detailed cardiac structure and function. Accurate segmentation of cardiac structures from cine MRI is a crucial step for early diagnosis and prognosis of CVD and has been greatly improved with convolutional neural networks (CNN). There are, however, a number of limitations identified in CNN models, such as limited interpretability and high complexity, that limit their use in clinical practice. In this work, to address these limitations, we propose a lightweight and interpretable machine learning model, successive subspace learning with the subspace approximation with adjusted bias (Saab) transform, for accurate and efficient segmentation from cine MRI. Specifically, our segmentation framework comprises the following steps: (1) sequential expansion of near-to-far neighborhoods at different resolutions; (2) channel-wise subspace approximation using the Saab transform for unsupervised dimension reduction; (3) class-wise entropy-guided feature selection for supervised dimension reduction; (4) concatenation of features and pixel-wise classification with gradient boosting; and (5) conditional random field for post-processing. Experimental results on the ACDC 2017 segmentation database showed that our framework performed better than state-of-the-art U-Net models with 200× fewer parameters in delineating the left ventricle, right ventricle, and myocardium, thus showing its potential to be used in clinical practice. Clinical relevance: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac MR images is a common clinical task to establish diagnosis and prognosis of CVD.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging, Cine , Heart/diagnostic imaging , Heart Ventricles/diagnostic imaging , Neural Networks, Computer
17.
Radiol Imaging Cancer ; 3(4): e210010, 2021 07.
Article in English | MEDLINE | ID: mdl-34241550

ABSTRACT

Purpose To identify distinguishing CT radiomic features of pancreatic ductal adenocarcinoma (PDAC) and to investigate whether radiomic analysis with machine learning can distinguish between patients who have PDAC and those who do not. Materials and Methods This retrospective study included contrast material-enhanced CT images in 436 patients with PDAC and 479 healthy controls from 2012 to 2018 from Taiwan that were randomly divided for training and testing. Another 100 patients with PDAC (enriched for small PDACs) and 100 controls from Taiwan were identified for testing (from 2004 to 2011). An additional 182 patients with PDAC and 82 healthy controls from the United States were randomly divided for training and testing. Images were processed into patches. An XGBoost (https://xgboost.ai/) model was trained to classify patches as cancerous or noncancerous. Patients were classified as either having or not having PDAC on the basis of the proportion of patches classified as cancerous. For both patch-based and patient-based classification, the models were characterized as either a local model (trained on Taiwanese data only) or a generalized model (trained on both Taiwanese and U.S. data). Sensitivity, specificity, and accuracy were calculated for patch- and patient-based analysis for the models. Results The median tumor size was 2.8 cm (interquartile range, 2.0-4.0 cm) in the 536 Taiwanese patients with PDAC (mean age, 65 years ± 12 [standard deviation]; 289 men). Compared with normal pancreas, PDACs had lower values for radiomic features reflecting intensity and higher values for radiomic features reflecting heterogeneity. The performance metrics for the developed generalized model when tested on the Taiwanese and U.S. test data sets, respectively, were as follows: sensitivity, 94.7% (177 of 187) and 80.6% (29 of 36); specificity, 95.4% (187 of 196) and 100% (16 of 16); accuracy, 95.0% (364 of 383) and 86.5% (45 of 52); and area under the curve, 0.98 and 0.91. 
Conclusion Radiomic analysis with machine learning enabled accurate detection of PDAC at CT and could identify patients with PDAC. Keywords: CT, Computer Aided Diagnosis (CAD), Pancreas, Computer Applications-Detection/Diagnosis Supplemental material is available for this article. © RSNA, 2021.
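The patient-level decision rule described in this abstract (a patient is classified as having PDAC based on the proportion of CT patches the model flags as cancerous) can be sketched as follows. This is a minimal illustration only: the thresholds and patch probabilities below are hypothetical, and the published model used an XGBoost classifier to produce the per-patch scores.

```python
import numpy as np

def classify_patient(patch_probs, patch_threshold=0.5, patient_threshold=0.3):
    """Aggregate per-patch cancer probabilities into a patient-level label.

    A patch is called cancerous when its probability exceeds
    patch_threshold; the patient is called PDAC-positive when the
    proportion of cancerous patches exceeds patient_threshold.
    Both threshold values are illustrative, not the published ones.
    """
    patch_probs = np.asarray(patch_probs, dtype=float)
    cancerous = patch_probs > patch_threshold
    proportion = cancerous.mean()
    return proportion, bool(proportion > patient_threshold)

# Example: 10 hypothetical patch probabilities, 4 exceed 0.5,
# so the cancerous proportion is 0.4 and the patient is flagged.
proportion, is_pdac = classify_patient(
    [0.9, 0.8, 0.7, 0.6, 0.2, 0.1, 0.3, 0.4, 0.2, 0.1])
```

Aggregating patch scores this way lets a classifier trained on small image patches produce a whole-patient decision without a second model.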


Subject(s)
Carcinoma, Pancreatic Ductal , Pancreatic Neoplasms , Aged , Humans , Male , Pancreas/diagnostic imaging , Pancreatic Neoplasms/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed
18.
Res Sq ; 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33442676

ABSTRACT

'Federated Learning' (FL) is a method for training Artificial Intelligence (AI) models with data from multiple sources while maintaining anonymity of the data, thus removing many barriers to data sharing. During the SARS-CoV-2 pandemic, 20 institutes collaborated on a healthcare FL study to predict the future oxygen requirements of infected patients using inputs of vital signs, laboratory data, and chest x-rays, constituting the "EXAM" (EMR CXR AI Model) model. EXAM achieved an average Area Under the Curve (AUC) of over 0.92, an average improvement of 16%, and a 38% increase in generalisability over local models. The FL paradigm was successfully applied to facilitate a rapid data science collaboration without data exchange, resulting in a model that generalised across heterogeneous, unharmonized datasets. This provided the broader healthcare community with a validated model to respond to COVID-19 challenges and set the stage for broader use of FL in healthcare.

19.
Nat Med ; 27(10): 1735-1743, 2021 10.
Article in English | MEDLINE | ID: mdl-34526699

ABSTRACT

Federated learning (FL) is a method used for training artificial intelligence models with data from multiple sources while maintaining data anonymity, thus removing many barriers to data sharing. Here we used data from 20 institutes across the globe to train a FL model, called EXAM (electronic medical record (EMR) chest X-ray AI model), that predicts the future oxygen requirements of symptomatic patients with COVID-19 using inputs of vital signs, laboratory data and chest X-rays. EXAM achieved an average area under the curve (AUC) >0.92 for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room, and it provided 16% improvement in average AUC measured across all participating sites and an average increase in generalizability of 38% when compared with models trained at a single site using that site's data. For prediction of mechanical ventilation treatment or death at 24 h at the largest independent test site, EXAM achieved a sensitivity of 0.950 and specificity of 0.882. In this study, FL facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.
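The core aggregation step behind federated learning as described in this and the preceding entry can be illustrated with a minimal federated-averaging (FedAvg-style) routine. The site sizes and parameter vectors below are hypothetical, and the EXAM study used a production FL framework rather than this toy sketch; the point is only that sites share model parameters, never raw patient data.

```python
import numpy as np

def federated_average(site_params, site_sizes):
    """One aggregation step: average each site's locally trained
    parameters, weighted by that site's number of training examples.

    site_params: list of parameter vectors, one per participating site
    site_sizes:  number of local training examples at each site
    """
    sizes = np.asarray(site_sizes, dtype=float)
    weights = sizes / sizes.sum()  # each site's share of the total data
    stacked = np.stack([np.asarray(p, dtype=float) for p in site_params])
    # Weighted sum over sites yields the new global parameter vector.
    return (weights[:, None] * stacked).sum(axis=0)

# Three hypothetical sites with different data volumes; the larger
# sites contribute proportionally more to the global model.
global_params = federated_average(
    site_params=[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    site_sizes=[100, 300, 600],
)
```

In a full training loop this aggregation alternates with local training: the server broadcasts `global_params`, each site fine-tunes on its own data, and the updated parameters are averaged again.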


Subject(s)
COVID-19/physiopathology , Machine Learning , Outcome Assessment, Health Care , COVID-19/therapy , COVID-19/virology , Electronic Health Records , Humans , Prognosis , SARS-CoV-2/isolation & purification
20.
J Mater Sci Mater Med ; 21(4): 1057-68, 2010 Apr.
Article in English | MEDLINE | ID: mdl-19941041

ABSTRACT

Novel washout-resistant bone substitute materials consisting of gelatin-containing calcium silicate cements (CSCs) were developed. The washout resistance, setting time, diametral tensile strength (DTS), morphology, and phase composition of the hybrid cements were evaluated. The results indicated that the dominant beta-Ca(2)SiO(4) phase of the SiO(2)-CaO powders increased with increasing CaO content of the sols. After mixing with water, the setting times of the CSCs ranged from 10 to 29 min, increasing as the amount of CaO in the sols decreased. Addition of gelatin to the CSC significantly prolonged the setting time (P < 0.05), by factors of about 2 and 8 for 5% and 10% gelatin, respectively. However, the presence of gelatin appreciably improved the washout resistance and reduced the brittleness of the cements without adversely affecting mechanical strength. It was concluded that the 5% gelatin-containing CSC may be useful as a bioactive bone repair material.


Subject(s)
Bone Cements/pharmacology , Calcium Compounds/chemistry , Gelatin/chemistry , Hardness/drug effects , Silicates/chemistry , Water/pharmacology , Bone Cements/chemical synthesis , Bone Cements/chemistry , Bone Substitutes/chemical synthesis , Bone Substitutes/chemistry , Bone Substitutes/pharmacology , Cementation , Compressive Strength/drug effects , Gelatin/pharmacology , Materials Testing , Powders , Silicate Cement/chemistry , Silicate Cement/pharmacology , Stress, Mechanical , Surface Properties , Tensile Strength/drug effects , X-Ray Diffraction