Results 1 - 20 of 26
1.
Phys Med Biol ; 69(11)2024 May 21.
Article in English | MEDLINE | ID: mdl-38688292

ABSTRACT

Objective. The mean squared error (MSE), also known as L2 loss, has been widely used as a loss function to optimize image denoising models due to its strong performance as a mean estimator of the Gaussian noise model. Recently, various low-dose computed tomography (LDCT) image denoising methods using deep learning combined with the MSE loss have been developed; however, this approach suffers from the regression-to-the-mean problem, leading to over-smoothed edges and degraded texture in the image. Approach. To overcome this issue, we propose a stochastic function in the loss function to improve the texture of the denoised CT images, rather than relying on complicated networks or feature-space losses. The proposed loss function combines the MSE loss, which learns the mean distribution, with a Pearson divergence loss, which learns feature textures. Specifically, the Pearson divergence loss is computed in image space to measure the distance between the intensity measures of denoised low-dose and normal-dose CT images. The proposed model is evaluated with a novel multi-metric quantitative analysis utilizing relative texture feature distance. Results. Our experimental results show that the proposed Pearson divergence loss leads to a significant improvement in texture compared to the conventional MSE loss and a generative adversarial network (GAN), both qualitatively and quantitatively. Significance. Achieving consistent texture preservation in LDCT is a challenge for conventional GAN-type methods because of the adversarial trade-off between minimizing noise and preserving texture. By incorporating the Pearson regularizer in the loss function, we can easily balance these two conflicting properties. Consistently high-quality CT images can significantly help clinicians in diagnosis and support researchers in developing AI diagnostic models.
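As an illustrative sketch only (not the authors' implementation), the combined objective could pair a pixel-wise MSE term with a Pearson (chi-square) divergence computed between intensity histograms of the denoised and normal-dose images; the histogram binning and the weighting factor `lam` are hypothetical choices:

```python
import numpy as np

def pearson_divergence(p, q, eps=1e-8):
    """Pearson (chi-square) divergence between two discrete distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum((p - q) ** 2 / (q + eps)))

def combined_loss(denoised, target, n_bins=64, lam=0.1):
    """MSE term (mean estimation) plus a Pearson term on intensity
    histograms (texture statistics). `lam` is a hypothetical weight."""
    mse = float(np.mean((denoised - target) ** 2))
    lo = min(denoised.min(), target.min())
    hi = max(denoised.max(), target.max())
    p, _ = np.histogram(denoised, bins=n_bins, range=(lo, hi))
    q, _ = np.histogram(target, bins=n_bins, range=(lo, hi))
    return mse + lam * pearson_divergence(p.astype(float), q.astype(float))
```

For identical images both terms vanish; during training, the divergence term would penalize denoised outputs whose intensity statistics drift from the normal-dose texture.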


Subject(s)
Image Processing, Computer-Assisted, Radiation Dosage, Signal-To-Noise Ratio, Tomography, X-Ray Computed, Tomography, X-Ray Computed/methods, Image Processing, Computer-Assisted/methods, Humans, Deep Learning
2.
Med Phys ; 51(5): 3309-3321, 2024 May.
Article in English | MEDLINE | ID: mdl-38569143

ABSTRACT

BACKGROUND: Patient head motion is a common source of image artifacts in computed tomography (CT) of the head, leading to degraded image quality and potentially incorrect diagnoses. Partial angle reconstruction (PAR) divides the CT projection data into several consecutive angular segments and reconstructs each segment individually. Although motion estimation and compensation using PAR has been developed and investigated for cardiac CT scans, its potential for reducing motion artifacts in head CT scans remains unexplored. PURPOSE: To develop a deep learning (DL) model capable of directly estimating head motion from PAR images of head CT scans, and to integrate the estimated motion into an iterative reconstruction process to compensate for it. METHODS: Head motion is modeled as a rigid transformation described by six time-variant variables: three for translation and three for rotation. Each motion variable is modeled using a B-spline defined by five control points (CPs) along time. We split the full 360° projection data into 25 consecutive PARs and input them into a convolutional neural network (CNN) that outputs the estimated CPs for each motion variable. The estimated CPs are used to calculate the object motion in each projection, which is incorporated into the forward and backprojection of an iterative reconstruction algorithm to reconstruct the motion-compensated image. The performance of our DL model is evaluated through both simulation and phantom studies. RESULTS: The DL model achieved high accuracy in estimating head motion, as demonstrated in both the simulation study (mean absolute error (MAE) ranging from 0.28 to 0.45 mm or degree across the motion variables) and the phantom study (MAE ranging from 0.40 to 0.48 mm or degree).
The resulting motion-corrected image, $I_{DL,PAR}$, exhibited a significant reduction in motion artifacts compared to traditional filtered back-projection reconstructions, as evidenced in both the simulation study (image MAE drops from 178 ± 33 HU to 37 ± 9 HU; structural similarity index (SSIM) increases from 0.60 ± 0.06 to 0.98 ± 0.01) and the phantom study (image MAE drops from 117 ± 17 HU to 42 ± 19 HU; SSIM increases from 0.83 ± 0.04 to 0.98 ± 0.02). CONCLUSIONS: We demonstrate that using PAR with our proposed deep learning model enables accurate estimation of patient head motion and effectively reduces motion artifacts in the resulting head CT images.
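A minimal sketch of the rigid-motion model described above, with linear interpolation standing in for the paper's B-spline over five control points per variable; the function names and conventions (degrees, rotation order) are illustrative assumptions:

```python
import numpy as np

def motion_at(t, control_points, t_knots):
    """Interpolate the six rigid-motion variables (tx, ty, tz, rx, ry, rz)
    at normalized time t from per-variable control points.
    Linear interpolation stands in for the paper's B-spline."""
    return np.array([np.interp(t, t_knots, cp) for cp in control_points])

def rigid_matrix(params):
    """4x4 homogeneous rigid transform from 3 translations (mm) and
    3 rotations (degrees), composed here as Rz @ Ry @ Rx."""
    tx, ty, tz, rx, ry, rz = params
    rx, ry, rz = np.deg2rad([rx, ry, rz])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = [tx, ty, tz]
    return M
```

Evaluating the six variables at each projection's time stamp and building the corresponding rigid matrix is what an iterative reconstruction would fold into its forward and backprojection steps.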


Subject(s)
Artifacts, Deep Learning, Image Processing, Computer-Assisted, Tomography, X-Ray Computed, Humans, Tomography, X-Ray Computed/methods, Image Processing, Computer-Assisted/methods, Head/diagnostic imaging, Head Movements, Phantoms, Imaging
3.
Osteoarthr Cartil Open ; 6(1): 100436, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38384979

ABSTRACT

Background: Recent reports suggested that dual-energy CT (DECT) may help discriminate between different types of calcium phosphate crystals in vivo, which would have important implications for characterizing the crystal deposition that occurs in osteoarthritis. Purpose: Our aim was to test the hypothesis that DECT can effectively differentiate basic calcium phosphate (BCP) from calcium pyrophosphate (CPP) deposition diseases. Methods: Discarded tissue specimens from a total knee replacement in a 71-year-old patient with knee osteoarthritis and chondrocalcinosis were scanned using DECT at standard clinical parameters. The specimens were then examined by light microscopy, which revealed CPP deposition in 4 specimens (medial femoral condyle, lateral tibial plateau, and both menisci) without BCP deposition. Regions of interest were placed on post-processed CT images using Rho/Z maps (Syngo.via, Siemens Healthineers, VB10B) in different areas of CPP deposition, trabecular bone BCP (T-BCP), and subchondral bone plate BCP (C-BCP). Results: The Dual Energy Index (DEI) of CPP was 0.12 (SD = 0.02) for reader 1 and 0.09 (SD = 0.03) for reader 2. The effective atomic number (Zeff) of CPP was 10.83 (SD = 0.44) for reader 1 and 10.11 (SD = 0.66) for reader 2. Nearly all DECT parameters of CPP were higher than those of T-BCP, lower than those of C-BCP, and largely overlapping with Aggregate-BCP (the aggregate of T-BCP and C-BCP). Conclusion: Differentiation of different types of calcium crystals using DECT is not feasible in a clinical setting.
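For reference, the Dual Energy Index reported above is commonly computed from the CT numbers measured at the low- and high-energy acquisitions; a minimal sketch, assuming the usual Siemens-style definition with the +2000 offset:

```python
def dual_energy_index(hu_low, hu_high):
    """Dual Energy Index as commonly defined for dual-energy CT:
    DEI = (HU_low - HU_high) / (HU_low + HU_high + 2000),
    where the 2000 offset shifts CT numbers to an attenuation-like scale."""
    return (hu_low - hu_high) / (hu_low + hu_high + 2000.0)
```

Water yields a DEI of 0, while materials that attenuate relatively more at the low-energy spectrum (higher effective atomic number) yield positive values.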

4.
Phys Med Biol ; 68(20)2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37726013

ABSTRACT

Objective. Ultrasound is extensively utilized as a convenient and cost-effective method in emergency situations. Unfortunately, the limited availability of skilled clinicians in emergencies hinders the wider adoption of point-of-care ultrasound. To overcome this challenge, this paper aims to aid less experienced healthcare providers in emergency lung ultrasound scans. Approach. To assist healthcare providers, it is important to have a comprehensive model that can automatically guide the entire lung ultrasound process based on the clinician's workflow. In this paper, we propose a framework for diagnosing pneumothorax with artificial intelligence (AI) assistance. Specifically, the proposed framework follows the steps taken by skilled physicians. It begins with finding the appropriate transducer position on the chest to accurately locate the pleural line in B-mode. The next step involves acquiring temporal M-mode data to determine the presence of lung sliding, a crucial indicator for pneumothorax. To mimic the sequential process of clinicians, two deep learning (DL) models were developed. The first model performs quality assurance (QA) and regression of the pleural-line region of interest (ROI), while the second classifies lung sliding. To achieve inference on a mobile device, the EfficientNet-Lite0 model was further reduced to fewer than 3 million parameters. Main results. Both the QA and lung sliding classification models achieved over 95% area under the receiver operating characteristic curve (AUC), while the ROI performance reached 89% in the Dice similarity coefficient. The entire stepwise pipeline was simulated using retrospective data, yielding an AUC of 89%. Significance. The stepwise AI framework for pneumothorax diagnosis with QA offers an intelligible guide for each step of the clinical workflow and achieves high precision with real-time inference.
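The stepwise workflow (QA gate, then ROI regression, then lung sliding classification) could be orchestrated as in the sketch below; the three models are passed in as stand-in callables, and the 0.5 thresholds are assumed values, not the paper's operating points:

```python
import numpy as np

def stepwise_pneumothorax_pipeline(bmode, mmode, qa_model, roi_model,
                                   sliding_model, qa_threshold=0.5):
    """Gated inference mirroring the clinical workflow:
    1) QA on the B-mode frame; abort if transducer placement is inadequate,
    2) regress the pleural-line ROI,
    3) classify lung sliding on the M-mode data.
    All three models are illustrative stand-ins passed in as callables."""
    qa_score = qa_model(bmode)
    if qa_score < qa_threshold:
        return {"status": "reposition transducer", "qa": qa_score}
    roi = roi_model(bmode)                # pleural-line region of interest
    sliding_prob = sliding_model(mmode)   # P(lung sliding present)
    return {"status": "ok", "qa": qa_score, "roi": roi,
            "pneumothorax_suspected": sliding_prob < 0.5}
```

Gating on QA first means the expensive M-mode step only runs once the transducer position is judged adequate, matching the sequential clinical process.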


Subject(s)
Pneumothorax, Humans, Pneumothorax/diagnostic imaging, Retrospective Studies, Point-of-Care Systems, Artificial Intelligence, Ultrasonography/methods
5.
Phys Med Biol ; 68(9)2023 04 25.
Article in English | MEDLINE | ID: mdl-36990097

ABSTRACT

Objective. The purpose of this study is to assess first-in-human images and the unique capabilities of photon-counting-detector (PCD) CT, such as 'on demand' higher spatial resolution and multi-spectral imaging. Approach. In this study, the FDA 510(k)-cleared mobile PCD-CT (OmniTom Elite) was used. We imaged internationally certified CT phantoms and a human cadaver head to evaluate the feasibility of high-resolution (HR) and multi-energy imaging. We also demonstrate the performance of PCD-CT via first-in-human imaging by scanning three human volunteers. Main results. At the 5 mm slice thickness routinely used in diagnostic head CT, the first human PCD-CT images were diagnostically equivalent to those of the energy-integrating-detector (EID) CT scanner. The HR acquisition mode of PCD-CT achieved a resolution of 11 line pairs (lp)/cm, compared to 7 lp/cm using the same kernel (posterior fossa kernel) in the standard acquisition mode of EID-CT. For quantitative multi-energy CT performance, the measured CT numbers in virtual mono-energetic images (VMI) of iodine inserts in the Gammex Multi-Energy CT phantom (model 1492, Sun Nuclear Corporation, USA) matched the manufacturer reference values with a mean percent error of 3.25%. Multi-energy decomposition with PCD-CT demonstrated the separation and quantification of iodine, calcium, and water. Significance. PCD-CT can achieve multi-resolution acquisition modes without physically changing the CT detector. It provides superior spatial resolution compared with the standard acquisition mode of the conventional mobile EID-CT. The quantitative spectral capability of PCD-CT can provide accurate, simultaneous multi-energy images for material decomposition and VMI generation from a single exposure.


Subject(s)
Iodine, Photons, Humans, Tomography, X-Ray Computed/methods, Tomography Scanners, X-Ray Computed, Head, Phantoms, Imaging
6.
Phys Med Biol ; 67(11)2022 05 16.
Article in English | MEDLINE | ID: mdl-35390782

ABSTRACT

Objective. There are several x-ray computed tomography (CT) scanning strategies used to reduce radiation dose, such as (1) sparse-view CT, (2) low-dose CT, and (3) region-of-interest (ROI) CT (called interior tomography). To further reduce the dose, sparse-view and/or low-dose CT settings can be applied together with interior tomography. Interior tomography has various advantages in terms of reducing the number of detectors and decreasing the x-ray radiation dose. However, a large patient or a small field-of-view (FOV) detector can cause truncated projections, and the reconstructed images then suffer from severe cupping artifacts. In addition, although low-dose CT can reduce the radiation exposure, analytic reconstruction algorithms produce image noise. Recently, many researchers have utilized image-domain deep learning (DL) approaches to remove each artifact and demonstrated impressive performance, and the theory of deep convolutional framelets explains the reason for the improvement. Approach. In this paper, we found that it is difficult to solve coupled artifacts using an image-domain convolutional neural network (CNN) based on deep convolutional framelets. Significance. To address the coupled problem, we decouple it into two sub-problems: (i) image-domain noise reduction inside the truncated projection, to solve the low-dose CT problem, and (ii) extrapolation of the projection outside the truncated projection, to solve the ROI CT problem. The decoupled sub-problems are solved directly with a novel proposed end-to-end learning method using dual-domain CNNs. Main results. We demonstrate that the proposed method outperforms conventional image-domain DL methods, and a projection-domain CNN shows better performance than the image-domain CNNs commonly used by many researchers.


Subject(s)
Deep Learning, Algorithms, Artifacts, Humans, Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, X-Rays
7.
BMJ Open ; 12(2): e053635, 2022 02 21.
Article in English | MEDLINE | ID: mdl-35190428

ABSTRACT

OBJECTIVE: To develop simple but clinically informative risk stratification tools using a few top demographic factors and biomarkers at COVID-19 diagnosis to predict acute kidney injury (AKI) and death. DESIGN: Retrospective cohort analysis, follow-up from 1 February through 28 May 2020. SETTING: 3 teaching hospitals, 2 urban and 1 community-based in the Boston area. PARTICIPANTS: Eligible patients were at least 18 years old, tested COVID-19 positive from 1 February through 28 May 2020, and had at least two serum creatinine measurements within 30 days of a new COVID-19 diagnosis. Exclusion criteria were having chronic kidney disease or having a previous AKI within 3 months of a new COVID-19 diagnosis. MAIN OUTCOMES AND MEASURES: Time from new COVID-19 diagnosis until AKI event, time until death event. RESULTS: Among 3716 patients, there were 1855 (49.9%) males and the average age was 58.6 years (SD 19.2 years). Age, sex, white blood cell, haemoglobin, platelet, C reactive protein (CRP) and D-dimer levels were most strongly associated with AKI and/or death. We created risk scores using these variables predicting AKI within 3 days and death within 30 days of a new COVID-19 diagnosis. Area under the curve (AUC) for predicting AKI within 3 days was 0.785 (95% CI 0.758 to 0.813) and AUC for death within 30 days was 0.861 (95% CI 0.843 to 0.878). Haemoglobin was the most predictive component for AKI, and age the most predictive for death. Predictive accuracies using all study variables were similar to using the simplified scores. CONCLUSION: Simple risk scores using age, sex, a complete blood cell count, CRP and D-dimer were highly predictive of AKI and death and can help simplify and better inform clinical decision making.
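A sketch of how such a tool could be evaluated: a hypothetical point-based score over the listed predictors (the thresholds, units, and points here are illustrative, not the published cut-offs) together with a rank-based AUC computation:

```python
import numpy as np

def simple_risk_score(age, male, wbc, hemoglobin, platelets, crp, d_dimer):
    """Hypothetical point-based score over the study's top predictors;
    thresholds are illustrative, not the published cut-offs."""
    pts = 0
    pts += 2 if age >= 65 else 0
    pts += 1 if male else 0
    pts += 1 if wbc > 11.0 else 0          # 10^9/L
    pts += 1 if hemoglobin < 12.0 else 0   # g/dL
    pts += 1 if platelets < 150 else 0     # 10^9/L
    pts += 1 if crp > 10.0 else 0          # mg/L
    pts += 1 if d_dimer > 1.0 else 0       # ug/mL
    return pts

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity;
    assumes no tied scores for simplicity."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = (1 - labels).sum()
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

With outcome labels and per-patient scores in hand, `roc_auc` reproduces the kind of discrimination metric the study reports (e.g. AUC 0.785 for AKI, 0.861 for death).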


Subject(s)
Acute Kidney Injury, COVID-19, Renal Insufficiency, Chronic, Acute Kidney Injury/complications, Acute Kidney Injury/diagnosis, Adolescent, COVID-19 Testing, Cohort Studies, Hospitals, Humans, Male, Middle Aged, Renal Insufficiency, Chronic/complications, Renal Insufficiency, Chronic/diagnosis, Retrospective Studies, Risk Assessment, Risk Factors, SARS-CoV-2
8.
Diagnostics (Basel) ; 12(1)2022 Jan 03.
Article in English | MEDLINE | ID: mdl-35054267

ABSTRACT

Imaging plays an important role in assessing the severity of COVID-19 pneumonia. Recent COVID-19 research indicates that the disease progresses from the bottom of the lungs to the top. However, chest radiography (CXR) cannot directly provide a quantitative metric of radiographic opacities, and existing AI-assisted CXR analysis methods do not quantify regional severity. In this paper, to assist the regional analysis, we developed a fully automated framework using deep learning-based four-region segmentation and detection models to assist the quantification of COVID-19 pneumonia. Specifically, a segmentation model is first applied to separate the left and right lungs, and then a detection network for the carina and left hilum is used to separate the upper and lower lungs. To improve the segmentation performance, an ensemble strategy with five models is exploited. We evaluated the clinical relevance of the proposed method against the Radiographic Assessment of Lung Edema (RALE) score annotated by physicians. Mean intensities of the four segmented regions show a positive correlation with the regional extent and density scores of pulmonary opacities based on the RALE. Therefore, the proposed method can accurately assist the quantification of regional pulmonary opacities in COVID-19 pneumonia patients.

9.
Phys Med Biol ; 66(23)2021 11 26.
Article in English | MEDLINE | ID: mdl-34768246

ABSTRACT

Segmentation has been widely used in diagnosis, lesion detection, and surgery planning. Although deep learning (DL)-based segmentation methods currently outperform traditional methods, most DL-based segmentation models are computationally expensive and memory-inefficient, which makes them unsuitable for interventional liver surgery. A simple solution is to make the segmentation model very small for fast inference; however, there is a trade-off between model size and performance. In this paper, we propose a DL-based real-time 3-D liver CT segmentation method in which knowledge distillation (KD), i.e., knowledge transfer from a teacher to a student model, is incorporated to compress the model while preserving performance. Because knowledge transfer is known to be inefficient when the disparity between teacher and student model sizes is large, we propose a growing teacher assistant network (GTAN) to gradually learn the knowledge without extra computational cost, which can efficiently transfer knowledge even across a large gap between teacher and student model sizes. In our results, the Dice similarity coefficient of the student model with KD improved by 1.2% (85.9% to 87.1%) compared to the student model without KD, matching the performance of the teacher model while using only 8% (100k) of its parameters. Furthermore, with a student model of 2% (30k) parameters, the proposed GTAN improved the Dice coefficient by about 2% compared to the student model without KD, with an inference time of 13 ms per 3-D image. Therefore, the proposed method has great potential for intervention in liver surgery as well as many other real-time applications.
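The KD component can be illustrated with the standard soft-target distillation loss in the style of Hinton et al.; this is a generic sketch of knowledge transfer between teacher and student logits, not the GTAN itself:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target KD term: KL(teacher_T || student_T), scaled by T^2 so
    gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

In the growing-teacher-assistant setting, the same loss would be applied repeatedly along a chain of progressively smaller models, so each transfer bridges only a modest capacity gap.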


Subject(s)
Liver, Tomography, X-Ray Computed, Humans, Liver/diagnostic imaging, Radionuclide Imaging
10.
Med Phys ; 48(12): 7657-7672, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34791655

ABSTRACT

PURPOSE: Deep learning-based image denoising and reconstruction methods have demonstrated promising performance on low-dose CT imaging in recent years. However, most existing deep learning-based low-dose CT reconstruction methods require normal-dose images for training. Such clean images sometimes do not exist, for example in dynamic CT imaging or for very large patients. The purpose of this work is to develop a deep learning-based low-dose CT image reconstruction algorithm that does not need clean images for training. METHODS: In this paper, we propose a novel reconstruction algorithm in which the image prior is expressed via a Noise2Noise network whose weights are fine-tuned along with the image during the iterative reconstruction. The Noise2Noise network builds a self-consistent loss by splitting the projection data and mapping the corresponding filtered backprojection (FBP) results to each other with a deep neural network. The network weights are optimized along with the image to be reconstructed under an alternating optimization scheme. In the proposed method, no clean image is needed for network training, and the testing-time fine-tuning leads to an optimization tailored to each reconstruction. RESULTS: We used the 2016 Low-dose CT Challenge dataset to validate the feasibility of the proposed method. We compared its performance to several existing iterative reconstruction algorithms that do not need clean training data, including total variation, non-local means, convolutional sparse coding, and Noise2Noise denoising. The proposed Noise2Noise reconstruction achieved better RMSE, SSIM, and texture preservation than the other methods. The performance is also robust against the different noise levels, hyperparameters, and network structures used in the reconstruction.
Furthermore, we demonstrated that the proposed method achieves competitive results without any pre-training of the network, that is, using randomly initialized network weights during testing. The proposed iterative reconstruction algorithm also converges empirically with and without network pre-training. CONCLUSIONS: The proposed Noise2Noise reconstruction method can achieve promising image quality in low-dose CT image reconstruction. The method works both with and without pre-training, and only noisy data are required for pre-training.
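The projection-splitting idea can be sketched as follows: interleaved view subsets carry independent noise realizations, so a denoiser mapping one split's reconstruction to the other's yields a self-consistent training signal without clean images. The split scheme and the `denoiser` callable are illustrative stand-ins, not the paper's exact network:

```python
import numpy as np

def split_projections(projections):
    """Split projection views into two interleaved subsets (even vs odd
    view angles) carrying independent noise realizations."""
    return projections[0::2], projections[1::2]

def noise2noise_loss(recon_a, recon_b, denoiser):
    """Self-consistent Noise2Noise loss: the denoiser applied to one
    split's reconstruction should predict the other split's
    reconstruction, in both directions."""
    la = np.mean((denoiser(recon_a) - recon_b) ** 2)
    lb = np.mean((denoiser(recon_b) - recon_a) ** 2)
    return float(la + lb)
```

In the full algorithm, minimizing such a loss would alternate with a data-consistency update on the image itself.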


Subject(s)
Image Processing, Computer-Assisted, Neural Networks, Computer, Algorithms, Humans, Phantoms, Imaging, Research Design, Tomography, X-Ray Computed
11.
Nat Med ; 27(10): 1735-1743, 2021 10.
Article in English | MEDLINE | ID: mdl-34526699

ABSTRACT

Federated learning (FL) is a method used for training artificial intelligence models with data from multiple sources while maintaining data anonymity, thus removing many barriers to data sharing. Here we used data from 20 institutes across the globe to train an FL model, called EXAM (electronic medical record (EMR) chest X-ray AI model), that predicts the future oxygen requirements of symptomatic patients with COVID-19 using inputs of vital signs, laboratory data and chest X-rays. EXAM achieved an average area under the curve (AUC) >0.92 for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room, and it provided a 16% improvement in average AUC measured across all participating sites and an average increase in generalizability of 38% when compared with models trained at a single site using that site's data. For prediction of mechanical ventilation treatment or death at 24 h at the largest independent test site, EXAM achieved a sensitivity of 0.950 and a specificity of 0.882. In this study, FL facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.
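The FL training loop rests on parameter averaging rather than data pooling; a minimal FedAvg-style sketch (generic federated learning, not the exact EXAM training protocol):

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """One FedAvg round: data-size-weighted average of model parameters,
    so no raw patient data leaves any site.
    site_weights: list (per site) of lists of parameter arrays.
    site_sizes:   number of training samples at each site."""
    total = float(sum(site_sizes))
    return [
        sum(n / total * w[k] for w, n in zip(site_weights, site_sizes))
        for k in range(len(site_weights[0]))
    ]
```

Each round, every site trains locally on its own data; only these weighted parameter averages cross institutional boundaries before being broadcast back for the next round.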


Subject(s)
COVID-19/physiopathology, Machine Learning, Outcome Assessment, Health Care, COVID-19/therapy, COVID-19/virology, Electronic Health Records, Humans, Prognosis, SARS-CoV-2/isolation & purification
12.
PET Clin ; 16(4): 533-542, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34537129

ABSTRACT

PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, some limitations compromise its precision: the absorption of photons in the body causes signal attenuation; the dead-time limit of system components leads to count-rate loss; the scattered and random events received by the detector introduce additional noise; the characteristics of the detector limit the spatial resolution; and the signal-to-noise ratio is low because of scan-time limits (eg, dynamic scans) and dose concerns. Early PET reconstruction methods were analytical approaches based on an idealized mathematical model.


Subject(s)
Algorithms, Artificial Intelligence, Humans, Image Processing, Computer-Assisted, Positron-Emission Tomography, Signal-To-Noise Ratio
13.
Phys Med Biol ; 66(15)2021 07 27.
Article in English | MEDLINE | ID: mdl-34126602

ABSTRACT

Compared to conventional computed tomography (CT), spectral CT provides the capability of material decomposition, which can be used in many clinical diagnosis applications. However, the decomposed images can be very noisy due to the dose limit in CT scanning and the noise magnification of the material decomposition process. To alleviate this situation, we propose an iterative one-step inversion material decomposition algorithm with a Noise2Noise prior. The algorithm estimates material images directly from projection data and uses a Noise2Noise prior for denoising. In contrast to supervised deep learning methods, the designed Noise2Noise prior is built on self-supervised learning and does not need external data for training. In our method, the data consistency term and the Noise2Noise network are alternately optimized in the iterative framework, using a separable quadratic surrogate (SQS) and the Adam algorithm, respectively. The proposed iterative algorithm was validated and compared to other methods on simulated spectral CT data, preclinical photon-counting CT data, and clinical dual-energy CT data. Quantitative analysis shows that our proposed method performs promisingly on noise suppression and structural detail recovery.


Subject(s)
Algorithms, Image Processing, Computer-Assisted, Phantoms, Imaging, Signal-To-Noise Ratio, Tomography, X-Ray Computed
14.
Eur J Radiol ; 139: 109583, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33846041

ABSTRACT

PURPOSE: As of August 30th, there were in total 25.1 million confirmed cases and 845 thousand deaths caused by coronavirus disease 2019 (COVID-19) worldwide. With overwhelming demands on medical resources, patient stratification based on risk is essential. In this multi-center study, we built prognosis models to predict severity outcomes, combining patients' electronic health records (EHR), which included vital signs and laboratory data, with deep learning- and CT-based severity prediction. METHOD: We first developed a CT segmentation network using datasets from multiple institutions worldwide. Two biomarkers were extracted from the CT images: total opacity ratio (TOR) and consolidation ratio (CR). After obtaining TOR and CR, further prognosis analysis was conducted on datasets from INSTITUTE-1, INSTITUTE-2, and INSTITUTE-3. For each data cohort, a generalized linear model (GLM) was applied for prognosis prediction. RESULTS: For the deep learning model, the correlation coefficients between the network prediction and manual segmentation were 0.755, 0.919, and 0.824 for the three cohorts, respectively. The AUC (95% CI) of the final prognosis models was 0.85 (0.77, 0.92), 0.93 (0.87, 0.98), and 0.86 (0.75, 0.94) for the INSTITUTE-1, INSTITUTE-2, and INSTITUTE-3 cohorts, respectively. Either TOR or CR appears in all three final prognosis models. Age, white blood cell count (WBC), and platelet count (PLT) were selected as predictors in two cohorts. Oxygen saturation (SpO2) was selected as a predictor in one cohort. CONCLUSION: The developed deep learning method can segment lung infection regions. The prognosis results indicate that age, SpO2, the CT biomarkers, PLT, and WBC were the most important prognostic predictors of COVID-19 in our prognosis model.
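Assuming TOR and CR are volume fractions of the segmented lung (a plausible reading of the abstract, not a confirmed definition), the two CT biomarkers could be computed from binary segmentation masks as:

```python
import numpy as np

def opacity_biomarkers(lung_mask, opacity_mask, consolidation_mask):
    """Hypothetical biomarker computation from binary masks:
    total opacity ratio  TOR = |opacity| / |lung|
    consolidation ratio  CR  = |consolidation| / |lung|
    Masks are 0/1 arrays over the same voxel grid."""
    lung = float(lung_mask.sum())
    tor = float(opacity_mask.sum() / lung)
    cr = float(consolidation_mask.sum() / lung)
    return tor, cr
```

In the study's pipeline, the masks would come from the segmentation network, and TOR/CR would then enter the GLM alongside the EHR predictors.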


Subject(s)
COVID-19, Deep Learning, Electronic Health Records, Humans, Lung, Prognosis, SARS-CoV-2, Tomography, X-Ray Computed
15.
Med Image Anal ; 70: 101993, 2021 05.
Article in English | MEDLINE | ID: mdl-33711739

ABSTRACT

In recent years, deep learning-based image analysis methods have been widely applied in computer-aided detection, diagnosis, and prognosis, and have shown their value during the public health crisis of the novel coronavirus disease 2019 (COVID-19) pandemic. The chest radiograph (CXR) has played a crucial role in COVID-19 patient triaging, diagnosis, and monitoring, particularly in the United States. Considering the mixed and unspecific signals in CXR, an image retrieval model that provides both similar images and associated clinical information can be more clinically meaningful than a direct image diagnostic model. In this work, we develop a novel CXR image retrieval model based on deep metric learning. Unlike traditional diagnostic models, which aim at learning a direct mapping from images to labels, the proposed model learns an optimized embedding space of images, where images with the same labels and similar contents are pulled together. The proposed model utilizes a multi-similarity loss with a hard-mining sampling strategy and an attention mechanism to learn the optimized embedding space, and provides similar images, visualizations of disease-related attention maps, and useful clinical information to assist clinical decisions. The model is trained and validated on an international multi-site COVID-19 dataset collected from 3 different sources. Experimental results on COVID-19 image retrieval and diagnosis tasks show that the proposed model can serve as a robust solution for CXR analysis and patient management for COVID-19. The model is also tested for its transferability on a different clinical decision support task for COVID-19, where the pre-trained model is applied to extract image features from a new dataset without any further training.
The extracted features are then combined with COVID-19 patients' vital signs, lab tests, and medical histories to predict the possibility of airway intubation within 72 hours, which is strongly associated with patient prognosis and is crucial for patient care and hospital resource planning. These results demonstrate that our deep metric learning-based image retrieval model is highly effective for CXR retrieval, diagnosis, and prognosis, and thus has great clinical value for the treatment and management of COVID-19 patients.


Subject(s)
COVID-19/diagnostic imaging, Deep Learning, Image Interpretation, Computer-Assisted, Tomography, X-Ray Computed, Algorithms, Female, Humans, Male, Middle Aged, Pandemics
16.
Sci Rep ; 11(1): 858, 2021 01 13.
Article in English | MEDLINE | ID: mdl-33441578

ABSTRACT

To compare the performance of artificial intelligence (AI) and Radiographic Assessment of Lung Edema (RALE) scores from frontal chest radiographs (CXRs) for predicting patient outcomes and the need for mechanical ventilation in COVID-19 pneumonia. Our IRB-approved study included 1367 serial CXRs from 405 adult patients (mean age 65 ± 16 years) from two sites in the US (Site A) and South Korea (Site B). We recorded information pertaining to patient demographics (age, gender), smoking history, comorbid conditions (such as cancer, cardiovascular and other diseases), vital signs (temperature, oxygen saturation), and available laboratory data (such as WBC count and CRP). Two thoracic radiologists performed the qualitative assessment of all CXRs based on the RALE score for assessing the severity of lung involvement. All CXRs were processed with a commercial AI algorithm to obtain the percentage of the lung affected with findings related to COVID-19 (AI score). Independent t- and chi-square tests were used in addition to multiple logistic regression with Area Under the Curve (AUC) as output for predicting disease outcome and the need for mechanical ventilation. The RALE and AI scores had a strong positive correlation in CXRs from each site (r2 = 0.79-0.86; p < 0.0001). Patients who died or received mechanical ventilation had significantly higher RALE and AI scores than those with recovery or without the need for mechanical ventilation (p < 0.001). Patients with a more substantial difference in baseline and maximum RALE scores and AI scores had a higher prevalence of death and mechanical ventilation (p < 0.001). The addition of patients' age, gender, WBC count, and peripheral oxygen saturation increased the outcome prediction from 0.87 to 0.94 (95% CI 0.90-0.97) for RALE scores and from 0.82 to 0.91 (95% CI 0.87-0.95) for the AI scores. 
The AI algorithm is as robust a predictor of adverse patient outcomes (death or need for mechanical ventilation) as the subjective RALE score in patients with COVID-19 pneumonia.
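The AUC reported above can be computed directly from the rank statistics of the scores. A minimal sketch (the `labels` and `scores` below are invented for illustration, not study data):

```python
def auc_score(labels, scores):
    """Rank-based AUC (Mann-Whitney U): the probability that a randomly
    chosen positive case is scored above a randomly chosen negative case,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical severity scores; 1 marks an adverse outcome.
labels = [0, 0, 0, 1, 1, 1, 0, 1]
scores = [0.10, 0.35, 0.20, 0.80, 0.65, 0.90, 0.55, 0.40]
print(auc_score(labels, scores))  # → 0.9375
```

An AUC of 1.0 would mean every adverse case out-scores every non-adverse case; 0.5 is chance-level discrimination.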


Subject(s)
Artificial Intelligence , COVID-19/diagnosis , COVID-19/therapy , Respiration, Artificial , Adult , Aged , Aged, 80 and over , COVID-19/diagnostic imaging , Cohort Studies , Female , Humans , Image Processing, Computer-Assisted , Lung/diagnostic imaging , Lung/pathology , Male , Middle Aged , Organ Size , Prognosis , Tomography, X-Ray Computed , Young Adult
17.
Res Sq ; 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33442676

ABSTRACT

'Federated Learning' (FL) is a method to train Artificial Intelligence (AI) models with data from multiple sources while maintaining anonymity of the data, thus removing many barriers to data sharing. During the SARS-COV-2 pandemic, 20 institutes collaborated on a healthcare FL study to predict future oxygen requirements of infected patients using inputs of vital signs, laboratory data, and chest x-rays, constituting the "EXAM" (EMR CXR AI Model) model. EXAM achieved an average Area Under the Curve (AUC) of over 0.92, an average improvement of 16%, and a 38% increase in generalisability over local models. The FL paradigm was successfully applied to facilitate a rapid data science collaboration without data exchange, resulting in a model that generalised across heterogeneous, unharmonized datasets. This provided the broader healthcare community with a validated model to respond to COVID-19 challenges, as well as set the stage for broader use of FL in healthcare.
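The abstract does not detail the aggregation rule used by EXAM; a common choice in federated learning, sketched here purely as an assumption, is sample-size-weighted parameter averaging (FedAvg-style), in which sites exchange only model parameters, never patient data:

```python
def fed_avg(site_weights, site_sizes):
    """Weighted average of per-site model parameters: each site's local
    model contributes in proportion to its local sample count, so larger
    cohorts pull the global model harder."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical sites, each holding a 2-parameter local model.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 200, 100]
print(fed_avg(weights, sizes))  # → [3.0, 4.0]
```

In a full FL round this averaging repeats: the server broadcasts the averaged parameters, each site trains locally, and the updated parameters are averaged again.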

18.
IEEE J Biomed Health Inform ; 24(12): 3529-3538, 2020 12.
Article in English | MEDLINE | ID: mdl-33044938

ABSTRACT

Early and accurate diagnosis of Coronavirus disease (COVID-19) is essential for patient isolation and contact tracing so that the spread of infection can be limited. Computed tomography (CT) can provide important information in COVID-19, especially for patients with moderate to severe disease as well as those with worsening cardiopulmonary status. As an automatic tool, deep learning methods can be utilized to perform semantic segmentation of affected lung regions, which is important to establish disease severity and prognosis prediction. Both the extent and type of pulmonary opacities help assess disease severity. However, manual pixel-level multi-class labelling is time-consuming, subjective, and non-quantitative. In this article, we proposed a hybrid weak label-based deep learning method that utilizes both the manually annotated pulmonary opacities from COVID-19 pneumonia and the patient-level disease-type information available from the clinical report. A UNet was first trained with semantic labels to segment the total infected region. It was used to initialize another UNet, which was trained to segment the consolidations with patient-level information using the Expectation-Maximization (EM) algorithm. To demonstrate the performance of the proposed method, multi-institutional CT datasets from Iran, Italy, South Korea, and the United States were utilized. Results show that our proposed method can predict the infected regions as well as the consolidation regions with good correlation to human annotation.
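The Expectation-Maximization alternation the method relies on can be illustrated on a toy problem. The sketch below fits a two-component 1-D Gaussian mixture with unit variances: the E-step computes soft responsibilities, the M-step re-estimates the means. This is an illustration of the EM principle only, not the paper's weak-label segmentation model:

```python
import math

def em_two_gaussians(data, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture (unit
    variance, equal priors). E-step: responsibility of component 1 for
    each point; M-step: responsibility-weighted mean updates."""
    mu = [min(data), max(data)]  # crude but effective initialisation
    for _ in range(iters):
        # E-step: soft assignment of each point to component 1
        r = [
            math.exp(-(x - mu[1]) ** 2 / 2)
            / (math.exp(-(x - mu[0]) ** 2 / 2) + math.exp(-(x - mu[1]) ** 2 / 2))
            for x in data
        ]
        # M-step: weighted means
        mu[1] = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu[0] = sum((1 - ri) * x for ri, x in zip(r, data)) / sum(1 - ri for ri in r)
    return mu

data = [0.1, -0.2, 0.0, 4.9, 5.2, 5.0]
print(em_two_gaussians(data))  # means converge near 0 and 5
```

In the paper the hidden variables are per-pixel consolidation labels constrained by the patient-level disease type, but the E/M alternation is the same.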


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Tomography, X-Ray Computed/methods , Algorithms , COVID-19/virology , Female , Humans , Male , Retrospective Studies , SARS-CoV-2/isolation & purification , Severity of Illness Index
19.
Med Phys ; 46(11): 4763-4776, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31132144

ABSTRACT

PURPOSE: Deep neural network-based image reconstruction has demonstrated promising performance in medical imaging for undersampled and low-dose scenarios. However, it requires a large amount of memory and extensive training time. It is especially challenging to train reconstruction networks for three-dimensional computed tomography (CT) because of the high resolution of CT images. The purpose of this work is to reduce the memory and time consumption of training reconstruction networks for CT to make it practical on current hardware, while maintaining the quality of the reconstructed images. METHODS: We unrolled the proximal gradient descent algorithm for iterative image reconstruction to finite iterations and replaced the terms related to the penalty function with trainable convolutional neural networks (CNN). The network was trained greedily iteration by iteration in the image domain on patches, which requires a reasonable amount of memory and time on a mainstream graphics processing unit (GPU). To overcome the local-minimum problem caused by greedy learning, we used a deep UNet as the CNN and incorporated separable quadratic surrogates with ordered subsets for data fidelity, so that the solution could escape from shallow local minima and achieve better image quality. RESULTS: The proposed method achieved comparable image quality with state-of-the-art neural networks for CT image reconstruction on two-dimensional (2D) sparse-view and limited-angle problems on the low-dose CT challenge dataset. The difference in root-mean-square-error (RMSE) and structural similarity index (SSIM) was within [-0.23,0.47] HU and [0,0.001], respectively, at the 95% confidence level. For three-dimensional (3D) image reconstruction with an ordinary-size CT volume, the proposed method only needed 2 GB of GPU memory and 0.45 s per training iteration as minimum requirements, whereas existing methods may require 417 GB and 31 min.
The proposed method achieved improved performance compared to total variation- and dictionary learning-based iterative reconstruction for both 2D and 3D problems. CONCLUSIONS: We proposed a training-time computationally efficient neural network for CT image reconstruction. The proposed method achieved comparable image quality with state-of-the-art neural networks for CT reconstruction, with significantly reduced memory and time requirements during training. The proposed method is applicable to 3D image reconstruction problems such as cone-beam CT and tomosynthesis on mainstream GPUs.
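The unrolling idea can be sketched compactly: alternate a data-fidelity gradient step with a proximal (regularisation) step for a fixed number of iterations. In the paper each proximal step is a trainable CNN; the sketch below substitutes a fixed soft-threshold, and uses a toy linear forward operator `A` with illustrative step and threshold values, so that it stays self-contained:

```python
import numpy as np

def unrolled_pgd(A, y, n_iters=1000, step=0.1, thresh=0.01):
    """Proximal gradient descent unrolled to a fixed number of
    iterations. Each iteration takes a gradient step on the data-fidelity
    term ||Ax - y||^2 and then applies a proximal operator; here the prox
    is a fixed soft-threshold standing in for the trained CNN."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)                     # fidelity step
        x = np.sign(x) * np.maximum(np.abs(x) - thresh, 0)   # "prox" step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) / np.sqrt(20)   # toy forward operator
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -1.5]                      # sparse ground truth
y = A @ x_true
x_hat = unrolled_pgd(A, y)
print(np.round(x_hat, 2))                         # recovers x_true closely
```

Making the prox trainable (and training it greedily, one unrolled iteration at a time) is what lets the paper fit 3D reconstruction into ordinary GPU memory.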


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed , Imaging, Three-Dimensional , Quality Control , Radiation Dosage , Time Factors
20.
Sci Rep ; 8(1): 14195, 2018 09 21.
Article in English | MEDLINE | ID: mdl-30242169

ABSTRACT

Computed tomography (CT) is used to diagnose many emergent medical conditions, including stroke and traumatic brain injuries. Unfortunately, the size, weight, and expense of CT systems make them largely inaccessible for patients outside of major hospitals. We have designed a module containing multiple miniature x-ray sources that could allow for CT systems to be significantly lighter, smaller, and cheaper, and to operate without any moving parts. We have developed a novel photocathode-based x-ray source, created by depositing a thin film of magnesium on an electron multiplier. When illuminated by a UV LED, this photocathode emits a beam of electrons, with a beam current of up to 1 mA. The produced electrons are accelerated through a high voltage to a tungsten target. These sources are individually addressable and can be pulsed rapidly, through electronic control of the LEDs. Seven of these sources are housed together in a 17.5 degree arc within a custom vacuum manifold. A full ring of these modules could be used for CT imaging. By pulsing the sources in series, we are able to demonstrate x-ray tomosynthesis without any moving parts. With a clinical flat-panel detector, we demonstrate 3D acquisition and reconstructions of a cadaver swine lung.
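Reconstructing a plane of interest from a ring of fixed, individually pulsed sources can be illustrated with classic shift-and-add tomosynthesis: each source's known geometry induces an in-plane shift of the projection, so aligning the projections brings one depth plane into focus while other depths blur out. The 1-D sketch below, with hypothetical per-source shifts, illustrates the principle only and is not the paper's flat-panel reconstruction pipeline:

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Shift-and-add tomosynthesis: undo the shift each source position
    induces for the plane of interest, then average. Features in that
    plane align and reinforce; features at other depths smear out."""
    aligned = [np.roll(p, -s) for p, s in zip(projections, shifts)]
    return np.mean(aligned, axis=0)

# A feature at index 8 appears displaced by a source-dependent shift.
base = np.zeros(16)
base[8] = 1.0
shifts = [-2, -1, 0, 1, 2]                       # hypothetical per-source shifts
projections = [np.roll(base, s) for s in shifts]
plane = shift_and_add(projections, shifts)
print(np.argmax(plane))                          # → 8 (feature back in focus)
```

Because the sources are electronically pulsed rather than mechanically moved, each projection in such a scheme corresponds to one LED-triggered source firing in sequence.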


Subject(s)
Tomography, X-Ray Computed/methods , Animals , Electrons , Humans , Phantoms, Imaging , Swine , X-Rays