Results 1-20 of 37
1.
Radiology; 308(1): e222937, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37489991

ABSTRACT

Background: An artificial intelligence (AI) algorithm has been developed for fully automated body composition assessment of lung cancer screening noncontrast low-dose CT of the chest (LDCT) scans, but the utility of these measurements in disease risk prediction models has not been assessed. Purpose: To evaluate the added value of CT-based AI-derived body composition measurements in risk prediction of lung cancer incidence, lung cancer death, cardiovascular disease (CVD) death, and all-cause mortality in the National Lung Screening Trial (NLST). Materials and Methods: In this secondary analysis of the NLST, body composition measurements, including area and attenuation attributes of skeletal muscle and subcutaneous adipose tissue, were derived from baseline LDCT examinations by using a previously developed AI algorithm. The added value of these measurements was assessed with sex- and cause-specific Cox proportional hazards models with and without the AI-derived body composition measurements for predicting lung cancer incidence, lung cancer death, CVD death, and all-cause mortality. Models were adjusted for confounding variables including age; body mass index; quantitative emphysema; coronary artery calcification; history of diabetes, heart disease, hypertension, and stroke; and other PLCOm2012 lung cancer risk factors. Goodness-of-fit improvements were assessed with the likelihood ratio test. Results: Among 20 768 included participants (median age, 61 years [IQR, 57-65 years]; 12 317 men), 865 were diagnosed with lung cancer and 4180 died during follow-up. Including the AI-derived body composition measurements improved risk prediction for lung cancer death (male participants: χ2 = 23.09, P < .001; female participants: χ2 = 15.04, P = .002), CVD death (males: χ2 = 69.94, P < .001; females: χ2 = 16.60, P < .001), and all-cause mortality (males: χ2 = 248.13, P < .001; females: χ2 = 94.54, P < .001), but not for lung cancer incidence (male participants: χ2 = 2.53, P = .11; female participants: χ2 = 1.73, P = .19). Conclusion: The body composition measurements automatically derived from baseline low-dose CT examinations added predictive value for lung cancer death, CVD death, and all-cause death, but not for lung cancer incidence in the NLST. Clinical trial registration no. NCT00047385. © RSNA, 2023. Supplemental material is available for this article. See also the editorial by Fintelmann in this issue.
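The added-value assessment above amounts to a likelihood ratio test between nested Cox models. A minimal sketch of that comparison using lifelines and scipy; the column names and synthetic data below are illustrative assumptions, not NLST variables or the study's actual covariate set:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import chi2

# Synthetic stand-in data; all column names are hypothetical.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(8.0, n),        # follow-up years
    "event": rng.integers(0, 2, n),         # event indicator
    "age": rng.normal(61, 4, n),
    "bmi": rng.normal(27, 4, n),
    "muscle_area": rng.normal(150, 25, n),  # AI-derived (assumed name)
    "muscle_hu": rng.normal(35, 8, n),      # AI-derived (assumed name)
    "sat_area": rng.normal(200, 60, n),     # AI-derived (assumed name)
})
base_cols = ["age", "bmi"]
bc_cols = ["muscle_area", "muscle_hu", "sat_area"]

cph_base = CoxPHFitter().fit(df[["time", "event"] + base_cols],
                             duration_col="time", event_col="event")
cph_full = CoxPHFitter().fit(df[["time", "event"] + base_cols + bc_cols],
                             duration_col="time", event_col="event")

# Likelihood ratio test for the added body composition terms:
# the statistic is referred to a chi-square with one degree of
# freedom per added covariate.
lr_stat = 2.0 * (cph_full.log_likelihood_ - cph_base.log_likelihood_)
p_value = chi2.sf(lr_stat, df=len(bc_cols))
print(f"chi2 = {lr_stat:.2f}, p = {p_value:.3g}")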


Subject(s)
Cardiovascular Diseases; Lung Neoplasms; Female; Male; Humans; Middle Aged; Early Detection of Cancer; Artificial Intelligence; Body Composition; Lung
2.
Neurocomputing (Amst); 397: 48-59, 2020 Jul 15.
Article in English | MEDLINE | ID: mdl-32863584

ABSTRACT

With the rapid development of image acquisition and storage, multiple images per class are commonly available for computer vision tasks (e.g., face recognition, object detection, medical imaging). Recently, the recurrent neural network (RNN) has been widely integrated with convolutional neural networks (CNN) to perform image classification on ordered (sequential) data. In this paper, by permuting multiple images into multiple dummy orders, we generalize the ordered "RNN+CNN" design (longitudinal) to a novel unordered fashion, called Multi-path x-D Recurrent Neural Network (MxDRNN), for image classification. To the best of our knowledge, few (if any) existing studies have deployed the RNN framework on unordered intra-class images to improve classification performance. Specifically, multiple learning paths are introduced in the MxDRNN to extract discriminative features by permuting input dummy orders. Eight datasets from five different fields (MNIST, 3D-MNIST, CIFAR, VGGFace2, and lung screening computed tomography) are included to evaluate the performance of our method. The proposed MxDRNN improves the baseline performance by a large margin across the different application fields (e.g., accuracy from 46.40% to 76.54% on the VGGFace2 test pose set, AUC from 0.7418 to 0.8162 on the NLST lung dataset). Additionally, empirical experiments show the MxDRNN is more robust to category-irrelevant attributes (e.g., expression and pose in face images), which may introduce difficulties for image classification and algorithm generalizability. The code is publicly available.
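A rough sketch of the multi-path permutation idea: intra-class images are permuted into several dummy orders, each order is run through a shared CNN+GRU path, and the path outputs are pooled. The layer sizes and mean-pooling here are illustrative assumptions, not the published MxDRNN configuration:

import itertools
import torch
import torch.nn as nn

class MultiPathRNN(nn.Module):
    """Unordered 'RNN+CNN' sketch: several permutations (dummy orders) of
    intra-class images pass through a shared GRU; outputs are averaged."""
    def __init__(self, feat_dim=64, hidden=64, n_classes=10, n_paths=4):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                 nn.Linear(8 * 16, feat_dim))
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)
        self.n_paths = n_paths

    def forward(self, imgs):                        # imgs: (B, N, 1, H, W)
        B, N = imgs.shape[:2]
        feats = self.cnn(imgs.flatten(0, 1)).view(B, N, -1)
        perms = list(itertools.permutations(range(N)))[: self.n_paths]
        logits = []
        for p in perms:                              # one path per dummy order
            _, h = self.rnn(feats[:, list(p)])
            logits.append(self.fc(h[-1]))
        return torch.stack(logits).mean(0)           # pool across paths

model = MultiPathRNN()
print(model(torch.randn(2, 3, 1, 16, 16)).shape)     # torch.Size([2, 10])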

3.
IEEE Trans Med Imaging; 43(5): 1995-2009, 2024 May.
Article in English | MEDLINE | ID: mdl-38224508

ABSTRACT

Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated. However, medical image datasets are often low in sample size and only partially labeled, i.e., only a subset of organs are annotated. Therefore, it is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential. In this paper, we systematically investigate the partial-label segmentation problem with theoretical and empirical analyses of prior techniques. We revisit the problem from the perspective of partial-label supervision signals and identify two signals derived from ground truth and one from pseudo labels. We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training. Concretely, we first train an initial unified model using two ground-truth-based signals and then iteratively incorporate the pseudo label signal into the initial model using self-training. To mitigate performance degradation caused by unreliable pseudo labels, we assess the reliability of pseudo labels via outlier detection in latent space and exclude the most unreliable pseudo labels from each self-training iteration. Extensive experiments are conducted on one public and three private partial-label segmentation tasks over 12 CT datasets. Experimental results show that our proposed COSST achieves significant improvement over the baseline method, i.e., individual networks trained on each partially labeled dataset. Compared to the state-of-the-art partial-label segmentation methods, COSST demonstrates consistent superior performance on various segmentation tasks and with different training data sizes.
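The pseudo-label filtering step can be approximated with a generic latent-space outlier detector. A sketch using scikit-learn's IsolationForest on pooled latent features; the detector and the 10% rejection threshold are assumptions, as the paper's exact criterion is not reproduced here:

import numpy as np
from sklearn.ensemble import IsolationForest

# z: one latent feature vector per pseudo-labeled sample (stand-in data).
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 32))

iso = IsolationForest(random_state=0).fit(z)
scores = iso.score_samples(z)                 # higher = more typical
keep = scores >= np.quantile(scores, 0.10)    # drop the 10% most outlying
print(f"kept {keep.sum()} / {len(keep)} pseudo-labeled samples")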


Subject(s)
Databases, Factual; Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Tomography, X-Ray Computed/methods; Supervised Machine Learning
4.
J Med Imaging (Bellingham); 11(2): 024008, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38571764

ABSTRACT

Purpose: Two-dimensional single-slice abdominal computed tomography (CT) provides a detailed tissue map with high resolution, allowing quantitative characterization of relationships between health conditions and aging. However, longitudinal analysis of body composition changes using these scans is difficult due to positional variation between slices acquired in different years, which leads to different organs/tissues being captured. Approach: To address this issue, we propose C-SliceGen, which takes an arbitrary axial slice in the abdominal region as a condition and generates a pre-defined vertebral level slice by estimating structural changes in the latent space. Results: Our experiments on 2608 volumetric CT scans from two in-house datasets and 50 subjects from the 2015 Multi-Atlas Abdomen Labeling Challenge Beyond the Cranial Vault (BTCV) dataset demonstrate that our model can generate high-quality images that are realistic and similar to the target slices. We further evaluate our method's capability to harmonize longitudinal positional variation on 1033 subjects from the Baltimore Longitudinal Study of Aging (BLSA) dataset, which contains longitudinal single abdominal slices, and confirm that our method can harmonize the slice positional variance in terms of visceral fat area. Conclusion: This approach provides a promising direction for mapping slices from different vertebral levels to a target slice and reducing positional variance for single-slice longitudinal analysis. The source code is available at: https://github.com/MASILab/C-SliceGen.

5.
J Med Imaging (Bellingham); 10(4): 044002, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37469854

ABSTRACT

Purpose: Anatomy-based quantification of emphysema in a lung screening cohort has the potential to improve lung cancer risk stratification and risk communication. Segmenting lung lobes is an essential step in this analysis, but leading lobe segmentation algorithms have not been validated for lung screening computed tomography (CT). Approach: In this work, we develop an automated approach to lobar emphysema quantification and study its association with lung cancer incidence. We combine self-supervised training with level set regularization and finetuning with radiologist annotations on three datasets to develop a lobe segmentation algorithm that is robust for lung screening CT. Using this algorithm, we extract quantitative CT measures for a cohort (n=1189) from the National Lung Screening Trial and analyze the multivariate association with lung cancer incidence. Results: Our lobe segmentation approach achieved an external validation Dice of 0.93, significantly outperforming a leading algorithm at 0.90 (p<0.01). The percentage of low attenuation volume in the right upper lobe was associated with increased lung cancer incidence (odds ratio: 1.97; 95% CI: [1.06, 3.66]) independent of PLCOm2012 risk factors and diagnosis of whole lung emphysema. Quantitative lobar emphysema improved the goodness-of-fit to lung cancer incidence (χ2=7.48, p=0.02). Conclusions: We are the first to develop and validate an automated lobe segmentation algorithm that is robust to smoking-related pathology. We discover a quantitative risk factor, lending further evidence that regional emphysema is independently associated with increased lung cancer incidence. The algorithm is provided at https://github.com/MASILab/EmphysemaSeg.
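The lobar emphysema measure used above, percentage of low-attenuation volume, reduces to counting voxels below a threshold inside the lobe mask. A small sketch, assuming the common -950 HU threshold (the paper's exact threshold may differ):

import numpy as np

def percent_laa(ct_hu: np.ndarray, lobe_mask: np.ndarray,
                threshold: float = -950.0) -> float:
    """Percentage of low-attenuation volume (< threshold HU) within a lobe mask."""
    lobe_voxels = ct_hu[lobe_mask > 0]
    if lobe_voxels.size == 0:
        return float("nan")
    return 100.0 * float(np.mean(lobe_voxels < threshold))

# Toy example: a synthetic volume with a cubic "lobe" mask.
rng = np.random.default_rng(0)
ct = rng.normal(-850, 60, size=(64, 64, 64))
mask = np.zeros_like(ct, dtype=np.uint8)
mask[16:48, 16:48, 16:48] = 1
print(f"%LAA-950: {percent_laa(ct, mask):.1f}")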

6.
Article in English | MEDLINE | ID: mdl-37465093

ABSTRACT

Metabolic health is increasingly implicated as a risk factor across conditions from cardiology to neurology, and efficient assessment of body composition is critical to quantitatively characterizing these relationships. 2D low-dose single-slice computed tomography (CT) provides a high-resolution, quantitative tissue map, albeit with a limited field of view. Although numerous analyses have been proposed for quantifying image context, there has been no comprehensive study of longitudinal variability in low-dose single-slice CT with automated segmentation. We studied a total of 1816 slices from 1469 subjects of the Baltimore Longitudinal Study of Aging (BLSA) abdominal dataset using supervised deep learning-based segmentation and an unsupervised clustering method. 300 of the 1469 subjects with a two-year gap between their first two scans were selected to evaluate longitudinal variability, with measurements including the intraclass correlation coefficient (ICC) and coefficient of variation (CV) in terms of tissue/organ size and mean intensity. We showed that our segmentation methods are stable in longitudinal settings, with Dice scores ranging from 0.821 to 0.962 across thirteen target abdominal tissue structures. We observed high variability in most organs (ICC<0.5) and low variability in muscle, abdominal wall, fat, and body mask (average ICC≥0.8). We found that organ variability is highly related to the cross-sectional position of the 2D slice. Our efforts pave the way for quantitative exploration and quality control to reduce uncertainties in longitudinal analysis.
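A sketch of the two reported variability measures on toy scan-rescan data; pingouin is one option for the ICC, and the specific ICC variant shown is an assumption, not necessarily the one used in the study:

import numpy as np
import pandas as pd
import pingouin as pg

# Toy scan-rescan data: two longitudinal measurements per subject.
rng = np.random.default_rng(0)
subj = np.repeat(np.arange(300), 2)
true = rng.normal(100, 15, 300)
value = true[subj] + rng.normal(0, 5, 600)   # within-subject noise
df = pd.DataFrame({"subject": subj,
                   "scan": np.tile([1, 2], 300),
                   "area": value})

# ICC (two-way random effects, single measurement shown here)
icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="scan", ratings="area")
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])

# Mean within-subject coefficient of variation
wide = df.pivot(index="subject", columns="scan", values="area")
cv = (wide.std(axis=1, ddof=1) / wide.mean(axis=1)).mean()
print(f"mean within-subject CV: {cv:.3f}")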

7.
Article in English | MEDLINE | ID: mdl-37465096

ABSTRACT

Features learned from single radiologic images are unable to provide information about whether and how much a lesion may be changing over time. Time-dependent features computed from repeated images can capture those changes and help identify malignant lesions by their temporal behavior. However, longitudinal medical imaging presents the unique challenge of sparse, irregular time intervals in data acquisition. While self-attention has been shown to be a versatile and efficient learning mechanism for time series and natural images, its potential for interpreting temporal distance between sparse, irregularly sampled spatial features has not been explored. In this work, we propose two interpretations of a time-distance vision transformer (ViT) by using (1) vector embeddings of continuous time and (2) a temporal emphasis model to scale self-attention weights. The two algorithms are evaluated based on benign versus malignant lung cancer discrimination of synthetic pulmonary nodules and lung screening computed tomography studies from the National Lung Screening Trial (NLST). Experiments evaluating the time-distance ViTs on synthetic nodules show a fundamental improvement in classifying irregularly sampled longitudinal images when compared to standard ViTs. In cross-validation on screening chest CTs from the NLST, our methods (0.785 and 0.786 AUC respectively) significantly outperform a cross-sectional approach (0.734 AUC) and match the discriminative performance of the leading longitudinal medical imaging algorithm (0.779 AUC) on benign versus malignant classification. This work represents the first self-attention-based framework for classifying longitudinal medical images. Our code is available at https://github.com/tom1193/time-distance-transformer.
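A schematic of the second interpretation, scaling self-attention by a temporal emphasis term. The exponential decay over pairwise time gaps used here is an illustrative stand-in, not the paper's exact temporal emphasis model:

import torch

def time_distance_attention(q, k, v, times, decay: float = 0.1):
    """Self-attention with temporal-emphasis scaling (a sketch).
    q, k, v: (batch, seq, dim); times: (batch, seq) acquisition times."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5              # (B, S, S)
    dt = (times.unsqueeze(-1) - times.unsqueeze(-2)).abs()   # pairwise |gap|
    weights = torch.softmax(logits, dim=-1) * torch.exp(-decay * dt)
    weights = weights / weights.sum(dim=-1, keepdim=True)    # renormalize
    return weights @ v

B, S, D = 2, 4, 32
q, k, v = (torch.randn(B, S, D) for _ in range(3))
t = torch.tensor([[0.0, 1.0, 2.5, 4.0], [0.0, 0.5, 1.0, 3.0]])  # years
print(time_distance_attention(q, k, v, t).shape)  # torch.Size([2, 4, 32])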

8.
Article in English | MEDLINE | ID: mdl-37465098

ABSTRACT

In lung cancer screening, estimation of future lung cancer risk is usually guided by demographics and smoking status. The role of the constitutional profile of the human body, a.k.a. body habitus, is increasingly understood to be important, but has not been integrated into risk models. Chest low-dose computed tomography (LDCT) is the standard imaging study in lung cancer screening, with the capability to discriminate differences in body composition and organ arrangement in the thorax. We hypothesize that the primary phenotypes identified using lung screening chest LDCT can form a representation of body habitus and add predictive power for lung cancer risk stratification. In this pilot study, we evaluated the feasibility of body habitus image-based phenotyping on a large lung screening LDCT dataset. A thoracic imaging manifold was estimated based on an intensity-based pairwise (dis)similarity metric for pairs of spatially normalized chest LDCT images. We applied hierarchical clustering on this manifold to identify the primary phenotypes. Body habitus features of each identified phenotype were evaluated and associated with future lung cancer risk using time-to-event analysis. We evaluated the method on the baseline LDCT scans of 1,200 male subjects sampled from the National Lung Screening Trial. Five primary phenotypes were identified, which were associated with highly distinguishable clinical and body habitus features. Time-to-event analysis against future lung cancer incidence showed that two of the five identified phenotypes were associated with elevated future lung cancer risk (HR=1.61, 95% CI=[1.08, 2.38], p=0.019; HR=1.67, 95% CI=[0.98, 2.86], p=0.057). These results indicate that it is feasible to capture body habitus by image-based phenotyping using lung screening LDCT and that the learned body habitus representation can potentially add value for future lung cancer risk stratification.
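Identifying phenotypes from a pairwise dissimilarity matrix can be done with agglomerative clustering on a precomputed metric. A sketch with stand-in embeddings in place of the study's intensity-based (dis)similarity over spatially normalized LDCT images:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Stand-in 16-D embeddings for 300 subjects; in the study, D would be the
# intensity-based pairwise dissimilarity between normalized LDCT scans.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances

clus = AgglomerativeClustering(n_clusters=5, metric="precomputed",
                               linkage="average")
labels = clus.fit_predict(D)
print(np.bincount(labels))  # sizes of the five phenotype clusters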

9.
Med Image Anal; 88: 102852, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37276799

ABSTRACT

Field-of-view (FOV) tissue truncation beyond the lungs is common in routine lung screening computed tomography (CT). This poses limitations for opportunistic CT-based body composition (BC) assessment, as key anatomical structures are missing. Traditionally, extending the FOV of CT is considered a CT reconstruction problem using limited data. However, this approach relies on projection-domain data, which might not be available in practice. In this work, we formulate the problem from a semantic image extension perspective, which requires only image data as input. The proposed two-stage method identifies a new FOV border based on the estimated extent of the complete body and imputes missing tissues in the truncated region. The training samples are simulated using CT slices with the complete body in the FOV, making the model development self-supervised. We evaluate the validity of the proposed method in automatic BC assessment using lung screening CT with limited FOV. The proposed method effectively restores the missing tissues and reduces the BC assessment error introduced by FOV tissue truncation. In BC assessment for large-scale lung screening CT datasets, this correction improves both the intra-subject consistency and the correlation with anthropometric approximations. The developed method is available at https://github.com/MASILab/S-EFOV.


Subject(s)
Image Processing, Computer-Assisted; Semantics; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Thorax; Body Composition; Phantoms, Imaging; Algorithms
10.
Med Image Anal; 90: 102939, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37725868

ABSTRACT

Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and its loss can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissue of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address such challenges, and inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, spanning multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney tumors and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model performs the complete whole-brain segmentation task with 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively. Code, pre-trained models, and a use case pipeline are available at: https://github.com/MASILab/UNesT.

11.
Med Image Comput Comput Assist Interv; 14221: 649-659, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38779102

ABSTRACT

The accuracy of predictive models for solitary pulmonary nodule (SPN) diagnosis can be greatly increased by incorporating repeat imaging and medical context, such as electronic health records (EHRs). However, clinically routine modalities such as imaging and diagnostic codes can be asynchronous and irregularly sampled over different time scales, which are obstacles to longitudinal multimodal learning. In this work, we propose a transformer-based multimodal strategy to integrate repeat imaging with longitudinal clinical signatures from routinely collected EHRs for SPN classification. We perform unsupervised disentanglement of latent clinical signatures and leverage time-distance scaled self-attention to jointly learn from clinical signature expressions and chest computed tomography (CT) scans. Our classifier is pretrained on 2,668 scans from a public dataset and 1,149 subjects with longitudinal chest CTs, billing codes, medications, and laboratory tests from the EHRs of our home institution. Evaluation on 227 subjects with challenging SPNs revealed a significant AUC improvement over a longitudinal multimodal baseline (0.824 vs 0.752 AUC), as well as improvements over a single cross-section multimodal scenario (0.809 AUC) and a longitudinal imaging-only scenario (0.741 AUC). This work demonstrates significant advantages of a novel approach for co-learning longitudinal imaging and non-imaging phenotypes with transformers. Code available at https://github.com/MASILab/lmsignatures.

12.
Article in English | MEDLINE | ID: mdl-35531320

ABSTRACT

Multiplex immunofluorescence (MxIF) is an emerging technique that allows multiple cellular and histological markers to be stained simultaneously on a single tissue section. However, with multiple rounds of staining and bleaching, it is inevitable that the scarce tissue may be physically depleted. Thus, a digital way of synthesizing such missing tissue would be appealing, since it would increase the usable areas for downstream single-cell analysis. In this work, we investigate the feasibility of employing generative adversarial network (GAN) approaches to synthesize missing tissue using 11 MxIF structural molecular markers (i.e., epithelial and stromal). Briefly, we integrate a multi-channel high-resolution image synthesis approach to synthesize the missing tissue from the remaining markers. The performance of different methods is quantitatively evaluated via the downstream cell membrane segmentation task. Our contribution is that we, for the first time, assess the feasibility of synthesizing missing tissue in MxIF via quantitative segmentation. The proposed synthesis method matches the baseline method's reproducibility when reconstructing only the missing tissue region, but improves whole-tissue synthesis, which is crucial for practical application, by 40%. We conclude that GANs are a promising direction for advancing MxIF imaging with deep image synthesis.

13.
Comput Biol Med; 150: 106113, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36198225

ABSTRACT

OBJECTIVE: Patients with indeterminate pulmonary nodules (IPNs) with an intermediate to high probability of lung cancer generally undergo invasive diagnostic procedures. Chest computed tomography (CT) images and clinical data have been used in estimating the pretest probability of lung cancer. In this study, we apply a deep learning network to integrate multi-modal data from CT images and clinical data (including blood-based biomarkers) to improve lung cancer diagnosis. Our goal is to reduce uncertainty and to avoid morbidity, mortality, and over- and undertreatment of patients with IPNs. METHOD: We use a retrospective study design with cross-validation and external validation from four different sites. We introduce a deep learning framework with a two-path structure to learn from CT images and clinical data. The proposed model can learn and predict with a single modality if the multi-modal data are not complete. We use 1284 patients in the learning cohort for model development. Three external sites (with 155, 136, and 96 patients, respectively) provided patient data for external validation. We compare our model to widely applied clinical prediction models (the Mayo and Brock models) and image-only methods (e.g., the Liao et al. model). RESULTS: Our co-learning model improves upon the performance of the clinical-factor-only (Mayo and Brock) and image-only (Liao et al.) models in both cross-validation on the learning cohort (e.g., AUC: 0.787 (ours) vs. 0.707-0.719 (baselines), with results reported on the validation folds) and external validation using three datasets from the University of Pittsburgh Medical Center (e.g., 0.918 (ours) vs. 0.828-0.886 (baselines)), Detection of Early Cancer Among Military Personnel (e.g., 0.712 (ours) vs. 0.576-0.709 (baselines)), and the University of Colorado Denver (e.g., 0.847 (ours) vs. 0.679-0.746 (baselines)). In addition, our model achieves better re-classification performance (cNRI 0.04 to 0.20) in all cross- and external-validation sets compared to the Mayo model. CONCLUSIONS: Lung cancer risk estimation in patients with IPNs can benefit from co-learning of CT images and clinical data. Learning from more subjects, even when they have only a single modality, can improve prediction accuracy. An integrated deep learning model can achieve reasonable discrimination and re-classification performance.
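A minimal sketch of a two-path structure that can predict from either modality alone or from both; the layer sizes and fusion-by-concatenation are assumptions, not the paper's exact architecture:

import torch
import torch.nn as nn

class TwoPathNet(nn.Module):
    """Two-path co-learning sketch: an image path and a clinical path,
    fused when both modalities are present; either path predicts alone."""
    def __init__(self, img_feat=128, clin_feat=16, hidden=64):
        super().__init__()
        self.img_path = nn.Sequential(nn.Linear(img_feat, hidden), nn.ReLU())
        self.clin_path = nn.Sequential(nn.Linear(clin_feat, hidden), nn.ReLU())
        self.head_img = nn.Linear(hidden, 1)
        self.head_clin = nn.Linear(hidden, 1)
        self.head_fused = nn.Linear(2 * hidden, 1)

    def forward(self, img=None, clin=None):
        if img is not None and clin is not None:
            h = torch.cat([self.img_path(img), self.clin_path(clin)], dim=-1)
            return self.head_fused(h)      # both modalities available
        if img is not None:
            return self.head_img(self.img_path(img))    # image only
        return self.head_clin(self.clin_path(clin))     # clinical only

net = TwoPathNet()
print(net(img=torch.randn(4, 128), clin=torch.randn(4, 16)).shape)  # fused
print(net(img=torch.randn(4, 128)).shape)                           # image only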


Subject(s)
Deep Learning; Lung Neoplasms; Multiple Pulmonary Nodules; Humans; Retrospective Studies; Uncertainty; Multiple Pulmonary Nodules/diagnostic imaging; Lung Neoplasms/diagnostic imaging
14.
Article in English | MEDLINE | ID: mdl-36303578

ABSTRACT

Certain body composition phenotypes, like sarcopenia, are well established as predictive markers for post-surgery complications and overall survival of lung cancer patients. However, their association with incident lung cancer risk in the screening population is still unclear. We study the feasibility of body composition analysis using chest low-dose computed tomography (LDCT). A two-stage fully automatic pipeline is developed to assess the cross-sectional area of body composition components, including subcutaneous adipose tissue (SAT), muscle, visceral adipose tissue (VAT), and bone, at the T5, T8, and T10 vertebral levels. The pipeline is developed using 61 cases of the VerSe'20 dataset, 40 annotated cases of NLST, and 851 in-house screening cases. On a test cohort consisting of 30 cases from the in-house screening cohort (age 55-73, 50% female) and 42 cases of NLST (age 55-75, 59.5% female), the pipeline achieves a root mean square error (RMSE) of 7.25 mm (95% CI: [6.61, 7.85]) for vertebral level identification and mean Dice similarity coefficients (DSC) of 0.99 ± 0.02, 0.96 ± 0.03, and 0.95 ± 0.04 for SAT, muscle, and VAT, respectively, for body composition segmentation. The pipeline generalizes to the CT arm of the NLST dataset (25,205 subjects, 40.8% female, 1,056 incident lung cancers). Time-to-event analysis for lung cancer incidence indicates an inverse association between measured muscle cross-sectional area and incident lung cancer risk (p < 0.001 for both female and male participants). In conclusion, automatic body composition analysis using routine lung screening LDCT is feasible.

15.
Med Phys; 48(10): 6060-6068, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34287944

ABSTRACT

PURPOSE: Artificial intelligence diagnosis and triage of large vessel occlusion may quicken clinical response for a subset of time-sensitive acute ischemic stroke patients, improving outcomes. Differences in architectural elements within data-driven convolutional neural network (CNN) models impact performance. Foreknowledge of effective model architectural elements for domain-specific problems can narrow the search for candidate models and inform strategic model design and adaptation to optimize performance on available data. Here, we study CNN architectures with a range of learnable parameters and which span the inclusion of architectural elements, such as parallel processing branches and residual connections with varying methods of recombining residual information. METHODS: We compare five CNNs: ResNet-50, DenseNet-121, EfficientNet-B0, PhiNet, and an Inception module-based network, on a computed tomography angiography large vessel occlusion detection task. The models were trained and preliminarily evaluated with 10-fold cross-validation on preprocessed scans (n = 240). An ablation study was performed on PhiNet due to superior cross-validated test performance across accuracy, precision, recall, specificity, and F1 score. The final evaluation of all models was performed on a withheld external validation set (n = 60) and these predictions were subsequently calibrated with sigmoid curves. RESULTS: Uncalibrated results on the withheld external validation set show that DenseNet-121 had the best average performance on accuracy, precision, recall, specificity, and F1 score. After calibration DenseNet-121 maintained superior performance on all metrics except recall. CONCLUSIONS: The number of learnable parameters in our five models and best-ablated PhiNet directly related to cross-validated test performance: the smaller the model, the better. However, this pattern did not hold when looking at generalization on the withheld external validation set. DenseNet-121 generalized the best; we posit this was due to its heavy use of residual connections utilizing concatenation, which causes feature maps from earlier layers to be used deeper in the network, while aiding in gradient flow and regularization.
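The post hoc sigmoid calibration step can be sketched as Platt scaling: fit a logistic regression from raw model scores to labels on a held-out split. The data here are synthetic, and the study's exact fitting procedure is not specified, so this is only an assumed form:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores = rng.normal(0, 2, 500)                        # raw model outputs (logits)
labels = (rng.random(500) < 1 / (1 + np.exp(-scores))).astype(int)

# Platt scaling: a one-feature logistic regression maps scores to probabilities.
calib = LogisticRegression().fit(scores.reshape(-1, 1), labels)
calibrated_prob = calib.predict_proba(scores.reshape(-1, 1))[:, 1]
print(calibrated_prob[:5].round(3))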


Subject(s)
Brain Ischemia; Stroke; Artificial Intelligence; Computed Tomography Angiography; Humans; Neural Networks, Computer; Stroke/diagnostic imaging
16.
J Med Imaging (Bellingham); 8(1): 014004, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33634205

ABSTRACT

Purpose: Deep learning is a promising technique for spleen segmentation. Our study aims to validate the reproducibility of deep learning-based spleen volume estimation by performing spleen segmentation on clinically acquired computed tomography (CT) scans from patients with myeloproliferative neoplasms. Approach: As approved by the institutional review board, we obtained 138 de-identified abdominal CT scans. The sum of voxel volumes over an expert annotator's segmentation establishes the ground truth (estimation 1). We used our deep convolutional neural network (estimation 2) alongside traditional linear estimations (estimations 3 and 4) to estimate spleen volumes independently. Dice coefficient, Hausdorff distance, R² coefficient, Pearson R coefficient, the absolute difference in volume, and the relative difference in volume were calculated for estimations 2 to 4 against the ground truth to compare and assess the methods' performances. We repeated labeling on scan-rescan data for a subset of 40 studies to evaluate method reproducibility. Results: Calculated against the ground truth, the R² coefficients for our method (estimation 2) and the linear methods (estimations 3 and 4) are 0.998, 0.954, and 0.973, respectively. The Pearson R coefficients for the estimations against the ground truth are 0.999, 0.963, and 0.978, respectively (paired t-tests produced p < 0.05 between estimations 2 and 3, and between 2 and 4). Conclusion: The deep convolutional neural network algorithm shows excellent potential in rendering more precise spleen volume estimations. Our computer-aided segmentation exhibits reasonable improvements in splenic volume estimation accuracy.
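The ground-truth definition, a sum of voxel volumes over the segmentation mask, is a one-liner given the voxel spacing. A small sketch with a toy mask:

import numpy as np

def volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Sum-of-voxel volume: voxel count times voxel size, in milliliters."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:40, 20:44, 20:36] = 1                 # toy "spleen" block
print(f"{volume_ml(mask, (0.8, 0.8, 3.0)):.1f} mL")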

17.
Radiol Artif Intell; 3(6): e210032, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34870220

ABSTRACT

PURPOSE: To develop a model to estimate lung cancer risk using lung cancer screening CT and clinical data elements (CDEs) without manual reading efforts. MATERIALS AND METHODS: Two screening cohorts were retrospectively studied: the National Lung Screening Trial (NLST; participants enrolled between August 2002 and April 2004) and the Vanderbilt Lung Screening Program (VLSP; participants enrolled between 2015 and 2018). Fivefold cross-validation using the NLST dataset was used for initial development and assessment of the co-learning model using whole CT scans and CDEs. The VLSP dataset was used for external testing of the developed model. Area under the receiver operating characteristic curve (AUC) and area under the precision-recall curve were used to measure the performance of the model. The developed model was compared with published risk-prediction models that used only CDEs or only imaging data. The Brock model was also included for comparison by imputing missing values for patients without a dominant pulmonary nodule. RESULTS: A total of 23 505 patients from the NLST (mean age, 62 years ± 5 [standard deviation]; 13 838 men, 9667 women) and 147 patients from the VLSP (mean age, 65 years ± 5; 82 men, 65 women) were included. Using cross-validation on the NLST dataset, the AUC of the proposed co-learning model (0.88) was higher than that of published models using CDEs only (AUC, 0.69; P < .05) and images only (AUC, 0.86; P < .05). Additionally, on the external VLSP test dataset, the co-learning model had higher performance than each of the published individual models (AUC, 0.91 [co-learning] vs 0.59 [CDE-only] and 0.88 [image-only]; P < .05 for both comparisons). CONCLUSION: The proposed co-learning predictive model combining chest CT images and CDEs had higher performance for lung cancer risk prediction than models that contained only CDE or only image data; the proposed model also had higher performance than the Brock model. Keywords: Computer-aided Diagnosis (CAD), CT, Lung, Thorax. Supplemental material is available for this article. © RSNA, 2021.

18.
Med Image Anal; 69: 101894, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33421919

ABSTRACT

Deep learning for three-dimensional (3D) abdominal organ segmentation on high-resolution computed tomography (CT) is a challenging topic, in part due to the limited memory provided by graphics processing units (GPUs) and the large number of parameters in 3D fully convolutional networks (FCNs). Two prevalent strategies, lower resolution with a wider field of view and higher resolution with a limited field of view, have been explored with varying degrees of success. In this paper, we propose a novel patch-based network with random spatial initialization and statistical fusion on overlapping regions of interest (ROIs). We evaluate the proposed approach using three datasets consisting of 260 subjects with varying numbers of manual labels. Compared with the canonical "coarse-to-fine" baseline methods, the proposed method increases the performance on multi-organ segmentation from 0.799 to 0.856 in terms of mean DSC score (p-value < 0.01 with paired t-test). The effect of different numbers of patches is evaluated by increasing the depth of coverage (expected number of patches evaluated per voxel). In addition, our method outperforms other state-of-the-art methods in abdominal organ segmentation. In conclusion, the approach provides a memory-conservative framework to enable 3D segmentation on high-resolution CT, and is compatible with many base network structures without substantially increasing the complexity during inference. [Graphical abstract: given a high-resolution CT scan, a low-resolution path with down-sampling and normalization preserves the complete spatial information and is trained with multi-channel segmentation; interpolation and random patch sampling collect patches; high-dimensional probability maps are obtained by integrating all patches across fields of view.]
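The statistical fusion step can be sketched as averaging per-class probabilities over overlapping, randomly placed patches, with the per-voxel count acting as the depth of coverage. The shapes and the simple averaging rule are illustrative assumptions:

import numpy as np

def fuse_patches(vol_shape, patches, starts, patch_shape):
    """Average overlapping patch probability maps (statistical fusion).
    patches: list of (C, *patch_shape) probability arrays at given offsets."""
    C = patches[0].shape[0]
    acc = np.zeros((C, *vol_shape))
    cnt = np.zeros(vol_shape)                 # per-voxel depth of coverage
    for p, (z, y, x) in zip(patches, starts):
        sl = (slice(z, z + patch_shape[0]),
              slice(y, y + patch_shape[1]),
              slice(x, x + patch_shape[2]))
        acc[(slice(None),) + sl] += p
        cnt[sl] += 1
    cnt = np.maximum(cnt, 1)                  # avoid divide-by-zero
    return acc / cnt                          # fused per-class probabilities

rng = np.random.default_rng(0)
shape, pshape = (32, 32, 32), (16, 16, 16)
starts = [tuple(rng.integers(0, s - ps + 1) for s, ps in zip(shape, pshape))
          for _ in range(8)]
patches = [rng.random((4, *pshape)) for _ in starts]
print(fuse_patches(shape, patches, starts, pshape).shape)  # (4, 32, 32, 32)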


Subject(s)
Imaging, Three-Dimensional; Neural Networks, Computer; Abdomen/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Tomography, X-Ray Computed
19.
Med Phys; 48(3): 1276-1285, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33410167

ABSTRACT

PURPOSE: Dynamic contrast-enhanced computed tomography (CT) is widely used to provide dynamic tissue contrast for diagnostic investigation and vascular identification. However, the phase information of contrast injection is typically recorded manually by technicians, which introduces missing or mislabeled phase information. Hence, imaging-based contrast phase identification is appealing but challenging, due to large variations among different contrast protocols, vascular dynamics, and metabolism, especially for clinically acquired CT scans. The purpose of this study is to perform imaging-based phase identification for dynamic abdominal CT using a proposed adversarial learning framework across five representative contrast phases. METHODS: A generative adversarial network (GAN) is proposed as a disentangled representation learning model. To explicitly model different contrast phases, a low-dimensional common representation and a class-specific code are fused in the hidden layer. Then, the low-dimensional features are reconstructed following a discriminator and classifier. 36 350 slices of CT scans from 400 subjects are used to evaluate the proposed method with fivefold cross-validation with splits on subjects. Then, 2216 slice images from 20 independent subjects are employed as independent testing data, evaluated using a multiclass normalized confusion matrix. RESULTS: The proposed network significantly improved correspondence (0.93) over VGG, ResNet50, StarGAN, and 3DSE, which had accuracy scores of 0.59, 0.62, 0.72, and 0.90, respectively (P < 0.001, Stuart-Maxwell test for normalized multiclass confusion matrix). CONCLUSION: We show that adversarial learning for the discriminator can be beneficial for capturing contrast information among phases. The proposed discriminator from the disentangled network achieves promising results.


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Abdomen; Humans; Tomography, Spiral Computed
20.
Article in English | MEDLINE | ID: mdl-34650321

ABSTRACT

Clinical data elements (CDEs) (e.g., age, smoking history), blood markers, and chest computed tomography (CT) structural features have been regarded as effective means for assessing lung cancer risk. These independent variables can provide complementary information, and we hypothesize that combining them will improve prediction accuracy. In practice, not all patients have all these variables available. In this paper, we propose a new network design, termed the multi-path multi-modal missing network (M3Net), to integrate multi-modal data (i.e., CDEs, biomarkers, and CT images) while accounting for missing modalities with a multi-path neural network. Each path learns discriminative features of one modality, and the different modalities are fused in a second stage for an integrated prediction. The network can be trained end-to-end with both medical image features and CDEs/biomarkers, or make a prediction with a single modality. We evaluate M3Net with datasets including three sites from the Consortium for Molecular and Cellular Characterization of Screen-Detected Lesions (MCL) project. Our method is cross-validated within a cohort of 1291 subjects (383 subjects with complete CDEs/biomarkers and CT images) and externally validated with a cohort of 99 subjects (99 with complete CDEs/biomarkers and CT images). Both cross-validation and external validation results show that combining multiple modalities significantly improves the prediction performance over a single modality. The results suggest that integrating subjects missing either CDEs/biomarkers or CT imaging features can contribute to the discriminatory power of our model (p < 0.05, bootstrap two-tailed test). In summary, the proposed M3Net framework provides an effective way to integrate image and non-image data in the context of missing information.
