Results 1 - 20 of 32
1.
Eur Radiol ; 30(12): 6828-6837, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32683550

ABSTRACT

OBJECTIVE: To develop a fully automated AI system to quantitatively assess disease severity and disease progression in COVID-19 using thick-section chest CT images. METHODS: In this retrospective study, an AI system was developed to automatically segment and quantify the COVID-19-infected lung regions on thick-section chest CT images. Five hundred thirty-one CT scans from 204 COVID-19 patients were collected from one appointed COVID-19 hospital. The automatically segmented lung abnormalities were compared with manual segmentations by two experienced radiologists using the Dice coefficient on a randomly selected subset (30 CT scans). Two imaging biomarkers were automatically computed, i.e., the portion of infection (POI) and the average infection HU (iHU), to assess disease severity and disease progression. The assessments were compared with the patient status in diagnosis reports and key phrases extracted from radiology reports using the area under the receiver operating characteristic curve (AUC) and Cohen's kappa, respectively. RESULTS: The Dice coefficients between the segmentation of the AI system and those of the two experienced radiologists for the COVID-19-infected lung abnormalities were 0.74 ± 0.28 and 0.76 ± 0.29, respectively, close to the inter-observer agreement (0.79 ± 0.25). The two computed imaging biomarkers distinguished between the severe and non-severe stages with an AUC of 0.97 (p value < 0.001). Very good agreement (κ = 0.8220) between the AI system and the radiologists was achieved in evaluating the changes in infection volumes. CONCLUSIONS: A deep learning-based AI system built on thick-section CT imaging can accurately quantify the COVID-19-associated lung abnormalities and assess disease severity and progression. KEY POINTS: • A deep learning-based AI system was able to accurately segment the lung regions infected by COVID-19 using thick-section CT scans (Dice coefficient ≥ 0.74).
• The computed imaging biomarkers were able to distinguish between the non-severe and severe COVID-19 stages (area under the receiver operating characteristic curve 0.97). • The infection volume changes computed by the AI system were able to assess the COVID-19 progression (Cohen's kappa 0.8220).
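The Dice coefficient used above to compare AI and radiologist masks is a standard overlap measure. A minimal sketch over hypothetical flat 0/1 masks (not the paper's implementation):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks given as flat 0/1 sequences:
    2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy example: the masks agree on 3 foreground voxels; each has 4.
a = [1, 1, 1, 1, 0, 0]
b = [1, 1, 1, 0, 1, 0]
score = dice_coefficient(a, b)  # 2*3 / (4+4) = 0.75
```

In practice the masks would be flattened 3-D CT label volumes; scores of 0.74-0.79 as reported indicate strong but imperfect overlap.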


Subject(s)
Betacoronavirus, Community-Acquired Infections/diagnosis, Coronavirus Infections/diagnosis, Deep Learning, Lung/diagnostic imaging, Pneumonia, Viral/diagnosis, Pneumonia/diagnosis, Tomography, X-Ray Computed/methods, Artificial Intelligence, COVID-19, China/epidemiology, Disease Progression, Female, Humans, Male, Middle Aged, Pandemics, ROC Curve, Retrospective Studies, SARS-CoV-2
2.
Am J Respir Crit Care Med ; 193(6): 652-61, 2016 Mar 15.
Article in English | MEDLINE | ID: mdl-26569033

ABSTRACT

RATIONALE: Endothelial dysfunction is of interest in relation to smoking-associated emphysema, a component of chronic obstructive pulmonary disease (COPD). We previously demonstrated that computed tomography (CT)-derived pulmonary blood flow (PBF) heterogeneity is greater in smokers with normal pulmonary function tests (PFTs) but who have visual evidence of centriacinar emphysema (CAE) on CT. OBJECTIVES: We introduced dual-energy CT (DECT) perfused blood volume (PBV) as a PBF surrogate to evaluate whether the CAE-associated increased PBF heterogeneity is reversible with sildenafil. METHODS: Seventeen PFT-normal current smokers were divided into CAE-susceptible (SS; n = 10) and nonsusceptible (NS; n = 7) smokers, based on the presence or absence of CT-detected CAE. DECT-PBV images were acquired before and 1 hour after administration of 20 mg oral sildenafil. Regional PBV and PBV coefficients of variation (CV), a measure of spatial blood flow heterogeneity, were determined, followed by quantitative assessment of the central arterial tree. MEASUREMENTS AND MAIN RESULTS: After sildenafil administration, regional PBV-CV decreased in SS subjects but did not decrease in NS subjects (P < 0.05), after adjusting for age and pack-years. Quantitative evaluation of the central pulmonary arteries revealed higher arterial volume and greater cross-sectional area (CSA) in the lower lobes of SS smokers, which suggested arterial enlargement in response to increased peripheral resistance. After sildenafil, arterial CSA decreased in SS smokers but did not decrease in NS smokers (P < 0.01). CONCLUSIONS: These results demonstrate that sildenafil restores peripheral perfusion and reduces central arterial enlargement in normal SS subjects with little effect in NS subjects, highlighting DECT-PBV as a biomarker of reversible endothelial dysfunction in smokers with CAE.
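The PBV coefficient of variation (CV) reported here is simply the standard deviation of regional perfused blood volume divided by its mean. A minimal sketch with hypothetical regional values (not study data):

```python
from statistics import mean, pstdev

def pbv_cv(regional_pbv):
    """Coefficient of variation (population std / mean) of regional
    perfused blood volume, an index of perfusion heterogeneity."""
    return pstdev(regional_pbv) / mean(regional_pbv)

pre  = [10.0, 14.0, 6.0, 12.0, 8.0]   # hypothetical pre-sildenafil regional PBV
post = [10.0, 11.0, 9.0, 10.5, 9.5]   # more homogeneous values post-sildenafil
cv_drop = pbv_cv(pre) - pbv_cv(post)  # positive: heterogeneity decreased
```

A drop in CV after sildenafil, as seen in the susceptible smokers, indicates more spatially uniform perfusion.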


Subject(s)
Vascular Endothelium/diagnostic imaging, Lung/diagnostic imaging, Pulmonary Emphysema/diagnostic imaging, Radiography, Dual-Energy Scanned Projection, Smoking/adverse effects, Tomography, X-Ray Computed, Adult, Vascular Endothelium/physiopathology, Female, Humans, Lung/physiopathology, Male, Middle Aged, Pulmonary Emphysema/physiopathology
3.
Pattern Recognit Lett ; 76: 32-40, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27175043

ABSTRACT

Conventional curve-skeletonization algorithms based on the principle of Blum's transform often produce unwanted spurious branches due to boundary irregularities, digital effects, and other artifacts. This paper presents a new robust and efficient curve-skeletonization algorithm for three-dimensional (3-D) elongated fuzzy objects using a minimum-cost path approach, which avoids spurious branches without requiring post-pruning. Starting from a root voxel, the method iteratively expands the skeleton by adding, in each iteration, a new branch that connects the farthest quench voxel to the current skeleton via a minimum-cost path. The path-cost function is formulated using a novel local significance factor defined by the fuzzy distance transform field, which forces the path to stick to the centerline of an object. The algorithm terminates when the dilated skeletal branches fill the entire object volume or the current farthest quench voxel fails to generate a meaningful skeletal branch. The accuracy of the algorithm was evaluated using computer-generated phantoms with known skeletons. Performance in terms of false and missing skeletal branches, as defined by human experts, was examined using in vivo CT imaging of human intrathoracic airways. Results from both experiments established the superiority of the new method over existing methods in terms of accuracy as well as robustness in detecting true and false skeletal branches. The new algorithm also significantly reduces computational complexity by enabling detection of multiple new skeletal branches in one iteration. Specifically, it reduces the number of iterations from the number of terminal tree branches to, in the worst case, the tree depth. In fact, experimental results suggest that, on average, the computational complexity is reduced to the logarithm of the number of terminal branches of a tree-like object.
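The branch-growing step described above reduces to a single-source minimum-cost path search. A toy Dijkstra sketch on a 2-D cost grid (the paper's actual cost derives from the fuzzy distance transform so that centerline voxels are cheap; this grid just hard-codes a cheap middle row):

```python
import heapq

def min_cost_path(cost, start, goal):
    """Dijkstra minimum-cost path on a 2-D grid with 4-connectivity.
    Entering cell (r, c) costs cost[r][c]."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get(nb, float("inf")):
                    dist[nb] = nd
                    prev[nb] = node
                    heapq.heappush(heap, (nd, nb))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Low cost along the middle row plays the role of the centerline.
grid = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
route = min_cost_path(grid, (1, 0), (1, 3))  # stays on the low-cost row
```

In the paper, each iteration runs such a search from the current skeleton to the farthest quench voxel, so the path naturally hugs the object's centerline.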

4.
IEEE Trans Med Imaging ; 43(1): 96-107, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37399157

ABSTRACT

Deep learning has been widely used in medical image segmentation and other applications. However, the performance of existing medical image segmentation models is limited by the difficulty of obtaining sufficient high-quality labeled data, given the prohibitive cost of annotation. To alleviate this limitation, we propose a new text-augmented medical image segmentation model, LViT (Language meets Vision Transformer). In LViT, medical text annotation is incorporated to compensate for quality deficiencies in the image data. In addition, the text information can guide the generation of higher-quality pseudo labels in semi-supervised learning. We also propose an Exponential Pseudo-label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in the semi-supervised LViT setting. In our model, an LV (Language-Vision) loss is designed to supervise the training of unlabeled images using text information directly. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-ray and CT images. Experimental results show that the proposed LViT has superior segmentation performance in both the fully supervised and semi-supervised settings. The code and datasets are available at https://github.com/HUANGLIZI/LViT.


Subject(s)
Language, Supervised Machine Learning, Image Processing, Computer-Assisted
5.
IEEE Trans Med Imaging ; 2024 Jun 26. Online ahead of print.
Article in English | MEDLINE | ID: mdl-38923479

ABSTRACT

Intrathoracic airway segmentation in computed tomography is a prerequisite for the analysis of various respiratory diseases such as chronic obstructive pulmonary disease, asthma, and lung cancer. Due to the low imaging contrast and noise at peripheral branches, the topological complexity of the airway tree, and its severe intra-class imbalance, it remains challenging for deep learning-based methods to segment the complete airway tree (i.e., to extract the deeper branches). Unlike organs with simpler shapes or topology, the airway's complex tree structure makes generating the "ground truth" label extremely burdensome (up to 7 hours of manual or 3 hours of semi-automatic annotation per case). Most existing airway datasets are incompletely labeled/annotated, thus limiting the completeness of the computer-segmented airways. In this paper, we propose a new anatomy-aware multi-class airway segmentation method enhanced by topology-guided iterative self-learning. Based on the natural airway anatomy, we formulate a simple yet highly effective anatomy-aware multi-class segmentation task to intuitively handle the severe intra-class imbalance of the airway. To address the incomplete-labeling issue, we propose a tailored iterative self-learning scheme to segment toward the complete airway tree. To generate pseudo-labels with higher sensitivity (while retaining similar specificity), we introduce a novel breakage attention map and design a topology-guided pseudo-label refinement method that iteratively connects the broken branches commonly present in initial pseudo-labels. Extensive experiments were conducted on four datasets, including two public challenges. The proposed method achieves top performance in both the EXACT'09 challenge (average score) and the ATM'22 challenge (weighted average score). On a public BAS dataset and a private lung cancer dataset, our method significantly improves on previous leading approaches, extracting at least 6.1% (absolute) more detected tree length and 5.2% more tree branches while maintaining comparable precision.
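The tree-length and branch-count metrics cited from these challenges can be sketched as simple detection ratios over reference branches. A toy illustration (hypothetical branch-name-to-length format, not the official evaluation code):

```python
def airway_tree_metrics(ref_branches, pred_branches):
    """Detected-tree-length and branch-detection ratios.
    Both arguments map branch name -> centerline length in mm; a
    reference branch counts as detected if it appears in the prediction."""
    detected = [b for b in ref_branches if b in pred_branches]
    length_detected = sum(ref_branches[b] for b in detected)
    length_total = sum(ref_branches.values())
    return {
        "tree_length_detected": length_detected / length_total,
        "branches_detected": len(detected) / len(ref_branches),
    }

ref = {"trachea": 100.0, "LMB": 50.0, "RMB": 45.0, "RB1": 12.0}
pred = {"trachea": 98.0, "LMB": 49.0, "RMB": 44.0}   # misses RB1
m = airway_tree_metrics(ref, pred)
# tree_length_detected = 195/207; branches_detected = 3/4
```

The paper's reported gains (+6.1% tree length, +5.2% branches) are absolute improvements in ratios of exactly this kind.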

6.
Article in English | MEDLINE | ID: mdl-38687670

ABSTRACT

Automated colorectal cancer (CRC) segmentation in medical imaging is the key to automating CRC detection, staging, and treatment-response monitoring. Compared with magnetic resonance imaging (MRI) and computed tomography colonography (CTC), conventional computed tomography (CT) has enormous potential because of its broad availability, suitability for the hollow viscera (colon), and convenience (no bowel preparation needed). However, segmenting CRC in conventional CT is more challenging due to the difficulties presented by the unprepared bowel, such as distinguishing the colorectum from other structures with similar appearance and distinguishing the CRC from the contents of the colorectum. To tackle these challenges, we introduce DeepCRC-SL, the first automated segmentation algorithm for CRC and the colorectum in conventional contrast-enhanced CT scans. We propose a topology-aware deep learning-based approach, which builds a novel 1-D colorectal coordinate system and encodes each voxel of the colorectum with a relative position along that coordinate system. We then induce an auxiliary regression task to predict the colorectal coordinate value of each voxel, aiming to integrate global topology into the segmentation network and thus improve the colorectum's continuity. Self-attention layers are used to capture global context for the coordinate regression task and to enhance the ability to differentiate CRC and colorectum tissues. Moreover, a coordinate-driven self-learning (SL) strategy is introduced to leverage a large amount of unlabeled data to improve segmentation performance. We validate the proposed approach on a dataset of 227 labeled and 585 unlabeled CRC cases by fivefold cross-validation. Experimental results demonstrate that our method outperforms recent related segmentation methods, achieving Dice similarity coefficients of 0.669 for CRC and 0.892 for the colorectum, matching the performance (0.639 and 0.890, respectively) of a medical resident with two years of specialized CRC imaging fellowship.
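The 1-D colorectal coordinate system can be pictured as normalized arc length along the colorectal centerline. A toy sketch (2-D points and a hypothetical orientation convention):

```python
from math import dist

def colorectal_coordinates(centerline):
    """Assign each centerline point a relative position in [0, 1] by
    cumulative arc length, e.g. 0 at the rectal end and 1 at the cecum."""
    cum = [0.0]
    for p, q in zip(centerline, centerline[1:]):
        cum.append(cum[-1] + dist(p, q))
    total = cum[-1]
    return [c / total for c in cum]

pts = [(0, 0), (3, 4), (3, 9), (8, 9)]   # toy centerline: three 5-unit segments
coords = colorectal_coordinates(pts)     # [0.0, 1/3, 2/3, 1.0]
```

Each colorectum voxel would then inherit the coordinate of its nearest centerline point, giving the auxiliary regression task its target values.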

7.
Ann Am Thorac Soc ; 21(7): 1022-1033, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38530051

ABSTRACT

Rationale: Rates of emphysema progression vary in chronic obstructive pulmonary disease (COPD), and the relationships with vascular and airway pathophysiology remain unclear. Objectives: We sought to determine whether indices of peripheral (segmental and beyond) pulmonary arterial dilation measured on computed tomography (CT) are associated with a 1-year index of emphysema (EI; percentage of voxels < -950 Hounsfield units) progression. Methods: Five hundred ninety-nine former and never-smokers (Global Initiative for Chronic Obstructive Lung Disease stages 0-3) were evaluated from the SPIROMICS (Subpopulations and Intermediate Outcome Measures in COPD Study) cohort: rapid emphysema progressors (RPs; n = 188, 1-year ΔEI > 1%), nonprogressors (n = 301, 1-year ΔEI within ±0.5%), and never-smokers (n = 110). Segmental pulmonary arterial cross-sectional areas were standardized to the associated airway luminal areas (segmental pulmonary artery-to-airway ratio [PAARseg]). Full-inspiratory CT scan-derived total (arteries and veins) pulmonary vascular volume (TPVV) was compared with small-vessel volume (radius smaller than 0.75 mm). Ratios of airway to lung volume (an index of dysanapsis and COPD risk) were compared with ratios of TPVV to lung volume. Results: Compared with nonprogressors, RPs exhibited significantly larger PAARseg (0.73 ± 0.29 vs. 0.67 ± 0.23; P = 0.001), lower ratios of TPVV to lung volume (3.21 ± 0.42% vs. 3.48 ± 0.38%; P = 5.0 × 10⁻¹²), lower ratios of airway to lung volume (0.031 ± 0.003 vs. 0.034 ± 0.004; P = 6.1 × 10⁻¹³), and larger ratios of small-vessel volume to TPVV (37.91 ± 4.26% vs. 35.53 ± 4.89%; P = 1.9 × 10⁻⁷). In adjusted analyses, an increment of 1 standard deviation in PAARseg was associated with a 98.4% higher rate of severe exacerbations (95% confidence interval, 29-206%; P = 0.002) and 79.3% higher odds of being in the RP group (95% confidence interval, 24-157%; P = 0.001). At 2-year follow-up, the CT-defined RP group demonstrated a significant decline in post-bronchodilator percentage predicted forced expiratory volume in 1 second. Conclusions: Rapid 1-year progression of emphysema was associated with indices of higher peripheral pulmonary vascular resistance and a possible role of pulmonary vascular-airway dysanapsis.
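The emphysema index used throughout (EI, percentage of voxels below -950 HU) is straightforward to compute from lung-masked HU values. A minimal sketch with toy numbers:

```python
def emphysema_index(hu_values, threshold=-950):
    """Percentage of lung voxels with attenuation below the threshold
    (the %LAA-950 emphysema index)."""
    low = sum(1 for v in hu_values if v < threshold)
    return 100.0 * low / len(hu_values)

lung = [-980, -960, -940, -920, -870, -990, -800, -955]  # toy voxel HUs
ei = emphysema_index(lung)  # 4 of 8 voxels are below -950 -> 50.0
```

The study's ΔEI is then the difference of this index between the baseline and 1-year scans.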


Subject(s)
Disease Progression, Pulmonary Artery, Pulmonary Emphysema, Tomography, X-Ray Computed, Humans, Male, Female, Pulmonary Emphysema/diagnostic imaging, Pulmonary Emphysema/physiopathology, Aged, Middle Aged, Pulmonary Artery/diagnostic imaging, Pulmonary Artery/physiopathology, Lung/diagnostic imaging, Lung/physiopathology, Forced Expiratory Volume, Pulmonary Disease, Chronic Obstructive/physiopathology, Pulmonary Disease, Chronic Obstructive/diagnostic imaging
8.
Med Image Anal ; 90: 102957, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716199

ABSTRACT

Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have extended the reach of pulmonary airway segmentation closer to the limit of image resolution. Since the EXACT'09 pulmonary airway segmentation challenge, limited effort had been directed to the quantitative comparison of newly emerged algorithms, despite the maturity of deep learning-based approaches and extensive clinical efforts toward resolving finer details of distal airways for early intervention in pulmonary disease. Thus far, publicly annotated datasets have been extremely limited, hindering the development of data-driven methods and the detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling challenge (ATM'22), held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotations, comprising 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and further includes a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Both quantitative and qualitative results revealed that deep learning models embedded with topological continuity enhancement generally achieved superior performance. ATM'22 remains an open-call challenge; its training data and gold-standard evaluation are available upon successful registration via its homepage (https://atm22.grand-challenge.org/).


Subject(s)
Lung Diseases, Trees, Humans, Tomography, X-Ray Computed/methods, Image Processing, Computer-Assisted/methods, Algorithms, Lung/diagnostic imaging
9.
J Natl Cancer Cent ; 2(4): 306-313, 2022 Dec.
Article in English | MEDLINE | ID: mdl-39036546

ABSTRACT

Precision radiotherapy is a critical and indispensable cancer treatment in the modern clinical workflow, with the goal of achieving "quality-up and cost-down" in patient care. The challenge of this therapy lies in developing computerized clinical-assistant solutions with precision, automation, and reproducibility built in, so that it can be delivered at scale. In this work, we provide a comprehensive, though necessarily incomplete, survey of and discussion on recent progress in utilizing advanced deep learning, semantic organ parsing, multimodal imaging fusion, neural architecture search, and medical image analysis techniques to address four cornerstone problems or sub-problems required by all precision radiotherapy workflows, namely, organs-at-risk (OAR) segmentation, gross tumor volume (GTV) segmentation, metastasized lymph node (LN) detection, and clinical target volume (CTV) segmentation. Without loss of generality, we mainly focus on esophageal and head-and-neck cancers as examples, but the methods can be extrapolated to other types of cancer. High-precision, automated, and highly reproducible OAR/GTV/LN/CTV auto-delineation techniques have demonstrated their effectiveness in reducing inter-practitioner variability and time cost, permitting rapid treatment planning and adaptive replanning for the benefit of patients. By presenting the achievements and limitations of these techniques in this review, we hope to encourage more collective multidisciplinary precision radiotherapy workflows to transpire.

10.
IEEE Trans Med Imaging ; 41(10): 2658-2669, 2022 10.
Article in English | MEDLINE | ID: mdl-35442886

ABSTRACT

Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structure. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle, landmark detection or semantic segmentation could be used for this task, but to work well these require large numbers of labeled data for each anatomical structure and sub-structure of interest. A more universal approach would learn the intrinsic structure from unlabeled images. We introduce such an approach, called Self-supervised Anatomical eMbedding (SAM). SAM generates semantic embeddings for each image pixel that describe its anatomical location or body part. To produce such embeddings, we propose a pixel-level contrastive learning framework. A coarse-to-fine strategy ensures that both global and local anatomical information are encoded. Negative-sample selection strategies are designed to enhance the embeddings' discriminability. Using SAM, one can label any point of interest on a template image and then locate the same body part in other images by a simple nearest-neighbor search. We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities. On a chest CT dataset with 19 landmarks, SAM outperforms widely used registration algorithms while taking only 0.23 seconds for inference. On two X-ray datasets, SAM, with only one labeled template image, surpasses supervised methods trained on 50 labeled images. We also apply SAM to whole-body follow-up lesion matching in CT and obtain an accuracy of 91%. SAM can also be applied to improve image registration and initialize CNN weights.
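The "label one template point, find it anywhere" usage of SAM amounts to a nearest-neighbor search in embedding space. A minimal sketch with tiny hypothetical embedding vectors (the real system uses learned high-dimensional per-voxel embeddings):

```python
def nearest_voxel(template_embedding, candidate_embeddings):
    """Return the voxel whose embedding is closest (squared L2 distance)
    to the embedding of the labeled template point."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidate_embeddings,
               key=lambda voxel: sq_dist(template_embedding,
                                         candidate_embeddings[voxel]))

query = (0.9, 0.1)            # embedding of the labeled template point
candidates = {                # voxel coordinate -> embedding in a target scan
    (10, 20): (0.88, 0.12),
    (40, 55): (0.10, 0.95),
    (70, 30): (0.50, 0.50),
}
match = nearest_voxel(query, candidates)  # (10, 20)
```

At scan scale this lookup would typically use an approximate nearest-neighbor index rather than a linear scan, but the matching principle is the same.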


Subject(s)
Imaging, Three-Dimensional, Tomography, X-Ray Computed, Algorithms, Image Processing, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Radiography, Supervised Machine Learning, Tomography, X-Ray Computed/methods
11.
Nat Commun ; 13(1): 6137, 2022 10 17.
Article in English | MEDLINE | ID: mdl-36253346

ABSTRACT

Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OARs. We train SOARS using 176 patients from an internal institution and independently evaluate it on 1327 external patients across six different institutions. It consistently outperforms other state-of-the-art methods by at least 3-5% in Dice score for each institutional evaluation (up to 36% relative distance error reduction). Crucially, multi-user studies demonstrate that 98% of SOARS predictions need only minor or no revisions to achieve clinical acceptance (reducing workloads by 90%). Moreover, segmentation and dosimetric accuracy are within or smaller than the inter-user variation.


Subject(s)
Head and Neck Neoplasms, Organs at Risk, Head and Neck Neoplasms/radiotherapy, Humans, Image Processing, Computer-Assisted/methods, Neck, Radiometry
12.
Med Image Anal ; 68: 101909, 2021 02.
Article in English | MEDLINE | ID: mdl-33341494

ABSTRACT

Gross tumor volume (GTV) and clinical target volume (CTV) delineation are two critical steps in cancer radiotherapy planning. The GTV defines the primary treatment area of the gross tumor, while the CTV outlines the sub-clinical malignant disease. Automatic GTV and CTV segmentation are both challenging, for distinct reasons: GTV segmentation relies on the radiotherapy computed tomography (RTCT) image appearance, which suffers from poor contrast with the surrounding tissues, while CTV delineation relies on a mixture of predefined and judgment-based margins. High intra- and inter-user variability makes this a particularly difficult task. We develop tailored methods solving each task for esophageal cancer radiotherapy, together leading to a comprehensive solution for the target contouring task. Specifically, we integrate the RTCT and positron emission tomography (PET) modalities into a two-stream chained deep fusion framework, taking advantage of both modalities to facilitate more accurate GTV segmentation. For CTV segmentation, since it is highly context-dependent (it must encompass the GTV and involved lymph nodes while avoiding excessive exposure of the organs at risk), we formulate it as a deep contextual appearance-based problem using encoded spatial distances of these anatomical structures. This better emulates the margin- and appearance-based CTV delineation performed by oncologists. Adding to our contributions, for GTV segmentation we propose a simple yet effective progressive semantically nested network (PSNN) backbone that outperforms more complicated models. Our work is the first to provide a comprehensive solution for esophageal GTV and CTV segmentation in radiotherapy planning. Extensive 4-fold cross-validation on 148 esophageal cancer patients, the largest analysis to date, was carried out for both tasks. The results demonstrate that our GTV and CTV segmentation approaches significantly improve over previous state-of-the-art work, e.g., by an 8.7% increase in Dice score (DSC) and a 32.9 mm reduction in Hausdorff distance (HD) for GTV segmentation, and a 3.4% increase in DSC and a 29.4 mm reduction in HD for CTV segmentation.
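The "encoded spatial distances" used for CTV context can be pictured as distance maps from the GTV and organ-at-risk masks, stacked as extra input channels. A toy BFS sketch on a 2-D grid (the paper presumably uses proper 3-D distance transforms; this is only illustrative):

```python
from collections import deque

def distance_map(mask):
    """4-connected BFS distance (in voxels) from a binary 2-D mask.
    Stacking such maps for the GTV and each organ at risk gives a
    network explicit margin context for CTV delineation."""
    rows, cols = len(mask), len(mask[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

gtv = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
d = distance_map(gtv)  # center 0, edge neighbors 1, corners 2
```

Feeding such channels alongside the CT lets the CTV network reason about margins around each structure rather than relying on appearance alone.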


Subject(s)
Esophageal Neoplasms, Radiotherapy Planning, Computer-Assisted, Esophageal Neoplasms/diagnostic imaging, Esophageal Neoplasms/radiotherapy, Humans, Positron-Emission Tomography, Tomography, X-Ray Computed, Tumor Burden
13.
IEEE Trans Med Imaging ; 40(10): 2759-2770, 2021 10.
Article in English | MEDLINE | ID: mdl-33370236

ABSTRACT

Large-scale datasets with high-quality labels are desired for training accurate deep learning models. However, due to annotation costs, datasets in medical imaging are often either partially labeled or small. For example, DeepLesion is a large-scale CT image dataset with lesions of various types, but it also has many unlabeled lesions (missing annotations). When a lesion detector is trained on a partially labeled dataset, the missing annotations generate incorrect negative signals and degrade performance. Besides DeepLesion, there are several small single-type datasets, such as LUNA for lung nodules and LiTS for liver tumors. These datasets have heterogeneous label scopes, i.e., different lesion types are labeled in different datasets with other types ignored. In this work, we aim to develop a universal lesion detection algorithm that detects a variety of lesions and tackles the problem of heterogeneous and partial labels. First, we build a simple yet effective lesion detection framework named Lesion ENSemble (LENS). LENS can efficiently learn from multiple heterogeneous lesion datasets in a multi-task fashion and leverage their synergy by proposal fusion. Next, we propose strategies to mine missing annotations from partially labeled datasets by exploiting clinical prior knowledge and cross-dataset knowledge transfer. Finally, we train our framework on four public lesion datasets and evaluate it on 800 manually labeled sub-volumes in DeepLesion. Our method brings a relative improvement of 49% over the current state-of-the-art approach in average sensitivity. We have publicly released our manual 3D annotations of DeepLesion (https://github.com/viggin/DeepLesion_manual_test_set).


Subject(s)
Algorithms, Tomography, X-Ray Computed, Radiography
14.
Front Radiol ; 1: 661237, 2021.
Article in English | MEDLINE | ID: mdl-37492171

ABSTRACT

Purpose: Computed tomography (CT) characteristics associated with critical outcomes of patients with coronavirus disease 2019 (COVID-19) have been reported. However, CT risk factors for mortality have not been directly reported. We aimed to determine CT-based quantitative predictors of COVID-19 mortality. Methods: In this retrospective study, laboratory-confirmed COVID-19 patients at Wuhan Central Hospital between December 9, 2019, and March 19, 2020, were included. A novel prognostic biomarker, the V-HU score, combining the volume (V) of total pneumonia infection and the average Hounsfield unit (HU) of consolidation areas, was automatically quantified from CT by an artificial intelligence (AI) system. Cox proportional hazards models were used to investigate risk factors for mortality. Results: The study included 238 patients (women 136/238, 57%; median age 65 years, IQR 51-74 years), 126 of whom were survivors. The V-HU score was an independent predictor (hazard ratio [HR] 2.78, 95% confidence interval [CI] 1.50-5.17; p = 0.001) after adjusting for several COVID-19 prognostic indicators significant in univariable analysis. The prognostic performance of the model containing clinical and outpatient laboratory factors was improved by integrating the V-HU score (c-index: 0.695 vs. 0.728; p < 0.001). Both older patients (age ≥ 65 years; HR 3.56, 95% CI 1.64-7.71; p < 0.001) and younger patients (age < 65 years; HR 4.60, 95% CI 1.92-10.99; p < 0.001) could be further risk-stratified by the V-HU score. Conclusions: The combination of a large volume of total pneumonia infection and a high HU value of consolidation areas was strongly correlated with COVID-19 mortality, as determined by AI-quantified CT.
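The two ingredients of the V-HU score, infection volume and mean consolidation HU, are easy to compute once an AI system supplies the masks. A minimal sketch over flattened voxel arrays (hypothetical data layout; the abstract does not give the exact combination formula, so only the two components are shown):

```python
def infection_volume_ml(infection_mask, voxel_volume_mm3):
    """Total infection volume in mL from a flat 0/1 voxel mask."""
    return sum(infection_mask) * voxel_volume_mm3 / 1000.0

def mean_consolidation_hu(hu_values, consolidation_mask):
    """Average attenuation (HU) over consolidation voxels only."""
    vals = [h for h, m in zip(hu_values, consolidation_mask) if m]
    return sum(vals) / len(vals)

hu   = [-700, -300, 40, 60, -650]   # toy voxel HUs
inf  = [1, 1, 1, 1, 0]              # infection mask (GGO + consolidation)
cons = [0, 0, 1, 1, 0]              # consolidation subset
vol  = infection_volume_ml(inf, voxel_volume_mm3=8.0)  # 4 * 8 mm^3 = 0.032 mL
hu_c = mean_consolidation_hu(hu, cons)                 # (40 + 60) / 2 = 50.0
```

Larger infection volume together with higher (denser) consolidation HU drives the score up, matching the reported association with mortality.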

15.
Clin Imaging ; 77: 291-298, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34171743

ABSTRACT

PURPOSE: To investigate the diagnostic performance of a deep convolutional neural network for differentiation of clear cell renal cell carcinoma (ccRCC) from renal oncocytoma. METHODS: In this retrospective study, 74 patients (49 male, mean age 59.3) with 243 renal masses (203 ccRCC and 40 oncocytoma) that had undergone MR imaging 6 months prior to pathologic confirmation of the lesions were included. Segmentation using seed placement and bounding box selection was used to extract the lesion patches from T2-WI, and T1-WI pre-contrast, post-contrast arterial and venous phases. Then, a deep convolutional neural network (AlexNet) was fine-tuned to distinguish the ccRCC from oncocytoma. Five-fold cross validation was used to evaluate the AI algorithm performance. A subset of 80 lesions (40 ccRCC, 40 oncocytoma) were randomly selected to be classified by two radiologists and their performance was compared to the AI algorithm. Intra-class correlation coefficient was calculated using the Shrout-Fleiss method. RESULTS: Overall accuracy of the AI system was 91% for differentiation of ccRCC from oncocytoma with an area under the curve of 0.9. For the observer study on 80 randomly selected lesions, there was moderate agreement between the two radiologists and AI algorithm. In the comparison sub-dataset, classification accuracies were 81%, 78%, and 70% for AI, radiologist 1, and radiologist 2, respectively. CONCLUSION: The developed AI system in this study showed high diagnostic performance in differentiation of ccRCC versus oncocytoma on multi-phasic MRIs.


Subject(s)
Adenoma, Oxyphilic, Carcinoma, Renal Cell, Deep Learning, Kidney Neoplasms, Adenoma, Oxyphilic/diagnostic imaging, Artificial Intelligence, Carcinoma, Renal Cell/diagnostic imaging, Cell Differentiation, Diagnosis, Differential, Humans, Kidney Neoplasms/diagnostic imaging, Magnetic Resonance Imaging, Male, Middle Aged, Retrospective Studies
16.
Front Oncol ; 11: 785788, 2021.
Article in English | MEDLINE | ID: mdl-35141147

ABSTRACT

BACKGROUND: The current clinical workflow for esophageal gross tumor volume (GTV) contouring relies on manual delineation, with high labor costs and inter-user variability. PURPOSE: To validate the clinical applicability of a deep learning multimodality esophageal GTV contouring model developed at one institution and tested at multiple institutions. MATERIALS AND METHODS: We retrospectively collected 606 patients with esophageal cancer from four institutions. Among them, 252 patients from institution 1 had both a treatment planning CT (pCT) and a paired diagnostic FDG-PET/CT; 354 patients from the three other institutions had only pCT scans, acquired under different staging protocols or at sites lacking PET scanners. A two-stream deep learning model for GTV segmentation was developed using the pCT and PET/CT scans of a subset (148 patients) from institution 1. The trained model can segment GTVs from pCT alone or from combined pCT+PET/CT when available. For independent evaluation, the remaining 104 patients from institution 1 served as an unseen internal test set, and the 354 patients from the other three institutions were used for external testing. The degree of manual revision required was further rated by human experts to assess the contour-editing effort. In addition, the deep model's performance was compared against that of four radiation oncologists in a multi-user study on 20 randomly chosen external patients; contouring accuracy and time were recorded for the delineation process before and after deep learning assistance.
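The abstract does not say how the two-stream model combines its pCT and PET/CT streams. A minimal sketch of the "segment from pCT alone or from pCT+PET/CT when available" behavior, assuming simple late fusion by probability averaging (the function name and fusion rule are assumptions, not the authors' method):

```python
import numpy as np

def fuse_predictions(prob_pct, prob_pet_ct=None):
    """Late-fusion sketch: average the pCT-stream and PET/CT-stream
    per-voxel probability maps when both are available, otherwise
    fall back to the pCT stream alone."""
    prob_pct = np.asarray(prob_pct, dtype=float)
    if prob_pet_ct is None:
        return prob_pct                      # pCT-only input
    return 0.5 * (prob_pct + np.asarray(prob_pet_ct, dtype=float))
```

This keeps a single model interface for both the PET-equipped and pCT-only institutions described above.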

17.
J Med Imaging (Bellingham) ; 6(2): 024007, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31205977

ABSTRACT

Accurate, automated segmentation of the prostate whole gland and central gland on MR images is essential for any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method that automatically segments the whole prostate and central gland from axial-only T2-weighted MR images. The proposed method can generate high-density 3-D surfaces from MR images with low resolution along the z-axis. Most previous methods have focused on axial images alone, e.g., segmenting the prostate slice by slice in 2-D. Such methods tend to over- or under-segment the prostate at the apex and base, which is a major source of error. The proposed method leverages orthogonal context to effectively reduce segmentation ambiguity at the apex and base. It also avoids the jittering and stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches such as 3-D U-Net. Experimental results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the prostate and 90.1% ± 4.6% for the central gland, without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D holistically nested networks with short connections for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.
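The Dice similarity coefficient reported here measures the overlap between two binary masks relative to their combined size. A minimal reference implementation (not the authors' evaluation code; the both-empty convention is a common choice, not taken from the paper):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # convention: two empty masks are a perfect match
    return 2.0 * intersection / denom if denom else 1.0
```

Identical masks give a DSC of 1.0; disjoint masks give 0.0, so the 0.924 reported above indicates near-complete overlap with the reference contours.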

18.
IEEE Trans Med Imaging ; 38(11): 2556-2568, 2019 11.
Article in English | MEDLINE | ID: mdl-30908194

ABSTRACT

Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Measurements are currently still often obtained from manual segmentations of brain MR images, a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of their performance has been lacking. We organized a scientific challenge in which developers could evaluate their methods on a standardized multi-center, multi-scanner image dataset, enabling an objective comparison: the WMH Segmentation Challenge. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. Segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. The methods were additionally ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, a cluster of four methods ranked significantly better than the others, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
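Of the five ranking metrics, the modified Hausdorff distance (95th percentile) is the least standard. One common formulation, sketched here on boundary point sets (the challenge's exact implementation may differ), replaces the maximum of the directed nearest-neighbor distances with their 95th percentile to reduce sensitivity to outlier points:

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile (modified) Hausdorff distance between two
    point sets, e.g. segmentation boundary voxels."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # full pairwise Euclidean distance matrix
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # directed distances: each point to its nearest neighbour in the other set
    d_ab, d_ba = d.min(axis=1), d.min(axis=0)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

For identical point sets the result is 0; shifting one set rigidly by one voxel gives a distance of 1.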


Subject(s)
Computer-Assisted Image Processing/methods , Magnetic Resonance Imaging/methods , White Matter/diagnostic imaging , Aged , Algorithms , Female , Humans , Male , Middle Aged
19.
Sci Transl Med ; 11(495)2019 06 05.
Article in English | MEDLINE | ID: mdl-31167928

ABSTRACT

Autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED), a monogenic disorder caused by AIRE mutations, presents with several autoimmune diseases. Among these, endocrine organ failure is widely recognized, but the prevalence, immunopathogenesis, and treatment of non-endocrine manifestations such as pneumonitis remain poorly characterized. We enrolled 50 patients with APECED in a prospective observational study and comprehensively examined their clinical and radiographic findings, performed pulmonary function tests, and analyzed immunological characteristics in blood, bronchoalveolar lavage fluid, and endobronchial and lung biopsies. Pneumonitis was found in >40% of our patients; it presented early in life, was misdiagnosed despite chronic respiratory symptoms and accompanying radiographic and pulmonary function abnormalities, and caused hypoxemic respiratory failure and death. Autoantibodies against BPIFB1 and KCNRG and the homozygous c.967_979del13 AIRE mutation were associated with pneumonitis development. APECED pneumonitis features compartmentalized immunopathology, with accumulation of activated neutrophils in the airways and lymphocytic infiltration in intraepithelial, submucosal, peribronchiolar, and interstitial areas. Beyond APECED, we extend these observations to lung disease seen in other conditions with secondary AIRE deficiency (thymoma and RAG deficiency). Aire-deficient mice had similar compartmentalized cellular immune responses in the airways and lung tissue, which were ameliorated by deficiency of T and B lymphocytes. Accordingly, T and B lymphocyte-directed immunomodulation controlled symptoms and radiographic abnormalities and improved pulmonary function in patients with APECED pneumonitis.
Collectively, our findings unveil lung autoimmunity as a common, early, and unrecognized manifestation of APECED and provide insights into the immunopathogenesis and treatment of pulmonary autoimmunity associated with impaired central immune tolerance.


Subject(s)
Autoimmune Diseases/immunology , Autoimmune Diseases/pathology , Autoimmunity/physiology , Lymphocytes/immunology , Pneumonia/immunology , Pneumonia/pathology , Adolescent , Adult , Autoantibodies/immunology , Autoimmune Diseases/metabolism , B-Lymphocytes/immunology , B-Lymphocytes/metabolism , Child , Preschool Child , Female , Humans , Infant , Newborn Infant , Lymphocytes/metabolism , Male , Middle Aged , Pneumonia/metabolism , Prospective Studies , T-Lymphocytes/immunology , T-Lymphocytes/metabolism , Young Adult
20.
IEEE Trans Vis Comput Graph ; 24(8): 2298-2314, 2018 08.
Article in English | MEDLINE | ID: mdl-28809701

ABSTRACT

Skeletonization offers a compact representation of an object while preserving important topological and geometrical features. The literature on skeletonization of binary objects is mature, but the challenges involved in skeletonizing fuzzy objects remain largely unaddressed. This paper presents a new theory and algorithm of skeletonization for fuzzy objects, evaluates its performance, and demonstrates its applications. A formulation of fuzzy grassfire propagation is introduced; its relationships with fuzzy distance functions, level sets, and geodesics are discussed; and several new theoretical results are presented in the continuous space. A notion of collision impact of fire-fronts at skeletal points is introduced, and its role in filtering noisy skeletal points is demonstrated. A fuzzy object skeletonization algorithm is developed using new notions of surface- and curve-skeletal voxels, digital collision impact, filtering of noisy skeletal voxels, and continuity of skeletal surfaces. A skeletal noise-pruning algorithm based on branch-level significance is presented. The accuracy and robustness of the new algorithm are examined on computer-generated phantoms and on micro- and conventional CT images of trabecular bone specimens. An application of fuzzy object skeletonization to computing structure width at low image resolution is demonstrated, and its ability to predict bone strength is examined. Finally, the performance of the new fuzzy object skeletonization algorithm is compared with two binary object skeletonization methods.
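Fuzzy grassfire propagation is closely related to the fuzzy distance transform, in which the cost of crossing a voxel scales with its membership value, so the fire-front slows inside high-membership regions. A Dijkstra-style sketch of that idea on a 2-D grid (not the paper's algorithm; the 4-neighborhood and the averaged-membership step cost are simplifying assumptions):

```python
import heapq
import numpy as np

def fuzzy_distance_transform(membership):
    """Dijkstra-style fuzzy distance transform on a 2-D grid.
    Stepping between 4-neighbours p and q costs 0.5 * (f(p) + f(q)),
    so distance accumulates with local membership; background voxels
    (membership == 0) act as the fire sources."""
    f = np.asarray(membership, dtype=float)
    dist = np.full(f.shape, np.inf)
    heap = []
    for idx in zip(*np.nonzero(f == 0)):   # initialise from background
        dist[idx] = 0.0
        heapq.heappush(heap, (0.0, idx))
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                        # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < f.shape[0] and 0 <= nc < f.shape[1]:
                nd = d + 0.5 * (f[r, c] + f[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist
```

With binary membership this reduces to an ordinary (city-block) distance transform, which is one way to see the fuzzy formulation as a generalization of the binary case discussed above.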


Subject(s)
Algorithms , Computer Graphics/statistics & numerical data , Fuzzy Logic , Animals , Bones/diagnostic imaging , Bones/physiology , Computer Simulation , Humans , Anatomic Models , Statistical Models , Imaging Phantoms/statistics & numerical data , X-Ray Computed Tomography/statistics & numerical data , X-Ray Microtomography/statistics & numerical data