Results 1 - 20 of 30
1.
Article in English | MEDLINE | ID: mdl-38687670

ABSTRACT

Automated colorectal cancer (CRC) segmentation in medical imaging is the key to achieving automation of CRC detection, staging, and treatment response monitoring. Compared with magnetic resonance imaging (MRI) and computed tomography colonography (CTC), conventional computed tomography (CT) has enormous potential because of its broad implementation, superiority for the hollow viscera (colon), and convenience without needing bowel preparation. However, the segmentation of CRC in conventional CT is more challenging due to the difficulties presenting with the unprepared bowel, such as distinguishing the colorectum from other structures with similar appearance and distinguishing the CRC from the contents of the colorectum. To tackle these challenges, we introduce DeepCRC-SL, the first automated segmentation algorithm for CRC and colorectum in conventional contrast-enhanced CT scans. We propose a topology-aware deep learning-based approach, which builds a novel 1-D colorectal coordinate system and encodes each voxel of the colorectum with a relative position along the coordinate system. We then induce an auxiliary regression task to predict the colorectal coordinate value of each voxel, aiming to integrate global topology into the segmentation network and thus improve the colorectum's continuity. Self-attention layers are utilized to capture global contexts for the coordinate regression task and enhance the ability to differentiate CRC and colorectum tissues. Moreover, a coordinate-driven self-learning (SL) strategy is introduced to leverage a large amount of unlabeled data to improve segmentation performance. We validate the proposed approach on a dataset including 227 labeled and 585 unlabeled CRC cases by fivefold cross-validation. 
Experimental results demonstrate that our method outperforms recent related segmentation methods, achieving a DSC of 0.669 for CRC and 0.892 for the colorectum, reaching the performance (0.639 and 0.890, respectively) of a medical resident with two years of specialized CRC imaging fellowship.
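The DSC figures quoted above follow the standard overlap definition; a minimal sketch with toy masks (not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping 2-D masks
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True  # 4 voxels
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True      # 6 voxels
print(dice_coefficient(pred, gt))  # 2*4/(4+6) = 0.8
```

The same formula applies per structure (CRC vs. colorectum), which is why the abstract reports two separate DSC values.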

2.
Ann Am Thorac Soc ; 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38530051

ABSTRACT

Rationale: Rates of emphysema progression vary in chronic obstructive pulmonary disease (COPD), and the relationship with vascular and airway pathophysiology remains unclear. Objective: We sought to determine whether indices of peripheral (segmental and beyond) pulmonary arterial (PA) dilation measured via computed tomography (CT) are associated with a 1-year index of emphysema (EI: % voxels < -950 HU) progression. Methods: 599 GOLD 0-3 former and never-smokers were evaluated from the SubPopulations and InterMediate Outcome Measures in COPD Study (SPIROMICS) cohort: rapid emphysema progressors (RP, n = 188; 1-year ΔEI > 1%), non-progressors (NP, n = 301; 1-year ΔEI within ±0.5%), and never-smokers (NS, n = 110). Segmental PA cross-sectional areas were standardized to the associated airway luminal areas (segmental pulmonary artery-to-airway ratio: PAARseg). Full-inspiratory CT scan-derived total (arteries + veins) pulmonary vascular volume (TPVV) was compared to the vessel volume with radius smaller than 0.75 mm (SVV.75/TPVV). Airway-to-lung ratios (an index of dysanapsis and COPD risk) were compared to TPVV-to-lung-volume ratios. Results: Compared with NP, RP exhibited significantly larger PAARseg (0.73 ± 0.29 vs. 0.67 ± 0.23; p = 0.001), lower TPVV-to-lung-volume ratio (3.21% ± 0.42% vs. 3.48% ± 0.38%; p = 5.0 × 10⁻¹²), lower airway-to-lung-volume ratio (0.031 ± 0.003 vs. 0.034 ± 0.004; p = 6.1 × 10⁻¹³), and larger SVV.75/TPVV (37.91% ± 4.26% vs. 35.53% ± 4.89%; p = 1.9 × 10⁻⁷). In adjusted analyses, a 1-SD increment in PAARseg was associated with a 98.4% higher rate of severe exacerbations (95% CI: 29% to 206%; p = 0.002) and 79.3% higher odds of being in the rapid emphysema progression group (95% CI: 24% to 157%; p = 0.001). At the year-2 follow-up, the CT-defined RP group demonstrated a significant decline in post-bronchodilator FEV1% predicted.
Conclusion: Rapid one-year progression of emphysema was associated with indices indicative of higher peripheral pulmonary vascular resistance and a possible role played by pulmonary vascular-airway dysanapsis.
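The emphysema index used in this study (EI: percentage of lung voxels below -950 HU) is straightforward to compute from a HU volume and a lung mask; a minimal sketch with a toy 1-D "volume":

```python
import numpy as np

def emphysema_index(hu_volume, lung_mask, threshold=-950):
    """Emphysema index (EI): percentage of lung voxels below a HU threshold."""
    lung_hu = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung_hu < threshold) / lung_hu.size

# Toy data: 8 lung voxels, 2 of them below -950 HU
hu = np.array([-960., -980., -700., -800., -850., -900., -600., -500.])
mask = np.ones(8, dtype=bool)
print(emphysema_index(hu, mask))  # 25.0
```

The 1-year ΔEI used to define rapid progressors is then simply the difference between two such values computed on registered baseline and follow-up scans.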

3.
IEEE Trans Med Imaging ; 43(1): 96-107, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37399157

ABSTRACT

Deep learning has been widely used in medical image segmentation and beyond. However, the performance of existing medical image segmentation models has been limited by the difficulty of obtaining sufficient high-quality labeled data due to the prohibitive cost of annotation. To alleviate this limitation, we propose a new text-augmented medical image segmentation model, LViT (Language meets Vision Transformer). In our LViT model, medical text annotation is incorporated to compensate for quality deficiencies in the image data. In addition, the text information guides the generation of pseudo labels of improved quality in semi-supervised learning. We also propose an Exponential Pseudo-label Iteration mechanism (EPI) to help the Pixel-Level Attention Module (PLAM) preserve local image features in the semi-supervised LViT setting. In our model, an LV (Language-Vision) loss is designed to supervise the training of unlabeled images using text information directly. For evaluation, we construct three multimodal medical segmentation datasets (image + text) containing X-ray and CT images. Experimental results show that our proposed LViT achieves superior segmentation performance in both fully supervised and semi-supervised settings. The code and datasets are available at https://github.com/HUANGLIZI/LViT.
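The abstract describes the EPI mechanism only at a high level; one plausible reading (my assumption, not the paper's exact formulation) is an exponential moving-average update of soft pseudo labels across training iterations:

```python
import numpy as np

def epi_update(pseudo_prob, new_prob, beta=0.9):
    """Exponential moving-average update of soft pseudo labels
    (sketch of an EPI-style iteration; beta is a hypothetical momentum)."""
    return beta * pseudo_prob + (1.0 - beta) * new_prob

# Toy: current pseudo-label probability map and a fresh model prediction
pseudo = np.array([0.5, 0.5])
fresh = np.array([1.0, 0.0])
updated = epi_update(pseudo, fresh)
print(updated)  # [0.55 0.45]
```

Smoothing pseudo labels this way damps the oscillation of noisy per-iteration predictions, which is the usual motivation for exponential iteration schemes in semi-supervised training.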


Subject(s)
Language , Supervised Machine Learning , Computer-Assisted Image Processing
4.
Med Image Anal ; 90: 102957, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716199

ABSTRACT

Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have extended the reach of pulmonary airway segmentation closer to the limit of image resolution. Since the EXACT'09 pulmonary airway segmentation challenge, limited effort has been directed to the quantitative comparison of newly emerged algorithms, despite the maturity of deep learning-based approaches and extensive clinical efforts toward resolving finer details of distal airways for early intervention in pulmonary diseases. Thus far, publicly annotated datasets have been extremely limited, hindering the development of data-driven methods and the detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling challenge (ATM'22), held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotations, including 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and further includes a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Both quantitative and qualitative results revealed that deep learning models embedding topological continuity enhancement achieved superior performance in general. The ATM'22 challenge remains open for submissions; the training data and gold-standard evaluation are available upon successful registration via its homepage (https://atm22.grand-challenge.org/).


Subject(s)
Lung Diseases , Trees , Humans , X-Ray Computed Tomography/methods , Computer-Assisted Image Processing/methods , Algorithms , Lung/diagnostic imaging
5.
Nat Commun ; 13(1): 6137, 2022 10 17.
Article in English | MEDLINE | ID: mdl-36253346

ABSTRACT

Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OARs. We train SOARS using 176 patients from an internal institution and independently evaluate it on 1327 external patients across six different institutions. It consistently outperforms other state-of-the-art methods by at least 3-5% in Dice score for each institutional evaluation (up to 36% relative distance error reduction). Crucially, multi-user studies demonstrate that 98% of SOARS predictions need only minor or no revisions to achieve clinical acceptance (reducing workloads by 90%). Moreover, segmentation and dosimetric accuracy are within or smaller than the inter-user variation.


Subject(s)
Head and Neck Neoplasms , Organs at Risk , Head and Neck Neoplasms/radiotherapy , Humans , Computer-Assisted Image Processing/methods , Neck , Radiometry
7.
IEEE Trans Med Imaging ; 41(10): 2658-2669, 2022 10.
Article in English | MEDLINE | ID: mdl-35442886

ABSTRACT

Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structures. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle, it is possible to use landmark detection or semantic segmentation for this task, but to work well these require large amounts of labeled data for each anatomical structure and sub-structure of interest. A more universal approach would learn the intrinsic structure from unlabeled images. We introduce such an approach, called Self-supervised Anatomical eMbedding (SAM). SAM generates semantic embeddings for each image pixel that describe its anatomical location or body part. To produce such embeddings, we propose a pixel-level contrastive learning framework. A coarse-to-fine strategy ensures both global and local anatomical information are encoded. Negative sample selection strategies are designed to enhance the embedding's discriminability. Using SAM, one can label any point of interest on a template image and then locate the same body part in other images by a simple nearest-neighbor search. We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities. On a chest CT dataset with 19 landmarks, SAM outperforms widely-used registration algorithms while only taking 0.23 seconds for inference. On two X-ray datasets, SAM, with only one labeled template image, surpasses supervised methods trained on 50 labeled images. We also apply SAM to whole-body follow-up lesion matching in CT and obtain an accuracy of 91%. SAM can also be applied to improve image registration and initialize CNN weights.
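The nearest-neighbor lookup described above (label a point on a template image, then find the matching pixel elsewhere) reduces to a similarity search over per-pixel embeddings; a minimal sketch with toy vectors (cosine similarity is an assumption; the paper's exact metric may differ):

```python
import numpy as np

def locate_point(template_embedding, query_embeddings):
    """Find the pixel in a query image whose embedding is most similar
    (cosine similarity) to a labeled template point's embedding."""
    t = template_embedding / np.linalg.norm(template_embedding)
    q = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
    return int(np.argmax(q @ t))

# Toy: 3 query pixels with 4-D embeddings; pixel 2 matches the template best
template = np.array([1.0, 0.0, 0.0, 0.0])
query = np.array([[0.0, 1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.5, 0.5],
                  [0.9, 0.1, 0.0, 0.0]])
print(locate_point(template, query))  # 2
```

In practice the query set would be every pixel of a 3-D volume (flattened), which is why the reported 0.23-second inference time matters.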


Subject(s)
Three-Dimensional Imaging , X-Ray Computed Tomography , Algorithms , Computer-Assisted Image Processing/methods , Three-Dimensional Imaging/methods , Radiography , Supervised Machine Learning , X-Ray Computed Tomography/methods
8.
Clin Imaging ; 77: 291-298, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34171743

ABSTRACT

PURPOSE: To investigate the diagnostic performance of a deep convolutional neural network for differentiation of clear cell renal cell carcinoma (ccRCC) from renal oncocytoma. METHODS: In this retrospective study, 74 patients (49 male, mean age 59.3 years) with 243 renal masses (203 ccRCC and 40 oncocytoma) that had undergone MR imaging in the 6 months prior to pathologic confirmation of the lesions were included. Segmentation using seed placement and bounding-box selection was used to extract the lesion patches from T2-WI and from T1-WI pre-contrast, post-contrast arterial, and venous phases. Then, a deep convolutional neural network (AlexNet) was fine-tuned to distinguish ccRCC from oncocytoma. Five-fold cross-validation was used to evaluate the AI algorithm's performance. A subset of 80 lesions (40 ccRCC, 40 oncocytoma) was randomly selected to be classified by two radiologists, and their performance was compared to the AI algorithm. The intra-class correlation coefficient was calculated using the Shrout-Fleiss method. RESULTS: Overall accuracy of the AI system was 91% for differentiation of ccRCC from oncocytoma, with an area under the curve of 0.9. For the observer study on 80 randomly selected lesions, there was moderate agreement between the two radiologists and the AI algorithm. In the comparison sub-dataset, classification accuracies were 81%, 78%, and 70% for the AI, radiologist 1, and radiologist 2, respectively. CONCLUSION: The AI system developed in this study showed high diagnostic performance in differentiating ccRCC from oncocytoma on multi-phasic MRI.


Subject(s)
Oxyphilic Adenoma , Renal Cell Carcinoma , Deep Learning , Kidney Neoplasms , Oxyphilic Adenoma/diagnostic imaging , Artificial Intelligence , Renal Cell Carcinoma/diagnostic imaging , Cell Differentiation , Differential Diagnosis , Humans , Kidney Neoplasms/diagnostic imaging , Magnetic Resonance Imaging , Male , Middle Aged , Retrospective Studies
9.
Med Image Anal ; 68: 101909, 2021 02.
Article in English | MEDLINE | ID: mdl-33341494

ABSTRACT

Gross tumor volume (GTV) and clinical target volume (CTV) delineation are two critical steps in cancer radiotherapy planning. GTV defines the primary treatment area of the gross tumor, while CTV outlines the sub-clinical malignant disease. Automatic GTV and CTV segmentation are both challenging, for distinct reasons: GTV segmentation relies on the radiotherapy computed tomography (RTCT) image appearance, which suffers from poor contrast with the surrounding tissues, while CTV delineation relies on a mixture of predefined and judgement-based margins. High intra- and inter-user variability makes this a particularly difficult task. We develop tailored methods solving each task in esophageal cancer radiotherapy, together leading to a comprehensive solution for the target contouring task. Specifically, we integrate the RTCT and positron emission tomography (PET) modalities into a two-stream chained deep fusion framework that takes advantage of both modalities to facilitate more accurate GTV segmentation. For CTV segmentation, since it is highly context-dependent (it must encompass the GTV and involved lymph nodes while also avoiding excessive exposure to the organs at risk), we formulate it as a deep contextual appearance-based problem using encoded spatial distances of these anatomical structures. This better emulates the margin- and appearance-based CTV delineation performed by oncologists. Adding to our contributions, for GTV segmentation we propose a simple yet effective progressive semantically-nested network (PSNN) backbone that outperforms more complicated models. Our work is the first to provide a comprehensive solution for esophageal GTV and CTV segmentation in radiotherapy planning. Extensive 4-fold cross-validation on 148 esophageal cancer patients, the largest analysis to date, was carried out for both tasks.
The results demonstrate that our GTV and CTV segmentation approaches significantly improve performance over previous state-of-the-art works, e.g., an 8.7% increase in Dice score (DSC) and a 32.9 mm reduction in Hausdorff distance (HD) for GTV segmentation, and a 3.4% increase in DSC and a 29.4 mm reduction in HD for CTV segmentation.
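The Hausdorff distance (HD) reported above measures the worst-case boundary disagreement between two contours; a brute-force sketch for small point sets:

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (N, d) and (M, d):
    the largest distance from any point in one set to its nearest point in the other."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2-D contours: one point of b sits 3 units from its nearest point in a
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff_distance(a, b))  # 3.0
```

The O(N·M) pairwise matrix is fine for contour samples but not for dense 3-D masks, where distance-transform-based implementations are the usual choice.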


Subject(s)
Esophageal Neoplasms , Computer-Assisted Radiotherapy Planning , Esophageal Neoplasms/diagnostic imaging , Esophageal Neoplasms/radiotherapy , Humans , Positron-Emission Tomography , X-Ray Computed Tomography , Tumor Burden
10.
IEEE Trans Med Imaging ; 40(10): 2759-2770, 2021 10.
Article in English | MEDLINE | ID: mdl-33370236

ABSTRACT

Large-scale datasets with high-quality labels are desired for training accurate deep learning models. However, due to the annotation cost, datasets in medical imaging are often either partially labeled or small. For example, DeepLesion is a large-scale CT image dataset with lesions of various types, but it also has many unlabeled lesions (missing annotations). When training a lesion detector on a partially labeled dataset, the missing annotations generate incorrect negative signals and degrade performance. Besides DeepLesion, there are several small single-type datasets, such as LUNA for lung nodules and LiTS for liver tumors. These datasets have heterogeneous label scopes, i.e., different lesion types are labeled in different datasets, with other types ignored. In this work, we develop a universal lesion detection algorithm that detects a variety of lesions and tackles the problem of heterogeneous and partial labels. First, we build a simple yet effective lesion detection framework named Lesion ENSemble (LENS). LENS can efficiently learn from multiple heterogeneous lesion datasets in a multi-task fashion and leverage their synergy by proposal fusion. Next, we propose strategies to mine missing annotations from partially labeled datasets by exploiting clinical prior knowledge and cross-dataset knowledge transfer. Finally, we train our framework on four public lesion datasets and evaluate it on 800 manually labeled sub-volumes in DeepLesion. Our method brings a relative improvement of 49% over the current state-of-the-art approach in the metric of average sensitivity. We have publicly released our manual 3D annotations of DeepLesion online (https://github.com/viggin/DeepLesion_manual_test_set).
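The abstract does not spell out LENS's proposal-fusion step; as a purely illustrative stand-in, a score-ordered non-maximum suppression over the boxes pooled from several detectors looks like this:

```python
def iou(b1, b2):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(b1) + area(b2) - inter)

def fuse_proposals(proposals, thr=0.5):
    """Merge (box, score) proposals pooled from several detectors: keep the
    highest-scoring box and suppress overlapping duplicates (plain NMS)."""
    kept = []
    for box, score in sorted(proposals, key=lambda p: -p[1]):
        if all(iou(box, k) < thr for k, _ in kept):
            kept.append((box, score))
    return kept

# Two detectors propose near-identical boxes for the same lesion,
# plus one proposal for a distant lesion
props = [((10, 10, 50, 50), 0.9), ((12, 11, 52, 49), 0.8), ((100, 100, 120, 120), 0.7)]
print(fuse_proposals(props))  # keeps the 0.9 box and the distant 0.7 box
```

The paper's actual fusion may weight or re-score proposals per source dataset; this sketch only shows the duplicate-removal core shared by such schemes.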


Subject(s)
Algorithms , X-Ray Computed Tomography , Radiography
11.
Front Oncol ; 11: 785788, 2021.
Article in English | MEDLINE | ID: mdl-35141147

ABSTRACT

BACKGROUND: The current clinical workflow for esophageal gross tumor volume (GTV) contouring relies on manual delineation, with high labor costs and inter-user variability. PURPOSE: To validate the clinical applicability of a deep learning multimodality esophageal GTV contouring model developed at one institution and tested at multiple institutions. MATERIALS AND METHODS: We retrospectively collected 606 patients with esophageal cancer from four institutions. Among them, 252 patients from institution 1 had both a treatment planning CT (pCT) and a pair of diagnostic FDG-PET/CT scans; 354 patients from the three other institutions had only pCT scans, acquired under different staging protocols or at sites lacking PET scanners. A two-streamed deep learning model for GTV segmentation was developed using pCT and PET/CT scans of a subset (148 patients) from institution 1. The built model had the flexibility of segmenting GTVs via pCT alone or pCT+PET/CT combined, when available. For independent evaluation, the remaining 104 patients from institution 1 served as an unseen internal testing set, and the 354 patients from the other three institutions were used for external testing. The degree of manual revision required was further evaluated by human experts to assess the contour-editing effort. Furthermore, the deep model's performance was compared against four radiation oncologists in a multi-user study using 20 randomly chosen external patients. Contouring accuracy and time were recorded for the pre- and post-deep-learning-assisted delineation processes.

12.
Front Radiol ; 1: 661237, 2021.
Article in English | MEDLINE | ID: mdl-37492171

ABSTRACT

Purpose: Computed tomography (CT) characteristics associated with critical outcomes of patients with coronavirus disease 2019 (COVID-19) have been reported. However, CT risk factors for mortality have not been directly reported. We aimed to determine CT-based quantitative predictors of COVID-19 mortality. Methods: In this retrospective study, laboratory-confirmed COVID-19 patients at Wuhan Central Hospital between December 9, 2019, and March 19, 2020, were included. A novel prognostic biomarker, the V-HU score, depicting the volume (V) of total pneumonia infection and the average Hounsfield unit (HU) of consolidation areas, was automatically quantified from CT by an artificial intelligence (AI) system. Cox proportional hazards models were used to investigate risk factors for mortality. Results: The study included 238 patients (women 136/238, 57%; median age 65 years, IQR 51-74 years), 126 of whom were survivors. The V-HU score was an independent predictor (hazard ratio [HR] 2.78, 95% confidence interval [CI] 1.50-5.17; p = 0.001) after adjusting for several COVID-19 prognostic indicators significant in univariable analysis. The prognostic performance of the model containing clinical and outpatient laboratory factors was improved by integrating the V-HU score (c-index: 0.695 vs. 0.728; p < 0.001). Older patients (age ≥ 65 years; HR 3.56, 95% CI 1.64-7.71; p < 0.001) and younger patients (age < 65 years; HR 4.60, 95% CI 1.92-10.99; p < 0.001) could be further risk-stratified by the V-HU score. Conclusions: A combination of an increased volume of total pneumonia infection and a high HU value of consolidation areas showed a strong correlation with COVID-19 mortality, as determined by AI-quantified CT.
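The exact combination of volume and HU into the single V-HU score is not given in the abstract; its two ingredients, however, can be sketched directly (the function name and voxel size below are hypothetical):

```python
import numpy as np

def vhu_components(hu_volume, infection_mask, consolidation_mask, voxel_ml=0.001):
    """Sketch of the two V-HU ingredients: total infection volume (ml) and
    mean HU of consolidation voxels. How the paper fuses them into one
    score is not stated in the abstract, so only the components are shown."""
    volume_ml = float(infection_mask.sum() * voxel_ml)   # V: infection volume
    mean_hu = float(hu_volume[consolidation_mask].mean())  # HU: consolidation density
    return volume_ml, mean_hu

# Toy data: 5 infected voxels, 3 of which are consolidation
hu = np.array([-700., -650., 40., 60., 20., -800.])
infection = np.array([1, 1, 1, 1, 1, 0], dtype=bool)
consolid = np.array([0, 0, 1, 1, 1, 0], dtype=bool)
print(vhu_components(hu, infection, consolid))  # (0.005, 40.0)
```

Each component would then enter the Cox model (directly or combined), with the reported HR of 2.78 describing the hazard increase per unit of the fused score.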

13.
Eur Radiol ; 30(12): 6828-6837, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32683550

ABSTRACT

OBJECTIVE: To develop a fully automated AI system to quantitatively assess the disease severity and disease progression of COVID-19 using thick-section chest CT images. METHODS: In this retrospective study, an AI system was developed to automatically segment and quantify the COVID-19-infected lung regions on thick-section chest CT images. Five hundred thirty-one CT scans from 204 COVID-19 patients were collected from one appointed COVID-19 hospital. The automatically segmented lung abnormalities were compared with manual segmentations by two experienced radiologists using the Dice coefficient on a randomly selected subset (30 CT scans). Two imaging biomarkers were automatically computed, i.e., the portion of infection (POI) and the average infection HU (iHU), to assess disease severity and disease progression. The assessments were compared with patient status from diagnosis reports and key phrases extracted from radiology reports using the area under the receiver operating characteristic curve (AUC) and Cohen's kappa, respectively. RESULTS: The Dice coefficient between the segmentation of the AI system and the two experienced radiologists for the COVID-19-infected lung abnormalities was 0.74 ± 0.28 and 0.76 ± 0.29, respectively, which was close to the inter-observer agreement (0.79 ± 0.25). The two computed imaging biomarkers could distinguish between the severe and non-severe stages with an AUC of 0.97 (p value < 0.001). Very good agreement (κ = 0.8220) between the AI system and the radiologists was achieved in evaluating the changes in infection volumes. CONCLUSIONS: A deep learning-based AI system built on thick-section CT imaging can accurately quantify the COVID-19-associated lung abnormalities and assess the disease severity and its progression. KEY POINTS: • A deep learning-based AI system was able to accurately segment the lung regions infected by COVID-19 using thick-section CT scans (Dice coefficient ≥ 0.74).
• The computed imaging biomarkers were able to distinguish between the non-severe and severe COVID-19 stages (area under the receiver operating characteristic curve 0.97). • The infection volume changes computed by the AI system were able to assess the COVID-19 progression (Cohen's kappa 0.8220).
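The AUC used to separate severe from non-severe stages can be computed directly from biomarker scores via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative case.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of positive-negative pairs ranked correctly (ties count half)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Biomarker values for 3 severe (label 1) and 3 non-severe (label 0) cases
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(round(roc_auc(scores, labels), 3))  # 8/9 ≈ 0.889
```

An AUC of 0.97, as reported for the POI/iHU biomarkers, means almost every severe case scores above almost every non-severe one.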


Subject(s)
Betacoronavirus , Community-Acquired Infections/diagnosis , Coronavirus Infections/diagnosis , Deep Learning , Lung/diagnostic imaging , Viral Pneumonia/diagnosis , Pneumonia/diagnosis , X-Ray Computed Tomography/methods , Artificial Intelligence , COVID-19 , China/epidemiology , Disease Progression , Female , Humans , Male , Middle Aged , Pandemics , ROC Curve , Retrospective Studies , SARS-CoV-2
14.
Sci Transl Med ; 11(495)2019 06 05.
Article in English | MEDLINE | ID: mdl-31167928

ABSTRACT

Autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED), a monogenic disorder caused by AIRE mutations, presents with several autoimmune diseases. Among these, endocrine organ failure is widely recognized, but the prevalence, immunopathogenesis, and treatment of non-endocrine manifestations such as pneumonitis remain poorly characterized. We enrolled 50 patients with APECED in a prospective observational study and comprehensively examined their clinical and radiographic findings, performed pulmonary function tests, and analyzed immunological characteristics in blood, bronchoalveolar lavage fluid, and endobronchial and lung biopsies. Pneumonitis was found in >40% of our patients, presented early in life, was misdiagnosed despite chronic respiratory symptoms and accompanying radiographic and pulmonary function abnormalities, and caused hypoxemic respiratory failure and death. Autoantibodies against BPIFB1 and KCNRG and the homozygous c.967_979del13 AIRE mutation are associated with pneumonitis development. APECED pneumonitis features compartmentalized immunopathology, with accumulation of activated neutrophils in the airways and lymphocytic infiltration in intraepithelial, submucosal, peribronchiolar, and interstitial areas. Beyond APECED, we extend these observations to lung disease seen in other conditions with secondary AIRE deficiency (thymoma and RAG deficiency). Aire-deficient mice had similar compartmentalized cellular immune responses in the airways and lung tissue, which was ameliorated by deficiency of T and B lymphocytes. Accordingly, T and B lymphocyte-directed immunomodulation controlled symptoms and radiographic abnormalities and improved pulmonary function in patients with APECED pneumonitis. 
Collectively, our findings unveil lung autoimmunity as a common, early, and unrecognized manifestation of APECED and provide insights into the immunopathogenesis and treatment of pulmonary autoimmunity associated with impaired central immune tolerance.


Subject(s)
Autoimmune Diseases/immunology , Autoimmune Diseases/pathology , Autoimmunity/physiology , Lymphocytes/immunology , Pneumonia/immunology , Pneumonia/pathology , Adolescent , Adult , Autoantibodies/immunology , Autoimmune Diseases/metabolism , B-Lymphocytes/immunology , B-Lymphocytes/metabolism , Child , Preschool Child , Female , Humans , Infant , Newborn Infant , Lymphocytes/metabolism , Male , Middle Aged , Pneumonia/metabolism , Prospective Studies , T-Lymphocytes/immunology , T-Lymphocytes/metabolism , Young Adult
15.
J Med Imaging (Bellingham) ; 6(2): 024007, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31205977

ABSTRACT

Accurate and automated prostate whole-gland and central-gland segmentations on MR images are essential for aiding any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from T2-weighted axial-only MR images. The proposed method can generate high-density 3-D surfaces from low-resolution (z-axis) MR images. In the past, most methods have focused on axial images alone, e.g., 2-D based segmentation of the prostate from each 2-D slice. Those methods suffer from over-segmenting or under-segmenting the prostate at the apex and base, which is a major source of error. The proposed method leverages the orthogonal context to effectively reduce the apex and base segmentation ambiguities. It also overcomes jittering or stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches, such as 3-D U-Net. The experimental results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the prostate and 90.1% ± 4.6% for the central gland, without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D-based holistically nested networks with short connections method for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.

16.
IEEE Trans Med Imaging ; 38(11): 2556-2568, 2019 11.
Article in English | MEDLINE | ID: mdl-30908194

ABSTRACT

Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of their performance has been lacking. We organized a scientific challenge in which developers could evaluate their methods on a standardized multi-center/-scanner image dataset, giving an objective comparison: the WMH Segmentation Challenge. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. The segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. In addition, the methods were ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the others, with one clear winner. The inter-scanner robustness ranking shows that not all the methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
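Metric 5 above, the F1-score for individual lesions, combines counts of detected reference lesions (TP), spurious detections (FP), and missed reference lesions (FN); a minimal sketch:

```python
def lesion_f1(tp, fp, fn):
    """Lesion-wise F1: harmonic mean of precision and recall computed
    over individual lesions rather than voxels."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A method that detects 8 of 10 reference lesions with 4 false positives
print(round(lesion_f1(tp=8, fp=4, fn=2), 3))  # 8/11 ≈ 0.727
```

Unlike the voxel-wise Dice coefficient (metric 1), this treats a barely-touched lesion and a perfectly-segmented one identically, which is why the challenge reports both.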


Subject(s)
Computer-Assisted Image Processing/methods , Magnetic Resonance Imaging/methods , White Matter/diagnostic imaging , Aged , Algorithms , Female , Humans , Male , Middle Aged
17.
IEEE Trans Vis Comput Graph ; 24(8): 2298-2314, 2018 08.
Article in English | MEDLINE | ID: mdl-28809701

ABSTRACT

Skeletonization offers a compact representation of an object while preserving important topological and geometrical features. The literature on skeletonization of binary objects is quite mature; however, the challenges involved in skeletonization of fuzzy objects remain mostly unaddressed. This paper presents a new theory and algorithm of skeletonization for fuzzy objects, evaluates its performance, and demonstrates its applications. A formulation of fuzzy grassfire propagation is introduced; its relationships with fuzzy distance functions, level sets, and geodesics are discussed; and several new theoretical results are presented in the continuous space. A notion of collision impact of fire-fronts at skeletal points is introduced, and its role in filtering noisy skeletal points is demonstrated. A fuzzy object skeletonization algorithm is developed using new notions of surface- and curve-skeletal voxels, digital collision impact, filtering of noisy skeletal voxels, and continuity of skeletal surfaces. A skeletal noise-pruning algorithm is presented using branch-level significance. Accuracy and robustness of the new algorithm are examined on computer-generated phantoms and on micro- and conventional CT imaging of trabecular bone specimens. An application of fuzzy object skeletonization to compute structure width at a low image resolution is demonstrated, and its ability to predict bone strength is examined. Finally, the performance of the new fuzzy object skeletonization algorithm is compared with two binary object skeletonization methods.
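The fuzzy grassfire idea (fire propagating through material weighted by its membership value) can be illustrated with a 1-D fuzzy distance transform. This is a simplified sketch of the concept, not the paper's algorithm: each unit step costs the average membership of its two endpoints, and the distance to the background is taken from the nearer side.

```python
import numpy as np

def fuzzy_distance_1d(membership):
    """1-D fuzzy distance transform: distance to the background through
    the fuzzy object, where a unit step costs the mean membership of its
    endpoints (toy sketch of fuzzy grassfire propagation)."""
    mu = np.asarray(membership, dtype=float)

    def one_sided(m):
        d, out = 0.0, np.zeros(len(m))
        for i, v in enumerate(m):
            if v == 0:
                d = 0.0  # background resets the fire-front
            else:
                prev = m[i - 1] if i > 0 else 0.0
                d += 0.5 * (prev + v)
            out[i] = d
        return out

    # Propagate from both ends and keep the smaller distance
    return np.minimum(one_sided(mu), one_sided(mu[::-1])[::-1])

mu = [0.0, 1.0, 1.0, 0.5, 0.0]
print(fuzzy_distance_1d(mu))  # [0.   0.5  1.   0.25 0.  ]
```

Local maxima of this map (here the voxel with value 1.0) are skeletal candidates; low-membership material accumulates distance more slowly, which is the fuzzy analogue of the grassfire reaching thin or faint structures later.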


Subject(s)
Algorithms, Computer Graphics/statistics & numerical data, Fuzzy Logic, Animals, Bone and Bones/diagnostic imaging, Bone and Bones/physiology, Computer Simulation, Humans, Models, Anatomic, Models, Statistical, Phantoms, Imaging/statistics & numerical data, Tomography, X-Ray Computed/statistics & numerical data, X-Ray Microtomography/statistics & numerical data
18.
Med Phys ; 45(1): 236-249, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29064579

ABSTRACT

PURPOSE: Osteoporosis, associated with reduced bone mineral density (BMD) and microarchitectural changes, puts patients at an elevated risk of fracture. Modern multidetector row CT (MDCT) technology, producing high spatial resolution at increasingly lower radiation dose, is emerging as a viable modality for trabecular bone (Tb) imaging. Wide variation in CT scanners raises concerns of data uniformity in multisite and longitudinal studies. A comprehensive cadaveric study was performed to evaluate MDCT-derived Tb microarchitectural measures. A human pilot study was performed comparing the continuity of Tb measures estimated from two MDCT scanners with significantly different image resolution features. METHODS: Micro-CT imaging of cadaveric ankle specimens (n = 25) was used to examine the validity of MDCT-derived Tb microarchitectural measures. Repeat-scan reproducibility of MDCT-based Tb measures and their ability to predict mechanical properties were examined. To assess multiscanner data continuity of Tb measures, the distal tibias of 20 volunteers (age: 26.2 ± 4.5 years; 10 female) were scanned using the Siemens SOMATOM Definition Flash and the higher-resolution Siemens SOMATOM Force scanners, with an average 45-day gap between scans. The correlation of Tb measures derived from the two scanners over 30% and 60% peel regions at 4% to 8% of the distal tibia was analyzed. RESULTS: MDCT-based Tb measures characterizing bone network area density, plate-rod microarchitecture, and transverse trabeculae showed good correlations (r ∈ [0.85, 0.92]) with the gold-standard micro-CT-derived values of matching Tb measures. However, other MDCT-derived Tb measures characterizing trabecular thickness and separation, erosion index, and structure model index produced weak correlations (r < 0.8) with their micro-CT-derived values. Most MDCT Tb measures were found repeatable (ICC ∈ [0.94, 0.98]).
The Tb plate-width measure showed a strong correlation (r = 0.89) with experimental yield stress, while the transverse trabecular measure produced the highest correlation (r = 0.81) with Young's modulus. The data continuity experiment showed that, despite significant differences in image resolution between the two scanners (10% MTF along the xy-plane and z-direction; Flash: 16.2 and 17.9 lp/cm; Force: 24.8 and 21.0 lp/cm), most Tb measures had high Pearson correlations (r > 0.95) between values estimated from the two scanners. Relatively lower correlation coefficients were observed for the bone network area density (r = 0.91) and Tb separation (r = 0.93) measures. CONCLUSION: Most MDCT-derived Tb microarchitectural measures are reproducible, and their values derived from the two scanners strongly correlate with each other as well as with bone strength. This study highlighted the MDCT-derived measures that show the greatest promise for characterizing bone network area density and plate-rod and transverse trabecular distributions, with good correlations (r ≥ 0.85) against their micro-CT-derived values. At the same time, other measures representing trabecular thickness and separation, erosion index, and structure model index produced weak correlations (r < 0.8) with their micro-CT-derived values, failing to accurately portray the projected trabecular microarchitectural features. Strong correlations of Tb measures estimated from the two scanners suggest that image data from different scanners can be used successfully in multisite and longitudinal studies, with linear calibration required for some measures. In summary, modern MDCT scanners are suitable for effective quantitative imaging of peripheral Tb microarchitecture if care is taken to focus on appropriate quantitative metrics.
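The inter-scanner data-continuity analysis rests on Pearson correlation between paired measurements and, where needed, a linear calibration mapping one scanner's values onto the other's. A minimal NumPy sketch (the function names are our own; the paper does not publish code):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

def linear_calibration(x, y):
    """Least-squares slope and intercept mapping scanner-A values (x)
    onto scanner-B values (y), for measures needing cross-calibration."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept
```

A measure with r close to 1 but a non-identity calibration line (slope ≠ 1 or intercept ≠ 0) is exactly the case the conclusion describes: usable across scanners, but only after linear calibration.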


Subject(s)
Bone and Bones/diagnostic imaging, X-Ray Microtomography/methods, Adult, Aged, Ankle/diagnostic imaging, Female, Humans, Male, Reproducibility of Results
19.
Phys Med Biol ; 61(18): N478-N496, 2016 09 21.
Article in English | MEDLINE | ID: mdl-27541945

ABSTRACT

Osteoporosis, clinically defined by low bone mineral density, is associated with an increased risk of fractures. Increasing evidence suggests that trabecular bone (TB) micro-architecture is an important determinant of bone strength and fracture risk. We present an improved volumetric topological analysis (VTA) algorithm based on fuzzy skeletonization, present results of its application to in vivo MR imaging, and compare its performance with digital topological analysis. The new VTA method eliminates data loss in the binarization step and yields accurate and robust measures of local plate width for individual trabeculae, which allows classification of TB structures on the continuum between perfect plates and perfect rods. The repeat-scan reproducibility of the method was evaluated on in vivo MRI of the distal femur and distal radius, and high intra-class correlation coefficients between 0.93 and 0.97 were observed. The method's ability to detect treatment effects on TB micro-architecture was examined in a 2-year testosterone study on hypogonadal men. Experimental results showed that average plate width and plate-to-rod ratio improved significantly after 6 months, and the improvement continued at 12 and 24 months. The bone density of plate-like trabeculae was found to increase by 6.5% (p = 0.06), 7.2% (p = 0.07), and 16.2% (p = 0.003) at 6, 12, and 24 months, respectively, while the density of rod-like trabeculae did not change significantly, even at 24 months. A comparative study showed that VTA has an enhanced ability to detect treatment effects on TB micro-architecture, compared with the conventional method of digital topological analysis for plate/rod characterization, in terms of both percent change and effect size.
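Treatment effects of the kind reported here are typically summarized as percent change from baseline and an effect size. The sketch below is a hedged illustration assuming a paired (within-subject) longitudinal design and a Cohen's-d-style effect size on the paired differences; the paper's exact statistical procedure may differ.

```python
import numpy as np

def percent_change(baseline, followup):
    """Group-level percent change from baseline for paired measurements."""
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    return 100.0 * (followup - baseline).mean() / baseline.mean()

def cohens_d_paired(baseline, followup):
    """Effect size for a paired comparison: mean of the within-subject
    differences divided by their sample standard deviation."""
    diff = np.asarray(followup, dtype=float) - np.asarray(baseline, dtype=float)
    return diff.mean() / diff.std(ddof=1)
```

Comparing two methods (e.g., VTA vs. digital topological analysis) on the same subjects then amounts to comparing these two summaries per measure.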


Subject(s)
Algorithms, Cancellous Bone/pathology, Eunuchism/pathology, Magnetic Resonance Imaging/methods, Osteoporosis/pathology, Radiographic Image Interpretation, Computer-Assisted/methods, Adolescent, Adult, Aged, Aged, 80 and over, Bone Density, Computer Simulation, Female, Follow-Up Studies, Humans, Longitudinal Studies, Male, Middle Aged, Reproducibility of Results, Young Adult
20.
Med Phys ; 43(5): 2598, 2016 May.
Article in English | MEDLINE | ID: mdl-27147369

ABSTRACT

PURPOSE: A test object (phantom) is an important tool to evaluate the comparability and stability of CT scanners used in multicenter and longitudinal studies. However, there are many sources of error that can interfere with test-object-derived quantitative measurements. Here the authors investigated three major possible sources of operator error in the use of a test object employed to assess pulmonary density-related as well as airway-related metrics. METHODS: Two kinds of experiments were carried out to assess measurement variability caused by imperfect scanning conditions. The first consisted of three experiments. A COPDGene test object was scanned using a dual-source multidetector computed tomographic scanner (Siemens Somatom Flash) with the Subpopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS) inspiration protocol (120 kV, 110 mAs, pitch = 1, slice thickness = 0.75 mm, slice spacing = 0.5 mm) to evaluate the effects of tilt angle, water bottle offset, and air bubble size. After analysis of these results, a guideline was established in order to achieve more reliable results for this test object. Next, the authors applied the above findings to 2272 test object scans collected over 4 years as part of the SPIROMICS study. The authors compared the data consistency before and after excluding the scans that failed to pass the guideline. RESULTS: This study established the following limits for the test object: tilt index ≤ 0.3, water bottle offset limits of [-6.6 mm, 7.4 mm], and no air bubble within the water bottle, where the tilt index is a measure incorporating the two tilt angles about the x- and y-axes. With 95% confidence, the density measurement variation for all five materials of interest in the test object (acrylic, water, lung, inside air, and outside air) resulting from all three error sources can be limited to ±0.9 HU (summed in quadrature) when all the requirements are satisfied.
The authors applied these criteria to 2272 SPIROMICS scans and demonstrated a significant reduction in measurement variation associated with the test object. CONCLUSIONS: Three operator errors were identified which significantly affected the usability of the acquired scan images of the test object used for monitoring scanner stability in a multicenter study. The authors' results demonstrated that, at the time of test-object scan receipt at a radiology core laboratory, quality control procedures should include an assessment of tilt index, water bottle offset, and air bubble size within the water bottle. Application of this methodology to 2272 SPIROMICS scans indicated that the findings were not limited to the scanner make and model used for the initial test but were generalizable to both Siemens and GE scanners, which comprise the scanner types used within the SPIROMICS study.
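The three acceptance criteria can be expressed as a simple QC check per scan. Note that the abstract does not give the tilt-index formula beyond saying it combines the two tilt angles about the x- and y-axes; the root-sum-square form below is purely an assumption for illustration, as are the function and variable names.

```python
import math

# Limits reported in the study.
TILT_INDEX_MAX = 0.3
OFFSET_LIMITS_MM = (-6.6, 7.4)

def tilt_index(angle_x_deg, angle_y_deg):
    """Assumed tilt index: root-sum-square of the two tilt angles.
    The paper only states that the index incorporates both angles;
    this specific formula is an illustrative assumption."""
    return math.hypot(angle_x_deg, angle_y_deg)

def passes_qc(angle_x_deg, angle_y_deg, offset_mm, has_air_bubble):
    """Apply the three operator-error criteria to one test-object scan."""
    return (tilt_index(angle_x_deg, angle_y_deg) <= TILT_INDEX_MAX
            and OFFSET_LIMITS_MM[0] <= offset_mm <= OFFSET_LIMITS_MM[1]
            and not has_air_bubble)
```

A core laboratory could run such a check at scan receipt and exclude failing scans before computing density metrics, mirroring the exclusion step described in the abstract.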


Subject(s)
Multidetector Computed Tomography/methods, Pattern Recognition, Automated/methods, Air, Data Interpretation, Statistical, Longitudinal Studies, Models, Anatomic, Multidetector Computed Tomography/instrumentation, Phantoms, Imaging, Quality Control, Water