Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38879844

ABSTRACT

PURPOSE: MRI-derived brain volume loss (BVL) is widely used as a marker of neurodegeneration. SIENA is the state of the art for BVL measurement but is limited by its long computation time. Here we propose "BrainLossNet", a convolutional neural network (CNN)-based method for BVL estimation. METHODS: BrainLossNet uses CNN-based non-linear registration of baseline (BL)/follow-up (FU) 3D-T1w-MRI pairs. BVL is computed by non-linear registration of brain parenchyma masks segmented in the BL/FU scans. The BVL estimate is corrected for image distortions using the apparent volume change of the total intracranial volume. BrainLossNet was trained on 1525 BL/FU pairs from 83 scanners. Agreement between BrainLossNet and SIENA was assessed in 225 BL/FU pairs from 94 MS patients acquired with a single scanner and in 268 BL/FU pairs from 52 scanners acquired for various indications. Robustness to short-term variability of 3D-T1w-MRI was compared in 354 BL/FU pairs from a single healthy man, each pair acquired in the same session without repositioning, on 116 scanners (Frequently-Traveling-Human-Phantom dataset, FTHP). RESULTS: Processing time of BrainLossNet was 2-3 min. The median [interquartile range] of the SIENA-BrainLossNet BVL difference was 0.10% [-0.18%, 0.35%] in the MS dataset and 0.08% [-0.14%, 0.28%] in the various-indications dataset. The distribution of apparent BVL in the FTHP dataset was narrower with BrainLossNet (p = 0.036; 95th percentile: 0.20% vs 0.32%). CONCLUSION: On average, BrainLossNet provides the same BVL estimates as SIENA, but it is significantly more robust, probably due to its built-in distortion correction. A processing time of 2-3 min makes BrainLossNet suitable for clinical routine. This can pave the way for widespread clinical use of BVL estimation from intra-scanner BL/FU pairs.
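The distortion-correction step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not BrainLossNet's actual code; all function and variable names are assumptions. The idea: the apparent change of the total intracranial volume (TIV), which should be zero between scans of an adult, serves as a distortion estimate and is subtracted from the raw brain parenchyma volume change.

```python
# Hypothetical sketch of distortion-corrected BVL estimation as described
# in the abstract. Names and the toy volumes below are assumptions.

def percent_change(baseline: float, followup: float) -> float:
    """Apparent volume change in percent; negative = volume loss."""
    return 100.0 * (followup - baseline) / baseline

def corrected_bvl(brain_bl: float, brain_fu: float,
                  tiv_bl: float, tiv_fu: float) -> float:
    """Brain volume change corrected for scanner distortions.

    The apparent TIV change is used as a distortion estimate because the
    skull-bounded intracranial volume is biologically stable in adults.
    """
    raw = percent_change(brain_bl, brain_fu)     # atrophy + distortion
    distortion = percent_change(tiv_bl, tiv_fu)  # should be ~0 without distortion
    return raw - distortion

# Example: 1.0% apparent brain shrinkage, 0.3% apparent TIV shrinkage
# -> 0.7% loss attributed to atrophy (returned as -0.7).
bvl = corrected_bvl(brain_bl=1200.0, brain_fu=1188.0,
                    tiv_bl=1500.0, tiv_fu=1495.5)
```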

2.
J Nucl Med ; 65(3): 446-452, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38238040

ABSTRACT

This study evaluated the potential to reduce the scan duration in dopamine transporter (DAT) SPECT when using a second-generation multiple-pinhole (MPH) collimator designed for brain SPECT with improved count sensitivity and improved spatial resolution compared with parallel-hole and fanbeam collimators. Methods: The retrospective study included 640 consecutive clinical DAT SPECT studies that had been acquired in list mode with a triple-head SPECT system with MPH collimators and a 30-min net scan duration after injection of 181 ± 10 MBq of [123I]FP-CIT. Raw data corresponding to scan durations of 20, 15, 12, 8, 6, and 4 min were obtained by restricting the events to a proportionally reduced time interval of the list-mode data for each projection angle. SPECT images were reconstructed iteratively with the same parameter settings irrespective of scan duration. The resulting 5,120 SPECT images were assessed for a neurodegeneration-typical reduction in striatal signal by visual assessment, conventional specific binding ratio analysis, and a deep convolutional neural network trained on 30-min scans. Results: Regarding visual interpretation, image quality was considered diagnostic for all 640 patients down to a 12-min scan duration. The proportion of discrepant visual interpretations between 30 and 12 min (1.2%) was not larger than the proportion of discrepant visual interpretations between 2 reading sessions of the same reader at a 30-min scan duration (1.5%). Agreement with the putamen specific binding ratio from the 30-min images was better than expected for 5% test-retest variability down to a 10-min scan duration. A relevant change in convolutional neural network-based automatic classification was observed at a 6-min scan duration or less. Conclusion: The triple-head SPECT system with MPH collimators allows reliable DAT SPECT after administration of about 180 MBq of [123I]FP-CIT with a 12-min scan duration.
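The shorter scan durations above were simulated by restricting the list-mode events to a proportionally reduced time interval for each projection angle. A minimal sketch of that down-sampling idea follows; the data layout (per-angle lists of event timestamps) and all names are assumptions, not the study's actual processing code.

```python
# Sketch of list-mode down-sampling: for each projection angle, keep only
# events whose timestamp falls within the first fraction of the dwell time
# corresponding to the reduced scan duration. Data layout is an assumption.

def truncate_listmode(events_per_angle, full_duration_min, target_duration_min):
    """Restrict events to a proportionally reduced time interval per angle.

    events_per_angle: list of lists of event timestamps (minutes, relative
    to the start of each projection's dwell time).
    """
    fraction = target_duration_min / full_duration_min
    truncated = []
    for events in events_per_angle:
        if not events:
            truncated.append([])
            continue
        dwell = max(events)          # approximate dwell time from last event
        cutoff = fraction * dwell
        truncated.append([t for t in events if t <= cutoff])
    return truncated

# Example: simulate a 12-min scan from 30-min list-mode data (fraction 0.4).
angles = [[0.1, 0.5, 1.0, 1.5, 2.0], [0.2, 0.8, 1.6]]
short = truncate_listmode(angles, full_duration_min=30, target_duration_min=12)
```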


Subject(s)
Dopamine Plasma Membrane Transport Proteins; Tomography, Emission-Computed, Single-Photon; Humans; Dopamine Plasma Membrane Transport Proteins/metabolism; Retrospective Studies; Tomography, Emission-Computed, Single-Photon/methods; Tropanes
3.
Comput Biol Med ; 163: 107096, 2023 09.
Article in English | MEDLINE | ID: mdl-37302375

ABSTRACT

Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models in classification or segmentation are developed to provide only binary answers; however, quantifying the uncertainty of the models can play a critical role, for example, in active learning or human-machine interaction. Uncertainty quantification is especially difficult when using deep learning-based models, which are the state of the art in many imaging applications. Current uncertainty quantification approaches do not scale well in high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as dropout during inference or training ensembles of identical models with different random seeds, to obtain a posterior distribution. In this paper, we present the following contributions. First, we show that the classical approaches fail to approximate the classification probability. Second, we propose a scalable and intuitive framework for uncertainty quantification in medical image segmentation that yields measurements approximating the classification probability. Third, we suggest the use of k-fold cross-validation to overcome the need for held-out calibration data. Lastly, we motivate the adoption of our method in active learning, creating pseudo-labels to learn from unlabeled images, and in human-machine collaboration.
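The k-fold idea mentioned in the abstract can be sketched as follows: every sample receives a prediction from a model that never saw it during training, so the out-of-fold predictions can serve as calibration data without a separate held-out set. This is a generic illustration under assumed interfaces, not the paper's framework.

```python
# Minimal sketch of out-of-fold probability estimation via k-fold
# cross-validation. The model interface and toy data are assumptions.
import numpy as np

def out_of_fold_probs(X, y, fit_predict_proba, k=5, seed=0):
    """Collect out-of-fold probability estimates with k-fold cross-validation.

    fit_predict_proba(X_train, y_train, X_val) must return per-sample
    foreground probabilities for X_val.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    probs = np.empty(len(X))
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Each sample is predicted by a model trained without it.
        probs[val] = fit_predict_proba(X[train], y[train], X[val])
    return probs

# Toy usage with a trivial "model" that predicts the training-set mean label:
X = np.arange(10, dtype=float).reshape(-1, 1)
y = (np.arange(10) % 2).astype(float)
p = out_of_fold_probs(X, y, lambda Xt, yt, Xv: np.full(len(Xv), yt.mean()))
```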


Subject(s)
Deep Learning; Humans; Uncertainty; Probability; Calibration; Image Processing, Computer-Assisted
4.
Diagnostics (Basel) ; 13(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37685352

ABSTRACT

Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden on health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that remain siloed from the clinical settings where their use would deliver the real benefit. Significant effort is still needed to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline, based entirely on open-source software and free of cost, that bridges this gap. It simplifies the integration of tools and models developed within the AI community into the clinical research setting and provides an accessible platform with visualisation applications that allow end-users such as radiologists to view and interact with the outcome of these AI tools.

5.
Eur Radiol Exp ; 7(1): 77, 2023 12 07.
Article in English | MEDLINE | ID: mdl-38057616

ABSTRACT

PURPOSE: To determine if pelvic/ovarian and omental lesions of ovarian cancer can be reliably segmented on computed tomography (CT) using fully automated deep learning-based methods. METHODS: A deep learning model for the two most common disease sites of high-grade serous ovarian cancer lesions (pelvis/ovaries and omentum) was developed and compared against the well-established "no-new-Net" framework and unrevised trainee radiologist segmentations. A total of 451 CT scans collected from four different institutions were used for training (n = 276), evaluation (n = 104) and testing (n = 71) of the methods. Performance was evaluated using the Dice similarity coefficient (DSC) and compared using a Wilcoxon test. RESULTS: Our model outperformed no-new-Net for the pelvic/ovarian lesions in cross-validation and on the evaluation and test sets by a significant margin (p-values of 4 × 10⁻⁷, 3 × 10⁻⁴, and 4 × 10⁻², respectively), and for the omental lesions on the evaluation set (p = 1 × 10⁻³). Our model did not perform significantly differently from a trainee radiologist in segmenting pelvic/ovarian lesions (p = 0.371). On an independent test set, the model achieved a DSC of 71 ± 20 (mean ± standard deviation) for pelvic/ovarian lesions and 61 ± 24 for omental lesions. CONCLUSION: Automated ovarian cancer segmentation on CT scans using deep neural networks is feasible and achieves performance close to that of a trainee-level radiologist for pelvic/ovarian lesions. RELEVANCE STATEMENT: Automated segmentation of ovarian cancer may be used by clinicians for CT-based volumetric assessments and by researchers for building complex analysis pipelines. KEY POINTS: • The first automated approach for pelvic/ovarian and omental ovarian cancer lesion segmentation on CT images has been presented.
• Careful hyperparameter tuning can provide models significantly outperforming strong state-of-the-art baselines.
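The Dice similarity coefficient used for evaluation above is a standard overlap measure between binary masks; a short reference implementation of its textbook definition (not code from the study) is:

```python
# DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks;
# 1.0 = perfect overlap, 0.0 = no overlap.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Example: 2 overlapping voxels out of 3 + 3 foreground voxels.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(a, b)  # 2*2 / (3+3) = 0.666...
```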


Subject(s)
Deep Learning; Ovarian Cysts; Ovarian Neoplasms; Humans; Female; Ovarian Neoplasms/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed
6.
Front Oncol ; 12: 868265, 2022.
Article in English | MEDLINE | ID: mdl-35785153

ABSTRACT

Background: Pathological response to neoadjuvant treatment for patients with high-grade serous ovarian carcinoma (HGSOC) is assessed using the chemotherapy response score (CRS) for omental tumor deposits. The main limitation of the CRS is that it requires surgical sampling after initial neoadjuvant chemotherapy (NACT) treatment. Earlier and non-invasive response predictors could improve patient stratification. We developed computed tomography (CT) radiomic measures to predict neoadjuvant response before NACT, using the CRS as a gold standard. Methods: Omental CT-based radiomics models, yielding a simplified, fully interpretable radiomic signature, were developed using Elastic Net logistic regression and compared to predictions based on omental tumor volume alone. Models were developed on a single-institution cohort of neoadjuvant-treated HGSOC (n = 61; 41% complete response to NACT) and tested on an external test cohort (n = 48; 21% complete response). Results: The performance of the comprehensive radiomics models and the fully interpretable radiomics model was significantly higher than that of volume-based predictions of response in both the discovery and external test sets when assessed using the G-mean (geometric mean of sensitivity and specificity) and the negative predictive value (NPV), indicating high generalizability and reliability in identifying non-responders when using radiomics. The performance of the fully interpretable model was similar to that of the comprehensive radiomics models. Conclusions: CT-based radiomics allows response to NACT to be predicted in a timely manner and without the need for abdominal surgery. Adding pre-NACT radiomics to volumetry improved model performance for predicting response to NACT in HGSOC and was robust to external testing. A radiomic signature based on five robust predictive features provides improved clinical interpretability and may thus facilitate clinical acceptance and application.
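The G-mean used for evaluation above is the geometric mean of sensitivity and specificity; a short reference implementation of the standard definition (not the study's code, and the confusion-matrix counts below are illustrative) is:

```python
# G-mean = sqrt(sensitivity * specificity); it is high only when the model
# performs well on both the responder and non-responder classes.
import math

def g_mean(tp: int, fn: int, tn: int, fp: int) -> float:
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate
    return math.sqrt(sensitivity * specificity)

# Example: 8/10 responders and 18/20 non-responders correctly identified.
score = g_mean(tp=8, fn=2, tn=18, fp=2)  # sqrt(0.8 * 0.9) ≈ 0.849
```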
