Results 1 - 20 of 64
1.
Comput Biol Med ; 182: 109102, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39255659

ABSTRACT

Cell imaging assays utilising fluorescence stains are essential for observing sub-cellular organelles and their responses to perturbations. Immunofluorescent staining is routine in labs; however, recent innovations in generative AI are challenging the need for wet-lab immunofluorescence (IF) staining, especially where the availability and cost of specific fluorescence dyes are a problem. Furthermore, the staining process takes time, introduces inter- and intra-technician variability, and hinders downstream image and data analysis as well as the reusability of image data for other projects. Recent studies have demonstrated the generation of synthetic IF images from brightfield (BF) images using generative AI algorithms. In this study, we therefore benchmark and compare five models from three types of IF generation backbones (CNN, GAN, and diffusion models) using a publicly available dataset. This paper not only serves as a comparative study to determine the best-performing model but also proposes a comprehensive analysis pipeline for evaluating the efficacy of generators in IF image synthesis. We highlight the potential of deep learning-based generators for IF image synthesis, while also discussing potential issues and future research directions. Although generative AI shows promise in simplifying cell phenotyping using only BF images, further research and validation are needed to address the key challenges of model generalisability, batch effects, feature relevance and computational costs.
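Benchmarking generated IF images against wet-lab ground truth typically relies on pixel-level fidelity metrics. A minimal NumPy sketch of two such metrics (the function names and toy images are illustrative, not taken from the paper's pipeline):

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth and a generated image."""
    mse = np.mean((reference - generated) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def pearson_corr(reference: np.ndarray, generated: np.ndarray) -> float:
    """Pixel-wise Pearson correlation, often reported for IF intensity fidelity."""
    r = reference.ravel() - reference.mean()
    g = generated.ravel() - generated.mean()
    return float(np.sum(r * g) / (np.linalg.norm(r) * np.linalg.norm(g)))

# Toy ground truth and a noisy "generated" version of it.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
pred = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1)
print(psnr(truth, pred), pearson_corr(truth, pred))
```

In practice such metrics are complemented by perceptual and feature-relevance measures, since pixel fidelity alone does not guarantee biological usefulness.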

2.
Med Image Anal ; 99: 103334, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39255733

ABSTRACT

Deep learning has been extensively applied in medical image reconstruction, where Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) represent the predominant paradigms, each possessing distinct advantages and inherent limitations: CNNs exhibit linear complexity with local sensitivity, whereas ViTs demonstrate quadratic complexity with global sensitivity. The emerging Mamba architecture has shown superiority in learning visual representations, combining the advantages of linear scalability and global sensitivity. In this study, we introduce MambaMIR, an Arbitrary-Masked Mamba-based model with wavelet decomposition for joint medical image reconstruction and uncertainty estimation. A novel Arbitrary Scan Masking (ASM) mechanism "masks out" redundant information to introduce randomness for uncertainty estimation. Compared to the commonly used Monte Carlo (MC) dropout, our proposed MC-ASM provides an uncertainty map without the need for hyperparameter tuning and mitigates the performance drop typically observed when applying dropout to low-level tasks. For texture preservation and better perceptual quality, we incorporate the wavelet transform into MambaMIR and explore a variant based on the Generative Adversarial Network, namely MambaMIR-GAN. Comprehensive experiments on multiple representative medical image reconstruction tasks demonstrate that the proposed MambaMIR and MambaMIR-GAN outperform baseline and state-of-the-art methods, with MambaMIR achieving the best reconstruction fidelity and MambaMIR-GAN the best perceptual quality. In addition, our MC-ASM provides uncertainty maps as an additional tool for clinicians, while mitigating the typical performance drop caused by the commonly used dropout.
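The MC-ASM idea, repeated stochastic masking as a source of randomness for uncertainty maps, can be illustrated generically. The sketch below applies random input masks to a toy reconstructor and uses the sample standard deviation as an uncertainty map; the paper's actual mechanism masks Mamba scan orders, which is not reproduced here:

```python
import numpy as np

def mc_mask_uncertainty(reconstruct, measurement, n_samples=20, keep_prob=0.9, seed=0):
    """Monte Carlo uncertainty via random input masking: repeatedly mask the
    measurement, reconstruct, and take the pixel-wise mean/std over samples.
    Illustrative stand-in for Arbitrary Scan Masking, which masks scan orders."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        mask = rng.random(measurement.shape) < keep_prob
        samples.append(reconstruct(measurement * mask))
    stack = np.stack(samples)
    return stack.mean(axis=0), stack.std(axis=0)

def toy_reconstruct(x):
    # Hypothetical "reconstructor": light neighbour smoothing of the input.
    return 0.5 * x + 0.25 * (np.roll(x, 1) + np.roll(x, -1))

y = np.sin(np.linspace(0, np.pi, 128))
mean_img, unc_map = mc_mask_uncertainty(toy_reconstruct, y)
```

High values in `unc_map` flag regions where the output depends strongly on which inputs were masked, which is the information the uncertainty map conveys to clinicians.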

3.
Eur J Radiol Open ; 13: 100594, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39280120

ABSTRACT

Purpose: To assess radiomics and deep learning (DL) methods in identifying symptomatic Carotid Artery Disease (CAD) from carotid CT angiography (CTA) images, and to compare the performance of these novel methods to the conventional calcium score. Methods: CTA images from symptomatic patients (ischaemic stroke/transient ischaemic attack within the last 3 months) and asymptomatic patients were analysed. Carotid arteries were classified into culprit, non-culprit and asymptomatic. The calcium score was assessed using the Agatston method. 93 radiomic features were extracted from regions of interest drawn on 14 consecutive CTA slices. For DL, convolutional neural networks (CNNs) with and without transfer learning were trained directly on CTA slices. Predictive performance was assessed using 5-fold cross-validated AUC scores. SHAP and Grad-CAM algorithms were used for explainability. Results: 132 carotid arteries were analysed (41 culprit, 41 non-culprit, and 50 asymptomatic). For asymptomatic vs symptomatic arteries, radiomics attained a mean AUC of 0.96 (±0.02), followed by DL at 0.86 (±0.06) and calcium at 0.79 (±0.08). For culprit vs non-culprit arteries, radiomics achieved a mean AUC of 0.75 (±0.09), followed by DL at 0.67 (±0.10) and calcium at 0.60 (±0.02). For multi-class classification, the mean AUCs were 0.95 (±0.07), 0.79 (±0.05), and 0.71 (±0.07) for radiomics, DL and calcium, respectively. Explainability revealed consistent patterns in the most important radiomic features. Conclusions: Our study highlights the potential of novel image analysis techniques to extract quantitative information beyond calcification for the identification of CAD. Though further work is required, the transition of these novel techniques into clinical practice may eventually facilitate better stroke risk stratification.
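The 5-fold cross-validated AUC evaluation described in the Methods can be sketched with a rank-based AUC and random folds. This is a simplification (the study's actual folds, features and models are not reproduced):

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic (ties handled with midranks)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):               # midranks for tied scores
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def cross_val_auc(scores, labels, k=5, seed=0):
    """Mean/std AUC over k random folds, mirroring the 5-fold setup above."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(scores))
    aucs = [auc(scores[f], labels[f]) for f in np.array_split(idx, k)]
    return float(np.mean(aucs)), float(np.std(aucs))

# Toy perfectly separable example.
scores = np.concatenate([np.zeros(50), np.ones(50)])
labels = np.concatenate([np.zeros(50, int), np.ones(50, int)])
mean_auc, std_auc = cross_val_auc(scores, labels)
print(mean_auc, std_auc)
```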

4.
Proc Natl Acad Sci U S A ; 121(33): e2318951121, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39121160

ABSTRACT

An increasingly common viewpoint is that protein dynamics datasets reside in a nonlinear subspace of low conformational energy. Ideal data analysis tools should therefore account for such nonlinear geometry. The Riemannian geometry setting can be suitable for a variety of reasons. First, it comes with a rich mathematical structure to account for a wide range of geometries that can be modeled after an energy landscape. Second, many standard data analysis tools developed for data in Euclidean space can be generalized to Riemannian manifolds. In the context of protein dynamics, a conceptual challenge comes from the lack of guidelines for constructing a smooth Riemannian structure based on an energy landscape. In addition, computational feasibility in computing geodesics and related mappings poses a major challenge. This work considers these challenges. The first part of the paper develops a local approximation technique for computing geodesics and related mappings on Riemannian manifolds in a computationally feasible manner. The second part constructs a smooth manifold and a Riemannian structure that is based on an energy landscape for protein conformations. The resulting Riemannian geometry is tested on several data analysis tasks relevant for protein dynamics data. In particular, the geodesics with given start- and end-points approximately recover corresponding molecular dynamics trajectories for proteins that undergo relatively ordered transitions with medium-sized deformations. The Riemannian protein geometry also gives physically realistic summary statistics and retrieves the underlying dimension even for large-sized deformations within seconds on a laptop.


Subject(s)
Protein Conformation, Proteins, Proteins/chemistry, Algorithms, Molecular Dynamics Simulation
5.
Patterns (N Y) ; 5(6): 101006, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-39005485

ABSTRACT

For healthcare datasets, it is often impossible to combine data samples from multiple sites due to ethical, privacy, or logistical concerns. Federated learning allows for the utilization of powerful machine learning algorithms without requiring the pooling of data. Healthcare data have many simultaneous challenges, such as highly siloed data, class imbalance, missing data, distribution shifts, and non-standardized variables, that require new methodologies to address. Federated learning adds significant methodological complexity to conventional centralized machine learning, requiring distributed optimization, communication between nodes, aggregation of models, and redistribution of models. In this systematic review, we consider all papers on Scopus published between January 2015 and February 2023 that describe new federated learning methodologies for addressing challenges with healthcare data. We reviewed 89 papers meeting these criteria. Significant systemic issues were identified throughout the literature, compromising many methodologies reviewed. We give detailed recommendations to help improve methodology development for federated learning in healthcare.
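The aggregation step that the reviewed federated methodologies build on is commonly FedAvg: a weighted average of client model parameters. A minimal sketch with toy one-layer "models" (real systems add distributed optimization, communication and secure aggregation, as the review discusses):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate per-client model parameters weighted
    by each client's sample count (the canonical FedAvg aggregation step)."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    # client_weights: list (per client) of lists of parameter arrays (per layer).
    return [
        sum(c * layer for c, layer in zip(coeffs, layers))
        for layers in zip(*client_weights)
    ]

# Two clients with a one-layer "model"; the aggregate is their weighted mean.
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([3.0, 3.0])]
global_w = fed_avg([w_a, w_b], client_sizes=[10, 30])
print(global_w[0])  # pulled toward the larger client
```

Weighting by sample count is exactly what makes class imbalance and distribution shift across silos, highlighted above, methodologically delicate.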

6.
Comput Med Imaging Graph ; 116: 102420, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39079409

ABSTRACT

Glioblastoma, an aggressive brain tumor prevalent in adults, exhibits heterogeneity in its microstructures and vascular patterns. The delineation of its subregions could facilitate the development of region-targeted therapies. However, current unsupervised learning techniques for this task face reliability challenges due to fluctuations of clustering algorithms, particularly when processing data from diverse patient cohorts. Furthermore, stable clustering results do not guarantee clinical meaningfulness. To establish the clinical relevance of these subregions, we perform survival prediction using radiomic features extracted from them. Balancing outcome stability against clinical relevance thus presents a significant challenge, further exacerbated by the extensive time required for hyper-parameter tuning. In this study, we introduce a multi-objective Bayesian optimization (MOBO) framework, which leverages a Feature-enhanced Auto-Encoder (FAE) and customized losses to assess both the reproducibility of clustering algorithms and the clinical relevance of their outcomes. Specifically, we embed both of these processes within the MOBO framework, modeling each with a distinct Gaussian Process (GP). The proposed framework automatically balances the trade-off between the two criteria by employing bespoke stability and clinical-significance losses. Our approach efficiently optimizes all hyper-parameters, including the FAE architecture and clustering parameters, within a few steps. This not only accelerates the process but also consistently yields robust MRI subregion delineations and provides survival predictions with strong statistical validation.


Subject(s)
Algorithms, Bayes Theorem, Brain Neoplasms, Glioblastoma, Humans, Glioblastoma/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/mortality, Magnetic Resonance Imaging/methods, Reproducibility of Results, Cluster Analysis, Survival Analysis, Image Interpretation, Computer-Assisted/methods
7.
IMA J Appl Math ; 89(1): 143-174, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38933736

ABSTRACT

Partial differential equations (PDEs) play a fundamental role in the mathematical modelling of many processes and systems in the physical, biological and other sciences. To simulate such processes and systems, the solutions of PDEs often need to be approximated numerically; the finite element method, for instance, is a standard methodology for doing so. The recent success of deep neural networks at various approximation tasks has motivated their use in the numerical solution of PDEs. These so-called physics-informed neural networks and their variants have been shown to successfully approximate a large range of PDEs. So far, however, physics-informed neural networks and the finite element method have mainly been studied in isolation from each other. In this work, we compare the two methodologies in a systematic computational study: we employ both methods to numerically solve various linear and nonlinear PDEs (Poisson in 1D, 2D and 3D; Allen-Cahn in 1D; semilinear Schrödinger in 1D and 2D) and compare computational costs and approximation accuracies. In terms of solution time and accuracy, physics-informed neural networks were not able to outperform the finite element method in our study, although in some experiments they were faster at evaluating the solved PDE.
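As a concrete example of the kind of baseline compared in the study, the 1D Poisson problem can be solved with linear finite elements on a uniform mesh. A minimal sketch (lumped load vector; the study's actual FEM setup is more general):

```python
import numpy as np

def fem_poisson_1d(f, n=100):
    """Linear finite elements for -u'' = f on (0, 1) with u(0) = u(1) = 0
    on a uniform mesh, using a lumped (one-point) load vector."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix on interior nodes: (1/h) * tridiag(-1, 2, -1).
    main = 2.0 / h * np.ones(n - 1)
    off = -1.0 / h * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    b = h * f(x[1:-1])          # lumped quadrature of the load
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

# Manufactured solution u = sin(pi x), so f = pi^2 sin(pi x).
x, u = fem_poisson_1d(lambda t: np.pi**2 * np.sin(np.pi * t), n=200)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error: {err:.2e}")  # second-order accurate in h
```

A PINN solves the same problem by minimising the PDE residual at sampled points; the comparison above is precisely between these two routes to the same solution.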

8.
Sci Rep ; 14(1): 5658, 2024 03 07.
Article in English | MEDLINE | ID: mdl-38454072

ABSTRACT

In vivo cardiac diffusion tensor imaging (cDTI) is a promising Magnetic Resonance Imaging (MRI) technique for evaluating the microstructure of myocardial tissue in living hearts, providing insights into cardiac function and enabling the development of innovative therapeutic strategies. However, integrating cDTI into routine clinical practice remains challenging due to the technical obstacles involved in the acquisition, such as low signal-to-noise ratio and prolonged scanning times. In this study, we investigated and implemented three different types of deep learning-based MRI reconstruction models for cDTI reconstruction. We evaluated the performance of these models in terms of reconstruction quality, diffusion tensor parameters and computational cost. Our results indicate that the models discussed in this study can be applied for clinical use at acceleration factors (AF) of ×2 and ×4, with the D5C5 model showing superior reconstruction fidelity and the SwinMR model providing higher perceptual scores. There is no statistical difference from the reference for any diffusion tensor parameter at AF ×2, or for most parameters at AF ×4, and the quality of most diffusion tensor parameter maps is visually acceptable. SwinMR is recommended as the optimal approach for reconstruction at AF ×2 and ×4. However, we believe that these models are not yet ready for clinical use at higher AFs: at AF ×8, the performance of all models remains limited, with only half of the diffusion tensor parameters recovered to a level with no statistical difference from the reference, and some parameter maps even provide incorrect and misleading information.


Subject(s)
Deep Learning, Diffusion Tensor Imaging, Diffusion Tensor Imaging/methods, Algorithms, Magnetic Resonance Imaging, Magnetic Resonance Spectroscopy, Diffusion Magnetic Resonance Imaging/methods
9.
Comput Methods Programs Biomed ; 246: 108057, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38335865

ABSTRACT

BACKGROUND AND OBJECTIVE: 4D flow magnetic resonance imaging provides time-resolved blood flow velocity measurements, but suffers from limitations in spatio-temporal resolution and noise. In this study, we investigated the use of sinusoidal representation networks (SIRENs) to improve denoising and super-resolution of velocity fields measured by 4D flow MRI in the thoracic aorta. METHODS: Efficient training of SIRENs in 4D was achieved by sampling voxel coordinates and enforcing the no-slip condition at the vessel wall. A set of synthetic measurements were generated from computational fluid dynamics simulations, reproducing different noise levels. The influence of SIREN architecture was systematically investigated, and the performance of our method was compared to existing approaches for 4D flow denoising and super-resolution. RESULTS: Compared to existing techniques, a SIREN with 300 neurons per layer and 20 layers achieved lower errors (up to 50% lower vector normalized root mean square error, 42% lower magnitude normalized root mean square error, and 15% lower direction error) in velocity and wall shear stress fields. Applied to real 4D flow velocity measurements in a patient-specific aortic aneurysm, our method produced denoised and super-resolved velocity fields while maintaining accurate macroscopic flow measurements. CONCLUSIONS: This study demonstrates the feasibility of using SIRENs for complex blood flow velocity representation from clinical 4D flow, with quick execution and straightforward implementation.
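A SIREN is a fully connected network with sine activations, sin(ω₀(Wx + b)), whose input here is a spatio-temporal coordinate and whose output is a velocity vector. A minimal NumPy forward pass (layer sizes and ω₀ are illustrative; the paper's training, no-slip enforcement and coordinate sampling are omitted):

```python
import numpy as np

def siren_forward(coords, weights, biases, w0=30.0):
    """Forward pass of a SIREN: dense layers with sine activations
    sin(w0 * (W x + b)); the final layer is linear."""
    h = coords
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.sin(w0 * (h @ W + b))
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
dims = [4, 64, 64, 3]  # (t, x, y, z) in -> velocity (u, v, w) out
weights = [rng.normal(0, 1 / np.sqrt(m), (m, n)) for m, n in zip(dims, dims[1:])]
biases = [np.zeros(n) for n in dims[1:]]
v = siren_forward(rng.random((10, 4)), weights, biases)
print(v.shape)
```

Because the network is a smooth function of continuous coordinates, it can be evaluated at arbitrary resolution, which is what enables the super-resolution described above.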


Subject(s)
Aorta, Thoracic, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Blood Flow Velocity/physiology, Aorta, Thoracic/diagnostic imaging, Aorta, Thoracic/physiology, Stress, Mechanical, Hydrodynamics, Imaging, Three-Dimensional/methods
10.
Eur Radiol Exp ; 7(1): 77, 2023 12 07.
Article in English | MEDLINE | ID: mdl-38057616

ABSTRACT

PURPOSE: To determine if pelvic/ovarian and omental lesions of ovarian cancer can be reliably segmented on computed tomography (CT) using fully automated deep learning-based methods. METHODS: A deep learning model for the two most common disease sites of high-grade serous ovarian cancer lesions (pelvis/ovaries and omentum) was developed and compared against the well-established "no-new-Net" framework and unrevised trainee radiologist segmentations. A total of 451 CT scans collected from four different institutions were used for training (n = 276), evaluation (n = 104) and testing (n = 71) of the methods. The performance was evaluated using the Dice similarity coefficient (DSC) and compared using a Wilcoxon test. RESULTS: Our model outperformed no-new-Net for the pelvic/ovarian lesions in cross-validation, on the evaluation set and on the test set by a significant margin (p values of 4 × 10⁻⁷, 3 × 10⁻⁴ and 4 × 10⁻², respectively), and for the omental lesions on the evaluation set (p = 1 × 10⁻³). Our model did not perform significantly differently from a trainee radiologist in segmenting pelvic/ovarian lesions (p = 0.371). On an independent test set, the model achieved a DSC of 71 ± 20 (mean ± standard deviation) for pelvic/ovarian and 61 ± 24 for omental lesions. CONCLUSION: Automated ovarian cancer segmentation on CT scans using deep neural networks is feasible and achieves performance close to a trainee-level radiologist for pelvic/ovarian lesions. RELEVANCE STATEMENT: Automated segmentation of ovarian cancer may be used by clinicians for CT-based volumetric assessments and by researchers for building complex analysis pipelines. KEY POINTS: • The first automated approach for pelvic/ovarian and omental ovarian cancer lesion segmentation on CT images has been presented. • Automated segmentation of ovarian cancer lesions can be comparable with manual segmentation by trainee radiologists.
• Careful hyperparameter tuning can provide models significantly outperforming strong state-of-the-art baselines.
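The Dice similarity coefficient used for evaluation above measures the overlap of two binary masks. A minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding against empty masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

# Two overlapping toy squares on an 8x8 grid.
pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1
truth = np.zeros((8, 8)); truth[3:7, 3:7] = 1
print(dice(pred, truth))
```

A DSC of 1 means perfect overlap and 0 means none; the 71 ± 20 figure above is this score expressed as a percentage.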


Subject(s)
Deep Learning, Ovarian Cysts, Ovarian Neoplasms, Humans, Female, Ovarian Neoplasms/diagnostic imaging, Neural Networks, Computer, Tomography, X-Ray Computed
11.
Article in English | MEDLINE | ID: mdl-37983143

ABSTRACT

Medical image segmentation is an important task in medical imaging, as it serves as the first step for clinical diagnosis and treatment planning. While major successes have been reported using supervised deep learning techniques, they assume a large and well-representative labeled set. This is a strong assumption in the medical domain, where annotations are expensive, time-consuming and prone to human bias. To address this problem, unsupervised segmentation techniques have been proposed in the literature. Yet, none of the existing unsupervised segmentation techniques reaches accuracies that come even near the state of the art of supervised segmentation methods. In this work, we present a novel optimization model, framed in a new convolutional neural network (CNN)-based contrastive registration architecture, for unsupervised medical image segmentation, called CLMorph. The core idea of our approach is to exploit image-level registration and feature-level contrastive learning to perform registration-based segmentation. First, we propose an architecture that captures the image-to-image transformation mapping via registration for unsupervised medical image segmentation. Second, we embed a contrastive learning mechanism in the registration architecture to enhance the discriminative capacity of the network at the feature level. We show that our proposed CLMorph technique mitigates the major drawbacks of existing unsupervised techniques. We demonstrate, through numerical and visual experiments, that our technique substantially outperforms the current state-of-the-art unsupervised segmentation methods on two major medical image datasets.

12.
Front Immunol ; 14: 1228812, 2023.
Article in English | MEDLINE | ID: mdl-37818359

ABSTRACT

Background: Pneumonitis is one of the most common adverse events induced by the use of immune checkpoint inhibitors (ICI), accounting for 20% of all ICI-associated deaths. Despite numerous efforts to identify risk factors and develop predictive models, there is no clinically deployed risk prediction model for patient risk stratification or for guiding subsequent monitoring. We believe this is due to systemically suboptimal study designs and methodologies in the literature, whose nature and prevalence have not been thoroughly examined in prior systematic reviews. Methods: The PubMed, medRxiv and bioRxiv databases were used to identify studies that aimed at risk factor discovery and/or risk prediction model development for ICI-induced pneumonitis (ICI pneumonitis). Studies were then analysed to identify common methodological pitfalls and their contribution to the risk of bias, assessed using the QUIPS and PROBAST tools. Results: There were 51 manuscripts eligible for the review, with Japan-based studies over-represented, making up nearly half (24/51) of all papers considered. Only 2/51 studies had a low overall risk of bias. Common bias-inducing practices included an unclear diagnostic method or potential misdiagnosis, lack of multiple testing correction, the use of univariate analysis for selecting features for multivariable analysis, discretization of continuous variables, and inappropriate handling of missing values. Results from the risk model development studies were also likely to have been overoptimistic due to a lack of holdout sets. Conclusions: Studies with a low risk of bias in their methodology are lacking in the existing literature. High-quality risk factor identification and risk model development studies are urgently required by the community to give the best chance of progressing to a clinically deployable risk prediction model. Recommendations and alternative approaches for reducing the risk of bias are also discussed to guide future studies.


Subject(s)
Pneumonia, Humans, Japan, Pneumonia/diagnosis, Pneumonia/chemically induced, Risk Factors, Systematic Reviews as Topic
13.
Commun Med (Lond) ; 3(1): 139, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803172

ABSTRACT

BACKGROUND: Classifying samples in incomplete datasets is a common aim for machine learning practitioners, but is non-trivial. Missing data is found in most real-world datasets and these missing values are typically imputed using established methods, followed by classification of the now complete samples. The focus of the machine learning researcher is to optimise the classifier's performance. METHODS: We utilise three simulated and three real-world clinical datasets with different feature types and missingness patterns. Initially, we evaluate how the downstream classifier performance depends on the choice of classifier and imputation methods. We employ ANOVA to quantitatively evaluate how the choice of missingness rate, imputation method, and classifier method influences the performance. Additionally, we compare commonly used methods for assessing imputation quality and introduce a class of discrepancy scores based on the sliced Wasserstein distance. We also assess the stability of the imputations and the interpretability of model built on the imputed data. RESULTS: The performance of the classifier is most affected by the percentage of missingness in the test data, with a considerable performance decline observed as the test missingness rate increases. We also show that the commonly used measures for assessing imputation quality tend to lead to imputed data which poorly matches the underlying data distribution, whereas our new class of discrepancy scores performs much better on this measure. Furthermore, we show that the interpretability of classifier models trained using poorly imputed data is compromised. CONCLUSIONS: It is imperative to consider the quality of the imputation when performing downstream classification as the effects on the classifier can be considerable.


Many artificial intelligence (AI) methods aim to classify samples of data into groups, e.g., patients with disease vs. those without. This often requires datasets to be complete, i.e., that all data has been collected for all samples. However, in clinical practice this is often not the case and some data can be missing. One solution is to 'complete' the dataset using a technique called imputation to replace those missing values. However, assessing how well the imputation method performs is challenging. In this work, we demonstrate why people should care about imputation, develop a new method for assessing imputation quality, and demonstrate that if we build AI models on poorly imputed data, the model can give different results to those we would hope for. Our findings may improve the utility and quality of AI models in the clinic.
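The paper's new discrepancy scores are based on the sliced Wasserstein distance, which compares two point sets through sorted one-dimensional random projections. A generic sketch of that distance (the published scores may differ in detail):

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=128, seed=0):
    """Sliced 1-Wasserstein distance between two equal-sized samples of
    d-dimensional points: project onto random unit directions, sort the
    projections, and average the absolute differences."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_projections, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    px, py = x @ dirs.T, y @ dirs.T          # shape (n, n_projections)
    px.sort(axis=0)
    py.sort(axis=0)
    return float(np.mean(np.abs(px - py)))

rng = np.random.default_rng(1)
a = rng.normal(0, 1, (500, 5))
b = rng.normal(0, 1, (500, 5))       # same distribution as a
shifted = a + 2.0                    # a distribution shift, as after bad imputation
d_ab = sliced_wasserstein(a, b)
d_shift = sliced_wasserstein(a, shifted)
print(d_ab, d_shift)
```

Comparing imputed data to held-out complete data with such a score penalises imputations that match marginal statistics but distort the joint distribution.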

14.
Diagnostics (Basel) ; 13(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37685352

ABSTRACT

Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden of health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that continue to be siloed from where the real benefit would be achieved with their usage. Significant effort still needs to be made to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline based entirely on open-source software and free of cost to bridge this gap, simplifying the integration of tools and models developed within the AI community into the clinical research setting, ensuring an accessible platform with visualisation applications that allow end-users such as radiologists to view and interact with the outcome of these AI tools.

15.
Herit Sci ; 11(1): 180, 2023.
Article in English | MEDLINE | ID: mdl-37638147

ABSTRACT

Medieval paper, a handmade product, is made with a mould which leaves an indelible imprint on the sheet of paper. This imprint includes chain lines, laid lines and watermarks, which are often visible on the sheet. Extracting these features allows the identification of the paper stock and gives information about the chronology, localisation and movement of manuscripts and people. Most computational work on feature extraction for paper analysis has so far focused on radiography or transmitted-light images. While these imaging methods provide clear visualisation of the features of interest, they are expensive and time-consuming to acquire, and not feasible for smaller institutions. However, reflected-light images of medieval paper manuscripts are abundant and cheaper to acquire. In this paper, we propose algorithms to detect and extract the laid and chain lines from reflected-light images. We tackle the main drawback of such images, namely the low contrast of chain and laid lines and the intensity jumps due to noise and degradation, by employing the spectral total variation decomposition, and we develop methods for subsequent chain and laid line extraction. Our results clearly demonstrate the feasibility of using reflected-light images in paper analysis. This work enables feature extraction for paper manuscripts that have otherwise not been analysed due to a lack of appropriate images. We also open the door for paper stock identification at scale.

16.
Sci Data ; 10(1): 493, 2023 07 27.
Article in English | MEDLINE | ID: mdl-37500661

ABSTRACT

The National COVID-19 Chest Imaging Database (NCCID) is a centralized UK database of thoracic imaging and corresponding clinical data. It is made available by the National Health Service Artificial Intelligence (NHS AI) Lab to support the development of machine learning tools focused on Coronavirus Disease 2019 (COVID-19). A bespoke cleaning pipeline for NCCID, developed by NHSX, was introduced in 2021. We present an extension of the original cleaning pipeline for the clinical data of the database, adjusted to correct additional systematic inconsistencies in the raw data such as patient sex, oxygen levels and date values. The most important changes are discussed in this paper, whilst the code and further explanations are made publicly available on GitLab. The suggested cleaning allows users worldwide to work with more consistent data when developing machine learning tools, without requiring expert knowledge of the raw data. In addition, we highlight some of the challenges of working with clinical multi-center data and include recommendations for similar future initiatives.
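The kinds of systematic inconsistencies described, such as mixed sex codings and date formats, can be normalised with small standard-library helpers. A sketch with hypothetical codings (the actual NCCID mappings and rules are defined in the pipeline's GitLab repository):

```python
from datetime import datetime

# Hypothetical free-text codings observed in raw clinical data.
SEX_MAP = {"m": "Male", "male": "Male", "1": "Male",
           "f": "Female", "female": "Female", "0": "Female"}

def clean_sex(value):
    """Normalise inconsistent sex codings to a single vocabulary."""
    return SEX_MAP.get(str(value).strip().lower(), "Unknown")

def clean_date(value, formats=("%Y-%m-%d", "%d/%m/%Y", "%d.%m.%Y")):
    """Parse dates recorded in mixed formats, returning ISO 8601 or None."""
    for fmt in formats:
        try:
            return datetime.strptime(value.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None

print(clean_sex(" F "), clean_date("03/04/2021"))
```

Returning an explicit `None`/`"Unknown"` rather than guessing keeps unparseable values visible to downstream users, in line with the paper's emphasis on consistency over silent correction.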


Subject(s)
COVID-19, Thorax, Humans, Artificial Intelligence, Machine Learning, State Medicine, Radiography, Thoracic, Thorax/diagnostic imaging
17.
Phys Med Biol ; 68(15)2023 07 19.
Article in English | MEDLINE | ID: mdl-37192631

ABSTRACT

Krylov subspace methods are a powerful family of iterative solvers for linear systems of equations, commonly used for inverse problems due to their intrinsic regularization properties. Moreover, these methods are naturally suited to large-scale problems, as they only require matrix-vector products with the system matrix (and its adjoint) to compute approximate solutions, and they display very fast convergence. Even though this class of methods has been widely researched and studied in the numerical linear algebra community, its use in applied medical physics and applied engineering is still very limited, e.g. in realistic large-scale computed tomography (CT) problems, and more specifically in cone beam CT (CBCT). This work attempts to bridge this gap by providing a general framework for the most relevant Krylov subspace methods applied to 3D CT problems, including the most well-known Krylov solvers for non-square systems (CGLS, LSQR, LSMR), possibly in combination with Tikhonov regularization, and methods that incorporate total variation regularization. This is provided within an open-source framework, the tomographic iterative GPU-based reconstruction toolbox, with the idea of promoting accessibility and reproducibility of the results for the algorithms presented. Finally, numerical results in synthetic and real-world 3D CT applications (medical CBCT and µ-CT datasets) are provided to showcase and compare the different Krylov subspace methods presented in the paper, as well as their suitability for different kinds of problems.
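CGLS, one of the non-square-system solvers listed, minimises ||Ax - b|| using only products with A and its transpose. A minimal dense NumPy sketch (real CT systems use matrix-free projector/backprojector operators on GPU, as in the toolbox):

```python
import numpy as np

def cgls(A, b, n_iter=50, tol=1e-14):
    """Conjugate gradient for least squares (CGLS): approximates
    argmin_x ||A x - b|| using only products with A and A^T.
    Early stopping (small n_iter) acts as implicit regularization."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    gamma0 = gamma
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol * gamma0:   # normal-equations residual converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))           # tall, non-square system
b = rng.normal(size=100)
x_cgls = cgls(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_cgls - x_ref))
```

Because only `A @ p` and `A.T @ r` appear, the same iteration runs unchanged with an implicit forward/adjoint projector pair in place of a stored matrix, which is what makes the family attractive for CBCT.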


Subject(s)
Spiral Cone-Beam Computed Tomography, Reproducibility of Results, Tomography, X-Ray Computed, Algorithms, Cone-Beam Computed Tomography/methods, Image Processing, Computer-Assisted/methods, Phantoms, Imaging
18.
IEEE Trans Med Imaging ; 42(11): 3167-3178, 2023 11.
Article in English | MEDLINE | ID: mdl-37022918

ABSTRACT

The isocitrate dehydrogenase (IDH) gene mutation is an essential biomarker for the diagnosis and prognosis of glioma. It is promising to better predict glioma genotype by integrating focal tumor image and geometric features with brain network features derived from MRI. Convolutional neural networks show reasonable performance in predicting IDH mutation, but they cannot learn from non-Euclidean data, e.g., geometric and network data. In this study, we propose a multi-modal learning framework using three separate encoders to extract features of the focal tumor image, tumor geometrics and global brain networks. To mitigate the limited availability of diffusion MRI, we develop a self-supervised approach to generate brain networks from anatomical multi-sequence MRI. Moreover, to extract tumor-related features from the brain network, we design a hierarchical attention module for the brain network encoder. Further, we design a bi-level multi-modal contrastive loss to align the multi-modal features and tackle the domain gap between the focal tumor and the global brain. Finally, we propose a weighted population graph to integrate the multi-modal features for genotype prediction. Experimental results on the testing set show that the proposed model outperforms the baseline deep learning models. The ablation experiments validate the performance of different components of the framework, and the visualized interpretation corresponds to clinical knowledge. In conclusion, the proposed learning framework provides a novel approach for predicting the genotype of glioma.
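Multi-modal contrastive alignment of this kind can be illustrated with a generic InfoNCE-style loss that pulls matched cross-modal embedding pairs together and pushes mismatched ones apart. This sketch is not the paper's exact bi-level loss, which applies such terms at both the focal-tumor and whole-brain levels:

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss between two modalities' embeddings:
    row i of z_a and row i of z_b are the matched pair; all other rows in
    the batch act as negatives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))    # -log p(matched pair)

rng = np.random.default_rng(0)
z_img = rng.normal(size=(8, 32))                  # e.g. tumor-image embeddings
loss_aligned = info_nce(z_img, z_img)             # perfectly aligned modalities
loss_random = info_nce(z_img, rng.normal(size=(8, 32)))
print(loss_aligned, loss_random)
```

Minimising such a loss drives the two encoders toward a shared embedding space, which is the mechanism used above to bridge the domain gap between modalities.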


Subject(s)
Brain Neoplasms , Glioma , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/genetics , Brain Neoplasms/pathology , Glioma/diagnostic imaging , Glioma/genetics , Glioma/pathology , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Genotype , Isocitrate Dehydrogenase/genetics
19.
J Digit Imaging ; 36(2): 739-752, 2023 04.
Article in English | MEDLINE | ID: mdl-36474089

ABSTRACT

The Dice similarity coefficient (DSC) is both a widely used metric and loss function for biomedical image segmentation due to its robustness to class imbalance. However, it is well known that the DSC loss is poorly calibrated, resulting in overconfident predictions that cannot be usefully interpreted in biomedical and clinical practice. Performance is often the only metric used to evaluate segmentations produced by deep neural networks, and calibration is often neglected. However, calibration is important for translation into biomedical and clinical practice, providing crucial contextual information to model predictions for interpretation by scientists and clinicians. In this study, we provide a simple yet effective extension of the DSC loss, named the DSC++ loss, that selectively modulates the penalty associated with overconfident, incorrect predictions. As a standalone loss function, the DSC++ loss achieves significantly improved calibration over the conventional DSC loss across six well-validated open-source biomedical imaging datasets, including both 2D binary and 3D multi-class segmentation tasks. Similarly, we observe significantly improved calibration when integrating the DSC++ loss into four DSC-based loss functions. Finally, we use softmax thresholding to illustrate that well calibrated outputs enable tailoring of recall-precision bias, which is an important post-processing technique to adapt the model predictions to suit the biomedical or clinical task. The DSC++ loss overcomes the major limitation of the DSC loss, providing a suitable loss function for training deep learning segmentation models for use in biomedical and clinical practice. Source code is available at https://github.com/mlyg/DicePlusPlus .
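As an illustration of the loss being extended, here is a minimal numpy sketch of the soft Dice loss, together with a hypothetical focal-style modulation of the false-positive and false-negative terms in the spirit of DSC++. The exponent `gamma` and the exact form of the modulation are assumptions for illustration; the authors' exact formulation is in the linked repository.

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, eps=1e-7):
    """Soft Dice loss: 1 - DSC, computed on probabilistic predictions."""
    intersection = (y_true * y_pred).sum()
    return 1.0 - (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

def modulated_dice_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    """Hypothetical overconfidence-modulated variant (DSC++-like):
    the false-positive and false-negative terms are raised to a power
    gamma, so confident errors contribute relatively more to the loss."""
    tp = (y_true * y_pred).sum()
    fp = (((1.0 - y_true) * y_pred) ** gamma).sum()
    fn = ((y_true * (1.0 - y_pred)) ** gamma).sum()
    return 1.0 - (2.0 * tp + eps) / (2.0 * tp + fp + fn + eps)

y_true = np.array([1.0, 0.0])
confident_wrong = modulated_dice_loss(y_true, np.array([0.1, 0.9]))
hedged_wrong = modulated_dice_loss(y_true, np.array([0.4, 0.6]))
```

A perfect prediction gives a loss near zero under both formulations, while the modulated variant penalises the confidently wrong prediction relatively more than the hedged one, which is the calibration behaviour the abstract motivates.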


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods
20.
Brain ; 146(4): 1714-1727, 2023 04 19.
Article in English | MEDLINE | ID: mdl-36189936

ABSTRACT

Glioblastoma is characterized by diffuse infiltration into the surrounding tissue along white matter tracts. Identifying the invisible tumour invasion beyond the focal lesion promises more effective treatment, but remains a significant challenge. It is increasingly accepted that glioblastoma can widely affect brain structure and function and lead to reorganization of neural connectivity. Quantifying neural connectivity in glioblastoma may therefore provide a valuable tool for identifying tumour invasion. Here we propose an approach to systematically identify tumour invasion by quantifying the structural connectome in glioblastoma patients. We first recruit two independent prospective glioblastoma cohorts: a discovery cohort with 117 patients and a validation cohort with 42 patients. We then use diffusion MRI of healthy subjects to construct tractography templates indicating white matter connection pathways between brain regions. Next, we construct fractional anisotropy skeletons from diffusion MRI using an improved voxel projection approach based on tract-based spatial statistics, from which the strengths of white matter connections and brain regions are estimated. To quantify the disrupted connectome, we calculate the deviation of the connectome strengths of patients from those of age-matched healthy controls. We then categorize the disruption into regional disruptions on the basis of the location of the connectome relative to focal lesions. We also characterize the topological properties of the patient connectome based on graph theory. Finally, we investigate the clinical, cognitive and prognostic significance of connectome metrics using Pearson correlation tests, mediation tests and survival models.
Our results show that the connectome disruptions in glioblastoma patients are widespread in the normal-appearing brain beyond focal lesions, and are associated with lower preoperative performance (P < 0.001), impaired cognitive function (P < 0.001) and worse survival (overall survival: hazard ratio = 1.46, P = 0.049; progression-free survival: hazard ratio = 1.49, P = 0.019). Additionally, these distant disruptions mediate the effect on topological alterations of the connectome (mediation effect: clustering coefficient -0.017, P < 0.001; characteristic path length 0.17, P = 0.008). Further, the preserved connectome in the normal-appearing brain demonstrates evidence of connectivity reorganization, where increased neural connectivity is associated with better overall survival (log-rank P = 0.005). In conclusion, our connectome approach can reveal and quantify glioblastoma invasion distant from the focal lesion and invisible on conventional MRI. The structural disruptions in the normal-appearing brain were associated with topological alteration of the brain and could indicate treatment targets. Our approach promises to aid more accurate patient stratification and more precise treatment planning.
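The deviation step described above can be sketched as per-region z-scores of a patient's connectome strengths against a healthy-control distribution. This is a minimal illustration with synthetic data; the variable names and the use of a plain z-score (rather than the authors' full pipeline with tractography templates and skeletons) are assumptions.

```python
import numpy as np

def connectome_deviation(patient_strengths, control_strengths):
    """Deviation of a patient's regional connectome strengths from
    age-matched healthy controls, expressed as a z-score per region."""
    mu = control_strengths.mean(axis=0)
    sigma = control_strengths.std(axis=0, ddof=1)
    return (patient_strengths - mu) / sigma

rng = np.random.default_rng(1)
# 30 healthy controls, 5 brain regions (synthetic strengths)
controls = rng.normal(loc=10.0, scale=2.0, size=(30, 5))
patient = controls.mean(axis=0).copy()
patient[0] -= 6.0          # simulate one strongly disrupted region
z = connectome_deviation(patient, controls)
```

Regions whose strengths match the control mean score near zero, while the disrupted region stands out with a strongly negative z-score, which is how distant disruption in the normal-appearing brain can be flagged.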


Subject(s)
Connectome , Glioblastoma , White Matter , Humans , Connectome/methods , Glioblastoma/diagnostic imaging , Glioblastoma/pathology , Diffusion Tensor Imaging/methods , Prospective Studies , Brain/pathology , White Matter/pathology