ABSTRACT
An increasingly common viewpoint is that protein dynamics datasets reside in a nonlinear subspace of low conformational energy. Ideal data analysis tools should therefore account for such nonlinear geometry. The Riemannian geometry setting can be suitable for a variety of reasons. First, it comes with a rich mathematical structure to account for a wide range of geometries that can be modeled after an energy landscape. Second, many standard data analysis tools developed for data in Euclidean space can be generalized to Riemannian manifolds. In the context of protein dynamics, a conceptual challenge comes from the lack of guidelines for constructing a smooth Riemannian structure based on an energy landscape. In addition, computing geodesics and related mappings in a computationally feasible manner poses a major challenge. This work addresses both challenges. The first part of the paper develops a local approximation technique for computing geodesics and related mappings on Riemannian manifolds in a computationally feasible manner. The second part constructs a smooth manifold and a Riemannian structure that is based on an energy landscape for protein conformations. The resulting Riemannian geometry is tested on several data analysis tasks relevant for protein dynamics data. In particular, geodesics with given start and end points approximately recover the corresponding molecular dynamics trajectories for proteins that undergo relatively ordered transitions with medium-sized deformations. The Riemannian protein geometry also gives physically realistic summary statistics and retrieves the underlying dimension even for large-sized deformations, within seconds on a laptop.
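For orientation, the boundary-value geodesics referred to above are, in the standard Riemannian formalism (stated here generically, not as the paper's specific construction), minimizers of a path energy with fixed endpoints, which in practice is discretized over a polygonal path:

```latex
% Path energy whose minimizers (with fixed endpoints) are geodesics:
E(\gamma) = \tfrac{1}{2}\int_0^1 g_{\gamma(t)}\!\left(\dot{\gamma}(t),\dot{\gamma}(t)\right)\mathrm{d}t,
\qquad \gamma(0) = x_0,\quad \gamma(1) = x_1 .
% A computable discretization over a polygonal path x_0 = \gamma_0, \gamma_1, \dots, \gamma_N = x_1:
E_N(\gamma_1,\dots,\gamma_{N-1}) = \tfrac{N}{2}\sum_{k=0}^{N-1}
g_{\gamma_k}\!\left(\gamma_{k+1}-\gamma_k,\; \gamma_{k+1}-\gamma_k\right).
```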
Subject(s)
Protein Conformation; Proteins; Proteins/chemistry; Algorithms; Molecular Dynamics Simulation
ABSTRACT
Glioblastoma is characterized by diffuse infiltration into the surrounding tissue along white matter tracts. Identifying invisible tumour invasion beyond the focal lesion promises more effective treatment but remains a significant challenge. It is increasingly accepted that glioblastoma can widely affect brain structure and function and lead to reorganization of neural connectivity. Quantifying neural connectivity in glioblastoma may provide a valuable tool for identifying tumour invasion. Here we propose an approach to systematically identify tumour invasion by quantifying the structural connectome in glioblastoma patients. We first recruit two independent prospective glioblastoma cohorts: a discovery cohort of 117 patients and a validation cohort of 42 patients. Next, we use diffusion MRI of healthy subjects to construct tractography templates indicating white matter connection pathways between brain regions. We then construct fractional anisotropy skeletons from diffusion MRI using an improved voxel projection approach based on tract-based spatial statistics, from which the strengths of white matter connections and brain regions are estimated. To quantify the disrupted connectome, we calculate the deviation of the connectome strengths of patients from those of age-matched healthy controls. We then categorize the disruption into regional disruptions based on the location of the connectome relative to the focal lesion. We also characterize the topological properties of the patient connectome using graph theory. Finally, we investigate the clinical, cognitive and prognostic significance of connectome metrics using Pearson correlation tests, mediation tests and survival models. Our results show that the connectome disruptions in glioblastoma patients are widespread in the normal-appearing brain beyond the focal lesions and are associated with lower preoperative performance (P < 0.001), impaired cognitive function (P < 0.001) and worse survival (overall survival: hazard ratio = 1.46, P = 0.049; progression-free survival: hazard ratio = 1.49, P = 0.019). Additionally, these distant disruptions mediate the effect on topological alterations of the connectome (mediation effect: clustering coefficient -0.017, P < 0.001; characteristic path length 0.17, P = 0.008). Further, the preserved connectome in the normal-appearing brain demonstrates evidence of connectivity reorganization, where increased neural connectivity is associated with better overall survival (log-rank P = 0.005). In conclusion, our connectome approach can reveal and quantify glioblastoma invasion distant from the focal lesion and invisible on conventional MRI. The structural disruptions in the normal-appearing brain were associated with topological alteration of the brain and could indicate treatment targets. Our approach promises to aid more accurate patient stratification and more precise treatment planning.
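As an illustration of the deviation step described above, the following minimal sketch computes a z-score-style deviation of a patient's connectome strengths from age-matched healthy controls; variable names, data layout and the threshold are illustrative assumptions, not the study's exact protocol.

```python
import numpy as np

def connectome_deviation(patient_strengths, control_strengths, eps=1e-8):
    """Deviation of a patient's connectome strengths from healthy controls.

    patient_strengths : (n_connections,) array for one patient
    control_strengths : (n_controls, n_connections) array for matched controls
    Returns a signed z-score per connection; strongly negative values indicate
    weaker-than-normal connections (candidate disruptions).
    """
    mu = control_strengths.mean(axis=0)
    sigma = control_strengths.std(axis=0)
    return (patient_strengths - mu) / (sigma + eps)

# Toy usage with random data standing in for fractional-anisotropy-based strengths.
rng = np.random.default_rng(0)
controls = rng.normal(0.5, 0.05, size=(40, 200))   # 40 controls, 200 connections
patient = rng.normal(0.45, 0.05, size=200)          # globally weakened patient
z = connectome_deviation(patient, controls)
disrupted = np.where(z < -2.0)[0]                    # threshold is illustrative
print(f"{disrupted.size} connections flagged as disrupted")
```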
Subject(s)
Connectome; Glioblastoma; White Matter; Humans; Connectome/methods; Glioblastoma/diagnostic imaging; Glioblastoma/pathology; Diffusion Tensor Imaging/methods; Prospective Studies; Brain/pathology; White Matter/pathology
ABSTRACT
The Dice similarity coefficient (DSC) is both a widely used metric and a loss function for biomedical image segmentation due to its robustness to class imbalance. However, it is well known that the DSC loss is poorly calibrated, resulting in overconfident predictions that cannot be usefully interpreted in biomedical and clinical practice. Performance is often the only metric used to evaluate segmentations produced by deep neural networks, and calibration is often neglected. However, calibration is important for translation into biomedical and clinical practice, providing crucial contextual information to model predictions for interpretation by scientists and clinicians. In this study, we provide a simple yet effective extension of the DSC loss, named the DSC++ loss, which selectively modulates the penalty associated with overconfident, incorrect predictions. As a standalone loss function, the DSC++ loss achieves significantly improved calibration over the conventional DSC loss across six well-validated open-source biomedical imaging datasets, including both 2D binary and 3D multi-class segmentation tasks. Similarly, we observe significantly improved calibration when integrating the DSC++ loss into four DSC-based loss functions. Finally, we use softmax thresholding to illustrate that well-calibrated outputs enable tailoring of the recall-precision bias, an important post-processing technique for adapting model predictions to the biomedical or clinical task at hand. The DSC++ loss overcomes the major limitation of the DSC loss, providing a suitable loss function for training deep learning segmentation models for use in biomedical and clinical practice. Source code is available at https://github.com/mlyg/DicePlusPlus .
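The exact DSC++ formulation is given in the paper and the linked repository; the sketch below only illustrates the general idea of a soft Dice loss whose error terms are modulated by a focal-style exponent so that confidently wrong predictions dominate the penalty. The modulation shown, and all names, are illustrative assumptions rather than the published loss.

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """Standard soft Dice loss for a binary segmentation map.
    probs, target: tensors of shape (batch, H, W) with probs in [0, 1]."""
    tp = (probs * target).sum(dim=(1, 2))
    fp = (probs * (1 - target)).sum(dim=(1, 2))
    fn = ((1 - probs) * target).sum(dim=(1, 2))
    dice = (2 * tp + eps) / (2 * tp + fp + fn + eps)
    return 1 - dice.mean()

def modulated_dice_loss(probs, target, gamma=2.0, eps=1e-6):
    """Illustrative focal-style modulation of the false positive / false negative
    terms: raising them to a power gamma > 1 down-weights low-confidence errors,
    so confidently wrong predictions contribute relatively more to the loss.
    This is a sketch of the idea, not the published DSC++ formulation."""
    tp = (probs * target).sum(dim=(1, 2))
    fp = (probs ** gamma * (1 - target)).sum(dim=(1, 2))
    fn = ((1 - probs) ** gamma * target).sum(dim=(1, 2))
    dice = (2 * tp + eps) / (2 * tp + fp + fn + eps)
    return 1 - dice.mean()

# Toy usage
probs = torch.rand(2, 64, 64)
target = (torch.rand(2, 64, 64) > 0.7).float()
print(soft_dice_loss(probs, target).item(), modulated_dice_loss(probs, target).item())
```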
Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods
ABSTRACT
PURPOSE: Dynamic nuclear polarization is an emerging imaging method that allows noninvasive investigation of tissue metabolism. However, the relatively low metabolic spatial resolution that can be achieved limits some applications, and improving this resolution could have important implications for the technique. METHODS: We propose to enhance the 3D resolution of carbon-13 magnetic resonance imaging (13C-MRI) using the structural information provided by hydrogen-1 MRI (1H-MRI). The proposed approach relies on variational regularization in 3D with a directional total variation regularizer, resulting in a convex optimization problem which is robust with respect to its parameters and can be solved efficiently by many standard optimization algorithms. Validation was carried out using an in silico phantom, an in vitro phantom and in vivo data from four human volunteers. RESULTS: The clinical data used in this study were upsampled by a factor of 4 in-plane and by a factor of 15 out-of-plane, thereby revealing occult information. A key finding is that 3D super-resolution shows superior performance compared to several 2D super-resolution approaches: for example, for the in silico data the mean squared error was reduced by around 40%, and for all data the approach produced increased anatomical definition of the metabolic imaging. CONCLUSION: The proposed approach generates images with enhanced anatomical resolution while largely preserving the quantitative measurements of metabolism. Although the work requires clinical validation against tissue measures of metabolism, it offers great potential in the field of 13C-MRI and could significantly improve image quality in the future.
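One common way to write a directional total variation regularizer guided by a reference image is sketched below for orientation; it is a standard construction from the literature, given under the assumption that v denotes the 1H-MRI reference and u the 13C image being upsampled, and is not necessarily the paper's exact definition.

```latex
% Edge-direction field from the reference image v (eta > 0 avoids division by zero):
\xi(x) = \frac{\nabla v(x)}{\sqrt{|\nabla v(x)|^{2} + \eta^{2}}},
\qquad
P_{\xi}(x) = \mathrm{Id} - \xi(x)\,\xi(x)^{\top}.
% Directional total variation of u, penalizing gradients of u
% only in directions not explained by edges of v:
\mathrm{dTV}(u) = \int_{\Omega} \bigl\| P_{\xi}(x)\,\nabla u(x) \bigr\|_{2} \,\mathrm{d}x .
% Super-resolution as a convex variational problem with forward operator A and data f:
\min_{u}\; \tfrac{1}{2}\,\| A u - f \|_{2}^{2} + \alpha\, \mathrm{dTV}(u).
```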
Subject(s)
Algorithms; Magnetic Resonance Imaging; Brain/diagnostic imaging; Carbon Isotopes; Humans; Phantoms, Imaging
ABSTRACT
Can one learn to diagnose COVID-19 under extreme minimal supervision? Since the outbreak of COVID-19 there has been a rush to develop automatic techniques for expert-level disease identification on chest X-ray data. In particular, deep supervised learning has become the go-to paradigm. However, the performance of such models is heavily dependent on the availability of a large and representative labelled dataset, whose creation is expensive and time-consuming and poses a particular challenge for a novel disease. Semi-supervised learning has shown the ability to match the performance of supervised models whilst requiring only a small fraction of the labelled examples, which makes the semi-supervised paradigm an attractive option for identifying COVID-19. In this work, we introduce a graph-based deep semi-supervised framework for classifying COVID-19 from chest X-rays. Our framework introduces an optimisation model for graph diffusion that reinforces the natural relation between the tiny labelled set and the vast unlabelled data. We then use the diffusion prediction outputs as pseudo-labels in an iterative training scheme for a deep network. We demonstrate, through our experiments, that our model is able to outperform the current leading supervised model with a tiny fraction of the labelled examples. Finally, we provide attention maps to accommodate the radiologist's mental model, better fitting their perceptual and cognitive abilities. These visualisations aim to assist the radiologist in judging whether the diagnosis is correct and, in consequence, to accelerate the decision.
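A minimal sketch of graph-based label diffusion in the spirit of the framework described above (the actual optimisation model in the paper differs): it builds a k-nearest-neighbour graph over feature vectors, diffuses the few known labels through the normalised graph, and turns the result into pseudo-labels. All parameters and the feature source are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffuse_labels(features, labels, alpha=0.9, k=10, sigma=1.0):
    """Graph diffusion of a tiny labelled set over many unlabelled samples.

    features : (n, d) array of image feature vectors (e.g. CNN embeddings)
    labels   : (n,) array with class indices for labelled samples and -1 otherwise
    Returns pseudo-labels for every sample.
    """
    n = features.shape[0]
    d2 = cdist(features, features, "sqeuclidean")
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)                         # no self-loops
    keep = np.argsort(-W, axis=1)[:, :k]             # k nearest neighbours per node
    mask = np.zeros_like(W, dtype=bool)
    mask[np.arange(n)[:, None], keep] = True
    W = W * np.maximum(mask, mask.T)                 # symmetrised kNN graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1) + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt                  # normalised affinity
    n_classes = labels.max() + 1
    Y = np.zeros((n, n_classes))
    Y[labels >= 0, labels[labels >= 0]] = 1.0
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)    # closed-form label diffusion
    return F.argmax(axis=1)

# Toy usage: 10 labelled embeddings among 500 samples, two classes.
feats = np.random.default_rng(0).normal(size=(500, 64))
lab = -np.ones(500, dtype=int)
lab[:5], lab[5:10] = 0, 1
pseudo = diffuse_labels(feats, lab)
```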
ABSTRACT
In recent years the use of convolutional layers to encode an inductive bias (translational equivariance) in neural networks has proven to be a very fruitful idea. The successes of this approach have motivated a line of research into incorporating other symmetries into deep learning methods, in the form of group equivariant convolutional neural networks. Much of this work has focused on the roto-translational symmetry of R^d, but other examples are the scaling symmetry of R^d and the rotational symmetry of the sphere. In this work, we demonstrate that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods for inverse problems that are motivated by the variational regularisation approach. Indeed, if the regularisation functional is invariant under a group symmetry, the corresponding proximal operator will satisfy an equivariance property with respect to the same group symmetry. As a result of this observation, we design learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks. We use roto-translationally equivariant operations in the proposed methodology and apply it to the problems of low-dose computerised tomography reconstruction and subsampled magnetic resonance imaging reconstruction. The proposed methodology is demonstrated to improve the reconstruction quality of a learned reconstruction method, with little extra computational cost at training time and no extra cost at test time.
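The key observation can be stated compactly; the following is the standard argument, written for a group acting by invertible, norm-preserving operators T_g (which covers the roto-translations used here), and is included only for orientation.

```latex
% If R(T_g u) = R(u) for all g and each T_g preserves the norm, then
\mathrm{prox}_{\lambda R}(T_g v)
  = \operatorname*{arg\,min}_{u}\ \tfrac{1}{2}\|u - T_g v\|^{2} + \lambda R(u)
  % substitute u = T_g w (T_g is invertible):
  = T_g \operatorname*{arg\,min}_{w}\ \tfrac{1}{2}\|T_g w - T_g v\|^{2} + \lambda R(T_g w)
  % use the isometry and the invariance of R:
  = T_g \operatorname*{arg\,min}_{w}\ \tfrac{1}{2}\|w - v\|^{2} + \lambda R(w)
  = T_g\, \mathrm{prox}_{\lambda R}(v),
% i.e. the proximal operator is equivariant:
% \mathrm{prox}_{\lambda R} \circ T_g = T_g \circ \mathrm{prox}_{\lambda R}.
```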
ABSTRACT
Thalamic alterations occur in many neurological disorders including Alzheimer's disease, Parkinson's disease and multiple sclerosis. Routine interventions to improve symptom severity in movement disorders, for example, often consist of surgery or deep brain stimulation to diencephalic nuclei. Therefore, accurate delineation of grey matter thalamic subregions is of the utmost clinical importance. MRI is highly appropriate for structural segmentation as it provides different views of the anatomy from a single scanning session. With several contrasts potentially available, it is also increasingly important to develop new image segmentation techniques that can operate multi-spectrally. We hereby propose a new segmentation method for use with multi-modality data, which we evaluated for automated segmentation of major thalamic subnuclear groups using T1-weighted, T2*-weighted and quantitative susceptibility mapping (QSM) information. The proposed method consists of four steps: highly iterative image co-registration, manual segmentation on the average training-data template, supervised learning for pattern recognition, and a final convex optimisation step imposing further spatial constraints to refine the solution. This led to solutions in greater agreement with manual segmentation than the standard Morel-atlas-based approach. Furthermore, we show that the multi-contrast approach boosts segmentation performance. We then investigated whether prior knowledge using the training-template contours could further improve convex segmentation accuracy and robustness, which led to highly precise multi-contrast segmentations in single subjects. This approach can be extended to most 3D imaging data types and any region of interest discernible in single scans or multi-subject templates.
Subject(s)
Gray Matter/anatomy & histology; Gray Matter/diagnostic imaging; Magnetic Resonance Imaging/methods; Neuroimaging/methods; Thalamic Nuclei/anatomy & histology; Thalamic Nuclei/diagnostic imaging; Adult; Humans; Image Processing, Computer-Assisted; Pattern Recognition, Automated; Supervised Machine Learning
ABSTRACT
We study variational regularisation methods for inverse problems with imperfect forward operators whose errors can be modelled by order intervals in a partial order of a Banach lattice. We carry out analysis with respect to existence and convex duality for general data fidelity terms and regularisation functionals. Both for a priori and a posteriori parameter choice rules, we obtain convergence rates of the regularised solutions in terms of Bregman distances. Our results apply to fidelity terms such as Wasserstein distances, φ-divergences, norms, as well as sums and infimal convolutions of those.
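For reference, the convergence rates mentioned above are measured in the (generalized) Bregman distance of the regularisation functional, which for a convex functional R and a subgradient p in the subdifferential of R at v is the standard quantity:

```latex
D_R^{p}(u, v) \;=\; R(u) - R(v) - \langle p,\; u - v \rangle,
\qquad p \in \partial R(v).
```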
ABSTRACT
Evidence suggests that both the interaction of so-called Merkel cells and the epidermal stress distribution play an important role in the formation of fingerprint patterns during pregnancy. To model the formation of fingerprint patterns in a biologically meaningful way, these patterns have to become stationary. For the creation of synthetic fingerprints it is also very desirable that rescaling the model parameters leads to rescaled distances between the stationary fingerprint ridges. Based on these observations, as well as the model introduced by Kücken and Champod, we propose a new model for the formation of fingerprint patterns during pregnancy. In this anisotropic interaction model, the interaction forces not only depend on the distance vector between the cells and the model parameters, but additionally on an underlying tensor field representing a stress field. This dependence on the tensor field leads to complex, anisotropic patterns. We study the resulting stationary patterns both analytically and numerically. In particular, we show that fingerprint patterns can be modeled as stationary solutions by choosing the underlying tensor field appropriately.
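A schematic sketch (not the authors' exact model) of how an interaction force can be made anisotropic through an underlying tensor field: the scalar interaction kernel acts on the distance vector after it has been reweighted along the local stress directions. All kernel choices, the stress field and the parameters below are illustrative.

```python
import numpy as np

def anisotropic_forces(x, chi=0.2, eps=1e-12):
    """Pairwise anisotropic interaction forces for particles x of shape (n, 2).

    A tensor field T(p) = chi * s s^T + (1 - chi) * l l^T reweights the distance
    vector along local directions s and l (perpendicular); chi != 1 makes the
    interaction anisotropic. f(r) is a generic repulsion/attraction kernel."""
    def tensor_field(p):
        theta = np.arctan2(p[1], p[0])               # illustrative stress direction
        s = np.array([np.cos(theta), np.sin(theta)])
        l = np.array([-s[1], s[0]])
        return chi * np.outer(s, s) + (1 - chi) * np.outer(l, l)

    def f(r):
        return np.exp(-r) - 0.5 * np.exp(-r / 2.0)   # short-range repulsion, long-range attraction

    n = x.shape[0]
    F = np.zeros_like(x)
    for i in range(n):
        T = tensor_field(x[i])
        for j in range(n):
            if i == j:
                continue
            d = x[i] - x[j]
            r = np.linalg.norm(d) + eps
            F[i] += f(r) * (T @ d) / r
    return F

# One explicit Euler step of the particle dynamics dx/dt = F(x).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(50, 2))
x = x + 0.05 * anisotropic_forces(x)
```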
Subject(s)
Algorithms; Computer Simulation; Dermatoglyphics; Epidermal Cells/cytology; Merkel Cells/cytology; Stress, Physiological; Anisotropy; Epidermal Cells/physiology; Female; Humans; Merkel Cells/physiology; Pregnancy
ABSTRACT
In this paper we propose a workflow to detect and track mitotic cells in time-lapse microscopy image sequences. In order to avoid the requirement for cell lines expressing fluorescent markers and the associated phototoxicity, phase contrast microscopy is often preferred over fluorescence microscopy in live-cell imaging. However, the specific image characteristics of this modality complicate image processing and impede the use of standard methods. Nevertheless, automated analysis is desirable, since manual analysis is subjective, biased and extremely time-consuming for large data sets. Here, we present the following workflow based on mathematical imaging methods. In the first step, mitosis detection is performed by means of the circular Hough transform. The obtained circular contour subsequently serves as an initialisation for the tracking algorithm, which is based on variational methods and sub-divided into two parts: in order to determine the beginning of the whole mitosis cycle, a backwards tracking procedure is performed; after that, the cell is tracked forwards in time until the end of mitosis. As a result, the average mitosis duration and the ratios of different cell fates (cell death, no division, division into two or more daughter cells) can be measured, and statistics on cell morphologies can be obtained. All of the tools are featured in the user-friendly MATLAB® graphical user interface MitosisAnalyser.
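To make the first step concrete, here is a minimal sketch of circle detection with the circular Hough transform using scikit-image; it is a generic detector, not the MitosisAnalyser pipeline itself (which is implemented in MATLAB), and the radii, thresholds and toy data are illustrative. The detected circular contour is the kind of object that can seed the subsequent variational tracking.

```python
import numpy as np
from skimage import feature, transform, draw

def detect_round_cells(image, radii=np.arange(8, 20), n_cells=5, sigma=1.0):
    """Detect roughly circular (candidate mitotic) cells in a grayscale frame.
    Returns (row, col, radius) triples of the strongest circle candidates."""
    edges = feature.canny(image, sigma=sigma)
    hough = transform.hough_circle(edges, radii)
    _, cols, rows, found_radii = transform.hough_circle_peaks(
        hough, radii, total_num_peaks=n_cells)
    return list(zip(rows, cols, found_radii))

# Toy frame: two bright discs on a noisy background.
img = np.random.default_rng(0).normal(0.1, 0.02, (128, 128))
for r0, c0 in [(40, 40), (90, 80)]:
    rr, cc = draw.disk((r0, c0), 12)
    img[rr, cc] = 1.0
print(detect_round_cells(img, n_cells=2))
```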
Subject(s)
Cell Tracking/methods; Epithelial Cells/ultrastructure; Image Processing, Computer-Assisted/methods; Insulin-Secreting Cells/ultrastructure; Microscopy, Phase-Contrast/methods; Mitosis; Algorithms; Cell Line, Tumor; Cell Tracking/statistics & numerical data; HeLa Cells; Humans; Image Processing, Computer-Assisted/statistics & numerical data; Microscopy, Fluorescence/instrumentation; Microscopy, Fluorescence/methods; Microscopy, Phase-Contrast/instrumentation; Time-Lapse Imaging/instrumentation; Time-Lapse Imaging/methods; Workflow
ABSTRACT
Partial differential equations (PDEs) play a fundamental role in the mathematical modelling of many processes and systems in the physical, biological and other sciences. To simulate such processes and systems, the solutions of PDEs often need to be approximated numerically. The finite element method, for instance, is a standard methodology for doing so. The recent success of deep neural networks at various approximation tasks has motivated their use in the numerical solution of PDEs. These so-called physics-informed neural networks and their variants have been shown to successfully approximate a large range of PDEs. So far, physics-informed neural networks and the finite element method have mainly been studied in isolation from each other. In this work, we compare the two methodologies in a systematic computational study. We employ both methods to numerically solve various linear and nonlinear PDEs: the Poisson equation in 1D, 2D and 3D, the Allen-Cahn equation in 1D, and the semilinear Schrödinger equation in 1D and 2D. We then compare computational costs and approximation accuracies. In terms of solution time and accuracy, physics-informed neural networks were not able to outperform the finite element method in our study. In some experiments, however, they were faster at evaluating the computed solution.
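As a concrete illustration of the physics-informed approach being compared, below is a minimal PyTorch sketch for the 1D Poisson problem -u''(x) = pi^2 sin(pi x) on (0, 1) with homogeneous Dirichlet boundary conditions (exact solution u(x) = sin(pi x)). Network size, sampling and optimiser settings are illustrative, not those used in the study.

```python
import math
import torch

torch.manual_seed(0)

# Small fully connected network u_theta : (0, 1) -> R
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x):
    """Residual of -u'' - pi^2 sin(pi x) at collocation points x."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return -d2u - math.pi**2 * torch.sin(math.pi * x)

xb = torch.tensor([[0.0], [1.0]])           # boundary points
for step in range(5000):
    xc = torch.rand(128, 1)                  # interior collocation points
    loss = pde_residual(xc).pow(2).mean() + net(xb).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare with the exact solution u(x) = sin(pi x)
xt = torch.linspace(0, 1, 101).reshape(-1, 1)
err = (net(xt) - torch.sin(math.pi * xt)).abs().max().item()
print(f"max abs error: {err:.3e}")
```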
ABSTRACT
BACKGROUND AND OBJECTIVE: 4D flow magnetic resonance imaging provides time-resolved blood flow velocity measurements, but suffers from limitations in spatio-temporal resolution and noise. In this study, we investigated the use of sinusoidal representation networks (SIRENs) to improve denoising and super-resolution of velocity fields measured by 4D flow MRI in the thoracic aorta. METHODS: Efficient training of SIRENs in 4D was achieved by sampling voxel coordinates and enforcing the no-slip condition at the vessel wall. A set of synthetic measurements was generated from computational fluid dynamics simulations, reproducing different noise levels. The influence of SIREN architecture was systematically investigated, and the performance of our method was compared to existing approaches for 4D flow denoising and super-resolution. RESULTS: Compared to existing techniques, a SIREN with 300 neurons per layer and 20 layers achieved lower errors (up to 50% lower vector normalized root mean square error, 42% lower magnitude normalized root mean square error, and 15% lower direction error) in velocity and wall shear stress fields. Applied to real 4D flow velocity measurements in a patient-specific aortic aneurysm, our method produced denoised and super-resolved velocity fields while maintaining accurate macroscopic flow measurements. CONCLUSIONS: This study demonstrates the feasibility of using SIRENs for complex blood flow velocity representation from clinical 4D flow, with quick execution and straightforward implementation.
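For orientation, a minimal SIREN-style coordinate network in PyTorch: an MLP with sine activations and a frequency-scaled first layer, mapping space-time coordinates (x, y, z, t) to a velocity vector. Layer sizes, omega_0 and the initialisation follow the common SIREN recipe and are illustrative, not the exact architecture evaluated in the study.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, with SIREN-style initialisation."""
    def __init__(self, in_f, out_f, omega0=30.0, first=False):
        super().__init__()
        self.omega0, self.linear = omega0, nn.Linear(in_f, out_f)
        with torch.no_grad():
            bound = 1.0 / in_f if first else (6.0 / in_f) ** 0.5 / omega0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

class VelocitySIREN(nn.Module):
    """Maps (x, y, z, t) coordinates to a 3D velocity vector."""
    def __init__(self, hidden=300, layers=4, omega0=30.0):
        super().__init__()
        blocks = [SineLayer(4, hidden, omega0, first=True)]
        blocks += [SineLayer(hidden, hidden, omega0) for _ in range(layers - 1)]
        self.body = nn.Sequential(*blocks)
        self.head = nn.Linear(hidden, 3)

    def forward(self, coords):
        return self.head(self.body(coords))

# Fit to (noisy) 4D flow samples: coords of shape (N, 4), velocities of shape (N, 3).
model = VelocitySIREN()
coords, vel = torch.rand(1024, 4), torch.rand(1024, 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = ((model(coords) - vel) ** 2).mean()
loss.backward()
opt.step()
```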
Subject(s)
Aorta, Thoracic; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Blood Flow Velocity/physiology; Aorta, Thoracic/diagnostic imaging; Aorta, Thoracic/physiology; Stress, Mechanical; Hydrodynamics; Imaging, Three-Dimensional/methods
ABSTRACT
Glioblastoma, an aggressive brain tumor prevalent in adults, exhibits heterogeneity in its microstructures and vascular patterns. The delineation of its subregions could facilitate the development of region-targeted therapies. However, current unsupervised learning techniques for this task face challenges in reliability due to the fluctuations of clustering algorithms, particularly when processing data from diverse patient cohorts. Furthermore, stable clustering results do not guarantee clinical meaningfulness. To establish the clinical relevance of these subregions, we perform survival prediction using radiomic features extracted from them. Achieving a balance between outcome stability and clinical relevance therefore presents a significant challenge, further exacerbated by the extensive time required for hyper-parameter tuning. In this study, we introduce a multi-objective Bayesian optimization (MOBO) framework, which leverages a Feature-enhanced Auto-Encoder (FAE) and customized losses to assess both the reproducibility of clustering algorithms and the clinical relevance of their outcomes. Specifically, we embed the entirety of these processes within the MOBO framework, modeling both criteria using distinct Gaussian Processes (GPs). The proposed MOBO framework can automatically balance the trade-off between the two criteria by employing bespoke stability and clinical significance losses. Our approach efficiently optimizes all hyper-parameters, including the FAE architecture and clustering parameters, within a few steps. This not only accelerates the process but also consistently yields robust MRI subregion delineations and provides survival predictions with strong statistical validation.
Subject(s)
Algorithms; Bayes Theorem; Brain Neoplasms; Glioblastoma; Humans; Glioblastoma/diagnostic imaging; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/mortality; Magnetic Resonance Imaging/methods; Reproducibility of Results; Cluster Analysis; Survival Analysis; Image Interpretation, Computer-Assisted/methods
ABSTRACT
Cell imaging assays utilising fluorescence stains are essential for observing sub-cellular organelles and their responses to perturbations. Immunofluorescence (IF) staining is routine in labs, however recent innovations in generative AI are challenging the idea of wet-lab IF staining. This is especially true when the availability and cost of specific fluorescence dyes are a problem for some labs. Furthermore, the staining process takes time, introduces inter- and intra-technician variability, hinders downstream image and data analysis, and limits the reusability of image data for other projects. Recent studies in the literature have shown that synthetic IF images can be generated from brightfield (BF) images using generative AI algorithms. Therefore, in this study, we benchmark and compare five models from three types of IF generation backbones (CNN, GAN and diffusion models) using a publicly available dataset. This paper not only serves as a comparative study to determine the best-performing model but also proposes a comprehensive analysis pipeline for evaluating the efficacy of generators in IF image synthesis. We highlight the potential of deep learning-based generators for IF image synthesis, while also discussing potential issues and future research directions. Although generative AI shows promise in simplifying cell phenotyping using only BF images with generated IF staining, further research and validation are needed to address the key challenges of model generalisability, batch effects, feature relevance and computational costs.
ABSTRACT
In vivo cardiac diffusion tensor imaging (cDTI) is a promising magnetic resonance imaging (MRI) technique for evaluating the microstructure of myocardial tissue in living hearts, providing insights into cardiac function and enabling the development of innovative therapeutic strategies. However, the integration of cDTI into routine clinical practice remains challenging due to the technical obstacles involved in the acquisition, such as a low signal-to-noise ratio and prolonged scanning times. In this study, we investigated and implemented three different types of deep learning-based MRI reconstruction models for cDTI reconstruction. We evaluated the performance of these models based on reconstruction quality assessment, diffusion tensor parameter assessment and computational cost assessment. Our results indicate that the models discussed in this study can be applied for clinical use at acceleration factors (AF) of ×2 and ×4, with the D5C5 model showing superior fidelity for reconstruction and the SwinMR model providing higher perceptual scores. There is no statistical difference from the reference for any of the diffusion tensor parameters at AF ×2, or for most diffusion tensor parameters at AF ×4, and the quality of most diffusion tensor parameter maps is visually acceptable. SwinMR is recommended as the optimal approach for reconstruction at AF ×2 and AF ×4. However, we believe that the models discussed in this study are not yet ready for clinical use at higher AFs. At AF ×8, the performance of all the models discussed remains limited, with only half of the diffusion tensor parameters recovered to a level with no statistical difference from the reference. Some diffusion tensor parameter maps even provide wrong and misleading information.
Subject(s)
Deep Learning; Diffusion Tensor Imaging; Diffusion Tensor Imaging/methods; Algorithms; Magnetic Resonance Imaging; Magnetic Resonance Spectroscopy; Diffusion Magnetic Resonance Imaging/methods
ABSTRACT
Deep learning has been extensively applied in medical image reconstruction, where Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) represent the predominant paradigms, each possessing distinct advantages and inherent limitations: CNNs exhibit linear complexity with local sensitivity, whereas ViTs demonstrate quadratic complexity with global sensitivity. The emerging Mamba architecture has shown strong performance in learning visual representations, combining the advantages of linear scalability and global sensitivity. In this study, we introduce MambaMIR, an Arbitrary-Masked Mamba-based model with wavelet decomposition for joint medical image reconstruction and uncertainty estimation. A novel Arbitrary Scan Masking (ASM) mechanism "masks out" redundant information to introduce randomness for subsequent uncertainty estimation. Compared to the commonly used Monte Carlo (MC) dropout, our proposed MC-ASM provides an uncertainty map without the need for hyperparameter tuning and mitigates the performance drop typically observed when applying dropout to low-level tasks. For further texture preservation and better perceptual quality, we incorporate the wavelet transform into MambaMIR and explore its variant based on the Generative Adversarial Network, namely MambaMIR-GAN. Comprehensive experiments have been conducted for multiple representative medical image reconstruction tasks, demonstrating that the proposed MambaMIR and MambaMIR-GAN outperform other baseline and state-of-the-art methods across different reconstruction tasks, with MambaMIR achieving the best reconstruction fidelity and MambaMIR-GAN the best perceptual quality. In addition, our MC-ASM provides uncertainty maps as an additional tool for clinicians, while mitigating the typical performance drop caused by the commonly used dropout.
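A generic sketch of Monte Carlo uncertainty estimation by repeated stochastic forward passes, which is the mechanism that MC dropout and the proposed MC-ASM share; the stochastic perturbation below is a simple stand-in and not the paper's Arbitrary Scan Masking, and the model is a placeholder.

```python
import torch

@torch.no_grad()
def mc_uncertainty(model, x, stochastic_forward, n_samples=16):
    """Mean reconstruction and voxel-wise uncertainty from repeated stochastic passes.

    stochastic_forward(model, x) must return one reconstruction with a random
    perturbation applied (e.g. dropout, or randomly masked inputs)."""
    samples = torch.stack([stochastic_forward(model, x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # (reconstruction, uncertainty map)

# Illustrative stochastic pass: randomly zero 10% of the input before reconstruction.
def masked_pass(model, x, p=0.1):
    mask = (torch.rand_like(x) > p).float()
    return model(x * mask)

model = torch.nn.Identity()              # stand-in for a reconstruction network
x = torch.rand(1, 1, 64, 64)             # stand-in for an undersampled input
recon, uncertainty = mc_uncertainty(model, x, masked_pass)
```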
ABSTRACT
For healthcare datasets, it is often impossible to combine data samples from multiple sites due to ethical, privacy, or logistical concerns. Federated learning allows for the utilization of powerful machine learning algorithms without requiring the pooling of data. Healthcare data have many simultaneous challenges, such as highly siloed data, class imbalance, missing data, distribution shifts, and non-standardized variables, that require new methodologies to address. Federated learning adds significant methodological complexity to conventional centralized machine learning, requiring distributed optimization, communication between nodes, aggregation of models, and redistribution of models. In this systematic review, we consider all papers on Scopus published between January 2015 and February 2023 that describe new federated learning methodologies for addressing challenges with healthcare data. We reviewed 89 papers meeting these criteria. Significant systemic issues were identified throughout the literature, compromising many of the methodologies reviewed. We give detailed recommendations to help improve methodology development for federated learning in healthcare.
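To make the aggregation and redistribution loop concrete, here is a minimal sketch of federated averaging (FedAvg, a standard baseline rather than a method proposed by the review): each site trains a copy of the global model on its own data, and the server averages the parameters weighted by local sample counts. Data, model and hyperparameters are toy placeholders.

```python
import copy
import torch

def local_update(model, loader, epochs=1, lr=1e-2):
    """Train a copy of the global model on one site's private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(local(x).squeeze(-1), y).backward()
            opt.step()
    return local.state_dict()

def fed_avg(global_model, site_loaders, rounds=10):
    """Federated averaging: no raw data leaves a site, only model weights."""
    for _ in range(rounds):
        states, sizes = [], []
        for loader in site_loaders:
            states.append(local_update(global_model, loader))
            sizes.append(len(loader.dataset))
        total = sum(sizes)
        avg = {k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model

# Toy usage: two "hospitals" with different class balance.
def toy_loader(n, pos_rate):
    x = torch.randn(n, 5)
    y = (torch.rand(n) < pos_rate).float()
    return torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=16)

model = torch.nn.Linear(5, 1)
fed_avg(model, [toy_loader(200, 0.1), toy_loader(80, 0.5)], rounds=3)
```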
ABSTRACT
Purpose: To assess radiomics and deep learning (DL) methods for identifying symptomatic carotid artery disease (CAD) from carotid CT angiography (CTA) images, and to compare the performance of these novel methods to the conventional calcium score. Methods: CTA images from symptomatic patients (ischaemic stroke/transient ischaemic attack within the last 3 months) and asymptomatic patients were analysed. Carotid arteries were classified into culprit, non-culprit and asymptomatic. The calcium score was assessed using the Agatston method. 93 radiomic features were extracted from regions of interest drawn on 14 consecutive CTA slices. For DL, convolutional neural networks (CNNs) with and without transfer learning were trained directly on CTA slices. Predictive performance was assessed using 5-fold cross-validated AUC scores. SHAP and Grad-CAM algorithms were used for explainability. Results: 132 carotid arteries were analysed (41 culprit, 41 non-culprit and 50 asymptomatic). For asymptomatic vs symptomatic arteries, radiomics attained a mean AUC of 0.96 (±0.02), followed by DL with 0.86 (±0.06) and then calcium with 0.79 (±0.08). For culprit vs non-culprit arteries, radiomics achieved a mean AUC of 0.75 (±0.09), followed by DL with 0.67 (±0.10) and then calcium with 0.60 (±0.02). For multi-class classification, the mean AUCs were 0.95 (±0.07), 0.79 (±0.05) and 0.71 (±0.07) for radiomics, DL and calcium, respectively. Explainability revealed consistent patterns in the most important radiomic features. Conclusions: Our study highlights the potential of novel image analysis techniques to extract quantitative information beyond calcification for the identification of CAD. Though further work is required, the transition of these novel techniques into clinical practice may eventually facilitate better stroke risk stratification.
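A minimal sketch of the evaluation protocol described in the Methods (5-fold cross-validated AUC over a radiomic feature table), with scikit-learn and a generic classifier standing in for the models actually compared; the feature matrix and labels below are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder radiomic feature matrix: 132 arteries x 93 features,
# with binary labels (1 = symptomatic, 0 = asymptomatic).
rng = np.random.default_rng(0)
X = rng.normal(size=(132, 93))
y = rng.integers(0, 2, size=132)

clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC {aucs.mean():.2f} (+/- {aucs.std():.2f})")
```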
ABSTRACT
Medieval paper, a handmade product, is made with a mould which leaves an indelible imprint on the sheet of paper. This imprint includes chain lines, laid lines and watermarks, which are often visible on the sheet. Extracting these features allows the identification of the paper stock and gives information about the chronology, localisation and movement of manuscripts and people. Most computational work on feature extraction for paper analysis has so far focused on radiographs or transmitted light images. While these imaging methods provide clear visualisation of the features of interest, they are expensive and time-consuming to acquire and are not feasible for smaller institutions. However, reflected light images of medieval paper manuscripts are abundant and possibly cheaper to acquire. In this paper, we propose algorithms to detect and extract the laid and chain lines from reflected light images. We tackle the main drawbacks of reflected light images, namely the low contrast attenuation of chain and laid lines and the intensity jumps due to noise and degradation, by employing the spectral total variation decomposition and developing methods for subsequent chain and laid line extraction. Our results clearly demonstrate the feasibility of using reflected light images in paper analysis. This work enables feature extraction for paper manuscripts that have otherwise not been analysed due to a lack of appropriate images. We also open the door for paper stock identification at scale.
ABSTRACT
Medical image segmentation is an important task in medical imaging, as it serves as the first step for clinical diagnosis and treatment planning. While major success has been reported using deep supervised learning techniques, they assume a large and well-representative labeled set. This is a strong assumption in the medical domain, where annotations are expensive, time-consuming and prone to human bias. To address this problem, unsupervised segmentation techniques have been proposed in the literature. Yet, none of the existing unsupervised segmentation techniques reach accuracies that come close to those of state-of-the-art supervised segmentation methods. In this work, we present a novel optimization model framed in a new convolutional neural network (CNN)-based contrastive registration architecture for unsupervised medical image segmentation called CLMorph. The core idea of our approach is to exploit image-level registration and feature-level contrastive learning to perform registration-based segmentation. First, we propose an architecture to capture the image-to-image transformation mapping via registration for unsupervised medical image segmentation. Second, we embed a contrastive learning mechanism in the registration architecture to enhance the discriminative capacity of the network at the feature level. We show that our proposed CLMorph technique mitigates the major drawbacks of existing unsupervised techniques. We demonstrate, through numerical and visual experiments, that our technique substantially outperforms the current state-of-the-art unsupervised segmentation methods on two major medical image datasets.
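To illustrate the feature-level contrastive component, here is a minimal sketch of an InfoNCE-style loss between two sets of feature embeddings (for example, features of a warped moving image and of the fixed image); this is a generic formulation and not the exact loss used in CLMorph.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: matching rows of z1 and z2 are positives, all other rows
    in the batch act as negatives. z1, z2: (batch, dim) feature embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy usage: embeddings of corresponding patches from fixed and warped images.
z_fixed, z_warped = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce(z_fixed, z_warped)
```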