1.
Nat Commun ; 15(1): 5906, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003292

ABSTRACT

As vast histological archives are digitised, there is a pressing need to be able to associate specific tissue substructures and incident pathology with disease outcomes without arduous annotation. Here, we learn self-supervised representations using a Vision Transformer, trained on 1.7 M histology images across 23 healthy tissues in 838 donors from the Genotype Tissue Expression consortium (GTEx). Using these representations, we can automatically segment tissues into their constituent tissue substructures and pathology proportions across thousands of whole slide images, outperforming other self-supervised methods (43% increase in silhouette score). Additionally, we can detect and quantify histological pathologies present, such as arterial calcification (AUROC = 0.93), and identify missing calcification diagnoses. Finally, to link gene expression to tissue morphology, we introduce RNAPath, a set of models trained on 23 tissue types that can predict and spatially localise individual RNA expression levels directly from H&E histology (mean genes significantly regressed = 5156, FDR 1%). We validate RNAPath spatial predictions with matched ground truth immunohistochemistry for several well characterised control genes, recapitulating their known spatial specificity. Together, these results demonstrate how self-supervised machine learning, when applied to vast histological archives, allows researchers to answer questions about tissue pathology, its spatial organisation and the interplay between morphological tissue variability and gene expression.
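The 43% improvement quoted above refers to the silhouette score, a standard clustering-quality metric. As an illustration only (the toy points below are synthetic, not GTEx representations), a minimal NumPy implementation of the per-sample silhouette might look like:

```python
import numpy as np

def silhouette_scores(X, labels):
    """Per-sample silhouette: (b - a) / max(a, b), where a is the mean
    distance to the other points in the same cluster and b is the smallest
    mean distance to the points of any other cluster."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean()
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        scores[i] = (b - a) / max(a, b)
    return scores

# Two tight, well-separated clusters give a mean silhouette close to 1.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
print(round(silhouette_scores(X, labels).mean(), 3))
```

Higher mean silhouette indicates that learned representations place morphologically similar tiles closer together than tiles from other substructures.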


Subject(s)
Supervised Machine Learning , Humans , RNA/genetics , RNA/metabolism , Gene Expression Profiling/methods , Organ Specificity/genetics , Image Processing, Computer-Assisted/methods
2.
bioRxiv ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38826408

ABSTRACT

Magnetic resonance angiography (MRA) performed at ultra-high magnetic field provides a unique opportunity to study the arteries of the living human brain at the mesoscopic level. From this, we can gain new insights into the brain's blood supply and vascular disease affecting small vessels. However, for quantitative characterization and precise representation of human angioarchitecture to, for example, inform blood-flow simulations, detailed segmentations of the smallest vessels are required. Given the success of deep learning-based methods in many segmentation tasks, we here explore their application to high-resolution MRA data, and address the difficulty of obtaining large data sets of correctly and comprehensively labelled data. We introduce VesselBoost, a vessel segmentation package, which utilizes deep learning and imperfect training labels for accurate vasculature segmentation. Combined with an innovative data augmentation technique, which leverages the resemblance of vascular structures, VesselBoost enables detailed vascular segmentations.

3.
J Imaging ; 10(2)2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38392093

ABSTRACT

The outbreak of COVID-19 has shocked the entire world with its fairly rapid spread and has challenged different sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging, such as X-ray and computed tomography (CT), combined with the potential of artificial intelligence (AI), plays an essential role in supporting medical personnel in the diagnosis process. Thus, in this article, five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble, using majority voting, have been used to classify COVID-19, pneumonia and healthy subjects using chest X-ray images. Multilabel classification was performed to predict multiple pathologies for each patient, if present. Firstly, the interpretability of each of the networks was thoroughly studied using local interpretability methods (occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, and DeepLIFT) and a global technique (neuron activation profiles). The mean micro F1 score of the models for COVID-19 classification ranged from 0.66 to 0.875, and was 0.89 for the ensemble of the network models. The qualitative results showed that the ResNets were the most interpretable models. This research demonstrates the importance of using interpretability methods to compare different models before making a decision regarding the best performing model.
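Majority voting over an ensemble of classifiers can be sketched as follows; the binary multilabel predictions here are hypothetical stand-ins for the five networks' thresholded outputs, not results from the paper:

```python
import numpy as np

# Hypothetical binary multilabel predictions, shaped (models, patients, labels).
preds = np.array([
    [[1, 0, 1], [0, 1, 0]],   # model 1
    [[1, 0, 0], [0, 1, 0]],   # model 2
    [[1, 1, 1], [1, 1, 0]],   # model 3
])

# Majority vote per patient and label: a label is predicted positive
# when more than half of the models agree on it.
ensemble = (preds.sum(axis=0) > preds.shape[0] / 2).astype(int)
print(ensemble)  # [[1 0 1]
                 #  [0 1 0]]
```

With multilabel outputs, the vote is taken independently per label, so each patient can still receive several positive predictions.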

4.
Neural Netw ; 166: 704-721, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37604079

ABSTRACT

Computed tomography (CT) and magnetic resonance imaging (MRI) are two widely used clinical imaging modalities for non-invasive diagnosis. However, both of these modalities come with certain problems. CT uses harmful ionising radiation, and MRI suffers from slow acquisition speed. Both problems can be tackled by undersampling, such as sparse sampling. However, such undersampled data leads to lower resolution and introduces artefacts. Several techniques, including deep learning based methods, have been proposed to reconstruct such data. However, the undersampled reconstruction problem for these two modalities was always considered as two different problems and tackled separately by different research works. This paper proposes a unified solution for both sparse CT and undersampled radial MRI reconstruction, achieved by applying Fourier transform-based pre-processing on the radial MRI and then finally reconstructing both modalities using sinogram upsampling combined with filtered back-projection. The Primal-Dual network is a deep learning based method for reconstructing sparsely-sampled CT data. This paper introduces Primal-Dual UNet, which improves the Primal-Dual network in terms of accuracy and reconstruction speed. The proposed method resulted in an average SSIM of 0.932±0.021 while performing sparse CT reconstruction for fan-beam geometry with a sparsity level of 16, achieving a statistically significant improvement over the previous model, which resulted in 0.919±0.016. Furthermore, the proposed model resulted in 0.903±0.019 and 0.957±0.023 average SSIM while reconstructing undersampled brain and abdominal MRI data with an acceleration factor of 16, respectively - statistically significant improvements over the original model, which resulted in 0.867±0.025 and 0.949±0.025. 
Finally, this paper shows that the proposed network not only improves the overall image quality, but also improves the image quality for the regions of interest (liver, kidneys, and spleen), and generalises better than the baselines in the presence of a needle.
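The SSIM figures quoted above follow the metric's standard luminance/contrast/structure formulation. A simplified, whole-image variant (real evaluations typically use a sliding window, e.g. 7x7 or 11x11) can be sketched as:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified, whole-image SSIM (no sliding window), following the
    standard formulation with stabilising constants C1 and C2."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noisy = img + 0.1 * rng.random((32, 32))   # degraded stand-in for a reconstruction
print(round(global_ssim(img, img), 4))     # identical images -> 1.0
print(global_ssim(img, noisy) < 1.0)       # any degradation lowers the score
```

SSIM is bounded above by 1, reached only for identical inputs, which is why values like 0.932 versus 0.919 represent a meaningful reconstruction-quality gap.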


Subject(s)
Magnetic Resonance Imaging , Tomography, X-Ray Computed , Artifacts , Brain/diagnostic imaging
5.
Comput Med Imaging Graph ; 108: 102267, 2023 09.
Article in English | MEDLINE | ID: mdl-37506427

ABSTRACT

Image registration is the process of bringing different images into a common coordinate system - a technique widely used in various applications of computer vision, such as remote sensing, image retrieval, and, most commonly, medical imaging. Deep learning based techniques have been applied successfully to tackle various complex medical image processing problems, including medical image registration. Over the years, several image registration techniques have been proposed using deep learning. Deformable image registration techniques such as Voxelmorph have been successful in capturing finer changes and providing smoother deformations. However, Voxelmorph, as well as ICNet and FIRE, do not explicitly encode global dependencies (i.e. the overall anatomical view of the supplied image) and, therefore, cannot track large deformations. In order to tackle the aforementioned problems, this paper extends the Voxelmorph approach in three different ways. To improve the performance in case of small as well as large deformations, supervision of the model at different resolutions has been integrated using a multi-scale UNet. To support the network to learn and encode the minute structural correlations of the given image pairs, a self-constructing graph network (SCGNet) has been used as the latent of the multi-scale UNet - which can improve the learning process of the model and help the model to generalise better. Finally, to make the deformations inverse-consistent, cycle consistency loss has been employed. On the task of registration of brain MRIs, the proposed method achieved significant improvements over ANTs and VoxelMorph, obtaining a Dice score of 0.8013 ± 0.0243 for intramodal and 0.6211 ± 0.0309 for intermodal registration, while VoxelMorph achieved 0.7747 ± 0.0260 and 0.6071 ± 0.0510, respectively.
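The Dice score used above measures the overlap between two segmentations (here, label agreement after registration). A minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

# Toy masks: a 2x2 square vs. a vertically shifted 2x2 square
# that overlaps it in exactly 2 pixels.
m1 = np.zeros((4, 4), dtype=int); m1[0:2, 0:2] = 1
m2 = np.zeros((4, 4), dtype=int); m2[1:3, 0:2] = 1
print(round(dice(m1, m2), 3))  # 2*2 / (4+4) = 0.5
```

Dice ranges from 0 (no overlap) to 1 (perfect overlap), so the reported 0.8013 vs. 0.7747 intramodal scores reflect a small but consistent alignment gain.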


Subject(s)
Algorithms , Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods
6.
Comput Biol Med ; 154: 106539, 2023 03.
Article in English | MEDLINE | ID: mdl-36689856

ABSTRACT

Model-based reconstruction employing the time separation technique (TST) was found to improve dynamic perfusion imaging of the liver using C-arm cone-beam computed tomography (CBCT). To apply TST using prior knowledge extracted from CT perfusion data, the liver should be accurately segmented from the CT scans. Reconstructions of primary and model-based CBCT data need to be segmented for proper visualisation and interpretation of perfusion maps. This research proposes Turbolift learning, which trains a modified version of the multi-scale Attention UNet on different liver segmentation tasks serially, following the order CT, CBCT, CBCT TST, so that each training acts as a pre-training stage for the subsequent one, addressing the problem of the limited number of datasets available for training. For the final task of liver segmentation from CBCT TST, the proposed method achieved overall Dice scores of 0.874±0.031 and 0.905±0.007 in 6-fold and 4-fold cross-validation experiments, respectively - statistically significant improvements over a model trained only for that task. Experiments revealed that Turbolift not only improves the overall performance of the model but also makes it robust against artefacts originating from the embolisation materials and truncation artefacts. Additionally, in-depth analyses confirmed the chosen order of the segmentation tasks. This paper shows the potential of segmenting the liver from CT, CBCT, and CBCT TST, learning from the available limited training data, which can possibly be used in the future for the visualisation and evaluation of perfusion maps for the treatment evaluation of liver diseases.


Subject(s)
Cone-Beam Computed Tomography , Tomography, X-Ray Computed , Cone-Beam Computed Tomography/methods , Artifacts , Liver/diagnostic imaging , Image Processing, Computer-Assisted/methods
7.
J Imaging ; 8(10)2022 Sep 22.
Article in English | MEDLINE | ID: mdl-36286353

ABSTRACT

Blood vessels of the brain provide the human brain with the required nutrients and oxygen. As a vulnerable part of the cerebral blood supply, pathology of small vessels can cause serious problems such as Cerebral Small Vessel Diseases (CSVD). It has also been shown that CSVD is related to neurodegeneration, such as Alzheimer's disease. With the advancement of 7 Tesla MRI systems, higher spatial image resolution can be achieved, enabling the depiction of very small vessels in the brain. Non-Deep Learning-based approaches for vessel segmentation, e.g., Frangi's vessel enhancement with subsequent thresholding, are capable of segmenting medium to large vessels but often fail to segment small vessels. The sensitivity of these methods to small vessels can be increased by extensive parameter tuning or by manual corrections, albeit making them time-consuming, laborious, and not feasible for larger datasets. This paper proposes a deep learning architecture to automatically segment small vessels in 7 Tesla 3D Time-of-Flight (ToF) Magnetic Resonance Angiography (MRA) data. The algorithm was trained and evaluated on a small, imperfect, semi-automatically segmented dataset of only 11 subjects, using six for training, two for validation, and three for testing. The deep learning model based on U-Net Multi-Scale Supervision was trained using the training subset and was made equivariant to elastic deformations in a self-supervised manner using deformation-aware learning to improve the generalisation performance. The proposed technique was evaluated quantitatively and qualitatively against the test set and achieved a Dice score of 80.44 ± 0.83. Furthermore, the result of the proposed method was compared against a selected manually segmented region (Dice score of 62.07) and showed a considerable improvement (18.98%) with deformation-aware learning.

8.
Comput Biol Med ; 149: 106093, 2022 10.
Article in English | MEDLINE | ID: mdl-36116318

ABSTRACT

Expert interpretation of anatomical images of the human brain is the central part of neuroradiology. Several machine learning-based techniques have been proposed to assist in the analysis process. However, the ML models typically need to be trained to perform a specific task, e.g., brain tumour segmentation or classification. Not only do the corresponding training data require laborious manual annotations, but a wide variety of abnormalities can be present in a human brain MRI - even more than one simultaneously, which renders a representation of all possible anomalies very challenging. Hence, a possible solution is an unsupervised anomaly detection (UAD) system that can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out-of-distribution samples. Such a technique can then be used to detect anomalies - lesions or abnormalities, for example, brain tumours - without explicitly training the model for that specific pathology. Several Variational Autoencoder (VAE) based techniques have been proposed in the past for this task. Even though they perform very well on controlled, artificially simulated anomalies, many of them perform poorly while detecting anomalies in clinical data. This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA), which is more robust on clinical data and shows its applicability in detecting anomalies such as tumours in brain MRIs. The proposed pipeline achieved a Dice score of 0.642 ± 0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859 ± 0.112 while detecting artificially induced anomalies, while the best performing baseline achieved 0.522 ± 0.135 and 0.783 ± 0.111, respectively.
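The core UAD idea - flag the regions that a model trained only on healthy data reconstructs poorly - can be sketched with simulated reconstruction-error maps (the values below are synthetic placeholders, not ceVAE outputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-pixel reconstruction errors: an autoencoder trained on
# healthy anatomy reconstructs healthy tissue well (low error) and
# out-of-distribution tissue poorly (high error).
healthy_err = rng.normal(0.05, 0.01, size=(100, 64, 64)).clip(min=0)
test_err = rng.normal(0.05, 0.01, size=(64, 64)).clip(min=0)
test_err[20:30, 20:30] += 0.5   # injected "lesion" with high error

# Threshold taken from the healthy error distribution (99th percentile);
# test pixels above it are flagged as anomalous.
thr = np.percentile(healthy_err, 99)
anomaly_mask = test_err > thr
print(anomaly_mask.sum())  # the 10x10 lesion plus a few false positives
```

Post-processing (e.g. morphological cleanup) would normally follow to suppress the isolated false-positive pixels that any percentile threshold admits.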


Subject(s)
Brain Neoplasms , Image Processing, Computer-Assisted , Brain/diagnostic imaging , Brain/pathology , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging
9.
Comput Biol Med ; 143: 105321, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35219188

ABSTRACT

MRI is an inherently slow process, which leads to long scan times for high-resolution imaging. The speed of acquisition can be increased by ignoring parts of the data (undersampling). Consequently, this leads to the degradation of image quality, such as loss of resolution or introduction of image artefacts. This work aims to reconstruct highly undersampled Cartesian or radial MR acquisitions with better resolution and with fewer to no artefacts compared to conventional techniques like compressed sensing. In recent times, deep learning has emerged as a very important area of research and has shown immense potential in solving inverse problems, e.g. MR image reconstruction. In this paper, a deep learning based MR image reconstruction framework is proposed, which includes a modified regularised version of ResNet as the network backbone to remove artefacts from the undersampled image, followed by data consistency steps that fuse the network output with the data already available from undersampled k-space in order to further improve reconstruction quality. The performance of this framework for various undersampling patterns has also been tested, and it has been observed that the framework is robust to various sampling patterns, even when mixed together while training, and results in very high quality reconstruction, in terms of high SSIM (highest being 0.990 ± 0.006 for an acceleration factor of 3.5), when compared with the fully sampled reconstruction. It has been shown that the proposed framework can successfully reconstruct even for an acceleration factor of 20 for Cartesian (0.968 ± 0.005) and 17 for radially sampled data (0.962 ± 0.012). Furthermore, it has been shown that the framework preserves brain pathology during reconstruction while being trained on healthy subjects.
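The data consistency step described above re-inserts the measured k-space samples into the spectrum of the network output, keeping network predictions only where no data was acquired. A minimal sketch with synthetic data (the mask and images are illustrative, not the paper's sampling patterns):

```python
import numpy as np

rng = np.random.default_rng(0)
img_net = rng.random((8, 8))             # stand-in for the network's output image
full = np.fft.fft2(rng.random((8, 8)))   # stand-in for the acquired k-space
mask = np.zeros((8, 8), dtype=bool)
mask[::2, :] = True                      # sampled Cartesian k-space lines

# Data consistency: overwrite the network output's spectrum with the
# measured samples wherever they exist.
k_net = np.fft.fft2(img_net)
k_dc = np.where(mask, full, k_net)
img_dc = np.fft.ifft2(k_dc)

# After the step, the sampled locations match the measurement exactly.
print(np.allclose(np.fft.fft2(img_dc)[mask], full[mask]))  # True
```

This hard replacement is one common form of data consistency; soft (weighted) variants trade off the measurement against the network prediction when the data is noisy.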

10.
Sci Rep ; 12(1): 1505, 2022 01 27.
Article in English | MEDLINE | ID: mdl-35087174

ABSTRACT

A brain tumour is a mass or cluster of abnormal cells in the brain, which has the possibility of becoming life-threatening because of its ability to invade neighbouring tissues and also form metastases. An accurate diagnosis is essential for successful treatment planning, and magnetic resonance imaging is the principal imaging modality for diagnosing brain tumours and their extent. Deep Learning methods in computer vision applications have shown significant improvement in recent years, most of which can be credited to the fact that a sizeable amount of data is available to train models, and the improvements in the model architectures yield better approximations in a supervised setting. Classifying tumours using such deep learning methods has made significant progress with the availability of open datasets with reliable annotations. Typically, those methods are either 3D models, which use 3D volumetric MRIs, or 2D models considering each slice separately. However, by treating one spatial dimension separately or by considering the slices as a sequence of images over time, spatiotemporal models can be employed as "spatiospatial" models for this task. These models can learn specific spatial and temporal relationships while reducing computational costs. This paper uses two spatiotemporal models, ResNet (2+1)D and ResNet Mixed Convolution, to classify different types of brain tumours. It was observed that both these models outperformed the pure 3D convolutional model, ResNet18. Furthermore, it was also observed that pre-training the models on a different, even unrelated dataset before training them for the task of tumour classification improves the performance. Finally, the pre-trained ResNet Mixed Convolution was observed to be the best model in these experiments, achieving a macro F1-score of 0.9345 and a test accuracy of 96.98%, while at the same time being the model with the least computational cost.


Subject(s)
Magnetic Resonance Imaging
11.
Magn Reson Med ; 87(2): 646-657, 2022 02.
Article in English | MEDLINE | ID: mdl-34463376

ABSTRACT

PURPOSE: Quantitative assessment of prospective motion correction (PMC) capability at 7T MRI for compliant healthy subjects to improve high-resolution images in the absence of intentional motion. METHODS: Twenty-one healthy subjects were imaged at 7 T. They were asked not to move, to consider only unintentional motion. An in-bore optical tracking system was used to monitor head motion and consequently update the imaging volume. For all subjects, high-resolution T1 (3D-MPRAGE), T2 (2D turbo spin echo), proton density (2D turbo spin echo), and T2* (2D gradient echo) weighted images were acquired with and without PMC. The images were evaluated through subjective and objective analysis. RESULTS: Subjective evaluation overall has shown a statistically significant improvement (5.5%) in terms of image quality with PMC ON. In a separate evaluation of every contrast, three of the four contrasts (T1, T2, and proton density) have shown a statistically significant improvement (9.62%, 9.85%, and 9.26%), whereas the fourth one (T2*) has shown improvement, although not statistically significant. In the evaluation with objective metrics, average edge strength has shown an overall improvement of 6% with PMC ON, which was statistically significant; and gradient entropy has shown an overall improvement of 2%, which did not reach statistical significance. CONCLUSION: Based on subjective assessment, PMC improved image quality in high-resolution images of healthy compliant subjects in the absence of intentional motion for all contrasts except T2*, in which no significant differences were observed. Quantitative metrics showed an overall trend for an improvement with PMC, but not all differences were significant.
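Average edge strength, one of the objective metrics above, is commonly approximated as the mean gradient magnitude of the image; motion blur suppresses edges and lowers it. A rough sketch (the "blur" here is a crude stand-in for motion artefacts, not the paper's exact metric definition):

```python
import numpy as np

def average_edge_strength(img):
    """Mean gradient magnitude: a simple no-reference sharpness score."""
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2).mean()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
# Crude "motion blur": average each pixel with its horizontal neighbour.
blurred = 0.5 * (sharp + np.roll(sharp, 1, axis=1))
print(average_edge_strength(sharp) > average_edge_strength(blurred))  # True
```

Because the score needs no reference image, it can compare PMC ON vs. OFF acquisitions of the same subject directly.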


Subject(s)
Artifacts , Image Processing, Computer-Assisted , Brain/diagnostic imaging , Healthy Volunteers , Humans , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Motion , Prospective Studies
12.
Artif Intell Med ; 121: 102196, 2021 11.
Article in English | MEDLINE | ID: mdl-34763811

ABSTRACT

Dynamic imaging is a beneficial tool for interventions to assess physiological changes. Nonetheless, during dynamic MRI, spatial resolution is compromised in order to achieve a high temporal resolution. To overcome this spatio-temporal trade-off, this research presents a super-resolution (SR) MRI reconstruction with prior knowledge based fine-tuning to maximise spatial information while reducing the required scan time for dynamic MRIs. A U-Net based network with perceptual loss is trained on a benchmark dataset and fine-tuned using one subject-specific static high resolution MRI as prior knowledge to obtain high resolution dynamic images during the inference stage. 3D dynamic data for three subjects were acquired with different parameters to test the generalisation capabilities of the network. The method was tested for different levels of in-plane undersampling for dynamic MRI. The reconstructed dynamic SR results after fine-tuning showed higher similarity with the high resolution ground truth, while quantitatively achieving statistically significant improvement. The average SSIM of the lowest resolution experimented during this research (6.25% of the k-space) before and after fine-tuning were 0.939 ± 0.008 and 0.957 ± 0.006, respectively. This could theoretically result in an acceleration factor of 16, which can potentially be acquired in less than half a second. The proposed approach shows that super-resolution MRI reconstruction with prior information can alleviate the spatio-temporal trade-off in dynamic MRI, even for high acceleration factors.


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
13.
Med Image Anal ; 69: 101950, 2021 04.
Article in English | MEDLINE | ID: mdl-33421920

ABSTRACT

Segmentation of abdominal organs has been a comprehensive, yet unresolved, research field for many years. In the last decade, intensive developments in deep learning (DL) introduced new state-of-the-art segmentation systems. Despite outperforming the overall accuracy of existing systems, the effects of DL model properties and parameters on the performance are hard to interpret. This makes comparative analysis a necessary tool towards interpretable studies and systems. Moreover, the performance of DL for emerging learning approaches such as cross-modality and multi-modal semantic segmentation tasks has been rarely discussed. In order to expand the knowledge on these topics, the CHAOS - Combined (CT-MR) Healthy Abdominal Organ Segmentation challenge was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI), 2019, in Venice, Italy. Abdominal organ segmentation from routine acquisitions plays an important role in several clinical applications, such as pre-surgical planning or morphological and volumetric follow-ups for various diseases. These applications require a certain level of performance on a diverse set of metrics such as maximum symmetric surface distance (MSSD) to determine surgical error-margin or overlap errors for tracking size and shape differences. Previous abdomen related challenges are mainly focused on tumor/lesion detection and/or classification with a single modality. Conversely, CHAOS provides both abdominal CT and MR data from healthy subjects for single and multiple abdominal organ segmentation. Five different but complementary tasks were designed to analyze the capabilities of participating approaches from multiple perspectives. The results were investigated thoroughly, compared with manual annotations and interactive methods. 
The analysis shows that DL models for single modality (CT / MR) segmentation can achieve reliable volumetric analysis performance (DICE: 0.98 ± 0.00 / 0.95 ± 0.01), but the best MSSD performance remains limited (21.89 ± 13.94 / 20.85 ± 10.63 mm). The performance of participating models decreases dramatically for cross-modality tasks (for the liver, DICE: 0.88 ± 0.15, MSSD: 36.33 ± 21.97 mm). Despite contrary examples on different applications, multi-tasking DL models designed to segment all organs are observed to perform worse compared to organ-specific ones (performance drop around 5%). Nevertheless, some of the successful models show better performance with their multi-organ versions. We conclude that the exploration of those pros and cons in both single vs multi-organ and cross-modality segmentations is poised to have an impact on further research for developing effective algorithms that would support real-world clinical applications. Finally, having more than 1500 participants and receiving more than 550 submissions, another important contribution of this study is the analysis of shortcomings of challenge organizations, such as the effects of multiple submissions and the peeking phenomenon.
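MSSD, the surface-distance metric quoted above, can be sketched with SciPy distance transforms. The toy masks below are 2D; the challenge used 3D volumes with anisotropic voxel spacing passed through the same `sampling` argument:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def mssd(a, b, spacing=1.0):
    """Maximum symmetric surface distance between two binary masks: the
    largest distance from any boundary voxel of one mask to the nearest
    boundary voxel of the other, taken over both directions."""
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    d_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    return max(d_to_b[surf_a].max(), d_to_a[surf_b].max())

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[2:8, 2:6] = True   # right edge 2 voxels short
print(mssd(a, b))  # 2.0
```

Unlike DICE, which averages over the whole volume, MSSD is a worst-case boundary error, which is why it is the metric used to bound surgical error margins.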


Subject(s)
Algorithms , Tomography, X-Ray Computed , Abdomen/diagnostic imaging , Humans , Liver
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 2769-2772, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946467

ABSTRACT

Dynamic MRI is a technique of acquiring a series of images continuously to follow physiological changes over time. However, such fast imaging results in low resolution images. In this work, an abdominal deformation model computed from dynamic low resolution images has been applied to a previously acquired high resolution image to generate dynamic high resolution MRI. Dynamic low resolution images were simulated into different breathing phases (inhale and exhale). Then, the image registration between breathing time points was performed using the B-spline SyN deformable model with cross-correlation as a similarity metric. The deformation model between different breathing phases was estimated from highly undersampled data. This deformation model was then applied to the high resolution images to obtain high resolution images of the different breathing phases. The results indicated that the deformation model could be computed from very low resolution images.
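Applying an estimated deformation to a high-resolution image amounts to resampling it along a displacement field. A toy sketch with SciPy (a uniform one-pixel downward shift stands in for a real B-spline/SyN field; real fields vary smoothly across the image):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# High-resolution "anatomy": a small bright square.
hi = np.zeros((8, 8)); hi[2:4, 2:4] = 1.0

# Displacement field (rows, columns): move everything down by one pixel.
dy = np.ones_like(hi)
dx = np.zeros_like(hi)

# Backward warping: each output pixel samples the input at (y - dy, x - dx).
yy, xx = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
warped = map_coordinates(hi, [yy - dy, xx - dx], order=1, mode="nearest")
print(warped[3:5, 2:4])  # the square has moved down by one row
```

In the paper's setting, the field would be estimated between breathing phases on low resolution data and then applied, as here, to the previously acquired high resolution image.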


Subject(s)
Magnetic Resonance Imaging , Abdomen , Algorithms , Respiration