Results 1 - 3 of 3
1.
Nat Commun; 15(1): 5906, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003292

ABSTRACT

As vast histological archives are digitised, there is a pressing need to associate specific tissue substructures and incident pathology with disease outcomes without arduous annotation. Here, we learn self-supervised representations using a Vision Transformer trained on 1.7 million histology images across 23 healthy tissues in 838 donors from the Genotype Tissue Expression consortium (GTEx). Using these representations, we automatically segment tissues into their constituent substructures and pathology proportions across thousands of whole-slide images, outperforming other self-supervised methods (43% increase in silhouette score). Additionally, we detect and quantify histological pathologies, such as arterial calcification (AUROC = 0.93), and identify missing calcification diagnoses. Finally, to link gene expression to tissue morphology, we introduce RNAPath, a set of models trained on 23 tissue types that predict and spatially localise individual RNA expression levels directly from H&E histology (mean genes significantly regressed = 5156, FDR 1%). We validate RNAPath spatial predictions against matched ground-truth immunohistochemistry for several well-characterised control genes, recapitulating their known spatial specificity. Together, these results demonstrate how self-supervised machine learning, applied to vast histological archives, allows researchers to answer questions about tissue pathology, its spatial organisation and the interplay between morphological tissue variability and gene expression.
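
A minimal sketch of the clustering-quality comparison described above (not the authors' pipeline): it assumes tile-level embeddings from a self-supervised Vision Transformer are already available as an array, partitions them with k-means, and scores the partition with the silhouette coefficient reported in the abstract. The embedding array and the number of substructures are placeholders.

    # Minimal sketch, not the published code: score a k-means partition of
    # self-supervised tile embeddings with the silhouette coefficient.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # Placeholder for an (n_tiles, n_features) embedding matrix; synthetic here.
    embeddings = rng.normal(size=(1000, 384))

    n_substructures = 5  # hypothetical number of tissue substructures
    labels = KMeans(n_clusters=n_substructures, n_init=10,
                    random_state=0).fit_predict(embeddings)

    print(f"silhouette score: {silhouette_score(embeddings, labels):.3f}")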


Subjects
Supervised Machine Learning; Humans; RNA/genetics; RNA/metabolism; Gene Expression Profiling/methods; Organ Specificity/genetics; Image Processing, Computer-Assisted/methods
2.
bioRxiv; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38826408

ABSTRACT

Magnetic resonance angiography (MRA) performed at ultra-high magnetic field provides a unique opportunity to study the arteries of the living human brain at the mesoscopic level. From this, we can gain new insights into the brain's blood supply and into vascular diseases affecting small vessels. However, quantitative characterization and precise representation of human angioarchitecture, for example to inform blood-flow simulations, require detailed segmentations of the smallest vessels. Given the success of deep learning-based methods in many segmentation tasks, we explore their application to high-resolution MRA data and address the difficulty of obtaining large sets of correctly and comprehensively labelled data. We introduce VesselBoost, a vessel segmentation package that utilizes deep learning and imperfect training labels for accurate vasculature segmentation. Combined with an innovative data augmentation technique that leverages the resemblance of vascular structures, VesselBoost enables detailed vascular segmentations.
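
One way to read the abstract's augmentation idea, sketched under an assumption rather than taken from VesselBoost: because vessels of different calibres resemble each other, randomly rescaling image/label patch pairs lets larger vessels stand in for smaller ones during training. The patch, the label, and the zoom range below are invented for illustration.

    # Hedged sketch of scale augmentation for 3D MRA patches (assumed
    # interpretation, not the VesselBoost implementation).
    import numpy as np
    from scipy.ndimage import zoom

    def scale_augment(patch, label, rng):
        # Hypothetical helper: trilinear interpolation for the image,
        # nearest-neighbour for the (imperfect) training labels.
        factor = rng.uniform(0.7, 1.4)
        return zoom(patch, factor, order=1), zoom(label, factor, order=0)

    rng = np.random.default_rng(0)
    patch = rng.normal(size=(64, 64, 64)).astype(np.float32)  # synthetic MRA patch
    label = (patch > 1.5).astype(np.uint8)                    # synthetic, imperfect label
    aug_patch, aug_label = scale_augment(patch, label, rng)
    print(aug_patch.shape, aug_label.shape)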

3.
J Imaging; 10(2), 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38392093

ABSTRACT

The outbreak of COVID-19 shocked the entire world with its rapid spread and challenged many sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging, such as X-ray and computed tomography (CT), combined with the potential of artificial intelligence (AI), plays an essential role in supporting medical personnel in the diagnostic process. In this article, five deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble, using majority voting, were used to classify COVID-19, pneumonia and healthy subjects from chest X-ray images. Multilabel classification was performed to predict multiple pathologies for each patient, if present. The interpretability of each network was thoroughly studied using local interpretability methods (occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, and DeepLIFT) and a global technique (neuron activation profiles). The mean micro F1 score of the individual models for COVID-19 classification ranged from 0.66 to 0.875, and reached 0.89 for the ensemble. The qualitative results showed that the ResNets were the most interpretable models. This research demonstrates the importance of using interpretability methods to compare different models before deciding on the best-performing model.
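
A minimal sketch of the majority-voting ensemble and the micro F1 metric mentioned above (not the paper's code): each model's multilabel output is treated as a binary (n_images, n_classes) matrix, a label is kept when more than half of the models predict it, and the result is scored with micro-averaged F1. All predictions below are synthetic stand-ins.

    # Sketch of majority voting over multilabel predictions, scored with micro F1.
    import numpy as np
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    n_images, n_classes, n_models = 100, 3, 5   # classes: COVID-19, pneumonia, healthy
    y_true = rng.integers(0, 2, size=(n_images, n_classes))

    # Stand-ins for ResNet18, ResNet34, InceptionV3, InceptionResNetV2, DenseNet161.
    model_preds = [rng.integers(0, 2, size=(n_images, n_classes)) for _ in range(n_models)]

    # Keep a label when more than half of the models predict it.
    ensemble_pred = (np.sum(model_preds, axis=0) > n_models / 2).astype(int)

    print(f"micro F1: {f1_score(y_true, ensemble_pred, average='micro'):.3f}")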
