Results 1 - 4 of 4

1.
Alzheimers Dement; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38940656

ABSTRACT

BACKGROUND: This study investigated the potential of the phosphorylated plasma Tau217 ratio (pTau217R) and plasma amyloid beta (Aβ) 42/Aβ40 in predicting brain amyloid levels measured by positron emission tomography (PET) Centiloid (CL) for Alzheimer's disease (AD) staging and screening.
METHODS: Quantification of plasma pTau217R and Aβ42/Aβ40 employed immunoprecipitation-mass spectrometry. CL prediction models were developed on a cohort of 904 cognitively unimpaired, preclinical, and early AD subjects and validated on two independent cohorts.
RESULTS: Models integrating pTau217R outperformed Aβ42/Aβ40 alone, predicting amyloid levels up to 89.1 CL. High area under the receiver operating characteristic curve (AUROC) values (89.3% to 94.7%) were observed across a broad CL range (15 to 90). Utilizing pTau217R-based models for low amyloid levels reduced PET scans by 70.5% to 78.6%.
DISCUSSION: pTau217R effectively predicts brain amyloid levels, surpassing cerebrospinal fluid Aβ42/Aβ40's range. Combining it with plasma Aβ42/Aβ40 enhances sensitivity for low amyloid detection, reducing unnecessary PET scans and expanding clinical utility.
HIGHLIGHTS:
- The phosphorylated plasma Tau217 ratio (pTau217R) effectively predicts amyloid-PET Centiloid (CL) across a broad spectrum.
- Integrating pTau217R with Aβ42/Aβ40 extends the CL prediction upper limit to 89.1 CL.
- The combined model predicts amyloid status with high accuracy, especially in cognitively unimpaired individuals.
- The model identifies subjects above or below various CL thresholds with high accuracy.
- pTau217R-based models significantly reduce PET scans, by up to 78.6%, when screening out individuals with no/low amyloid.
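The abstract above describes CL prediction models evaluated by AUROC but does not specify the modeling details. As a rough illustration only, the sketch below fits a simple logistic classifier combining the two plasma markers to flag subjects above a chosen Centiloid threshold; the column names (pTau217R, Abeta42_40, centiloid), the threshold, and the model choice are assumptions, not the study's actual pipeline.

```python
# Illustrative sketch only; not the study's actual modeling pipeline.
# Assumes a DataFrame with hypothetical columns: pTau217R, Abeta42_40, centiloid.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fit_amyloid_classifier(df: pd.DataFrame, cl_threshold: float = 15.0):
    """Fit a plasma-biomarker model classifying PET Centiloid above/below a threshold."""
    X = df[["pTau217R", "Abeta42_40"]].to_numpy()       # plasma biomarker features
    y = (df["centiloid"] >= cl_threshold).astype(int)   # amyloid-positive label
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0
    )
    model = LogisticRegression().fit(X_train, y_train)
    auroc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return model, auroc
```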

2.
J Pathol Inform; 13: 7, 2022.
Article in English | MEDLINE | ID: mdl-35136674

ABSTRACT

BACKGROUND: Training convolutional neural networks using pathology whole slide images (WSIs) is traditionally prefaced by the extraction of a training dataset of image patches. While effective, for large datasets of WSIs this dataset preparation is inefficient.
METHODS: We created a custom pipeline (histo-fetch) to efficiently extract random patches and labels from pathology WSIs for input to a neural network on-the-fly. We prefetch these patches as needed during network training, avoiding the need for WSI preparation such as chopping/tiling.
RESULTS & CONCLUSIONS: We demonstrate the utility of this pipeline to perform artificial stain transfer and image generation using the popular networks CycleGAN and ProGAN, respectively. For a large WSI dataset, histo-fetch starts training 98.6% faster and uses 7535x less disk space.
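The histo-fetch implementation itself is not reproduced here, but the core idea of streaming random WSI patches during training instead of pre-tiling them to disk can be sketched with openslide-python and a PyTorch IterableDataset. The class and parameter names below are illustrative assumptions, not the pipeline's actual API.

```python
# Minimal sketch of on-the-fly WSI patch sampling in the spirit of histo-fetch.
# Assumes openslide-python and PyTorch are installed and slide_paths are valid WSIs.
import random
import numpy as np
import openslide
import torch
from torch.utils.data import IterableDataset

class RandomPatchStream(IterableDataset):
    """Streams random level-0 patches from whole slide images; no pre-tiling on disk."""
    def __init__(self, slide_paths, patch_size=256):
        self.slide_paths = slide_paths
        self.patch_size = patch_size

    def __iter__(self):
        while True:  # yield patches indefinitely; the training loop decides when to stop
            path = random.choice(self.slide_paths)
            slide = openslide.OpenSlide(path)
            w, h = slide.dimensions  # level-0 width and height
            x = random.randint(0, w - self.patch_size)
            y = random.randint(0, h - self.patch_size)
            region = slide.read_region((x, y), 0, (self.patch_size, self.patch_size))
            patch = np.asarray(region.convert("RGB"), dtype=np.float32) / 255.0
            slide.close()
            yield torch.from_numpy(patch).permute(2, 0, 1)  # CHW tensor for the network

# Usage sketch: torch.utils.data.DataLoader(RandomPatchStream(paths), batch_size=16)
```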

3.
Article in English | MEDLINE | ID: mdl-34366541

ABSTRACT

With the rapid advancement in multiplex tissue staining, computer hardware, and machine learning, computational tools are becoming indispensable for the evaluation of digital histopathology. Historically, standard histochemical staining methods such as hematoxylin and eosin, periodic acid-Schiff (PAS), and trichrome have been the gold standard for microscopic tissue evaluation by pathologists, and therefore brightfield microscopy images derived from such stains are primarily used for developing computational pathology tools. However, these histochemical stains are nonspecific in terms of highlighting structures and cell types. In contrast, immunohistochemical stains use antibodies to specifically detect and quantify proteins, which can be used to specifically highlight structures and cell types of interest. Traditionally, such immunofluorescence-based methods are only able to simultaneously stain a limited number of target proteins/antigens, typically up to three channels. Fluorescence-based multiplex immunohistochemistry (mIHC) is a newer technology that enables simultaneous localization and quantification of numerous proteins/antigens, allowing for the detection of a wide range of histologic structures and cell types within tissue. However, this method is limited by cost, specialized equipment, technical expertise, and time. In this study, we implemented a deep learning-based pipeline to synthetically generate in silico mIHC images from brightfield images of tissue slides stained with routinely used histochemical stains, in particular PAS. Our tool was trained using fluorescence-based mIHC images as the ground truth. The proposed pipeline offers high-contrast detection of structures in brightfield-imaged tissue sections stained with standard histochemical stains. We demonstrate the performance of our pipeline by computationally detecting multiple compartments in kidney biopsies, including cell nuclei, collagen/fibrosis, distal tubules, proximal tubules, endothelial cells, and leukocytes, from PAS-stained tissue sections. Our work can be extended to other histologic structures and tissue types and can serve as a basis for future automated annotation of histologic structures and cell types without the added cost of actually generating mIHC slides.
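The abstract does not detail the network architecture used. As a hedged sketch of the general idea, a supervised image-to-image model can be trained to map a 3-channel PAS brightfield patch to multiple mIHC marker channels, using registered fluorescence mIHC patches as targets; the toy encoder-decoder, the L1 loss, and the n_targets parameter below are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch of supervised PAS-to-mIHC translation; not the paper's architecture.
import torch
import torch.nn as nn

class PasToMihc(nn.Module):
    """Toy encoder-decoder mapping a 3-channel PAS patch to n_targets mIHC channels."""
    def __init__(self, n_targets: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_targets, 1), nn.Sigmoid(),  # per-channel marker probability maps
        )

    def forward(self, x):
        return self.net(x)

def training_step(model, optimizer, pas_batch, mihc_batch):
    # Supervised step: registered fluorescence mIHC acts as the ground-truth target.
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(pas_batch), mihc_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```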

4.
Article in English | MEDLINE | ID: mdl-32362707

ABSTRACT

Generative adversarial networks (GANs) have received immense attention in machine learning for their ability to learn high-dimensional, real-world data distributions. These methods do not rely on assumptions about the distribution of the input data and can generate realistic samples from a latent vector space in an unsupervised manner. In the medical field, and particularly in digital pathology, expert annotation and large training datasets are costly to obtain, and the study of disease manifestations is based on visual examination of stained slides. In clinical practice, information from multiple stains is needed to improve pathological diagnosis, but when the sampled tissue to be examined is limited, the pathologist's final diagnosis is based on only a few stain styles. These limitations can be addressed by studying the usability and reliability of generative models in digital pathology. To assess this usability, we synthesize, in an unsupervised manner, high-resolution renal microanatomical structures such as the glomerulus in thin-tissue histology images using state-of-the-art architectures, namely the Deep Convolutional Generative Adversarial Network (DCGAN) and the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN). Successful generation of such structures would provide a large set of labeled data for developing supervised algorithms for disease classification and for understanding disease progression. Our study suggests that while GANs can attain the image quality of formalin-fixed, paraffin-embedded tissue, they require additional prior knowledge as input to model intrinsic microanatomical details such as the capillary wall, the urinary pole, and nuclei placement, which motivates semi-supervised architectures that use these details as prior information. Generative models can also be used to create an artificial staining effect without physically altering the histopathological slide. To demonstrate this, we use a CycleGAN network to transform hematoxylin and eosin (H&E) stain to periodic acid-Schiff (PAS) stain and Jones methenamine silver (JMS) stain to PAS stain. In this way, GANs can be employed to translate between renal pathology stain styles when the relevant staining information is not available in clinical settings.
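For readers unfamiliar with the architectures named above, the sketch below shows a minimal DCGAN-style generator that maps a latent vector to a small synthetic histology patch. It is an illustrative assumption of the approach only, not the study's actual DCGAN, ESRGAN, or CycleGAN configuration.

```python
# Illustrative DCGAN-style generator sketch; not the study's trained models.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector z to a 64x64 RGB tissue-like image."""
    def __init__(self, z_dim: int = 100, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0), nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1), nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1), nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1), nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1), nn.Tanh(),  # RGB output in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# Usage sketch: sample a batch of synthetic 64x64 patches from random latent vectors.
fake = Generator()(torch.randn(4, 100))  # -> tensor of shape (4, 3, 64, 64)
```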
