ABSTRACT
Cryogenic electron tomography (cryo-ET) visualizes the 3D spatial distribution of macromolecules at nanometer resolution inside native cells. However, automated identification of macromolecules inside cellular tomograms is challenged by noise and reconstruction artifacts, as well as by the presence of many molecular species in crowded volumes. Here, we present DeepFinder, a computational procedure that uses artificial neural networks to simultaneously localize multiple classes of macromolecules. Once trained, the inference stage of DeepFinder is faster than template matching and performs better than competing deep learning methods at identifying macromolecules of various sizes in both synthetic and experimental datasets. On cellular cryo-ET data, DeepFinder localized membrane-bound and cytosolic ribosomes (roughly 3.2 MDa), ribulose 1,5-bisphosphate carboxylase-oxygenase (roughly 560 kDa soluble complex) and photosystem II (roughly 550 kDa membrane complex) with an accuracy comparable to expert-supervised ground truth annotations. DeepFinder is therefore a promising algorithm for the semiautomated analysis of a wide range of molecular targets in cellular tomograms.
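For illustration, a minimal Python sketch of the segmentation-then-localization idea described above, assuming a trained network has already produced a multi-class voxel score map (the paper describes a dedicated clustering step; the connected-component centroids below are a simplified stand-in, not DeepFinder's code):

# Illustrative sketch: convert a multi-class voxel score map of shape
# (n_classes, z, y, x), with class 0 = background, into per-class
# particle coordinates.
import numpy as np
from scipy import ndimage

def extract_particles(scores, min_voxels=20):
    labels = scores.argmax(axis=0)             # voxel-wise class decision
    particles = {}
    for c in range(1, scores.shape[0]):        # skip background class 0
        comp, n = ndimage.label(labels == c)   # connected components
        coords = []
        for i in range(1, n + 1):
            voxels = np.argwhere(comp == i)
            if len(voxels) >= min_voxels:      # reject tiny noise blobs
                coords.append(voxels.mean(axis=0))  # centroid = particle position
        particles[c] = np.array(coords)
    return particles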
Subjects
Algorithms; Cryoelectron Microscopy/methods; Deep Learning; Electron Microscope Tomography/methods; Image Processing, Computer-Assisted/methods; Macromolecular Substances/chemistry; Neural Networks, Computer; Chlamydomonas reinhardtii/metabolism; Photosystem II Protein Complex/chemistry; Ribosomes/chemistry; Ribulose-Bisphosphate Carboxylase/chemistry
ABSTRACT
Recent progress in computational pathology has been driven by deep learning. While code and data availability are essential to reproduce findings from preceding publications, ensuring a deep learning model's reusability is more challenging. To this end, the codebase should be well documented and easy to integrate into existing workflows, and models should be robust to noise and generalizable to data from different sources. Strikingly, only a few computational pathology algorithms have been reused by other researchers so far, let alone employed in a clinical setting. To assess the current state of reproducibility and reusability of computational pathology algorithms, we evaluated peer-reviewed articles available in PubMed, published between January 2019 and March 2021, in 5 use cases: stain normalization; tissue type segmentation; evaluation of cell-level features; genetic alteration prediction; and inference of grading, staging, and prognostic information. We compiled criteria for data and code availability and statistical result analysis and assessed them in 160 publications. We found that only one-quarter (41 of 160 publications) made code publicly available. Among these 41 studies, three-quarters (30 of 41) analyzed their results statistically, half (20 of 41) released their trained model weights, and about 40% (16 of 41) used an independent cohort for evaluation. Our review is intended for both pathologists interested in deep learning and researchers applying algorithms to computational pathology challenges. We provide a detailed overview of publications with published code in the field, list reusable data handling tools, and provide criteria for reproducibility and reusability.
Subjects
Deep Learning; Humans; Reproducibility of Results; Algorithms; Pathologists
ABSTRACT
BACKGROUND: In optical coherence tomography (OCT) scans of patients with inherited retinal diseases (IRDs), the measurement of the thickness of the outer nuclear layer (ONL) has been well established as a surrogate marker for photoreceptor preservation. Current automatic segmentation tools fail in OCT segmentation in IRDs, and manual segmentation is time-consuming. METHODS AND MATERIALS: Patients with an IRD and an available OCT scan were screened for the present study. Additionally, OCT scans of patients without retinal disease were included to provide training data for artificial intelligence (AI). We trained a U-net-based model on scans from healthy patients and applied a domain adaptation technique to the IRD patients' scans. RESULTS: We established an AI-based image segmentation algorithm that reliably segments the ONL in OCT scans of IRD patients. On a test dataset, the Dice score of the algorithm was 98.7%. Furthermore, we generated thickness maps of the full retina and the ONL for each patient. CONCLUSION: Accurate segmentation of anatomical layers on OCT scans plays a crucial role in predictive models linking retinal structure to visual function. Our algorithm for segmentation of OCT images could provide the basis for further studies on IRDs.
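The Dice score quoted in the RESULTS compares the predicted and manual ONL masks; a minimal generic sketch of the metric (not the authors' evaluation code):

# Dice score between a predicted and a reference binary mask
# (1.0 = perfect agreement, 0.0 = no overlap).
import numpy as np

def dice(pred, ref):
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0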
ABSTRACT
Amyloid-β (Aβ) is thought to play an essential pathogenic role in Alzheimer's disease (AD). A key enzyme involved in the generation of Aβ is the β-secretase BACE, for which powerful inhibitors have been developed and are currently in use in human clinical trials. However, although BACE inhibition can reduce cerebral Aβ levels, whether it also can ameliorate neural circuit and memory impairments remains unclear. Using histochemistry, in vivo Ca2+ imaging, and behavioral analyses in a mouse model of AD, we demonstrate that along with reducing prefibrillar Aβ surrounding plaques, the inhibition of BACE activity can rescue neuronal hyperactivity, impaired long-range circuit function, and memory defects. The functional neuronal impairments reappeared after infusion of soluble Aβ, mechanistically linking Aβ pathology to neuronal and cognitive dysfunction. These data highlight the potential benefits of BACE inhibition for the effective treatment of a wide range of AD-like pathophysiological and cognitive impairments.
Subjects
Alzheimer Disease/drug therapy; Amyloid Precursor Protein Secretases/antagonists & inhibitors; Amyloid beta-Peptides/metabolism; Neurons/metabolism; Protease Inhibitors/pharmacology; Alzheimer Disease/genetics; Alzheimer Disease/metabolism; Amyloid Precursor Protein Secretases/genetics; Amyloid Precursor Protein Secretases/metabolism; Amyloid beta-Peptides/genetics; Animals; Disease Models, Animal; Humans; Mice; Mice, Transgenic; Neurons/pathology
ABSTRACT
Cell death processes such as apoptosis and ferroptosis play essential roles in development, homeostasis, and the pathogenesis of acute and chronic diseases. The increasing number of studies investigating cell death types in various diseases, particularly cancer and degenerative diseases, has raised hopes for their modulation in disease therapies. However, identifying the presence of a particular cell death type is not an obvious task, as it requires computationally intensive work and costly experimental assays. To address this challenge, we present CellDeathPred, a novel deep-learning framework that uses high-content imaging based on cell painting to distinguish cells undergoing ferroptosis or apoptosis from healthy cells. In particular, we incorporate a deep neural network that effectively embeds microscopic images into a representative and discriminative latent space, classifies the learned embeddings into cell death modalities, and optimizes the entire model with a supervised contrastive loss function. We assessed the efficacy of the proposed framework on cell painting microscopy data sets from human HT-1080 cells, in which multiple inducers of ferroptosis and apoptosis were used to trigger cell death. Our model confidently separates ferroptotic and apoptotic cells from healthy controls, with an average accuracy of 95% on non-confocal data sets, supporting the capacity of the CellDeathPred framework for cell death discovery.
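A hedged sketch of the supervised contrastive objective mentioned above, following the generic formulation of Khosla et al. (2020) rather than the CellDeathPred code; `emb` (image embeddings) and `labels` (cell-death class ids) are assumed inputs:

# Supervised contrastive loss: pulls embeddings of the same class
# together and pushes different classes apart in the latent space.
import torch
import torch.nn.functional as F

def supcon_loss(emb, labels, tau=0.1):
    emb = F.normalize(emb, dim=1)                 # work on the unit sphere
    sim = emb @ emb.T / tau                       # temperature-scaled similarities
    n = emb.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=emb.device)
    sim.masked_fill_(self_mask, float('-inf'))    # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # average log-probability over each anchor's positives
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()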
ABSTRACT
Segmenting the fine structure of the mouse brain on magnetic resonance (MR) images is critical for delineating morphological regions, analyzing brain function, and understanding their relationships. Compared to a single MRI modality, multimodal MRI data provide complementary tissue features that can be exploited by deep learning models, resulting in better segmentation results. However, multimodal mouse brain MRI data are often lacking, making automatic segmentation of fine mouse brain structures a challenging task. To address this issue, it is necessary to fuse multimodal MRI data to produce distinguishable contrasts in different brain structures. Hence, we propose a novel disentangled and contrastive GAN-based framework, named MouseGAN++, to synthesize multiple MR modalities from a single one in a structure-preserving manner, thus improving segmentation performance by imputing missing modalities and fusing multiple modalities. Our results demonstrate that our method outperforms state-of-the-art methods in translation performance. Using the subsequently learned modality-invariant information as well as the modality-translated images, MouseGAN++ can segment fine brain structures with average Dice coefficients of 90.0% (T2w) and 87.9% (T1w), achieving around a 10% performance improvement compared to state-of-the-art algorithms. These results show that MouseGAN++, as a simultaneous image synthesis and segmentation method, can be used to fuse cross-modality information in an unpaired manner and yields more robust performance in the absence of multimodal data. We release our method as a mouse brain structural segmentation tool for free academic usage at https://github.com/yu02019.
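A schematic of the impute-then-fuse idea described above, with hypothetical trained components `t1_to_t2` (a modality translator) and `seg_net` (a segmentation network); this is not the MouseGAN++ code:

# Impute a missing modality, then fuse both for segmentation.
import torch

@torch.no_grad()
def segment_with_imputation(t1, t1_to_t2, seg_net):
    t2_hat = t1_to_t2(t1)                      # impute the missing T2w modality
    fused = torch.cat([t1, t2_hat], dim=1)     # channel-wise multimodal fusion
    return seg_net(fused).argmax(dim=1)        # voxel-wise structure labels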
Subjects
Algorithms; Magnetic Resonance Imaging; Animals; Mice; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Neuroimaging
ABSTRACT
Deep learning (DL) can accelerate the prediction of prognostic biomarkers from routine pathology slides in colorectal cancer (CRC). However, current approaches rely on convolutional neural networks (CNNs) and have mostly been validated on small patient cohorts. Here, we develop a new transformer-based pipeline for end-to-end biomarker prediction from pathology slides by combining a pre-trained transformer encoder with a transformer network for patch aggregation. Our transformer-based approach substantially improves performance, generalizability, data efficiency, and interpretability compared with current state-of-the-art algorithms. After training and evaluating on a large multicenter cohort of over 13,000 patients from 16 colorectal cancer cohorts, we achieve a sensitivity of 0.99 with a negative predictive value of over 0.99 for the prediction of microsatellite instability (MSI) on surgical resection specimens. We demonstrate that resection specimen-only training reaches clinical-grade performance on endoscopic biopsy tissue, solving a long-standing diagnostic problem.
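A minimal sketch of transformer-based patch aggregation in the spirit of the pipeline described above; the dimensions, layer counts, and class-token design are assumptions, not the authors' exact architecture:

# Aggregate pre-computed patch embeddings into a slide-level prediction
# with self-attention and a learnable class token.
import torch
import torch.nn as nn

class SlideTransformer(nn.Module):
    def __init__(self, dim=768, heads=8, layers=2, n_classes=2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, patch_emb):                  # (B, n_patches, dim)
        cls = self.cls.expand(patch_emb.size(0), -1, -1)
        x = torch.cat([cls, patch_emb], dim=1)     # prepend class token
        x = self.encoder(x)                        # patches attend to each other
        return self.head(x[:, 0])                  # slide logits from class token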
Subjects
Algorithms; Colorectal Neoplasms; Humans; Biomarkers; Biopsy; Microsatellite Instability; Colorectal Neoplasms/genetics
ABSTRACT
Annotating microscopy images for nuclei segmentation by medical experts is laborious and time-consuming. To leverage the few existing annotations, also across multiple modalities, we propose a novel microscopy-style augmentation technique based on a generative adversarial network (GAN). Unlike other style transfer methods, it can deal not only with different cell assay types and lighting conditions, but also with different imaging modalities, such as bright-field and fluorescence microscopy. Using disentangled representations for content and style, we can preserve the structure of the original image while altering its style during augmentation. We evaluate our data augmentation on the 2018 Data Science Bowl dataset, which consists of various cell assays, lighting conditions, and imaging modalities. With our style augmentation, the segmentation accuracy of the two top-ranked Mask R-CNN-based nuclei segmentation algorithms in the competition increases significantly. Thus, our augmentation technique renders the downstream task more robust to test data heterogeneity and helps counteract class imbalance without resampling of minority classes.
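A schematic of the content/style-swapping augmentation, assuming already-trained networks `content_enc`, `style_enc`, and `decoder` (hypothetical names for the trained GAN components; not the authors' code):

# Keep the nucleus layout of the source image, transfer the appearance
# (stain/modality) of a reference image.
import torch

@torch.no_grad()
def style_augment(src_img, ref_img, content_enc, style_enc, decoder):
    c = content_enc(src_img)     # structure to preserve (nuclei layout)
    s = style_enc(ref_img)       # appearance to transfer (stain/modality)
    return decoder(c, s)         # same content rendered in the new style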
ABSTRACT
Accurate brain tissue extraction on magnetic resonance imaging (MRI) data is crucial for analyzing brain structure and function. While several conventional tools have been optimized to handle human brain data, there have been no generalizable methods to extract brain tissues from multimodal MRI data of rodents, nonhuman primates, and humans. Therefore, developing a flexible and generalizable method for extracting whole brain tissue across species would allow researchers to analyze and compare experimental results more efficiently. Here, we propose a domain-adaptive and semi-supervised deep neural network, named the Brain Extraction Net (BEN), to extract brain tissues across species, MRI modalities, and MR scanners. We have evaluated BEN on 18 independent datasets, including 783 rodent MRI scans, 246 nonhuman primate MRI scans, and 4601 human MRI scans, covering five species, four modalities, and six MR scanners with various magnetic field strengths. The superiority of BEN over conventional toolboxes is illustrated by its robustness, accuracy, and generalizability. Our proposed method not only provides a generalized solution for extracting brain tissue across species but also significantly improves the accuracy of atlas registration, thereby benefiting downstream processing tasks. As a novel fully automated deep-learning method, BEN is designed as open-source software to enable high-throughput processing of neuroimaging data across species in preclinical and clinical applications.
Magnetic resonance imaging (MRI) is an ideal way to obtain high-resolution images of the whole brain of rodents and primates (including humans) non-invasively. A critical step in processing MRI data is brain tissue extraction, which consists of removing the signal of non-neural tissues around the brain, such as the skull or fat, from the images. If this step is done incorrectly, it can lead to images with signals that do not correspond to the brain, which can compromise downstream analyses and lead to errors when comparing samples from similar species. Although several traditional toolboxes to perform brain extraction are available, most of them focus on human brains, and no standardized methods are available for other mammals, such as rodents and monkeys. To bridge this gap, Yu et al. developed a computational method based on deep learning (a type of machine learning that imitates how humans learn certain types of information) named the Brain Extraction Net (BEN). BEN can extract brain tissues across species, MRI modalities, and scanners to provide a generalizable toolbox for neuroimaging using MRI. Next, Yu et al. demonstrated BEN's functionality in a large-scale experiment involving brain tissue extraction in eighteen different MRI datasets from different species. In these experiments, BEN was shown to improve the robustness and accuracy of processing brain magnetic resonance imaging data. Brain tissue extraction is essential for MRI-based neuroimaging studies, so BEN can benefit both the neuroimaging and the neuroscience communities. Importantly, the tool is open-source software, allowing other researchers to use it freely. Additionally, it is an extensible tool that allows users to provide their own data and pre-trained networks to further improve BEN's generalization. Yu et al. have also designed interfaces to support other popular neuroimaging processing pipelines and to directly handle external datasets, enabling scientists to use it to extract brain tissue in their own experiments.
Subjects
Brain; Magnetic Resonance Imaging; Animals; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Head; Neuroimaging/methods; Primates; Image Processing, Computer-Assisted/methods
ABSTRACT
BACKGROUND AND OBJECTIVE: Cryo-electron tomography (cryo-ET) is an imaging technique that enables 3D visualization of the native cellular environment at sub-nanometer resolution, providing unprecedented insights into the molecular organization of cells. However, cryo-electron tomograms suffer from low signal-to-noise ratios and anisotropic resolution, which makes subsequent image analysis challenging. In particular, the efficient detection of membrane-embedded proteins is a problem still lacking satisfactory solutions. METHODS: We present MemBrain, a new deep learning-aided pipeline that automatically detects membrane-bound protein complexes in cryo-electron tomograms. After subvolumes are sampled along a segmented membrane, each subvolume is assigned a score using a convolutional neural network (CNN), and protein positions are extracted by a clustering algorithm. Incorporating rotational subvolume normalization and using a tiny receptive field simplify the task of protein detection and thus facilitate the network training. RESULTS: MemBrain requires only a small quantity of training labels and achieves excellent performance with only a single annotated membrane (F1 score: 0.88). A detailed evaluation shows that our fully trained pipeline outperforms existing classical computer vision-based and CNN-based approaches by a large margin (F1 score: 0.92 vs. max. 0.63). Furthermore, in addition to protein center positions, MemBrain can determine protein orientations, which has not been implemented by any existing CNN-based method to date. We also show that a pre-trained MemBrain program generalizes to tomograms acquired using different cryo-ET methods and depicting different types of cells. CONCLUSIONS: MemBrain is a powerful and annotation-efficient tool for the detection of membrane protein complexes in cryo-ET data, with the potential to be used in a wide range of biological studies. It is generalizable to various kinds of tomograms, making it possible to use pre-trained models for different tasks. Its efficiency in terms of required annotations also allows rapid training and fine-tuning of models. The corresponding code, pre-trained models, and instructions for operating the MemBrain program can be found at: https://github.com/CellArchLab/MemBrain.
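A sketch of the final localization step described above: cluster high-scoring membrane positions into protein-center estimates. The threshold, bandwidth, and the choice of mean shift are illustrative assumptions, not necessarily MemBrain's exact clustering:

# positions: (N, 3) membrane-sampled coordinates; scores: (N,) CNN scores.
import numpy as np
from sklearn.cluster import MeanShift

def protein_centers(positions, scores, thresh=0.7, bandwidth=8.0):
    keep = positions[scores > thresh]          # candidate hits only
    ms = MeanShift(bandwidth=bandwidth).fit(keep)
    return ms.cluster_centers_                 # one center per detected complex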
Subjects
Deep Learning; Cryoelectron Microscopy/methods; Electron Microscope Tomography/methods; Electrons; Image Processing, Computer-Assisted/methods; Membrane Proteins
ABSTRACT
Objective. To develop an artificial intelligence method for predicting lymph node metastasis (LNM) in patients with colorectal cancer (CRC). Impact Statement. A novel interpretable multimodal AI-based method to predict LNM for CRC patients by integrating information from pathological images and serum tumor-specific biomarkers. Introduction. Preoperative diagnosis of LNM is essential in treatment planning for CRC patients. Existing radiology imaging and genomic test approaches are either unreliable or too costly. Methods. A total of 1338 patients were recruited: 1128 patients from one centre formed the discovery cohort, and 210 patients from two other centres formed the external validation cohort. We developed a Multimodal Multiple Instance Learning (MMIL) model to learn latent features from pathological images and then jointly integrated the clinical biomarker features for predicting LNM status. Heatmaps of the obtained MMIL model were generated for model interpretation. Results. The MMIL model outperformed preoperative radiology-imaging diagnosis and yielded high areas under the curve (AUCs) of 0.926, 0.878, 0.809, and 0.857 for patients with stage T1, T2, T3, and T4 CRC on the discovery cohort. On the external cohort, it obtained AUCs of 0.855, 0.832, 0.691, and 0.792 (T1-T4), indicating its prediction accuracy and potential adaptability across multiple centres. Conclusion. The MMIL model showed potential for the early diagnosis of LNM from pathological images and tumor-specific biomarkers, which are easily accessible in different institutions. We revealed the histomorphologic features that determine the LNM prediction, indicating the model's ability to learn informative latent features.
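A schematic of the multimodal fusion idea: attention-based pooling of patch features concatenated with serum biomarker features before the LNM classifier. Dimensions and the simple (ungated) attention are assumptions, not the authors' exact MMIL architecture:

# Attention-MIL pooling over patch features, fused with clinical markers.
import torch
import torch.nn as nn

class MultimodalMIL(nn.Module):
    def __init__(self, feat_dim=512, clin_dim=5, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.clf = nn.Linear(feat_dim + clin_dim, 2)

    def forward(self, patches, clinical):       # (n_patches, feat), (clin_dim,)
        w = torch.softmax(self.attn(patches), dim=0)   # attention per patch
        slide = (w * patches).sum(dim=0)               # weighted slide feature
        return self.clf(torch.cat([slide, clinical]))  # fused LNM logits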
ABSTRACT
The respective value of clinical data and CT examinations in predicting COVID-19 progression is unclear, because the CT scans and clinical data previously used were not synchronized in time. To address this issue, we collected data from 119 COVID-19 patients, comprising 341 longitudinal CT scans and paired clinical data, and we developed an AI system for the prediction of COVID-19 deterioration. By combining features extracted from CT scans and clinical data, our system predicts whether a patient will develop severe symptoms during hospitalization. Complementary to clinical data, CT examinations show significant added value for the prediction of COVID-19 progression in the early stage of the disease, especially on days 6 to 8 after symptom onset, indicating that this is the ideal time window for the introduction of CT examinations. We release our AI system to provide clinicians with additional assistance to optimize CT usage in the clinical workflow.
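A minimal sketch of the fusion idea with hypothetical feature arrays; the actual system's feature extraction and model are more elaborate:

# Concatenate CT-derived and clinical features, then fit a classifier
# predicting later deterioration. ct_feats: (n_patients, n_ct),
# clin_feats: (n_patients, n_clin), severe: (n_patients,) binary outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fit_progression_model(ct_feats, clin_feats, severe):
    X = np.hstack([ct_feats, clin_feats])      # early feature fusion
    model = LogisticRegression(max_iter=1000)
    auc = cross_val_score(model, X, severe, scoring='roc_auc', cv=5)
    return model.fit(X, severe), auc.mean()    # fitted model and CV AUC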
ABSTRACT
The segmentation of high-grade gliomas (HGG) using magnetic resonance imaging (MRI) data is clinically meaningful in neurosurgical practice, but a challenging task. Currently, most segmentation methods rely on supervised learning with labeled training sets. Although these methods work well in most cases, they typically require time-consuming manual labeling and pre-trained models. In this work, we propose a fully automatic, unsupervised segmentation toolbox based on clustering and morphological processing, named AUCseg. With our toolbox, the whole tumor is first extracted by clustering on T2-FLAIR images. Then, based on the mask acquired with whole-tumor segmentation, the enhancing tumor is segmented on the post-contrast T1-weighted images (T1-CE) using clustering methods. Finally, the necrotic regions are segmented by morphological processing or clustering on T2-weighted images. Compared with K-means, Mini-batch K-means, and Fuzzy C-Means (FCM), Gaussian Mixture Model (GMM) clustering performs best in our toolbox. We performed a multi-faceted evaluation of our toolbox on the BraTS2018 dataset and demonstrated that the whole tumor, tumor core, and enhancing tumor can be automatically segmented using default hyper-parameters, with Dice scores of 0.8209, 0.7087, and 0.7254, respectively. The computing time of our toolbox for each case is around 22 seconds, which is at least 3 times faster than other state-of-the-art unsupervised methods. In addition, our toolbox has an option to perform semi-automatic segmentation via manually set hyper-parameters, which can improve segmentation performance. Our toolbox, AUCseg, is publicly available on GitHub (https://github.com/Haifengtao/AUCseg).
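A sketch of the first AUCseg step under stated assumptions (three GMM components, tumor taken as the hyperintense component on FLAIR); not the toolbox's exact code:

# GMM clustering of brain-voxel intensities on T2-FLAIR; `flair` is a
# 3D array, `brain_mask` a boolean mask of brain voxels.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy import ndimage

def whole_tumor_mask(flair, brain_mask, n_components=3):
    vals = flair[brain_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(vals)
    labels = gmm.predict(vals)
    tumor_comp = np.argmax(gmm.means_.ravel())   # tumor is hyperintense on FLAIR
    mask = np.zeros_like(brain_mask)
    mask[brain_mask] = labels == tumor_comp
    return ndimage.binary_opening(mask)          # morphological cleanup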
ABSTRACT
Although the effects of ageing on cardiovascular control and particularly the response to orthostatic stress have been the subject of many studies, the interaction between the cardiovascular and cerebral regulation mechanisms is still not fully understood. Wavelet cross-correlation is used here to assess the coupling and synchronization between low-frequency oscillations (LFOs) observed in cerebral hemodynamics, as measured using cerebral blood flow velocity (CBFV) and cerebral oxygenation (O2Hb), and systemic cardiovascular dynamics, as measured using heart rate (HR) and arterial blood pressure (ABP), in both old and young healthy subjects undergoing head-up tilt table testing. Statistically significant increases in correlation values are found in the interaction of cerebral and cardiovascular LFOs for young subjects (P<0.01 for HR-ABP, P<0.001 for HR-O2Hb and ABP-O2Hb), but not in old subjects under orthostatic stress. The coupling between the cerebrovascular and wider cardiovascular systems in response to orthostatic stress thus appears to be impaired with ageing.
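A generic sketch of wavelet cross-correlation between two signals in the LFO band (roughly 0.04-0.15 Hz); the wavelet choice, band limits, and amplitude-correlation measure are assumptions, not the authors' exact analysis:

# Correlate continuous-wavelet amplitudes of two signals (e.g., ABP and
# O2Hb) at each LFO-band scale, then average over the band.
import numpy as np
import pywt

def lfo_wavelet_xcorr(x, y, fs, fmin=0.04, fmax=0.15):
    wav = 'cmor1.5-1.0'                            # complex Morlet wavelet
    freqs = np.linspace(fmin, fmax, 20)
    scales = pywt.central_frequency(wav) * fs / freqs
    cx, _ = pywt.cwt(x, scales, wav)
    cy, _ = pywt.cwt(y, scales, wav)
    r = [np.corrcoef(np.abs(a), np.abs(b))[0, 1] for a, b in zip(cx, cy)]
    return float(np.mean(r))                       # band-averaged correlation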
Subjects
Aging/physiology; Cerebrovascular Circulation; Posture/physiology; Adult; Aged; Cerebral Cortex/metabolism; Hemoglobins/metabolism; Humans; Middle Aged; Oxygen/metabolism
ABSTRACT
In recently published clinical trial results, hypoxia-modified therapies have been shown to provide better outcomes for cancer patients compared with standard cancer treatments. The development and validation of these hypoxia-modified therapies depend on an effective way of measuring tumor hypoxia, but a standardized measurement is currently unavailable in clinical practice. Different types of manual measurements have been proposed in clinical research, but in this paper we focus on a recently published approach that quantifies the number and proportion of hypoxic regions using high-resolution (immuno-)fluorescence (IF) and hematoxylin and eosin (HE) stained images of a histological specimen of a tumor. We introduce new machine learning-based methodologies to automate this measurement, where the main challenge is the fact that the clinical annotations available for training consist of the total number of normoxic, chronically hypoxic, and acutely hypoxic regions, without any indication of their location in the image. Therefore, this represents a weakly-supervised structured output classification problem, where training is based on a high-order loss function formed by the norm of the difference between the manual and estimated annotations mentioned above. We propose four methodologies to solve this problem: 1) a naive method that uses a majority classifier applied to the nodes of a fixed grid placed over the input images; 2) a baseline method based on a structured output learning formulation that relies on a fixed grid placed over the input images; 3) an extension of this baseline based on a latent structured output learning formulation that uses a graph that is flexible in terms of the number and positions of nodes; and 4) a pixel-wise labeling approach based on a fully convolutional neural network. Using a dataset of 89 weakly annotated pairs of IF and HE images from eight tumors, we show that the quantitative results of methods (3) and (4) are equally competitive and superior to the naive (1) and baseline (2) methods. All proposed methodologies show high correlation values with respect to the clinical annotations.
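A strongly simplified sketch of the high-order counting loss described above, where each grid node stands in for one candidate region and soft counts are obtained by summing class probabilities; the paper's structured-output formulations are considerably richer:

# node_probs: (n_nodes, 3) softmax over {normoxic, chronic, acute};
# manual_counts: (3,) float tensor of annotated region counts per type.
import torch

def count_loss(node_probs, manual_counts):
    soft_counts = node_probs.sum(dim=0)        # differentiable region counts
    return torch.norm(soft_counts - manual_counts)   # norm of count mismatch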
Subjects
Tumor Hypoxia; Humans; Microscopy; Neural Networks, Computer; Supervised Machine Learning
ABSTRACT
Quantitative analysis of bioimaging data is often skewed by both shading in space and background variation in time. We introduce BaSiC, an image correction method based on low-rank and sparse decomposition that solves both issues. In comparison to existing shading correction tools, BaSiC achieves high accuracy with significantly fewer input images, works for diverse imaging conditions, and is robust against artefacts. Moreover, it can correct temporal drift in time-lapse microscopy data and thus improve continuous single-cell quantification. BaSiC requires no manual parameter setting and is available as a Fiji/ImageJ plugin.
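A crude stand-in, not the BaSiC algorithm: retrospective correction with a smooth multiplicative flatfield estimated from the stack (BaSiC instead estimates flatfield and darkfield via low-rank and sparse decomposition):

# Estimate a smooth flatfield from an image stack and divide it out.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_shading(stack):                    # stack: (n_images, h, w)
    flatfield = gaussian_filter(np.median(stack, axis=0), sigma=30)
    flatfield /= flatfield.mean()              # normalize to mean 1
    return stack / flatfield                   # per-image multiplicative correction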
Subjects
Microscopy/methods; Software; Algorithms; Artifacts; Image Processing, Computer-Assisted/methods; Time-Lapse Imaging
ABSTRACT
Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize the structural properties of stained tissue samples and produce undesirable color distortions. Three physical phenomena define the tissue structure: stain concentrations cannot be negative, tissue samples are stained with only a few stains, and most tissue regions are characterized by at most one effective stain. We model these phenomena by first decomposing images, in an unsupervised manner, into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with the stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure as described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
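A hedged sketch of the decomposition-and-recombination idea using plain non-negative matrix factorization; sklearn's NMF enforces non-negativity but not the sparsity prior of the paper:

# Convert RGB to optical density (Beer-Lambert), factor it into a
# 2-stain color basis and per-pixel concentrations, then recombine the
# source concentrations with a target image's basis to normalize color.
import numpy as np
from sklearn.decomposition import NMF

def stain_decompose(rgb, n_stains=2):
    od = -np.log((rgb.reshape(-1, 3).astype(float) + 1) / 256)  # optical density
    nmf = NMF(n_components=n_stains, init='random', random_state=0)
    conc = nmf.fit_transform(od)          # per-pixel stain density maps
    basis = nmf.components_               # stain color vectors in OD space
    return conc, basis

def normalize_to_target(conc_src, basis_tgt, shape):
    od = conc_src @ basis_tgt             # source structure, target colors
    rgb = 256 * np.exp(-od) - 1           # back from OD to intensity
    return np.clip(rgb, 0, 255).reshape(shape).astype(np.uint8)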
Subjects
Coloring Agents/chemistry; Color; Microscopy; Software; Staining and Labeling
ABSTRACT
Many microscopic imaging modalities suffer from intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artifacts. A typical example is the unwanted seam that appears when stitching images to obtain a whole slide image (WSI). Elimination of shading plays an essential role in subsequent image processing such as segmentation, registration, or tracking. In this paper, we propose two new retrospective shading correction algorithms for WSI, targeted at two common forms of WSI: multiple image tiles before mosaicking and an already-stitched image. Both methods leverage recent advances in matrix rank minimization and sparse signal recovery. We show how the classic shading problem in microscopy can be reformulated as a decomposition into low-rank and sparse components, which seeks an optimal separation of the foreground objects of interest from the background illumination field. Additionally, a sparse constraint is introduced in the Fourier domain to ensure the smoothness of the recovered background. Extensive qualitative and quantitative validation on both synthetic and real microscopy images demonstrates the superior performance of the proposed methods in shading removal in comparison with a well-established method in ImageJ.
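A minimal sketch of the underlying low-rank + sparse decomposition (a simplified alternating-thresholding robust PCA, without the paper's Fourier-domain smoothness constraint):

# D: (n_tiles, n_pixels) matrix of vectorized image tiles. Returns L,
# the low-rank background illumination field, and S, the sparse
# foreground residual.
import numpy as np

def shrink(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0)   # soft thresholding

def lowrank_sparse(D, lam=None, n_iter=50):
    lam = lam or 1.0 / np.sqrt(max(D.shape))
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * shrink(sig, 1.0)) @ Vt        # singular-value thresholding
        S = shrink(D - L, lam)                 # sparse residual (foreground)
    return L, S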