ABSTRACT
Understanding the molecular and physical complexity of the tissue microenvironment (TiME) in the context of its spatiotemporal organization has remained an enduring challenge. Recent advances in engineering and data science now promise the ability to study the structure, functions, and dynamics of the TiME in unprecedented detail; however, many of these advances occur in silos and rarely integrate information to study the TiME as a whole. This review provides an integrative overview of the engineering principles underlying chemical, optical, electrical, mechanical, and computational approaches to probe, sense, model, and fabricate the TiME. In individual sections, we first summarize the underlying principles, capabilities, and scope of each emerging technology, the breakthrough discoveries it has enabled, and recent promising innovations. We then offer perspectives on the potential of these advances to answer critical questions about the TiME and its role in various disease and developmental processes. Finally, we present an integrative view of the major scientific and educational aspects of studying the TiME.
ABSTRACT
Interpretability is highly desirable for deep neural network-based classifiers, especially when they inform high-stakes decisions in medical imaging. Commonly used post-hoc interpretability methods have the limitation that they can produce plausible but differing interpretations of a given model, leading to ambiguity about which one to choose. To address this problem, a novel decision-theory-inspired approach is investigated for establishing a self-interpretable model, given a pre-trained deep binary black-box medical image classifier. The approach employs a self-interpretable encoder-decoder model in conjunction with a single-layer fully connected network with unity weights. The model is trained to estimate the test statistic of the given black-box deep binary classifier so that a similar accuracy is maintained. The decoder output, referred to as an equivalency map, is a transformed version of the to-be-classified image that, when processed by the fixed fully connected layer, produces the same test statistic value as the original classifier. The equivalency map provides a visualization of the transformed image features that directly contribute to the test statistic value and, moreover, permits quantification of their relative contributions. Unlike traditional post-hoc interpretability methods, the proposed method is self-interpretable and quantitative. Detailed quantitative and qualitative analyses have been performed on three different medical image binary classification tasks.
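As a hedged illustration of the architecture described above, the sketch below shows how an encoder-decoder producing an equivalency map can be paired with a fixed, unity-weight fully connected layer and trained to match a black-box classifier's test statistic. This is a minimal sketch in PyTorch; the toy encoder-decoder, the class and function names, and the mean-squared-error surrogate loss are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming a PyTorch setting; architecture details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EquivalencyMapModel(nn.Module):
    def __init__(self, channels: int = 1, height: int = 64, width: int = 64):
        super().__init__()
        # Toy encoder-decoder; the actual architecture in the paper may differ.
        self.encoder_decoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )
        # Single fully connected layer with unity weights, kept fixed:
        # it simply sums the pixels of the equivalency map.
        self.fc = nn.Linear(channels * height * width, 1, bias=False)
        nn.init.ones_(self.fc.weight)
        self.fc.weight.requires_grad_(False)

    def forward(self, x: torch.Tensor):
        eq_map = self.encoder_decoder(x)             # equivalency map
        test_statistic = self.fc(eq_map.flatten(1))  # one scalar per image
        return test_statistic.squeeze(1), eq_map

def train_step(model, black_box, images, optimizer):
    """One step of matching the black-box classifier's test statistic."""
    with torch.no_grad():
        # black_box is assumed to return one scalar test statistic per image, shape (N,).
        target = black_box(images)
    pred, _ = model(images)
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the downstream layer's weights are all ones and frozen, each pixel of the equivalency map contributes additively to the test statistic, which is what makes per-feature contributions directly readable from the map.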
Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Image Interpretation, Computer-Assisted/methods; Deep Learning
ABSTRACT
Treatment of blood smears with Wright's stain is one of the most helpful tools for detecting white blood cell abnormalities. However, to diagnose leukocyte disorders, a clinical pathologist must perform a tedious, manual process of locating and identifying individual cells. Furthermore, the staining procedure requires considerable preparation time and clinical infrastructure, which is incompatible with point-of-care diagnosis. Thus, rapid and automated evaluation of unlabeled blood smears is highly desirable. In this study, we used color spatial light interference microscopy (cSLIM), a highly sensitive quantitative phase imaging (QPI) technique, coupled with deep learning tools, to localize, classify, and segment white blood cells (WBCs) in blood smears. The concept of combining label-free QPI data with AI to extract cellular specificity was recently introduced in the context of fluorescence imaging as phase imaging with computational specificity (PICS). We employed AI models to first translate SLIM images into brightfield micrographs and then ran parallel tasks of locating and labeling cells using EfficientNet, an object detection model. Next, WBC binary masks were created using U-Net, a convolutional neural network that performs precise segmentation. After training on digitally stained brightfield images of blood smears with WBCs, we achieved a mean average precision of 75% for localizing and classifying neutrophils, eosinophils, lymphocytes, and monocytes, and an average pixel-wise majority-voting F1 score of 80% for determining the cell class from semantic segmentation maps. Therefore, PICS renders and analyzes synthetically stained blood smears rapidly, at a reduced sample-preparation cost, providing quantitative clinical information.
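The overall pipeline can be summarized, at a high level, by the sketch below. It assumes three already-trained models are available as simple callables; the names slim_to_brightfield, detector, and segmenter, the detection output format, and the background label of 0 are placeholders, not identifiers or conventions from the study.

```python
# High-level sketch of the described pipeline, under the stated assumptions.
import numpy as np

def analyze_blood_smear(slim_image, slim_to_brightfield, detector, segmenter):
    """SLIM -> synthetic brightfield -> detect WBCs -> segment -> majority vote."""
    # 1. Translate the label-free SLIM image into a synthetic brightfield image.
    brightfield = slim_to_brightfield(slim_image)

    # 2. Localize and classify WBCs; each detection is assumed to look like
    #    {"box": (x0, y0, x1, y1), "label": "neutrophil"} with integer pixel coords.
    detections = detector(brightfield)

    # 3. Semantic segmentation map of the whole field of view (integer class per pixel,
    #    0 assumed to be background).
    seg_map = segmenter(brightfield)

    # 4. Assign each detected cell the majority-voted class of its segmented pixels.
    results = []
    for det in detections:
        x0, y0, x1, y1 = det["box"]
        crop = seg_map[y0:y1, x0:x1]
        labels, counts = np.unique(crop[crop > 0], return_counts=True)
        voted = int(labels[counts.argmax()]) if counts.size else 0
        results.append({"box": det["box"],
                        "detector_label": det["label"],
                        "segmentation_vote": voted})
    return results
```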
Subjects
Leukocytes; Neural Networks, Computer; Microscopy; Lymphocytes; Monocytes
ABSTRACT
An overview of the applications of deep learning to ophthalmic diagnosis using retinal fundus images is presented. We describe various retinal image datasets that can be used for deep learning purposes. Applications of deep learning to segmentation of the optic disk, optic cup, and blood vessels, as well as to detection of lesions, are reviewed. Recent deep learning models for classification of diseases such as age-related macular degeneration, glaucoma, and diabetic retinopathy are also discussed. Critical insights and future research directions are given.
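A common recipe in this literature is transfer learning: fine-tuning an ImageNet-pretrained convolutional network on labeled fundus photographs. The snippet below is a minimal sketch of that recipe using torchvision (version 0.13 or later assumed); the choice of ResNet-50 and the five-class head, loosely mirroring diabetic retinopathy grades 0-4, are assumptions rather than details from any particular reviewed paper.

```python
# Minimal transfer-learning sketch for fundus image classification; assumptions noted above.
import torch.nn as nn
from torchvision import models

def build_fundus_classifier(num_classes: int = 5) -> nn.Module:
    """ImageNet-pretrained ResNet-50 backbone with a new classification head."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone
```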
Subjects
Deep Learning; Diagnostic Techniques, Ophthalmological; Eye Diseases/diagnostic imaging; Eye Diseases/diagnosis; Eye/diagnostic imaging; Fundus Oculi; Image Interpretation, Computer-Assisted/methods; Ophthalmology/methods; Humans
ABSTRACT
Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them. However, the black-box nature of these algorithms has restricted their clinical use. Recent explainability studies aim to show which features most influence a model's decision. The majority of literature reviews in this area have focused on taxonomy, ethics, and the need for explanations. A review of the current applications of explainable deep learning across different medical imaging tasks is presented here. The various approaches, challenges for clinical deployment, and areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users.
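One widely used family of post-hoc explanation methods is the attribution (saliency) map, which highlights the input pixels that most influence a prediction. The snippet below sketches the simplest variant, a vanilla gradient saliency map, for an arbitrary trained classifier; the model and input are placeholders, and this is an illustrative sketch rather than any specific method from the reviewed literature.

```python
# Minimal sketch of a vanilla gradient saliency map; `model` is any trained
# image classifier returning class scores of shape (N, num_classes).
import torch

def gradient_saliency(model: torch.nn.Module,
                      image: torch.Tensor,
                      target_class: int) -> torch.Tensor:
    """Return |d score_target / d input|, collapsed over channels."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)  # shape (1, C, H, W)
    score = model(image)[0, target_class]                # scalar class score
    score.backward()                                     # gradients w.r.t. the input
    return image.grad.abs().amax(dim=1)                  # per-pixel relevance, (1, H, W)
```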