1.
Nat Commun ; 15(1): 1684, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38396004

ABSTRACT

Traditional histochemical staining of post-mortem samples often suffers from inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, and chemical staining procedures covering large tissue areas demand substantial labor, cost and time. Here, we demonstrate virtual staining of autopsy tissue using a trained neural network to rapidly transform autofluorescence images of label-free autopsy tissue sections into brightfield-equivalent images, matching hematoxylin and eosin (H&E)-stained versions of the same samples. The trained model can effectively accentuate nuclear, cytoplasmic and extracellular features in new autopsy tissue samples with severe autolysis, such as previously unseen COVID-19 samples, where traditional histochemical staining fails to provide consistent staining quality. This virtual autopsy staining technique provides a rapid and resource-efficient way to generate artifact-free H&E stains despite severe autolysis and cell death, while also reducing the labor, cost and infrastructure requirements associated with standard histochemical staining.
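As a minimal sketch of what the inference step of such a virtual-staining pipeline looks like, assuming a trained image-to-image generator exported as a TorchScript module (the file name, tile shape, and channel counts below are illustrative, not from the paper):

    import torch

    # Hypothetical trained generator exported as TorchScript: maps a
    # single-channel autofluorescence tile to an H&E-like RGB tile.
    model = torch.jit.load("virtual_he_generator.pt")   # illustrative file name
    model.eval()

    def virtually_stain(autofluorescence_tile):
        """autofluorescence_tile: (1, 1, H, W) tensor, intensity-normalized."""
        with torch.no_grad():
            rgb = model(autofluorescence_tile)          # (1, 3, H, W)
        return rgb.clamp(0.0, 1.0)                      # valid display range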


Subjects
Neural Networks, Computer; Hematoxylin; Eosine Yellowish-(YS); Staining and Labeling
2.
BME Front ; 2022: 9786242, 2022.
Article in English | MEDLINE | ID: mdl-37850170

ABSTRACT

The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing that typically takes a histotechnologist one day to perform in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by a quantitative analysis in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as those of their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality, in terms of nuclear detail, membrane clearness, and absence of staining artifacts, with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other biomarkers to accelerate IHC tissue staining in life-science and biomedical workflows.
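A conditional GAN of this kind is commonly trained with a pix2pix-style objective combining an adversarial term with a per-pixel fidelity term; the sketch below uses an assumed loss weighting and a generic conditional discriminator interface, not the authors' exact training recipe:

    import torch
    import torch.nn.functional as F

    def generator_loss(discriminator, autofluorescence, fake_ihc, real_ihc,
                       l1_weight=100.0):
        """Pix2pix-style generator objective (assumed weighting, for illustration)."""
        # The conditional discriminator sees the label-free input paired with the output.
        pred_fake = discriminator(torch.cat([autofluorescence, fake_ihc], dim=1))
        adv = F.binary_cross_entropy_with_logits(
            pred_fake, torch.ones_like(pred_fake))   # try to fool the discriminator
        pix = F.l1_loss(fake_ihc, real_ihc)           # stay close to the chemical stain
        return adv + l1_weight * pix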

3.
Magn Reson Med ; 87(2): 984-998, 2022 02.
Article in English | MEDLINE | ID: mdl-34611937

ABSTRACT

PURPOSE: To automate the segmentation of the peripheral arteries and veins in the lower extremities based on ferumoxytol-enhanced MR angiography (FE-MRA). METHODS: Our automated pipeline has 2 sequential stages. In the first stage, we used a 3D U-Net with local attention gates, trained with a combination of the Focal Tversky loss and region mutual loss under a deep supervision mechanism, to segment the vasculature from the high-resolution FE-MRA datasets. In the second stage, we used time-resolved images to separate the arteries from the veins. Because the ultimate segmentation quality of the arteries and veins relies on the performance of the first stage, we thoroughly evaluated the different aspects of the segmentation network and compared its performance in blood vessel segmentation with currently accepted state-of-the-art networks, including Volumetric-Net, DeepVesselNet-FCN, and Uception. RESULTS: We achieved a competitive F1 = 0.8087 and recall = 0.8410 for blood vessel segmentation, compared with F1 = (0.7604, 0.7573, 0.7651) and recall = (0.7791, 0.7570, 0.7774) obtained with Volumetric-Net, DeepVesselNet-FCN, and Uception, respectively. For the artery and vein separation stage, we achieved F1 = (0.8274/0.7863) (arteries/veins) in the calf region, the most challenging region in peripheral artery and vein segmentation. CONCLUSION: Our pipeline is capable of fully automatic vessel segmentation based on FE-MRA, without the need for human interaction, in <4 min. This improves upon manual segmentation by radiologists, which routinely takes several hours.
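The Focal Tversky loss named in METHODS has a standard closed form; a minimal sketch under commonly used default weights (alpha, beta and gamma are tunable, and the convention for the focal exponent varies between implementations):

    import torch

    def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3,
                           gamma=0.75, eps=1e-7):
        """pred: voxelwise vessel probabilities; target: binary vessel mask."""
        tp = (pred * target).sum()
        fn = ((1.0 - pred) * target).sum()   # missed vessel voxels, weighted by alpha
        fp = (pred * (1.0 - target)).sum()   # false vessel voxels, weighted by beta
        tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
        # The focal exponent shifts the gradient toward hard, low-overlap cases.
        return (1.0 - tversky) ** gamma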


Subjects
Ferrosoferric Oxide; Magnetic Resonance Imaging; Angiography; Arteries/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Veins/diagnostic imaging
4.
Sci Total Environ ; 802: 149628, 2022 Jan 01.
Article in English | MEDLINE | ID: mdl-34454157

ABSTRACT

Globally, maize (Zea mays, a C4 plant) and alfalfa (Medicago sativa, a C3 plant) are common and economically important crops. Predicting the response of their water use efficiency (WUE) to changing hydrologic and climatic conditions is vital in helping farmers adapt to a changing climate. In this study, we assessed the effective leaf area index (eLAI, the leaf area most involved in CO2 and H2O exchange) and canopy-scale stomatal conductance in maize and alfalfa fields. We used a theoretically based C3-C4 photosynthesis model (C3C4PM) together with carbon and water vapour fluxes measured by eddy covariance towers at our study sites. We found that at our study sites the eLAI was in the range of 25-32% of the observed total LAI in these crops, and WUEs were in the range of 8-9 mmol/mol. C3C4PM can be used to predict the responses of stomatal conductance and eLAI in C3 and C4 agricultural crops to elevated CO2 concentrations and to changes in precipitation and temperature under future climate scenarios.
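Canopy-scale WUE in the units reported here (mmol CO2 per mol H2O) follows directly from the ratio of the eddy covariance CO2 and H2O fluxes; a sketch with illustrative mid-day flux values, not measurements from the study sites:

    # Canopy-scale water use efficiency from eddy covariance fluxes.
    # Illustrative mid-day values, not measurements from the study sites.
    co2_flux = 30.0   # net CO2 uptake, umol CO2 m^-2 s^-1
    h2o_flux = 3.5    # evapotranspiration, mmol H2O m^-2 s^-1

    # umol/mmol reduces to mmol CO2 per mol H2O, the unit quoted above.
    wue = co2_flux / h2o_flux
    print(f"WUE = {wue:.1f} mmol CO2 / mol H2O")   # ~8.6, inside the 8-9 range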


Subjects
Carbon Dioxide; Photosynthesis; Crops, Agricultural; Plant Leaves; Zea mays
5.
Light Sci Appl ; 10(1): 233, 2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34795202

ABSTRACT

An invasive biopsy followed by histological staining is the benchmark for pathological diagnosis of skin tumors. The process is cumbersome and time-consuming, often leading to unnecessary biopsies and scars. Emerging noninvasive optical technologies such as reflectance confocal microscopy (RCM) can provide label-free, cellular-level resolution, in vivo images of skin without performing a biopsy. Although RCM is a useful diagnostic tool, it requires specialized training because the acquired images are grayscale, lack nuclear features, and are difficult to correlate with tissue pathology. Here, we present a deep learning-based framework that uses a convolutional neural network to rapidly transform in vivo RCM images of unstained skin into virtually-stained hematoxylin and eosin-like images with microscopic resolution, enabling visualization of the epidermis, dermal-epidermal junction, and superficial dermis layers. The network was trained under an adversarial learning scheme, which takes ex vivo RCM images of excised unstained/label-free tissue as inputs and uses the microscopic images of the same tissue labeled with acetic acid nuclear contrast staining as the ground truth. We show that this trained neural network can be used to rapidly perform virtual histology of in vivo, label-free RCM images of normal skin structure, basal cell carcinoma, and melanocytic nevi with pigmented melanocytes, demonstrating similar histological features to traditional histology from the same excised tissue. This application of deep learning-based virtual staining to noninvasive imaging technologies may permit more rapid diagnoses of malignant skin neoplasms and reduce invasive skin biopsies.

6.
Nat Commun ; 12(1): 4884, 2021 08 12.
Article in English | MEDLINE | ID: mdl-34385460

ABSTRACT

Pathology is practiced by visual inspection of histochemically stained tissue slides. While the hematoxylin and eosin (H&E) stain is most commonly used, special stains can provide additional contrast to different tissue components. Here, we demonstrate the utility of supervised learning-based computational stain transformation from H&E to special stains (Masson's Trichrome, periodic acid-Schiff and Jones silver stain) using kidney needle core biopsy tissue sections. Based on the evaluation by three renal pathologists, followed by adjudication by a fourth pathologist, we show that the generation of virtual special stains from existing H&E images improves the diagnosis of several non-neoplastic kidney diseases, sampled from 58 unique subjects (P = 0.0095). A second study found that the quality of the computationally generated special stains was statistically equivalent to those which were histochemically stained. This stain-to-stain transformation framework can improve preliminary diagnoses when additional special stains are needed, also providing significant savings in time and cost.


Subjects
Biopsy, Large-Core Needle/methods; Deep Learning; Diagnosis, Computer-Assisted/methods; Kidney Diseases/pathology; Kidney/pathology; Staining and Labeling/methods; Algorithms; Coloring Agents/chemistry; Coloring Agents/classification; Coloring Agents/standards; Diagnosis, Differential; Humans; Kidney Diseases/diagnosis; Pathology, Clinical/methods; Pathology, Clinical/standards; Reference Standards; Reproducibility of Results; Sensitivity and Specificity; Staining and Labeling/standards
7.
Lab Chip ; 20(23): 4404-4412, 2020 11 24.
Article in English | MEDLINE | ID: mdl-32808619

ABSTRACT

We report a field-portable and cost-effective imaging flow cytometer that uses deep learning and holography to accurately detect Giardia lamblia cysts in water samples at a volumetric throughput of 100 mL/h. This flow cytometer uses lensfree color holographic imaging to capture and reconstruct phase and intensity images of microscopic objects in a continuously flowing sample, and automatically identifies Giardia lamblia cysts in real time without the use of any labels or fluorophores. The imaging flow cytometer is housed in an environmentally sealed enclosure with dimensions of 19 cm × 19 cm × 16 cm and weighs 1.6 kg. We demonstrate that this portable imaging flow cytometer, coupled to a laptop computer, can detect and quantify, in real time, low levels of Giardia contamination (e.g., <10 cysts per 50 mL) in both freshwater and seawater samples. The field-portable and label-free nature of this method has the potential to allow rapid and automated screening of drinking water supplies in resource-limited settings in order to detect waterborne parasites and monitor the integrity of the filters used for water treatment.
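Converting real-time cyst detections into a contamination estimate is a simple matter of the fixed volumetric throughput; a sketch with an assumed run time and count, not values from the paper:

    # Convert detections into a concentration given the 100 mL/h throughput.
    throughput_ml_per_h = 100.0
    runtime_h = 0.5           # imaging duration (assumed)
    detected_cysts = 4        # network-confirmed detections (assumed)

    screened_ml = throughput_ml_per_h * runtime_h            # 50 mL screened
    cysts_per_50_ml = detected_cysts * 50.0 / screened_ml    # 4 cysts / 50 mL
    print(f"~{cysts_per_50_ml:.0f} cysts per 50 mL")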


Subjects
Cysts; Deep Learning; Giardia lamblia; Holography; Flow Cytometry; Humans
8.
NPJ Digit Med ; 3: 76, 2020.
Article in English | MEDLINE | ID: mdl-32509973

ABSTRACT

Sickle cell disease (SCD) is a major public health priority throughout much of the world, affecting millions of people. In many regions, particularly those in resource-limited settings, SCD is not consistently diagnosed. In Africa, where the majority of SCD patients reside, more than 50% of the 0.2-0.3 million children born with SCD each year will die from it; many of these deaths are in fact preventable with correct diagnosis and treatment. Here, we present a deep learning framework which can perform automatic screening of sickle cells in blood smears using a smartphone microscope. This framework uses two distinct, complementary deep neural networks. The first neural network enhances and standardizes the blood smear images captured by the smartphone microscope, spatially and spectrally matching the image quality of a laboratory-grade benchtop microscope. The second network acts on the output of the first image enhancement neural network and is used to perform the semantic segmentation between healthy and sickle cells within a blood smear. These segmented images are then used to rapidly determine the SCD diagnosis per patient. We blindly tested this mobile sickle cell detection method using blood smears from 96 unique patients (including 32 SCD patients) that were imaged by our smartphone microscope, and achieved ~98% accuracy, with an area-under-the-curve of 0.998. With its high accuracy, this mobile and cost-effective method has the potential to be used as a screening tool for SCD and other blood cell disorders in resource-limited settings.
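The two-network design decomposes cleanly at inference time into an enhancement stage followed by a segmentation stage; a hedged sketch of how such a pipeline could be wired together (the model files, class indices, and per-patient decision rule are illustrative assumptions):

    import torch

    # Illustrative TorchScript files; not released artifacts of the paper.
    enhancer = torch.jit.load("smear_enhancer.pt")    # smartphone -> benchtop quality
    segmenter = torch.jit.load("sickle_segmenter.pt") # semantic segmentation
    enhancer.eval()
    segmenter.eval()

    def screen_patient(tiles, sickle_fraction_cutoff=0.005):
        """Flag a smear as suspicious when sickle pixels exceed an assumed cutoff."""
        sickle_px, cell_px = 0, 0
        with torch.no_grad():
            for tile in tiles:                            # (1, 3, H, W) image tiles
                standardized = enhancer(tile)             # stage 1: image enhancement
                classes = segmenter(standardized).argmax(dim=1)
                sickle_px += (classes == 2).sum().item()  # assumed class 2 = sickle
                cell_px += (classes > 0).sum().item()     # classes 1, 2 = any cell
        return cell_px > 0 and sickle_px / cell_px > sickle_fraction_cutoff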

9.
Light Sci Appl ; 9: 78, 2020.
Article in English | MEDLINE | ID: mdl-32411363

ABSTRACT

Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time consuming, labour intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a "digital staining matrix", which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones' silver stain, and Masson's trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
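In practice, a "digital staining matrix" of this kind can enter the network simply as additional input channels; the sketch below one-hot encodes a user-defined stain map and concatenates it with the autofluorescence channels (the channel layout and stain codes are assumptions for illustration):

    import torch
    import torch.nn.functional as F

    # Assumed stain codes per output pixel: 0 = H&E, 1 = Jones' silver,
    # 2 = Masson's trichrome.
    stain_map = torch.zeros(512, 512, dtype=torch.long)
    stain_map[:, 256:] = 2                          # trichrome on the right half

    autofluorescence = torch.rand(1, 2, 512, 512)   # e.g. two filter channels
    staining_matrix = F.one_hot(stain_map, num_classes=3)        # (H, W, 3)
    staining_matrix = staining_matrix.permute(2, 0, 1).unsqueeze(0).float()

    # The single network receives both sources of information at once.
    network_input = torch.cat([autofluorescence, staining_matrix], dim=1)
    # virtual_rgb = generator(network_input)   # trained generator not shown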

10.
ACS Photonics ; 7(11): 3023-3034, 2020 Nov 18.
Article in English | MEDLINE | ID: mdl-34368395

ABSTRACT

Polarized light microscopy provides high contrast for birefringent specimens and is widely used as a diagnostic tool in pathology. However, polarization microscopy systems typically operate by analyzing images collected from two or more light paths in different states of polarization, which leads to relatively complex optical designs, high system costs, and the need for experienced technicians. Here, we present a deep learning-based holographic polarization microscope that is capable of obtaining quantitative birefringence retardance and orientation information of a specimen from a phase-recovered hologram, while requiring only the addition of one polarizer/analyzer pair to an inline lensfree holographic imaging system. Using a deep neural network, the reconstructed holographic images from a single state of polarization can be transformed into images equivalent to those captured using a single-shot computational polarized light microscope (SCPLM). Our analysis shows that a trained deep neural network can extract the birefringence information using both the sample-specific morphological features and the holographic amplitude and phase distribution. To demonstrate the efficacy of this method, we tested it by imaging various birefringent samples, including monosodium urate and triamcinolone acetonide crystals. Our method achieves results similar to SCPLM both qualitatively and quantitatively, and, owing to its simpler optical design and significantly larger field-of-view, has the potential to expand access to polarization microscopy and its use for medical diagnosis in resource-limited settings.

11.
BME Front ; 2020: 9647163, 2020.
Article in English | MEDLINE | ID: mdl-37849966

ABSTRACT

In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high-throughput digital scanning microscopes have ushered in the era of digital pathology that, along with recent advances in machine vision, has opened up new possibilities for computer-aided diagnosis. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional and not directly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.

12.
Lab Chip ; 19(17): 2925-2935, 2019 09 07.
Article in English | MEDLINE | ID: mdl-31372607

ABSTRACT

Lack of access to clean water is a major global issue that affects millions of people worldwide. Drinking contaminated water can be extremely hazardous, so adequate testing is imperative. One method commonly used to determine the quality of water is testing for both E. coli and total coliform. Here, we present a cost-effective and automated device which can concurrently test drinking water samples for both E. coli and total coliform using an EPA-approved reagent. Equipped with a Raspberry Pi computer and camera, we perform automated periodic measurements of both the absorption and fluorescence of the water under test over 24 hours. In each test, 100 mL of the water sample is split into a custom-designed 40-well plate, where the transmitted blue light and the fluorescent light (under UV excitation) are collected by 520 individual optical fibers. Images of these fiber outputs are acquired periodically and digitally processed to determine the presence of the bacteria in each well of the 40-well plate. We demonstrate that this cost-effective device, weighing 1.66 kg, can automatically detect the presence of both E. coli and total coliform in drinking water within ∼16 hours, down to a level of one colony-forming unit (CFU) per 100 mL. Furthermore, owing to its automated analysis, this approach is also more sensitive than a manual count performed by an expert, reducing the time needed to determine whether the water under test is safe to drink.
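Per-well detection of this kind reduces to watching each fiber's signal for a sustained change relative to its own initial reading; a simplified threshold rule in which the baseline window and threshold factor are assumptions, not the device's actual algorithm:

    import numpy as np

    def positive_wells(signal, threshold_factor=3.0, baseline_frames=4):
        """signal: (frames, 40) periodic per-well readings from the fiber images.
        A well is positive once it exceeds threshold_factor x its own baseline."""
        baseline = signal[:baseline_frames].mean(axis=0)
        return (signal > threshold_factor * baseline[None, :]).any(axis=0)

    # Any positive well implies >= 1 CFU in the 100 mL sample split across wells.
    readings = np.ones((96, 40))
    readings[60:, 7] = 5.0                           # synthetic growth in well 7
    print(np.flatnonzero(positive_wells(readings)))  # -> [7]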


Subjects
Automation; Colorimetry; Escherichia coli/isolation & purification; Fluorometry; Optical Fibers; Colorimetry/instrumentation; Fluorometry/instrumentation
13.
Sci Rep ; 9(1): 12050, 2019 08 19.
Article in English | MEDLINE | ID: mdl-31427691

ABSTRACT

We report resolution enhancement in scanning electron microscopy (SEM) images using a generative adversarial network. We demonstrate the veracity of this deep learning-based super-resolution technique by inferring unresolved features in low-resolution SEM images and comparing them with the accurately co-registered high-resolution SEM images of the same samples. Through spatial frequency analysis, we also report that our method generates images with frequency spectra matching higher resolution SEM images of the same fields-of-view. By using this technique, higher resolution SEM images can be taken faster, while also reducing both electron charging and damage to the samples.
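Spatial-frequency comparisons of this kind are commonly made with a radially averaged power spectrum; a NumPy sketch of one way to compute it (the binning scheme is an assumption, not the authors' exact analysis):

    import numpy as np

    def radial_power_spectrum(image):
        """Azimuthally averaged power spectrum; index = integer radial frequency bin."""
        f = np.fft.fftshift(np.fft.fft2(image))
        power = np.abs(f) ** 2
        cy, cx = power.shape[0] // 2, power.shape[1] // 2
        y, x = np.indices(power.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        sums = np.bincount(r.ravel(), weights=power.ravel())
        counts = np.bincount(r.ravel())
        return sums / np.maximum(counts, 1)   # mean power per radius bin

    # Curves that match at high radii indicate recovered high-frequency detail.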

14.
J Biophotonics ; 12(11): e201900107, 2019 11.
Article in English | MEDLINE | ID: mdl-31309728

ABSTRACT

We report a framework based on a generative adversarial network that performs high-fidelity color image reconstruction using a single hologram of a sample that is illuminated simultaneously by light at three different wavelengths. The trained network learns to eliminate missing-phase-related artifacts, and generates an accurate color transformation for the reconstructed image. Our framework is experimentally demonstrated using lung and prostate tissue sections that are labeled with different histological stains. This framework is envisaged to be applicable to point-of-care histopathology and presents a significant improvement in the throughput of coherent microscopy systems given that only a single hologram of the specimen is required for accurate color imaging.


Subjects
Deep Learning; Holography; Image Processing, Computer-Assisted/methods; Microscopy; Color; Humans; Male; Prostate/diagnostic imaging
15.
Nat Biomed Eng ; 3(6): 466-477, 2019 06.
Article in English | MEDLINE | ID: mdl-31142829

ABSTRACT

The histological analysis of tissue samples, widely used for disease diagnosis, involves lengthy and laborious tissue preparation. Here, we show that a convolutional neural network trained using a generative adversarial-network model can transform wide-field autofluorescence images of unlabelled tissue sections into images that are equivalent to the bright-field images of histologically stained versions of the same samples. A blind comparison, by board-certified pathologists, of this virtual staining method and standard histological staining using microscopic images of human tissue sections of the salivary gland, thyroid, kidney, liver and lung, and involving different types of stain, showed no major discordances. The virtual-staining method bypasses the typically labour-intensive and costly histological staining procedures, and could be used as a blueprint for the virtual staining of tissue images acquired with other label-free imaging modalities.


Subjects
Deep Learning; Image Processing, Computer-Assisted; Staining and Labeling; Algorithms; Fluorescence; Humans; Liver/diagnostic imaging; Lung/diagnostic imaging; Melanins/metabolism; Neural Networks, Computer; Reference Standards
16.
Light Sci Appl ; 8: 25, 2019.
Article in English | MEDLINE | ID: mdl-30854197

ABSTRACT

Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes as a result of twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this "bright-field holography" method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven deep-learning-based imaging method bridges the contrast gap between coherent and incoherent imaging, and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.
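The wave-propagation framework referenced here is typically the angular spectrum method, which refocuses a complex hologram field to any axial plane; a minimal NumPy sketch for a monochromatic field (the example parameters at the end are illustrative):

    import numpy as np

    def angular_spectrum_propagate(field, z_m, wavelength_m, pixel_m):
        """Propagate a complex field by distance z_m (angular spectrum method)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel_m)
        fy = np.fft.fftfreq(ny, d=pixel_m)
        fx2, fy2 = np.meshgrid(fx ** 2, fy ** 2)
        arg = 1.0 / wavelength_m ** 2 - fx2 - fy2
        kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
        transfer = np.exp(1j * z_m * kz) * (arg > 0)   # drop evanescent waves
        return np.fft.ifft2(np.fft.fft2(field) * transfer)

    # e.g., refocus a reconstructed field by 50 um at 532 nm, 1.12 um pixels:
    # refocused = angular_spectrum_propagate(field, 50e-6, 532e-9, 1.12e-6)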

17.
Sci Rep ; 9(1): 3926, 2019 03 08.
Article in English | MEDLINE | ID: mdl-30850721

ABSTRACT

We present a deep learning framework based on a generative adversarial network (GAN) to perform super-resolution in coherent imaging systems. We demonstrate that this framework can enhance the resolution of both pixel size-limited and diffraction-limited coherent imaging systems. The capabilities of this approach are experimentally validated by super-resolving complex-valued images acquired using a lensfree on-chip holographic microscope, the resolution of which was pixel size-limited. Using the same GAN-based approach, we also improved the resolution of a lens-based holographic imaging system that was limited in resolution by the numerical aperture of its objective lens. This deep learning-based super-resolution framework can be broadly applied to enhance the space-bandwidth product of coherent imaging systems using image data and convolutional neural networks, and provides a rapid, non-iterative method for solving inverse image reconstruction or enhancement problems in optics.


Subjects
Deep Learning; Holography/methods; Image Enhancement/methods; Microscopy/methods; Equipment Design; Female; Holography/instrumentation; Holography/statistics & numerical data; Humans; Lung/diagnostic imaging; Microscopy/instrumentation; Microscopy/statistics & numerical data; Neural Networks, Computer; Papanicolaou Test/methods; Papanicolaou Test/statistics & numerical data; Software; Vaginal Smears/methods; Vaginal Smears/statistics & numerical data
18.
Light Sci Appl ; 8: 23, 2019.
Article in English | MEDLINE | ID: mdl-30728961

ABSTRACT

Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform the quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with Hematoxylin and Eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample preparation related costs and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
