1.
Mod Pathol ; 37(6): 100487, 2024 Apr 07.
Article in English | MEDLINE | ID: mdl-38588884

ABSTRACT

Lung adenocarcinoma (LUAD) is the most common primary lung cancer and accounts for 40% of all lung cancer cases. The current gold standard for lung cancer analysis is based on the pathologist's interpretation of hematoxylin and eosin (H&E)-stained tissue slices viewed under a brightfield microscope or a digital slide scanner. Computational pathology using deep learning has been proposed to detect lung cancer on histology images. However, the histological staining workflow to acquire H&E-stained images and the subsequent cancer diagnosis procedures are labor-intensive and time-consuming, requiring tedious sample preparation and repetitive manual interpretation, respectively. In this work, we propose a weakly supervised learning method for LUAD classification on label-free tissue slices with virtual histological staining. The autofluorescence images of label-free tissue carrying histopathological information can be converted into virtual H&E-stained images by a weakly supervised deep generative model. For the downstream LUAD classification task, we trained an attention-based multiple-instance learning model with different settings on the open-source LUAD H&E-stained whole-slide image (WSI) dataset from The Cancer Genome Atlas (TCGA). The model was validated on 150 H&E-stained WSIs collected from patients in Queen Mary Hospital and Prince of Wales Hospital, with an average area under the curve (AUC) of 0.961. The model also achieved an average AUC of 0.973 on 58 virtual H&E-stained WSIs, comparable to the results on the corresponding 58 standard H&E-stained WSIs (average AUC of 0.977). The attention heatmaps of virtual H&E-stained WSIs and ground-truth H&E-stained WSIs can indicate tumor regions of LUAD tissue slices. In conclusion, the proposed diagnostic workflow on virtual H&E-stained WSIs of label-free tissue is a rapid, cost-effective, and interpretable approach to assist clinicians in postoperative pathological examinations.
The method could serve as a blueprint for other label-free imaging modalities and disease contexts.
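The abstract does not specify the exact architecture, but attention-based multiple-instance learning typically pools patch-level embeddings into one slide-level feature with learned attention weights (in the style of Ilse et al.'s attention MIL). A minimal numpy sketch of that pooling step, with random stand-in weights for what would normally be learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(patch_feats, V, w):
    """Attention-based MIL pooling over a bag of patch embeddings.

    patch_feats: (N, D) embeddings of N patches from one whole-slide image.
    V: (K, D) projection and w: (K,) attention vector (learned in practice).
    Returns the slide-level feature and the per-patch attention weights
    (the weights are what an attention heatmap visualizes).
    """
    scores = np.tanh(patch_feats @ V.T) @ w   # (N,) unnormalized attention
    attn = softmax(scores)                    # non-negative, sums to 1
    slide_feat = attn @ patch_feats           # (D,) weighted average of patches
    return slide_feat, attn

# Toy bag: 100 patches with 32-dim embeddings (dimensions are illustrative)
feats = rng.normal(size=(100, 32))
V = rng.normal(size=(16, 32))
w = rng.normal(size=16)
slide_feat, attn = attention_mil_pool(feats, V, w)
```

A slide-level classifier on `slide_feat` then needs only one label per WSI, which is what makes the approach weakly supervised.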

2.
Biomed Opt Express ; 15(4): 2187-2201, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38633074

ABSTRACT

Slide-free imaging techniques have shown great promise in improving the histological workflow. For example, computational high-throughput autofluorescence microscopy by pattern illumination (CHAMP) achieves high resolution with a long depth of field but requires a costly ultraviolet laser. Here, using only a low-cost light-emitting diode (LED), we propose a deep learning-assisted enhanced widefield microscopy framework, termed EW-LED, to generate results similar to CHAMP (the learning target). Compared with CHAMP, EW-LED reduces the cost by 85× and shortens the image acquisition and computation times by 36× and 17×, respectively. This framework can be applied to other imaging modalities, enhancing widefield images for better virtual histology.

3.
PNAS Nexus ; 3(4): pgae133, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38601859

ABSTRACT

Deep learning algorithms have been widely used in microscopic image translation. The corresponding data-driven models can be trained by supervised or unsupervised learning, depending on the availability of paired data. In the general case, however, the data are only roughly paired: supervised learning can fail because of data misalignment, while unsupervised learning is less than ideal because the rough pairing information goes unused. In this work, we propose a unified framework (U-Frame) that unifies supervised and unsupervised learning by introducing a tolerance size that is adjusted automatically according to the degree of data misalignment. Together with a global sampling rule, we demonstrate that U-Frame consistently outperforms both supervised and unsupervised learning at all levels of data misalignment (even for perfectly aligned image pairs) across a myriad of image translation applications, including pseudo-optical sectioning, virtual histological staining (with clinical evaluations for cancer diagnosis), improvement of signal-to-noise ratio or resolution, and prediction of fluorescent labels, potentially serving as a new standard for image translation.
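The abstract does not give U-Frame's exact loss, but the core idea of a tolerance size can be illustrated with a toy objective: instead of penalizing pixel-wise error against a possibly misaligned target, penalize the best match over all shifts within a tolerance window. A minimal numpy sketch (using wrap-around `np.roll` for brevity; a real implementation would crop borders instead):

```python
import numpy as np

def tolerance_l1(pred, target, tol):
    """L1 error minimized over integer shifts within a tolerance window.

    tol = 0 reduces to ordinary supervised L1; a larger tol forgives
    registration error of up to `tol` pixels in each direction, which is
    the regime between fully paired and fully unpaired data.
    """
    best = np.inf
    for dy in range(-tol, tol + 1):
        for dx in range(-tol, tol + 1):
            shifted = np.roll(target, (dy, dx), axis=(0, 1))
            best = min(best, float(np.abs(pred - shifted).mean()))
    return best

# Toy example: the target is the prediction shifted down by one row,
# i.e. a "roughly paired" sample with one pixel of misalignment.
img = np.arange(25.0).reshape(5, 5)
misaligned = np.roll(img, (1, 0), axis=(0, 1))
```

With `tol=1` the misaligned pair incurs zero loss, while `tol=0` (strict supervision) penalizes it heavily, which is the failure mode the paper attributes to supervised learning on roughly paired data.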

4.
Elife ; 11, 2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36331195

ABSTRACT

Rapid multicolor three-dimensional (3D) imaging of centimeter-scale specimens with subcellular resolution remains a challenging but captivating scientific pursuit. Here, we present a fast, cost-effective, and robust multicolor whole-organ 3D imaging method based on ultraviolet (UV) surface excitation and vibratome-assisted sectioning, termed translational rapid ultraviolet-excited sectioning tomography (TRUST). With an inexpensive UV light-emitting diode (UV-LED) and a color camera, TRUST achieves widefield exogenous molecular-specific fluorescence and endogenous content-rich autofluorescence imaging simultaneously while keeping system complexity and cost low. Formalin-fixed specimens are stained layer by layer along with serial mechanical sectioning to achieve automated 3D imaging with high staining uniformity and time efficiency. 3D models of all vital organs of wild-type C57BL/6 mice, including the 3D structure of their internal components (e.g., vessel networks, glomeruli, and nerve tracts), can be reconstructed after imaging with TRUST, demonstrating its fast, robust, and high-content multicolor 3D imaging capability. Moreover, its potential for developmental biology has also been validated by imaging entire mouse embryos (~2 days for an embryo at embryonic day 15). TRUST offers a fast and cost-effective approach for high-resolution whole-organ multicolor 3D imaging while relieving researchers of the heavy sample preparation workload.


Subject(s)
Histological Techniques , Imaging, Three-Dimensional , Animals , Mice , Mice, Inbred C57BL , Imaging, Three-Dimensional/methods , Tomography, X-Ray Computed , Staining and Labeling
5.
Adv Sci (Weinh) ; 9(2): e2102358, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34747142

ABSTRACT

Rapid and high-resolution histological imaging with minimal tissue preparation has long been a challenging yet captivating medical pursuit. Here, the authors propose a promising and transformative histological imaging method, termed computational high-throughput autofluorescence microscopy by pattern illumination (CHAMP). With the assistance of computational microscopy, CHAMP enables high-throughput and label-free imaging of thick and unprocessed tissues with large surface irregularity at an acquisition speed of 10 mm²/10 s with 1.1-µm lateral resolution. Moreover, the CHAMP image can be transformed into a virtually stained histological image (Deep-CHAMP) through unsupervised learning within 15 s, from which significant cellular features are quantitatively extracted with high accuracy. The versatility of CHAMP is experimentally demonstrated using mouse brain/kidney and human lung tissues prepared with various clinical protocols, enabling rapid and accurate intraoperative/postoperative pathological examination without tissue processing or staining and demonstrating its great potential as an assistive imaging platform for surgeons and pathologists to provide optimal adjuvant treatment.


Subject(s)
Brain/cytology , Histological Techniques/methods , Kidney/cytology , Lung/cytology , Microscopy/methods , Unsupervised Machine Learning , Animals , Humans , Mice , Models, Animal
6.
Photoacoustics ; 25: 100313, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34804794

ABSTRACT

Ultraviolet photoacoustic microscopy (UV-PAM) has been investigated to provide label-free and registration-free volumetric histological images of whole organs, offering new insights into complex biological organs. However, because of the high UV absorption of lipids and pigments in tissue, UV-PAM suffers from low image contrast and shallow imaging depth, hindering its ability to reveal various microstructures in organs. To improve the UV-PAM imaging contrast and depth, here we implement a state-of-the-art optical clearing technique, CUBIC (clear, unobstructed brain/body imaging cocktails and computational analysis), to wash out the lipids and pigments from tissues. Our results show that UV-PAM imaging contrast and quality can be significantly improved after tissue clearing. With cleared tissue, multiple layers of cell nuclei can also be extracted from time-resolved PA signals. Tissue clearing-enhanced UV-PAM can provide fine details for organ imaging.

7.
Biomed Opt Express ; 12(9): 5920-5938, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34692225

ABSTRACT

Histopathological examination of tissue sections is the gold standard for disease diagnosis. However, the conventional histopathology workflow requires lengthy and laborious sample preparation to obtain thin tissue slices, causing a delay of about one week before an accurate diagnostic report can be generated. Recently, microscopy with ultraviolet surface excitation (MUSE), a rapid and slide-free imaging technique, has been developed to image fresh and thick tissues with specific molecular contrast. Here, we propose to apply an unsupervised generative adversarial network framework to translate colorful MUSE images into Deep-MUSE images that closely resemble hematoxylin and eosin staining, allowing easy adoption by pathologists. By eliminating the need for all sample processing steps (except staining), a MUSE image with subcellular resolution for a typical brain biopsy (5 mm × 5 mm) can be acquired in 5 minutes and further translated into a Deep-MUSE image in 40 seconds, dramatically simplifying the standard histopathology workflow and providing histological images intraoperatively.
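The abstract does not name the specific unsupervised GAN framework; a CycleGAN-style cycle-consistency term is one common choice for unpaired image translation, so this sketch should be read as illustrative rather than as the paper's method. The idea: with no paired MUSE/H&E ground truth, train a forward translator G and an inverse translator F so that mapping an image forward and back recovers it. A minimal numpy sketch with toy stand-in generators:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """Cycle term |F(G(x)) - x|: translating a MUSE image to an H&E-like
    image and back should reproduce the input, which constrains the
    translation when no paired target exists."""
    return float(np.abs(F(G(x)) - x).mean())

# Toy affine "generators" standing in for the two translation networks;
# F is constructed as the exact inverse of G, so the cycle loss is zero.
G = lambda x: 2.0 * x + 1.0     # hypothetical "MUSE -> Deep-MUSE" map
F = lambda y: (y - 1.0) / 2.0   # hypothetical inverse map
x = np.arange(16.0).reshape(4, 4) / 8.0  # toy 4x4 "image"
```

In a real system the cycle term is combined with adversarial losses that push G's outputs toward the distribution of genuine H&E images.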
