Results 1 - 5 of 5
1.
IEEE Trans Med Imaging ; 43(4): 1388-1399, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38010933

ABSTRACT

Fluorescence staining is an important technique in the life sciences for labeling cellular constituents. However, it is time-consuming and makes simultaneous labeling difficult. Virtual staining, which does not rely on chemical labeling, has therefore been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks, but their performance relies on large-scale pretraining, hindering their adoption in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens. The pretraining time of our method is only 1/16 that of the original MAE. We also design a supervised proxy task to predict stained images with multiple styles instead of masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated with different metrics, making fair comparison difficult. We therefore develop a standard benchmark based on three public datasets and build a baseline for the convenience of future researchers. Extensive experiments on the three benchmark datasets show that the proposed method achieves the best performance both quantitatively and qualitatively, and ablation studies illustrate the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
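As a rough illustration of the masking scheme described in the abstract, the sketch below (not the authors' code; `grid_subsample` is a hypothetical helper) keeps one pixel per 2×2 block, which masks exactly 75% of pixels and shrinks the token grid fourfold:

```python
import numpy as np

def grid_subsample(img: np.ndarray, stride: int = 2) -> np.ndarray:
    """Keep the top-left pixel of each stride x stride block.
    With stride=2 this retains 25% of the pixels, i.e. masks 75%."""
    return img[::stride, ::stride]

img = np.arange(16, dtype=np.float32).reshape(4, 4)
kept = grid_subsample(img)               # shape (2, 2): 4 of 16 pixels survive
mask_ratio = 1.0 - kept.size / img.size  # 0.75
```

Because the surviving pixels form a regular grid, they can be treated as a downsampled image, which is what lets the method also cut the transformer's token count.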


Subjects
Microscopy , Staining and Labeling
2.
IEEE J Biomed Health Inform ; 28(3): 1161-1172, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37878422

ABSTRACT

We introduce LYSTO, the Lymphocyte Assessment Hackathon, which was held in conjunction with the MICCAI 2019 Conference in Shenzhen (China). The competition required participants to automatically assess the number of lymphocytes, in particular T-cells, in images of colon, breast, and prostate cancer stained with CD3 and CD8 immunohistochemistry. Unlike other challenges in medical image analysis, LYSTO participants were given only a few hours to address the problem. In this paper, we describe the goal and multi-phase organization of the hackathon, the proposed methods, and the on-site results. Additionally, we present post-competition results showing how the presented methods perform on an independent set of lung cancer slides, which was not part of the initial competition, as well as a comparison of lymphocyte assessment between the presented methods and a panel of pathologists. We show that some of the participants were capable of achieving pathologist-level performance at lymphocyte assessment. After the hackathon, LYSTO was left as a lightweight plug-and-play benchmark dataset on the grand-challenge website, together with an automatic evaluation platform.


Subjects
Benchmarking , Prostatic Neoplasms , Male , Humans , Lymphocytes , Breast , China
3.
IEEE Trans Med Imaging ; 41(2): 383-393, 2022 02.
Article in English | MEDLINE | ID: mdl-34520352

ABSTRACT

Biomedical microscopy images with high resolution (HR) and axial information aid analysis and diagnosis. However, obtaining such images usually requires more time and expense, which makes it impractical in most scenarios. In this paper, we propose a novel Self-texture Transfer Super-resolution and Refocusing Network (STSRNet) to reconstruct HR multi-focal plane (MFP) images from a single 2D low-resolution (LR) wide field image, without relying on scanning or any special devices. The proposed STSRNet consists of three parts: a backbone module for extracting features, a self-texture transfer module for transferring and fusing features, and a flexible reconstruction module for SR and refocusing. Specifically, the self-texture transfer module is designed for images with self-similarity, such as cytological images: it searches for similar textures within the image and transfers them to help MFP reconstruction. The reconstruction module is composed of multiple pluggable components, each responsible for a specific focal plane, so that SR and refocusing are performed for all focal planes at once to reduce computation. We conduct extensive experiments on cytological images, which show that MFP images reconstructed by STSRNet have richer details in the axial and horizontal directions than the input images. The reconstructed MFP images also perform better than single 2D wide field images on high-level tasks. The proposed method provides relatively high-quality MFP images when real MFP images cannot be obtained, which greatly expands the application potential of LR wide-field images. To further promote the development of this field, we release our cytology dataset, named RSDC, for more researchers to use.
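The self-texture transfer idea described above can be approximated by a brute-force self-similarity search. The sketch below is a naive stand-in for the learned module, not the authors' implementation; `best_matching_patch` is a hypothetical helper that finds the most similar patch elsewhere in the same image by L2 distance:

```python
import numpy as np

def best_matching_patch(img, patch, y0, x0):
    """Naive self-similarity search: scan every patch-sized window in
    `img` and return the position of the one closest (in L2 distance)
    to `patch`, excluding the patch's own location (y0, x0)."""
    ph, pw = patch.shape
    best, best_pos = np.inf, None
    for y in range(img.shape[0] - ph + 1):
        for x in range(img.shape[1] - pw + 1):
            if (y, x) == (y0, x0):
                continue
            d = np.sum((img[y:y + ph, x:x + pw] - patch) ** 2)
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos

img = np.zeros((8, 8))
img[0:2, 0:2] = [[1.0, 2.0], [3.0, 4.0]]   # a texture...
img[4:6, 4:6] = [[1.0, 2.0], [3.0, 4.0]]   # ...repeated elsewhere
match = best_matching_patch(img, img[0:2, 0:2], 0, 0)  # -> (4, 4)
```

In the actual network this search-and-transfer step is done on learned feature maps rather than raw pixels, and the matched textures are fused into the reconstruction rather than simply located.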


Subjects
Magnetic Resonance Imaging , Microscopy
4.
Nat Commun ; 12(1): 5639, 2021 09 24.
Article in English | MEDLINE | ID: mdl-34561435

ABSTRACT

Computer-assisted diagnosis is key for scaling up cervical cancer screening. However, current recognition algorithms perform poorly on whole slide image (WSI) analysis, fail to generalize across diverse staining and imaging conditions, and show sub-optimal clinical-level verification. Here, we develop a progressive lesion cell recognition method combining low- and high-resolution WSIs to recommend lesion cells, and a recurrent neural network-based WSI classification model to evaluate the lesion degree of WSIs. We train and validate our WSI analysis system on 3,545 patient-wise WSIs with 79,911 annotations from multiple hospitals and several imaging instruments. On multi-center independent test sets of 1,170 patient-wise WSIs, we achieve 93.5% specificity and 95.1% sensitivity for classifying slides, comparing favourably with the average performance of three independent cytopathologists, and obtain an 88.5% true positive rate for highlighting the top 10 lesion cells on 447 positive slides. After deployment, our system recognizes a one-gigapixel WSI in about 1.5 min.
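For reference, the slide-level sensitivity and specificity figures reported above follow the standard confusion-matrix definitions; a minimal sketch (hypothetical helper names, illustrative counts rather than the paper's data):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of positive slides correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of negative slides correctly cleared."""
    return tn / (tn + fp)

# Illustrative counts only:
sens = sensitivity(tp=95, fn=5)    # 0.95
spec = specificity(tn=90, fp=10)   # 0.90
```

In a screening setting, sensitivity is usually the metric prioritized, since a missed positive slide is costlier than a false alarm that a cytopathologist can later dismiss.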


Subjects
Cytodiagnosis/methods , Deep Learning , Diagnosis, Computer-Assisted/methods , Early Detection of Cancer , Uterine Cervical Neoplasms/diagnosis , Female , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , ROC Curve , Reproducibility of Results
5.
IEEE Trans Med Imaging ; 39(9): 2920-2930, 2020 09.
Article in English | MEDLINE | ID: mdl-32175859

ABSTRACT

In the cytopathology screening of cervical cancer, high-resolution digital cytopathological slides are critical for the interpretation of lesion cells. However, acquiring high-resolution digital slides requires high-end imaging equipment and long scanning times. In this study, we propose a GAN-based progressive multi-supervised super-resolution model called PathSRGAN (pathology super-resolution GAN) to learn the mapping between real low-resolution and high-resolution cytopathological images. Reflecting the characteristics of cytopathological images, we design a new two-stage generator architecture with two supervision terms. The first-stage generator, a densely-connected U-Net, achieves 4× to 10× super-resolution; the second-stage generator, a residual-in-residual DenseBlock, achieves 10× to 20× super-resolution. The designed generator alleviates the difficulty of learning the mapping from 4× to 20× images caused by the large numerical-aperture difference, and generates high-quality high-resolution images. A series of comparison experiments demonstrates the superiority of PathSRGAN over mainstream CNN-based and GAN-based super-resolution methods on cytopathological images. Moreover, the high-resolution images reconstructed by PathSRGAN effectively improve the accuracy of computer-aided diagnosis tasks. We anticipate that this study will help increase the penetration of cytopathology screening in remote and impoverished areas that lack high-end imaging equipment.
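The two-stage cascade described above (4× → 10× → 20× objective equivalents, i.e. spatial upscales of 2.5× and then 2×) can be sketched with plain nearest-neighbour interpolation standing in for the two learned generators; `upsample_nearest` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def upsample_nearest(img: np.ndarray, factor: float) -> np.ndarray:
    """Nearest-neighbour upsampling by an arbitrary (possibly
    non-integer) factor, as a stand-in for a learned generator."""
    h, w = img.shape
    ys = (np.arange(int(h * factor)) / factor).astype(int)
    xs = (np.arange(int(w * factor)) / factor).astype(int)
    return img[np.ix_(ys, xs)]

lr = np.random.rand(64, 64)       # a 4x-objective field
mid = upsample_nearest(lr, 2.5)   # stage 1: 4x -> 10x, shape (160, 160)
hr = upsample_nearest(mid, 2.0)   # stage 2: 10x -> 20x, shape (320, 320)
```

Splitting the 5× total magnification into two smaller steps is what the paper argues makes the mapping tractable despite the numerical-aperture gap between 4× and 20× objectives.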


Subjects
Image Processing, Computer-Assisted