Results 1 - 2 of 2
1.
BMC Med Imaging; 22(1): 94, 2022 May 20.
Article in English | MEDLINE | ID: mdl-35596153

ABSTRACT

BACKGROUND: Computer-aided methods for analyzing white blood cells (WBCs) are popular because the manual alternatives are complex and labor-intensive. Recent works have shown highly accurate segmentation and detection of white blood cells in microscopic blood images. Classification of the observed cells remains a challenge, however, in part because of the imbalanced distribution of the five WBC types, whose proportions reflect the condition of the immune system.

METHODS: (i) This work proposes W-Net, a CNN-based method for WBC classification. We evaluate W-Net on a real-world large-scale dataset of 6562 real images covering the five WBC types. (ii) For further benefit, we generate synthetic WBC images using a generative adversarial network (GAN) so they can be shared for education and research purposes.

RESULTS: (i) W-Net achieves an average accuracy of 97%. Compared with state-of-the-art methods in the field of WBC classification, W-Net outperforms other CNN- and RNN-based model architectures. Moreover, we show the benefits of using a pre-trained W-Net in a transfer learning setting, fine-tuned for a specific task or adapted to another dataset. (ii) Experiments and a domain expert confirm that the synthetic WBC images have a high degree of similarity to the original images. The pre-trained W-Net and the generated WBC dataset are available to the community to facilitate reproducibility and follow-up research.

CONCLUSION: This work proposed W-Net, a CNN-based architecture with a small number of layers, to accurately classify the five WBC types. We evaluated W-Net on a real-world large-scale dataset and addressed several challenges, such as transfer learning and class imbalance. W-Net achieved an average classification accuracy of 97%. We synthesized a dataset of new WBC image samples using DCGAN, which we released to the public for education and research purposes.
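The class-imbalance issue raised in the abstract is usually handled at evaluation time by averaging accuracy per class rather than over all samples. The sketch below is illustrative only and is not from the paper; the labels and toy predictions are hypothetical.

```python
import numpy as np

# Hypothetical labels for the five WBC types (illustrative, not the paper's dataset).
WBC_TYPES = ["neutrophil", "lymphocyte", "monocyte", "eosinophil", "basophil"]

def per_class_accuracy(y_true, y_pred, n_classes=5):
    """Accuracy computed separately for each class, then macro-averaged.
    Macro-averaging keeps rare classes (e.g. basophils) from being
    drowned out by the majority class under class imbalance."""
    accs = []
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            accs.append((y_pred[mask] == c).mean())
    return float(np.mean(accs))

# Toy imbalanced example: class 0 dominates the test set.
y_true = np.array([0] * 8 + [1] * 2)
y_pred = np.array([0] * 8 + [0] * 2)   # a classifier that always predicts class 0

overall = (y_true == y_pred).mean()           # 0.8 -- looks deceptively good
macro = per_class_accuracy(y_true, y_pred)    # 0.5 -- exposes the failure on class 1
print(overall, macro)
```

The gap between the two numbers is exactly why imbalance-aware evaluation matters for the rarer WBC types.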


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Leukocyte Count; Leukocytes; Reproducibility of Results
2.
Phys Med Biol; 68(15), 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37433302

ABSTRACT

Objective. Both computed tomography (CT) and magnetic resonance imaging (MRI) images are acquired for high-dose-rate (HDR) prostate brachytherapy patients at our institution. CT is used to identify catheters, and MRI is used to segment the prostate. To address scenarios of limited MRI access, we developed a novel generative adversarial network (GAN) to generate synthetic MRI (sMRI) from CT with sufficient soft-tissue contrast to provide accurate prostate segmentation when real MRI (rMRI) is unavailable.

Approach. Our hybrid GAN, PxCGAN, was trained on 58 paired CT-MRI datasets from our HDR prostate patients. Using 20 independent CT-MRI datasets, the image quality of sMRI was assessed with mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM), and these metrics were compared with those of sMRI generated using Pix2Pix and CycleGAN. The accuracy of prostate segmentation on sMRI was evaluated using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) between the prostate delineated by three radiation oncologists (ROs) on sMRI and on rMRI. To estimate inter-observer variability (IOV), the same metrics were calculated between the prostate contours delineated by each RO on rMRI and the contour delineated by the treating RO on rMRI (gold standard).

Main results. Qualitatively, sMRI shows enhanced soft-tissue contrast at the prostate boundary compared with CT. For MAE and MSE, PxCGAN and CycleGAN give similar results, while the MAE of PxCGAN is smaller than that of Pix2Pix. The PSNR and SSIM of PxCGAN are significantly higher than those of Pix2Pix and CycleGAN (p < 0.01). The DSC for sMRI versus rMRI is within the range of the IOV, while the HD for sMRI versus rMRI is smaller than the HD for the IOV for all ROs (p ≤ 0.03).

Significance. PxCGAN generates sMRI from treatment-planning CT scans with enhanced soft-tissue contrast at the prostate boundary. The accuracy of prostate segmentation on sMRI relative to rMRI falls within the inter-observer segmentation variation on rMRI.
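Two of the metrics named in this abstract have simple closed forms: PSNR (image quality) and the Dice similarity coefficient (segmentation overlap). The sketch below shows both on toy arrays; the function names and example data are mine, not the paper's.

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((img_a.astype(float) - img_b.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example: a "synthetic" image off by a constant 10 gray levels,
# so MSE = 100 and PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
ref = np.full((8, 8), 100.0)
syn = ref + 10.0
print(round(psnr(ref, syn), 2))   # 28.13

# Two overlapping binary masks standing in for prostate contours:
# 16 px each, 12 px overlap, so DSC = 2*12 / (16 + 16) = 0.75.
m1 = np.zeros((8, 8), bool); m1[2:6, 2:6] = True
m2 = np.zeros((8, 8), bool); m2[3:7, 2:6] = True
print(dice(m1, m2))   # 0.75
```

A DSC near 1 means near-perfect overlap; the paper's finding that sMRI-vs-rMRI DSC sits within the inter-observer range means the GAN adds no more contouring error than the ROs' own disagreement.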
