1.
J Appl Clin Med Phys; 23(11): e13775, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36168935

ABSTRACT

PURPOSE: To develop and evaluate a novel cycle-contrastive unpaired translation network (cycleCUT) for synthetic computed tomography (sCT) generation from T1-weighted magnetic resonance images (MRI).

METHODS: The proposed cycleCUT integrates the contrastive learning module from the contrastive unpaired translation network (CUT) into the cycle-consistent generative adversarial network (cycleGAN) framework to achieve unsupervised CT synthesis from MRI. Diagnostic MRI and radiotherapy-planning CT images of 24 brain cancer patients were obtained and reshuffled so that the network was trained on unaligned data. For comparison, the traditional cycleGAN and CUT were also implemented. The sCT images were then imported into a treatment planning system to verify their feasibility for radiotherapy planning. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) between the sCT and the corresponding real CT images were calculated, and gamma analysis between sCT- and CT-based dose distributions was conducted.

RESULTS: On an independent test set of six patients, the average MAE was 69.62 ± 5.68 Hounsfield units (HU) for the proposed cycleCUT, significantly (p < 0.05) lower than for cycleGAN (77.02 ± 6.00 HU) and CUT (78.05 ± 8.29 HU). The average PSNR was 28.73 ± 0.46 decibels (dB) for cycleCUT, significantly higher than for cycleGAN (27.96 ± 0.49 dB) and CUT (27.95 ± 0.69 dB). The average SSIM for cycleCUT (0.918 ± 0.012) was also significantly higher than for cycleGAN (0.906 ± 0.012) and CUT (0.903 ± 0.015). In the gamma analysis, cycleCUT achieved the highest passing rate (97.95 ± 1.24% at the 2%/2 mm criterion with a 10% dose threshold), although the difference from the other methods was not significant.

CONCLUSION: The proposed cycleCUT can be trained effectively on unaligned image data and generates better sCT images than cycleGAN and CUT in terms of HU accuracy and fine structural detail.
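The combination described in METHODS (cycleGAN's cycle-consistency plus CUT's patch-wise contrastive term) can be sketched as a single generator objective. The following is a minimal, illustrative PyTorch sketch; the model names (G_mr2ct, G_ct2mr, D_ct, encoder) and loss weights are assumptions, not the authors' published code.

# Illustrative cycleCUT-style generator objective (one translation direction).
# Assumes PyTorch; all names and weights are hypothetical, not from the paper.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q, feat_k, tau=0.07):
    """CUT-style PatchNCE: InfoNCE over features of matching patch locations.

    feat_q: (N, dim) features sampled from the translated image.
    feat_k: (N, dim) features sampled at the same locations in the source.
    """
    feat_q = F.normalize(feat_q, dim=1)
    feat_k = F.normalize(feat_k, dim=1)
    logits = feat_q @ feat_k.t() / tau                   # (N, N) similarity matrix
    labels = torch.arange(feat_q.size(0), device=feat_q.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)

def generator_loss(mr, G_mr2ct, G_ct2mr, D_ct, encoder,
                   lam_cyc=10.0, lam_nce=1.0):
    """Adversarial + cycle-consistency + contrastive terms for MRI -> sCT."""
    s_ct = G_mr2ct(mr)                                   # synthetic CT
    d_out = D_ct(s_ct)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))      # LSGAN-style adversarial term
    cyc = F.l1_loss(G_ct2mr(s_ct), mr)                   # MRI -> sCT -> MRI consistency
    nce = patch_nce_loss(encoder(s_ct), encoder(mr))     # patch-wise correspondence
    return adv + lam_cyc * cyc + lam_nce * nce

The symmetric CT -> MRI direction would add mirrored adversarial, cycle, and contrastive terms.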


Subjects
Brain Neoplasms , Radiotherapy Planning, Computer-Assisted , Humans , Radiotherapy Planning, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Signal-To-Noise Ratio , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/radiotherapy
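For reference, the MAE, PSNR, and SSIM figures quoted in the record above follow directly from their definitions. A hedged sketch, assuming float NumPy arrays in Hounsfield units and scikit-image's structural_similarity; the fixed HU dynamic range is an assumption, since the paper's normalization is not stated here.

# Image-similarity metrics between real and synthetic CT: MAE (HU), PSNR (dB), SSIM.
# hu_range is an assumed dynamic range, not a value taken from the paper.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_sct(sct, ct, hu_range=4095.0):
    mae = np.mean(np.abs(sct - ct))                  # mean absolute error, in HU
    mse = np.mean((sct - ct) ** 2)
    psnr = 10.0 * np.log10(hu_range ** 2 / mse)      # peak signal-to-noise ratio, in dB
    ssim = structural_similarity(sct, ct, data_range=hu_range)
    return mae, psnr, ssim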
2.
Plants (Basel); 12(21), 2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37960032

ABSTRACT

Rice blast causes major production losses in rice, so its early detection plays a crucial role in global food security. In this study, a semi-supervised contrastive unpaired translation iterative network is designed for rice blast detection from unmanned aerial vehicle (UAV) images. It incorporates multiple-critic contrastive unpaired translation networks to generate synthetic images with different disease levels through an iterative data-augmentation process. These synthetic images, together with real images, are then used to train a detection network called RiceBlastYolo. Notably, the RiceBlastYolo model integrates an improved feature pyramid network (FPN) and a general soft-labeling approach. The results show that the detection precision of RiceBlastYolo is 99.51% at an intersection-over-union (IoU) threshold of 0.5, and the average precision is 98.75% over IoU thresholds from 0.5 to 0.9. The precision and recall rates are 98.23% and 99.99%, respectively, higher than those of common detection models (YOLO, YOLACT, YOLACT++, Mask R-CNN, and Faster R-CNN). The model's performance was further verified on external data. The findings demonstrate that the proposed model can accurately identify rice blast under field-scale conditions.
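The IoU-thresholded precision and recall quoted above follow the standard object-detection evaluation. A minimal pure-Python sketch; the box format (x1, y1, x2, y2) and the greedy matching rule are assumptions about the exact protocol, not details from the paper.

# IoU-thresholded detection precision/recall with greedy one-to-one matching.
# preds are assumed sorted by descending confidence; boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def precision_recall(preds, gts, thresh=0.5):
    """Match each prediction to at most one unmatched ground-truth box."""
    matched, tp = set(), 0
    for p in preds:
        cand = [(iou(p, g), i) for i, g in enumerate(gts) if i not in matched]
        best_iou, best_i = max(cand, default=(0.0, -1))
        if best_iou >= thresh:
            matched.add(best_i)
            tp += 1
    return tp / max(len(preds), 1), tp / max(len(gts), 1)

Sweeping thresh from 0.5 to 0.9 and averaging gives one plausible reading of the reported IoU 0.5-0.9 figure.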

3.
J Med Imaging (Bellingham); 10(3): 037502, 2023 May.
Article in English | MEDLINE | ID: mdl-37358991

ABSTRACT

Purpose: The diagnosis and prognosis of breast cancer rely on histopathology image analysis. In this context, proliferation markers, especially Ki67, are increasingly important. Diagnosis using these markers is based on quantifying proliferation, which entails counting Ki67-positive and Ki67-negative tumoral cells in epithelial regions while excluding stromal cells. However, stromal cells are often very difficult to distinguish from negative tumoral cells in Ki67 images and frequently cause errors in automatic analysis.

Approach: We study the use of automatic semantic segmentation based on convolutional neural networks (CNNs) to separate stromal and epithelial areas in Ki67-stained images. CNNs must be trained on extensive databases with associated ground truth. As such databases are not publicly available, we propose a method to produce them with minimal manual labeling effort. Inspired by the procedure used by pathologists, we produced the database through knowledge transfer from cytokeratin-19 images to Ki67 using an image-to-image (I2I) translation network.

Results: The automatically produced stroma masks are manually corrected and used to train a CNN that predicts very accurate stroma masks for unseen Ki67 images, achieving an F-score of 0.87. Examples of the effect on the Ki67 score show the importance of stroma segmentation.

Conclusions: The I2I translation method proved very useful for building ground-truth labels in a task where manual labeling is unfeasible. With reduced correction effort, a dataset can be built to train neural networks for the difficult problem of separating epithelial regions from stroma in stained images, where separation is very hard without additional information.
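The F-score of 0.87 quoted above is presumably the standard pixel-wise F1 over the binary stroma masks. A minimal sketch, assuming boolean NumPy arrays of equal shape; names are illustrative.

# Pixel-wise F1 between a predicted and a reference binary stroma mask.
import numpy as np

def f_score(pred, truth, eps=1e-9):
    tp = np.logical_and(pred, truth).sum()       # true-positive pixels
    precision = tp / (pred.sum() + eps)          # fraction of predicted stroma that is correct
    recall = tp / (truth.sum() + eps)            # fraction of true stroma recovered
    return 2 * precision * recall / (precision + recall + eps)

As the abstract notes, restricting the positive/negative cell counts to regions outside the predicted stroma mask is what corrects the resulting Ki67 score.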
