Results 1 - 11 of 11
1.
Comput Biol Med ; 179: 108793, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955126

ABSTRACT

Skin tumors are the most common tumors in humans, and the clinical characteristics of three common non-melanoma tumors (intradermal nevus, IDN; seborrheic keratosis, SK; basal cell carcinoma, BCC) are similar, resulting in a high misdiagnosis rate. Accurate differential diagnosis of these tumors must be based on pathological images. However, a shortage of experienced dermatological pathologists leads to bias in the diagnostic accuracy of these skin tumors in China. In this paper, we establish a skin pathological image dataset, SPMLD, for these three non-melanoma tumors to enable their automatic and accurate intelligent identification. Meanwhile, we propose a lesion-area-based enhanced classification network with a KLS module and an attention module. Specifically, we first collect thousands of H&E-stained tissue sections from patients with clinically and pathologically confirmed IDN, SK, and BCC from a single-center hospital. Then, we scan them to construct a pathological image dataset of these three skin tumors. Furthermore, we annotate the complete lesion area of each whole pathology image to better capture the pathologist's diagnostic process. In addition, we apply the proposed network to lesion classification on the SPMLD dataset. Finally, we conduct a series of experiments demonstrating that this annotation and our network can effectively improve the classification results of various networks. The source dataset and code are available at https://github.com/efss24/SPMLD.git.
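As a rough illustration of how three-class results like these might be scored (the actual SPMLD evaluation code lives in the linked repository; the label order and toy data below are hypothetical), per-class recall can be read directly off a confusion matrix:

```python
import numpy as np

CLASSES = ["IDN", "SK", "BCC"]  # hypothetical label order, not from the paper

def confusion_matrix(y_true, y_pred, n_classes=3):
    """Count (true, predicted) label pairs into an n_classes x n_classes matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_recall(cm):
    """Recall for each class: correct counts (diagonal) over row totals."""
    return cm.diagonal() / cm.sum(axis=1)

# toy predictions for six slides, two per class
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred)
print(per_class_recall(cm))  # IDN 0.5, SK 1.0, BCC 0.5
```

The same matrix also yields overall accuracy (trace over total), which is how headline numbers for such classifiers are usually reported.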

2.
Med Phys ; 49(10): 6424-6438, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35982470

ABSTRACT

PURPOSE: Magnetic resonance imaging (MRI) plays an important role in clinical diagnosis, but it is susceptible to metal artifacts. The generative adversarial network GatedConv, with gated convolution (GC) and contextual attention (CA), was used to inpaint the metal artifact region in MRI images. METHODS: MRI images of the teeth and nearby regions from 70 patients were collected; the scanning sequence was a T1-weighted high-resolution isotropic volume examination sequence. A total of 10 000 slices were obtained after data augmentation, of which 8000 were used for training. MRI images were normalized to [-1, 1]. Based on randomly generated masks, U-Net, pix2pix, PConv (partial convolution), and GatedConv were used to inpaint the artifact region of MRI images. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) over the mask were used to compare these methods. The inpainting effect on the test dataset using dental masks was also evaluated. In addition, the artifact area of clinical MRI images was inpainted based on masks sketched by physicians. Finally, earring artifacts and artifacts caused by abnormal signal foci were inpainted to verify the generalization of the models. RESULTS: GatedConv could directly and effectively inpaint the incomplete MRI images generated by masks in the image domain. For U-Net, pix2pix, PConv, and GatedConv, the masked MAEs were 0.1638, 0.1812, 0.1688, and 0.1596, respectively, and the masked PSNRs were 18.2136, 17.5692, 18.2258, and 18.3035 dB, respectively. Using dental masks, the results of U-Net, pix2pix, and PConv differed more from the real images in terms of alveolar shape and surrounding tissue than those of GatedConv. GatedConv could inpaint the metal artifact region in clinical MRI images more effectively than the other models, but increasing the mask area reduced the inpainting effect. Inpainted MRI images from GatedConv coincided with metal-artifact-reduced CT images in alveolar and tissue structure, and GatedConv could successfully inpaint artifacts caused by abnormal signal foci, whereas the other models failed. An ablation study demonstrated that GC and CA increased the reliability of the inpainting performance of GatedConv. CONCLUSION: MRI images are affected by metal, and signal void areas appear near metal. GatedConv can inpaint the MRI metal artifact region in the image domain directly and effectively and improve image quality. Medical image inpainting with GatedConv has potential value for tasks such as positron emission tomography (PET) attenuation correction in PET/MRI and adaptive radiotherapy with synthetic CT based on MRI.
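The masked MAE and PSNR figures above are computed only over the inpainted region. A minimal sketch of those two metrics, assuming images normalized to [-1, 1] as in the paper (the function names and toy data are ours, not the authors'):

```python
import numpy as np

def masked_mae(pred, target, mask):
    """Mean absolute error computed only over masked (inpainted) pixels."""
    m = mask.astype(bool)
    return np.abs(pred[m] - target[m]).mean()

def masked_psnr(pred, target, mask, data_range=2.0):
    """PSNR over masked pixels; data_range is 2.0 for images in [-1, 1]."""
    m = mask.astype(bool)
    mse = ((pred[m] - target[m]) ** 2).mean()
    return 10 * np.log10(data_range ** 2 / mse)

# toy example: a 0.2 absolute error inside a 2x2 inpainted patch of a zero image
gt = np.zeros((4, 4))
pred = gt.copy()
mask = np.zeros((4, 4))
mask[:2, :2] = 1
pred[:2, :2] = 0.2
print(masked_mae(pred, gt, mask))   # 0.2
print(masked_psnr(pred, gt, mask))  # ≈ 20 dB
```

Restricting both metrics to the mask keeps the large untouched background from diluting the score of the inpainted region.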


Subjects
Artifacts , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Reproducibility of Results , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods
3.
J Appl Clin Med Phys ; 23(3): e13516, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34985188

ABSTRACT

In modern radiotherapy, reducing patients' daily setup error is important for achieving accuracy. In this study, we propose a new approach to developing an assist system for radiotherapy position setup by using augmented reality (AR). We aimed to improve the accuracy of the position setup of patients undergoing radiotherapy and to evaluate the setup error of patients diagnosed with head and neck cancer and of patients diagnosed with chest and abdomen cancer. We acquired each patient's simulation CT data for the three-dimensional (3D) reconstruction of the external surface and organs. The AR tracking software detected the calibration module and loaded the 3D virtual model. The calibration module was aligned with the Linac isocenter by using room lasers, and the virtual cube was then aligned with the calibration module to complete the calibration of the 3D virtual model against the Linac isocenter. The patient position setup was then carried out, and point cloud registration was performed between the patient and the 3D virtual model such that the patient's posture was consistent with the 3D virtual model. Twenty patients diagnosed with head and neck cancer and 20 patients diagnosed with chest and abdomen cancer in the supine position were analyzed for the residual errors of the conventional laser and AR-guided position setups. Results show that for patients diagnosed with head and neck cancer, the difference between the two positioning methods was not statistically significant (P > 0.05). For patients diagnosed with chest and abdomen cancer, the residual errors of the two positioning methods in the superior-inferior and anterior-posterior directions were statistically significant (t = -5.80, -4.98, P < 0.05), as were the residual errors in the three rotation directions (t = -2.29 to -3.22, P < 0.05).
The experimental results showed that the AR technology can effectively assist in the position setup of patients undergoing radiotherapy, significantly reduce the position setup errors in patients diagnosed with chest and abdomen cancer, and improve the accuracy of radiotherapy.
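The t statistics above come from paired comparisons of residual setup errors under the two positioning methods. A small self-contained sketch of the paired t statistic (illustrative only; the study presumably used standard statistical software, and the sample values below are made up):

```python
import math

def paired_t(a, b):
    """t statistic for paired samples: mean difference over its standard error."""
    d = [x - y for x, y in zip(a, b)]          # per-patient differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# hypothetical residual errors (mm) for four patients under two setups
laser = [1.0, 2.0, 3.0, 4.0]
ar    = [0.0, 0.0, 0.0, 0.0]
print(paired_t(laser, ar))  # ≈ 3.873
```

The statistic is then compared against a t distribution with n - 1 degrees of freedom to obtain the reported P values.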


Subjects
Augmented Reality , Head and Neck Neoplasms , Radiation Oncology , Radiotherapy, Image-Guided , Calibration , Humans , Patient Positioning , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy Setup Errors/prevention & control , Radiotherapy, Image-Guided/methods
4.
Comput Methods Programs Biomed ; 215: 106600, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34971855

ABSTRACT

BACKGROUND AND OBJECTIVES: Thyroid nodules are a common disorder of the endocrine system. Segmentation of thyroid nodules on ultrasound images is an important step in the evaluation and diagnosis of nodules and an initial step in computer-aided diagnostic systems. The accuracy and consistency of segmentation remain a challenge due to the low contrast, speckle noise, and low resolution of ultrasound images. Therefore, the study of deep learning-based algorithms for thyroid nodule segmentation is important. This study utilizes soft shape supervision to improve the detection and segmentation of nodule boundaries. Soft shape supervision emphasizes boundary features and assists the network in segmenting nodules accurately. METHODS: We propose a dual-path convolutional neural network, comprising region and shape paths, which uses DeepLabV3+ as the backbone. Soft shape supervision blocks are inserted between the two paths to implement cross-path attention mechanisms. The blocks enhance the representation of shape features and add them to the region path as auxiliary information. Thus, the network can accurately detect and segment thyroid nodules. RESULTS: We collected 3786 ultrasound images of thyroid nodules to train and test our network. Compared with the ground truth, the test results achieve an accuracy of 95.81% and a DSC of 85.33%. The visualization results also suggest that the network has learned clear and accurate boundaries of the nodules. The evaluation metrics and visualization results demonstrate that the network's segmentation performance is superior to that of other classical deep learning-based networks. CONCLUSIONS: The proposed dual-path network can accurately realize automatic segmentation of thyroid nodules on ultrasound images and can be used as an initial step in computer-aided diagnosis. It shows superior performance to other classical methods and demonstrates potential for accurate segmentation of nodules in clinical applications.
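The DSC reported above measures overlap between the predicted and ground-truth nodule masks. A minimal sketch of the Dice similarity coefficient for binary masks (our own toy example, not the authors' evaluation code):

```python
import numpy as np

def dice_coef(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# toy masks: two 8-pixel bands that share one 4-pixel row
a = np.zeros((4, 4)); a[:2] = 1
b = np.zeros((4, 4)); b[1:3] = 1
print(dice_coef(a, b))  # ≈ 0.5
```

DSC rewards overlap symmetrically, which is why it is preferred over raw pixel accuracy when nodules occupy only a small fraction of the image.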


Subjects
Thyroid Nodule , Algorithms , Diagnosis, Computer-Assisted , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Thyroid Nodule/diagnostic imaging , Ultrasonography
5.
Med Phys ; 49(1): 144-157, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34766623

ABSTRACT

PURPOSE: Recent studies have illustrated that the peritumoral regions of medical images have value for clinical diagnosis. However, existing approaches using peritumoral regions mainly focus on the diagnostic capability of a single region and ignore the advantages of effectively fusing the intratumoral and peritumoral regions. In addition, these methods need accurate segmentation masks in the testing stage, which are tedious and inconvenient in clinical applications. To address these issues, we construct a deep convolutional neural network that can adaptively fuse the information of multiple tumoral regions (FMRNet) for breast tumor classification using ultrasound (US) images, without segmentation masks in the testing stage. METHODS: To fully exploit the potential relationships among regions, we design a fused network and two independent modules to extract and fuse features of multiple regions simultaneously. First, we introduce two enhanced combined-tumoral (EC) region modules, aiming to enhance the combined-tumoral features gradually. Then, we design a three-branch module for extracting and fusing the features of the intratumoral, peritumoral, and combined-tumoral regions, denoted the intratumoral, peritumoral, and combined-tumoral module. In particular, we design a novel fusion module that introduces a channel attention module to adaptively fuse the features of the three regions. The model is evaluated on two public datasets of breast tumor ultrasound images, UDIAT and BUSI. Two independent groups of experiments are performed on the respective datasets using a fivefold stratified cross-validation strategy. Finally, we conduct ablation experiments in which BUSI is used as the training set and UDIAT is used as the testing set. RESULTS: We conduct detailed ablation experiments on the two proposed modules and comparative experiments against other existing representative methods.
The experimental results show that the proposed method yields state-of-the-art performance on both datasets. In particular, on the UDIAT dataset, the proposed FMRNet achieves an accuracy of 0.945 and a specificity of 0.945. Moreover, the precision (PRE = 0.909) improves dramatically, by 21.6%, on the BUSI dataset compared with the best-performing existing method. CONCLUSION: The proposed FMRNet shows good performance in breast tumor classification with US images and proves its capability of exploiting and fusing the information of multiple tumoral regions. Furthermore, FMRNet has potential value in classifying other types of cancers using multiple tumoral regions of other kinds of medical images.
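One plausible reading of the channel-attention fusion step is weighting the three region branches per channel before summing. The NumPy sketch below is an assumption-laden simplification of that idea, not the actual FMRNet module (which is convolutional and learned):

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_regions(feats):
    """Fuse a list of (C, H, W) feature maps with per-channel attention weights.

    Each branch is summarized by global average pooling, and a softmax across
    branches makes the weights for every channel sum to one.
    """
    stacked = np.stack(feats)            # (R, C, H, W): R region branches
    pooled = stacked.mean(axis=(2, 3))   # (R, C) channel descriptors
    w = softmax(pooled, axis=0)          # normalize across regions per channel
    return (w[:, :, None, None] * stacked).sum(axis=0)  # (C, H, W)

# three identical toy branches fuse back to themselves
out = fuse_regions([np.ones((2, 3, 3))] * 3)
print(out.shape)  # (2, 3, 3)
```

In the real network the pooled descriptors would pass through learned layers before the softmax; the fixed pooling here only illustrates the weighting mechanism.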


Subjects
Breast Neoplasms , Breast , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Ultrasonography , Ultrasonography, Mammary
6.
Radiat Oncol ; 16(1): 202, 2021 Oct 14.
Article in English | MEDLINE | ID: mdl-34649572

ABSTRACT

OBJECTIVE: To develop a method for generating high-quality synthetic CT (sCT) images from low-dose cone-beam CT (CBCT) images by using attention-guided generative adversarial networks (AGGAN), and to apply these images to dose calculations in radiotherapy. METHODS: The CBCT/planning CT images of 170 patients undergoing thoracic radiotherapy were used for training and testing. The CBCT images were scanned under a fast protocol with 50% fewer clinical projection frames than the standard chest M20 protocol. Training with aligned paired images was performed using conditional adversarial networks (so-called pix2pix), and training with unpaired images was carried out with cycle-consistent adversarial networks (cycleGAN) and AGGAN, through which sCT images were generated. The image quality and Hounsfield unit (HU) values of the sCT images generated by the three neural networks were compared. The treatment plan was designed on CT and copied to the sCT images to calculate the dose distribution. RESULTS: The image quality of the sCT images from all three methods is significantly improved compared with the original CBCT images. AGGAN achieves the best image quality in the testing patients, with the smallest mean absolute error (MAE, 43.5 ± 6.69), the largest structural similarity (SSIM, 93.7 ± 3.88), and the highest peak signal-to-noise ratio (PSNR, 29.5 ± 2.36). The sCT images generated by all three methods showed superior dose calculation accuracy, with higher gamma passing rates, compared with the original CBCT images. AGGAN offered the highest gamma passing rate (91.4 ± 3.26) under the strictest criterion of 1 mm/1%. In the phantom study, the sCT images generated by AGGAN demonstrated the best image quality and the highest dose calculation accuracy. CONCLUSIONS: High-quality sCT images were generated from low-dose thoracic CBCT images by using the proposed AGGAN with unpaired CBCT and CT images.
The dose distribution could be calculated accurately based on sCT images in radiotherapy.
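The gamma passing rates above combine a dose-difference criterion (1%) with a distance-to-agreement criterion (1 mm). The sketch below is a deliberately simplified 1D global gamma analysis; real gamma evaluation works on 2D/3D dose grids with interpolation, so this only illustrates the idea:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm=1.0, dta_mm=1.0, dd_frac=0.01):
    """Fraction of reference points with gamma <= 1 (global normalization).

    dd_frac scales the dose tolerance by the reference maximum; dta_mm is the
    distance-to-agreement tolerance.
    """
    x = np.arange(len(ref)) * spacing_mm
    dd = dd_frac * ref.max()
    passed = 0
    for i, d in enumerate(ref):
        # gamma at point i: minimum combined dose/distance discrepancy
        # over all evaluated points
        g = np.sqrt(((ev - d) / dd) ** 2 + ((x - x[i]) / dta_mm) ** 2)
        passed += g.min() <= 1.0
    return passed / len(ref)

ref = np.array([1.0, 2.0, 3.0])
print(gamma_pass_rate(ref, ref))  # identical distributions pass everywhere: 1.0
```

A point passes if some nearby evaluated dose is close enough in the combined dose-distance metric, which is why gamma is more forgiving than a pointwise dose comparison in steep gradients.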


Subjects
Bone Neoplasms/pathology , Lung Neoplasms/pathology , Neural Networks, Computer , Phantoms, Imaging , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Soft Tissue Neoplasms/pathology , Bone Neoplasms/diagnostic imaging , Bone Neoplasms/radiotherapy , Cone-Beam Computed Tomography/methods , Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Organs at Risk/radiation effects , Prognosis , Radiotherapy Dosage , Soft Tissue Neoplasms/diagnostic imaging , Soft Tissue Neoplasms/radiotherapy , Tomography, X-Ray Computed/methods
7.
Phys Med Biol ; 66(17)2021 08 23.
Article in English | MEDLINE | ID: mdl-34330122

ABSTRACT

A long-standing problem in image-guided radiotherapy is that inferior intraoperative images are difficult for automatic registration algorithms. In particular, for digital radiography (DR) and digitally reconstructed radiographs (DRR), the blurred, low-contrast, and noisy DR makes multimodal DR-DRR registration challenging. Therefore, we propose a novel CNN-based method called CrossModalNet that exploits the high-quality preoperative modality (DRR) to handle the limitations of the intraoperative images (DR), thereby improving registration accuracy. The method consists of two parts: DR-DRR contour prediction and contour-based rigid registration. We designed the CrossModal Attention Module and CrossModal Refine Module to fully exploit multiscale crossmodal features and implement crossmodal interactions during the feature encoding and decoding stages. The predicted anatomical contours of DR-DRR are then registered by the classic mutual information method. We collected 2486 patient scans to train CrossModalNet and 170 scans to test its performance. The results show that it outperforms classic and state-of-the-art methods, with a 95th percentile Hausdorff distance of 5.82 pixels and a registration accuracy of 81.2%. The code is available at https://github.com/lc82111/crossModalNet.
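The 95th percentile Hausdorff distance reported above can be sketched for two 2D contour point sets as follows (a brute-force illustration on toy points; production code would use optimized distance transforms rather than the full pairwise matrix):

```python
import numpy as np

def hd95(a, b):
    """95th percentile Hausdorff distance between two (N, 2) point sets, in pixels.

    Uses the larger of the two directed 95th-percentile nearest-neighbor
    distances, which is robust to a few outlier points.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    ab = d.min(axis=1)  # each point in a to its nearest point in b
    ba = d.min(axis=0)  # each point in b to its nearest point in a
    return max(np.percentile(ab, 95), np.percentile(ba, 95))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(hd95(pts, pts + np.array([0.0, 3.0])))  # 3.0
```

Taking the 95th percentile instead of the maximum keeps a single stray contour point from dominating the reported distance.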


Subjects
Algorithms , Radiotherapy, Image-Guided , Humans , Image Processing, Computer-Assisted , Multimodal Imaging , Radiographic Image Enhancement
8.
Quant Imaging Med Surg ; 11(5): 1983-2000, 2021 May.
Article in English | MEDLINE | ID: mdl-33936980

ABSTRACT

BACKGROUND: To investigate the feasibility of using a stacked generative adversarial network (sGAN) to synthesize pseudo computed tomography (CT) images from ultrasound (US) images. METHODS: Pre-radiotherapy US and CT images of 75 patients with cervical cancer were selected as the training set for pseudo-image synthesis. In the first stage, labeled US images were used as the input of the first conditional GAN to obtain low-resolution pseudo CT images; in the second stage, a super-resolution reconstruction GAN was used. The pseudo CT image obtained in the first stage was used as the input, after which a high-resolution pseudo CT image with clear texture and accurate grayscale information was obtained. Fivefold cross-validation was performed to verify our model. The mean absolute error (MAE) was used to compare each pseudo CT with the same patient's real CT image. In addition, pre-radiotherapy images of another 10 patients with cervical cancer were selected for testing, and the pseudo CT images obtained using the neural style transfer (NST) and CycleGAN methods were compared with those obtained using the sGAN method proposed in this study. Finally, the dosimetric accuracy of the pseudo CT images was verified by phantom experiments. RESULTS: The MAE values between the pseudo CT obtained with sGAN and the real CT in fivefold cross-validation are 66.82 ± 1.59 HU, 66.36 ± 1.85 HU, 67.26 ± 2.37 HU, 66.34 ± 1.75 HU, and 67.22 ± 1.30 HU, respectively. The metrics normalized mutual information (NMI), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) between the pseudo CT images obtained using the sGAN method and the ground truth CT (CTgt) images were compared with those of the other two methods via the paired t-test, and the differences were statistically significant. The dice similarity coefficient (DSC) measurements showed that the pseudo CT images obtained using the sGAN method were more similar to the CTgt images for organs at risk. The dosimetric phantom experiments also showed that the dose distribution of the pseudo CT images synthesized by the new method was similar to that of the CTgt images. CONCLUSIONS: Compared with the NST and CycleGAN methods, the sGAN method can obtain more accurate pseudo CT images, thereby providing a new method for image guidance in radiotherapy for cervical cancer.
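The NMI metric used above can be estimated from a joint intensity histogram of the two images. A minimal sketch using the Studholme definition NMI = (H(A) + H(B)) / H(A, B) (the bin count and toy data are arbitrary; the paper does not specify its implementation):

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a probability vector, ignoring zero bins."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def nmi(a, b, bins=32):
    """Normalized mutual information between two images of equal size."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability table
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

a = np.arange(1024, dtype=float)
print(nmi(a, a))  # identical images give the maximum value of 2
```

NMI peaks for perfectly dependent intensities and falls toward 1 as the images become statistically independent, which is what makes it a useful similarity score for images of different modalities.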

9.
Front Oncol ; 11: 603844, 2021.
Article in English | MEDLINE | ID: mdl-33777746

ABSTRACT

PURPOSE: To propose a pseudo-CT (CTCycleGAN) synthesis method based on an improved 3D cycle generative adversarial network (CycleGAN) to address the limitations of cone-beam CT (CBCT), which cannot be directly applied to the correction of radiotherapy plans. METHODS: An improved U-Net with residual connections and attention gates was used as the generator, and the discriminator was a fully convolutional neural network (FCN). The imaging quality of the pseudo-CT images is improved by adding a 3D gradient loss function. Fivefold cross-validation was performed to validate our model. Each generated pseudo-CT was compared against the real CT image (ground truth CT, CTgt) of the same patient based on the mean absolute error (MAE) and structural similarity index (SSIM). The dice similarity coefficient (DSC) was used to evaluate the segmentation results of pseudo CT and real CT. 3D CycleGAN performance was compared to 2D CycleGAN based on the normalized mutual information (NMI) and peak signal-to-noise ratio (PSNR) metrics between the pseudo-CT and CTgt images. The dosimetric accuracy of pseudo-CT images was evaluated by gamma analysis. RESULTS: The MAE values between CTCycleGAN and the real CT in fivefold cross-validation are 52.03 ± 4.26 HU, 50.69 ± 5.25 HU, 52.48 ± 4.42 HU, 51.27 ± 4.56 HU, and 51.65 ± 3.97 HU, respectively, and the SSIM values are 0.87 ± 0.02, 0.86 ± 0.03, 0.85 ± 0.02, 0.85 ± 0.03, and 0.87 ± 0.03, respectively. The DSC values for the segmentation of the bladder, cervix, rectum, and bone between CTCycleGAN and real CT images are 91.58 ± 0.45, 88.14 ± 1.26, 87.23 ± 2.01, and 92.59 ± 0.33, respectively. Compared with 2D CycleGAN, the 3D CycleGAN-based pseudo-CT image is closer to the real image, with an NMI value of 0.90 ± 0.01 and a PSNR value of 30.70 ± 0.78. The gamma pass rate of the dose distribution between CTCycleGAN and CTgt is 97.0% (2%/2 mm).
CONCLUSION: The pseudo-CT images obtained based on the improved 3D CycleGAN have more accurate electronic density and anatomical structure.
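The cycle-consistency idea underlying CycleGAN is that mapping CBCT to CT and back should reconstruct the input. A schematic of the cycle loss with placeholder generator functions (the real generators are 3D networks; λ = 10 is a common CycleGAN default, not necessarily this paper's value):

```python
import numpy as np

def l1(a, b):
    """Mean absolute (L1) difference between two arrays."""
    return np.abs(a - b).mean()

def cycle_loss(real_cbct, real_ct, g_ab, g_ba, lam=10.0):
    """Cycle-consistency loss: translating there and back should reconstruct the input."""
    rec_cbct = g_ba(g_ab(real_cbct))  # CBCT -> pseudo-CT -> CBCT
    rec_ct = g_ab(g_ba(real_ct))      # CT -> pseudo-CBCT -> CT
    return lam * (l1(rec_cbct, real_cbct) + l1(rec_ct, real_ct))

x = np.ones((2, 2))
# identity generators reconstruct perfectly, so the loss vanishes
print(cycle_loss(x, x, lambda t: t, lambda t: t))  # 0.0
```

In training this term is added to the adversarial losses (and, in this paper, a 3D gradient loss), so the generators cannot drift to arbitrary CT-looking outputs that lose the patient's anatomy.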

10.
Proc Natl Acad Sci U S A ; 114(51): 13513-13518, 2017 12 19.
Article in English | MEDLINE | ID: mdl-29203653

ABSTRACT

Micronutrient deficiencies such as those of vitamin A and iron affect a third of the world's population, with consequences such as night blindness, higher child mortality, anemia, poor pregnancy outcomes, and reduced work capacity. Many efforts to prevent or treat these deficiencies are hampered by the lack of adequate, accessible, and affordable diagnostic methods that could enable better targeting of interventions. In this work, we demonstrate a rapid diagnostic test and mobile-enabled platform for simultaneously quantifying iron (ferritin), vitamin A (retinol-binding protein), and inflammation (C-reactive protein) status. Our approach, enabled by combining multiple fluorescent markers and immunoassay approaches on a single test, allows us to provide accurate quantification in 15 min even though the physiological range of the markers of interest varies over five orders of magnitude. We report sensitivities of 88%, 100%, and 80% and specificities of 97%, 100%, and 97% for iron deficiency (ferritin <15 ng/mL or 32 pmol/L), vitamin A deficiency (retinol-binding protein <14.7 µg/mL or 0.70 µmol/L), and elevated inflammation (C-reactive protein >3.0 µg/mL or 120 nmol/L), respectively. This technology is suitable for point-of-care use in both resource-rich and resource-limited settings and can be read either by a standard laptop computer or through our previously developed NutriPhone technology. If implemented as either a population-level screening or clinical diagnostic tool, we believe this platform can transform nutritional status assessment and monitoring globally.
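The sensitivities and specificities above follow the usual definitions against reference-standard labels. A minimal sketch, with a made-up ferritin example using the paper's <15 ng/mL deficiency cutoff:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) for binary labels (1 = deficient)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical measurements: flag iron deficiency when ferritin < 15 ng/mL
ferritin = [8.0, 14.9, 15.1, 40.0]  # made-up test readings
truth = [1, 1, 0, 0]                # reference-standard deficiency labels
pred = [1 if f < 15.0 else 0 for f in ferritin]
print(sensitivity_specificity(truth, pred))  # (1.0, 1.0) on this toy data
```

In practice the reported figures come from comparing the test's threshold calls against laboratory reference assays over a patient cohort, not a four-sample toy set.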


Subjects
Anemia, Iron-Deficiency/blood , Molecular Diagnostic Techniques/methods , Point-of-Care Testing , Vitamin A Deficiency/blood , Biomarkers/blood , C-Reactive Protein/metabolism , Ferritins/blood , Humans , Immunoassay/instrumentation , Immunoassay/methods , Immunoassay/standards , Molecular Diagnostic Techniques/instrumentation , Molecular Diagnostic Techniques/standards , Retinol-Binding Proteins/metabolism , Smartphone
11.
Sci Rep ; 4: 4137, 2014 Feb 20.
Article in English | MEDLINE | ID: mdl-24553130

ABSTRACT

Nucleic acid-based diagnostic techniques such as polymerase chain reaction (PCR) are used extensively in medical diagnostics due to their high sensitivity, specificity and quantification capability. In settings with limited infrastructure and unreliable electricity, however, access to such devices is often limited due to the highly specialized and energy-intensive nature of the thermal cycling process required for nucleic acid amplification. Here we integrate solar heating with microfluidics to eliminate thermal cycling power requirements as well as create a simple device infrastructure for PCR. Tests are completed in less than 30 min, and power consumption is reduced to 80 mW, enabling a standard 5.5 Wh iPhone battery to provide 70 h of power to this system. Additionally, we demonstrate a complete sample-to-answer diagnostic strategy by analyzing human skin biopsies infected with Kaposi's Sarcoma herpesvirus (KSHV/HHV-8) through the combination of solar thermal PCR, HotSHOT DNA extraction and smartphone-based fluorescence detection. We believe that exploiting the ubiquity of solar thermal energy as demonstrated here could facilitate broad availability of nucleic acid-based diagnostics in resource-limited areas.
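The reported battery life follows directly from energy over power: 5.5 Wh at an 80 mW draw gives 5.5 / 0.08 ≈ 69 h, consistent with the stated 70 h. As a one-line check:

```python
def runtime_hours(battery_wh, load_mw):
    """Hours a battery of battery_wh watt-hours can supply a load of load_mw milliwatts."""
    return battery_wh / (load_mw / 1000.0)

# 5.5 Wh iPhone battery driving the 80 mW solar-thermal PCR system:
print(runtime_hours(5.5, 80.0))  # ≈ 68.75 h, consistent with the reported ~70 h
```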


Subjects
Cell Phone , Nucleic Acid Amplification Techniques/instrumentation , Solar Energy , Animals , DNA, Viral/analysis , DNA, Viral/metabolism , Herpesvirus 8, Human/genetics , Humans , Mice , Microfluidics/instrumentation , Microfluidics/methods , Nucleic Acid Amplification Techniques/methods , Oligonucleotide Array Sequence Analysis , Skin/virology