Results 1 - 9 of 9
1.
Phys Med; 122: 103381, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38810391

ABSTRACT

PURPOSE: To propose a novel deep learning-based dosimetry method that allows quick and accurate estimation of organ doses for individual patients, using only their computed tomography (CT) images as input. METHODS: Despite recent advances in medical dosimetry, personalized CT dosimetry remains a labour-intensive process. Current state-of-the-art methods rely on time-consuming Monte Carlo (MC) simulations for individual organ dose estimation in CT. The proposed method uses conditional generative adversarial networks (cGANs) to substitute MC simulations with fast dose image generation, based on image-to-image translation. The pix2pix architecture, in conjunction with a regression model, was used to generate the synthetic dose images. The lungs, heart, breast, bone, and skin were manually segmented, and the organ doses calculated from the original and synthetic dose images were compared. RESULTS: The average organ dose estimation error of the proposed method was 8.3% and did not exceed 20% for any of the organs considered. The performance of the method in the clinical environment was also assessed. Using segmentation tools developed in-house, an automatic organ dose calculation pipeline was set up; calculating heart and lung doses took about 2 s per CT slice. CONCLUSIONS: This work shows that deep learning-enabled personalized CT dosimetry is feasible in real time, using only patient CT images as input.
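As a companion to the abstract above, here is a minimal sketch of the organ-dose step, assuming a trained image-to-image generator in the pix2pix spirit: a synthetic dose image is produced from a CT slice and averaged over a segmented organ mask. The toy network and random inputs are illustrative stand-ins, not the authors' model or data.

```python
# Hedged sketch: image-to-image dose estimation in the spirit of pix2pix.
# The generator below is a toy stand-in, not the authors' network.
import torch
import torch.nn as nn

class ToyDoseGenerator(nn.Module):
    """Maps a CT slice (1 channel) to a synthetic dose image (1 channel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, ct):
        return self.net(ct)

def organ_dose(dose_image: torch.Tensor, organ_mask: torch.Tensor) -> float:
    """Mean synthetic dose over a segmented organ (binary mask)."""
    return (dose_image * organ_mask).sum().item() / organ_mask.sum().item()

gen = ToyDoseGenerator().eval()
ct_slice = torch.randn(1, 1, 128, 128)               # stand-in CT slice
mask = (torch.rand(1, 1, 128, 128) > 0.9).float()    # stand-in organ mask
with torch.no_grad():
    dose = gen(ct_slice)
print(f"estimated organ dose (arbitrary units): {organ_dose(dose, mask):.4f}")
```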


Subjects
Deep Learning; Precision Medicine; Radiometry; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Radiometry/methods; Image Processing, Computer-Assisted/methods; Feasibility Studies; Radiation Dosage; Monte Carlo Method; Time Factors
2.
J Magn Reson Imaging; 58(4): 1200-1210, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36733222

ABSTRACT

BACKGROUND: Although susceptibility-weighted imaging (SWI) is the gold standard for visualizing cerebral microbleeds (CMBs) in the brain, the required phase data are not always available clinically. Having a postprocessing tool for generating SWI contrast from T2*-weighted magnitude images is therefore advantageous. PURPOSE: To create synthetic SWI images from clinical T2*-weighted magnitude images using deep learning and to evaluate the resulting images in terms of similarity to conventional SWI images and ability to detect radiation-associated CMBs. STUDY TYPE: Retrospective. POPULATION: A total of 145 adults (87 males/58 females; mean age 43.9 years) with radiation-associated CMBs were used to train (16,093 patches/121 patients), validate (484 patches/4 patients), and test (2420 patches/20 patients) our networks. FIELD STRENGTH/SEQUENCE: 3D T2*-weighted gradient-echo acquired at 3 T. ASSESSMENT: Structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), normalized mean squared error (nMSE), CMB counts, and line profiles were compared among magnitude, original SWI, and synthetic SWI images. Three blinded raters (J.E.V.M., M.A.M., B.B., with 8, 6, and 4 years of experience, respectively) independently rated and classified test-set images. STATISTICAL TESTS: Kruskal-Wallis and Wilcoxon signed-rank tests were used to compare SSIM, PSNR, nMSE, and CMB counts among magnitude, original SWI, and predicted synthetic SWI images. Intraclass correlation assessed interrater variability. P values <0.005 were considered statistically significant. RESULTS: SSIM values of the predicted vs. original SWI (0.972, 0.995, 0.9864) were statistically significantly higher than those of the magnitude vs. original SWI (0.970, 0.994, 0.9861) for whole brain, vascular structures, and brain tissue regions, respectively. Of the CMBs detected on the original SWI images, 67% (19/28) were also detected on the predicted SWI, whereas only 10 (36%) were detected on magnitude images. Overall image quality was similar between the synthetic and original SWI images, with fewer artifacts on the former. CONCLUSIONS: This study demonstrated that deep learning can increase the susceptibility contrast present in neurovasculature and CMBs on T2*-weighted magnitude images, without residual susceptibility-induced artifacts. This may be useful for more accurately estimating CMB burden from magnitude images alone. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
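A short sketch of the similarity metrics reported above (SSIM, PSNR, nMSE), assuming scikit-image is available; the nMSE normalization by reference signal power and the random stand-in images are assumptions, not the study's definitions or data.

```python
# Hedged sketch of the reported similarity metrics (SSIM, PSNR, nMSE).
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def nmse(reference: np.ndarray, prediction: np.ndarray) -> float:
    """Mean squared error normalized by the reference signal power (assumed definition)."""
    return float(np.mean((reference - prediction) ** 2) / np.mean(reference ** 2))

rng = np.random.default_rng(0)
original_swi = rng.random((64, 64)).astype(np.float32)    # stand-in original SWI
synthetic_swi = np.clip(
    original_swi + 0.01 * rng.standard_normal((64, 64)).astype(np.float32), 0, 1
)                                                          # stand-in synthetic SWI

print("SSIM:", structural_similarity(original_swi, synthetic_swi, data_range=1.0))
print("PSNR:", peak_signal_noise_ratio(original_swi, synthetic_swi, data_range=1.0))
print("nMSE:", nmse(original_swi, synthetic_swi))
```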


Subjects
Deep Learning; Male; Adult; Female; Humans; Retrospective Studies; Cerebral Hemorrhage/diagnostic imaging; Sensitivity and Specificity; Magnetic Resonance Imaging/methods
3.
J Big Data; 8(1): 94, 2021.
Article in English | MEDLINE | ID: mdl-34760433

ABSTRACT

Large data requirements are often the main hurdle in training neural networks. Convolutional neural network (CNN) classifiers in particular require tens of thousands of pre-labeled images per category to approach human-level accuracy, while often failing to generalize to out-of-domain test sets. Acquiring and labelling such datasets is an expensive, time-consuming, and tedious task in practice. Synthetic data provide a cheap and efficient way to assemble such large datasets. Using domain randomization (DR), we show that a sufficiently well generated synthetic image dataset can be used to train a neural network classifier that rivals state-of-the-art models trained on real datasets, achieving accuracy as high as 88% on a baseline cats-vs-dogs classification task. We show that the most important domain randomization parameter is a large variety of subjects, while secondary parameters such as lighting and textures are less significant to model accuracy. Our results also suggest that models trained on domain-randomized images transfer to new domains better than those trained on real photos. Model performance appears to remain stable as the number of categories increases.
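The following is a minimal sketch of the domain-randomization idea described above: a fixed subject cutout is composited onto randomized backgrounds under randomized lighting. The parameter ranges and compositing scheme are assumptions for illustration, not the paper's generator.

```python
# Hedged sketch of domain randomization: composite a subject cutout onto
# randomized backgrounds with random lighting. Parameter ranges are assumptions.
import numpy as np

rng = np.random.default_rng(42)

def randomized_sample(subject: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Place `subject` (HxWx3, with binary alpha mask) on a random textured background."""
    h, w, _ = subject.shape
    background = rng.random((h, w, 3))        # random background texture
    brightness = rng.uniform(0.5, 1.5)        # random lighting
    composite = alpha[..., None] * subject + (1 - alpha[..., None]) * background
    return np.clip(composite * brightness, 0.0, 1.0)

subject = rng.random((64, 64, 3))                     # stand-in subject crop
alpha = (rng.random((64, 64)) > 0.5).astype(float)    # stand-in cutout mask
dataset = [randomized_sample(subject, alpha) for _ in range(8)]
print(len(dataset), dataset[0].shape)
```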

4.
Cancers (Basel); 13(13), 2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34206336

ABSTRACT

Modern generative deep learning (DL) architectures allow for unsupervised learning of latent representations that can be exploited in several downstream tasks. Within the field of oncological medical imaging, we term these latent representations "digital tumor signatures" and hypothesize that they can be used, in analogy to radiomics features, to differentiate between lesions and normal liver tissue. Moreover, we conjecture that they can be used for the generation of synthetic data, specifically for the artificial insertion and removal of liver tumor lesions at user-defined spatial locations in CT images. Our approach utilizes an implicit autoencoder, an unsupervised model architecture that combines an autoencoder and two generative adversarial network (GAN)-like components. The model was trained on liver patches from 25 or 57 in-house abdominal CT scans, depending on the experiment, demonstrating that only minimal data is required for synthetic image generation. The model was evaluated on a publicly available data set of 131 scans. We show that a PCA embedding of the latent representation captures the structure of the data, providing the foundation for the targeted insertion and removal of tumor lesions. To assess the quality of the synthetic images, we conducted two experiments with five radiologists. In the first experiment, only one rater and the ensemble-rater were marginally above the chance level in distinguishing real from synthetic data. In the second experiment, no rater was above the chance level. To illustrate that the "digital signatures" can also be used to differentiate lesion from normal tissue, we employed several machine learning methods. The best-performing method, a LinearSVM, obtained 95% (97%) accuracy, 94% (95%) sensitivity, and 97% (99%) specificity, depending on whether all data or only normal-appearing patches were used for training of the implicit autoencoder. Overall, we demonstrate that the proposed unsupervised learning paradigm can be utilized for the removal and insertion of liver lesions at user-defined spatial locations and that the digital signatures can be used to discriminate between lesions and normal liver tissue in abdominal CT scans.
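To illustrate the downstream use of the latent "digital tumor signatures", here is a hedged sketch of a PCA embedding followed by a linear SVM separating lesion from normal-tissue codes, using scikit-learn; the simulated latent vectors are stand-ins, not representations from the implicit autoencoder.

```python
# Hedged sketch: PCA embedding of latent codes and a linear SVM separating
# lesion from normal-tissue patches. Latent codes are simulated stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (200, 64))   # stand-in latents: normal liver patches
lesion = rng.normal(1.5, 1.0, (200, 64))   # stand-in latents: tumor patches
X = np.vstack([normal, lesion])
y = np.array([0] * 200 + [1] * 200)

X2 = PCA(n_components=2).fit_transform(X)  # 2D embedding that exposes the data structure
print("PCA embedding shape:", X2.shape)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)          # linear SVM on the raw latent codes
print("lesion-vs-normal accuracy:", clf.score(X_te, y_te))
```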

5.
Microsc Res Tech; 84(12): 3023-3034, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34245203

ABSTRACT

With the evolution of deep learning technologies, computer vision tasks have achieved tremendous success in the biomedical domain. Supervised deep learning requires large labeled datasets, and obtaining them is challenging; this scarcity of data makes it difficult to build and improve automated disease-diagnosis models. To synthesize data and improve the accuracy of a disease diagnosis model, we propose a novel approach for generating images of three different stages of Alzheimer's disease using deep convolutional generative adversarial networks. The proposed model performs well in synthesizing brain positron emission tomography (PET) images for all three stages: normal control (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD). Model performance was measured using a classification model, which achieved an accuracy of 72% on the synthetic images. We also evaluated quantitative measures, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), obtaining average PSNR values of 82 for AD, 72 for CN, and 73 for MCI, and average SSIM values of 25.6 for AD, 22.6 for CN, and 22.8 for MCI.
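A minimal sketch of a DCGAN-style generator of the kind described above, in PyTorch; the layer sizes, output resolution, and absence of class conditioning are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of a DCGAN-style generator for PET-like images;
# layer sizes are illustrative, not the authors' configuration.
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Upsamples a latent noise vector to a single-channel image."""
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),  # 16x16 output in [-1, 1]
        )
    def forward(self, z):
        return self.net(z)

g = DCGANGenerator()
z = torch.randn(4, 100, 1, 1)      # batch of latent noise vectors
print(g(z).shape)                  # torch.Size([4, 1, 16, 16])
```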


Subjects
Alzheimer Disease; Alzheimer Disease/diagnostic imaging; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Positron-Emission Tomography
6.
Med Phys; 48(4): 1673-1684, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33251619

ABSTRACT

PURPOSE: Online adaptive radiotherapy would greatly benefit from the development of reliable auto-segmentation algorithms for organs-at-risk and radiation targets. Current practice of manual segmentation is subjective and time-consuming. While deep learning-based algorithms offer ample opportunities to solve this problem, they typically require large datasets. However, medical imaging data are generally sparse, in particular annotated MR images for radiotherapy. In this study, we developed a method to exploit the wealth of publicly available, annotated CT images to generate synthetic MR images, which could then be used to train a convolutional neural network (CNN) to segment the parotid glands on MR images of head and neck cancer patients. METHODS: Imaging data comprised 202 annotated CT and 27 annotated MR images. The unpaired CT and MR images were fed into a 2D CycleGAN network to generate synthetic MR images from the CT images. Annotations of axial slices of the synthetic images were generated by propagating the CT contours. These were then used to train a 2D CNN. We assessed the segmentation accuracy using the real MR images as test dataset. The accuracy was quantified with the 3D Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) between manual and auto-generated contours. We benchmarked the approach by a comparison to the interobserver variation determined for the real MR images, as well as to the accuracy when training the 2D CNN to segment the CT images. RESULTS: The determined accuracy (DSC: 0.77 ± 0.07, HD: 18.04 ± 12.59 mm, MSD: 2.51 ± 1.47 mm) was close to the interobserver variation (DSC: 0.84 ± 0.06, HD: 10.85 ± 5.74 mm, MSD: 1.50 ± 0.77 mm), as well as to the accuracy when training the 2D CNN to segment the CT images (DSC: 0.81 ± 0.07, HD: 13.00 ± 7.61 mm, MSD: 1.87 ± 0.84 mm). CONCLUSIONS: The introduced cross-modality learning technique can be of great value for segmentation problems with sparse training data. We anticipate using this method with any non-annotated MRI dataset to generate annotated synthetic MR images of the same type via image style transfer from annotated CT images. Furthermore, as this technique allows for fast adaptation of annotated datasets from one imaging modality to another, it could prove useful for translating between large varieties of MRI contrasts due to differences in imaging protocols within and between institutions.
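A small sketch of two of the reported segmentation metrics (DSC and HD) on binary masks, using NumPy and SciPy; the stand-in masks and the pixel-based Hausdorff computation are assumptions, not the study's evaluation code.

```python
# Hedged sketch of segmentation metrics: Dice similarity coefficient (DSC)
# and Hausdorff distance (HD) between two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the masks' foreground pixel sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

manual = np.zeros((64, 64), bool); manual[20:40, 20:40] = True  # stand-in manual contour
auto = np.zeros((64, 64), bool); auto[22:42, 21:41] = True      # stand-in CNN contour
print(f"DSC: {dice(manual, auto):.3f}, HD: {hausdorff(manual, auto):.1f} px")
```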


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Tomography, X-Ray Computed
7.
Int J Comput Assist Radiol Surg; 15(9): 1427-1436, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32556953

ABSTRACT

PURPOSE: In the field of medical image analysis, deep learning methods have gained considerable attention in recent years, largely because of their often superior performance compared with classic, explicitly designed algorithms. To work well, they need large amounts of annotated data for supervised learning, which are often not available for medical images. One way to overcome this limitation is to generate synthetic training data, e.g., by performing simulations to artificially augment the dataset. However, simulations require domain knowledge and are limited by the complexity of the underlying physical model. Another way to perform data augmentation is to generate images by means of neural networks. METHODS: We developed a new algorithm for generating synthetic medical images exhibiting speckle noise via generative adversarial networks (GANs). The key ingredient is a speckle layer, which can be incorporated into a neural network in order to add realistic, domain-dependent speckle. We call the resulting GAN architecture SpeckleGAN. RESULTS: We compared our new approach to an equivalent GAN without a speckle layer. SpeckleGAN was able to generate ultrasound images with very crisp speckle patterns, in contrast to the baseline GAN, even for small datasets of 50 images. SpeckleGAN outperformed the baseline GAN by up to 165% with respect to the Fréchet Inception distance. For artery layer and lumen segmentation, a performance improvement of up to 4% was obtained on small datasets augmented with SpeckleGAN-generated images. CONCLUSION: SpeckleGAN facilitates the generation of realistic synthetic ultrasound images to augment small training sets for deep learning-based image processing. Its application is not restricted to ultrasound; it could be used for any imaging modality that produces speckled images, such as optical coherence tomography or radar.
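The abstract does not specify how the speckle layer is implemented, so the following is only a hedged sketch of the general idea: a network layer that injects multiplicative, Rayleigh-distributed noise, a common speckle model. The actual SpeckleGAN layer may differ.

```python
# Hedged sketch of a "speckle layer": multiplicative, Rayleigh-distributed
# noise applied inside a network. The Rayleigh model is an assumption.
import torch
import torch.nn as nn

class SpeckleLayer(nn.Module):
    """Multiplies the input by Rayleigh-distributed noise, a common speckle model."""
    def __init__(self, scale: float = 0.3):
        super().__init__()
        self.scale = scale
    def forward(self, x):
        # Rayleigh samples via the magnitude of two independent Gaussians
        u = torch.randn_like(x)
        v = torch.randn_like(x)
        rayleigh = self.scale * torch.sqrt(u ** 2 + v ** 2)
        return x * (1.0 + rayleigh)

layer = SpeckleLayer()
clean = torch.rand(1, 1, 64, 64)   # stand-in generator feature map
speckled = layer(clean)
print(speckled.shape, float(speckled.std()))
```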


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Ultrasonography; Algorithms; Computer Simulation; Databases, Factual; Diagnosis, Computer-Assisted/methods; Humans; Normal Distribution; Software
8.
Comput Methods Programs Biomed; 184: 105268, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31891902

ABSTRACT

BACKGROUND AND OBJECTIVES: Deep learning models, and specifically Convolutional Neural Networks (CNNs), are becoming the leading approach in many computer vision tasks, including medical image analysis. Nevertheless, CNN training usually requires large sets of supervised data, which are often difficult and expensive to obtain in the medical field. To address the lack of annotated images, image generation is a promising method that is becoming increasingly popular in the computer vision community. In this paper, we present a new approach to the semantic segmentation of bacterial colonies in agar plate images, based on deep learning and synthetic image generation to increase the training set size. Semantic segmentation of bacterial colonies is the basis for infection recognition and bacterial counting in Petri plate analysis. METHODS: A convolutional neural network (CNN) is used to separate the bacterial colonies from the background. To address the lack of annotated images, a novel engine is designed that exploits a generative adversarial network to capture the typical distribution of bacterial colonies on agar plates and generate synthetic data. Bacterial colony patches are then superimposed on existing background images, taking into account both the local appearance of the background and the intrinsic opacity of the bacterial colonies, and a style transfer algorithm is used to further improve visual realism. RESULTS: The proposed deep learning approach has been tested on the only public dataset available with pixel-level annotations for bacterial colony semantic segmentation in agar plates. The effect of including synthetic data in the training of a segmentation CNN was evaluated, showing that performance comparable to training on a complete set of real images can be obtained. Qualitative results are also reported for a second public dataset in which segmentation annotations are not provided. CONCLUSIONS: The use of a small set of real data, together with synthetic images, yields results comparable to using a complete set of real images. The proposed synthetic data generator thus addresses the scarcity of biomedical data and provides a scalable and cheap alternative to human ground-truth supervision.
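Here is a minimal sketch of the compositing step described above: superimposing a (GAN-generated) colony patch onto a background image with partial opacity, so some background remains visible through the colony. The patch, mask, placement, and opacity value are illustrative stand-ins.

```python
# Hedged sketch of the compositing step: blend a synthetic colony patch
# into an agar-plate background with partial opacity.
import numpy as np

def superimpose(background, patch, mask, opacity=0.7, top=0, left=0):
    """Blend `patch` into `background` where `mask` is set, keeping some
    background visible to mimic the colonies' intrinsic translucency."""
    out = background.copy()
    h, w = patch.shape[:2]
    region = out[top:top + h, left:left + w]          # view into the output
    blend = opacity * patch + (1 - opacity) * region
    region[mask] = blend[mask]                        # write only under the mask
    return out

rng = np.random.default_rng(1)
plate = rng.uniform(0.7, 0.9, (128, 128))     # stand-in agar background
colony = rng.uniform(0.2, 0.4, (16, 16))      # stand-in synthetic colony patch
mask = np.ones((16, 16), dtype=bool)          # stand-in colony mask
augmented = superimpose(plate, colony, mask, top=50, left=50)
print(augmented.shape)
```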


Subjects
Agar; Bacteria/growth & development; Image Processing, Computer-Assisted/methods; Algorithms; Deep Learning; Humans; Neural Networks, Computer
9.
Cytometry A; 87(3): 212-26, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25573002

ABSTRACT

As digital imaging becomes a fundamental part of medical and biomedical research, computer-based evaluation using advanced image analysis is becoming an integral part of many research projects. A common problem when developing new image analysis algorithms is the need for large datasets with ground truth on which the algorithms can be tested and optimized. Generating such datasets is often tedious and introduces subjectivity as well as interindividual and intraindividual variations. An alternative to manually created ground-truth data is to generate synthetic images where the ground truth is known. The challenge is then to make the images sufficiently similar to real ones to be useful in algorithm development. One of the first and most widely studied medical image analysis tasks is automated screening for cervical cancer through Pap-smear analysis. As part of an effort to develop a new-generation cervical cancer screening system, we have developed a framework for creating realistic synthetic bright-field microscopy images that can be used for algorithm development and benchmarking. The framework has been assessed through visual evaluation by experts with extensive experience with Pap-smear images. The results show that images produced using our methods are realistic enough to be mistaken for real microscopy images. The simulation framework is very flexible and can be modified to mimic many other types of bright-field microscopy images.
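A toy sketch of the general simulation idea: render a synthetic cell with known ground-truth masks, then degrade it with optical blur and sensor noise to approximate bright-field microscopy. The rendering model is an illustrative assumption, far simpler than the paper's framework.

```python
# Hedged sketch: render a synthetic cell with known ground truth, then
# degrade it to approximate a bright-field microscopy image.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_cell(size=128, cyto_r=40, nuc_r=12, seed=0):
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:size, :size]
    c = size // 2
    cytoplasm = (yy - c) ** 2 + (xx - c) ** 2 < cyto_r ** 2   # ground-truth mask
    nucleus = (yy - c) ** 2 + (xx - c) ** 2 < nuc_r ** 2      # ground-truth mask
    image = np.full((size, size), 0.95)        # bright background
    image[cytoplasm] = 0.75                    # semi-transparent cytoplasm
    image[nucleus] = 0.35                      # darker nucleus
    image = gaussian_filter(image, sigma=1.5)  # optical blur
    image += 0.02 * rng.standard_normal(image.shape)  # sensor noise
    return np.clip(image, 0, 1), nucleus, cytoplasm

img, nuc_mask, cyto_mask = synthetic_cell()
print(img.shape, nuc_mask.sum(), cyto_mask.sum())
```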


Subjects
Computer Simulation; Imaging, Three-Dimensional/methods; Papanicolaou Test/methods; Pattern Recognition, Automated/methods; Early Detection of Cancer/instrumentation; Early Detection of Cancer/methods; Female; Humans; Imaging, Three-Dimensional/instrumentation; Papanicolaou Test/instrumentation; Uterine Cervical Neoplasms/diagnosis