Results 1 - 3 of 3
1.
IBRO Neurosci Rep; 16: 57-66, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39007088

ABSTRACT

Gliomas observed in medical images require expert neuro-radiologist evaluation for treatment planning and monitoring, motivating the development of intelligent systems capable of automating aspects of tumour evaluation. Deep learning models for automatic image segmentation rely on the amount and quality of training data. In this study, we developed a neuroimaging synthesis technique to augment data for training fully convolutional networks (U-nets) to perform automatic glioma segmentation. We used StyleGAN2-ada to simultaneously generate fluid-attenuated inversion recovery (FLAIR) magnetic resonance images and corresponding glioma segmentation masks. Synthetic data were successively added to real training data (n = 2751) in fourteen rounds of 1000 and used to train U-nets that were evaluated on held-out validation (n = 590) and test (n = 588) sets. U-nets were trained with and without geometric augmentation (translation, zoom and shear), and Dice coefficients were computed to evaluate segmentation performance. We also monitored the number of training iterations before stopping, the total training time, and the time per iteration to evaluate the computational costs of training each U-net. Synthetic data augmentation yielded marginal improvements in Dice coefficients (validation set +0.0409, test set +0.0355), whereas geometric augmentation improved generalization (standard deviation between training, validation and test set performances of 0.01 with, and 0.04 without, geometric augmentation). Given the modest performance gains for automatic glioma segmentation, we find it hard to justify the computational expense of developing a synthetic image generation pipeline. Future work may seek to optimize the efficiency of synthetic data generation for augmentation of neuroimaging data.
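
The Dice coefficient used above to score segmentation performance can be computed directly from binary masks. A minimal NumPy sketch follows; the function and array names are illustrative assumptions, not taken from the study's code:

import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A and B.
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example: two 4x4 masks overlapping on two pixels.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1               # 4 foreground pixels
b = np.zeros((4, 4)); b[1:3, 1:2] = 1; b[3, 3] = 1  # 3 foreground pixels
print(round(dice_coefficient(a, b), 3))             # 2*2 / (4+3) ≈ 0.571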

2.
J Neural Eng; 20(2), 2023 Apr 3.
Article in English | MEDLINE | ID: mdl-36898147

ABSTRACT

Objective. Event-related potential (ERP) sensitivity to faces is predominantly characterized by an N170 peak that has greater amplitude and shorter latency when elicited by human faces than by images of other objects. To study this phenomenon, we aimed to develop a computational model of visual ERP generation consisting of a three-dimensional convolutional neural network (CNN) connected to a recurrent neural network (RNN). Approach. The CNN provided image representation learning, complementing the sequence learning of the RNN for modeling visually evoked potentials. We used open-access data from the ERP Compendium of Open Resources and Experiments (40 subjects) to develop the model, generated synthetic images for simulating experiments with a generative adversarial network, then collected additional data (16 subjects) to validate the predictions of these simulations. For modeling, visual stimuli presented during ERP experiments were represented as sequences of images (time × pixels) and provided as inputs to the model. By filtering and pooling over spatial dimensions, the CNN transformed these inputs into sequences of vectors that were passed to the RNN. The ERP waveforms evoked by visual stimuli were provided to the RNN as labels for supervised learning. The whole model was trained end-to-end on the open-access dataset to reproduce ERP waveforms evoked by visual events. Main results. Cross-validation model outputs strongly correlated with open-access (r = 0.98) and validation study data (r = 0.78). Open-access and validation study data correlated similarly (r = 0.81). Some aspects of model behavior were consistent with neural recordings while others were not, suggesting a promising albeit limited capacity for modeling the neurophysiology of face-sensitive ERP generation. Significance. The approach developed in this work is potentially of significant value for visual neuroscience research, where it may be adapted to multiple contexts to study computational relationships between visual stimuli and evoked neural activity.


Subjects
Facial Recognition , Humans , Evoked Potentials/physiology , Neural Networks, Computer , Learning , Photic Stimulation/methods , Electroencephalography
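
The CNN-to-RNN architecture described in this abstract can be illustrated with a short PyTorch sketch: a 3-D convolution extracts features from each frame of the stimulus sequence, spatial pooling collapses each frame to a vector, and a recurrent layer maps the vector sequence to a waveform. All layer sizes and names below are assumptions for illustration, not the authors' configuration:

import torch
import torch.nn as nn

class CnnRnnErpModel(nn.Module):
    def __init__(self, hidden_size=64, n_channels=1):
        super().__init__()
        # 3-D convolution over (time, height, width) learns image features.
        self.cnn = nn.Sequential(
            nn.Conv3d(n_channels, 16, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool spatial dims, keep time
        )
        # Recurrent layer models the temporal evolution of the evoked response.
        self.rnn = nn.GRU(input_size=16, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)    # one amplitude per time step

    def forward(self, x):
        # x: (batch, channels, time, height, width)
        feats = self.cnn(x)                      # (batch, 16, time, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1)    # (batch, 16, time)
        feats = feats.permute(0, 2, 1)           # (batch, time, 16)
        out, _ = self.rnn(feats)                 # (batch, time, hidden)
        return self.head(out).squeeze(-1)        # (batch, time) waveform

# Example: a batch of 2 stimulus sequences, 100 frames of 32x32 pixels.
model = CnnRnnErpModel()
waveform = model(torch.randn(2, 1, 100, 32, 32))
print(waveform.shape)  # torch.Size([2, 100])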
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 430-433, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086286

ABSTRACT

Synthetic medical images have an important role to play in developing data-driven medical image processing systems. Using a relatively small amount of patient data to train generative models that can produce an abundance of additional samples could bridge the gap towards big data in niche medical domains. These generative models are evaluated in terms of the synthetic data they generate using the Visual Turing Test (VTT), the Fréchet Inception Distance (FID), and other metrics. However, these are generally interpreted at the group level and do not measure the artificiality of individual synthetic images. The present study addresses the challenge of automatically identifying synthetic images that look obviously artificial, which may be necessary for filtering out poorly constructed synthetic images that might otherwise degrade the performance of systems that assimilate them. Synthetic computed tomography (CT) images from a progressively-grown generative adversarial network (PGGAN) were evaluated with a VTT, and their image embeddings were analyzed for correlation with artificiality. Images categorized as obviously artificial (≥0.7 probability of being rated as fake) were classified using a battery of algorithms. The top-performing classifier, a support vector machine, exhibited an accuracy of 75.5%, a sensitivity of 0.743, and a specificity of 0.769. This is an encouraging result that suggests a potential approach for validating synthetic medical image datasets. Clinical Relevance - Next-generation medical AI systems for image processing will utilize synthetic images produced by generative models. This paper presents an approach for verifying that artificial images are visually plausible, as a quality-control step before they are deployed for these purposes.


Subjects
Algorithms , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed
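
The classification step described above (image embeddings fed to an SVM, scored by accuracy, sensitivity and specificity) can be sketched with scikit-learn. The data below are random stand-ins, so the printed scores will be near chance; only the structure of the pipeline mirrors the abstract, and the 0.7 labeling threshold is taken from it:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(400, 128))   # stand-in image embeddings
fake_prob = rng.uniform(size=400)          # stand-in VTT fake-rating probabilities
labels = (fake_prob >= 0.7).astype(int)    # "obviously artificial" label

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0, stratify=labels)

clf = SVC(kernel="rbf").fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"accuracy:    {(tp + tn) / (tp + tn + fp + fn):.3f}")
print(f"sensitivity: {tp / (tp + fn):.3f}")  # true-positive rate
print(f"specificity: {tn / (tn + fp):.3f}")  # true-negative rate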