Results 1 - 3 of 3
1.
Int J Comput Assist Radiol Surg; 16(8): 1243-1254, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34125391

ABSTRACT

PURPOSE: Intravascular ultrasound (IVUS) imaging is crucial for planning and performing percutaneous coronary interventions. Automatic segmentation of the lumen and vessel wall in IVUS images can therefore help streamline the clinical workflow. State-of-the-art results in image segmentation are achieved with data-driven methods such as convolutional neural networks (CNNs). These need large amounts of training data to perform sufficiently well, but medical image datasets are often rather small. One way to overcome this problem is to exploit alternative network architectures such as capsule networks. METHODS: We systematically investigated different capsule network architecture variants and optimized their performance on IVUS image segmentation. We then compared our capsule network with corresponding CNNs under varying amounts of training images and network parameters. RESULTS: Contrary to previous work, our capsule network performs best when the number of capsule types is doubled after each downsampling stage, analogous to the typical growth of feature maps in CNNs. Maximum improvements over the baseline CNNs are 20.6% in terms of the Dice coefficient and 87.2% in terms of the average Hausdorff distance. CONCLUSION: Capsule networks are promising candidates for the segmentation of small IVUS image datasets, and we assume that this also holds for ultrasound images in general. A reasonable next step would be to investigate capsule networks for few-shot or even single-shot learning tasks.
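To make the capsule-type schedule concrete, the short Python sketch below computes the number of capsule types per encoder stage under the doubling scheme described in the results; the function name and the base of four capsule types are illustrative assumptions and are not taken from the paper.

    # Minimal sketch (assumed names and values): capsule types per encoder
    # stage, doubled after each downsampling step, analogous to the typical
    # growth of feature maps in a CNN encoder.
    def capsule_type_schedule(base_types, num_stages):
        return [base_types * (2 ** stage) for stage in range(num_stages)]

    # e.g. a base of 4 capsule types over 4 stages -> [4, 8, 16, 32]
    print(capsule_type_schedule(4, 4))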


Subjects
Blood Vessels/growth & development; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Ultrasonography, Interventional/methods; Humans
2.
Int J Comput Assist Radiol Surg; 15(9): 1427-1436, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32556953

ABSTRACT

PURPOSE: In the field of medical image analysis, deep learning methods have gained considerable attention in recent years, which can be explained by their often superior performance compared to classical explicit algorithms. To work well, however, they need large amounts of annotated data for supervised learning, and these are often not available for medical image data. One way to overcome this limitation is to generate synthetic training data, e.g., by performing simulations to artificially augment the dataset. However, simulations require domain knowledge and are limited by the complexity of the underlying physical model. Another way to perform data augmentation is to generate images with neural networks. METHODS: We developed a new algorithm for generating synthetic medical images exhibiting speckle noise via generative adversarial networks (GANs). The key ingredient is a speckle layer, which can be incorporated into a neural network to add realistic, domain-dependent speckle. We call the resulting GAN architecture SpeckleGAN. RESULTS: We compared our new approach to an equivalent GAN without the speckle layer. SpeckleGAN was able to generate ultrasound images with very crisp speckle patterns, in contrast to the baseline GAN, even for small datasets of 50 images. SpeckleGAN outperformed the baseline GAN by up to 165% with respect to the Fréchet Inception distance. For artery layer and lumen segmentation, a performance improvement of up to 4% was obtained for small datasets when they were augmented with images generated by SpeckleGAN. CONCLUSION: SpeckleGAN facilitates the generation of realistic synthetic ultrasound images to augment small training sets for deep-learning-based image processing. Its application is not restricted to ultrasound; it could be used for any imaging modality that produces images with speckle, such as optical coherence tomography or radar.
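As a rough illustration of what such a speckle layer could look like, the PyTorch sketch below applies multiplicative, Gamma-distributed noise with unit mean, a common model for fully developed speckle. The abstract does not specify SpeckleGAN's actual noise model, so the class name, the `looks` parameter, and the noise distribution are assumptions made purely for illustration.

    import torch
    import torch.nn as nn

    class SpeckleLayer(nn.Module):
        # Hypothetical stand-in, not the layer defined in the paper: multiplies
        # the input by Gamma-distributed noise with unit mean to mimic
        # fully developed speckle.
        def __init__(self, looks=4.0):
            super().__init__()
            self.looks = looks  # fewer "looks" -> stronger speckle

        def forward(self, x):
            gamma = torch.distributions.Gamma(self.looks, self.looks)  # mean = 1
            noise = gamma.sample(x.shape).to(x.device)
            return x * noise

Presumably such a layer would sit inside the generator, so that adversarial training shapes the underlying anatomy and echogenicity while the layer contributes the speckle texture.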


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Ultrasonography; Algorithms; Computer Simulation; Databases, Factual; Diagnosis, Computer-Assisted/methods; Humans; Normal Distribution; Software
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 989-992, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946059

ABSTRACT

Convolutional neural networks (CNNs) produce promising results when applied to a wide range of medical imaging tasks, including the segmentation of tissue structures. However, segmentation is particularly challenging when the target structures are small with respect to the complete image data and exhibit substantial curvature, as in the case of coronary arteries in computed tomography angiography (CTA). We therefore evaluated the impact of the data representation of tubular structures on the segmentation performance of CNNs with a U-Net architecture, measured by the resulting Dice coefficients and Hausdorff distances. For this purpose, we considered 2D and 3D input data in cross-sectional and Cartesian representations. We found that the data representation can have a substantial impact on segmentation results, with Dice coefficients ranging from 60% to 82%, the upper end reaching values similar to those of the human expert annotations used for training, and Hausdorff distances ranging from 1.38 mm to 5.90 mm. Our results indicate that a 3D cross-sectional data representation is preferable for the segmentation of thin tubular structures.
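To illustrate what a cross-sectional representation involves, the sketch below resamples a single plane perpendicular to a local centerline tangent with SciPy; it assumes the volume, centerline point, and tangent are given in voxel coordinates, and the function and its parameters are illustrative rather than the paper's actual pipeline.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def extract_cross_section(volume, center, tangent, size=32, spacing=1.0):
        # Sample a size x size plane perpendicular to the centerline tangent at
        # 'center'; 'center' and 'tangent' are (3,) arrays in voxel coordinates.
        tangent = tangent / np.linalg.norm(tangent)
        helper = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(helper, tangent)) > 0.9:   # avoid a near-parallel helper
            helper = np.array([0.0, 1.0, 0.0])
        u = np.cross(tangent, helper)
        u /= np.linalg.norm(u)                   # first in-plane axis
        v = np.cross(tangent, u)                 # second in-plane axis
        offsets = (np.arange(size) - size / 2) * spacing
        uu, vv = np.meshgrid(offsets, offsets, indexing="ij")
        coords = (center[:, None, None]
                  + u[:, None, None] * uu
                  + v[:, None, None] * vv)       # shape (3, size, size)
        return map_coordinates(volume, coords, order=1, mode="nearest")

Roughly speaking, stacking such planes along the centerline yields the 3D cross-sectional volume the abstract refers to, whereas the Cartesian representation crops axis-aligned patches directly from the original volume.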


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Cross-Sectional Studies; Humans; Tomography, X-Ray Computed