Results 1 - 4 of 4
1.
Med Image Anal; 67: 101822, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33166774

ABSTRACT

Methods for deep learning based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of a very large trainable parameter space and often insufficient availability of expert-supervised correspondence annotations has led to slower progress than in other domains such as image segmentation. Yet image registration could also benefit more directly from an iterative solution than segmentation does. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning and deformation estimation. In this work, we examine an end-to-end trainable, weakly supervised deep learning-based feature extraction approach that is able to map complex appearance to a common space. Our results on thoracoabdominal CT and MRI image registration show that the proposed method compares favourably to state-of-the-art hand-crafted multi-modal features, Mutual Information-based approaches and fully integrated CNN-based methods, and copes even with small and only weakly labeled training data sets.


Subjects
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Humans; Supervised Machine Learning
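
To make the first entry's idea concrete, below is a minimal PyTorch sketch of disentangling appearance from deformation: one small encoder per modality maps CT and MRI into a common feature space, and a simple sum-of-squared-differences on those features serves as the multi-modal similarity handed to a separate deformation estimator. The layer sizes, normalisation and the feature-SSD objective are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps a single-channel 3D volume to a per-voxel feature map (assumed layout)."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.InstanceNorm3d(channels), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.InstanceNorm3d(channels), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        # unit-norm features so that SSD in feature space behaves like a bounded metric
        return F.normalize(self.net(x), dim=1)

enc_ct, enc_mr = ModalityEncoder(), ModalityEncoder()   # one encoder per modality

def feature_similarity(ct, mr_warped):
    """Feature-space SSD between a fixed CT and a warped MRI: the deformation
    estimator (classical or learned) only ever sees this mono-modal-like cost."""
    return ((enc_ct(ct) - enc_mr(mr_warped)) ** 2).mean()

# toy usage with random volumes
ct = torch.randn(1, 1, 32, 32, 32)
mr = torch.randn(1, 1, 32, 32, 32)
loss = feature_similarity(ct, mr)
```
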
2.
Int J Comput Assist Radiol Surg; 15(2): 269-276, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31741286

ABSTRACT

PURPOSE: Nonlinear multimodal image registration, for example, the fusion of computed tomography (CT) and magnetic resonance imaging (MRI), fundamentally depends on a definition of image similarity. Previous methods that derived modality-invariant representations focused on either global statistical grayscale relations or local structural similarity, both of which are prone to local optima. In contrast to most learning-based methods, which rely on strong supervision in the form of aligned multimodal image pairs, we aim to overcome this limitation and broaden the range of practical use cases. METHODS: We propose a new concept that exploits anatomical shape information and requires only segmentation labels for each modality individually. First, a shape-constrained encoder-decoder segmentation network without skip connections is jointly trained on labeled CT and MRI inputs. Second, an iterative energy-based minimization scheme is introduced that relies on the capability of the network to generate intermediate nonlinear shape representations. This further eases multimodal alignment in the case of large deformations. RESULTS: Our novel approach robustly and accurately aligns 3D scans from the multimodal whole-heart segmentation dataset, outperforming classical unsupervised frameworks. Since both parts of our method rely on (stochastic) gradient optimization, it can easily be integrated into deep learning frameworks and executed on GPUs. CONCLUSIONS: We present an integrated approach for weakly supervised multimodal image registration. The promising results achieved by exploiting intermediate shape features as registration guidance encourage further research in this direction.


Subjects
Imaging, Three-Dimensional/methods; Multimodal Imaging/methods; Deep Learning; Humans; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods
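
A rough sketch of the second entry's two stages, assuming a small encoder-decoder segmentation network without skip connections and an Adam-optimised dense displacement field that aligns the network's softmax output (the "shape representation") of a moving scan to that of a fixed scan. The architecture, the diffusion regulariser and all hyper-parameters are placeholders for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNetNoSkips(nn.Module):
    """Encoder-decoder without skip connections; its softmax output is the
    modality-independent shape representation used to drive registration."""
    def __init__(self, n_classes=8, c=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, c, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(c, 2 * c, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(2 * c, c, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(c, n_classes, 4, stride=2, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))

def register(net, fixed, moving, iters=100, lam=0.1, lr=0.01):
    """Iteratively optimise a dense displacement field so that the moving scan's
    shape representation matches the fixed scan's, plus a diffusion regulariser."""
    net.eval()
    for p in net.parameters():
        p.requires_grad_(False)            # the (pre-trained) network stays fixed
    B, _, D, H, W = fixed.shape
    disp = torch.zeros(B, 3, D, H, W, requires_grad=True)
    opt = torch.optim.Adam([disp], lr=lr)
    identity = F.affine_grid(torch.eye(3, 4).unsqueeze(0).repeat(B, 1, 1),
                             list(fixed.shape), align_corners=False)
    with torch.no_grad():
        target = torch.softmax(net(fixed), dim=1)
    for _ in range(iters):
        warped = F.grid_sample(moving, identity + disp.permute(0, 2, 3, 4, 1),
                               align_corners=False)
        energy = F.mse_loss(torch.softmax(net(warped), dim=1), target)
        smooth = sum((disp.diff(dim=d) ** 2).mean() for d in (2, 3, 4))
        opt.zero_grad()
        (energy + lam * smooth).backward()
        opt.step()
    return disp.detach()

# toy usage: a CT scan as fixed image, an MRI scan as moving image
net = SegNetNoSkips()
disp = register(net, torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32), iters=10)
```
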
3.
Int J Comput Assist Radiol Surg; 14(1): 43-52, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30430361

ABSTRACT

PURPOSE: Deep convolutional neural networks in their various forms are currently achieving or outperforming state-of-the-art results on several medical imaging tasks. We aim to make these developments available to the so far unsolved task of accurate correspondence finding, especially with regard to image registration. METHODS: We propose a two-step hybrid approach to make deep-learned features accessible to a discrete optimization-based registration method. In a first step, in order to extract expressive binary local descriptors, we train a deep network architecture on a patch-based landmark retrieval problem as an auxiliary task. In a second step, at runtime within an MRF-regularised dense displacement sampling, their binary nature enables highly efficient similarity computations, making them an ideal candidate to replace the handcrafted local feature descriptors previously used during the registration process. RESULTS: We evaluate our approach on finding correspondences between highly non-rigidly deformed lung CT scans from different breathing states. Although the CNN-based descriptors excel at the auxiliary learning task of finding keypoint correspondences, self-similarity-based descriptors yield more accurate registration results. However, a combination of both approaches turns out to generate the most robust features for registration. CONCLUSION: We present a three-dimensional framework for large lung motion estimation based on the combination of CNN-based and handcrafted descriptors, efficiently employed in a discrete registration method. Achieving the best results by combining learned and handcrafted features encourages further research in this direction.


Subjects
Lung/diagnostic imaging; Motion; Pulmonary Disease, Chronic Obstructive/diagnostic imaging; Tomography, X-Ray Computed/methods; Algorithms; Humans; Markov Chains; Neural Networks, Computer
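
The computational core of the third entry is that binarised descriptors reduce the similarity term of the dense displacement sampling to XOR plus population count. The NumPy sketch below illustrates only that part; the descriptor length, the bit packing and the toy data are assumptions, and both the descriptor-learning CNN and the MRF regularisation are omitted.

```python
import numpy as np

def binarise(desc):
    """Sign-binarise float descriptors (N, 64) and pack each into 8 uint8 words."""
    bits = (desc > 0).astype(np.uint8)
    return np.packbits(bits, axis=1)

# 8-bit population-count lookup table: Hamming distance = XOR, then count set bits
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def hamming(a, b):
    """All-pairs Hamming distances between packed descriptor sets a (N, 8) and b (M, 8)."""
    xor = a[:, None, :] ^ b[None, :, :]
    return POPCOUNT[xor].sum(axis=-1)

# toy usage: keypoint descriptors extracted from two breathing states
rng = np.random.default_rng(0)
fixed_desc = binarise(rng.standard_normal((100, 64)))
moving_desc = binarise(rng.standard_normal((120, 64)))
cost = hamming(fixed_desc, moving_desc)   # (100, 120) cost matrix for the displacement search
```
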
4.
Int J Comput Assist Radiol Surg; 13(9): 1311-1320, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29850978

ABSTRACT

PURPOSE: Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks including segmentation, localisation and prediction are astonishing, this large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPU). METHODS: We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them with energy- and time-efficient binary operators and population counts. RESULTS: We evaluate our approach on the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reduction and high accuracy (without any post-processing), with a Dice overlap of 71.0% that comes close to that obtained with networks using high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements over binary quantisation and over a variant without our proposed ternary hyperbolic tangent continuation. CONCLUSIONS: We present a key enabling technique for highly efficient DCNN inference without GPUs that will help bring the advances of deep learning to practical clinical applications. It also holds great promise for improving accuracy in large-scale medical data retrieval.


Subjects
Algorithms; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Pancreas/diagnostic imaging; Support Vector Machine; Tomography, X-Ray Computed/methods; Humans
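
The fourth entry hinges on backpropagating through a non-differentiable ternarisation step. The sketch below uses a plain straight-through estimator as a stand-in for the paper's ternary hyperbolic tangent continuation; the fixed threshold and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Ternarise(torch.autograd.Function):
    """Forward: map values to {-1, 0, +1} via a fixed threshold (assumed rule).
    Backward: straight-through estimator, clipped to the active input range."""
    THRESHOLD = 0.05

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x.abs() < Ternarise.THRESHOLD,
                           torch.zeros_like(x), torch.sign(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

class TernaryConv3d(nn.Conv3d):
    """Convolution whose weights and activations are ternarised on the fly; at
    inference time the float multiplications could then be replaced by bitwise
    operations and population counts, as motivated in the abstract."""
    def forward(self, x):
        w = Ternarise.apply(self.weight)
        a = Ternarise.apply(x)
        return F.conv3d(a, w, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)

# toy usage: gradients still reach the full-precision weights through the estimator
layer = TernaryConv3d(1, 8, kernel_size=3, padding=1)
out = layer(torch.randn(1, 1, 16, 16, 16))
out.sum().backward()
print(layer.weight.grad.shape)
```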