Results 1 - 7 of 7
1.
Sensors (Basel) ; 23(6)2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36991588

ABSTRACT

Image registration for temporal ultrasound sequences can be very beneficial for image-guided diagnostics and interventions. Cooperative human-machine systems that enable seamless assistance for both inexperienced and expert users during ultrasound examinations rely on robust, real-time motion estimation. Yet rapid and irregular motion patterns, varying image contrast, and domain shifts across imaging devices pose a severe challenge to conventional real-time registration approaches. While learning-based registration networks promise to abstract relevant features and deliver very fast inference, they carry the risk of limited generalisation and robustness on unseen data, in particular when trained with limited supervision. In this work, we demonstrate that these issues can be overcome by using end-to-end differentiable displacement optimisation. Our method involves a trainable feature backbone, a correlation layer that evaluates a large range of displacement options simultaneously, and a differentiable regularisation module that ensures smooth and plausible deformation. In extensive experiments on public and private ultrasound datasets with very sparse ground-truth annotation, the method showed better generalisation and overall accuracy than a VoxelMorph network with the same feature backbone, while running twice as fast at inference.
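To make the idea concrete, here is a minimal PyTorch sketch of such a pipeline: a small trainable feature backbone, a correlation layer that scores a discrete grid of candidate displacements at every position, and a differentiable soft-argmax with simple spatial averaging standing in for the regularisation module. The tiny backbone, search radius, and smoothing are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CorrelationRegistration(nn.Module):
    def __init__(self, feat_ch=16, search_radius=4):
        super().__init__()
        self.radius = search_radius
        # Hypothetical lightweight feature backbone (placeholder).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
        )
        # Candidate displacements on a (2r+1) x (2r+1) grid.
        r = search_radius
        dy, dx = torch.meshgrid(torch.arange(-r, r + 1),
                                torch.arange(-r, r + 1), indexing="ij")
        self.register_buffer("disp",
                             torch.stack([dx, dy], -1).view(-1, 2).float())

    def forward(self, fixed, moving):
        f_fix = self.backbone(fixed)                  # (B, C, H, W)
        f_mov = self.backbone(moving)
        b, c, h, w = f_fix.shape
        r = self.radius
        f_pad = F.pad(f_mov, [r, r, r, r])
        # Correlation layer: score every candidate displacement at once.
        costs = []
        for d in self.disp:
            dx, dy = int(d[0]), int(d[1])
            shifted = f_pad[:, :, r + dy:r + dy + h, r + dx:r + dx + w]
            costs.append((f_fix * shifted).sum(1))
        cost = torch.stack(costs, 1)                  # (B, K, H, W)
        # Differentiable estimate: soft-argmax over the candidate set,
        # then local averaging as a stand-in for the regularisation module.
        w_attn = torch.softmax(cost, 1)
        flow = torch.einsum("bkhw,kc->bchw", w_attn, self.disp)
        flow = F.avg_pool2d(F.pad(flow, [2, 2, 2, 2], mode="replicate"),
                            5, stride=1)
        return flow                                   # dense (B, 2, H, W) field
```

Because every step is differentiable, the whole pipeline can be trained end-to-end from whatever sparse supervision is available, which is the property the abstract emphasises.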

2.
IEEE Trans Med Imaging ; 42(3): 697-712, 2023 03.
Article in English | MEDLINE | ID: mdl-36264729

ABSTRACT

Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art in medical image registration. This paper describes the datasets, tasks, evaluation methods, and results of the challenge, as well as further analyses of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects were identified that push medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
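As a rough illustration of the kind of complementary evaluation described, the sketch below computes label overlap (Dice) as an accuracy measure, the mean over the worst-scoring cases as a robustness measure, and the fraction of folding voxels (non-positive Jacobian determinant of the displacement field) as a plausibility check. The exact challenge definitions may differ; the worst-case fraction and field layout here are assumptions.

```python
import numpy as np

def dice(seg_a, seg_b, label):
    # Overlap of one anatomical label between warped and fixed segmentations.
    a, b = seg_a == label, seg_b == label
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def robustness(case_scores, worst_fraction=0.3):
    # Mean Dice over the lowest-scoring fraction of cases (assumed definition).
    s = np.sort(np.asarray(case_scores))
    k = max(int(len(s) * worst_fraction), 1)
    return s[:k].mean()

def folding_fraction(flow):
    # flow: (3, D, H, W) displacement field in voxels; a non-positive Jacobian
    # determinant marks a locally folded, implausible deformation.
    grads = [np.gradient(flow[i], axis=(0, 1, 2)) for i in range(3)]
    jac = np.stack([np.stack(g, 0) for g in grads], 0)   # (3, 3, D, H, W)
    jac = jac + np.eye(3).reshape(3, 3, 1, 1, 1)         # identity + gradients
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
    return float((det <= 0).mean())
```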


Subject(s)
Abdominal Cavity; Deep Learning; Humans; Algorithms; Brain/diagnostic imaging; Abdomen/diagnostic imaging; Image Processing, Computer-Assisted/methods
3.
Signal Image Video Process ; 17(4): 981-989, 2023.
Article in English | MEDLINE | ID: mdl-35910403

ABSTRACT

Deep learning-based image segmentation models rely strongly on capturing sufficient spatial context without requiring complex models that are hard to train with limited labeled data. For COVID-19 infection segmentation on CT images, training data are currently scarce. Attention models, in particular recent self-attention methods, have been shown to help gather contextual information within deep networks and to benefit semantic segmentation tasks. The recent attention-augmented convolution model aims to capture long-range interactions by concatenating self-attention and convolution feature maps. This work proposes a novel attention-augmented convolution U-Net (AA-U-Net) that enables a more accurate spatial aggregation of contextual information by integrating attention-augmented convolution in the bottleneck of an encoder-decoder segmentation architecture. A deep segmentation network (U-Net) with this attention mechanism significantly improves performance on the challenging task of COVID-19 lesion segmentation. The validation experiments show that the performance gain of the attention-augmented U-Net comes from its ability to capture a dynamic and precise (wider) attention context. The AA-U-Net achieves Dice scores of 72.3% and 61.4% for ground-glass opacity and consolidation lesions in COVID-19 segmentation, improving accuracy by 4.2 percentage points over a baseline U-Net and by 3.09 percentage points over a baseline U-Net with matched parameters. Supplementary Information: The online version contains supplementary material available at 10.1007/s11760-022-02302-3.
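A minimal sketch of an attention-augmented convolution block in the spirit described above: convolutional feature maps concatenated with self-attention feature maps computed over all spatial positions. The single-head attention and the channel split between the two branches are simplifying assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AAConvBlock(nn.Module):
    def __init__(self, in_ch, conv_ch, attn_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, 3, padding=1)
        self.qkv = nn.Conv2d(in_ch, 3 * attn_ch, 1)  # queries, keys, values
        self.attn_ch = attn_ch

    def forward(self, x):
        b, _, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        q = q.flatten(2).transpose(1, 2)              # (B, HW, C)
        k = k.flatten(2)                              # (B, C, HW)
        v = v.flatten(2).transpose(1, 2)              # (B, HW, C)
        # Self-attention over all spatial positions (fine at a bottleneck,
        # where the feature map is small).
        attn = torch.softmax(q @ k / self.attn_ch ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, self.attn_ch, h, w)
        # Concatenate convolutional and attention feature maps.
        return torch.cat([self.conv(x), out], dim=1)
```

Placing this block only at the bottleneck, where spatial resolution is lowest, keeps the quadratic attention cost manageable, which is presumably why the paper integrates it there.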

4.
J Med Imaging (Bellingham) ; 9(4): 044001, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35847178

ABSTRACT

Purpose: Image registration is the process of aligning images, and it is a fundamental task in medical image analysis. While many tasks in the field of image analysis, such as image segmentation, are handled almost entirely with deep learning and exceed the accuracy of conventional algorithms, currently available deformable image registration methods are often still conventional. Deep learning methods for medical image registration have recently reached the accuracy of conventional algorithms. However, they are often based on a weakly supervised learning scheme that uses multilabel image segmentations during training, and creating such detailed annotations is very time-consuming. Approach: We propose a weakly supervised learning scheme for deformable image registration. By calculating the loss function from bounding-box labels only, we are able to train an image registration network for large-displacement deformations without using densely labeled images. We evaluate our model on inter-patient three-dimensional abdominal CT and MRI images. Results: The results show an improvement of ~10% (for CT images) and 20% (for MRI images) in comparison with the unsupervised method. When the reduced annotation effort is taken into account, the performance also exceeds that of weakly supervised training using detailed image segmentations. Conclusion: We show that the performance of image registration methods can be enhanced with little annotation effort using our proposed method.
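The following is one illustrative reading of how a bounding-box-only loss could look, not the authors' exact formulation: the moving image's organ box is rasterised to a mask, warped with the predicted displacement field, and compared against the fixed image's box with a soft Dice loss. Box format, coordinate conventions, and the helper names are assumptions.

```python
import torch
import torch.nn.functional as F

def box_to_mask(box, shape):
    # box: (z0, y0, x0, z1, y1, x1) in voxels; shape: (D, H, W).
    m = torch.zeros(shape)
    z0, y0, x0, z1, y1, x1 = [int(v) for v in box]
    m[z0:z1, y0:y1, x0:x1] = 1.0
    return m

def bbox_dice_loss(flow, moving_box, fixed_box, shape):
    # flow: (1, 3, D, H, W), assumed in grid_sample's normalised coordinates
    # with (x, y, z) channel order.
    mov = box_to_mask(moving_box, shape)[None, None]
    fix = box_to_mask(fixed_box, shape)[None, None]
    # Identity sampling grid plus the predicted displacement.
    grid = F.affine_grid(torch.eye(3, 4)[None], [1, 1, *shape],
                         align_corners=False)
    warped = F.grid_sample(mov, grid + flow.permute(0, 2, 3, 4, 1),
                           align_corners=False)
    inter = (warped * fix).sum()
    return 1.0 - 2.0 * inter / (warped.sum() + fix.sum() + 1e-6)
```

Because a box takes seconds to draw while a dense organ segmentation takes minutes to hours, this kind of supervision trades a small loss of label precision for a large reduction in annotation effort, which is the core argument of the paper.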

5.
Sensors (Basel) ; 22(3)2022 Feb 01.
Article in English | MEDLINE | ID: mdl-35161851

ABSTRACT

Deep-learning-based medical image registration remains very difficult and often fails to improve over its classical counterparts where comprehensive supervision is not available, in particular for large transformations, including rigid alignment. The use of unsupervised, metric-based registration networks has become popular, but so far no universally applicable similarity metric is available for multimodal medical registration, requiring a trade-off between local contrast-invariant edge features and more global statistical metrics. In this work, we aim to improve over the use of handcrafted metric-based losses. We propose to use synthetic three-way (triangular) cycles that, for each pair of images, comprise two multimodal transformations to be estimated and one known synthetic monomodal transform. Additionally, we present a robust method for estimating large rigid transformations that is differentiable in end-to-end learning. By minimising the cycle discrepancy and adapting the synthetic transformation to be close to the real geometric difference of the image pairs during training, we successfully tackle intra-patient abdominal CT-MRI registration and reach performance on par with state-of-the-art metric-supervised and classic methods. Cyclic constraints enable the learning of cross-modality features that excel at accurate anatomical alignment of abdominal CT and MRI scans.
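A sketch of the triangular cycle constraint for the rigid case, under the assumption that transforms are represented as 4x4 homogeneous matrices: a known synthetic monomodal transform T_syn maps CT to a warped copy CT', the network estimates the two multimodal transforms (CT to MR and MR to CT'), and their composition should reproduce T_syn. This gives a supervision signal without any ground-truth multimodal alignment.

```python
import torch

def cycle_discrepancy(T_ct_to_mr, T_mr_to_ctp, T_syn):
    # All inputs: (B, 4, 4) homogeneous rigid transforms.
    T_cycle = T_mr_to_ctp @ T_ct_to_mr          # CT -> MR -> CT'
    # Frobenius distance to the known synthetic transform; a point-wise
    # error over sampled landmarks would be a common alternative.
    return ((T_cycle - T_syn) ** 2).sum(dim=(1, 2)).mean()
```

Training loop idea, per the abstract: sample T_syn close to the real geometric difference of the pair, warp the CT with it to obtain CT', predict both multimodal transforms with the network, and minimise the cycle discrepancy.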


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Algorithms; Humans
6.
J Biomed Inform ; 119: 103816, 2021 07.
Article in English | MEDLINE | ID: mdl-34022421

ABSTRACT

Deep-learning-based medical image segmentation is an important step in diagnosis and relies strongly on capturing sufficient spatial context without requiring overly complex models that are hard to train with limited labelled data. Training data are particularly scarce for segmenting infection regions in CT images of COVID-19 patients. Attention models help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent criss-cross attention module aims to approximate global self-attention while remaining memory- and time-efficient by separating horizontal and vertical self-similarity computations. However, capturing attention from all non-local locations can adversely impact the accuracy of semantic segmentation networks. We propose a new Dynamic Deformable Attention Network (DDANet) that enables a more accurate contextual information computation in a similarly efficient way. Our novel technique is based on a deformable criss-cross attention block that learns both attention coefficients and attention offsets in a continuous way. A deep U-Net (Schlemper et al., 2019) segmentation network employing this attention mechanism is able to capture attention from pertinent non-local locations and improves performance on a challenging COVID-19 lesion segmentation task compared to criss-cross attention within a U-Net. Our validation experiments show that the performance gain of the recursively applied dynamic deformable attention blocks comes from their ability to capture a dynamic and precise attention context. Our DDANet achieves Dice scores of 73.4% and 61.3% for ground-glass opacity and consolidation lesions in COVID-19 segmentation and improves accuracy by 4.9 percentage points compared to a baseline U-Net and by 24.4 percentage points compared to current state-of-the-art methods (Fan et al., 2020).
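For orientation, here is a minimal sketch of plain criss-cross attention, the factorisation DDANet builds on: each position attends only to its own row and column rather than to all positions. The learned continuous offsets that make the attended locations deformable are omitted for brevity; the reduction factor and residual connection are assumptions.

```python
import torch
import torch.nn as nn

class CrissCrossAttention(nn.Module):
    def __init__(self, ch, reduced=8):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // reduced, 1)
        self.k = nn.Conv2d(ch, ch // reduced, 1)
        self.v = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Row attention: each pixel scores all pixels in its row.
        e_row = torch.einsum("bchw,bchv->bhwv", q, k)   # (B, H, W, W)
        # Column attention: each pixel scores all pixels in its column.
        e_col = torch.einsum("bchw,bcuw->bhwu", q, k)   # (B, H, W, H)
        a = torch.softmax(torch.cat([e_row, e_col], -1), -1)
        a_row, a_col = a[..., :w], a[..., w:]
        out = torch.einsum("bhwv,bchv->bchw", a_row, v) \
            + torch.einsum("bhwu,bcuw->bchw", a_col, v)
        return x + out  # residual connection
```

The row/column factorisation reduces the attention cost from O((HW)^2) to O(HW(H+W)); DDANet's contribution is to let the attended row and column positions shift continuously via learned offsets rather than stay on the fixed criss-cross grid.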


Subject(s)
COVID-19; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; SARS-CoV-2; Semantics; Tomography, X-Ray Computed
7.
MethodsX ; 2: 124-34, 2015.
Article in English | MEDLINE | ID: mdl-26150980

ABSTRACT

Picrosirius red (PSR) staining is a commonly used histological technique to visualize collagen in paraffin-embedded tissue sections. PSR-stained collagen appears red in light microscopy. However, it is not widely known that PSR-stained collagen also shows red fluorescence, whereas live cells have a distinct green autofluorescence. Both emission patterns can be detected using standard filter sets found in conventional fluorescence microscopes. Here we used digital image addition and subtraction to determine the relative areas of pure collagen and live cell content in heart tissue in a semi-automated process using standard software. This procedure, which accounts for empty spaces (holes) within the section, can easily be adapted to quantify the collagen and live cell areas in healthy or fibrotic tissues such as aorta, lung, kidney, or liver by semi-automated planimetry, exemplified herein for infarcted heart tissue obtained from the mouse myocardial infarction model.
• Use of conventional PSR-stained paraffin-embedded tissue sections for fluorescence analysis.
• PSR and autofluorescence images are used to calculate the collagen area and live cell area in the tissue; empty spaces (holes) in the tissue are accounted for.
• High-throughput analysis of collagen and live cell content in tissue for statistical purposes.
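An illustrative sketch of the semi-automated planimetry described above, assuming two aligned fluorescence images of the same section: a red channel (PSR-stained collagen) and a green channel (cell autofluorescence). The file names and intensity thresholds are placeholders, not values from the paper.

```python
import numpy as np
import imageio.v3 as iio

red = iio.imread("psr_red.tif").astype(float)             # collagen fluorescence
green = iio.imread("autofluor_green.tif").astype(float)   # live-cell autofluorescence

collagen = red > 40     # assumed intensity thresholds
cells = green > 40
# Image "addition": total tissue is anything fluorescing in either channel;
# pixels outside this union are the empty spaces (holes) in the section.
tissue = collagen | cells
# Image "subtraction": pure collagen excludes pixels that are also cellular.
pure_collagen = collagen & ~cells

tissue_px = tissue.sum()
print(f"collagen area:  {100 * pure_collagen.sum() / tissue_px:.1f}% of tissue")
print(f"live cell area: {100 * (cells & ~collagen).sum() / tissue_px:.1f}% of tissue")
```

Normalising by the tissue union rather than the full image is what makes the measurement robust to holes, matching the consideration the abstract highlights.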
