Results 1 - 20 of 54
1.
Sensors (Basel); 24(7), 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610507

ABSTRACT

In cardiac cine imaging, acquiring high-quality data is challenging and time-consuming due to the artifacts generated by the heart's continuous movement. Volumetric, fully isotropic data acquisition with high temporal resolution is, to date, intractable due to MR physics constraints. To assess whole-heart movement with minimal acquisition time, we propose a deep learning model that reconstructs the volumetric shape of multiple cardiac chambers from a limited number of input slices while simultaneously optimizing the slice acquisition orientation for this task. We mimic the current clinical protocols for cardiac imaging and compare the shape reconstruction quality of standard clinical views and optimized views. In our experiments, we show that the jointly trained model achieves accurate high-resolution multi-chamber shape reconstruction with errors of <13 mm HD95 and Dice scores of >80%, indicating its effectiveness in both simulated cardiac cine MRI and clinical cardiac MRI with a wide range of pathological shape variations.
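
Below is a minimal, hypothetical sketch of the core idea: encode a small number of 2D slices and decode a volumetric multi-chamber segmentation. The layer configuration, slice count and volume size are illustrative assumptions, and the joint optimization of slice orientation described in the abstract is omitted.

```python
# Sketch only: slices-to-volume reconstruction with placeholder layer sizes.
import torch
import torch.nn as nn

class SlicesToVolume(nn.Module):
    def __init__(self, n_slices=3, n_chambers=4, vol=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(n_slices, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(256),
        )
        self.n_chambers, self.vol = n_chambers, vol
        self.decode = nn.Linear(256, n_chambers * vol ** 3)

    def forward(self, slices):                       # (B, n_slices, H, W)
        z = self.encode(slices)                      # shared latent code
        logits = self.decode(z)                      # volumetric logits
        return logits.view(-1, self.n_chambers, self.vol, self.vol, self.vol)

model = SlicesToVolume()
print(model(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 4, 32, 32, 32])
```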


Subject(s)
Cardiac Surgical Procedures; Deep Learning; Cardiac Volume; Heart/diagnostic imaging; Artifacts
2.
Sensors (Basel); 23(6), 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36991588

ABSTRACT

Image registration for temporal ultrasound sequences can be very beneficial for image-guided diagnostics and interventions. Cooperative human-machine systems that enable seamless assistance for both inexperienced and expert users during ultrasound examinations rely on robust, real-time motion estimation. Yet rapid and irregular motion patterns, varying image contrast and domain shifts in imaging devices pose a severe challenge to conventional real-time registration approaches. While learning-based registration networks have the promise of abstracting relevant features and delivering very fast inference times, they come at the potential risk of limited generalisation and robustness for unseen data, in particular when trained with limited supervision. In this work, we demonstrate that these issues can be overcome by using end-to-end differentiable displacement optimisation. Our method involves a trainable feature backbone, a correlation layer that evaluates a large range of displacement options simultaneously and a differentiable regularisation module that ensures smooth and plausible deformation. In extensive experiments on public and private ultrasound datasets with very sparse ground truth annotation, the method showed better generalisation abilities and overall accuracy than a VoxelMorph network with the same feature backbone, while being twice as fast at inference.
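
As a rough illustration of the correlation layer and differentiable displacement readout described above, the following sketch scores all displacement candidates within a search radius and regresses a smooth flow field via a soft-argmax. All tensor shapes and the search radius are assumptions, not the authors' settings.

```python
# Minimal sketch of a 2D correlation layer with soft-argmax displacement readout.
import torch
import torch.nn.functional as F

def correlation_soft_displacement(feat_fix, feat_mov, radius=4):
    """feat_*: (B, C, H, W) feature maps from a shared backbone."""
    B, C, H, W = feat_fix.shape
    pad = F.pad(feat_mov, [radius] * 4)
    costs, offsets = [], []
    # Evaluate all (2r+1)^2 displacement options simultaneously.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[:, :, radius + dy:radius + dy + H,
                                radius + dx:radius + dx + W]
            costs.append((feat_fix * shifted).sum(dim=1))  # correlation score
            offsets.append((dx, dy))
    cost = torch.stack(costs, dim=1)                   # (B, K, H, W)
    prob = torch.softmax(cost, dim=1)                  # displacement beliefs
    off = torch.tensor(offsets, dtype=torch.float32)   # (K, 2)
    # Soft-argmax: probability-weighted mean displacement, fully differentiable.
    return torch.einsum('bkhw,kc->bchw', prob, off)

flow = correlation_soft_displacement(torch.randn(1, 16, 32, 32),
                                     torch.randn(1, 16, 32, 32))
print(flow.shape)  # torch.Size([1, 2, 32, 32])
```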

3.
Sensors (Basel); 22(3), 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-35161851

ABSTRACT

Deep learning-based medical image registration remains very difficult and often fails to improve over its classical counterparts when comprehensive supervision is not available, in particular for large transformations, including rigid alignment. The use of unsupervised, metric-based registration networks has become popular, but so far no universally applicable similarity metric is available for multimodal medical registration, requiring a trade-off between local contrast-invariant edge features and more global statistical metrics. In this work, we aim to improve over the use of handcrafted metric-based losses. We propose to use synthetic three-way (triangular) cycles that, for each pair of images, comprise two multimodal transformations to be estimated and one known synthetic monomodal transform. Additionally, we present a robust method for estimating large rigid transformations that remains differentiable for end-to-end learning. By minimising the cycle discrepancy and adapting the synthetic transformation to be close to the real geometric difference of the image pairs during training, we successfully tackle intra-patient abdominal CT-MRI registration and reach performance on par with state-of-the-art metric-supervised and classical methods. Cyclic constraints enable the learning of cross-modality features that excel at accurate anatomical alignment of abdominal CT and MRI scans.
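
A minimal sketch of the triangular cycle discrepancy, assuming all transforms are expressed as homogeneous 4x4 matrices; the function name and batch convention are hypothetical.

```python
import torch

def cycle_discrepancy(T_ab, T_ab_syn, T_syn):
    """All arguments: (B, 4, 4) homogeneous transforms.
    T_ab: estimated A -> B (multimodal); T_ab_syn: estimated A -> B_syn
    (multimodal); T_syn: known synthetic monomodal transform B -> B_syn.
    A closed triangle requires T_syn @ T_ab == T_ab_syn."""
    residual = torch.matmul(T_syn, T_ab) - T_ab_syn
    return residual.pow(2).mean()

I4 = torch.eye(4).unsqueeze(0)
print(cycle_discrepancy(I4, I4, I4))  # tensor(0.) when the cycle closes exactly
```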


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Algorithms; Humans
4.
J Biomed Inform; 119: 103816, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34022421

ABSTRACT

Deep learning-based medical image segmentation is an important step in diagnosis and relies strongly on capturing sufficient spatial context without requiring overly complex models that are hard to train with limited labelled data. Training data are particularly scarce for segmenting infection regions in CT images of COVID-19 patients. Attention models help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent criss-cross attention module aims to approximate global self-attention while remaining memory- and time-efficient by separating horizontal and vertical self-similarity computations. However, capturing attention from all non-local locations can adversely impact the accuracy of semantic segmentation networks. We propose a new Dynamic Deformable Attention Network (DDANet) that enables more accurate contextual information computation in a similarly efficient way. Our novel technique is based on a deformable criss-cross attention block that learns both attention coefficients and attention offsets in a continuous way. A deep U-Net (Schlemper et al., 2019) segmentation network that employs this attention mechanism is able to capture attention from pertinent non-local locations and improves performance on semantic segmentation compared to criss-cross attention within a U-Net on a challenging COVID-19 lesion segmentation task. Our validation experiments show that the performance gain of the recursively applied dynamic deformable attention blocks comes from their ability to capture dynamic and precise attention context. Our DDANet achieves Dice scores of 73.4% and 61.3% for ground-glass opacity and consolidation lesions in COVID-19 segmentation and improves accuracy by 4.9 percentage points compared to a baseline U-Net and 24.4 percentage points compared to current state-of-the-art methods (Fan et al., 2020).
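
The following is an illustrative, simplified stand-in for an attention block with learned continuous offsets (K sampling points per pixel, bilinear sampling via grid_sample), not the authors' exact DDANet module.

```python
# Sketch of attention with learned continuous offsets; shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttention2d(nn.Module):
    def __init__(self, channels, num_points=4):
        super().__init__()
        self.num_points = num_points
        self.offset_pred = nn.Conv2d(channels, 2 * num_points, 1)  # (dx, dy) per point
        self.attn_pred = nn.Conv2d(channels, num_points, 1)        # attention coefficients
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        B, C, H, W = x.shape
        offsets = self.offset_pred(x).view(B, self.num_points, 2, H, W)
        attn = torch.softmax(self.attn_pred(x), dim=1)              # (B, K, H, W)
        v = self.value(x)
        # Base sampling grid in normalised [-1, 1] coordinates.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing='ij')
        base = torch.stack([xs, ys], dim=-1).to(x)                  # (H, W, 2)
        out = 0.0
        for k in range(self.num_points):
            grid = base + offsets[:, k].permute(0, 2, 3, 1)         # continuous offsets
            sampled = F.grid_sample(v, grid, align_corners=True)    # differentiable sampling
            out = out + attn[:, k:k + 1] * sampled
        return out

y = DeformableAttention2d(8)(torch.randn(2, 8, 16, 16))
print(y.shape)  # torch.Size([2, 8, 16, 16])
```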


Subject(s)
COVID-19; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; SARS-CoV-2; Semantics; Tomography, X-Ray Computed
5.
Sensors (Basel); 20(5), 2020 Mar 04.
Article in English | MEDLINE | ID: mdl-32143297

ABSTRACT

Deformable image registration is still a challenge when the considered images have strong variations in appearance and large initial misalignment. A huge performance gap currently remains for fast-moving regions in videos or strong deformations of natural objects. We present a new semantically guided, two-step deep deformation network that is particularly well suited for the estimation of large deformations. We combine a U-Net architecture, weakly supervised with segmentation information to extract semantically meaningful features, with multiple stages of nonrigid spatial transformer networks parameterized with low-dimensional B-spline deformations. Combining alignment and semantic loss functions together with a regularization penalty to obtain smooth and plausible deformations, we achieve superior results in terms of alignment quality compared to previous approaches that have only considered a label-driven alignment loss. Our network model advances the state of the art for inter-subject face part alignment and motion tracking in medical cardiac magnetic resonance imaging (MRI) sequences in comparison to FlowNet and Label-Reg, two recent deep learning registration frameworks. The models are compact, very fast in inference, and demonstrate clear potential for a variety of challenging tracking and/or alignment tasks in computer vision and medical image analysis.
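
A compact sketch of a spatial transformer parameterized by a coarse control-point displacement grid; for brevity, the B-spline densification is approximated with bicubic upsampling, and all shapes are assumptions.

```python
# Sketch: warp an image with a low-dimensional control-grid parameterisation.
import torch
import torch.nn.functional as F

def warp_with_control_grid(image, control_disp):
    """image: (B, C, H, W); control_disp: (B, 2, h, w) low-dimensional
    displacement parameters in normalised coordinates."""
    B, C, H, W = image.shape
    # Densify the coarse parameterisation (smooth by construction).
    dense = F.interpolate(control_disp, size=(H, W),
                          mode='bicubic', align_corners=True)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing='ij')
    identity = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2).to(image)
    grid = identity + dense.permute(0, 2, 3, 1)       # displaced sampling grid
    return F.grid_sample(image, grid, align_corners=True)

warped = warp_with_control_grid(torch.randn(1, 1, 64, 64),
                                0.05 * torch.randn(1, 2, 8, 8))
print(warped.shape)  # torch.Size([1, 1, 64, 64])
```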

6.
Neuroimage; 157: 561-574, 2017 Aug 15.
Article in English | MEDLINE | ID: mdl-28602815

ABSTRACT

Diffusion MRI is an exquisitely sensitive probe of tissue microstructure and is currently the only non-invasive measure of the brain's fibre architecture. As this technique becomes more sophisticated and microstructurally informative, there is increasing value in comparing diffusion MRI with microscopic imaging in the same tissue samples. This study compared estimates of fibre orientation dispersion in white matter derived from diffusion MRI to reference measures of dispersion obtained from polarized light imaging and histology. Three post-mortem brain specimens were scanned with diffusion MRI and analyzed with a two-compartment dispersion model. The specimens were then sectioned for microscopy, including polarized light imaging estimates of fibre orientation and quantitative histological estimates of myelin and astrocytes. Dispersion estimates were correlated at region- and voxel-wise levels in the corpus callosum, the centrum semiovale and the corticospinal tract. The region-wise analysis yielded a correlation coefficient of r = 0.79 for the comparison of diffusion MRI with histology, and r = 0.60 for the comparison with polarized light imaging. In the corpus callosum, we observed a pattern of higher dispersion at the midline compared to its lateral aspects. This pattern was present in all modalities, and the dispersion profiles from microscopy and diffusion MRI were highly correlated. Astrocytes appeared to make only a minor contribution to the dispersion observed with diffusion MRI. These results demonstrate that fibre orientation dispersion estimates from diffusion MRI represent the tissue architecture well. Dispersion models might be improved by more faithfully incorporating an informed mapping based on microscopy data.


Subject(s)
Astrocytes; Diffusion Magnetic Resonance Imaging/methods; Histological Techniques/methods; Microscopy/methods; Myelin Sheath; Tissue Banks; White Matter/diagnostic imaging; Aged; Aged, 80 and over; Diffusion Magnetic Resonance Imaging/standards; Histological Techniques/standards; Humans; Male; Microscopy/standards; Microscopy, Polarization/methods; Microscopy, Polarization/standards; Middle Aged
7.
Int J Comput Assist Radiol Surg; 19(4): 713-721, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38233597

ABSTRACT

PURPOSE: Navigation guidance is a key requirement for a multitude of lung interventions using video bronchoscopy. State-of-the-art solutions focus on lung biopsies using electromagnetic tracking and intraoperative image registration w.r.t. preoperative CT scans for guidance. The requirement of patient-specific CT scans hampers the use of navigation guidance in other settings, such as intensive care units. METHODS: This paper addresses bronchoscope tracking by incorporating video data alone. In contrast to state-of-the-art approaches, we entirely omit electromagnetic tracking and patient-specific CT scans to avoid changes in clinical workflows and additional hardware requirements in intensive care units. Guidance is enabled by means of topological bronchoscope localization w.r.t. a generic airway model. In particular, we take maximal advantage of the anatomical constraint that airway trees are traversed sequentially. This is realized by incorporating sequences of CNN-based airway likelihoods into a hidden Markov model. RESULTS: Our approach is evaluated in multiple experiments inside a lung phantom model. By considering temporal context and using anatomical knowledge for regularization, we improve the accuracy to 0.98, compared to 0.81 (weighted F1: 0.98 vs. 0.81) for a classification based on individual frames. CONCLUSION: We combine CNN-based single-image classification of airway segments with anatomical constraints and temporal HMM-based inference for the first time. Our approach shows promising first results for vision-based guidance of bronchoscopy interventions in the absence of electromagnetic tracking and patient-specific CT scans.
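
One way to picture the fusion of per-frame CNN likelihoods with an HMM is Viterbi decoding over airway segments. The toy transition matrix below encodes the "sequentially traversed airway tree" constraint and is purely illustrative.

```python
# Sketch: Viterbi decoding of CNN airway likelihoods under an HMM prior.
import numpy as np

def viterbi(log_lik, log_trans, log_prior):
    """log_lik: (T, S) per-frame CNN log-likelihoods for S airway segments."""
    T, S = log_lik.shape
    dp = log_prior + log_lik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = dp[:, None] + log_trans          # (S_prev, S_next)
        back[t] = scores.argmax(axis=0)           # best predecessor per state
        dp = scores.max(axis=0) + log_lik[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 3 airway segments, transitions only to self or the next segment.
trans = np.array([[0.7, 0.3, 0.0],
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])
lik = np.random.dirichlet(np.ones(3), size=10)    # stand-in for CNN softmax outputs
states = viterbi(np.log(lik), np.log(trans + 1e-12), np.log(np.ones(3) / 3))
print(states)
```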


Subject(s)
Algorithms; Bronchoscopy; Humans; Bronchoscopy/methods; Imaging, Three-Dimensional/methods; Bronchoscopes; Tomography, X-Ray Computed/methods; Electromagnetic Phenomena
8.
Signal Image Video Process; 17(4): 981-989, 2023.
Article in English | MEDLINE | ID: mdl-35910403

ABSTRACT

Deep learning-based image segmentation models rely strongly on capturing sufficient spatial context without requiring complex models that are hard to train with limited labeled data. For COVID-19 infection segmentation on CT images, training data are currently scarce. Attention models, in particular the most recent self-attention methods, have been shown to help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent attention-augmented convolution model aims to capture long-range interactions by concatenating self-attention and convolution feature maps. This work proposes a novel attention-augmented convolution U-Net (AA-U-Net) that enables a more accurate spatial aggregation of contextual information by integrating attention-augmented convolution in the bottleneck of an encoder-decoder segmentation architecture. A deep segmentation network (U-Net) with this attention mechanism significantly improves performance on the challenging task of COVID-19 lesion segmentation. The validation experiments show that the performance gain of the attention-augmented U-Net comes from its ability to capture dynamic and precise (wider) attention context. The AA-U-Net achieves Dice scores of 72.3% and 61.4% for ground-glass opacity and consolidation lesions in COVID-19 segmentation and improves accuracy by 4.2 percentage points over a baseline U-Net and 3.09 percentage points over a baseline U-Net with matched parameters. Supplementary Information: The online version contains supplementary material available at 10.1007/s11760-022-02302-3.
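
A minimal sketch of an attention-augmented convolution: a standard convolution branch concatenated with a single-head self-attention branch, roughly as in the described bottleneck; the channel splits are illustrative assumptions.

```python
# Sketch: convolution features concatenated with self-attention features.
import torch
import torch.nn as nn

class AAConv2d(nn.Module):
    def __init__(self, in_ch, conv_ch=24, attn_ch=8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, 3, padding=1)
        self.qkv = nn.Conv2d(in_ch, 3 * attn_ch, 1)
        self.attn_ch = attn_ch

    def forward(self, x):
        B, _, H, W = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)          # each (B, attn_ch, H, W)
        q = q.flatten(2).transpose(1, 2)               # (B, HW, d)
        k = k.flatten(2)                               # (B, d, HW)
        v = v.flatten(2).transpose(1, 2)               # (B, HW, d)
        attn = torch.softmax(q @ k / self.attn_ch ** 0.5, dim=-1)
        attn_feat = (attn @ v).transpose(1, 2).view(B, self.attn_ch, H, W)
        return torch.cat([self.conv(x), attn_feat], dim=1)  # conv + attention maps

y = AAConv2d(16)(torch.randn(1, 16, 8, 8))
print(y.shape)  # torch.Size([1, 32, 8, 8])
```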

9.
Med Image Anal; 89: 102887, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37453235

ABSTRACT

3D human pose estimation is a key component of clinical monitoring systems. The clinical applicability of deep pose estimation models, however, is limited by their poor generalization under domain shifts along with their need for sufficient labeled training data. As a remedy, we present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain. Our method comprises two complementary adaptation strategies based on prior knowledge about human anatomy. First, we guide the learning process in the target domain by constraining predictions to the space of anatomically plausible poses. To this end, we embed the prior knowledge into an anatomical loss function that penalizes asymmetric limb lengths, implausible bone lengths, and implausible joint angles. Second, we propose to filter pseudo labels for self-training according to their anatomical plausibility and incorporate the concept into the Mean Teacher paradigm. We unify both strategies in a point cloud-based framework applicable to unsupervised and source-free domain adaptation. Evaluation is performed for in-bed pose estimation under two adaptation scenarios, using the public SLP dataset and a newly created dataset. Our method consistently outperforms various state-of-the-art domain adaptation methods, surpasses the baseline model by 31%/66%, and reduces the domain gap by 65%/82%. Source code is available at https://github.com/multimodallearning/da-3dhpe-anatomy.
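
As an illustration of the anatomical prior, the sketch below implements one of the three loss terms, the penalty on asymmetric limb lengths; the skeleton definition and joint indices are hypothetical placeholders.

```python
# Sketch: anatomical plausibility loss penalising left/right limb-length asymmetry.
import torch

def symmetry_loss(joints, left_bones, right_bones):
    """joints: (B, J, 3) predicted 3D joint positions.
    left_bones/right_bones: lists of (parent, child) index pairs, mirrored."""
    loss = 0.0
    for (lp, lc), (rp, rc) in zip(left_bones, right_bones):
        left_len = (joints[:, lp] - joints[:, lc]).norm(dim=-1)
        right_len = (joints[:, rp] - joints[:, rc]).norm(dim=-1)
        loss = loss + (left_len - right_len).abs().mean()
    return loss / len(left_bones)

# Hypothetical skeleton: (shoulder, elbow) and (elbow, wrist) on each side.
poses = torch.randn(4, 14, 3, requires_grad=True)
print(symmetry_loss(poses, [(0, 1), (1, 2)], [(3, 4), (4, 5)]))
```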


Subject(s)
Learning; Software; Humans
10.
Med Image Anal; 83: 102628, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality domain adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans of the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice scores: VS 88.4%, cochleas 85.7%) and close to full supervision (median Dice scores: VS 92.5%, cochleas 87.7%). All top-performing methods used an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.


Subject(s)
Neuroma, Acoustic; Humans; Neuroma, Acoustic/diagnostic imaging
11.
J Med Imaging (Bellingham); 9(4): 044001, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35847178

ABSTRACT

Purpose: Image registration is the process of aligning images, and it is a fundamental task in medical image analysis. While many tasks in the field of image analysis, such as image segmentation, are handled almost entirely with deep learning and exceed the accuracy of conventional algorithms, currently available deformable image registration methods are often still conventional. Deep learning methods for medical image registration have recently reached the accuracy of conventional algorithms. However, they are often based on a weakly supervised learning scheme using multilabel image segmentations during training, and the creation of such detailed annotations is very time-consuming. Approach: We propose a weakly supervised learning scheme for deformable image registration. By calculating the loss function based on bounding box labels only, we are able to train an image registration network for large-displacement deformations without using densely labeled images. We evaluate our model on inter-patient three-dimensional abdominal CT and MRI images. Results: The results show an improvement of approximately 10% (for CT images) and 20% (for MRI images) in comparison to the unsupervised method. When the reduced annotation effort is taken into account, the performance also exceeds that of weakly supervised training using detailed image segmentations. Conclusion: We show that the performance of image registration methods can be enhanced with little annotation effort using our proposed method.
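
A small sketch of the weak-supervision idea under the assumption that each bounding box is rasterised to a binary mask and compared with a soft Dice term after warping; the helper names are hypothetical.

```python
# Sketch: bounding-box-only registration loss via rasterised box masks.
import torch

def box_to_mask(box, shape):
    """box: (x0, y0, x1, y1) voxel indices; shape: (H, W)."""
    mask = torch.zeros(shape)
    mask[box[1]:box[3], box[0]:box[2]] = 1.0
    return mask

def soft_dice_loss(warped_mask, target_mask, eps=1e-6):
    inter = (warped_mask * target_mask).sum()
    return 1 - (2 * inter + eps) / (warped_mask.sum() + target_mask.sum() + eps)

fixed = box_to_mask((10, 10, 30, 40), (64, 64))
moving_warped = box_to_mask((12, 8, 32, 38), (64, 64))  # stand-in for a warped box mask
print(soft_dice_loss(moving_warped, fixed))
```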

12.
Comput Biol Med; 146: 105555, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35533459

ABSTRACT

The construction of three-dimensional multi-modal tissue maps provides an opportunity to spur interdisciplinary innovations across temporal and spatial scales through information integration. While the preponderance of effort is allocated to the cellular level, exploring changes in cell interactions and organization, contextualizing findings within organs and systems is essential to visualize and interpret higher-resolution linkage across scales. There is substantial normal variation in kidney morphometry and appearance across body size, sex, and imaging protocol in abdominal computed tomography (CT). A volumetric atlas framework is needed to integrate and visualize this variability across scales. However, no atlas framework exists for abdominal and retroperitoneal organs in multi-contrast CT. Hence, we propose a high-resolution CT retroperitoneal atlas specifically optimized for the kidney across non-contrast CT and early arterial, late arterial, venous and delayed contrast-enhanced CT. We introduce a deep learning-based volume-of-interest extraction method that localizes 2D slices with a representativeness score and crops to the abdominal region of interest. An automated two-stage hierarchical registration pipeline then registers abdominal volumes to a high-resolution CT atlas template using DEEDS affine and non-rigid registration. To generate and evaluate the atlas framework, multi-contrast CT scans of 500 subjects (without reported history of renal disease; age 15-50 years; 250 males and 250 females) were processed. PDD-Net with affine registration achieved the best overall mean Dice for portal-venous-phase multi-organ label transfer within the registration pipeline (0.540 ± 0.275, p < 0.0001, Wilcoxon signed-rank test) compared to the other registration tools. It also demonstrated the best performance, with a median Dice above 0.8, in transferring kidney information to the atlas space. DEEDS performed consistently, with stable transfer performance in the average mappings of all phases, including clear kidney boundaries with contrastive characteristics, while PDD-Net demonstrated stable kidney registration only in the average mappings of the early arterial, late arterial, and portal venous phases. The variance mappings show low intensity variance in the kidney regions with DEEDS across all contrast phases and with PDD-Net across the late arterial and portal venous phases. We demonstrate stable generalizability of the atlas template in integrating normal kidney variation from small to large, across contrast phases, and across populations with widely varying demographics. The linkage of atlas and demographics provides a better understanding of the variation of kidney anatomy across populations.


Subject(s)
Radiography, Abdominal; Tomography, X-Ray Computed; Adolescent; Adult; Female; Humans; Kidney/diagnostic imaging; Male; Middle Aged; Tomography, X-Ray Computed/methods; Young Adult
13.
Article in English | MEDLINE | ID: mdl-36303577

ABSTRACT

The Human BioMolecular Atlas Program (HuBMAP) provides an opportunity to contextualize findings from the cellular to the organ-system level. Constructing an atlas target is the primary endpoint for generalizing anatomical information across scales and populations. An initial target of HuBMAP is the kidney, and arterial-phase contrast-enhanced computed tomography (CT) provides distinctive appearance and anatomical context for internal kidney substructures such as the renal cortex, medulla, and pelvicalyceal system. Across large-scale imaging surveys, the confounding effects of demographics and kidney morphology, together with the variance of imaging protocols, produce substantial variation in substructure morphometry and intensity contrast. Such variability makes it more difficult to localize the anatomical features of the kidney substructure in a well-defined spatial reference for clinical analysis. In order to stabilize the localization of kidney substructures in the context of this variability, we propose a high-resolution CT kidney substructure atlas template. Briefly, we introduce a deep learning preprocessing technique to extract the abdominal volume of interest and further perform a supervised deep registration pipeline to stably adapt the anatomical context of the internal kidney substructure. To generate and evaluate the atlas template, arterial-phase CT scans of 500 control subjects were de-identified and registered to the atlas template with a complete end-to-end pipeline. With stable registration to the abdominal wall and kidney organs, the internal substructures of both the left and right kidneys are well localized in the high-resolution atlas space. The average atlas template successfully captures the contextual details of the internal structure and generalizes the morphological variation of the substructure across patients.

14.
IEEE Trans Med Imaging; 40(9): 2246-2257, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33872144

ABSTRACT

In the last two years, learning-based methods have started to show encouraging results in different supervised and unsupervised medical image registration tasks. Deep neural networks enable (near) real-time applications through fast inference times and have tremendous potential for increased registration accuracy by task-specific learning. However, the estimation of large 3D deformations, for example in inhale-to-exhale lung CT or inter-patient abdominal MRI registration, is still a major challenge for the widely adopted U-Net-like network architectures. Even when using multi-level strategies, current state-of-the-art DL registration results do not yet reach the high accuracy of conventional frameworks. To overcome the problem of large deformations for deep learning approaches, in this work we present GraphRegNet, a sparse keypoint-based geometric network for dense deformable medical image registration. Similar to the successful 2D optical flow estimation of FlowNet or PWC-Net, we leverage discrete dense displacement maps to facilitate the registration process. In order to cope with the enormously increasing memory requirements of displacement maps in 3D medical volumes and to obtain a well-regularised and accurate deformation field, we (1) formulate the registration task as the prediction of displacement vectors on a sparse irregular grid of distinctive keypoints and (2) introduce our efficient GraphRegNet for displacement regularisation, a combination of convolutional and graph neural network layers in a unified architecture. In our experiments on exhale-to-inhale lung CT registration we demonstrate substantial improvements (TRE below 1.4 mm) over other deep learning methods. Our code is publicly available at https://github.com/multimodallearning/graphregnet.
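
The sketch below is a toy stand-in for the sparse-keypoint formulation: candidate displacements are scored per keypoint, the beliefs are smoothed over a k-NN keypoint graph (standing in for the graph-network regularisation), and displacements are read out with a soft-argmax.

```python
# Sketch: graph-smoothed discrete displacement beliefs on sparse keypoints.
import torch

def knn_smoothed_displacements(keypoints, cost, candidates, k=5, iters=2):
    """keypoints: (N, 3); cost: (N, K) matching costs per candidate;
    candidates: (K, 3) shared discrete displacement vectors."""
    d = torch.cdist(keypoints, keypoints)              # (N, N) pairwise distances
    knn = d.topk(k + 1, largest=False).indices[:, 1:]  # k neighbours, excluding self
    prob = torch.softmax(-cost, dim=1)                 # displacement beliefs
    for _ in range(iters):                             # graph-based regularisation
        prob = 0.5 * prob + 0.5 * prob[knn].mean(dim=1)
    return prob @ candidates                           # (N, 3) soft-argmax readout

kp = torch.rand(100, 3)
cand = torch.randn(27, 3)
disp = knn_smoothed_displacements(kp, torch.randn(100, 27), cand)
print(disp.shape)  # torch.Size([100, 3])
```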


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Lung/diagnostic imaging; Magnetic Resonance Imaging; Tomography, X-Ray Computed
15.
Comput Methods Programs Biomed; 211: 106374, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34601186

ABSTRACT

BACKGROUND AND OBJECTIVE: Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect of automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to tackle the intertwined objectives of fast inference time and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long-ranging local displacement probability maps from fast and robust global transformation prediction. METHODS: In our approach, we first train a convolutional neural network (CNN) to extract modality-agnostic features with sub-second computation times for both 3D volumes during inference. Using sparsity-based network weight pruning, the model complexity and computation times can be substantially reduced. Based on these features, a large discretized search range of 3D motion vectors is explored to compute a probabilistic displacement map for each control point. These 3D probability maps are employed in our newly proposed, computationally efficient instance optimisation that robustly estimates the most likely global linear transformation reflecting the local displacement beliefs, subject to outlier rejection. RESULTS: Our experimental validation demonstrates state-of-the-art accuracy on the challenging CuRIOUS dataset, with an average target registration error of 2.50 mm, a model size of only 1.2 MB and run times of approximately 3 seconds for a full 3D multimodal registration. CONCLUSION: We show that a significant improvement in accuracy and robustness can be gained with instance optimisation and that our fast, self-supervised deep learning model can achieve state-of-the-art accuracy on this challenging registration task in only 3 seconds.
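
The instance-optimisation step can be pictured as a rigid/linear fit from expected control-point displacements and confidence weights; the sketch below uses a weighted Kabsch solve and, for brevity, omits the iterative reweighting that would implement outlier rejection.

```python
# Sketch: weighted rigid fit from control-point correspondences.
import numpy as np

def weighted_rigid_fit(src, dst, w):
    """src, dst: (N, 3) corresponding points; w: (N,) confidence weights."""
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst                    # weighted centroids
    H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
dst = src + np.array([1.0, -2.0, 0.5])               # pure translation example
R, t = weighted_rigid_fit(src, dst, np.ones(200))
print(np.round(t, 3))                                # ~[ 1. -2.  0.5]
```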


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer; Motion; Ultrasonography; Ultrasonography, Interventional
16.
Med Image Anal; 67: 101822, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33166774

ABSTRACT

Methods for deep learning-based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of a very large trainable parameter space and often insufficient availability of expert-supervised correspondence annotations has led to slower progress compared to other domains such as image segmentation. Yet image registration could also benefit more directly from an iterative solution than segmentation. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning and deformation estimation. In this work, we examine an end-to-end trainable, weakly supervised deep learning-based feature extraction approach that is able to map the complex appearance to a common space. Our results on thoracoabdominal CT and MRI image registration show that the proposed method compares favourably to state-of-the-art hand-crafted multi-modal features, Mutual Information-based approaches and fully integrated CNN-based methods, and handles even the limitation of small and only weakly labelled training datasets.


Subject(s)
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Humans; Supervised Machine Learning
17.
Int J Comput Assist Radiol Surg; 16(12): 2079-2087, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34420184

ABSTRACT

PURPOSE: Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the patient, which limits their practical applicability. In this work, we aim to decouple accurate weight estimation from such specific conditions by predicting the weight of covered patients from voxelized point cloud data. METHODS: We propose a novel deep learning framework, which comprises two 3D CNN modules solving the given task in two separate steps. First, we train a 3D U-Net to virtually uncover the patient, i.e. to predict the patient's volumetric surface without a cover. Second, the patient's weight is predicted from this 3D volume by means of a 3D CNN architecture, which we optimized for weight regression. RESULTS: We evaluate our approach on a lying-pose dataset (SLP) under two different cover conditions. The proposed framework considerably improves on the baseline model by up to [Formula: see text] and reduces the gap between the accuracy of weight estimates for covered and uncovered patients by up to [Formula: see text]. CONCLUSION: We present a novel pipeline to estimate the weight of patients who are covered by a blanket. Our approach relaxes the specific conditions that were required for accurate weight estimates by previous contactless methods and thus constitutes an important step towards fully automatic weight estimation in clinical practice.
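
A compact, hypothetical sketch of the two-step framework: one 3D module predicts the uncovered volume, and a second 3D CNN regresses weight from it. The layer sizes are placeholders, not the authors' tuned architecture.

```python
# Sketch: two-stage "uncover then regress" pipeline with toy modules.
import torch
import torch.nn as nn

uncover = nn.Sequential(          # step 1: virtual uncovering (toy stand-in for a 3D U-Net)
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid(),
)
regressor = nn.Sequential(        # step 2: weight regression from the predicted volume
    nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 1),
)
voxels = torch.rand(2, 1, 32, 64, 64)   # voxelised point clouds of covered patients
weight_kg = regressor(uncover(voxels))
print(weight_kg.shape)  # torch.Size([2, 1])
```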


Subject(s)
Cloud Computing; Machine Learning; Humans
18.
Sci Rep; 11(1): 5718, 2021 Mar 11.
Article in English | MEDLINE | ID: mdl-33707527

ABSTRACT

Recent dose reduction techniques have made retrospective computed tomography (CT) scans more applicable and have made extracting myocardial function from cardiac computed tomography (CCT) images feasible. However, the hyperparameters of generic image intensity-based registration techniques, which are used for tracking motion, have not been systematically optimised for this modality. There is limited work on their validation for measuring regional strains from retrospectively gated CCT images, and open-source software for motion analysis is not widely available. We calculated strain using our open-source platform by applying an image registration warping field to a triangulated mesh of the left ventricular endocardium. We optimised the hyperparameters of two registration methods to track the wall motion. Both methods required a single semi-automated segmentation of the left ventricular cavity at the end-diastolic phase. The motion was characterised by circumferential and longitudinal strains, as well as local area change throughout the cardiac cycle, in a dataset of 24 patients. The derived motion was validated against manually annotated anatomical landmarks, and the calculation of strains was verified using idealised problems. Optimising the hyperparameters of the registration methods allowed tracking of anatomical measurements with a mean error of 6.63% across frames, landmarks, and patients, comparable to an intra-observer error of 7.98%. Both registration methods differentiated between normal and dyssynchronous contraction patterns based on circumferential strain ([Formula: see text], [Formula: see text]). To test whether the typical 10-frame temporal sampling of retrospectively gated CCT datasets affects the measurement of cardiac mechanics, we compared motion-tracking results from 10- and 20-frame datasets and found a maximum error of [Formula: see text]. Our findings show that intensity-based registration techniques with optimal hyperparameters are able to accurately measure regional strains from CCT in a very short amount of time. Furthermore, sufficient sensitivity can be achieved to identify heart failure patients, and left ventricular mechanics can be quantified with 10 reconstructed temporal frames. Our open-source platform will support increased use of CCT for quantifying cardiac mechanics.
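
The strain computation can be illustrated on a synthetic contour: engineering strain is the relative length change of each mesh segment between end-diastole and a later frame. The point data below are synthetic.

```python
# Sketch: per-segment circumferential strain on a closed contour.
import numpy as np

def segment_strain(points_ed, points_t):
    """points_*: (N, 3) ordered points along a closed contour (e.g. a
    circumferential ring of the LV endocardium)."""
    def seg_lengths(p):
        return np.linalg.norm(np.roll(p, -1, axis=0) - p, axis=1)
    L0, L = seg_lengths(points_ed), seg_lengths(points_t)
    return (L - L0) / L0    # per-segment engineering strain

theta = np.linspace(0, 2 * np.pi, 48, endpoint=False)
ring_ed = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
ring_sys = 0.8 * ring_ed                           # 20% radial contraction
print(segment_strain(ring_ed, ring_sys).mean())    # ~ -0.2 circumferential strain
```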

19.
Article in English | MEDLINE | ID: mdl-34354322

ABSTRACT

The Human BioMolecular Atlas Program (HuBMAP) seeks to create a molecular atlas of the human body at the cellular level to spur interdisciplinary innovations across spatial and temporal scales. While the preponderance of effort is allocated towards cellular- and molecular-scale mapping, differentiating and contextualizing findings within tissues, organs and systems is essential for the HuBMAP efforts. The kidney is an initial organ target of HuBMAP, and constructing a framework (or atlas) for integrating information across scales is needed for visualization and interpretation. However, there is no abdominal atlas currently available in the public domain. Substantial variation in healthy kidneys exists across sex, body size, and imaging protocol. With the integration of clinical archives for secondary research use, we are able to build atlases based on a diverse population and clinically relevant protocols. In this study, we created a computed tomography (CT) phase-specific atlas for the abdomen, optimized for the kidney. A two-stage registration pipeline was used, registering the abdominal volume of interest extracted by body part regression to a high-resolution CT. Affine and non-rigid registration were applied to all scans hierarchically. To generate and evaluate the atlas, multiphase CT scans of 500 control subjects (age 15-50 years; 250 males, 250 females) were registered to the atlas target through the complete pipeline. The abdominal body and kidney registrations are shown to be stable by the variance map computed from the resulting average template. Both left and right kidneys are well localized in the high-resolution target space, which demonstrates the sharp details of their anatomical characteristics in each phase. We illustrate the applicability of the atlas template for integrating normal kidney variation from 64 cm3 to 302 cm3.

20.
Article in English | MEDLINE | ID: mdl-34531633

ABSTRACT

A major goal of lung cancer screening is to identify individuals with particular phenotypes that are associated with a high risk of cancer. Identifying relevant phenotypes is complicated by variation in body position and body composition. In the brain, standardized coordinate systems (e.g., atlases) have enabled separate consideration of local features from gross/global structure. To date, no analogous standard atlas has been presented to enable spatial mapping and harmonization in chest computed tomography (CT). In this paper, we propose a thoracic atlas built upon a large low-dose CT (LDCT) database from a lung cancer screening program. The study cohort includes 466 male and 387 female subjects with no screening-detected malignancy (age 46-79 years, mean 64.9 years). To provide spatial mapping, we optimize a multi-stage inter-subject non-rigid registration pipeline for the entire thoracic space. Briefly, using 50 scans from 50 randomly selected female subjects as the fine-tuning dataset, we search for the optimal configuration of the non-rigid registration module over a range of adjustable parameters, including registration search radius, degree of keypoint dispersion, regularization coefficient and similarity patch size, to minimize the registration failure rate, approximated by the number of samples with a low Dice similarity coefficient (DSC) for lung and body segmentation. We evaluate the optimized pipeline on a separate cohort (100 scans of 50 female and 50 male subjects) relative to two baselines with alternative non-rigid registration modules: the same software with default parameters and an alternative software package. We achieve a significant improvement in registration success rate based on manual QA. For the entire study cohort, the optimized pipeline achieves a registration success rate of 91.7%. The application validity of the developed atlas is evaluated in terms of discriminative capability for different anatomic phenotypes, including body mass index (BMI), chronic obstructive pulmonary disease (COPD), and coronary artery calcification (CAC).
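
The hyperparameter search can be sketched as a grid search that keeps the configuration minimising the failure rate, with failures approximated by cases whose Dice falls below a threshold. All parameter names and the `register` callback are placeholders.

```python
# Sketch: grid search over registration hyperparameters by failure rate.
import itertools
import random

def failure_rate(dscs, threshold=0.9):
    return sum(d < threshold for d in dscs) / len(dscs)

def tune(register, tuning_pairs, grid):
    best_cfg, best_rate = None, float('inf')
    for vals in itertools.product(*grid.values()):
        cfg = dict(zip(grid, vals))
        # One Dice score per tuning pair under this configuration.
        dscs = [register(fixed, moving, **cfg) for fixed, moving in tuning_pairs]
        rate = failure_rate(dscs)
        if rate < best_rate:
            best_cfg, best_rate = cfg, rate
    return best_cfg, best_rate

grid = {'search_radius': [4, 6, 8],        # adjustable parameters named in the text
        'keypoint_dispersion': [1, 2],
        'regularisation': [0.5, 1.0, 2.0]}
dummy = lambda f, m, **cfg: random.uniform(0.8, 1.0)  # stand-in for the real module
print(tune(dummy, [(None, None)] * 10, grid))
```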
