Results 1 - 7 of 7

1.
IEEE Trans Med Imaging ; PP, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829753

ABSTRACT

Registering pre-operative modalities, such as magnetic resonance imaging or computed tomography, to ultrasound images is crucial for guiding clinicians during surgeries and biopsies. Recently, deep-learning approaches have been proposed to increase the speed and accuracy of this registration problem. However, all of these approaches need expensive supervision from the ultrasound domain. In this work, we propose a multitask generative framework that needs weak supervision only from the pre-operative imaging domain during training. To perform a deformable registration, the proposed framework translates a magnetic resonance image to the ultrasound domain while preserving the structural content. To demonstrate the efficacy of the proposed method, we tackle the registration problem of pre-operative 3D MR to transrectal ultrasonography images as necessary for targeted prostate biopsies. We use an in-house dataset of 600 patients, divided into 540 for training, 30 for validation, and the remaining 30 for testing. An expert manually segmented the prostate in both modalities for the validation and test sets to assess the performance of our framework. The proposed framework achieves a 3.58 mm target registration error on the expert-selected landmarks, a Dice score of 89.2%, and a 1.81 mm 95th-percentile Hausdorff distance on the prostate masks in the test set. Our experiments demonstrate that the proposed generative model successfully translates magnetic resonance images into the ultrasound domain. The translated image contains the structural content and fine details due to an ultrasound-specific two-path design of the generative model. The proposed framework enables training learning-based registration methods when only weak supervision from the pre-operative domain is available.
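The mask metrics reported above can be made concrete with a small sketch. This is not the authors' code; representing the mask surfaces as point sets and the exact percentile pooling are illustrative assumptions:

```python
import numpy as np

def hd95(points_a, points_b, percentile=95):
    """Percentile Hausdorff distance between two point sets.

    Nearest-neighbour distances from A to B and from B to A are
    pooled, and the given percentile of the pooled distances is
    returned; unlike the plain Hausdorff distance (the maximum),
    this is robust to a few outlier surface points.
    """
    # Pairwise Euclidean distances, shape (len(A), len(B)).
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # distance from each point in A to B
    b_to_a = d.min(axis=0)  # distance from each point in B to A
    return np.percentile(np.concatenate([a_to_b, b_to_a]), percentile)

# Two copies of the same contour, one shifted by 0.5 (e.g. mm).
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 0.5])
print(hd95(a, b))  # 0.5
```

In practice the point sets would be sampled from the prostate mask surfaces in both modalities before computing the distance.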

2.
Article in English | MEDLINE | ID: mdl-38748052

ABSTRACT

PURPOSE: Ultrasound (US) imaging, while advantageous for its radiation-free nature, is challenging to interpret because organs are only partially visible and complete 3D information is lacking. While performing US-based diagnosis or investigation, medical professionals therefore create a mental map of the 3D anatomy. In this work, we aim to replicate this process and enhance the visual representation of anatomical structures. METHODS: We introduce a point cloud-based probabilistic deep learning (DL) method that completes occluded anatomical structures through 3D shape completion, and we choose US-based spine examinations as our application. To enable training, we generate synthetic 3D representations of partially occluded spinal views by mimicking US physics and accounting for inherent artifacts. RESULTS: The proposed model performs consistently on synthetic and patient data, with mean and median differences of 2.02 and 0.03 in Chamfer Distance (CD), respectively. Our ablation study demonstrates the importance of US physics-based data generation, reflected in large mean and median differences of 11.8 CD and 9.55 CD, respectively. Additionally, we demonstrate that anatomical landmarks, such as the spinous process (with a reconstruction CD of 4.73) and the facet joints (mean distance to ground truth (GT) of 4.96 mm), are preserved in the 3D completion. CONCLUSION: Our work establishes the feasibility of 3D shape completion for lumbar vertebrae, ensuring the preservation of level-wise characteristics and successful generalization from synthetic to real data. The incorporation of US physics contributes to more accurate completions on patient data. Notably, our method preserves essential anatomical landmarks and reconstructs crucial injection sites at their correct locations.
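The Chamfer Distance used as the completion metric can be sketched as follows. Conventions vary (squared vs. unsquared distances, sum vs. mean, one-sided vs. symmetric), so this particular form is an assumption for illustration, not necessarily the paper's:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point clouds p and q.

    Averages the squared nearest-neighbour distance in both
    directions; zero iff each point has an exact match in the
    other cloud.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return (d.min(axis=1) ** 2).mean() + (d.min(axis=0) ** 2).mean()

# A perfect completion matches the ground-truth cloud exactly.
completed = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
ground_truth = completed.copy()
print(chamfer_distance(completed, ground_truth))  # 0.0
```

A completion shifted away from the ground truth yields a positive distance proportional to the squared offset.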

3.
Int J Comput Assist Radiol Surg ; 19(5): 861-869, 2024 May.
Article in English | MEDLINE | ID: mdl-38270811

ABSTRACT

PURPOSE: The detection and treatment of abdominal aortic aneurysm (AAA), a vascular disorder with life-threatening consequences, are challenging because the disease causes no symptoms until it reaches a critical size. Abdominal ultrasound (US) is used for diagnosis; however, its inherently low image quality and reliance on operator expertise make computed tomography (CT) the preferred choice for monitoring and treatment. Moreover, CT datasets have been used effectively to train deep neural networks for aorta segmentation. In this work, we demonstrate how CT labels can be leveraged to improve segmentation in ultrasound and hence save manual annotation effort. METHODS: We introduce CACTUSS: a common anatomical CT-US space that inherits properties from both the CT and ultrasound modalities to produce images in an intermediate representation (IR) space. CACTUSS acts as a virtual third modality between CT and US to address the scarcity of annotated ultrasound training data. The generation of IR images is facilitated by re-parametrizing a physics-based US simulator. In CACTUSS, we use IR images as training data for ultrasound segmentation, eliminating the need for manual labeling. In addition, an image-to-image translation network is employed to apply the model to real B-modes. RESULTS: The model's performance on the task of aorta segmentation is evaluated quantitatively against a fully supervised method in terms of Dice score and diagnostic metrics. CACTUSS outperforms the fully supervised network in segmentation and meets clinical requirements for AAA screening and diagnosis. CONCLUSION: CACTUSS provides a promising approach to improving US segmentation accuracy by leveraging CT labels, reducing the need for manual annotations. We generate IRs that inherit properties from both modalities while preserving the anatomical structure, optimized for the task of aorta segmentation. Future work involves integrating CACTUSS into robotic ultrasound platforms for automated screening and conducting clinical feasibility studies.
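The Dice score used in the quantitative comparison can be sketched generically (this is not the CACTUSS evaluation code, and the toy masks are hypothetical):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary segmentation masks:
    2*|P & G| / (|P| + |G|), where 1.0 means perfect overlap."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

# Toy example: predicted aorta mask vs. a slightly larger annotation.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:7] = True
print(round(dice_score(pred, gt), 3))  # 0.889
```

The empty-mask guard returns 1.0 when both masks are empty, a common convention so that slices without the aorta do not penalize the average.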


Subject(s)
Aortic Aneurysm, Abdominal ; Tomography, X-Ray Computed ; Ultrasonography ; Humans ; Aortic Aneurysm, Abdominal/diagnostic imaging ; Tomography, X-Ray Computed/methods ; Ultrasonography/methods ; Aorta, Abdominal/diagnostic imaging ; Multimodal Imaging/methods
4.
Med Phys ; 51(3): 2044-2056, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37708456

ABSTRACT

BACKGROUND: Ultrasound (US) has proven to be an effective guidance technique for lumbar spine injections, enabling precise needle placement without exposing the surgeon or the patient to ionizing radiation. However, noise and acoustic shadowing artifacts make US data interpretation challenging. To mitigate these problems, many authors have suggested computed tomography (CT)-to-US registration to align the spine in pre-operative CT to intra-operative US data, thus providing localization of spinal landmarks. PURPOSE: In this paper, we propose a deep learning (DL) pipeline for CT-to-US registration and address the need for annotated medical data for network training. First, we design a data generation method that produces paired CT-US data in which the spine is deformed in a physically consistent manner. Second, we train a point cloud (PC) registration network using anatomy-aware losses to enforce anatomically consistent predictions. METHODS: Our proposed pipeline relies on training the network on realistically generated data. In our data generation method, we model the properties of the joints and disks between vertebrae based on biomechanical measurements reported in previous studies. We simulate the supine and prone position deformations by applying forces to the spine models, which we choose from 35 patients in the VerSe dataset. Each spine is deformed 10 times to create noise-free data with ground-truth segmentations at hand. In our experiments, we use a leave-one-out cross-validation strategy to measure the performance and the stability of the proposed method. For each experiment, we choose generated PCs from three spines as the test set; of the remaining data, three spines act as the validation set, and the rest is used for training. To train our network, we introduce anatomy-aware losses and constraints on the movement to match the physics of the spine, namely a rigidity loss and a biomechanical loss. 
The rigidity loss is based on the fact that each vertebra can only transform rigidly, while the disks and the surrounding tissue are deformable. The biomechanical loss stops the network from inferring extreme movements by penalizing the force needed to reach a given pose. RESULTS: To validate the effectiveness of our fully automated data generation pipeline, we qualitatively assess the fidelity of the generated data, verifying the realism of the spinal deformation and the plausibility of the simulated ultrasound images. Next, we demonstrate that introducing the anatomy-aware losses brings us closer to the state of the art (SOTA), yielding a reduction of 0.25 mm in target registration error (TRE) compared to using only a mean squared error (MSE) loss on the generated dataset. Furthermore, with the proposed losses, the rigidity loss decreases at inference, which shows that the inferred deformation respects the rigidity of the vertebrae and only introduces deformations in the soft-tissue area to compensate for the difference from the target PC. Our results are also close to the SOTA on the simulated US dataset, with TREs of 3.89 mm and 3.63 mm for the proposed method and the SOTA, respectively. In addition, our method is more robust against initialization errors than the SOTA and achieves significantly better results (TRE of 4.88 mm compared to 5.66 mm) in this experiment. CONCLUSIONS: We present a pipeline for spine CT-to-US registration and explore the potential benefits of anatomy-aware losses for enhancing registration results. Additionally, we propose a fully automatic method to synthesize paired CT-US data with physically consistent deformations, which offers the opportunity to generate extensive datasets for network training. 
The generated dataset and the source code for the data generation and registration pipeline can be accessed via https://github.com/mfazampour/medphys_ct_us_registration.
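The rigidity loss can be approximated per vertebra by fitting the best rigid (Kabsch) transform between the vertebra's source and deformed point clouds and penalizing the remaining residual. This is a plausible sketch of the idea under that assumption, not the authors' implementation:

```python
import numpy as np

def rigidity_residual(src, deformed):
    """Mean squared residual after the best rigid fit (Kabsch).

    Fits the optimal rotation + translation mapping the vertebra's
    source points onto its deformed points; the residual is ~0 iff
    the predicted motion of this vertebra was rigid, so it can be
    penalized as a rigidity term.
    """
    mu_s, mu_d = src.mean(axis=0), deformed.mean(axis=0)
    a, b = src - mu_s, deformed - mu_d
    u, _, vt = np.linalg.svd(a.T @ b)
    sign = np.sign(np.linalg.det(vt.T @ u.T))
    rot = vt.T @ np.diag([1.0, 1.0, sign]) @ u.T  # proper rotation only
    fitted = a @ rot.T + mu_d
    return np.mean(np.sum((fitted - deformed) ** 2, axis=1))

rng = np.random.default_rng(0)
vertebra = rng.normal(size=(50, 3))
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta), np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
print(rigidity_residual(vertebra, vertebra @ rot.T + 1.0) < 1e-9)    # rigid motion: True
print(rigidity_residual(vertebra, vertebra * [1.0, 2.0, 1.0]) > 0.1)  # stretching: True
```

Summing this residual over all vertebrae, while leaving the inter-vertebral soft tissue unconstrained, captures the paper's stated assumption that only disks and surrounding tissue may deform.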


Subject(s)
Spine ; Tomography, X-Ray Computed ; Humans ; Tomography, X-Ray Computed/methods ; Spine/diagnostic imaging ; Algorithms ; Lumbar Vertebrae ; Software ; Radiation, Ionizing ; Image Processing, Computer-Assisted/methods
6.
Sci Rep ; 12(1): 14153, 2022 Aug 19.
Article in English | MEDLINE | ID: mdl-35986015

ABSTRACT

Segmentation of abdominal Computed Tomography (CT) scans is essential for analyzing, diagnosing, and treating visceral organ diseases (e.g., hepatocellular carcinoma). This paper proposes a novel neural network (Res-PAC-UNet) that employs a fixed-width residual UNet backbone with Pyramid Atrous Convolutions, providing a low-disk-utilization method for precise liver CT segmentation. The proposed network is trained on the Medical Segmentation Decathlon dataset using a modified surface loss function. We evaluate its quantitative and qualitative performance: the Res16-PAC-UNet achieves a Dice coefficient of 0.950 ± 0.019 with fewer than half a million parameters, while the Res32-PAC-UNet obtains a Dice coefficient of 0.958 ± 0.015 with an acceptable parameter count of approximately 1.2 million.
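The "Pyramid Atrous Convolutions" in the name refer to parallel convolutions at increasing dilation rates, which enlarge the receptive field without adding parameters. A 1-D NumPy illustration of the mechanism (the actual network uses learned 2-D kernels, and summing across rates here is a simplification of the module's feature aggregation):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Same'-padded 1-D cross-correlation with a dilated (atrous)
    kernel: taps are spaced `dilation` samples apart, growing the
    receptive field from k to (k - 1) * dilation + 1 with the same
    number of weights."""
    k = len(kernel)
    span = (k - 1) * dilation
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([
        sum(kernel[j] * xp[i + j * dilation] for j in range(k))
        for i in range(len(x))
    ])

def pyramid_atrous(x, kernel, rates=(1, 2, 4)):
    """Apply the same kernel at several dilation rates in parallel
    and aggregate the multi-scale responses (here by summation)."""
    return sum(dilated_conv1d(x, kernel, r) for r in rates)

x = np.arange(8.0)
# A centre-tap kernel passes the signal through at any dilation,
# so the 3-rate pyramid returns 3 * x.
print(np.allclose(pyramid_atrous(x, [0.0, 1.0, 0.0]), 3 * x))  # True
```

Stacking such multi-rate branches is what lets a narrow, fixed-width backbone still see large context at low parameter cost.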


Subject(s)
Image Processing, Computer-Assisted ; Liver Neoplasms ; Humans ; Image Processing, Computer-Assisted/methods ; Liver Neoplasms/diagnostic imaging ; Liver Neoplasms/pathology ; Neural Networks, Computer ; Tomography, X-Ray Computed/methods
7.
Article in English | MEDLINE | ID: mdl-25570137

ABSTRACT

Manifold learning algorithms have been proposed for image processing because they preserve data structure while reducing dimensionality, exposing that structure in the lower-dimensional space. Multi-modal images share the same underlying structure and can be registered as mono-modal images if only the structural information is represented. Manifold learning can therefore transform multi-modal images into mono-modal ones, allowing registration with mono-modal methods. Based on this observation, this paper proposes novel similarity measures for multi-modal images that employ Laplacian eigenmaps as the manifold learning algorithm; they are tested on rigid registration of PET/MR images. The results show the feasibility of using manifold learning to calculate the similarity between multi-modal images.
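A minimal Laplacian-eigenmaps sketch illustrates the core idea: because the embedding depends only on pairwise distances, two "modalities" with inverted intensities but identical structure yield (up to sign) the same low-dimensional representation, on which a mono-modal similarity can then be computed. The heat-kernel affinity and the toy 1-D intensities are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def laplacian_eigenmap(x, dim=1, sigma=1.0):
    """Embed points x of shape (n, d) with Laplacian eigenmaps.

    Builds a heat-kernel affinity W, forms the graph Laplacian
    L = D - W, and uses the eigenvectors of the smallest non-zero
    eigenvalues as coordinates, preserving local neighbourhoods.
    """
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    lap = np.diag(w.sum(axis=1)) - w
    _, vecs = np.linalg.eigh(lap)
    return vecs[:, 1:1 + dim]  # skip the constant 0-eigenvalue vector

# Same structure, inverted contrast (as in two imaging modalities):
# the pairwise distances are identical, so the embeddings agree up
# to sign and can be compared with mono-modal measures.
a = np.array([[0.0], [0.1], [1.0], [1.1]])
b = 1.0 - a
print(np.allclose(np.abs(laplacian_eigenmap(a)),
                  np.abs(laplacian_eigenmap(b)), atol=1e-6))  # True
```

In the paper's setting the points would be image patches rather than scalar intensities, but the invariance to intensity mappings that preserve structure is the property being exploited.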


Subject(s)
Algorithms ; Positron-Emission Tomography ; Brain/anatomy & histology ; Brain/diagnostic imaging ; Humans ; Image Processing, Computer-Assisted ; Magnetic Resonance Imaging ; Radiography