Results 1-20 of 29
1.
Sensors (Basel) ; 24(4)2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38400229

ABSTRACT

The multimodal and multidomain registration of medical images has gained increasing recognition in clinical practice as a powerful tool for fusing and leveraging useful information from different imaging techniques and from different medical fields, such as cardiology and orthopedics. Image registration can be a challenging process that depends strongly on the correct tuning of registration parameters. In this paper, the robustness and accuracy of a landmark-based approach are presented for five cardiac multimodal image datasets. The study is based on the 3D Slicer software and focuses on the registration of a computed tomography (CT) scan and a 3D ultrasound time-series of post-operative mitral valve repair. The accuracy of the method, as a function of the number of landmarks used, was assessed by analysing the root mean square error (RMSE) and fiducial registration error (FRE) metrics. The validation yielded an optimal number of 10 landmarks. The mean RMSE and FRE values were 5.26 ± 3.17 mm and 2.98 ± 1.68 mm, respectively, showing performance comparable with values reported in the literature. The registration process was also tested on a CT orthopaedic dataset to assess the possibility of reconstructing the damaged jaw portion in a pre-operative planning setting. Overall, the proposed work shows how 3D Slicer and registration by landmarks can provide a useful environment for multimodal/unimodal registration.
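The fiducial-based accuracy assessment described above reduces to a least-squares rigid fit of paired landmarks followed by a residual-error computation. A minimal sketch, assuming plain NumPy arrays of corresponding 3D landmark coordinates (function names are illustrative, not from the paper):

```python
import numpy as np

def rigid_landmark_register(moving, fixed):
    """Least-squares rigid (rotation + translation) fit of paired 3D
    landmarks via the Kabsch/Procrustes method."""
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation, det = +1
    t = fc - R @ mc
    return R, t

def fiducial_registration_error(moving, fixed, R, t):
    """Root-mean-square residual distance of the landmarks after alignment."""
    residuals = fixed - (moving @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```

With noiseless, exactly corresponding landmarks the FRE is zero; in practice, landmark localization error in each modality drives the millimetre-scale values reported above.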


Subjects
Orthopedics; Tomography, X-Ray Computed/methods; Lung; Software; Heart; Imaging, Three-Dimensional/methods; Algorithms
2.
Neuroimage ; 276: 120198, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37245561

ABSTRACT

Magnetic Resonance Imaging (MRI) resolution continues to improve, making it important to understand the cellular basis for different MRI contrast mechanisms. Manganese-enhanced MRI (MEMRI) produces layer-specific contrast throughout the brain enabling in vivo visualization of cellular cytoarchitecture, particularly in the cerebellum. Due to the unique geometry of the cerebellum, especially near the midline, 2D MEMRI images can be acquired from a relatively thick slice by averaging through areas of uniform morphology and cytoarchitecture to produce very high-resolution visualization of sagittal planes. In such images, MEMRI hyperintensity is uniform in thickness throughout the anterior-posterior axis of sagittal sections and is centrally located in the cerebellar cortex. These signal features suggested that the Purkinje cell layer, which houses the cell bodies of the Purkinje cells and the Bergmann glia, is the source of hyperintensity. Despite this circumstantial evidence, the cellular source of MRI contrast has been difficult to define. In this study, we quantified the effects of selective ablation of Purkinje cells or Bergmann glia on cerebellar MEMRI signal to determine whether signal could be assigned to one cell type. We found that the Purkinje cells, not the Bergmann glia, are the primary source of the enhancement in the Purkinje cell layer. This cell-ablation strategy should be useful for determining the cell specificity of other MRI contrast mechanisms.


Subjects
Cerebellum; Manganese; Humans; Manganese/metabolism; Cerebellum/pathology; Purkinje Cells/metabolism; Purkinje Cells/pathology; Neuroglia/metabolism; Magnetic Resonance Imaging/methods
3.
Anal Bioanal Chem ; 411(19): 4849-4859, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30790022

ABSTRACT

This paper describes a workflow towards the reconstruction of the three-dimensional elemental distribution profile within human cervical carcinoma cells (HeLa), at a spatial resolution down to 1 µm, employing state-of-the-art laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) instrumentation. The suspended cells underwent a series of fixation/embedding protocols and were stained with uranyl acetate and an Ir-based DNA intercalator. A priori, laboratory-based absorption micro-computed tomography (µ-CT) was applied to acquire a reference frame of the morphology of the cells and their spatial distribution before sectioning. After CT analysis, a trimmed 300 × 300 × 300 µm³ block was sectioned into a sequential series of 132 sections with a thickness of 2 µm, which were subjected to LA-ICP-MS imaging. A pixel acquisition rate of 250 pixels s⁻¹ was achieved through a bidirectional scanning strategy. After acquisition, the two-dimensional elemental images were reconstructed using the timestamps in the laser log file. The synchronization of the data required an improved optimization algorithm, which forces the pixels of scans in different ablation directions to be spatially coherent in the direction orthogonal to the scan direction. The volume was reconstructed using multiple registration approaches. Registration using the section outline itself as a fiducial marker resulted in a volume which was in good agreement with the morphology visualized in the µ-CT volume. The 3D µ-CT volume could be registered to the LA-ICP-MS volume, consisting of 2.9 × 10⁷ voxels, and the nucleus dimensions in 3D space could be derived.
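The bidirectional scanning strategy means alternate ablation lines are acquired in opposite directions, so the raw pixel stream must be folded into rows and every second row flipped before a 2D elemental image emerges. A toy sketch of just that reassembly step (the timestamp-based synchronization and the inter-scan alignment optimization described above are omitted):

```python
import numpy as np

def reconstruct_bidirectional(stream, n_rows, n_cols):
    """Fold a 1-D pixel-intensity stream from a bidirectional (serpentine)
    laser-ablation raster into a 2-D image: every second scan line was
    acquired right-to-left, so it is flipped back here."""
    img = np.asarray(stream, dtype=float).reshape(n_rows, n_cols)
    img[1::2] = img[1::2, ::-1]   # undo the reversed ablation direction
    return img
```

In the real workflow the row boundaries come from the laser log timestamps rather than a fixed row length.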


Subjects
Mass Spectrometry/methods; Single-Cell Analysis/methods; HeLa Cells; Humans; X-Ray Microtomography
4.
Skin Res Technol ; 21(3): 319-26, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25382317

ABSTRACT

BACKGROUND/PURPOSE: Computational skin analysis is revolutionizing modern dermatology. Patterns extracted from image sequences enable algorithmic evaluation. Stacking multiple images to analyze pattern variation implicitly assumes that the images are aligned per-pixel. However, breathing and involuntary motion of the patient cause significant misalignment. Alignment algorithms designed for multimodal and time-lapse skin images can solve this problem. Sequences from multimodal imaging capture unique appearance features in each modality. Time-lapse image sequences capture skin appearance change over time. METHODS: Multimodal skin images were acquired under five different modalities: three in reflectance (visible, parallel-polarized, and cross-polarized) and two in fluorescence mode (UVA and blue light excitation). For time-lapse imagery, 39 images of acne lesions over a 3-month period were collected. The method detects micro-level features such as pores, wrinkles, and other skin texture markings in the acquired images. Images are automatically registered to subpixel accuracy. RESULTS: The proposed registration approach precisely aligns multimodal and time-lapse images. Misregistration artefacts in subsurface recovery from multimodal images can be eliminated using this approach. Registered time-lapse imaging captures the evolution of the appearance of skin regions over time. CONCLUSION: Misalignment in skin imaging has a significant impact on any quantitative or qualitative image evaluation. Micro-level features can be used to obtain highly accurate registration. Multimodal images can be organized with maximal overlap for successful registration. The resulting point-to-point alignment improves the quality of skin image analysis.


Subjects
Acne Vulgaris/pathology; Dermoscopy/methods; Lighting/methods; Multimodal Imaging/methods; Subtraction Technique; Time-Lapse Imaging/methods; Humans; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
5.
J Dent ; 150: 105387, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39362299

ABSTRACT

OBJECTIVES: To (1) construct a virtual patient (VP) using a facial scan, an intraoral scan, and a low-dose computed tomography scan based on an artificial intelligence (AI) approach, (2) quantitatively compare it with AI-refined and semi-automatic registration, and (3) qualitatively evaluate user satisfaction when using the virtual patient as a communication tool in clinical practice. MATERIALS AND METHODS: A dataset of 20 facial scans, intraoral scans, and low-dose computed tomography scans was imported into the Virtual Patient Creator platform to create an automated virtual patient. The accuracy of the virtual patients created using the different approaches was further analyzed in the Mimics software. The accuracy (% of corrections required), consistency, and time efficiency of the AI-driven virtual patient registration were then compared with AI-refined and semi-automatic registration (clinical reference). User satisfaction was assessed through a survey of 35 dentists and 25 laypersons who rated the virtual patient's realism and usefulness for treatment planning and communication on a 5-point scale. RESULTS: The accuracy of the AI-driven, AI-refined, and semi-automatic registration was 85%, 85%, and 100% for the upper and middle thirds of the face, and 30%, 30%, and 35% for the lower third. Registration consistency was 1, 1 and 0.99, and the average time was 26.5, 30.8, and 385 s, respectively (an 18-fold time reduction with AI). The inferior facial third exhibited the highest registration mismatch between the facial scan and computed tomography. User satisfaction with the virtual patient was consistently high among both dentists and laypersons, with most responses indicating very high satisfaction regarding realism and usefulness as a communication tool.
CONCLUSION: The AI-driven registration can provide clinically accurate, fast, and consistent virtual patient creation using facial scans, intraoral scans, and low-dose computed tomography scans, enabling interpersonal communication. CLINICAL SIGNIFICANCE: Using AI for automated segmentation and registration of maxillofacial structures leads to clinically efficient and accurate VP creation, opening the doors for its widespread use in diagnosis, treatment planning, and interprofessional and professional-patient communication.

6.
Comput Med Imaging Graph ; 108: 102260, 2023 09.
Article in English | MEDLINE | ID: mdl-37343325

ABSTRACT

PURPOSE: Multimodal registration is a key task in medical image analysis. Due to the large differences between multimodal images in intensity scale and texture pattern, it is a great challenge to design distinctive similarity metrics to guide deep learning-based multimodal image registration. Moreover, owing to their limited receptive fields, existing deep learning-based methods are mainly suitable for small deformations and cannot handle large deformations. To address these issues, we present an unsupervised multimodal image registration method based on a multiscale integrated spatial-weight module and dual similarity guidance. METHODS: In this method, a U-shaped network with our multiscale integrated spatial-weight module is embedded into a multi-resolution image registration architecture to achieve end-to-end large-deformation registration. The spatial-weight module effectively highlights regions with large deformation and aggregates discriminative features, while the multi-resolution architecture further helps to solve the network's optimization problem in a coarse-to-fine manner. Furthermore, we introduce a loss function based on dual similarity, representing both global gray-scale similarity and local feature similarity, to optimize the unsupervised multimodal registration network. RESULTS: We verified the effectiveness of the proposed method on liver CT-MR images. Experimental results indicate that the proposed method achieves the best DSC and TRE values, 92.70 ± 1.75% and 6.52 ± 2.94 mm, compared with other state-of-the-art registration algorithms. CONCLUSION: The proposed method can accurately estimate large deformation fields by aggregating multiscale features, achieving higher registration accuracy and fast registration speed. Comparative experiments also demonstrate the effectiveness and generalization ability of the algorithm.
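The DSC and TRE figures quoted above are standard registration metrics computable directly from segmentation masks and corresponding landmark pairs. A minimal sketch of the generic definitions (not the paper's own evaluation code):

```python
import numpy as np

def dice_coefficient(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two boolean segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def target_registration_error(warped_pts, target_pts):
    """Mean Euclidean distance between corresponding anatomical landmarks
    after registration (typically reported in mm)."""
    d = np.linalg.norm(np.asarray(warped_pts, float)
                       - np.asarray(target_pts, float), axis=1)
    return d.mean()
```

A DSC of 1.0 means the two masks coincide exactly; a lower TRE means warped landmarks land closer to their anatomical targets.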


Subjects
Algorithms; Tomography, X-Ray Computed; Liver/diagnostic imaging; Image Processing, Computer-Assisted/methods
7.
bioRxiv ; 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37745386

ABSTRACT

3D standard reference brains serve as key resources to understand the spatial organization of the brain and promote interoperability across different studies. However, unlike the adult mouse brain, the lack of standard 3D reference atlases for developing mouse brains has hindered advancement of our understanding of brain development. Here, we present a multimodal 3D developmental common coordinate framework (DevCCF) spanning mouse embryonic day (E) 11.5, E13.5, E15.5, E18.5, and postnatal day (P) 4, P14, and P56 with anatomical segmentations defined by a developmental ontology. At each age, the DevCCF features undistorted morphologically averaged atlas templates created from Magnetic Resonance Imaging and co-registered high-resolution templates from light sheet fluorescence microscopy. Expert-curated 3D anatomical segmentations at each age adhere to an updated prosomeric model and can be explored via an interactive 3D web-visualizer. As a use case, we employed the DevCCF to unveil the emergence of GABAergic neurons in embryonic brains. Moreover, we integrated the Allen CCFv3 into the P56 template with stereotaxic coordinates and mapped spatial transcriptome cell-type data with the developmental ontology. In summary, the DevCCF is an openly accessible resource that can be used for large-scale data integration to gain a comprehensive understanding of brain development.

8.
Comput Biol Med ; 143: 105234, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35093845

ABSTRACT

Gastric cancer is the second leading cause of cancer-related deaths worldwide. Early diagnosis significantly increases the chances of survival; therefore, improved assisted exploration and screening techniques are necessary. Previously, we made use of an augmented multi-spectral endoscope by inserting an optical probe into the instrumentation channel. However, the limited field of view and the lack of markings left by optical biopsies on the tissue complicate the navigation and revisiting of the suspect areas probed in vivo. In this contribution, two innovative tools are introduced to significantly increase the traceability and monitoring of patients in clinical practice: (i) video mosaicing to build a more comprehensive and panoramic view of large gastric areas; (ii) optical biopsy targeting and registration with the endoscopic images. The proposed optical flow-based mosaicing technique selects images that minimize texture discontinuities and is robust to the lack of texture and to illumination variations. The optical biopsy targeting is based on automatic tracking of a marker-free probe in the endoscopic view, using deep learning to dynamically estimate its pose during exploration. The accuracy of pose estimation is sufficient to ensure a precise overlap of the standard white-light color image and the hyperspectral probe image, assuming that the small target area of the organ is almost flat. This allows the mapping of all spatio-temporally tracked biopsy sites onto the panoramic mosaic. Experimental validations were carried out on videos acquired from patients in hospital. The proposed technique is purely software-based and therefore easily integrable into clinical practice. It is also generic and compatible with any imaging modality that connects to a fiberscope.

9.
J Med Imaging (Bellingham) ; 8(2): 025001, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33681409

ABSTRACT

Purpose: We present a markerless vision-based method for on-the-fly three-dimensional (3D) pose estimation of a fiberscope instrument to target pathologic areas in the endoscopic view during exploration. Approach: A 2.5-mm-diameter fiberscope is inserted through the endoscope's operating channel and connected to an additional camera to perform complementary observation of a targeted area, acting as a multimodal magnifier. The 3D pose of the fiberscope is estimated frame-by-frame by maximizing the similarity between its silhouette (automatically detected in the endoscopic view using a deep learning neural network) and a cylindrical shape bound to a kinematic model reduced to three degrees of freedom. An alignment of the cylinder axis, based on Plücker coordinates of the straight edges detected in the image, makes convergence faster and more reliable. Results: The performance was validated on simulations with a virtual trajectory mimicking endoscopic exploration and on real images of a chessboard pattern acquired with different endoscopic configurations. The experiments demonstrated good accuracy and robustness of the proposed algorithm, with errors of 0.33 ± 0.68 mm in position and 0.32 ± 0.11° in axis orientation for the 3D pose estimation, which reveals its superiority over previous approaches. This allows multimodal image registration with a sufficient accuracy of <3 pixels. Conclusion: Our pose estimation pipeline was executed on simulations and patterns; the results demonstrate the robustness of our method and the potential of fiber-optical instrument image-based tracking for pose estimation and multimodal registration. It can be fully implemented in software and therefore easily integrated into a routine clinical environment.

10.
Comput Methods Programs Biomed ; 211: 106374, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34601186

ABSTRACT

BACKGROUND AND OBJECTIVE: Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect of automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to tackle the intertwined objectives of fast inference time and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long-ranging local displacement probability maps from fast and robust global transformation prediction. METHODS: In our approach, we first train a convolutional neural network (CNN) to extract modality-agnostic features, with sub-second computation times for both 3D volumes during inference. Using sparsity-based network weight pruning, the model complexity and computation times can be substantially reduced. Based on these features, a large discretized search range of 3D motion vectors is explored to compute a probabilistic displacement map for each control point. These 3D probability maps are employed in our newly proposed, computationally efficient instance optimisation, which robustly estimates the most likely globally linear transformation that best reflects the local displacement beliefs, subject to outlier rejection. RESULTS: Our experimental validation demonstrates state-of-the-art accuracy on the challenging CuRIOUS dataset, with an average target registration error of 2.50 mm, a model size of only 1.2 MByte, and run times of approximately 3 seconds for a full 3D multimodal registration. CONCLUSION: We show that a significant improvement in accuracy and robustness can be gained with instance optimisation, and that our fast self-supervised deep learning model can achieve state-of-the-art accuracy on this challenging registration task in only 3 seconds.
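The per-control-point step described above, scoring a discretized range of candidate displacements and converting the costs into a probability map, can be illustrated in miniature. The sketch below assumes a precomputed cost per candidate (e.g., a feature dissimilarity) and is illustrative only; the paper's instance optimisation over a globally linear transform is not reproduced here:

```python
import numpy as np

def probabilistic_displacement(costs, candidates, temperature=1.0):
    """Turn a discretized cost map over candidate displacement vectors into
    a probability map (softmax of negative cost) and return both the map
    and the single most likely displacement for this control point."""
    logits = -np.asarray(costs, float) / temperature
    logits -= logits.max()            # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    best = np.asarray(candidates)[probs.argmax()]
    return probs, best
```

In the full method, such probability maps from many control points jointly constrain the globally linear transformation, with outlier rejection down-weighting inconsistent beliefs.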


Subjects
Magnetic Resonance Imaging; Neural Networks, Computer; Motion; Ultrasonography; Ultrasonography, Interventional
11.
Phys Med Biol ; 66(17)2021 08 31.
Article in English | MEDLINE | ID: mdl-34298532

ABSTRACT

Purpose. To develop a method that enables computed tomography (CT) to magnetic resonance (MR) image registration of the complex deformations typically encountered in rotating joints such as the knee. Methods. We propose a workflow, denoted quaternion interpolated registration (QIR), consisting of three steps, which makes use of prior knowledge of tissue properties to initialise deformable registration. In the first step, the rigid skeletal components were individually registered. Next, the deformation of soft tissue was estimated using a dual quaternion-based interpolation method. In the final step, the registration was fine-tuned with a rigidity-constrained deformable registration step. The method was applied to paired, unregistered CT and MR images of the knee of 92 patients. It was compared to registration using B-splines (BS) and B-splines with a rigidity penalty (BSRP). Registration accuracy was evaluated using mutual information, and by calculating the Dice similarity coefficient (DSC), mean absolute surface distance (MASD) and 95th percentile Hausdorff distance (HD95) on bone, and the DSC on water- and fat-dominated tissue. To evaluate the rigidity of bone in the registration, the Jacobian determinant (JD) was calculated. Results. QIR achieved improved results of 0.93, 0.76 mm and 1.88 mm on the DSC, MASD and HD95 metrics on bone, compared to 0.87, 1.40 mm and 4.99 mm for the BS method and 0.87, 1.40 mm and 3.56 mm for the BSRP method. The average DSC of water and fat was 0.77 and 0.86 for QIR, 0.75 and 0.84 for BS, and 0.74 and 0.84 for BSRP.
Comparison of the median and interquartile range (IQR) of the JD indicated that QIR (median 1.00, IQR 0.03) preserved rigidity in the rigid skeletal tissues better than the BS (median 0.98, IQR 0.19) and BSRP (median 1.00, IQR 0.05) methods. Conclusion. This study showed that QIR can improve the outcome of complex registration problems involving rigid and non-rigid bodies, such as those encountered in the knee, compared to a conventional registration approach.
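The Jacobian determinant used above to verify bone rigidity measures local volume change of the deformation: values near 1 indicate a locally rigid, volume-preserving mapping. A small 2D finite-difference sketch (the study works in 3D; the 2D case shows the idea):

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Jacobian determinant of a dense 2-D displacement field disp[y, x, c],
    where c=0 is the x-component and c=1 the y-component, for the mapping
    (x, y) -> (x + u, y + v). Computed with finite differences."""
    dy = np.gradient(disp, axis=0)     # d/dy of both components
    dx = np.gradient(disp, axis=1)     # d/dx of both components
    # Jacobian of identity-plus-displacement: I + grad(disp)
    j11 = 1.0 + dx[..., 0]; j12 = dy[..., 0]
    j21 = dx[..., 1];       j22 = 1.0 + dy[..., 1]
    return j11 * j22 - j12 * j21
```

For a rigid region the determinant should stay at 1 with a tight interquartile range, which is exactly what the QIR comparison above reports for bone.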


Subjects
Knee Joint; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Algorithms; Humans; Knee Joint/diagnostic imaging
12.
Med Image Anal ; 71: 102041, 2021 07.
Article in English | MEDLINE | ID: mdl-33823397

ABSTRACT

Multimodal image registration has many applications in diagnostic medical imaging and image-guided interventions, such as Transcatheter Arterial Chemoembolization (TACE) of liver cancer guided by intraprocedural CBCT and pre-operative MR. The ability to register peri-procedurally acquired diagnostic images into the intraprocedural environment can potentially improve the intra-procedural tumor targeting, which will significantly improve therapeutic outcomes. However, the intra-procedural CBCT often suffers from suboptimal image quality due to lack of signal calibration for Hounsfield unit, limited FOV, and motion/metal artifacts. These non-ideal conditions make standard intensity-based multimodal registration methods infeasible to generate correct transformation across modalities. While registration based on anatomic structures, such as segmentation or landmarks, provides an efficient alternative, such anatomic structure information is not always available. One can train a deep learning-based anatomy extractor, but it requires large-scale manual annotations on specific modalities, which are often extremely time-consuming to obtain and require expert radiological readers. To tackle these issues, we leverage annotated datasets already existing in a source modality and propose an anatomy-preserving domain adaptation to segmentation network (APA2Seg-Net) for learning segmentation without target modality ground truth. The segmenters are then integrated into our anatomy-guided multimodal registration based on the robust point matching machine. Our experimental results on in-house TACE patient data demonstrated that our APA2Seg-Net can generate robust CBCT and MR liver segmentation, and the anatomy-guided registration framework with these segmenters can provide high-quality multimodal registrations.


Subjects
Carcinoma, Hepatocellular; Chemoembolization, Therapeutic; Liver Neoplasms; Spiral Cone-Beam Computed Tomography; Algorithms; Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted; Liver Neoplasms/diagnostic imaging
13.
Int J Med Robot ; 17(6): e2316, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34312966

ABSTRACT

OBJECTIVE: We propose a robust and accurate knee joint modelling method with bone and cartilage structures to enable accurate surgical guidance for knee surgery. METHODS: A multimodality registration strategy is proposed to fuse magnetic resonance (MR) and computed tomography (CT) images of the femur and tibia separately to remove spatial inconsistency caused by knee bending in CT/MR scans. Automatic segmentation of the femur, tibia and cartilages is carried out with region of interest clustering and intensity analysis based on the multimodal fusion of images. RESULTS: Experimental results show that the registration error is 1.13 ± 0.30 mm. The Dice similarity coefficient values of the proposed segmentation method of the femur, tibia, femoral and tibial cartilages are 0.969, 0.966, 0.910 and 0.872, respectively. CONCLUSIONS: This study demonstrates the feasibility and effectiveness of multimodality-based registration and segmentation methods for knee joint modelling. The proposed method can provide users with 3D anatomical models of the femur, tibia, and cartilages with few human inputs.


Subjects
Arthroplasty, Replacement, Knee; Cartilage, Articular; Cartilage, Articular/diagnostic imaging; Cartilage, Articular/surgery; Femur/diagnostic imaging; Femur/surgery; Humans; Knee; Knee Joint/diagnostic imaging; Knee Joint/surgery; Magnetic Resonance Imaging; Tibia/diagnostic imaging; Tibia/surgery
14.
Med Image Comput Comput Assist Interv ; 12263: 222-232, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33283210

ABSTRACT

Deformable image registration between Computed Tomography (CT) images and Magnetic Resonance (MR) imaging is essential for many image-guided therapies. In this paper, we propose a novel translation-based unsupervised deformable image registration method. Distinct from other translation-based methods that attempt to convert the multimodal problem (e.g., CT-to-MR) into a unimodal problem (e.g., MR-to-MR) via image-to-image translation, our method leverages the deformation fields estimated from both: (i) the translated MR image and (ii) the original CT image in a dual-stream fashion, and automatically learns how to fuse them to achieve better registration performance. The multimodal registration network can be effectively trained by computationally efficient similarity metrics without any ground-truth deformation. Our method has been evaluated on two clinical datasets and demonstrates promising results compared to state-of-the-art traditional and learning-based methods.

15.
Comput Methods Programs Biomed ; 183: 105062, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31522089

ABSTRACT

BACKGROUND AND OBJECTIVE: In patients treated with hip arthroplasty, the muscular condition and the presence of inflammatory reactions are assessed using magnetic resonance imaging (MRI). As MRI lacks contrast for bony structures, computed tomography (CT) is preferred for clinical evaluation of bone tissue and orthopaedic surgical planning. Combining the complementary information of MRI and CT could improve current clinical practice for diagnosis, monitoring and treatment planning. In particular, the different contrasts of these modalities could help better quantify the presence of fatty infiltration, characterising muscular condition and assessing implant failure. In this work, we combine CT and MRI for joint bone and muscle segmentation, and we propose a novel Intramuscular Fat Fraction estimation method for the quantification of muscle atrophy. METHODS: Our multimodal framework is able to segment healthy and pathological musculoskeletal structures as well as implants, and proceeds in three steps. First, the input images are pre-processed to improve the low quality of clinically acquired images and to reduce the noise associated with metal artefacts. Subsequently, CT and MRI are non-linearly aligned using a novel approach which imposes rigidity constraints on bony structures to ensure realistic deformation. Finally, taking advantage of a multimodal atlas we created for this task, a multi-atlas based segmentation delineates pelvic bones, abductor muscles and implants on both modalities jointly. From the obtained segmentation, a multimodal estimate of the Intramuscular Fat Fraction can be automatically derived. RESULTS: Evaluation of the segmentation in a leave-one-out cross-validation study on 22 hip sides resulted in an average Dice score of 0.90 for skeletal and 0.84 for muscular structures.
Our multimodal Intramuscular Fat Fraction was benchmarked on 27 different cases against a standard radiological score, showing a stronger association than a single-modality approach in a one-way ANOVA F-test analysis. CONCLUSIONS: The proposed framework represents a promising tool to support image analysis in hip arthroplasty, being robust to the presence of implants and the associated image artefacts. By allowing the automated extraction of a muscle-atrophy imaging biomarker, it could quantitatively inform the decision-making process about a patient's management.
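As a rough illustration of the biomarker's definition, an intramuscular fat fraction can be read as the proportion of fat-classified voxels inside the segmented muscle region. This is a simplified, hypothetical reading of the paper's multimodal estimate, which is actually derived from joint CT/MRI segmentations:

```python
import numpy as np

def intramuscular_fat_fraction(muscle_mask, fat_mask):
    """Fraction of voxels inside the segmented muscle region that are
    classified as fat (hypothetical simplification of the paper's
    multimodal estimate)."""
    muscle = np.asarray(muscle_mask, bool)
    fat = np.asarray(fat_mask, bool) & muscle   # only fat within muscle counts
    return fat.sum() / muscle.sum()
```

A higher fraction would indicate more fatty infiltration, the imaging correlate of muscle atrophy that the framework quantifies.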


Subjects
Adipose Tissue/pathology; Arthroplasty, Replacement, Hip/adverse effects; Hip Joint/diagnostic imaging; Muscles/pathology; Muscular Atrophy/diagnostic imaging; Adult; Aged; Algorithms; Female; Hip Prosthesis; Humans; Image Interpretation, Computer-Assisted; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Middle Aged; Multimodal Imaging; Pattern Recognition, Automated; Reproducibility of Results; Retrospective Studies; Tomography, X-Ray Computed
16.
Breast ; 49: 281-290, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31986378

ABSTRACT

Breast cancer image fusion consists of registering and visualizing different sets of synchronized torso and radiological images of a patient in a single 3D model. Breast spatial interpretation and visualization by the treating physician can be augmented with a patient-specific digital breast model that integrates radiological images. However, the absence of a ground truth for a good correlation between surface and radiological information has hampered the development of potential clinical applications. A new image acquisition protocol was designed to acquire breast Magnetic Resonance Imaging (MRI) and 3D surface scan data with surface markers on the patient's breasts and torso. A patient-specific digital breast model integrating the real breast torso and the tumor location was created and validated with an MRI/3D surface scan fusion algorithm in 16 breast cancer patients. This protocol was used to quantify breast shape differences between modalities and to measure the target registration error of several variants of the MRI/3D scan fusion algorithm. The fusion of single breasts without the biomechanical model of pose transformation had acceptable registration errors and accurate tumor locations. The performance of the fusion algorithm was not affected by breast volume. Further research and virtual clinical interfaces could lead to fast integration of this fusion technology into clinical practice.


Subjects
Algorithms; Breast Neoplasms/diagnostic imaging; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Multimodal Imaging/methods; Adult; Breast/diagnostic imaging; Computer Simulation; Female; Humans; Middle Aged; Models, Anatomic
17.
Med Image Anal ; 54: 76-87, 2019 05.
Article in English | MEDLINE | ID: mdl-30836308

ABSTRACT

Breast magnetic resonance imaging (MRI) and X-ray mammography are two image modalities widely used for early detection and diagnosis of breast diseases in women. The combination of these modalities, traditionally done using intensity-based registration algorithms, leads to a more accurate diagnosis and treatment, owing to the capability of co-localizing lesions and susceptible areas between the two image modalities. In this work, we present the first attempt to register breast MRI and X-ray mammographic images using intensity gradients as the similarity measure. Specifically, a patient-specific biomechanical model of the breast, extracted from the MRI image, is used to mimic the mammographic acquisition. The intensity gradients of the glandular tissue are directly projected from the 3D MRI volume to the 2D mammographic space, and two different gradient-based metrics are tested to guide the registration: the normalized cross-correlation of the scalar gradient values and the gradient correlation of the vectorial gradients. We compare these two approaches to an intensity-based algorithm, in which the MRI volume is transformed into a synthetic computed tomography (pseudo-CT) image using the partial volume effect obtained from the glandular tissue segmentation performed by means of an Expectation-Maximization algorithm; this allows us to obtain digitally reconstructed radiographs by direct intensity projection. The best results are obtained using the scalar gradient approach along with a transversely isotropic material model, yielding a target registration error (TRE), in millimeters, of 5.65 ± 2.76 for CC- and 7.83 ± 3.04 for MLO-mammograms, while the TRE is 7.33 ± 3.62 in the 3D MRI. We also evaluate the effect of the glandularity of the breast as well as the landmark position on the TRE, obtaining moderate correlation values (0.65 and 0.77, respectively), and conclude that these aspects need to be considered to increase the accuracy of further approaches.
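The normalized cross-correlation used as one of the gradient-based similarity metrics has a compact definition. A generic sketch over two equally shaped maps (the projection of MRI gradients into mammographic space is not reproduced here):

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross-correlation of two equally shaped maps:
    1.0 for maps identical up to a positive linear intensity rescaling,
    -1.0 for inverted ones."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the metric is invariant to linear intensity changes, it tolerates the global brightness differences between the projected MRI gradients and the mammogram.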


Subjects
Breast/diagnostic imaging , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging , Mammography , Multimodal Imaging , Algorithms , Anatomic Landmarks , Artifacts , Breast Neoplasms/diagnostic imaging , Contrast Media , Female , Humans , Imaging, Three-Dimensional
18.
Phys Imaging Radiat Oncol ; 12: 10-16, 2019 Oct.
Article in English | MEDLINE | ID: mdl-33458289

ABSTRACT

BACKGROUND AND PURPOSE: Ultrasound (US) is a non-invasive, non-radiographic imaging technique with high spatial and temporal resolution that can be used to localize soft-tissue structures and tumors in real time during radiotherapy (RT), both between and within fractions. A comprehensive approach incorporating an in-house 3D-US system within RT is presented. This system is easier to adopt into existing treatment protocols than current US-based systems, with the aim of providing millimeter-level intra-fraction alignment errors and sufficient sensitivity to track intra-fraction bladder movement. MATERIALS AND METHODS: An in-house integrated US manipulator and platform was designed to relate the computed tomography (CT) scanner, 3D-US, and linear accelerator coordinate systems. An agar-based phantom, with measured speed of sound and densities consistent with the tissues surrounding the bladder, was rotated (0-45°) and translated (up to 55 mm) relative to the US and CT coordinate systems to validate this device. After acquiring and integrating CT and US images into the treatment planning system, US-to-US and US-to-CT images were co-registered to re-align the phantom relative to the linear accelerator. RESULTS: Statistical errors from US-to-US registrations for various patient orientations ranged from 0.1 to 1.7 mm for the x, y, and z translational components and from 0.0 to 1.1° for the rotational components. Statistical errors from US-to-CT registrations were 0.3-1.2 mm for the translational components and 0.1-2.5° for the rotational components. CONCLUSIONS: An ultrasound-based platform was designed, constructed, and tested on a CT/US tissue-equivalent phantom, with statistical uncertainties small enough to correct and track inter- and intra-fraction displacements of the bladder during radiation treatments.
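Registration residuals like those reported above are typically read off a rigid transform relating the two coordinate systems. A minimal sketch of splitting a 4×4 homogeneous rigid transform into the translational (mm) and rotational (degree) components; the x-y-z Euler convention is an assumption, since the paper does not state which one was used.

```python
import numpy as np

def rigid_error_components(T):
    """Split a 4x4 homogeneous rigid transform into a translation
    vector (mm) and x-y-z Euler angles (degrees), the two groups of
    components quoted for the US-to-US and US-to-CT residuals."""
    t = T[:3, 3]                                  # x, y, z translation
    R = T[:3, :3]
    ry = np.degrees(np.arcsin(-R[2, 0]))          # rotation about y
    rx = np.degrees(np.arctan2(R[2, 1], R[2, 2])) # rotation about x
    rz = np.degrees(np.arctan2(R[1, 0], R[0, 0])) # rotation about z
    return t, np.array([rx, ry, rz])
```

Applied to the transform returned by a co-registration step, the magnitudes of these six components give the per-axis alignment errors.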

19.
Med Biol Eng Comput ; 56(11): 2151-2161, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29862470

ABSTRACT

An atlas-based multimodal registration method for 2-dimensional images with discrepant structures is proposed in this paper. An atlas is used to complement the missing structure information in multimodal medical images. The scheme comprises three steps: floating-image-to-atlas registration, atlas-to-reference-image registration, and field-based deformation. To evaluate performance, a frame model, a brain model, and clinical images were employed in registration experiments, and registration quality was measured by the squared sum of intensity differences. The results indicate that the method is robust and outperforms direct registration for multimodal images with discrepant structures. We conclude that the proposed method is suitable for multimodal images with discrepant structures. Graphical Abstract: a schematic diagram of the atlas-based multimodal registration method.
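The evaluation metric named above, the squared sum of intensity differences (SSD), is straightforward to write down; a minimal sketch for two images on the same grid (an illustration of the metric, not the authors' implementation):

```python
import numpy as np

def ssd(fixed, moving):
    """Squared sum of intensity differences between two images on the
    same grid. Lower values indicate a better registration."""
    diff = fixed.astype(float) - moving.astype(float)
    return float((diff ** 2).sum())
```

A perfectly registered pair scores 0; comparing SSD before and after registration is how an experiment like the one above quantifies improvement.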


Subjects
Brain/physiology , Multimodal Imaging/methods , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
20.
Med Phys ; 45(1): e6-e31, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29148579

ABSTRACT

Breast magnetic resonance imaging (MRI) and x-ray mammography are two imaging modalities widely used for the early detection and diagnosis of breast diseases in women. The combination of these modalities leads to a more accurate diagnosis and treatment of breast diseases. The aim of this paper is to review the registration between breast MRI and x-ray mammographic images using patient-specific finite element-based biomechanical models. Specifically, a biomechanical model is obtained from the patient's MRI volume and is subsequently used to mimic the mammographic acquisition. Due to the different patient positioning and movement restrictions applied in each image modality, finite element analysis provides a realistic physics-based approach to perform the breast deformation. In contrast with other reviews, we not only describe the overall compression and registration process but also present the main ideas, describe the challenges, and provide an overview of the software used in each step of the process. Extracting an accurate description from the MR images and preserving stability during the finite element analysis require accurate knowledge of the algorithms used, as well as of the software and underlying physics. The wide perspective offered makes the paper suitable not only for expert researchers but also for graduate students and clinicians. We also include several medical applications, with the aim of filling the gap between engineering and clinical practice.
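The pipeline reviewed above ends by projecting the compressed 3D model into the 2D mammographic space. A simplified sketch of that last step, assuming the deformed node positions of the finite element mesh are available as an N×3 array in mm; the orthographic projection and the 45° MLO angle are illustrative assumptions, not values from the review.

```python
import numpy as np

def project_nodes(nodes_mm, view="CC"):
    """Orthographic projection of deformed 3D breast-model node
    positions (N x 3, mm) onto a 2D mammographic detector plane.

    For a cranio-caudal (CC) view the compression axis is taken as z,
    so the detector plane is x-y; for a medio-lateral-oblique (MLO)
    view the coordinates are first rotated about x (45 degrees here,
    a simplification) and z is then dropped.
    """
    if view == "CC":
        return nodes_mm[:, [0, 1]]          # drop the compression axis
    if view == "MLO":
        a = np.radians(45.0)
        R = np.array([[1.0, 0.0,        0.0],
                      [0.0, np.cos(a), -np.sin(a)],
                      [0.0, np.sin(a),  np.cos(a)]])
        return (nodes_mm @ R.T)[:, [0, 1]]
    raise ValueError("view must be 'CC' or 'MLO'")
```

In a real pipeline the projection geometry would come from the mammography unit's acquisition parameters rather than a fixed axis-aligned simplification.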


Subjects
Breast/diagnostic imaging , Finite Element Analysis , Magnetic Resonance Imaging , Mammography , Patient-Specific Modeling , Humans , Imaging, Three-Dimensional