Results 1 - 20 of 161
1.
Article in English | MEDLINE | ID: mdl-38730187

ABSTRACT

PURPOSE: Online C-arm calibration with a mobile fiducial cage plays an essential role in various image-guided interventions. However, it is challenging to develop a fully automatic approach, which requires not only an accurate detection of fiducial projections but also a robust 2D-3D correspondence establishment. METHODS: We propose a novel approach for online C-arm calibration with a mobile fiducial cage. Specifically, a novel mobile calibration cage embedded with 16 fiducials is designed, where the fiducials are arranged to form 4 line patterns with different cross-ratios. Then, an auto-context-based detection network (ADNet) is proposed to perform an accurate and robust detection of 2D projections of those fiducials in acquired C-arm images. Subsequently, we present a cross-ratio consistency-based 2D-3D correspondence establishment method to automatically match the detected 2D fiducial projections with the 3D fiducials, allowing for accurate online C-arm calibration. RESULTS: We designed and conducted comprehensive experiments to evaluate the proposed approach. For automatic detection of 2D fiducial projections, the proposed ADNet achieved a mean point-to-point distance of 0.65 ± 1.33 pixels. Additionally, the proposed C-arm calibration approach achieved a mean re-projection error of 1.01 ± 0.63 pixels and a mean point-to-line distance of 0.22 ± 0.12 mm. When the proposed C-arm calibration approach was applied to downstream tasks involving landmark and surface model reconstruction, sub-millimeter accuracy was achieved. CONCLUSION: In summary, we developed a novel approach for online C-arm calibration. Both qualitative and quantitative results of comprehensive experiments demonstrated the accuracy and robustness of the proposed approach. Our approach holds potential for various image-guided interventions.
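The 2D-3D matching described above hinges on the cross-ratio of four collinear points being invariant under projection. A minimal sketch of that idea follows; the function names and the nearest-cross-ratio matching strategy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio (|p1p3| * |p2p4|) / (|p2p3| * |p1p4|) of four collinear points."""
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return (d(p1, p3) * d(p2, p4)) / (d(p2, p3) * d(p1, p4))

def match_line_pattern(projected_pts, cage_patterns):
    """Match one detected line of 4 fiducial projections to the cage line pattern
    whose 3D cross-ratio is closest (the cross-ratio is preserved by projection).
    cage_patterns: dict mapping pattern name -> list of four 3D fiducial positions."""
    cr = cross_ratio(*projected_pts)
    ratios = {name: cross_ratio(*pts3d) for name, pts3d in cage_patterns.items()}
    return min(ratios, key=lambda name: abs(ratios[name] - cr))
```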

2.
Article in English | MEDLINE | ID: mdl-38662561

ABSTRACT

In a clinical setting, the acquisition of a certain medical image modality is often unavailable due to various considerations such as cost, radiation, etc. Therefore, unpaired cross-modality translation techniques, which involve training on unpaired data and synthesizing the target modality with the guidance of the acquired source modality, are of great interest. Previous methods for synthesizing target medical images establish a one-shot mapping through generative adversarial networks (GANs). As promising alternatives to GANs, diffusion models have recently received wide interest in generative tasks. In this paper, we propose a target-guided diffusion model (TGDM) for unpaired cross-modality medical image translation. For training, to encourage our diffusion model to learn more visual concepts, we apply a perception-prioritized weighting scheme (P2W) to the training objective. For sampling, a pre-trained classifier is adopted in the reverse process to suppress modality-specific remnants from the source data. Experiments on both brain MRI-CT and prostate MRI-US datasets demonstrate that the proposed method achieves visually realistic results that mimic a vivid anatomical section of the target organ. In addition, we have also conducted a subjective assessment based on the synthesized samples to further validate the clinical value of TGDM.
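The abstract mentions a perception-prioritized weighting of the training objective. Below is a small sketch of how such a P2-style weight is commonly applied to the noise-prediction loss of a diffusion model; the hyperparameter values and the exact form used in TGDM are assumptions.

```python
import torch

def p2_loss_weight(alphas_cumprod, k=1.0, gamma=1.0):
    """Perception-prioritized (P2) weights for the eps-prediction diffusion loss:
    w_t = 1 / (k + SNR(t))**gamma, with SNR(t) = abar_t / (1 - abar_t).
    k and gamma are illustrative hyperparameters."""
    snr = alphas_cumprod / (1.0 - alphas_cumprod)
    return 1.0 / (k + snr) ** gamma

def p2_weighted_mse(eps_pred, eps_true, t, weights):
    """Per-sample weighted MSE between predicted and true noise at timesteps t."""
    per_sample = ((eps_pred - eps_true) ** 2).flatten(1).mean(dim=1)
    return (weights[t] * per_sample).mean()
```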

3.
Article in English | MEDLINE | ID: mdl-38568402

ABSTRACT

PURPOSE: Segmentation of ossified ligamentum flavum (OLF) plays a crucial role in developing computer-assisted, image-guided systems for decompressive thoracic laminectomy. Manual segmentation is time-consuming, tedious, and labor-intensive. It also suffers from inter- and intra-observer variability. Automatic segmentation is highly desired. METHODS: A two-stage, localization context-aware framework is developed for automatic segmentation of ossified ligamentum flavum. In the first stage, localization heatmaps of OLFs are obtained via incremental regression. In the second stage, the obtained heatmaps are then treated as the localization context for a segmentation U-Net. Our framework directly maps a whole volumetric image to its volume-wise labels. RESULTS: We designed and conducted comprehensive experiments on datasets of 100 patients to evaluate the performance of the proposed method. Our method achieved an average Dice similarity coefficient of 61.2 ± 7.6%, an average surface distance of 1.1 ± 0.5 mm, and an average positive predictive value of 62.0 ± 12.8%. CONCLUSION: To the best of the authors' knowledge, this is the first study aiming for automatic segmentation of ossified ligamentum flavum. Results from the comprehensive experiments demonstrate the superior performance of the proposed method over the state-of-the-art methods.
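A minimal sketch of the second-stage idea, in which the localization heatmap is passed to the segmentation network as extra context; the channel-concatenation wiring shown here is an assumption about the design, and `seg_unet` stands in for any 3D U-Net.

```python
import torch
import torch.nn as nn

class HeatmapConditionedSegmenter(nn.Module):
    """Toy illustration of heatmap-as-context segmentation: the first-stage
    localization heatmap is concatenated to the image as an extra input channel."""
    def __init__(self, seg_unet):
        super().__init__()
        self.seg_unet = seg_unet

    def forward(self, image, heatmap):
        # image, heatmap: (B, 1, D, H, W); the heatmap supplies localization context
        return self.seg_unet(torch.cat([image, heatmap], dim=1))
```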

4.
Int J Comput Assist Radiol Surg ; 19(3): 507-517, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38236477

ABSTRACT

PURPOSE: Multimodal articulated image registration (MAIR) is a challenging problem because the resulting transformation needs to maintain rigidity for bony structures while allowing elastic deformation for surrounding soft tissues. Existing deep learning-based methods ignore the articulated structures and consider it as a pure deformable registration problem, leading to suboptimal results. METHODS: We propose a novel weakly supervised anatomy-aware multimodal articulated image registration network, referred to as MAIRNet, to solve the challenging problem. The architecture of MAIRNet comprises two branches: a non-learnable polyrigid registration branch to estimate an initial velocity field, and a learnable deformable registration branch to learn an increment. These two branches work together to produce a velocity field that can be integrated to generate the final displacement field. RESULTS: We designed and conducted comprehensive experiments on three datasets to evaluate the performance of the proposed method. Specifically, on the hip dataset, our method achieved, respectively, an average Dice of 90.8%, 92.4% and 91.3% for the pelvis, the right femur, and the left femur. On the lumbar spinal dataset, our method obtained, respectively, an average Dice of 86.1% and 85.9% for the L4 and the L5 vertebrae. On the thoracic spinal dataset, our method achieved, respectively, an average Dice of 76.7%, 79.5%, 82.9%, 85.5% and 85.7% for the five thoracic vertebrae ranging from T6 to T10. CONCLUSION: In summary, we developed a novel approach for multimodal articulated image registration. Comprehensive experiments conducted on three typical yet challenging datasets demonstrated the efficacy of the present approach. Our method achieved better results than the state-of-the-art approaches.
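The velocity-field integration step can be illustrated with the scaling-and-squaring scheme that is standard in diffeomorphic registration; whether MAIRNet uses exactly this integrator is an assumption, and the voxel/channel conventions below are illustrative.

```python
import torch
import torch.nn.functional as F

def warp(field, disp):
    """Warp `field` (B, C, D, H, W) by a displacement `disp` (B, 3, D, H, W),
    with displacement channels ordered (x, y, z) in voxel units."""
    B, _, D, H, W = disp.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xx, yy, zz), dim=0).float().to(disp.device)  # (3, D, H, W)
    new = grid.unsqueeze(0) + disp                                   # displaced voxel coords
    for i, size in enumerate((W, H, D)):                             # normalize to [-1, 1]
        new[:, i] = 2.0 * new[:, i] / (size - 1) - 1.0
    return F.grid_sample(field, new.permute(0, 2, 3, 4, 1), align_corners=True)

def integrate_velocity(vel, n_steps=7):
    """Scaling-and-squaring integration of a stationary velocity field into a
    displacement field: phi = exp(v) approximated by repeated self-composition."""
    disp = vel / (2 ** n_steps)
    for _ in range(n_steps):
        disp = disp + warp(disp, disp)   # phi <- phi o phi
    return disp
```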


Subject(s)
Magnetic Resonance Imaging , X-Ray Computed Tomography , Humans , X-Ray Computed Tomography/methods , Pelvis , Bones , Femur , Computer-Assisted Image Processing/methods , Algorithms
5.
Arthroscopy ; 40(3): 745-751, 2024 03.
Article in English | MEDLINE | ID: mdl-37419221

ABSTRACT

PURPOSE: To investigate the differences in the prevalence of ligamentum teres (LT) tears and other radiographic measurements in borderline dysplasia of the hip (BDDH) with/without microinstability and to evaluate the associations between these imaging findings and the prevalence of microinstability in patients with BDDH. METHODS: This was a retrospective study of symptomatic patients with BDDH (18° ≤ lateral center-edge angle <25°) treated with arthroscopy in our hospital between January 2016 and December 2021. These patients were divided into the BDDH with microinstability (mBDDH) group and the stable BDDH (nBDDH) group. The radiographic parameters associated with hip joint stability, such as the state of the LT, acetabular versions, femoral neck version, Tönnis angle, combined anteversions, and anterior/posterior acetabular coverage, were reviewed and analyzed. RESULTS: There were 54 patients (49 female/5 male, 26.7 ± 6.9 years) in the mBDDH group and 81 patients (74 female/7 male, 27.2 ± 7.7 years) in the nBDDH group. The mBDDH group had higher rates of LT tears (43/54 vs 5/81) and generalized laxity, as well as increased femoral neck version, acetabular version, and combined anteversion (52.4 ± 5.9 vs 41.5 ± 7.1 at the 3-o'clock level), compared with the nBDDH group. Binary logistic regression showed that LT tears (odds ratio 6.32, 95% confidence interval 1.38-28.8; P = .02; R2 = .458) and combined anteversion at the 3-o'clock level (odds ratio 1.42, 95% confidence interval 1.09-1.84; P < .01; R2 = .458) were independent predictors of microinstability in patients with BDDH. The cutoff value of combined anteversion at the 3-o'clock level was 49.5°. In addition, LT tear was correlated with increased combined anteversion at the 3-o'clock level in patients with BDDH (P < .01, η2 = 0.29). CONCLUSIONS: LT tears and increased combined anteversion at the 3-o'clock level on the acetabular clockface were associated with hip microinstability in patients with BDDH, suggesting that patients with BDDH and LT tears might have a greater prevalence of anterior microinstability. LEVEL OF EVIDENCE: Level III, case-control study.


Subject(s)
Hip Joint , Round Ligaments , Humans , Male , Female , Retrospective Studies , Case-Control Studies , Hip Joint/diagnostic imaging , Hip Joint/surgery , Acetabulum/diagnostic imaging , Acetabulum/surgery
6.
Comput Med Imaging Graph ; 111: 102322, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38157671

ABSTRACT

Deformable medical image registration plays an important role in many clinical applications. It aims to find a dense deformation field to establish point-wise correspondences between a pair of fixed and moving images. Recently, unsupervised deep learning-based registration methods have drawn increasing attention because of their fast inference at the testing stage. Despite remarkable progress, existing deep learning-based methods suffer from several limitations including: (a) they often overlook the explicit modeling of feature correspondences due to limited receptive fields; (b) the performance on image pairs with large spatial displacements is still limited since the dense deformation field is regressed from features learned by local convolutions; and (c) desirable properties, including topology-preservation and the invertibility of transformation, are often ignored. To address the above limitations, we propose a novel Convolutional Neural Network (CNN) consisting of a Siamese Multi-scale Interactive-representation LEarning (SMILE) encoder and a Hierarchical Diffeomorphic Deformation (HDD) decoder. Specifically, the SMILE encoder aims for effective feature representation learning and spatial correspondence establishment while the HDD decoder seeks to regress the dense deformation field in a coarse-to-fine manner. We additionally propose a novel Local Invertible Loss (LIL) to encourage topology-preservation and local invertibility of the regressed transformation while keeping high registration accuracy. Extensive experiments conducted on two publicly available brain image datasets demonstrate the superiority of our method over the state-of-the-art (SOTA) approaches. Specifically, on the Neurite-OASIS dataset, our method achieved an average DSC of 0.815 and an average ASSD of 0.633 mm.
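A common way to encourage local invertibility is to penalize non-positive Jacobian determinants of the regressed transformation. The sketch below illustrates that surrogate and is not necessarily the paper's exact LIL formulation.

```python
import torch
import torch.nn.functional as F

def jacobian_determinant_penalty(disp):
    """Penalize locally non-invertible (folding) regions of a 3D displacement field
    by pushing the Jacobian determinant of phi = id + disp to stay positive.
    disp: (B, 3, D, H, W) displacement in voxel units."""
    # forward finite differences of each displacement component
    dz = disp[:, :, 1:, :-1, :-1] - disp[:, :, :-1, :-1, :-1]
    dy = disp[:, :, :-1, 1:, :-1] - disp[:, :, :-1, :-1, :-1]
    dx = disp[:, :, :-1, :-1, 1:] - disp[:, :, :-1, :-1, :-1]
    J = torch.stack((dx, dy, dz), dim=-1)        # (B, 3, D-1, H-1, W-1, 3)
    J = J.permute(0, 2, 3, 4, 1, 5)              # (..., 3, 3) per-voxel gradient of u
    J = J + torch.eye(3, device=disp.device)     # Jacobian of id + u
    det = torch.det(J)
    return F.relu(-det).mean()                   # only negative determinants are penalized
```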


Subject(s)
Algorithms , Neural Networks (Computer) , Brain , Computer-Assisted Image Processing/methods
7.
Comput Med Imaging Graph ; 110: 102314, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37988845

ABSTRACT

In this paper, we address the problem of estimating remaining surgery duration (RSD) from surgical video frames. We propose a Bayesian long short-term memory (LSTM) network-based Deep Negative Correlation Learning approach called BD-Net for accurate RSD regression as well as estimation of prediction uncertainty. Our method aims to extract discriminative visual features from surgical video frames and model the temporal dependencies among frames to improve the RSD prediction accuracy. To this end, we propose to train an ensemble of Bayesian LSTMs on top of a backbone network by way of deep negative correlation learning (DNCL). More specifically, we deeply learn a pool of decorrelated Bayesian regressors with sound generalization capabilities by managing their intrinsic diversity. BD-Net is simple and efficient. After training, it can produce both the RSD prediction and an uncertainty estimate in a single inference run. We demonstrate the efficacy of BD-Net on publicly available datasets of two different types of surgeries: one containing 101 cataract microscopic surgeries with short durations and the other containing 80 cholecystectomy laparoscopic surgeries with relatively longer durations. Experimental results on both datasets demonstrate that the proposed BD-Net achieves better results than the state-of-the-art (SOTA) methods. A reference implementation of our method can be found at: https://github.com/jywu511/BD-Net.
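The negative correlation learning idea can be written compactly for an ensemble of regression heads. The sketch below shows the textbook NCL penalty applied to RSD regression; the trade-off weight and tensor shapes are illustrative assumptions, not BD-Net's exact training code.

```python
import torch

def negative_correlation_loss(preds, target, lam=0.5):
    """Negative correlation learning for an ensemble of regressors.
    preds: (M, B) predictions from M ensemble heads; target: (B,).
    Each head pays an MSE term plus a diversity term
    p_i = (f_i - fbar) * sum_{j != i}(f_j - fbar) = -(f_i - fbar)^2,
    which decorrelates the heads' errors. lam balances accuracy vs. diversity."""
    mean_pred = preds.mean(dim=0, keepdim=True)           # ensemble mean fbar
    mse = ((preds - target.unsqueeze(0)) ** 2).mean()
    diversity = -((preds - mean_pred) ** 2).mean()        # NCL penalty averaged over heads
    return mse + lam * diversity
```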


Subject(s)
Learning , Bayes Theorem , Uncertainty
8.
Comput Methods Programs Biomed ; 240: 107729, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37531690

ABSTRACT

BACKGROUND AND OBJECTIVE: Deep learning-based approaches are excellent at learning from large amounts of data, but can be poor at generalizing the learned knowledge to testing datasets with domain shift, i.e., when there exists a distribution discrepancy between the training dataset (source domain) and the testing dataset (target domain). In this paper, we investigate unsupervised domain adaptation (UDA) techniques to train a cross-domain segmentation method which is robust to domain shift, eliminating the requirement of any annotations on the target domain. METHODS: To this end, we propose an Entropy-guided Disentangled Representation Learning, referred to as EDRL, for UDA in semantic segmentation. Concretely, we synergistically integrate image alignment via disentangled representation learning with feature alignment via entropy-based adversarial learning into one network, which is trained end-to-end. We additionally introduce a dynamic feature selection mechanism via soft gating, which helps to further enhance the task-specific feature alignment. We validate the proposed method on two publicly available datasets: the CT-MR dataset and the multi-sequence cardiac MR (MS-CMR) dataset. RESULTS: On both datasets, our method achieved better results than the state-of-the-art (SOTA) methods. Specifically, on the CT-MR dataset, our method achieved an average DSC of 84.8% when taking CT as the source domain and MR as the target domain, and an average DSC of 84.0% when taking MR as the source domain and CT as the target domain. CONCLUSIONS: Results from comprehensive experiments demonstrate the efficacy of the proposed EDRL model for cross-domain medical image segmentation.
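Entropy-based adversarial feature alignment typically operates on per-pixel entropy maps of the segmentation softmax, which a discriminator then tries to tell apart across domains. A minimal sketch of that quantity; whether EDRL uses exactly this normalization is an assumption.

```python
import math
import torch
import torch.nn.functional as F

def entropy_map(logits, eps=1e-8):
    """Pixel-wise normalized entropy of the segmentation softmax,
    the quantity commonly aligned adversarially in entropy-based UDA.
    logits: (B, C, H, W) -> entropy map (B, 1, H, W) in [0, 1]."""
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + eps)).sum(dim=1, keepdim=True)
    return ent / math.log(logits.shape[1])   # divide by log(C) to normalize
```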


Subject(s)
Heart , Semantics , Entropy , Computer-Assisted Image Processing
9.
Med Image Anal ; 89: 102888, 2023 10.
Article in English | MEDLINE | ID: mdl-37451133

ABSTRACT

Formalizing surgical activities as triplets of the instruments used, actions performed, and target anatomies is becoming a gold standard approach for surgical activity modeling. The benefit is that this formalization helps to obtain a more detailed understanding of tool-tissue interaction which can be used to develop better Artificial Intelligence assistance for image-guided surgery. Earlier efforts and the CholecTriplet challenge introduced in 2021 have put together techniques aimed at recognizing these triplets from surgical footage. Estimating the spatial locations of the triplets as well would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly-supervised bounding box localization of every visible surgical instrument (or tool), as the key actors, and the modeling of each tool's activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods, an in-depth analysis of the obtained results across multiple metrics and across visual and procedural challenges, a discussion of their significance, and useful insights for future research directions and applications in surgery.


Subject(s)
Artificial Intelligence , Computer-Assisted Surgery , Humans , Endoscopy , Algorithms , Computer-Assisted Surgery/methods , Surgical Instruments
10.
Comput Biol Med ; 160: 106995, 2023 06.
Article in English | MEDLINE | ID: mdl-37187134

ABSTRACT

Despite the significant performance improvement on multi-organ segmentation with supervised deep learning-based methods, their label-hungry nature hinders their application in practical disease diagnosis and treatment planning. Due to the challenges in obtaining expert-level accurate, densely annotated multi-organ datasets, label-efficient segmentation, such as partially supervised segmentation trained on partially labeled datasets or semi-supervised medical image segmentation, has attracted increasing attention recently. However, most of these methods suffer from the limitation that they neglect or underestimate the challenging unlabeled regions during model training. To this end, we propose a novel Context-aware Voxel-wise Contrastive Learning method, referred to as CVCL, to take full advantage of both labeled and unlabeled information in label-scarce datasets for a performance improvement on multi-organ segmentation. Experimental results demonstrate that our proposed method achieves superior performance compared with other state-of-the-art methods.
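Voxel-wise contrastive learning is usually built on an InfoNCE-style objective over sampled voxel embeddings. The sketch below is a generic version of that objective; the positive/negative sampling strategy and the exact CVCL formulation are not specified here and are assumptions.

```python
import torch
import torch.nn.functional as F

def voxel_infonce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE loss between voxel embeddings.
    anchor, positive: (N, D) embeddings of matched voxels;
    negatives: (N, K, D) embeddings of contrasting voxels; tau is the temperature."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True) / tau        # (N, 1)
    neg_sim = torch.einsum("nd,nkd->nk", anchor, negatives) / tau    # (N, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)                           # positive is class 0
```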

11.
IEEE Trans Med Imaging ; 42(11): 3256-3268, 2023 11.
Article in English | MEDLINE | ID: mdl-37227905

ABSTRACT

When developing context-aware systems, automatic surgical phase recognition and tool presence detection are two essential tasks. Previous attempts have been made to develop methods for both tasks, but the majority of the existing methods utilize a frame-level loss function (e.g., cross-entropy) which does not fully leverage the underlying semantic structure of a surgery, leading to sub-optimal results. In this paper, we propose multi-task learning-based, LAtent Space-constrained Transformers, referred to as LAST, for automatic surgical phase recognition and tool presence detection. Our design features a two-branch transformer architecture with a novel and generic way to leverage video-level semantic information during network training. This is done by learning a compact non-linear representation of the underlying semantic structure of surgical videos through a transformer variational autoencoder (VAE) and by encouraging models to follow the learned statistical distributions. In other words, LAST is structure-aware and favors predictions that lie on the extracted low-dimensional data manifold. Validated on two public datasets of cholecystectomy surgery, i.e., the Cholec80 dataset and the M2cai16 dataset, our method achieves better results than other state-of-the-art methods. Specifically, on the Cholec80 dataset, our method achieves an average accuracy of 93.12±4.71%, an average precision of 89.25±5.49%, an average recall of 90.10±5.45% and an average Jaccard of 81.11±7.62% for phase recognition, and an average mAP of 95.15±3.87% for tool presence detection. Similar superior performance is also observed when LAST is applied to the M2cai16 dataset.


Subject(s)
Computer-Assisted Surgery , Cholecystectomy
12.
Int J Comput Assist Radiol Surg ; 18(6): 989-999, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37027083

ABSTRACT

PURPOSE: Accurate three-dimensional (3D) models play crucial roles in computer assisted planning and interventions. MR or CT images are frequently used to derive 3D models but have the disadvantages of being expensive or involving ionizing radiation (e.g., CT acquisition). An alternative method based on calibrated 2D biplanar X-ray images is highly desired. METHODS: A point cloud network, referred to as LatentPCN, is developed for reconstruction of 3D surface models from calibrated biplanar X-ray images. LatentPCN consists of three components: an encoder, a predictor, and a decoder. During training, a latent space is learned to represent shape features. After training, LatentPCN maps sparse silhouettes generated from 2D images to a latent representation, which is taken as the input to the decoder to derive a 3D bone surface model. Additionally, LatentPCN allows for estimation of a patient-specific reconstruction uncertainty. RESULTS: We designed and conducted comprehensive experiments on datasets of 25 simulated cases and 10 cadaveric cases to evaluate the performance of LatentPCN. On these two datasets, the mean reconstruction errors achieved by LatentPCN were 0.83 mm and 0.92 mm, respectively. A correlation between large reconstruction errors and high uncertainty in the reconstruction results was observed. CONCLUSION: LatentPCN can reconstruct patient-specific 3D surface models from calibrated 2D biplanar X-ray images with high accuracy and uncertainty estimation. The sub-millimeter reconstruction accuracy on cadaveric cases demonstrates its potential for surgical navigation applications.


Subject(s)
Three-Dimensional Imaging , Computer-Assisted Surgery , Humans , Three-Dimensional Imaging/methods , X-Rays , Cadaver
13.
Med Image Anal ; 86: 102803, 2023 05.
Article in English | MEDLINE | ID: mdl-37004378

ABSTRACT

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of instrument, action, and target delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition which is of utmost importance for the development of AI in surgery.


Subject(s)
Benchmarking , Laparoscopy , Humans , Algorithms , Operating Rooms , Workflow , Deep Learning
14.
J Orthop Res ; 41(8): 1746-1753, 2023 08.
Article in English | MEDLINE | ID: mdl-36691861

ABSTRACT

In this paper, we present and evaluate HipRecon, a noncommercial software package that simultaneously calculates pelvic tilt and rotation from an anteroposterior pelvis radiograph. We asked: What are the (1) accuracy and precision, (2) robustness, and (3) intra-/interobserver reliability/reproducibility of HipRecon for analyzing both pelvic tilt and rotation on conventional AP pelvis radiographs? (4) How does the prediction of pelvic tilt on AP pelvis radiographs using HipRecon compare to established measurement methods? We compared the actual pelvic tilt of 20 adult human cadaveric pelvises with the calculated pelvic orientation based on an AP pelvis radiograph using HipRecon software. The pelvises were mounted on a radiolucent fixture and a total of 380 AP pelvis radiographs with different configurations were acquired. In addition, we investigated the correlation between actual tilt and the tilt calculated using HipRecon and seven other established measurement methods. The calculated software accuracy was 0.2 ± 2.0° (-3.6-4.1) for pelvic tilt and 0.0 ± 1.2° (-2.2-2.3, p = 0.39) for pelvic rotation. The Bland-Altman analysis showed values that were evenly and randomly spread in both directions. HipRecon showed excellent consistency for the measurement of pelvic tilt and rotation (intraobserver intraclass correlation coefficient [ICC]: 0.99 [95% CI: 0.99-0.99] and interobserver ICC 0.99 [95% CI: 0.99-0.99]). Of all eight analyzed methods, the highest correlation coefficient was found for HipRecon (r = 0.98, p < 0.001). In the future, HipRecon could be used to detect changes in patient-specific pelvic orientation, helping to improve clinical understanding and decision-making in pathologies of the hip.
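The Bland-Altman analysis reported above can be reproduced generically as a bias plus 95% limits of agreement; the snippet below is a small illustration of that computation, not the study's own code.

```python
import numpy as np

def bland_altman(measured, reference):
    """Bland-Altman bias and 95% limits of agreement between a measurement
    (e.g., software-calculated pelvic tilt) and a reference (actual tilt)."""
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    diff = measured - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```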


Subject(s)
Pelvis , Posture , Adult , Humans , Reproducibility of Results , Rotation , Radiography , Pelvis/diagnostic imaging , Acetabulum/diagnostic imaging
15.
J Hip Preserv Surg ; 10(3-4): 214-219, 2023.
Article in English | MEDLINE | ID: mdl-38162264

ABSTRACT

Patients with developmental dysplasia of the hip (DDH) are believed to present with increased anterior pelvic tilt to compensate for reduced anterior femoral head coverage. If true, pelvic tilt in dysplastic patients should be high preoperatively and decrease after correction with periacetabular osteotomy (PAO). To date, the evolution of pelvic tilt in long-term follow-up after PAO has not been reported. We therefore asked the following questions: (i) is there a difference in pelvic tilt between patients with DDH and an asymptomatic control group? (ii) How does pelvic tilt evolve during long-term follow-up after Bernese PAO compared with before surgery? This is a therapeutic study with level of evidence III. We retrospectively compared preoperative pelvic tilt in 64 dysplastic patients (71 hips) with an asymptomatic control group of 20 patients (20 hips). In addition, pelvic tilt was assessed and compared immediately postoperatively and at long-term follow-up (18 ± 8 years [range 7-34 years]). Dysplastic patients had a significantly higher mean preoperative pelvic tilt than controls [2.3 ± 5.3° (-11.2° to 16.4°) versus 1.1 ± 3.0° (-4.9 to 5.9), P = 0.006]. Mean pelvic tilt postoperatively was 1.5 ± 5.3° (-11.2 to 17.0°, P = 0.221) and at long-term follow-up was 0.4 ± 5.7° (range -9.9° to 20.9°, P = 0.002). Dysplastic hips undergoing PAO show a statistically significant decrease in pelvic tilt during long-term follow-up. However, given the large interindividual variability in pelvic tilt, the observed differences may not achieve clinical significance.

16.
Sensors (Basel) ; 22(23)2022 Dec 03.
Article in English | MEDLINE | ID: mdl-36502167

ABSTRACT

In robot-assisted ultrasound-guided needle biopsy, it is essential to conduct calibration of the ultrasound probe and to perform hand-eye calibration of the robot in order to establish a link between intra-operatively acquired ultrasound images and robot-assisted needle insertion. Based on a high-precision optical tracking system, novel methods for ultrasound probe and robot hand-eye calibration are proposed. Specifically, we first fix optically trackable markers to the ultrasound probe and to the robot. We then design a five-wire phantom to calibrate the ultrasound probe. Finally, an effective hand-eye calibration method is proposed that takes advantage of the steady movement of the robot without requiring an additional calibration frame or solving the AX=XB equation. After calibration, our system allows for in situ definition of target lesions and aiming trajectories from intra-operatively acquired ultrasound images in order to align the robot for precise needle biopsy. Comprehensive experiments were conducted to evaluate the accuracy of different components of our system as well as the overall system accuracy. Experimental results demonstrated the efficacy of the proposed methods.
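The paired-point idea that avoids an explicit AX=XB solve can be illustrated with a standard SVD-based rigid registration between corresponding points collected in the two coordinate systems. This is a generic sketch of that building block, not the system's exact implementation.

```python
import numpy as np

def rigid_paired_point_registration(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P to Q
    (Kabsch / SVD solution), the kind of paired-point step that can replace
    an explicit AX = XB solve in hand-eye calibration. P, Q: (N, 3) arrays
    of corresponding points expressed in the two coordinate systems."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)                 # center both point sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t
```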


Subject(s)
Robotics , Hand/diagnostic imaging , Upper Extremity , Needle Biopsy , Ultrasonography
17.
Sensors (Basel) ; 22(21)2022 Nov 03.
Article in English | MEDLINE | ID: mdl-36366144

ABSTRACT

Pedicle screw insertion with robot assistance dramatically improves surgical accuracy and safety when compared with manual implantation. In developing such a system, hand-eye calibration is an essential component that aims to determine the transformation between the position tracking and robot-arm coordinate systems. In this paper, we propose an effective hand-eye calibration method, namely registration-based hand-eye calibration (RHC), which estimates the calibration transformation via point set registration without the need to solve the AX=XB equation. Our hand-eye calibration method consists of tool-tip pivot calibrations in two coordinate systems, in addition to paired-point matching, where the point pairs are generated via the steady movement of the robot arm in space. After calibration, our system allows for robot-assisted, image-guided pedicle screw insertion. Comprehensive experiments are conducted to verify the efficacy of the proposed hand-eye calibration method. A mean distance deviation of 0.70 mm and a mean angular deviation of 0.68° are achieved by our system when the proposed hand-eye calibration method is used. Further experiments on drilling trajectories are conducted on plastic vertebrae as well as pig vertebrae. A mean distance deviation of 1.01 mm and a mean angular deviation of 1.11° are observed when the drilled trajectories are compared with the planned trajectories on the pig vertebrae.
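The tool-tip pivot calibration step mentioned above has a standard least-squares formulation; the snippet below is a generic sketch of that step, not necessarily the paper's solver.

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Least-squares tool-tip pivot calibration: given marker poses (R_i, p_i)
    recorded while pivoting the tool tip about a fixed point, solve
    R_i @ t_tip + p_i = p_pivot for the tip offset t_tip (marker frame) and
    the fixed pivot point p_pivot (tracker frame)."""
    A, b = [], []
    for R, p in zip(rotations, translations):
        A.append(np.hstack([np.asarray(R, float), -np.eye(3)]))  # unknowns: [t_tip; p_pivot]
        b.append(-np.asarray(p, float))
    x, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return x[:3], x[3:]                                           # t_tip, p_pivot
```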


Subject(s)
Pedicle Screws , Robotic Surgical Procedures , Computer-Assisted Surgery , Swine , Animals , Robotic Surgical Procedures/methods , Calibration , Hand/surgery , Computer-Assisted Surgery/methods
18.
Med Image Anal ; 82: 102607, 2022 11.
Article in English | MEDLINE | ID: mdl-36075148

ABSTRACT

Despite the remarkable success of deep learning, distribution divergence remains a challenge that hinders the performance of many tasks in medical image analysis. A large distribution gap may degrade knowledge transfer across different domains or feature subspaces. To achieve better distribution alignment, we propose a novel module named Instance to Prototype Earth Mover's Distance (I2PEMD), where shared class-specific prototypes are progressively learned to narrow the distribution gap across different domains or feature subspaces, and Earth Mover's Distance (EMD) is calculated to take into consideration the cross-class relationships during embedding alignment. We validate the effectiveness of the proposed I2PEMD on two different tasks: multi-modal medical image segmentation and semi-supervised classification. Specifically, in multi-modal medical image segmentation, I2PEMD is explicitly utilized as a distribution alignment regularization term to supervise the model training process, while in semi-supervised classification, I2PEMD works as an alignment measure to sort and cherry-pick the unlabeled data for more accurate and robust pseudo-labeling. Results from comprehensive experiments demonstrate the efficacy of the present method.
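The Earth Mover's Distance at the core of I2PEMD is the optimal-transport cost between two discrete distributions (here, instance embeddings versus class prototypes). Below is a generic, non-differentiable illustration via the transport linear program; it is not the paper's implementation, and the supply/demand weights are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def earth_movers_distance(cost, supply, demand):
    """Discrete EMD via the transport LP: min <T, cost> s.t. row sums = supply,
    column sums = demand, T >= 0. Requires sum(supply) == sum(demand).
    cost: (m, n) pairwise distances, e.g., between instance embeddings and prototypes."""
    m, n = cost.shape
    c = np.asarray(cost, float).reshape(-1)          # T flattened row-major: T[i, j] -> i*n + j
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):                               # row-sum (supply) constraints
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                               # column-sum (demand) constraints
        A_eq[m + j, j::n] = 1.0
    b_eq = np.concatenate([supply, demand])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun
```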


Subject(s)
Algorithms , Automated Pattern Recognition , Humans , Automated Pattern Recognition/methods , Reproducibility of Results
19.
Zhongguo Xiu Fu Chong Jian Wai Ke Za Zhi ; 36(8): 915-922, 2022 Aug 15.
Article in Chinese | MEDLINE | ID: mdl-35979779

ABSTRACT

Objective: To review the basic principles, advantages, research progress, clinical applications, and limitations of robot-assisted technology in the field of orthopedic trauma, with a particular focus on fracture reduction robots. Methods: An extensive review of the research literature on the principles of robot-assisted technology and fracture reduction robots was conducted to analyze their technical advantages, clinical efficacy, and shortcomings, and to discuss future development trends in this field. Results: Orthopedic surgical robots can assist orthopedists with intuitive preoperative planning, precise intraoperative control, and minimally invasive operations, greatly expanding the ability of doctors to evaluate and treat orthopedic trauma. Orthopedic trauma surgical robots have achieved a breakthrough from basic research to clinical application, and preliminary results show that the technology can significantly improve surgical precision and reduce surgical trauma. However, problems such as insufficient evaluation of effectiveness, limited means of technical realization, and narrow clinical indications still need to be solved. Conclusion: Robot-assisted technology has broad application prospects in orthopedic trauma, but its development is still at an early stage. Cooperative medical-industrial research, the construction of platforms for communication among doctors, standardized training, and data sharing are needed to continuously advance robot-assisted technology in orthopedic trauma and to better realize its clinical value.


Subject(s)
Orthopedics , Robotics , Fracture Fixation , Minimally Invasive Surgical Procedures
20.
Medicina (Kaunas) ; 58(6)2022 Jun 20.
Article in English | MEDLINE | ID: mdl-35744095

ABSTRACT

Background and Objectives: Even after the 'death' of Lewinnek's safe zone, the orientation of the prosthetic cup in total hip arthroplasty is crucial for success. Accurate cup placement can be achieved with surgical navigation systems. The literature lacks study cohorts with large numbers of hips because postoperative computed tomography is required for the reproducible evaluation of the acetabular component position. To overcome this limitation, we used a validated software program, HipMatch, to accurately assess the cup orientation based on an anterior-posterior pelvic X-ray. The aims of this study were to (1) determine the intraoperative 'individual adjustment' of the cup positioning compared to the widely suggested target values of 40° of inclination and 15° of anteversion, and to evaluate the (2) 'accuracy', (3) 'precision', and (4) robustness regarding systematic errors of an image-free navigation system in routine clinical use. Material and Methods: We performed a retrospective accuracy study in a single-surgeon case series of 367 navigated primary total hip arthroplasties (PiGalileo™, Smith+Nephew) performed through an anterolateral approach between January 2011 and August 2018. The individual adjustments were defined as the differences between the target cup orientation (40° of inclination, 15° of anteversion) and the intraoperative registration with the navigation software. The accuracy was the difference between the intraoperatively captured cup orientation and the actual postoperative cup orientation determined by HipMatch. The precision was analyzed by the standard deviation of the difference between the intraoperatively registered and the actual cup orientation. The outliers were detected using the Tukey method. Results: Compared to the target values (40° inclination, 15° anteversion), the individual adjustments showed that the cups are impacted at higher inclination (mean 3.2° ± 1.6°, range (−2)−18°) and higher anteversion (mean 5.0° ± 7.0°, range (−15)−23°) (p < 0.001). The accuracy of the navigated cup placement was −1.7° ± 3.0° ((−15)−11°) for inclination and −4.9° ± 6.2° ((−28)−18°) for anteversion (p < 0.001). The precision of the system was higher for inclination (standard deviation [SD] 3.0°) than for anteversion (SD 6.2°) (p < 0.001). We found no difference in the prevalence of outliers for inclination (1.9% (7 out of 367)) compared to anteversion (1.63% (6 out of 367), p = 0.78). The Bland-Altman analysis showed that the differences between the intraoperatively captured final position and the postoperatively determined actual position were spread evenly and randomly for inclination and anteversion. Conclusion: The evaluation of an image-less navigation system in this large study cohort provides accurate and reliable intraoperative feedback. The accuracy and precision were inferior to those of CT-based navigation systems, particularly regarding anteversion; however, the assessed values are certainly within a clinically acceptable range. This use of image-less navigation offers an additional tool to address challenging hip prostheses in the context of the hip-spine relationship, to achieve adequate placement of the acetabular components with a minimum of outliers.
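Outliers were detected with the Tukey method; the snippet below is a generic IQR-fence sketch of that procedure, with the conventional fence factor k = 1.5 assumed rather than taken from the study.

```python
import numpy as np

def tukey_outliers(values, k=1.5):
    """Tukey (IQR) fence outlier detection: flag values below Q1 - k*IQR or
    above Q3 + k*IQR. Returns a boolean mask of outliers."""
    values = np.asarray(values, float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)
```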


Subject(s)
Hip Replacement Arthroplasty , Hip Prosthesis , Computer-Assisted Surgery , Acetabulum/diagnostic imaging , Acetabulum/surgery , Humans , Retrospective Studies , Computer-Assisted Surgery/methods