Results 1 - 20 of 20
1.
BMC Musculoskelet Disord ; 23(1): 701, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35869451

ABSTRACT

BACKGROUND: Safe and accurate execution of surgery still relies mainly on preoperative plans generated from preoperative imaging. Frequent intraoperative interaction with such patient images is needed during the intervention, which is currently a cumbersome process, given that the images are generally displayed on peripheral two-dimensional (2D) monitors and controlled through interface devices located outside the sterile field. This study proposes a new medical image control concept based on a brain-computer interface (BCI) that allows for hands-free, direct image manipulation without relying on gesture recognition methods or voice commands. METHOD: A software environment was designed for displaying three-dimensional (3D) patient images on external monitors, with hands-free image manipulation driven by the user's brain signals detected by the BCI device (i.e., visually evoked signals). In a user study, ten orthopedic surgeons completed a series of standardized image manipulation tasks to navigate to and locate predefined 3D points in a computed tomography (CT) image using the developed interface. Accuracy was assessed as the mean error between the predefined locations (ground truth) and the locations navigated to by the surgeons. All surgeons rated the performance and potential intraoperative usability in a standardized survey using a five-point Likert scale (1 = strongly disagree to 5 = strongly agree). RESULTS: When using the developed interface, the mean image control error was 15.51 mm (SD: 9.57). User acceptance was rated with a Likert score of 4.07 (SD: 0.96), while the overall impression of the interface was rated 3.77 (SD: 1.02). We observed a significant correlation between the users' overall impression and the calibration score they achieved.
CONCLUSIONS: The developed BCI, which allowed for purely brain-guided medical image control, yielded promising results and showed potential for future intraoperative applications. The major limitation to overcome is the interaction delay.
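The accuracy metric described in this abstract, the mean Euclidean error between predefined ground-truth points and the locations navigated to by the surgeons, can be sketched as follows (function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def mean_navigation_error(ground_truth, navigated):
    """Mean and SD of the Euclidean distance (e.g., in mm) between
    predefined 3D target points and the locations navigated to."""
    gt = np.asarray(ground_truth, dtype=float)
    nav = np.asarray(navigated, dtype=float)
    errors = np.linalg.norm(gt - nav, axis=1)  # one distance per point pair
    return errors.mean(), errors.std()
```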


Subject(s)
Brain-Computer Interfaces , Feasibility Studies , Humans , Software , Tomography, X-Ray Computed , User-Computer Interface
2.
Sensors (Basel) ; 18(8)2018 Jul 28.
Article in English | MEDLINE | ID: mdl-30060589

ABSTRACT

Measuring the volume of bird eggs is an important task for the poultry industry and for ornithological research, owing to the high revenue generated by the industry. In this paper, we describe a prototype of a new metrological system comprising a 3D range camera, the Microsoft Kinect (Version 2), and a point cloud post-processing algorithm for estimating egg volume. The system calculates the egg volume directly from shape parameters estimated by a least-squares method in which the point clouds of eggs captured by the Kinect are fitted to novel geometric models of an egg in 3D space. Using these models, the shape parameters of an egg are estimated simultaneously with the egg's position and orientation under the least-squares criterion. Four sets of experiments were performed to verify the functionality and performance of the system, with volumes estimated by the conventional water displacement method and from point clouds captured by a survey-grade laser scanner serving as references. The results suggest that the method is straightforward, feasible and reliable, with an average egg volume estimation accuracy of 93.3% when compared to the reference volumes. As a prototype, the software part of the system was implemented in a post-processing mode. However, as the proposed processing technique is computationally efficient, the prototype can readily be transformed into a real-time egg volume estimation system.
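The paper's geometric egg models are not specified in the abstract, so the sketch below uses a simple solid-of-revolution profile (an ellipsoid with an optional asymmetry term, both assumptions of this sketch) to illustrate how a volume follows from fitted shape parameters:

```python
import numpy as np

def egg_volume(length, breadth, asymmetry=0.0, n=10_001):
    """Volume of an egg by solid-of-revolution integration.

    Illustrative profile (NOT the paper's model):
        r(z) = (breadth/2) * sqrt(1 - (2z/length)^2) * (1 + asymmetry*z/length)
    asymmetry = 0 reduces to an ellipsoid with axes `length` and `breadth`.
    """
    z = np.linspace(-length / 2, length / 2, n)
    r = (breadth / 2) * np.sqrt(np.clip(1 - (2 * z / length) ** 2, 0.0, None)) \
        * (1 + asymmetry * z / length)
    r2 = r ** 2
    dz = z[1] - z[0]
    # trapezoidal rule for V = pi * integral of r(z)^2 dz
    return float(np.pi * dz * (r2.sum() - 0.5 * (r2[0] + r2[-1])))
```

With `asymmetry=0` the result matches the closed-form ellipsoid volume, which makes the sketch easy to sanity-check.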


Subject(s)
Algorithms , Birds , Cell Size , Computer Systems , Eggs , Software , Animals , Female , Poultry
3.
Med Image Anal ; 98: 103322, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39197301

ABSTRACT

In this study, we address critical barriers hindering the widespread adoption of surgical navigation in orthopedic surgeries, namely time constraints, cost implications, radiation concerns, and integration within the surgical workflow. Recently, our work X23D introduced an approach for generating 3D anatomical models of the spine from only a few intraoperative fluoroscopic images. This approach negates the need for conventional registration-based surgical navigation by creating a direct intraoperative 3D reconstruction of the anatomy. Despite these strides, the practical application of X23D has been limited by a significant domain gap between synthetic training data and real intraoperative images. In response, we devised a novel data collection protocol to assemble a paired dataset of synthetic and real fluoroscopic images captured from identical perspectives. Leveraging this dataset, we refined our deep learning model through transfer learning, effectively bridging the domain gap between synthetic and real X-ray data. We introduce an approach combining style transfer with the curated paired dataset. This method transforms real X-ray images into the synthetic domain, enabling the in-silico-trained X23D model to achieve high accuracy in real-world settings. Our results demonstrate that the refined model can rapidly generate accurate 3D reconstructions of the entire lumbar spine from as few as three intraoperative fluoroscopic shots. The enhanced model achieved an 84% F1 score, matching the benchmark previously set with synthetic data alone. Moreover, with a computational time of just 81.1 ms, our approach offers real-time capability, vital for successful integration into active surgical procedures. By investigating optimal imaging setups and view-angle dependencies, we further validated the practicality and reliability of our system in a clinical environment.
Our research represents a promising advancement in intraoperative 3D reconstruction. This innovation has the potential to enhance intraoperative surgical planning, navigation, and surgical robotics.


Subject(s)
Imaging, Three-Dimensional , Lumbar Vertebrae , Humans , Imaging, Three-Dimensional/methods , Fluoroscopy , Lumbar Vertebrae/diagnostic imaging , Lumbar Vertebrae/surgery , Surgery, Computer-Assisted/methods , Deep Learning
4.
Int J Comput Assist Radiol Surg ; 19(9): 1843-1853, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38573567

ABSTRACT

PURPOSE: Three-dimensional (3D) preoperative planning has become the gold standard for orthopedic surgeries, primarily relying on CT-reconstructed 3D models. However, in contrast to standing radiographs, a CT scan is not part of the standard protocol but is usually acquired for preoperative planning purposes only. Additionally, it is costly, exposes patients to high doses of radiation and is acquired in a non-weight-bearing position. METHODS: In this study, we develop a deep-learning-based pipeline to facilitate 3D preoperative planning for high tibial osteotomies, based on 3D models reconstructed from low-dose biplanar standing EOS radiographs. Using digitally reconstructed radiographs, we train networks to localize the clinically required landmarks, separate the two legs in the sagittal radiograph and finally reconstruct the 3D bone model. We then evaluate the accuracy of the reconstructed 3D models for the particular application of preoperative planning, with the aim of eliminating the need for a CT scan in specific cases such as high tibial osteotomies. RESULTS: The mean Dice coefficients for the tibial reconstructions were 0.92 and 0.89 for the right and left tibia, respectively. The reconstructed models were successfully used for clinical-grade preoperative planning in a real patient series of 52 cases. The mean differences from ground truth values for the mechanical axis and tibial slope were 0.52° and 4.33°, respectively. CONCLUSIONS: We contribute a novel framework for the 2D-3D reconstruction of bone models from biplanar standing EOS radiographs and successfully use it in automated clinical-grade preoperative planning of high tibial osteotomies. However, achieving precise reconstruction and automated measurement of the tibial slope remains a significant challenge.
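The Dice coefficient reported for the tibial reconstructions is a standard overlap measure between binary segmentation masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity between two binary masks:
    2 * |pred AND gt| / (|pred| + |gt|)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-12)
```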


Subject(s)
Deep Learning , Imaging, Three-Dimensional , Osteotomy , Preoperative Care , Tibia , Humans , Imaging, Three-Dimensional/methods , Osteotomy/methods , Tibia/surgery , Tibia/diagnostic imaging , Preoperative Care/methods , Female , Male , Adult , Middle Aged , Tomography, X-Ray Computed/methods
5.
Med Image Anal ; 99: 103345, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39293187

ABSTRACT

Spinal fusion surgery requires highly accurate implantation of pedicle screw implants, which must be conducted in critical proximity to vital structures with a limited view of the anatomy. Robotic surgery systems have been proposed to improve placement accuracy. Despite remarkable advances, current robotic systems still lack advanced mechanisms for continuously updating surgical plans during procedures, which hinders attaining higher levels of robotic autonomy. These systems adhere to conventional rigid registration concepts, relying on the alignment of the preoperative plan to the intraoperative anatomy. In this paper, we propose a safe deep reinforcement learning (DRL) planning approach (SafeRPlan) for robotic spine surgery that leverages intraoperative observations for continuous path planning of pedicle screw placement. The main contributions of our method are (1) the capability to ensure safe actions by introducing an uncertainty-aware distance-based safety filter; (2) the ability to compensate for incomplete intraoperative anatomical information by encoding a-priori knowledge of anatomical structures with neural networks pre-trained on preoperative images; and (3) the capability to generalize over unseen observation noise thanks to novel domain randomization techniques. Planning quality was assessed by quantitative comparison with baseline approaches and the gold standard (GS), and by qualitative evaluation by expert surgeons. In experiments with human model datasets, our approach achieved over 5% higher safety rates than baseline approaches, even under realistic observation noise. To the best of our knowledge, SafeRPlan is the first safety-aware DRL planning approach specifically designed for robotic spine surgery.
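The uncertainty-aware distance-based safety filter (contribution 1) can be illustrated with a minimal sketch: an action is vetoed when a conservative distance estimate falls below a safety margin. The function name, thresholds, and fallback behavior here are hypothetical, not taken from SafeRPlan:

```python
def safe_action(action, predicted_distance, uncertainty,
                margin=2.0, k=2.0, fallback=None):
    """Illustrative uncertainty-aware distance-based safety filter.

    The conservative distance estimate (mean minus k standard
    deviations of the distance to the nearest critical structure)
    must stay above the safety margin, otherwise the proposed
    action is vetoed and a safe fallback is returned instead.
    All parameter values are hypothetical.
    """
    if predicted_distance - k * uncertainty < margin:
        return fallback  # veto: fall back to a safe default action
    return action
```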

6.
Med Image Anal ; 91: 103027, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37992494

ABSTRACT

Established surgical navigation systems for pedicle screw placement have been proven to be accurate, but still reveal limitations in registration or surgical guidance. Registration of preoperative data to the intraoperative anatomy remains a time-consuming, error-prone task that includes exposure to harmful radiation. Surgical guidance through conventional displays has well-known drawbacks, as information cannot be presented in-situ and from the surgeon's perspective. Consequently, radiation-free and more automatic registration methods with subsequent surgeon-centric navigation feedback are desirable. In this work, we present a marker-less approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner. A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for the preoperative models, which is then refined for each vertebra individually and updated in real time with GPU acceleration while handling surgeon occlusions. Intuitive surgical guidance is provided through integration into an augmented reality based navigation system. The registration method was verified on a public dataset with a median registration success rate of 100%, a median target registration error of 2.7 mm, a median screw trajectory error of 1.6° and a median screw entry point error of 2.3 mm. Additionally, the whole pipeline was validated in an ex-vivo surgery, yielding 100% screw accuracy and a median target registration error of 1.0 mm. Our results meet clinical demands and emphasize the potential of RGB-D data for fully automatic registration approaches in combination with augmented reality guidance.
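The screw trajectory error reported above is an angle between the planned and achieved screw axes; a minimal, direction-agnostic sketch (names are illustrative):

```python
import numpy as np

def trajectory_angle_error(dir_planned, dir_actual):
    """Angle in degrees between two screw axis directions.
    The absolute value of the dot product makes the measure
    insensitive to the sign convention of the axis vectors."""
    a = np.asarray(dir_planned, dtype=float)
    b = np.asarray(dir_actual, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    cos_angle = np.clip(abs(a @ b), -1.0, 1.0)  # clip guards rounding
    return float(np.degrees(np.arccos(cos_angle)))
```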


Subject(s)
Pedicle Screws , Spinal Fusion , Surgery, Computer-Assisted , Humans , Spine/diagnostic imaging , Spine/surgery , Surgery, Computer-Assisted/methods , Lumbar Vertebrae/diagnostic imaging , Lumbar Vertebrae/surgery , Spinal Fusion/methods
7.
J Imaging ; 9(2)2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36826963

ABSTRACT

Translational research aims to turn discoveries from basic science into results that advance patient treatment. The translation of technical solutions into clinical use is a complex, iterative process that involves different stages of design, development, and validation, such as the identification of unmet clinical needs, technical conception, development, verification and validation, regulatory matters, and ethics. For this reason, many promising technical developments at the interface of technology, informatics, and medicine remain research prototypes without finding their way into clinical practice. Augmented reality is a technology that is now making its breakthrough into patient care, even though it has been available for decades. In this work, we explain the translational process for medical AR devices and present the associated challenges and opportunities. To the best of the authors' knowledge, this concept paper is the first to present a guideline for the translation of medical AR research into clinical practice.

8.
J Imaging ; 9(9)2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37754944

ABSTRACT

In clinical practice, image-based postoperative evaluation is still performed without state-of-the-art computer methods, as these are not sufficiently automated. In this study, we propose a fully automatic 3D postoperative outcome quantification method for the relevant steps of orthopaedic interventions, using periacetabular osteotomy of Ganz (PAO) as an example. A typical orthopaedic intervention involves bone cutting, anatomy manipulation and repositioning, as well as implant placement. Our method includes a segmentation-based deep learning approach for the detection and quantification of the cuts. Furthermore, anatomy repositioning was quantified through a multi-step registration method, which entailed a coarse alignment of the pre- and postoperative CT images followed by a fine fragment alignment of the repositioned anatomy. Implant (i.e., screw) position was identified by a 3D Hough transform for line detection combined with fast voxel traversal based on ray tracing. The feasibility of our approach was investigated on 27 interventions and compared against manually performed 3D outcome evaluations. The results show that our method can accurately assess the quality and accuracy of the surgery. Our evaluation of the fragment repositioning showed a cumulative error of 2.1 mm for the coarse and fine alignment. Our evaluation of screw placement accuracy resulted in a distance error of 1.32 mm for the screw head location and an angular deviation of 1.1° for the screw axis. As a next step, we will explore generalisation capabilities by applying the method to different interventions.
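The paper identifies screw position with a 3D Hough transform for line detection; as a simpler stand-in, the sketch below estimates a screw axis as the dominant principal component of the segmented metal voxel coordinates (a deliberate simplification, not the paper's method):

```python
import numpy as np

def screw_axis_from_voxels(voxel_coords):
    """Estimate a screw axis from segmented metal voxel coordinates.

    Stand-in for a 3D Hough transform: the axis is taken as the
    first principal component (largest singular vector) of the
    centered voxel cloud. Returns (point_on_axis, unit_direction).
    """
    pts = np.asarray(voxel_coords, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]
```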

9.
J Imaging ; 8(10)2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36286365

ABSTRACT

Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, based on planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters into the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving higher accuracy than the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images digitally reconstructed from the public CTSpine1K dataset. Evaluated on unseen data, we achieved an 88% average F1 score and a 71% surface score. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a state-of-the-art counterpart by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making based solely on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
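The surface score reported above is commonly an F-score at a distance threshold between predicted and ground-truth surface points; a brute-force sketch (the paper's exact threshold is not given, so `tau` is illustrative):

```python
import numpy as np

def surface_f_score(pred_pts, gt_pts, tau=1.0):
    """F1 at distance threshold tau between two surface point clouds.
    Precision: fraction of predicted points within tau of ground truth;
    recall: fraction of ground-truth points within tau of a prediction."""
    pred = np.asarray(pred_pts, dtype=float)
    gt = np.asarray(gt_pts, dtype=float)
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = (d.min(axis=1) < tau).mean()
    recall = (d.min(axis=0) < tau).mean()
    return 2 * precision * recall / (precision + recall + 1e-12)
```

A KD-tree would replace the all-pairs distance matrix for real point clouds; the brute-force form keeps the definition visible.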

10.
Front Surg ; 9: 952539, 2022.
Article in English | MEDLINE | ID: mdl-35990097

ABSTRACT

Accurate tissue differentiation during orthopedic and neurological surgeries is critical, given that such surgeries involve operations on or in the vicinity of vital neurovascular structures, where erroneous surgical maneuvers can lead to complications. The number of emerging technologies tackling the problem of intraoperative tissue classification is increasing. This systematic review therefore intends to give a general overview of existing technologies. The review was conducted according to the PRISMA principles using two databases, PubMed and IEEE Xplore. The screening process resulted in 60 full-text papers. The general characteristics of the methodology extracted from these papers included the data processing pipeline, machine learning methods where applicable, the types of tissues that can be identified, the phantom used to conduct the experiments, and the evaluation results. This paper can be useful for identifying the problems in the current state of the art of intraoperative tissue classification methods and for designing new, enhanced techniques.

11.
Front Surg ; 8: 771275, 2021.
Article in English | MEDLINE | ID: mdl-35155547

ABSTRACT

BACKGROUND: There is a trend toward minimally invasive and more automated procedures in orthopedic surgery. An important aspect in the further development of these techniques is the quantitative assessment of the surgical approach. The aim of this scoping review is to deliver a structured overview of the methods currently used for quantitative analysis of a surgical approach's invasiveness in orthopedic procedures. The compiled metrics presented in the present study can serve as the basis for the digitization of surgery and for advanced computational methods that focus on optimizing surgical procedures. METHODS: We performed a blinded literature search in November 2020. In-vivo and ex-vivo studies that quantitatively assess the invasiveness of the surgical approach were included, with a special focus on radiological methods. We excluded studies using exclusively one or more of the following parameters: risk of reoperation, risk of dislocation, risk of infection, risk of patient-reported nerve injury, rate of thromboembolic events, function, length of stay, blood loss, pain, and operation time. RESULTS: The final selection included 51 articles. In the included papers, approaches to 8 different anatomical structures were investigated, the majority of which examined procedures of the hip (57%) and the spine (29%). The different modalities used to measure invasiveness were categorized into three major groups, "biological" (23 papers), "radiological" (25) and "measured in-situ" (14), and their use "in-vivo" or "ex-vivo" was analyzed. Additionally, we explain the basic principles of each modality and match it to the anatomical structures it has been used on. DISCUSSION: An ideal metric for quantifying the invasiveness of a surgical approach should be accurate, cost-effective, non-invasive, comprehensive and integratable into the clinical workflow. We find that the radiological methods best meet such criteria.
However, radiological metrics can be more prone to confounders, such as coexisting pathologies, than in-situ measurements, though they are non-invasive and can be performed in-vivo. Additionally, radiological metrics require substantial expertise and are not cost-effective. Owing to their high accuracy and low invasiveness, radiological methods are, in our opinion, best suited for computational applications optimizing surgical procedures. The key to quantifying a surgical approach's invasiveness lies in the integration of multiple metrics.

12.
Int J Med Robot ; 17(2): e2228, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33462965

ABSTRACT

BACKGROUND: Two-dimensional (2D)-3D registration is challenging in the presence of implant projections on intraoperative images, which can limit the registration capture range. Here, we investigate the use of deep-learning-based inpainting for removing implant projections from the X-rays to improve the registration performance. METHODS: We trained deep-learning-based inpainting models that can fill in the implant projections on X-rays. Clinical datasets were collected to evaluate the inpainting based on six image similarity measures. The effect of X-ray inpainting on capture range of 2D-3D registration was also evaluated. RESULTS: The X-ray inpainting significantly improved the similarity between the inpainted images and the ground truth. When applying inpainting before the 2D-3D registration process, we demonstrated significant recovery of the capture range by up to 85%. CONCLUSION: Applying deep-learning-based inpainting on X-ray images masked by implants can markedly improve the capture range of the associated 2D-3D registration task.
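The abstract mentions six image similarity measures without naming them; normalized cross-correlation is one commonly used measure for comparing an inpainted X-ray against its implant-free ground truth, sketched here:

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross-correlation between two images,
    in [-1, 1]; invariant to linear brightness/contrast changes."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```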


Subject(s)
Deep Learning , Algorithms , Humans , Imaging, Three-Dimensional , Spine , Tomography, X-Ray Computed , X-Rays
13.
J Imaging ; 7(9)2021 Aug 27.
Article in English | MEDLINE | ID: mdl-34460800

ABSTRACT

Computer-aided orthopedic surgery suffers from low clinical adoption despite increased accuracy and patient safety. This can partly be attributed to cumbersome and often radiation-intensive registration methods. Emerging RGB-D sensors combined with data-driven artificial intelligence methods have the potential to streamline these procedures. However, developing such methods requires vast amounts of data. To this end, a multi-modal approach was developed that enables the acquisition of large clinical datasets tailored to pedicle screw placement, using RGB-D sensors and a co-calibrated high-end optical tracking system. The resulting dataset comprises RGB-D recordings of pedicle screw placement along with individually tracked ground-truth poses and shapes of spine levels L1-L5 from ten cadaveric specimens. Besides a detailed description of our setup, quantitative and qualitative outcome measures are provided. We found a mean target registration error of 1.5 mm. The median deviation between the measured and ground-truth bone surfaces was 2.4 mm. In addition, a surgeon rated the overall alignment of a 10% random sample as 5.8 on a scale from 1 to 6. Generation of labeled RGB-D data for orthopedic interventions with satisfactory accuracy is feasible, and its publication shall promote the future development of data-driven artificial intelligence methods for fast and reliable intraoperative registration.

14.
Insights Imaging ; 12(1): 44, 2021 Apr 07.
Article in English | MEDLINE | ID: mdl-33825985

ABSTRACT

OBJECTIVES: 3D preoperative planning of lower limb osteotomies has become increasingly important in light of modern surgical technologies. However, 3D models are usually reconstructed from computed tomography data acquired in a non-weight-bearing posture, thus neglecting the positional variations introduced by weight-bearing. We developed a registration and planning pipeline that allows for 3D preoperative planning and subsequent 3D assessment of anatomical deformities under weight-bearing conditions. METHODS: An intensity-based algorithm was used to register CT scans with long-leg standing radiographs and subsequently transform patient-specific 3D models into a weight-bearing state. 3D measurement methods for the mechanical axis as well as the joint line convergence angle were developed. The pipeline was validated using a leg phantom. Furthermore, we evaluated our methods clinically by applying them to the radiological data of 59 patients. RESULTS: The registration accuracy was evaluated in 3D and showed a maximum translational error of 1.1 mm (mediolateral direction) and a maximum rotational error of 1.2° (superior-inferior axis). Clinical evaluation proved feasibility on real patient data and revealed significant differences in 3D measurements when the effects of weight-bearing were considered. Mean differences were 2.1 ± 1.7° for the mechanical axis and 2.0 ± 1.6° for the joint line convergence angle. 37.3% and 40.7% of the patients had differences of 2° or more in the mechanical axis or joint line convergence angle, respectively, between weight-bearing and non-weight-bearing states. CONCLUSIONS: Our approach provides a clinically feasible way to preoperatively fuse 2D weight-bearing and 3D non-weight-bearing data in order to optimize the surgical correction.

15.
Front Surg ; 8: 776945, 2021.
Article in English | MEDLINE | ID: mdl-35145990

ABSTRACT

Modern operating rooms are becoming increasingly advanced thanks to emerging medical technologies and cutting-edge surgical techniques. Current surgeries are transitioning into complex processes that involve information and actions from multiple resources. When designing context-aware medical technologies for a given intervention, it is of utmost importance to have a deep understanding of the underlying surgical process. This is essential to develop technologies that can correctly address the clinical needs and can adapt to the existing workflow. Surgical Process Modeling (SPM) is a relatively recent discipline that focuses on achieving a profound understanding of the surgical workflow and providing a model that explains the elements of a given surgery, as well as their sequence and hierarchy, in both quantitative and qualitative terms. To date, a significant body of work has been dedicated to the development of comprehensive SPMs for minimally invasive laparoscopic and endoscopic surgeries, while such models are missing for open spinal surgeries. In this paper, we provide SPMs for common open spinal interventions in orthopedics. Direct video observations of surgeries conducted in our institution were used to derive temporal and transitional information about the surgical activities. This information was then used to develop detailed SPMs that modeled the different primary surgical steps and highlighted the frequency of transitions between the surgical activities within each step. Given the recent emergence of advanced techniques tailored to open spinal surgeries (e.g., artificial intelligence methods for intraoperative guidance and navigation), we believe that the SPMs provided in this study can serve as the basis for next-generation algorithms dedicated to open spinal interventions that require a profound understanding of the surgical workflow (e.g., automatic surgical activity recognition and surgical skill evaluation).
Furthermore, the models provided in this study can potentially benefit the clinical community through standardization of the surgery, which is essential for surgical training.
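The transition frequencies that such an SPM highlights can be tallied from an observed activity sequence; a minimal sketch, with hypothetical activity labels:

```python
from collections import Counter

def transition_frequencies(activities):
    """Relative frequency of transitions between consecutive surgical
    activities -- the kind of tally an SPM transition diagram encodes.
    `activities` is an ordered sequence of activity labels."""
    counts = Counter(zip(activities, activities[1:]))  # consecutive pairs
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}
```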

16.
Int J Comput Assist Radiol Surg ; 15(10): 1597-1609, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32696220

ABSTRACT

PURPOSE: C-arms are portable X-ray devices used to generate radiographic images in orthopedic surgical procedures. Evidence suggests that scouting images, which are used to aid in C-arm positioning, result in increased operation time and excess radiation exposure. C-arms are also used primarily for qualitative image viewing, with limited quantitative functionality. Various techniques have been proposed to improve positioning, reduce radiation exposure, and provide quantitative measuring tools, all of which require accurate C-arm position tracking. While external stereo camera systems can be used for this purpose, they are typically considered too obtrusive. This paper therefore presents the development and verification of a low-profile, real-time C-arm base-tracking system using computer vision techniques. METHODS: The proposed tracking system, called OPTIX (On-board Position Tracking for Intraoperative X-rays), uses a single downward-facing camera mounted to the base of a C-arm. Relative motion tracking and absolute position recovery algorithms were implemented to track motion using the visual texture of operating room floors. The accuracy of the system was evaluated in a simulated operating room with the camera mounted on a real C-arm. RESULTS: The relative tracking algorithm measured translational position changes with errors of less than 0.75% of the total distance travelled, and orientation changes with errors below 5% of the cumulative rotation. With an error-correction step incorporated, OPTIX achieved C-arm repositioning with translation errors of less than [Formula: see text] mm and rotation errors of less than [Formula: see text]. A display based on the OPTIX measurements enabled consistent C-arm repositioning within 5 mm of a previously stored reference position. CONCLUSION: The system achieved clinically relevant accuracies and could reduce the need for scout images when re-acquiring a previous position.
We believe that, if implemented in an operating room, OPTIX has the potential to reduce both operating time and harmful radiation exposure to patients and surgical staff.
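Relative motion tracking of the kind OPTIX performs accumulates per-frame motion estimates into a global pose (dead reckoning); a 2D sketch, with the frame-to-frame increments assumed to have already been estimated from the floor texture:

```python
import math

def integrate_motion(steps):
    """Dead-reckoning integration of per-frame relative motion.

    Each step is (dx, dy, dtheta): translation in the current camera
    frame plus an incremental rotation in radians. Returns the global
    pose (x, y, theta). Illustrative of relative tracking only; the
    absolute position recovery step is not modeled here.
    """
    x = y = theta = 0.0
    for dx, dy, dtheta in steps:
        # rotate the local increment into the global frame
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta += dtheta
    return x, y, theta
```

Because errors accumulate step by step, a system like this needs the error-correction/absolute-recovery step the abstract describes.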


Subject(s)
Imaging, Three-Dimensional/instrumentation , Orthopedic Procedures/instrumentation , Radiography/instrumentation , Algorithms , Humans , Imaging, Three-Dimensional/methods , Monitoring, Intraoperative/instrumentation , Rotation
17.
Int J Comput Assist Radiol Surg ; 14(10): 1725-1739, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31292926

ABSTRACT

PURPOSE: Although multiple algorithms have been reported that focus on improving the accuracy of 2D-3D registration techniques, relatively little attention has been paid to quantifying their capture range. In this paper, we analyze the capture range of a number of variant formulations of the 2D-3D registration problem in the context of pedicle screw insertion surgery. METHODS: We tested twelve 2D-3D registration techniques for capture range under different clinically realistic conditions. A registration was considered successful if its error was less than 2 mm and 2° in 95% of the cases. We assessed the sensitivity of the capture range to a variety of clinically realistic parameters, including X-ray contrast, number and configuration of X-rays, presence or absence of implants in the image, inter-subject variability, intervertebral motion, and single-level vs. multi-level registration. RESULTS: Gradient correlation with a Powell optimizer had the widest capture range and the least sensitivity to X-ray contrast. The combination of 4 AP + lateral X-rays had the widest capture range (725 mm²). The presence of implant projections significantly reduced the registration capture range (by up to 84%). Different spine shapes resulted in minor variations in registration capture range (SD 78 mm²). Intervertebral angulations of less than 1.5° had modest effects on the capture range. CONCLUSIONS: This paper assessed the capture range of a number of variants of intensity-based 2D-3D registration algorithms in clinically realistic situations for use in pedicle screw insertion surgery. We conclude that a registration approach based on the gradient correlation similarity measure and Powell's optimization algorithm, using a minimum of two C-arm images, is likely sufficiently robust for the proposed application.
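The gradient correlation similarity measure named in the conclusions averages the normalized cross-correlations of the two image gradient components; a minimal sketch:

```python
import numpy as np

def gradient_correlation(fixed, moving):
    """Gradient correlation between two images: the mean of the
    zero-mean normalized cross-correlations of the gradients along
    each image axis."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        return float((a * b).sum() / denom)

    g0_f, g1_f = np.gradient(np.asarray(fixed, dtype=float))
    g0_m, g1_m = np.gradient(np.asarray(moving, dtype=float))
    return 0.5 * (ncc(g0_f, g0_m) + ncc(g1_f, g1_m))
```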


Subject(s)
Pedicle Screws , Spine/surgery , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Humans , Imaging, Three-Dimensional/methods
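The winning combination in the abstract above, a gradient-correlation similarity optimized with Powell's method, can be illustrated on a toy 2D alignment problem. This is a minimal sketch, not the authors' implementation: the Gaussian-blob images, the pure-translation search space, and the `gradient_correlation` helper are all illustrative stand-ins.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, sobel
from scipy.optimize import minimize

def gradient_correlation(a, b):
    """Mean normalized cross-correlation of the Sobel gradients of two images."""
    def ncc(x, y):
        x = x - x.mean()
        y = y - y.mean()
        return float((x * y).sum() / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
    return 0.5 * (ncc(sobel(a, axis=0), sobel(b, axis=0))
                  + ncc(sobel(a, axis=1), sobel(b, axis=1)))

# Toy "DRR" and "X-ray": a Gaussian blob displaced by an unknown in-plane offset.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
moving = nd_shift(fixed, (3.0, -2.0))        # ground-truth offset: (+3, -2) pixels

def cost(t):
    # Negate the similarity so maximizing gradient correlation = minimizing cost.
    return -gradient_correlation(fixed, nd_shift(moving, (-t[0], -t[1])))

res = minimize(cost, x0=np.zeros(2), method="Powell")
print(res.x)  # recovered offset, close to (3, -2)
```

A real 2D-3D registration would optimize a 6-DoF rigid pose and regenerate the DRR at every iteration; the derivative-free Powell search and the gradient-based similarity are the two ingredients the abstract singles out.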
18.
Int J Comput Assist Radiol Surg ; 13(8): 1269-1282, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29808466

ABSTRACT

PURPOSE: Pedicle screw fixation is a challenging procedure with concerning rates of reoperation. After screw insertion is completed, the most common intraoperative verification approach is to acquire anterior-posterior and lateral radiographic images, from which the surgeons try to visually assess the correctness of insertion. Given the limited accuracy of existing verification techniques, we identified the need for an accurate and automated pedicle screw assessment system that can verify screw insertion intraoperatively. To this end, this paper offers a framework for automatic segmentation and pose estimation of pedicle screws based on deep learning principles. METHODS: Segmentation of pedicle screw X-ray projections was performed by a convolutional neural network. The network segmented the input X-rays into three classes: screw head, screw shaft, and background. Once all the screw shafts were segmented, knowledge about the spatial configuration of the acquired biplanar X-rays was used to identify the correspondence between the projections. Pose estimation was then performed to estimate the six-degree-of-freedom pose of each screw. The performance of the proposed pose estimation method was tested on a porcine specimen. RESULTS: The developed machine learning framework was capable of segmenting the screw shafts with 93% and 83% accuracy when tested on synthetic X-rays and on clinically realistic X-rays, respectively. The pose estimation accuracy of this method was shown to be [Formula: see text] and [Formula: see text] on clinically realistic X-rays. CONCLUSIONS: The proposed system offers an accurate and fully automatic pedicle screw segmentation and pose assessment framework. Such a system can help provide an intraoperative pedicle screw insertion assessment protocol with minimal interference with existing surgical routines.


Subject(s)
Pedicle Screws , Fluoroscopy/methods , Humans , Machine Learning , Reoperation , Spinal Fusion/methods , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed/methods
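The geometric step that follows the CNN segmentation in the abstract above, recovering each screw shaft's 2D axis from its binary mask before matching projections across views, can be sketched with a PCA line fit. The synthetic diagonal mask and the `shaft_axis_2d` helper are illustrative assumptions, not the paper's code.

```python
import numpy as np

def shaft_axis_2d(mask):
    """Fit a line (centroid + unit direction) to a binary screw-shaft mask via PCA."""
    pts = np.argwhere(mask).astype(float)     # (row, col) coordinates of shaft pixels
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    return centroid, Vt[0]                    # first principal axis = shaft direction

# Synthetic mask: a 3-pixel-wide diagonal "shaft" in a 64x64 image.
mask = np.zeros((64, 64), dtype=bool)
for i in range(10, 50):
    mask[i, i - 1:i + 2] = True

centroid, direction = shaft_axis_2d(mask)
print(centroid)    # [29.5 29.5]
print(direction)   # roughly (1, 1) / sqrt(2), up to sign
```

One such axis per shaft per view gives the 2D line pairs from which a 6-DoF screw pose can then be estimated.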
19.
Int J Comput Assist Radiol Surg ; 13(8): 1257-1267, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29633081

ABSTRACT

PURPOSE: Pedicle screw malplacement, leading to neurological symptoms, vascular injury, and premature implant loosening, is not uncommon and is difficult to reliably detect intraoperatively with current techniques. We propose a new intraoperative post-placement pedicle screw position assessment system that allows surgeons to correct breaches during the procedure. Our objectives were to assess the accuracy and robustness of the proposed screw localization system and to compare its performance to that of 2D planar radiography. METHODS: The proposed system uses two intraoperative X-ray shots acquired with a standard fluoroscopic C-arm and processed using 2D/3D registration methods to provide a 3D visualization of the vertebra and screw superimposed on one another. Point digitization and CT imaging of the residual screw tunnel were used to assess accuracy in five synthetic lumbar vertebral models (10 screws in total). Additionally, the accuracy was evaluated with and without correcting for image distortion and for various screw lengths, screw materials, breach directions, and vertebral levels. RESULTS: The proposed method is capable of localizing the implanted screws with less than 2 mm of translational error (RMSE: 0.7 and 0.8 mm for the screw head and tip, respectively) and less than [Formula: see text] angular error (RMSE: [Formula: see text]), with minimal change to the errors if image distortion is not corrected. Breaches and their anatomical locations were all correctly visualized and identified for a variety of screw lengths, screw materials, breach locations, and vertebral levels, demonstrating the robustness of this approach. In contrast, one breach, one non-breach, and the anatomical locations of three screws were misclassified with 2D X-ray. CONCLUSION: We have demonstrated an accurate and low-radiation technique for localizing pedicle screws post-implantation that requires only two X-rays.
This intraoperative feedback on screw location and direction may allow the surgeon to correct misplaced screws intraoperatively, thereby reducing postoperative complications and reoperation rates.


Subject(s)
Fluoroscopy/methods , Lumbar Vertebrae/diagnostic imaging , Pedicle Screws , Spinal Fusion/methods , Humans , Lumbar Vertebrae/surgery , Male , Reoperation , Surgery, Computer-Assisted/methods
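The core geometric idea behind localizing a screw from just two C-arm shots, as in the abstract above, can be sketched with linear (DLT) triangulation of a single 3D point from two known projections. The toy camera matrices and noise-free projections below are assumptions for illustration, not the authors' 2D/3D registration pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy shots roughly 90 degrees apart (stand-ins for AP and lateral views).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])
P2 = np.hstack([R, np.array([[0.0], [0.0], [4.0]])])

X_true = np.array([0.3, -0.2, 5.0])   # e.g. a screw-tip location
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_hat)                          # recovers the ground-truth point
```

Doing this for the screw head and tip would yield the translational errors the abstract reports; the full system additionally registers the vertebra itself via intensity-based 2D/3D methods.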
20.
Proc Inst Mech Eng H ; 231(12): 1140-1151, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29039259

ABSTRACT

This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed. A cumulative dead-reckoning estimate of the base pose is extracted from frame-to-frame homography estimation, with optical-flow results feeding the odometry; online position and orientation parameters are then reported. The method achieved positional accuracy better than 2% of the total traveled distance in most cases (better than 4% in all cases studied) and angular accuracy better than 2% of the absolute cumulative change in orientation. This study provides a robust and accurate tracking framework that can not only be integrated with the current C-arm joint-tracking system (i.e., TC-arm) but is also capable of being employed for similar applications in other fields (e.g., robotics).


Subject(s)
Motion , Surgical Instruments , Rotation , X-Rays
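The dead-reckoning idea in the abstract above, chaining per-frame planar motion estimates into a cumulative base pose, can be sketched with a least-squares 2D rigid fit on simulated floor features. The noise-free synthetic motion and the `rigid_2d` helper are illustrative assumptions; the paper estimates full homographies from a real downward-looking camera.

```python
import numpy as np

def rigid_2d(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflection solutions
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

# Simulate a base that rotates 2 degrees and translates (0.10, 0.02) per frame.
theta = np.deg2rad(2.0)
R_step = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_step = np.array([0.10, 0.02])

rng = np.random.default_rng(1)
prev = rng.uniform(-1.0, 1.0, (30, 2))        # tracked floor features
R_tot, t_tot = np.eye(2), np.zeros(2)
for _ in range(10):
    cur = prev @ R_step.T + t_step            # frame-to-frame feature motion
    R, t = rigid_2d(prev, cur)                # per-frame motion estimate
    R_tot, t_tot = R @ R_tot, R @ t_tot + t   # dead-reckoned composition
    prev = cur

heading = np.degrees(np.arctan2(R_tot[1, 0], R_tot[0, 0]))
print(round(heading, 2))  # cumulative heading: ten 2-degree steps -> 20.0
```

With real imagery each per-frame estimate carries noise, and the composition accumulates it, which is why the paper reports drift as a percentage of distance traveled and of cumulative orientation change.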