ABSTRACT
BACKGROUND AND OBJECTIVE: As part of spinal fusion surgery, shaping the rod implant to align with the anatomy is a tedious, error-prone, and time-consuming manual process. Inadequately contoured rod implants introduce stress on the screw-bone interface of the pedicle screws, potentially leading to screw loosening or even pull-out. METHODS: We propose the first fully automated solution to the rod bending problem by leveraging the advantages of augmented reality and robotics. Augmented reality not only enables the surgeons to intraoperatively digitize the screw positions but also provides a human-computer interface to the wirelessly integrated custom-built rod bending machine. Furthermore, we introduce custom-built test rigs to quantify per-screw absolute tensile/compressive residual forces on the screw-bone interface. Besides residual forces, we evaluated the required bending times and reducer engagements, and compared our method to the freehand gold standard. RESULTS: Using the bending machine, we achieved a significant reduction of the average absolute residual forces compared to the freehand gold standard (p=0.0015). Moreover, our bending machine reduced the average time to instrumentation per screw. Reducer engagements per rod were significantly decreased from an average of 1.00±1.14 to 0.11±0.32 (p=0.0037). CONCLUSION: The combination of augmented reality and robotics has the potential to improve surgical outcomes while minimizing the dependency on individual surgeon skill and dexterity.
Subject(s)
Pedicle Screws, Spinal Fusion, Humans, Materials Testing, Lumbar Vertebrae/surgery, Biomechanical Phenomena
ABSTRACT
Established surgical navigation systems for pedicle screw placement have been proven to be accurate, but still reveal limitations in registration or surgical guidance. Registration of preoperative data to the intraoperative anatomy remains a time-consuming, error-prone task that includes exposure to harmful radiation. Surgical guidance through conventional displays has well-known drawbacks, as information cannot be presented in-situ and from the surgeon's perspective. Consequently, radiation-free and more automatic registration methods with subsequent surgeon-centric navigation feedback are desirable. In this work, we present a marker-less approach that automatically solves the registration problem for lumbar spinal fusion surgery in a radiation-free manner. A deep neural network was trained to segment the lumbar spine and simultaneously predict its orientation, yielding an initial pose for preoperative models, which is then refined for each vertebra individually and updated in real-time with GPU acceleration while handling surgeon occlusions. An intuitive surgical guidance is provided thanks to the integration into an augmented reality based navigation system. The registration method was verified on a public dataset with a median of 100% successful registrations, a median target registration error of 2.7 mm, a median screw trajectory error of 1.6°, and a median screw entry point error of 2.3 mm. Additionally, the whole pipeline was validated in an ex-vivo surgery, yielding a 100% screw accuracy and a median target registration error of 1.0 mm. Our results meet clinical demands and emphasize the potential of RGB-D data for fully automatic registration approaches in combination with augmented reality guidance.
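The per-vertebra refinement step can be illustrated with a minimal ICP-style loop: starting from the network's initial pose, each vertebra model is aligned to the captured surface by alternating closest-point matching and a rigid Kabsch fit. This is a simplified sketch under assumed point-set inputs, not the paper's GPU-accelerated, occlusion-aware implementation; the brute-force matching is for clarity only.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rigid transform (R, t) mapping src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def icp_refine(model, scan, iters=10):
    """Refine a vertebra model pose against a captured surface point set."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    pts = np.asarray(model, float).copy()
    for _ in range(iters):
        # brute-force closest-point correspondences (for clarity, not speed)
        idx = np.argmin(((pts[:, None] - scan[None]) ** 2).sum(-1), axis=1)
        R, t = kabsch(pts, scan[idx])
        pts = pts @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```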
Subject(s)
Pedicle Screws, Spinal Fusion, Computer-Assisted Surgery, Humans, Spine/diagnostic imaging, Spine/surgery, Computer-Assisted Surgery/methods, Lumbar Vertebrae/diagnostic imaging, Lumbar Vertebrae/surgery, Spinal Fusion/methods
ABSTRACT
The instrumentation of spinal fusion surgeries includes pedicle screw placement and rod implantation. While several surgical navigation approaches have been proposed for pedicle screw placement, less attention has been devoted to the guidance of patient-specific adaptation of the rod implant. We propose a marker-free and intuitive Augmented Reality (AR) approach to navigate the bending process required for rod implantation. A stereo neural network is trained from the stereo video streams of the Microsoft HoloLens in an end-to-end fashion to determine the location of corresponding pedicle screw heads. From the digitized screw head positions, the optimal rod shape is calculated, translated into a set of bending parameters, and used for guiding the surgeon with a novel navigation approach. In the AR-based navigation, the surgeon is guided step-by-step in the use of the surgical tools to achieve an optimal result. We have evaluated the performance of our method on human cadavers against two benchmark methods, namely conventional freehand bending and marker-based bending navigation, in terms of bending time and rebending maneuvers. We achieved an average bending time of 231 s with 0.6 rebending maneuvers per rod compared to 476 s (3.5 rebendings) and 348 s (1.1 rebendings) obtained by our freehand and marker-based benchmarks, respectively.
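The step from digitized screw-head positions to an optimal rod shape can be sketched as fitting a smooth space curve through the ordered screw centers. The chord-length parameterization and per-axis polynomial fit below are illustrative assumptions; the abstract does not specify the actual curve model or bending-parameter computation.

```python
import numpy as np

def rod_template(screw_heads, degree=3, samples=100):
    """Fit a smooth 3D centerline through ordered screw-head centers."""
    p = np.asarray(screw_heads, float)
    # chord-length parameterization over the screw sequence
    t = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(p, axis=0), axis=1))]
    t = t / t[-1]
    deg = min(degree, len(p) - 1)
    # independent per-axis polynomial fit against the arc parameter
    coeffs = [np.polyfit(t, p[:, k], deg) for k in range(3)]
    u = np.linspace(0.0, 1.0, samples)
    return np.stack([np.polyval(c, u) for c in coeffs], axis=1)
```

The sampled curve could then be discretized into successive bend angles for a bending tool; that translation step is omitted here.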
Subject(s)
Augmented Reality, Spinal Diseases, Spinal Fusion, Computer-Assisted Surgery, Biomarkers, Humans, Lumbar Vertebrae, Neural Networks (Computer), Spinal Fusion/methods, Computer-Assisted Surgery/methods
ABSTRACT
BACKGROUND: Existing surgical navigation approaches of the rod bending procedure in spinal fusion rely on optical tracking systems that determine the location of placed pedicle screws using a hand-held marker. METHODS: We propose a novel, marker-less surgical navigation proof-of-concept for bending rod implants. Our method combines augmented reality with on-device machine learning to generate and display a virtual template of the optimal rod shape without touching the instrumented anatomy. Performance was evaluated on lumbosacral spine phantoms against a pointer-based navigation benchmark approach and ground truth data obtained from computed tomography. RESULTS: Our method achieved a mean error of 1.83 ± 1.10 mm compared to 1.87 ± 1.31 mm measured in the marker-based approach, while only requiring 21.33 ± 8.80 s as opposed to 36.65 ± 7.49 s attained by the pointer-based method. CONCLUSION: Our results suggest that the combination of augmented reality and machine learning has the potential to replace conventional pointer-based navigation in the future.
Subject(s)
Augmented Reality, Pedicle Screws, Computer-Assisted Surgery, Humans, Machine Learning, Spine/diagnostic imaging, Spine/surgery
ABSTRACT
Modern operating rooms are becoming increasingly advanced thanks to emerging medical technologies and cutting-edge surgical techniques. Current surgeries are transitioning into complex processes that involve information and actions from multiple resources. When designing context-aware medical technologies for a given intervention, it is of utmost importance to have a deep understanding of the underlying surgical process. This is essential to develop technologies that can correctly address the clinical needs and can adapt to the existing workflow. Surgical Process Modeling (SPM) is a relatively recent discipline that focuses on achieving a profound understanding of the surgical workflow and providing a model that explains the elements of a given surgery as well as their sequence and hierarchy, in both a quantitative and a qualitative manner. To date, a significant body of work has been dedicated to the development of comprehensive SPMs for minimally invasive laparoscopic and endoscopic surgeries, while such models are missing for open spinal surgeries. In this paper, we provide SPMs for common open spinal interventions in orthopedics. Direct video observations of surgeries conducted in our institution were used to derive temporal and transitional information about the surgical activities. This information was later used to develop detailed SPMs that modeled different primary surgical steps and highlighted the frequency of transitions between the surgical activities made within each step. Given the recent emergence of advanced techniques that are tailored to open spinal surgeries (e.g., artificial intelligence methods for intraoperative guidance and navigation), we believe that the SPMs provided in this study can serve as the basis for further advancement of next-generation algorithms dedicated to open spinal interventions that require a profound understanding of the surgical workflow (e.g., automatic surgical activity recognition and surgical skill evaluation).
Furthermore, the models provided in this study can potentially benefit the clinical community through standardization of the surgery, which is essential for surgical training.
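The transition statistics that underpin such SPMs can be estimated directly from ordered activity annotations. A minimal sketch, assuming each surgery is recorded as a list of activity labels (the function and label names are hypothetical):

```python
from collections import Counter

def transition_frequencies(activity_sequences):
    """Estimate P(next activity | current activity) from observed surgeries.

    activity_sequences: one ordered list of activity labels per surgery.
    """
    counts, totals = Counter(), Counter()
    for seq in activity_sequences:
        for a, b in zip(seq, seq[1:]):  # consecutive activity pairs
            counts[(a, b)] += 1
            totals[a] += 1
    return {(a, b): c / totals[a] for (a, b), c in counts.items()}
```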
ABSTRACT
Three-dimensional (3D) computer-assisted corrective osteotomy has become the state-of-the-art for surgical treatment of complex bone deformities. Despite available technologies, the automatic generation of clinically acceptable, ready-to-use preoperative planning solutions is currently not possible for such pathologies. Multiple contradicting and mutually dependent objectives have to be considered, as well as clinical and technical constraints, which generally require iterative manual adjustments. This leads to unnecessary surgeon effort and high clinical costs, and also compromises the quality of patient treatment due to the reduced number of solutions that can be investigated in a clinically acceptable timeframe. In this paper, we propose an optimization framework for the generation of ready-to-use preoperative planning solutions in a fully automatic fashion. An automatic diagnostic assessment using patient-specific 3D models is performed for 3D malunion quantification and definition of the optimization parameters' range. Afterward, clinical objectives are translated into the optimization module and controlled through tailored fitness functions based on a weighted, multi-staged optimization approach. The optimization is based on a genetic algorithm capable of solving multi-objective optimization problems with non-linear constraints. The framework outputs a complete preoperative planning solution including position and orientation of the osteotomy plane, transformation to achieve the bone reduction, and position and orientation of the fixation plate and screws. A qualitative validation was performed on 36 consecutive cases of radius osteotomy where solutions generated by the optimization algorithm (OA) were compared against the gold standard solutions generated by experienced surgeons (Gold Standard; GS). Solutions were blinded and presented to 6 readers (4 surgeons, 2 planning engineers), who judged OA solutions to be better 55% of the time.
The quantitative evaluation was based on different error measurements, showing average improvements with respect to the GS ranging from 20% for the reduction alignment up to 106% for the position of the fixation screws. Notably, our algorithm was able to generate feasible clinical solutions that were not possible to obtain with the current state-of-the-art method.
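The weighted, multi-objective genetic optimization can be sketched in a few lines: candidate planning parameter vectors evolve under tournament selection, uniform crossover, and Gaussian mutation against a scalarized fitness. The operators and the toy objectives in the test are illustrative assumptions, not the authors' staged constraint-handling scheme.

```python
import numpy as np

def genetic_optimize(objectives, weights, bounds, pop=40, gens=60, seed=0):
    """Minimize a weighted sum of objectives with a simple genetic algorithm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    fitness = lambda x: sum(w * f(x) for f, w in zip(objectives, weights))
    P = rng.uniform(lo, hi, size=(pop, len(lo)))  # random initial population
    for _ in range(gens):
        fit = np.array([fitness(x) for x in P])
        # tournament selection: winner of each random pair survives
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((fit[i] < fit[j])[:, None], P[i], P[j])
        # uniform crossover with a shuffled partner, then Gaussian mutation
        mask = rng.random(P.shape) < 0.5
        children = np.where(mask, parents, parents[rng.permutation(pop)])
        children += rng.normal(0.0, 0.02, P.shape) * (hi - lo)
        P = np.clip(children, lo, hi)  # keep parameters inside their range
    fit = np.array([fitness(x) for x in P])
    return P[np.argmin(fit)]
```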
Subject(s)
Algorithms, Forearm/diagnostic imaging, Forearm/surgery, Three-Dimensional Imaging, Osteotomy/methods, Computer-Assisted Surgery/methods, X-Ray Computed Tomography, Anatomic Landmarks, Forearm/anatomy & histology, Humans, Patient-Specific Modeling
ABSTRACT
BACKGROUND: Inaccurate meniscus allograft size is still an important problem of the currently used sizing methods. The purpose of this study was to evaluate a new three-dimensional (3D) meniscus-sizing method to increase the accuracy of the selected allografts. METHODS: 3D triangular surface models were generated from 280 menisci based on 50 bilateral and 40 unilateral knee joint magnetic resonance imaging (MRI) scans. These models served as an imaginary meniscus allograft tissue bank. Meniscus sizing and allograft selection were simulated for all 50 bilateral knee joints by (1) the closest mean surface distance (MeSD) (3D-MRI sizing with contralateral meniscus), (2) the smallest meniscal width/length difference in MRI (2D-MRI sizing with contralateral meniscus), and (3) conventional radiography as proposed by Pollard (2D-radiograph (RX) sizing with ipsilateral tibia plateau). 3D shape and meniscal width, length, and height were compared between the original meniscus and the meniscus selected by each of the three sizing methods. RESULTS: Allograft selection by MeSD (3D MRI) was superior for all measurement parameters. In particular, the 3D shape was significantly improved (p < 0.001), while the mean differences in meniscal width, length, and height were only slightly better than for the allografts selected by the other methods. Outliers were reduced by up to 55% (vs. 2D MRI) and 83% (vs. 2D RX) for the medial meniscus and 39% (vs. 2D MRI) and 56% (vs. 2D RX) for the lateral meniscus. CONCLUSION: 3D-MRI sizing by MeSD using the contralateral meniscus as a reconstruction template can significantly improve meniscus allograft selection. Sizing using conventional radiography should probably not be recommended. TRIAL REGISTRATION: The Kantonale Ethikkommission Zürich approved the study (BASEC-No. 2018-00856).
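The MeSD criterion amounts to a symmetric mean closest-point distance between surface models, minimized over the allograft bank. A brute-force sketch on point sets, assuming mesh sampling and mirroring of the contralateral meniscus have already happened upstream (the function names are illustrative):

```python
import numpy as np

def mean_surface_distance(a, b):
    """Symmetric mean closest-point distance between two point sets (N,3)/(M,3)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return 0.5 * (np.sqrt(d2.min(1)).mean() + np.sqrt(d2.min(0)).mean())

def select_allograft(target, bank):
    """Return (index, score) of the bank model with the smallest MeSD to the target."""
    scores = [mean_surface_distance(target, m) for m in bank]
    best = int(np.argmin(scores))
    return best, scores[best]
```

For realistic mesh sizes, a k-d tree would replace the quadratic distance matrix; the brute-force version keeps the criterion explicit.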
Subject(s)
Three-Dimensional Imaging/methods, Magnetic Resonance Imaging/methods, Tibial Menisci/diagnostic imaging, Tibial Menisci/transplantation, Homologous Transplantation/methods, Adolescent, Adult, Female, Humans, Male, Middle Aged, Young Adult
ABSTRACT
PURPOSE: In spinal fusion surgery, imprecise placement of pedicle screws can result in poor surgical outcome or may seriously harm a patient. Patient-specific instruments and optical systems have been proposed for improving precision through surgical navigation compared to freehand insertion. However, existing solutions are expensive and cannot provide in situ visualizations. Recent technological advancement enabled the production of more powerful and precise optical see-through head-mounted displays for the mass market. The purpose of this laboratory study was to evaluate whether such a device is sufficiently precise for the navigation of lumbar pedicle screw placement. METHODS: A novel navigation method, tailored to run on the Microsoft HoloLens, was developed. It comprises capturing the intraoperatively reachable surface of vertebrae to achieve registration and tool tracking with real-time visualizations, without the need for intraoperative imaging. For both surface sampling and navigation, 3D printable parts, equipped with fiducial markers, were employed. Accuracy was evaluated within a self-built setup based on two phantoms of the lumbar spine. Computed tomography (CT) scans of the phantoms were acquired to carry out preoperative planning of screw trajectories in 3D. A surgeon placed the guiding wire for the pedicle screw bilaterally on ten vertebrae guided by the navigation method. Postoperative CT scans were acquired to compare trajectory orientation (3D angle) and screw insertion points (3D distance) with respect to the planning. RESULTS: The mean errors between planned and executed screw insertion were [Formula: see text] for the screw trajectory orientation and 2.77±1.46 mm for the insertion points. The mean time required for surface digitization was 125±27 s. CONCLUSIONS: First promising results under laboratory conditions indicate that precise lumbar pedicle screw insertion can be achieved by combining HoloLens with our proposed navigation method.
As a next step, cadaver experiments need to be performed to confirm the precision on real patient anatomy.
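The two accuracy metrics reported above, trajectory orientation as a 3D angle and insertion point deviation as a 3D distance, can be computed as follows (function names are illustrative):

```python
import numpy as np

def trajectory_angle_deg(v_planned, v_executed):
    """3D angle between planned and executed screw axes, in degrees."""
    v1 = np.asarray(v_planned, float)
    v2 = np.asarray(v_executed, float)
    c = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def entry_point_error(p_planned, p_executed):
    """3D distance between planned and executed insertion points."""
    return float(np.linalg.norm(np.asarray(p_planned, float) - np.asarray(p_executed, float)))
```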
Subject(s)
Lumbar Vertebrae/surgery, Pedicle Screws, Spinal Fusion/methods, Computer-Assisted Surgery/methods, Humans, Imaging Phantoms, X-Ray Computed Tomography/methods
ABSTRACT
Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the shortcomings of current design methods in the face of new search techniques.
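The BO loop at the core of such a co-optimization can be sketched with a Gaussian-process surrogate and an upper-confidence-bound (UCB) acquisition over a discrete set of candidate morphology/control parameter vectors. The kernel, length scale, and acquisition choice below are assumptions for illustration; the paper's exact setup may differ, and the real `evaluate` would be a physical locomotion trial rather than a test function.

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def bayes_opt(evaluate, candidates, iters=25, noise=1e-4, kappa=2.0, seed=0):
    """Maximize a black-box reward over candidate parameter vectors (GP + UCB)."""
    rng = np.random.default_rng(seed)
    X = candidates[rng.integers(len(candidates), size=2)]  # random warm start
    y = np.array([evaluate(x) for x in X])
    for _ in range(iters - len(X)):
        K = rbf(X, X) + noise * np.eye(len(X))  # jitter keeps K invertible
        Kinv = np.linalg.inv(K)
        ks = rbf(candidates, X)
        mu = ks @ Kinv @ y                      # GP posterior mean
        var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)
        ucb = mu + kappa * np.sqrt(np.maximum(var, 0.0))
        x_next = candidates[int(np.argmax(ucb))]
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate(x_next))
    best = int(np.argmax(y))
    return X[best], float(y[best])
```

With the paper's budget of 25 iterations, the surrogate trades off exploring untested designs (high variance) against exploiting promising ones (high mean), which is what makes MC co-optimization viable on real hardware.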