1.
Adv Sci (Weinh); 11(7): e2305495, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38072667

ABSTRACT

Magnetic resonance imaging (MRI) demonstrates clear advantages over other imaging modalities in neurosurgery owing to its ability to delineate critical neurovascular structures and cancerous tissue in high-resolution 3D anatomical roadmaps. However, its application has been limited to interventions based on static pre-/post-operative imaging, where errors accrue from stereotactic frame setup, image registration, and brain shift. To leverage the intra-operative capabilities of MRI, e.g., instrument tracking and monitoring of physiological changes and tissue temperature, in MRI-guided bilateral stereotactic neurosurgery, a multi-stage robotic positioner is proposed. The system positions cannula/needle instruments using a lightweight (203 g) and compact (Ø97 × 81 mm) skull-mounted structure that fits within most standard imaging head coils. With an optimized soft-robotic design, the system operates in two stages: i) manual coarse adjustment performed interactively by the surgeon (workspace of ±30°); ii) automatic fine adjustment with precise (<0.2° orientation error), responsive (1.4 Hz bandwidth), and high-resolution (0.058°) soft robotic positioning. Orientation locking provides sufficient transmission stiffness (4.07 N/mm) for instrument advancement. The system's clinical workflow and accuracy are validated with lab-based (<0.8 mm) and MRI-based testing on skull phantoms (<1.7 mm) and a cadaver subject (<2.2 mm). Custom-made wireless omni-directional tracking markers facilitate robot registration under MRI.


Subject(s)
Neurosurgery; Robotics; Neurosurgical Procedures/methods; Brain; Magnetic Resonance Imaging/methods
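
For illustration only, the Python sketch below mirrors the two-stage targeting workflow described in the abstract above: a manual coarse stage bounded by the ±30° workspace, followed by automatic fine adjustment at the reported 0.058° resolution until the orientation error falls below 0.2°, after which the orientation is locked. The class and function names are hypothetical, not taken from the paper's software.

```python
# Hedged sketch of the two-stage positioning logic (coarse manual setup,
# then fine soft-robotic adjustment). Names and structure are illustrative.
from dataclasses import dataclass

COARSE_WORKSPACE_DEG = 30.0   # manual coarse adjustment range (+/-30 deg)
FINE_RESOLUTION_DEG = 0.058   # reported soft-robotic positioning resolution
TARGET_ERROR_DEG = 0.2        # reported orientation error bound


@dataclass
class Orientation:
    """Cannula orientation about the skull-mounted base, in degrees."""
    pitch: float
    yaw: float


def coarse_stage_ok(current: Orientation, target: Orientation) -> bool:
    """Check that the surgeon's manual coarse setup left the target reachable."""
    return (abs(target.pitch - current.pitch) <= COARSE_WORKSPACE_DEG
            and abs(target.yaw - current.yaw) <= COARSE_WORKSPACE_DEG)


def _step(delta: float, step: float) -> float:
    """Limit each axis to one resolution increment per iteration."""
    return max(-step, min(step, delta))


def fine_stage(current: Orientation, target: Orientation) -> Orientation:
    """Step the soft actuators toward the target until within the error bound."""
    while max(abs(target.pitch - current.pitch),
              abs(target.yaw - current.yaw)) > TARGET_ERROR_DEG:
        current.pitch += _step(target.pitch - current.pitch, FINE_RESOLUTION_DEG)
        current.yaw += _step(target.yaw - current.yaw, FINE_RESOLUTION_DEG)
    return current  # orientation is then locked for instrument advancement
```

In this sketch the fine stage converges because each iteration reduces the per-axis error by up to one resolution increment, well below the 0.2° acceptance bound; the real system's closed-loop control and locking mechanics are not described in the abstract and are not modeled here.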
2.
Int J Comput Assist Radiol Surg; 16(5): 731-739, 2021 May.
Article in English | MEDLINE | ID: mdl-33786777

ABSTRACT

PURPOSE: Surgical annotation promotes effective communication between medical personnel during surgical procedures. However, existing 2D annotation approaches remain mostly static with respect to the display. In this work, we propose a method to achieve 3D annotations that anchor rigidly and stably to target structures upon camera movement in a transnasal endoscopic surgery setting. METHODS: This is accomplished through intra-operative endoscope tracking and monocular depth estimation. A virtual endoscopic environment is utilized to train a supervised depth estimation network. An adversarial network transfers the style from the real endoscopic view to a synthetic-like view for input into the depth estimation network, from which framewise depth can be obtained in real time. RESULTS: (1) Accuracy: Framewise depth was predicted from images captured within a nasal airway phantom and compared with ground truth, achieving an SSIM value of 0.8310 ± 0.0655. (2) Stability: The mean absolute error (MAE) between the reference and predicted depth of a target point was 1.1330 ± 0.9957 mm. CONCLUSION: Both the accuracy and stability evaluations demonstrated the feasibility and practicality of our proposed method for achieving 3D annotations.


Subject(s)
Endoscopy/methods; Imaging, Three-Dimensional/methods; Phantoms, Imaging; Cadaver; Calibration; Humans; Image Processing, Computer-Assisted; Monitoring, Intraoperative; Reproducibility of Results; Tomography, X-Ray Computed; Video Recording
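
As a rough illustration of how framewise depth plus a tracked endoscope pose lets an annotation anchor to anatomy rather than to the display, the sketch below back-projects a 2D annotation into 3D under an assumed pinhole camera model and re-projects it after the camera moves. The intrinsics, poses, and depth values are placeholders, not taken from the paper.

```python
# Minimal sketch (numpy only, assumed pinhole model) of anchoring a 2D
# annotation in 3D via a per-frame depth map and the tracked endoscope pose.
import numpy as np


def back_project(u: float, v: float, depth_map: np.ndarray,
                 K: np.ndarray, T_world_cam: np.ndarray) -> np.ndarray:
    """Lift annotation pixel (u, v) to a 3D point in the world frame."""
    d = depth_map[int(v), int(u)]                         # predicted depth (mm)
    p_cam = d * np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera-frame point
    p_world = T_world_cam @ np.append(p_cam, 1.0)         # camera-to-world
    return p_world[:3]


def project(p_world: np.ndarray, K: np.ndarray,
            T_world_cam: np.ndarray) -> tuple[float, float]:
    """Re-project the anchored 3D point into a new frame's image plane."""
    p_cam = np.linalg.inv(T_world_cam) @ np.append(p_world, 1.0)
    uvw = K @ p_cam[:3]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]


# Example: anchor an annotation in frame A, redraw it in frame B.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])           # assumed endoscope intrinsics
depth_A = np.full((480, 640), 30.0)       # stand-in for the network's depth (mm)
T_A = np.eye(4)                           # tracked endoscope pose, frame A
T_B = np.eye(4); T_B[0, 3] = 2.0          # pose after a small camera translation

anchor = back_project(300.0, 200.0, depth_A, K, T_A)
u_B, v_B = project(anchor, K, T_B)        # where to redraw the annotation
print(f"annotation re-projected to ({u_B:.1f}, {v_B:.1f}) in frame B")
```

The paper's style-transfer and depth networks are not reproduced here; the sketch only shows the geometric anchoring step that their real-time depth output would feed.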