Results 1 - 20 of 63
1.
Comput Assist Surg (Abingdon) ; 28(1): 2275522, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37942523

ABSTRACT

A system for performance assessment and quality assurance (QA) of surgical trackers is reported, based on principles of geometric accuracy and statistical process control (SPC) for routine longitudinal testing. A simple QA test phantom was designed in which the number and distribution of registration fiducials were determined from analytical models of target registration error (TRE). A tracker testbed was configured with open-source software for measurement of a TRE-based accuracy metric (ε) and jitter (J). Six trackers were tested: 2 electromagnetic (EM - Aurora) and 4 infrared (IR - 1 Spectra, 1 Vega, and 2 Vicra), all from NDI (Waterloo, ON). Phase I SPC analysis of the Shewhart mean (x̄) and standard deviation (s) determined system control limits. Phase II involved weekly QA of each system for up to 32 weeks and identified Pass, Note, Alert, and Failure action rules. The process permitted QA in <1 min. Phase I control limits were established for all trackers: EM trackers exhibited higher upper control limits than IR trackers in ε (EM: x̄ε ∼2.8-3.3 mm; IR: x̄ε ∼1.6-2.0 mm) and jitter (EM: x̄J ∼0.30-0.33 mm; IR: x̄J ∼0.08-0.10 mm), and older trackers showed evidence of degradation - e.g. higher jitter for the older Vicra (p < .05). Phase II longitudinal tests yielded 676 outcomes, including a total of 4 Failures - 3 resolved by intervention (metal interference for EM trackers) and 1 owing to restrictive control limits for a new system (Vega). Weekly tests also yielded 40 Notes and 16 Alerts, each spontaneously resolved in subsequent monitoring.
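The Phase I/Phase II workflow above can be sketched in a few lines: estimate Shewhart x̄ ± 3s control limits from a baseline of in-control QA measurements, then map each weekly measurement to an action rule. The warning-band fraction and the baseline values below are illustrative assumptions, not the paper's calibrated limits, and run-based "Note" rules are omitted.

```python
import statistics

def shewhart_limits(baseline, k=3.0):
    """Phase I: center line and k-sigma control limits estimated from a
    baseline set of in-control QA measurements."""
    center = statistics.fmean(baseline)
    s = statistics.stdev(baseline)
    return center - k * s, center, center + k * s

def qa_action(value, lcl, ucl, warn_frac=2/3):
    """Phase II: map a weekly QA measurement to an action rule.
    Values beyond the control limits are a Failure; values in the outer
    band (here, the outer third of the control range - an assumed
    convention) raise an Alert; otherwise Pass."""
    center = (lcl + ucl) / 2
    half = (ucl - lcl) / 2
    dev = abs(value - center)
    if dev > half:
        return "Failure"
    if dev > warn_frac * half:
        return "Alert"
    return "Pass"

# Hypothetical baseline of weekly TRE-based accuracy measurements (mm).
baseline = [1.6, 1.7, 1.8, 1.9, 2.0, 1.7, 1.8]
lcl, cl, ucl = shewhart_limits(baseline)
```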


Subject(s)
Surgery, Computer-Assisted; Humans; Phantoms, Imaging; Software
2.
Phys Med Biol ; 68(21)2023 10 18.
Article in English | MEDLINE | ID: mdl-37774711

ABSTRACT

Objective. Surgical guidewires are commonly used in placing fixation implants to stabilize fractures. Accurate positioning of these instruments is challenged by the difficulty of 3D reckoning from 2D fluoroscopy. This work aims to enhance accuracy and reduce exposure times by providing 3D navigation for guidewire placement from as few as two fluoroscopic images. Approach. Our approach combines machine learning-based segmentation with the geometric model of the imager to determine the 3D poses of guidewires. Instrument tips are encoded as individual keypoints, and the segmentation masks are processed to estimate the trajectory. Correspondence between detections in multiple views is established using the pre-calibrated system geometry, and the corresponding features are backprojected to obtain the 3D pose. Guidewire 3D directions were computed using both an analytical and an optimization-based method. The complete approach was evaluated in cadaveric specimens with respect to potential confounding effects from the imaging geometry and radiographic scene clutter due to other instruments. Main results. The detection network identified the guidewire tips within 2.2 mm and guidewire directions within 1.1°, in 2D detector coordinates. Feature correspondence rejected false detections, particularly in images with other instruments, to achieve 83% precision and 90% recall. Estimating the 3D direction via numerical optimization showed added robustness for guidewires aligned with the gantry rotation plane. Guidewire tips and directions were localized in 3D world coordinates with a median accuracy of 1.8 mm and 2.7°, respectively. Significance. The paper reports a new method for automatic 2D detection and 3D localization of guidewires from pairs of fluoroscopic images. Localized guidewires can be virtually overlaid on the patient's preoperative 3D scan during the intervention. Accurate pose determination for multiple guidewires from two images offers the potential to reduce radiation dose by minimizing the need for repeated imaging and provides quantitative feedback prior to implant placement.
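At its core, the backprojection step described in the Approach (combining corresponding tip detections from two calibrated views into a 3D point) is linear triangulation. A minimal sketch under assumed 3x4 projection matrices - the toy geometry below is a stand-in for the actual pre-calibrated C-arm system:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: the 3D point whose projections
    through 3x4 camera matrices P1, P2 best match 2D detections x1, x2,
    found as the null space of the stacked constraint matrix."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point through a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical views: a reference view and one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 3.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```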


Subject(s)
Fractures, Bone; Orthopedic Procedures; Surgery, Computer-Assisted; Humans; Orthopedic Procedures/methods; Surgery, Computer-Assisted/methods; Fractures, Bone/surgery; Fluoroscopy/methods; Imaging, Three-Dimensional/methods
3.
Article in English | MEDLINE | ID: mdl-37143861

ABSTRACT

Purpose: Existing methods to improve the accuracy of tibiofibular joint reduction present workflow challenges, high radiation exposure, and a lack of accuracy and precision, leading to poor surgical outcomes. To address these limitations, we propose a method to perform robot-assisted joint reduction using intraoperative imaging to align the dislocated fibula to a target pose relative to the tibia. Methods: The approach (1) localizes the robot via 3D-2D registration of a custom plate adapter attached to its end effector, (2) localizes the tibia and fibula using multi-body 3D-2D registration, and (3) drives the robot to reduce the dislocated fibula according to the target plan. The custom robot adapter was designed to interface directly with the fibular plate while presenting radiographic features to aid registration. Registration accuracy was evaluated on a cadaveric ankle specimen, and the feasibility of robotic guidance was assessed by manipulating a dislocated fibula in a cadaver ankle. Results: Using standard AP and mortise radiographic views, registration errors were measured to be less than 1 mm and 1° for the robot adapter and the ankle bones. Experiments in a cadaveric specimen revealed deviations of up to 4 mm from the intended path, which were reduced to <2 mm using corrective actions guided by intraoperative imaging and 3D-2D registration. Conclusions: Preclinical studies suggest that significant robot flex and tibial motion occur during fibula manipulation, motivating the use of the proposed method to dynamically correct the robot trajectory. Accurate robot registration was achieved via fiducials embedded within the custom design. Future work will evaluate the approach on a custom radiolucent robot currently under construction and verify the solution on additional cadaveric specimens.

4.
Comput Methods Programs Biomed ; 227: 107222, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36370597

ABSTRACT

PURPOSE: Effective aggregation of intraoperative x-ray images that capture the patient anatomy from multiple view-angles has the potential to enable and improve automated image analysis that can be readily performed during surgery. We present multi-perspective region-based neural networks that leverage knowledge of the imaging geometry for automatic vertebrae labeling in Long-Film images - a novel tomographic imaging modality with an extended field-of-view for spine imaging. METHOD: A multi-perspective network architecture was designed to exploit small view-angle disparities produced by a multi-slot collimator and consolidate information from overlapping image regions. A second network incorporates large view-angle disparities to jointly perform labeling on images from multiple views (viz., AP and lateral). A recurrent module incorporates contextual information and enforces anatomical order for the detected vertebrae. The three modules are combined to form the multi-view multi-slot (MVMS) network for labeling vertebrae using images from all available perspectives. The network was trained on images synthesized from 297 CT images and tested on 50 AP and 50 lateral Long-Film images acquired from 13 cadaveric specimens. Labeling performance of the multi-perspective networks was evaluated with respect to the number of vertebrae appearances and the presence of surgical instrumentation. RESULTS: The MVMS network achieved an F1 score of >96% and an average vertebral localization error of 3.3 mm, with 88.3% labeling accuracy on both AP and lateral images (15.5% and 35.0% higher than conventional Faster R-CNN on AP and lateral views, respectively). Aggregation of multiple appearances of the same vertebra using the multi-slot network significantly improved the labeling accuracy (p < 0.05). Using the multi-view network, labeling accuracy on the more challenging lateral views was improved to the same level as that of the AP views. The approach demonstrated robustness to the presence of surgical instrumentation, commonly encountered in intraoperative images, and achieved comparable performance in images with and without instrumentation (88.9% vs. 91.2% labeling accuracy). CONCLUSION: The MVMS network demonstrated effective multi-perspective aggregation, providing a means for accurate, automated vertebrae labeling during spine surgery. The algorithms may be generalized to other imaging tasks and modalities that involve multiple views with view-angle disparities (e.g., bi-plane radiography). Predicted labels can help avoid adverse events during surgery (e.g., wrong-level surgery), establish correspondence with labels in preoperative modalities to facilitate image registration, and enable automated measurement of spinal alignment metrics for intraoperative assessment of spinal curvature.


Subject(s)
Neural Networks, Computer; Spine; Humans; Spine/diagnostic imaging; Spine/surgery; Algorithms; Image Processing, Computer-Assisted
5.
Phys Med Biol ; 68(1)2022 12 22.
Article in English | MEDLINE | ID: mdl-36317269

ABSTRACT

Purpose. Target localization in pulmonary interventions (e.g. transbronchial biopsy of a lung nodule) is challenged by deformable motion and may benefit from fluoroscopic overlay of the target to provide accurate guidance. We present and evaluate a 3D-2D image registration method for fluoroscopic overlay in the presence of tissue deformation using a multi-resolution/multi-scale (MRMS) framework with an objective function that drives registration primarily by soft-tissue image gradients. Methods. The MRMS method registers 3D cone-beam CT to 2D fluoroscopy without gating of respiratory phase by coarse-to-fine resampling and global-to-local rescaling about target regions-of-interest. A variation of the gradient orientation (GO) similarity metric (denoted GO′) was developed to downweight bone gradients and drive registration via soft-tissue gradients. Performance was evaluated in terms of projection distance error at isocenter (PDEiso). Phantom studies determined nominal algorithm parameters and capture range. Preclinical studies used a freshly deceased, ventilated porcine specimen to evaluate performance in the presence of real tissue deformation and a broad range of 3D-2D image mismatch. Results. Nominal algorithm parameters were identified that provided robust performance over a broad range of motion (0-20 mm), including an adaptive parameter selection technique to accommodate unknown mismatch in respiratory phase. The GO′ metric yielded median PDEiso = 1.2 mm, compared to 6.2 mm for conventional GO. Preclinical studies with real lung deformation demonstrated median PDEiso = 1.3 mm with MRMS + GO′ registration, compared to 2.2 mm with a conventional transform. Runtime was 26 s and can be reduced to 2.5 s given a prior registration within ∼5 mm as initialization. Conclusions. MRMS registration via soft-tissue gradients achieved accurate fluoroscopic overlay in the presence of deformable lung motion. By driving registration via soft-tissue image gradients, the method avoided false local minima presented by bones and was robust to a wide range of motion magnitude.
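A plain gradient-orientation similarity, the starting point for the GO′ variant, can be sketched as follows. The bone-downweighting of GO′ is not reproduced here, and this form (mean squared cosine of the angle between gradients over a mask of non-negligible gradients) is one common convention rather than necessarily the paper's exact definition:

```python
import numpy as np

def gradient_orientation(fixed, moving, eps=1e-6):
    """Gradient-orientation similarity: mean squared cosine of the
    angle between the image gradients of two images, evaluated only
    where both gradient magnitudes are non-negligible."""
    gfx, gfy = np.gradient(fixed)
    gmx, gmy = np.gradient(moving)
    dot = gfx * gmx + gfy * gmy
    nf = np.hypot(gfx, gfy)
    nm = np.hypot(gmx, gmy)
    mask = (nf > eps) & (nm > eps)
    cos2 = (dot[mask] / (nf[mask] * nm[mask])) ** 2
    return float(cos2.mean())

# Random test image: identical images give similarity 1 by construction.
img = np.random.default_rng(0).random((32, 32))
```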


Subject(s)
Imaging, Three-Dimensional; Surgery, Computer-Assisted; Animals; Swine; Imaging, Three-Dimensional/methods; Cone-Beam Computed Tomography/methods; Lung/diagnostic imaging; Surgery, Computer-Assisted/methods; Fluoroscopy/methods; Algorithms
6.
Phys Med Biol ; 67(12)2022 06 10.
Article in English | MEDLINE | ID: mdl-35609586

ABSTRACT

Objective. The accuracy of navigation in minimally invasive neurosurgery is often challenged by deep brain deformations (up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach). We propose a deep learning-based deformable registration method to address such deformations between preoperative MR and intraoperative CBCT. Approach. The registration method uses a joint image synthesis and registration network (denoted JSR) to simultaneously synthesize MR and CBCT images to the CT domain and perform CT-domain registration using a multi-resolution pyramid. JSR was first trained using a simulated dataset (simulated CBCT and simulated deformations) and then refined on real clinical images via transfer learning. The performance of the multi-resolution JSR was compared to a single-resolution architecture as well as a series of alternative registration methods (symmetric normalization (SyN), VoxelMorph, and image synthesis-based registration methods). Main results. JSR achieved a median Dice coefficient (DSC) of 0.69 in deep brain structures and median target registration error (TRE) of 1.94 mm in the simulation dataset, improving on the single-resolution architecture (median DSC = 0.68 and median TRE = 2.14 mm). Additionally, JSR achieved superior registration compared to alternative methods - e.g. SyN (median DSC = 0.54, median TRE = 2.77 mm) and VoxelMorph (median DSC = 0.52, median TRE = 2.66 mm) - and provided registration runtime of less than 3 s. Similarly, in the clinical dataset, JSR achieved median DSC = 0.72 and median TRE = 2.05 mm. Significance. The multi-resolution JSR network resolved deep brain deformations between MR and CBCT images with performance superior to other state-of-the-art methods. The accuracy and runtime support translation of the method to further clinical studies in high-precision neurosurgery.


Subject(s)
Image Processing, Computer-Assisted; Spiral Cone-Beam Computed Tomography; Algorithms; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods
7.
Med Image Anal ; 75: 102292, 2022 01.
Article in English | MEDLINE | ID: mdl-34784539

ABSTRACT

PURPOSE: The accuracy of minimally invasive, intracranial neurosurgery can be challenged by deformation of brain tissue - e.g., up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach. We report an unsupervised, deep learning-based registration framework to resolve such deformations between preoperative MR and intraoperative CT with fast runtime for neurosurgical guidance. METHOD: The framework incorporates subnetworks for MR and CT image synthesis with a dual-channel registration subnetwork (with synthesis uncertainty providing spatially varying weights on the dual-channel loss) to estimate a diffeomorphic deformation field from both the MR and CT channels. End-to-end training is proposed that jointly optimizes both the synthesis and registration subnetworks. The proposed framework was investigated using three datasets: (1) paired MR/CT with simulated deformations; (2) paired MR/CT with real deformations; and (3) a neurosurgery dataset with real deformations. Two state-of-the-art methods (Symmetric Normalization and VoxelMorph) were implemented as a basis of comparison, and variations of the proposed dual-channel network were investigated, including single-channel registration, fusion without uncertainty weighting, and conventional sequential training of the synthesis and registration subnetworks. RESULTS: The proposed method achieved: (1) Dice coefficient = 0.82 ± 0.07 and TRE = 1.2 ± 0.6 mm on paired MR/CT with simulated deformations; (2) Dice coefficient = 0.83 ± 0.07 and TRE = 1.4 ± 0.7 mm on paired MR/CT with real deformations; and (3) Dice = 0.79 ± 0.13 and TRE = 1.6 ± 1.0 mm on the neurosurgery dataset with real deformations.
The dual-channel registration with uncertainty weighting demonstrated superior performance (e.g., TRE = 1.2 ± 0.6 mm) compared to single-channel registration (TRE = 1.6 ± 1.0 mm, p < 0.05 for CT channel and TRE = 1.3 ± 0.7 mm for MR channel) and dual-channel registration without uncertainty weighting (TRE = 1.4 ± 0.8 mm, p < 0.05). End-to-end training of the synthesis and registration subnetworks also improved performance compared to the conventional sequential training strategy (TRE = 1.3 ± 0.6 mm). Registration runtime with the proposed network was ∼3 s. CONCLUSION: The deformable registration framework based on dual-channel MR/CT registration with spatially varying weights and end-to-end training achieved geometric accuracy and runtime that was superior to state-of-the-art baseline methods and various ablations of the proposed network. The accuracy and runtime of the method may be compatible with the requirements of high-precision neurosurgery.
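The two evaluation metrics used throughout these abstracts (Dice coefficient and target registration error) are simple to state precisely; a minimal sketch with a toy example:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def tre(pts_reg, pts_true):
    """Target registration error: mean Euclidean distance (in mm, if the
    inputs are in mm) between registered and true target points."""
    return float(np.linalg.norm(pts_reg - pts_true, axis=1).mean())

# Toy example: two 4x4 masks of 8 pixels each, overlapping on 4 pixels,
# so Dice = 2*4 / (8 + 8) = 0.5.
a = np.zeros((4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4), dtype=bool); b[1:3] = True
d = dice(a, b)
```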


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Algorithms; Neurosurgical Procedures; Uncertainty
8.
Phys Med Biol ; 66(21)2021 11 01.
Article in English | MEDLINE | ID: mdl-34644684

ABSTRACT

Purpose. Accurate neuroelectrode placement is essential to effective monitoring or stimulation of neurosurgery targets. This work presents and evaluates a method that combines deep learning and model-based deformable 3D-2D registration to guide and verify neuroelectrode placement using intraoperative imaging. Methods. The registration method consists of three stages: (1) detection of neuroelectrodes in a pair of fluoroscopy images using a deep learning approach; (2) determination of correspondence and initial 3D localization among neuroelectrode detections in the two projection images; and (3) deformable 3D-2D registration of neuroelectrodes according to a physical device model. The method was evaluated in phantom, cadaver, and clinical studies in terms of (a) the accuracy of neuroelectrode registration and (b) the quality of metal artifact reduction (MAR) in cone-beam CT (CBCT), in which the deformably registered neuroelectrode models are taken as input to the MAR. Results. The combined deep learning and model-based deformable 3D-2D registration approach achieved 0.2 ± 0.1 mm accuracy in cadaver studies and 0.6 ± 0.3 mm accuracy in clinical studies. The detection network and 3D correspondence provided initialization of 3D-2D registration within 2 mm, which facilitated end-to-end registration runtime within 10 s. Metal artifacts, quantified as the standard deviation in voxel values in tissue adjacent to neuroelectrodes, were reduced by 72% in phantom studies and by 60% in first clinical studies. Conclusions. The method combines the speed and generalizability of deep learning (for initialization) with the precision and reliability of physical model-based registration to achieve accurate deformable 3D-2D registration and MAR in functional neurosurgery. Accurate 3D-2D guidance from fluoroscopy could overcome limitations associated with deformation in conventional navigation, and improved MAR could improve CBCT verification of neuroelectrode placement.


Subject(s)
Algorithms; Cone-Beam Computed Tomography; Cadaver; Cone-Beam Computed Tomography/methods; Humans; Imaging, Three-Dimensional/methods; Phantoms, Imaging; Reproducibility of Results
9.
Phys Med Biol ; 66(12)2021 06 21.
Article in English | MEDLINE | ID: mdl-34082413

ABSTRACT

Purpose. Accurate localization and labeling of vertebrae in computed tomography (CT) is an important step toward more quantitative, automated diagnostic analysis and surgical planning. In this paper, we present a framework (called Ortho2D) for vertebral labeling in CT that is accurate and memory-efficient. Methods. Ortho2D uses two independent Faster R-CNN networks to detect and classify vertebrae in orthogonal (sagittal and coronal) CT slices. The 2D detections are clustered in 3D to localize vertebrae centroids in the volumetric CT and classify the region (cervical, thoracic, lumbar, or sacral) and vertebral level. A post-processing sorting method incorporates the confidence in network output to refine classifications and reduce outliers. Ortho2D was evaluated on a publicly available dataset containing 302 normal and pathological spine CT images with and without surgical instrumentation. Labeling accuracy and memory requirements were assessed in comparison to other recently reported methods. The memory efficiency of Ortho2D permitted extension to high-resolution CT to investigate the potential for further boosts in labeling performance. Results. Ortho2D achieved overall vertebrae detection accuracy of 97.1%, region identification accuracy of 94.3%, and individual vertebral level identification accuracy of 91.0%. The framework achieved 95.8% and 83.6% level identification accuracy in images without and with surgical instrumentation, respectively. Ortho2D met or exceeded the performance of previously reported 2D and 3D labeling methods and reduced memory consumption by a factor of ∼50 (at 1 mm voxel size) compared to a 3D U-Net, allowing extension to higher-resolution datasets than normally afforded. The accuracy of level identification increased from 80.1% (for standard/low-resolution CT) to 95.1% (for high-resolution CT). Conclusions. The Ortho2D method achieved vertebrae labeling performance comparable to other recently reported methods with a significant reduction in memory consumption, permitting further performance boosts via application to high-resolution CT.
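The clustering of 2D detections from orthogonal slices into 3D centroids can be illustrated with a deliberately simplified fusion rule. The tolerance, tuple layout, and pairwise matching below are illustrative assumptions, not Ortho2D's actual clustering:

```python
def fuse_orthogonal(sagittal, coronal, tol=5.0):
    """Fuse per-slice 2D detections into 3D centroids.

    sagittal: list of (label, x_slice, y, z) detections from sagittal slices
    coronal:  list of (label, y_slice, x, z) detections from coronal slices
    Detections with the same label and mutually consistent coordinates
    (within tol voxels) are averaged into one 3D centroid; unmatched
    detections (e.g. false positives in one view) are discarded.
    """
    fused = []
    for lab_s, xs, ys, zs in sagittal:
        for lab_c, yc, xc, zc in coronal:
            same = (abs(xs - xc) < tol and abs(ys - yc) < tol
                    and abs(zs - zc) < tol)
            if lab_s == lab_c and same:
                fused.append((lab_s, (xs + xc) / 2, (ys + yc) / 2, (zs + zc) / 2))
    return fused

# Hypothetical detections: vertebra L1 is seen in both views; a spurious
# L2 detection in the coronal view has no sagittal partner.
sag = [("L1", 10.0, 20.0, 30.0)]
cor = [("L1", 21.0, 9.0, 31.0), ("L2", 50.0, 50.0, 50.0)]
fused = fuse_orthogonal(sag, cor)
```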


Subject(s)
Spine; Tomography, X-Ray Computed; Lumbar Vertebrae; Neural Networks, Computer
10.
Article in English | MEDLINE | ID: mdl-35982943

ABSTRACT

Purpose: Deep brain stimulation is a neurosurgical procedure used in the treatment of a growing spectrum of movement disorders. Inaccuracies in electrode placement, however, can result in poor symptom control or adverse effects and confound variability in clinical outcomes. A deformable 3D-2D registration method is presented for high-precision 3D guidance of neuroelectrodes. Methods: The approach employs a model-based, deformable algorithm for 3D-2D image registration. Variations in lead design are captured in a parametric 3D model based on a B-spline curve. The registration is solved through iterative optimization of 16 degrees-of-freedom that maximize image similarity between the 2 acquired radiographs and simulated forward projections of the neuroelectrode model. The approach was evaluated in phantom models with respect to pertinent imaging parameters, including view selection and imaging dose. Results: The results demonstrate an accuracy of (0.2 ± 0.2) mm in 3D localization of individual electrodes. The solution was robust to changes in pertinent imaging parameters, demonstrating accurate localization with ≥20° view separation and at 1/10th the dose of a standard fluoroscopy frame. Conclusions: The presented approach provides the means for guiding neuroelectrode placement from 2 low-dose radiographic images in a manner that accommodates potential deformations at the target anatomical site. Future work will focus on improving runtime through learning-based initialization, application to reducing metal artifacts in reconstruction for 3D verification of placement, and extensive evaluation in clinical data from an IRB study underway.
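The B-spline lead model can be illustrated with a uniform cubic B-spline segment. The basis below is the standard uniform cubic B-spline matrix form; the control-point layout is hypothetical, and the paper's full 16-degree-of-freedom parameterization is not reproduced:

```python
import numpy as np

def bspline_point(ctrl, i, u):
    """Evaluate a uniform cubic B-spline segment spanning control
    points ctrl[i]..ctrl[i+3] at local parameter u in [0, 1]."""
    b = np.array([
        (1 - u) ** 3,
        3 * u**3 - 6 * u**2 + 4,
        -3 * u**3 + 3 * u**2 + 3 * u + 1,
        u**3,
    ]) / 6.0  # standard uniform cubic basis (a partition of unity)
    return b @ ctrl[i:i + 4]

# Hypothetical control points along a straight lead: the curve must
# stay on the line through them (convex-hull property).
ctrl = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [2.0, 0.0, 0.0],
                 [3.0, 0.0, 0.0]])
p_mid = bspline_point(ctrl, 0, 0.5)
```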

11.
Article in English | MEDLINE | ID: mdl-36090307

ABSTRACT

Purpose: A method and prototype for a fluoroscopically guided surgical robot are reported for assisting pelvic fracture fixation. The approach extends existing guidance methods to C-arms in mainstream use (without prior geometric calibration) using an online calibration of the C-arm geometry automated via registration to patient anatomy. We report the first preclinical studies of this method in cadaver for evaluation of geometric accuracy. Methods: The robot is placed over the patient within the imaging field-of-view, and radiographs are acquired as the robot rotates an attached instrument. The radiographs are then used to perform an online geometric calibration via 3D-2D image registration, which solves for the intrinsic and extrinsic parameters of the C-arm imaging system with respect to the patient. The solved projective geometry is then used to register the robot to the patient and drive the robot to planned trajectories. This method is applied to a robotic system consisting of a drill guide instrument for guidewire placement and evaluated in experiments using a cadaver specimen. Results: Robotic drill guide alignment to trajectories defined in the cadaver pelvis was accurate within 2 mm and 1° (on average) using the calibration-free approach. Conformance of trajectories within bone corridors was confirmed in cadaver by extrapolating the aligned drill guide trajectory into the cadaver pelvis. Conclusion: This study demonstrates the accuracy of image-guided robotic positioning without prior calibration of the C-arm gantry, facilitating the use of surgical robots with simpler imaging devices that cannot establish or maintain an offline calibration. Future work includes testing of the system in a clinical setting with trained orthopaedic surgeons and residents.
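Recovering the intrinsic and extrinsic parameters from a solved projective geometry amounts to factoring a projection matrix into K, R, t. One standard building block is an RQ decomposition of the left 3x3 submatrix, sketched here with a hypothetical detector geometry (not the actual C-arm parameters):

```python
import numpy as np

def decompose_projection(P):
    """Factor a 3x4 projection matrix as P ~ K [R | t] using an RQ
    decomposition of the left 3x3 block built from numpy's QR.
    (In general a further sign fix may be needed to force det(R) = +1;
    omitted here for brevity.)"""
    M = P[:, :3]
    F = np.flipud(np.eye(3))           # row-reversal permutation
    Q, Rq = np.linalg.qr((F @ M).T)    # QR of the flipped, transposed block
    K = F @ Rq.T @ F                   # upper-triangular intrinsics
    R = F @ Q.T                        # orthogonal rotation
    S = np.diag(np.sign(np.diag(K)))   # force positive diagonal on K
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t

# Hypothetical C-arm-like geometry: focal length 1000 px, principal
# point (256, 256), small rotation about z, and translation t0.
K0 = np.array([[1000.0, 0.0, 256.0],
               [0.0, 1000.0, 256.0],
               [0.0, 0.0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R0 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t0 = np.array([5.0, -3.0, 200.0])
P = K0 @ np.hstack([R0, t0[:, None]])
K, R, t = decompose_projection(P)
```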

12.
Med Image Anal ; 68: 101917, 2021 02.
Article in English | MEDLINE | ID: mdl-33341493

ABSTRACT

PURPOSE: Surgical reduction of pelvic fracture is a challenging procedure, and accurate restoration of natural morphology is essential to obtaining a positive functional outcome. The procedure often requires extensive preoperative planning, long fluoroscopic exposure time, and trial-and-error to achieve accurate reduction. We report a multi-body registration framework for reduction planning using preoperative CT and intraoperative guidance using routine 2D fluoroscopy that could help address such challenges. METHOD: The framework starts with semi-automatic segmentation of fractured bone fragments in preoperative CT using continuous max-flow. For reduction planning, a multi-to-one registration is performed to register bone fragments to an adaptive template that adjusts to patient-specific bone shapes and poses. The framework further registers bone fragments to intraoperative fluoroscopy to provide 2D fluoroscopy guidance and/or 3D navigation relative to the reduction plan. The framework was investigated in three studies: (1) a simulation study of 40 CT images simulating three fracture categories (unilateral two-body, unilateral three-body, and bilateral two-body); (2) a proof-of-concept cadaver study to mimic the clinical scenario; and (3) a retrospective clinical study investigating feasibility in three cases of increasing severity and accuracy requirement. RESULTS: Segmentation of simulated pelvic fractures demonstrated a Dice coefficient of 0.92±0.06. Reduction planning using the adaptive template achieved 2-3 mm and 2-3° error for the three fracture categories, significantly better than planning based on mirroring of contralateral anatomy. 3D-2D registration yielded ~2 mm and 0.5° accuracy, providing accurate guidance with respect to the preoperative reduction plan. The cadaver study and retrospective clinical study demonstrated comparable accuracy: ~0.90 Dice coefficient in segmentation, ~3 mm accuracy in reduction planning, and ~2 mm accuracy in 3D-2D registration.
CONCLUSION: The registration framework demonstrated planning and guidance accuracy within clinical requirements in both simulation and clinical feasibility studies for a broad range of fracture-dislocation patterns. Using routinely acquired preoperative CT and intraoperative fluoroscopy, the framework could improve the accuracy of pelvic fracture reduction, reduce radiation dose, and could integrate well with common clinical workflow without the need for additional navigation systems.


Subject(s)
Orthopedics; Surgery, Computer-Assisted; Body Image; Fluoroscopy; Fracture Fixation; Humans; Imaging, Three-Dimensional; Retrospective Studies; Tomography, X-Ray Computed
13.
Article in English | MEDLINE | ID: mdl-32476703

ABSTRACT

Pelvic trauma surgical procedures rely heavily on guidance with 2D fluoroscopy views for navigation in complex bone corridors. This "fluoro-hunting" paradigm results in extended radiation exposure and possibly suboptimal guidewire placement from limited visualization of the fracture site with overlapped anatomy in 2D fluoroscopy. A novel computer vision-based navigation system for freehand guidewire insertion is proposed. The navigation framework is compatible with the rapid workflow in trauma surgery and bridges the gap between intraoperative fluoroscopy and preoperative CT images. The system uses a drill-mounted camera to detect and track the poses of simple multimodality (optical/radiographic) markers for registration of the drill axis to fluoroscopy and, in turn, to CT. Surgical navigation is achieved with real-time display of the drill axis position on fluoroscopy views and, optionally, in 3D on the preoperative CT. The camera was corrected for lens distortion effects and calibrated for 3D pose estimation. Custom marker jigs were constructed to calibrate the drill axis and tooltip with respect to the camera frame. A testing platform for evaluation of the navigation system was developed, including a robotic arm for precise, repeatable placement of the drill. Experiments were conducted for hand-eye calibration between the drill-mounted camera and the robot using the Park and Martin solver. Experiments using checkerboard calibration demonstrated subpixel accuracy [-0.01 ± 0.23 px] for camera distortion correction. The drill axis was calibrated using a cylindrical model and demonstrated sub-mm accuracy [0.14 ± 0.70 mm] and sub-degree angular deviation.

14.
Phys Med Biol ; 65(16): 165012, 2020 08 19.
Article in English | MEDLINE | ID: mdl-32428891

ABSTRACT

Metal artifacts present a challenge to cone-beam CT (CBCT) image-guided surgery, obscuring visualization of metal instruments and adjacent anatomy-often in the very region of interest pertinent to the imaging/surgical tasks. We present a method to reduce the influence of metal artifacts by prospectively defining an image acquisition protocol-viz., the C-arm source-detector orbit-that mitigates metal-induced biases in the projection data. The metal artifact avoidance (MAA) method is compatible with simple mobile C-arms, does not require exact prior information on the patient or metal implants, and is consistent with 3D filtered backprojection (FBP), more advanced (e.g. polyenergetic) model-based image reconstruction (MBIR), and metal artifact reduction (MAR) post-processing methods. The MAA method consists of: (i) coarse localization of metal objects in the field-of-view (FOV) via two or more low-dose scout projection views and segmentation (e.g. a simple U-Net) in coarse backprojection; (ii) model-based prediction of metal-induced x-ray spectral shift for all source-detector vertices accessible by the imaging system (e.g. gantry rotation and tilt angles); and (iii) identification of a circular or non-circular orbit that reduces the variation in spectral shift. The method was developed, tested, and evaluated in a series of studies presenting increasing levels of complexity and realism, including digital simulations, phantom experiment, and cadaver experiment in the context of image-guided spine surgery (pedicle screw implants). The MAA method accurately predicted tilted circular and non-circular orbits that reduced the magnitude of metal artifacts in CBCT reconstructions. Realistic distributions of metal instrumentation were successfully localized (0.71 median Dice coefficient) from 2-6 low-dose scout views even in complex anatomical scenes. 
The MAA-predicted tilted circular orbits reduced root-mean-square error (RMSE) in 3D image reconstructions by 46%-70% and 'blooming' artifacts (apparent width of the screw shaft) by 20-45%. Non-circular orbits defined by MAA achieved a further ∼46% reduction in RMSE compared to the best (tilted) circular orbit. The MAA method presents a practical means to predict C-arm orbits that minimize spectral bias from metal instrumentation. Resulting orbits-either simple tilted circular orbits or more complex non-circular orbits that can be executed with a motorized multi-axis C-arm-exhibited substantial reduction of metal artifacts in raw CBCT reconstructions by virtue of higher fidelity projection data, which are in turn compatible with subsequent MAR post-processing and/or polyenergetic MBIR to further reduce artifacts.


Subject(s)
Cone-Beam Computed Tomography/instrumentation , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Metals/chemistry , Phantoms, Imaging , Spine/surgery , Surgery, Computer-Assisted/methods , Algorithms , Artifacts , Humans , Imaging, Three-Dimensional/methods , Pedicle Screws , Spine/diagnostic imaging
15.
Phys Med Biol ; 65(13): 135009, 2020 07 17.
Article in English | MEDLINE | ID: mdl-32217833

ABSTRACT

Surgical reduction of pelvic dislocation is a challenging procedure with poor long-term prognosis if reduction does not accurately restore natural morphology. The procedure often requires long fluoroscopic exposure times and trial-and-error to achieve accurate reduction. We report a method to automatically compute the target pose of dislocated bones in preoperative CT and provide 3D guidance of reduction using routine 2D fluoroscopy. A pelvic statistical shape model (SSM) and a statistical pose model (SPM) were formed from an atlas of 40 pelvic CT images. Multi-body bone segmentation was achieved by mapping the SSM to a preoperative CT via an active shape model. The target reduction pose for the dislocated bone is estimated by fitting the poses of undislocated bones to the SPM. Intraoperatively, multiple bones are registered to fluoroscopy images via 3D-2D registration to obtain 3D pose estimates from 2D images. The method was examined in three studies: (1) a simulation study of 40 CT images simulating a range of dislocation patterns; (2) a pelvic phantom study with controlled dislocation of the left innominate bone; (3) a clinical case study investigating feasibility in images acquired during pelvic reduction surgery. Experiments investigated the accuracy of registration as a function of initialization error (capture range), image quality (radiation dose and image noise), and field of view (FOV) size. The simulation study achieved target pose estimation with translational error of median 2.3 mm (1.4 mm interquartile range, IQR) and rotational error of 2.1° (1.3° IQR). 3D-2D registration yielded 0.3 mm (0.2 mm IQR) in-plane and 0.3 mm (0.2 mm IQR) out-of-plane translational error, with in-plane capture range of ±50 mm and out-of-plane capture range of ±120 mm. 
The phantom study demonstrated 3D-2D target registration error of 2.5 mm (1.5 mm IQR), and the method was robust over a large dose range, down to 5 µGy/frame (an order of magnitude lower than the nominal fluoroscopic dose). The clinical feasibility study demonstrated accurate registration with both preoperative and intraoperative radiographs, yielding 3.1 mm (1.0 mm IQR) projection distance error with robust performance for FOV ranging from 340 × 340 mm2 to 170 × 170 mm2 (at the image plane). The method demonstrated accurate estimation of the target reduction pose in simulation, phantom, and clinical feasibility studies for a broad range of dislocation patterns, initialization errors, dose levels, and FOV sizes. The system provides a novel means of guidance and assessment of pelvic reduction from routinely acquired preoperative CT and intraoperative fluoroscopy. The method has the potential to reduce radiation dose by minimizing trial-and-error and to improve outcomes by guiding more accurate reduction of joint dislocations.
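The 3D-2D registration metrics above (projection distance error, in-plane and out-of-plane translational error) rest on a standard pinhole projection model: 3D landmarks are mapped through a rigid pose and camera intrinsics, and two pose estimates are compared by the mean 2D distance between their projections. A minimal numpy sketch (the intrinsic matrix and evaluation points below are hypothetical placeholders, not values from the study):

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D points (N, 3) to 2D detector coordinates with intrinsics K and rigid pose (R, t)."""
    cam = points_3d @ R.T + t        # world frame -> source/camera frame
    uvw = cam @ K.T                  # pinhole projection (homogeneous)
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

def projection_distance_error(pts, K, R_est, t_est, R_true, t_true):
    """Mean 2D distance between projections under estimated vs. true pose."""
    d = project(pts, K, R_est, t_est) - project(pts, K, R_true, t_true)
    return float(np.mean(np.linalg.norm(d, axis=1)))
```

With focal length f and depth z, a 1 mm in-plane translation error maps to roughly f/z pixels of projection distance error, which is why in-plane errors are well constrained by a single view while out-of-plane (depth) errors are not.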


Subject(s)
Imaging, Three-Dimensional/methods , Joint Dislocations/diagnostic imaging , Joint Dislocations/surgery , Orthopedic Procedures , Pelvis/injuries , Pelvis/surgery , Surgery, Computer-Assisted , Algorithms , Fluoroscopy , Humans , Phantoms, Imaging
16.
Article in English | MEDLINE | ID: mdl-36082205

ABSTRACT

Purpose: Conventional model-based 3D-2D registration algorithms can be challenged by limited capture range, model validity, and stringent intraoperative runtime requirements. In this work, a deep convolutional neural network was used to provide robust initialization of a registration algorithm (known-component registration, KC-Reg) for 3D localization of spine surgery implants, combining the speed and global support of data-driven approaches with the previously demonstrated accuracy of model-based registration. Methods: The approach uses a Faster R-CNN architecture to detect and localize spinal pedicle screws of broadly varying type and orientation in clinical images. Training data were generated using projections from 17 clinical cone-beam CT scans and a library of screw models to simulate implants. Network output was processed to provide screw count and 2D poses. The network was tested on two datasets of 2,000 images each, depicting real anatomy and realistic spine surgery instrumentation - one dataset involving the same patient data as in the training set (but with different screws, poses, image noise, and affine transformations) and one dataset with five patients unseen in the training data. Device detection was quantified in terms of accuracy and precision, and localization accuracy was evaluated in terms of intersection-over-union (IOU) and distance between true and predicted bounding box coordinates. Results: The overall accuracy of pedicle screw detection was ~86.6% (85.3% for the same-patient dataset and 87.8% for the many-patient dataset), suggesting that the screw detection network performed reasonably well irrespective of disparate, complex anatomical backgrounds. The precision of screw detection was ~92.6% (95.0% and 90.2% for the respective same-patient and many-patient datasets). The accuracy of screw localization was within 1.5 mm (median difference of bounding box coordinates), and median IOU exceeded 0.85. 
For purposes of initializing a 3D-2D registration algorithm, the accuracy was observed to be well within the typical capture range of KC-Reg. Conclusions: Initial evaluation of network performance indicates sufficient accuracy to integrate with algorithms for implant registration, guidance, and verification in spine surgery. Such capability is of potential use in surgical navigation, robotic assistance, and data-intensive analysis of implant placement in large retrospective datasets. Future work includes correspondence of multiple views, 3D localization, screw classification, and expansion of the training dataset to a broader variety of anatomical sites, number of screws, and types of implants.
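The intersection-over-union (IOU) metric used to score bounding-box localization has a standard definition that is straightforward to compute for axis-aligned boxes; a small self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the mins, min of the maxes.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

An IOU threshold (here, a median exceeding 0.85) is a scale-invariant way to call a detection a match, since it penalizes both offset and size mismatch between predicted and true boxes.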

17.
Article in English | MEDLINE | ID: mdl-36082206

ABSTRACT

Purpose: We report the initial development of an image-based solution for robotic assistance of pelvic fracture fixation. The approach uses intraoperative radiographs, preoperative CT, and an end effector of known design to align the robot with target trajectories in CT. The method extends previous work to solve the robot-to-patient registration from a single radiographic view (without C-arm rotation) and addresses the workflow challenges associated with integrating robotic assistance in orthopaedic trauma surgery in a form that could be broadly applicable to isocentric or non-isocentric C-arms. Methods: The proposed method uses 3D-2D known-component registration to localize a robot end effector with respect to the patient by: (1) exploiting the extended size and complex features of pelvic anatomy to register the patient; and (2) capturing multiple end effector poses using precise robotic manipulation. These transformations, along with an offline hand-eye calibration of the end effector, are used to calculate target robot poses that align the end effector with planned trajectories in the patient CT. Geometric accuracy of the registrations was independently evaluated for the patient and the robot in phantom studies. Results: The resulting translational difference between the ground truth and patient registrations of a pelvis phantom using a single (AP) view was 1.3 mm, compared to 0.4 mm using dual (AP+Lat) views. Registration of the robot in air (i.e., no background anatomy) with five unique end effector poses achieved mean translational difference ~1.4 mm for K-wire placement in the pelvis, comparable to tracker-based margins of error (commonly ~2 mm). Conclusions: The proposed approach is feasible based on the accuracy of the patient and robot registrations and is a preliminary step in developing an image-guided robotic guidance system that more naturally fits the workflow of fluoroscopically guided orthopaedic trauma surgery. 
Future work will involve end-to-end development of the proposed guidance system and assessment of the system with delivery of K-wires in cadaver studies.
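The pose chain described in the Methods (patient and end effector each registered to the C-arm, then composed with a trajectory planned in CT) amounts to multiplying homogeneous transforms. A hedged sketch with assumed frame names (the study estimates these transforms via 3D-2D known-component registration; here they are given directly):

```python
import numpy as np

def invert_se3(T):
    """Closed-form inverse of a 4x4 rigid (SE(3)) transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def target_robot_pose(T_carm_patient, T_carm_robot, T_patient_plan):
    """End-effector pose in the robot base frame that aligns with a
    trajectory planned in patient (CT) coordinates.
    Chain: robot base -> C-arm -> patient -> planned trajectory."""
    return invert_se3(T_carm_robot) @ T_carm_patient @ T_patient_plan
```

Because both registrations share the C-arm frame, the C-arm itself cancels out of the chain, which is what allows the alignment to be computed without tracking hardware.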

18.
Med Phys ; 47(2): 467-479, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31808950

ABSTRACT

PURPOSE: A modular phantom for dosimetry and imaging performance evaluation in cone-beam computed tomography (CBCT) is reported, providing a tool for quantitative technical assessment that can be adapted to a broad variety of CBCT imaging configurations and clinical applications. METHODS: The phantom presents a set of modules that can be ordered in various configurations suitable to a particular CBCT system. Modules include slabs containing a uniform medium, low-contrast inserts, line-spread features, and disk features suitable to measurement of image uniformity, noise, noise-power spectrum (NPS), contrast, contrast-to-noise ratio (CNR), Hounsfield unit (HU) accuracy, linearity, spatial resolution (modulation transfer function, MTF), and magnitude of cone-beam artifact. Automated software recognizes the phantom configuration in DICOM images and provides structured reporting of such test measures. In any modular configuration, the phantom permits measurement of air kerma in central and peripheral locations with an air ionization chamber (e.g., Farmer chamber). The utility and adaptability of the phantom were demonstrated across a spectrum of CBCT systems, including scanners for orthopaedic imaging (Carestream OnSight 3D, Rochester, NY), breast imaging (Doheny prototype, UC Davis), image-guided surgery (IGS, Medtronic O-arm, Littleton, MA), angiography (Siemens Artis Zeego, Forchheim, Germany), and image-guided radiation therapy (IGRT, Elekta Synergy XVI, Stockholm, Sweden). RESULTS: The phantom provided a consistent platform for quantitative assessment of dose and imaging performance compatible with a broad spectrum of CBCT systems. The purpose of the survey was not to obtain head-to-head performance comparison of systems designed for such distinct clinical applications. Rather, the survey demonstrated the suitability of the phantom to a broad spectrum of systems in a manner that provides characterization pertinent to disparate applications and imaging tasks. 
For example: the orthopaedic CBCT system (pertinent clinical tasks relating to high-resolution bone imaging) was shown to achieve MTF consistent with imaging of high-contrast trabecular bone structures (i.e., the MTF fell to 10% at spatial frequency f10 = 1.2 mm-1); the breast system (even higher-resolution imaging of microcalcifications) exhibited f10 = 2.2 mm-1; the IGS system (tasks including both bone and soft-tissue contrast resolution) provided f10 = 0.9 mm-1 and soft-tissue CNR = 1.64; the angiography system (soft-tissue body interventions) demonstrated CNR = 1.2 in soft tissues approximating liver lesions; and the IGRT system (pertinent tasks emphasizing HU linearity and image uniformity) showed linear response in HU values (R2 = 1), with a cupping artifact (tcup = 5.8%) due to x-ray scatter. CONCLUSIONS: The phantom provides an adaptable, quantitative basis for CBCT dosimetry and imaging performance evaluation suitable to a broad variety of CBCT systems. The dosimetry and image quality metrics are consistent with up-to-date methods for rigorous, quantitative physics testing and should be suitable to emerging standards for CBCT quality assurance.
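Several of the reported metrics reduce to simple region-of-interest (ROI) statistics. A sketch under common definitions (the exact ROI placement and the cupping formula used by the phantom's automated software are assumptions here, not taken from the study):

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: |mean difference| over background noise (std)."""
    contrast = abs(np.mean(roi_signal) - np.mean(roi_background))
    return contrast / np.std(roi_background)

def cupping_percent(roi_center, roi_edge):
    """Cupping artifact magnitude, taken here as (edge - center) / edge x 100%
    in a nominally uniform medium (one common convention)."""
    c, e = np.mean(roi_center), np.mean(roi_edge)
    return 100.0 * (e - c) / e
```

Measuring noise in the background ROI (rather than pooling signal and background) keeps the CNR estimate from being inflated when the signal ROI contains structured texture.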


Subject(s)
Cone-Beam Computed Tomography/instrumentation , Phantoms, Imaging , Radiation Dosage , Artifacts , Quality Control , Signal-To-Noise Ratio
19.
Med Phys ; 47(3): 958-974, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31863480

ABSTRACT

PURPOSE: To characterize the radiation dose and three-dimensional (3D) imaging performance of a recently developed mobile, isocentric C-arm equipped with a flat-panel detector (FPD) for intraoperative cone-beam computed tomography (CBCT) (Cios Spin 3D, Siemens Healthineers) and to identify potential improvements in 3D imaging protocols for pertinent imaging tasks. METHODS: The C-arm features a 30 × 30 cm2 FPD and isocentric gantry with computer-controlled motorization of rotation (0-195°), angulation (±220°), and height (0-45 cm). Geometric calibration was assessed in terms of 9 degrees of freedom of the x-ray source and detector in CBCT scans, and the reproducibility of geometric calibration was evaluated. Standard and custom scan protocols were evaluated, with variation in the number of projections (100-400) and mAs per view (0.05-1.65 mAs). Image reconstruction was based on 3D filtered backprojection using "smooth," "normal," and "sharp" reconstruction filters as well as a custom two-dimensional (2D) isotropic filter. Imaging performance was evaluated in terms of uniformity, gray value correspondence with Hounsfield units (HU), contrast, noise (noise-power spectrum, NPS), spatial resolution (modulation transfer function, MTF), and noise-equivalent quanta (NEQ). Performance tradeoffs among protocols were visualized in anthropomorphic phantoms for various anatomical sites and imaging tasks. RESULTS: Geometric calibration showed a high degree of reproducibility despite ~19 mm gantry flex over a nominal semicircular orbit. The dose for a CBCT scan varied from ~0.8-4.7 mGy for head protocols to ~6-38 mGy for body protocols. The MTF was consistent with sub-mm spatial resolution, with f10 (the frequency at which MTF = 10%) equal to 0.64 mm-1, 1.0 mm-1, and 1.5 mm-1 for the smooth, normal, and sharp filters, respectively. 
Implementation of a custom 2D isotropic filter improved CNR by ~50%-60% for both head and body protocols and provided more isotropic resolution and noise characteristics. The NPS and NEQ quantified the 3D noise performance and provided a guide to protocol selection, confirmed in images of anthropomorphic phantoms. Alternative scan protocols were identified according to body site and task - for example, lower-dose body protocols (<3 mGy) sufficient for visualization of bone structures. CONCLUSION: The studies provided an objective assessment of the dose and 3D imaging performance of a new C-arm, offering an important basis for clinical deployment and a benchmark for quality assurance. Modifications to standard 3D imaging protocols were identified that may improve performance or reduce radiation dose for pertinent imaging tasks.
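The f10 summary metric used throughout these assessments (the spatial frequency at which the MTF falls to 10%) can be extracted from a measured MTF curve by linear interpolation between the bracketing samples. A small sketch assuming a monotonically decreasing curve (sampling and noise handling of a real MTF measurement are omitted):

```python
import numpy as np

def f10(freqs, mtf):
    """Frequency at which the MTF first falls below 10%, by linear
    interpolation between the two bracketing samples."""
    mtf = np.asarray(mtf, dtype=float)
    idx = int(np.argmax(mtf < 0.10))  # first sample below the 10% threshold
    f0, f1 = freqs[idx - 1], freqs[idx]
    m0, m1 = mtf[idx - 1], mtf[idx]
    return f0 + (0.10 - m0) * (f1 - f0) / (m1 - m0)
```

The same interpolation applies to other threshold frequencies (e.g., f50) by changing the 0.10 constant.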


Subject(s)
Cone-Beam Computed Tomography/instrumentation , Imaging, Three-Dimensional , Radiation Dosage , Fluoroscopy , Humans , Intraoperative Period , Phantoms, Imaging
20.
Phys Med Biol ; 64(16): 165021, 2019 08 21.
Article in English | MEDLINE | ID: mdl-31287092

ABSTRACT

Intraoperative cone-beam CT (CBCT) is increasingly used for surgical navigation and validation of device placement. In spinal deformity correction, CBCT provides visualization of pedicle screws and fixation rods in relation to adjacent anatomy. This work reports and evaluates a method that uses prior information regarding such surgical instrumentation for improved metal artifact reduction (MAR). The known-component MAR (KC-MAR) approach achieves precise localization of instrumentation in projection images using rigid or deformable 3D-2D registration of component models, thereby overcoming residual errors associated with segmentation-based methods. Projection data containing metal components are processed via 2D inpainting of the detector signal, followed by 3D filtered back-projection (FBP). Phantom studies were performed to identify nominal algorithm parameters and quantitatively investigate performance over a range of component material composition and size. A cadaver study emulating screw and rod placement in spinal deformity correction was conducted to evaluate performance under realistic clinical imaging conditions. KC-MAR demonstrated reduction in artifacts (standard deviation in voxel values) across a range of component types and dose levels, reducing the artifact to 5-10 HU. Accurate component delineation was demonstrated for rigid (screw) and deformable (rod) models with sub-mm registration errors, and a single-pixel dilation of the projected components was found to compensate for partial-volume effects. Artifacts associated with spine screws and rods were reduced by 40%-80% in cadaver studies, and the resulting images demonstrated markedly improved visualization of instrumentation (e.g. screw threads) within cortical margins. The KC-MAR algorithm combines knowledge of surgical instrumentation with 3D image reconstruction in a manner that overcomes potential pitfalls of segmentation. 
The approach is compatible with FBP-thereby maintaining simplicity in a manner that is consistent with surgical workflow-or more sophisticated model-based reconstruction methods that could further improve image quality and/or help reduce radiation dose.
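The projection-domain inpainting step can be illustrated with the simplest possible stand-in: 1D linear interpolation across metal-masked detector pixels in each row. (KC-MAR derives the mask by forward-projecting registered component models and then dilating it by one pixel; the interpolation scheme below is only a schematic of the fill-in step.)

```python
import numpy as np

def inpaint_rows(projection, metal_mask):
    """Replace metal-shadowed detector pixels by 1D linear interpolation
    along each detector row, using the unmasked pixels as anchors."""
    out = projection.astype(float)
    cols = np.arange(out.shape[1])
    for r in range(out.shape[0]):
        bad = metal_mask[r].astype(bool)
        if bad.any() and not bad.all():
            out[r, bad] = np.interp(cols[bad], cols[~bad], out[r, ~bad])
    return out
```

The inpainted projections then feed directly into ordinary 3D filtered backprojection, which is what keeps the method compatible with the standard reconstruction pipeline.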


Subject(s)
Artifacts , Cone-Beam Computed Tomography , Metals , Radiographic Image Enhancement/methods , Aged , Algorithms , Humans , Imaging, Three-Dimensional , Male , Pedicle Screws , Phantoms, Imaging , Spine/surgery