Results 1 - 20 of 84
1.
Int J Oral Maxillofac Surg ; 52(7): 787-792, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36328865

ABSTRACT

The use of deep learning (DL) in medical imaging is becoming increasingly widespread. Although DL has been used previously for the segmentation of facial bones in computed tomography (CT) images, there are few reports of segmentation involving multiple areas. In this study, a U-Net was used to investigate the automatic segmentation of facial bones into eight areas, with the aim of facilitating virtual surgical planning (VSP) and computer-aided design and manufacturing (CAD/CAM) in maxillofacial surgery. CT data from 50 patients were prepared and used for training, and five-fold cross-validation was performed. The output results generated by the DL model were validated by Dice coefficient and average symmetric surface distance (ASSD). The automatic segmentation was successful in all cases, with a mean ± standard deviation Dice coefficient of 0.897 ± 0.077 and ASSD of 1.168 ± 1.962 mm. The accuracy was very high for the mandible (Dice coefficient 0.984, ASSD 0.324 mm) and zygomatic bones (Dice coefficient 0.931, ASSD 0.487 mm), and these could be introduced for VSP and CAD/CAM without any modification. The results for other areas, particularly the teeth, were slightly inferior, with possible reasons being the effects of defects, bonded maxillary and mandibular teeth, and metal artefacts. A limitation of this study is that the data were from a single institution. Hence, further research is required to improve the accuracy for some facial areas and to validate the results in larger and more diverse populations.
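For readers who want to reproduce the two evaluation metrics named above, the following is a minimal sketch of the Dice coefficient and average symmetric surface distance for binary segmentation masks stored as NumPy arrays; it is an illustrative implementation using SciPy distance transforms, not the authors' evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def assd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance (in the units of `spacing`)."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)          # boundary voxels of A
    surf_b = b & ~ndimage.binary_erosion(b)          # boundary voxels of B
    dt_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dt_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    d_ab, d_ba = dt_b[surf_a], dt_a[surf_b]          # surface-to-surface distances
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
```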


Subjects
Deep Learning; Tooth; Humans; Head; Mandible/diagnostic imaging; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods
2.
Phys Med Biol ; 65(24): 245019, 2020 12 12.
Article in English | MEDLINE | ID: mdl-32590372

ABSTRACT

Accurate and consistent mental interpretation of fluoroscopy to determine the position and orientation of acetabular bone fragments in 3D space is difficult. We propose a computer-assisted approach that uses a single fluoroscopic view and quickly reports the pose of an acetabular fragment without any user input or initialization. Intraoperatively, but prior to any osteotomies, two constellations of metallic ball-bearings (BBs) are injected into the wing of a patient's ilium and lateral superior pubic ramus. One constellation is located on the expected acetabular fragment, and the other is located on the remaining, larger, pelvis fragment. The 3D locations of each BB are reconstructed using three fluoroscopic views and 2D/3D registrations to a preoperative CT scan of the pelvis. The relative pose of the fragment is established by estimating the movement of the two BB constellations using a single fluoroscopic view taken after osteotomy and fragment relocation. BB detection and inter-view correspondences are automatically computed throughout the processing pipeline. The proposed method was evaluated on fluoroscopic images collected from six cadaveric surgeries performed bilaterally on three specimens. Mean fragment rotation error was 2.4 ± 1.0 degrees, mean translation error was 2.1 ± 0.6 mm, and mean 3D lateral center edge angle error was 1.0 ± 0.5 degrees. The average runtime of the single-view pose estimation was 0.7 ± 0.2 s. The proposed method demonstrates accuracy similar to other state-of-the-art systems that require optical tracking or multiple-view 2D/3D registrations with manual input. The errors reported on fragment poses and lateral center edge angles are within the margins required for accurate intraoperative evaluation of femoral head coverage.
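The fragment pose reported above is, at its core, a rigid transform between two point sets (the BB constellation before and after osteotomy). A minimal Kabsch/SVD sketch of that sub-step is shown below, assuming corresponding 3D BB coordinates are already available; the single-view estimation and 2D/3D registrations described in the abstract are not reproduced here.

```python
import numpy as np

def rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Least-squares rotation R and translation t mapping points P (N x 3) onto Q (N x 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical usage: bb_pre and bb_post are (N x 3) arrays of reconstructed BB
# positions for the fragment constellation before and after relocation.
# R, t = rigid_transform(bb_pre, bb_post)
```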


Subjects
Acetabulum/diagnostic imaging; Acetabulum/surgery; Fiducial Markers; Fluoroscopy; Osteotomy/standards; Automation; Humans; Imaging, Three-Dimensional; Intraoperative Period; Rotation; Time Factors; Tomography, X-Ray Computed
3.
Skin Res Technol ; 22(2): 181-8, 2016 May.
Article in English | MEDLINE | ID: mdl-26037969

ABSTRACT

BACKGROUND/PURPOSE: Skin movement artifact is a major problem in three-dimensional motion analysis. Furthermore, skin tension lines are important in plastic surgery. Skin tension depends upon the body area and the direction of resistance. From the perspective of skin continuity and clinical observation, we hypothesized that the contralateral sides of the skin of the extremities move in opposite directions. This study aimed to examine the kinematics of thigh skin, including movement direction, during pelvic sway. METHODS: Fifteen healthy men participated in this study. Kinematic data were obtained using a three-dimensional motion analysis system. To detect opposite skin movement, 42 markers were attached to the front, back, lateral, and medial sides of the thigh and pelvis. Front and back markers in the sagittal plane and lateral and medial markers in the frontal plane were arranged in lines connecting the hip and ankle joint centers, respectively. Subjects performed maximal pelvic movements in the anterior-posterior and rightward-leftward directions. RESULTS: The results showed that the front skin of the thigh was transferred upward and the back skin was transferred downward during anterior pelvic sway. Opposite skin movements were observed during posterior pelvic sway. We also found that the lateral skin was transferred upward and the medial skin downward during hip adduction, and vice versa during hip abduction. CONCLUSION: These findings suggest that the skin moves according to certain physiological rules.


Subjects
Anatomic Landmarks/anatomy & histology; Anatomic Landmarks/physiology; Joints/anatomy & histology; Joints/physiology; Movement/physiology; Range of Motion, Articular/physiology; Adult; Artifacts; Fiducial Markers; Humans; Imaging, Three-Dimensional/methods; Leg/physiology; Male; Reproducibility of Results; Sensitivity and Specificity; Thigh/anatomy & histology; Thigh/physiology
4.
Proc SPIE Int Soc Opt Eng; 9415, 2015 Feb 21.
Article in English | MEDLINE | ID: mdl-25991876

ABSTRACT

We present a system for registering the coordinate frame of an endoscope to pre- or intraoperatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against a moderate anatomical deformation.
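As a generic illustration of image-similarity scoring in such a registration (not necessarily the metric used in this paper), a normalized cross-correlation between the endoscopic frame and a candidate rendering could be computed as follows, assuming grayscale images of identical size.

```python
import numpy as np

def normalized_cross_correlation(rendered: np.ndarray, observed: np.ndarray) -> float:
    """NCC in [-1, 1] between a rendered (predicted) image and an observed frame."""
    a = rendered.astype(np.float64).ravel() - rendered.mean()
    b = observed.astype(np.float64).ravel() - observed.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```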

5.
Phys Med Biol ; 59(18): 5329-45, 2014 Sep 21.
Article in English | MEDLINE | ID: mdl-25146673

ABSTRACT

An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image+guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image+guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 µGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
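The gradient information similarity metric referenced above has one commonly used formulation that weights the smaller of the two local gradient magnitudes by an angular term favoring parallel or antiparallel gradients. The 2D sketch below follows that formulation and is offered as an illustration only; it is not the authors' GPU implementation.

```python
import numpy as np

def gradient_information(drr: np.ndarray, fluoro: np.ndarray, eps: float = 1e-9) -> float:
    """Gradient information similarity between two 2D images (larger = better aligned)."""
    gy1, gx1 = np.gradient(drr.astype(np.float64))
    gy2, gx2 = np.gradient(fluoro.astype(np.float64))
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    cos_a = np.clip((gx1 * gx2 + gy1 * gy2) / (mag1 * mag2 + eps), -1.0, 1.0)
    # Weight is 1 for (anti)parallel gradients, 0 for orthogonal gradients.
    w = 0.5 * (np.cos(2.0 * np.arccos(cos_a)) + 1.0)
    return float(np.sum(w * np.minimum(mag1, mag2)))
```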


Subjects
Algorithms; Cone-Beam Computed Tomography/methods; Fluoroscopy/methods; Imaging, Three-Dimensional/methods; Radiographic Image Enhancement/methods; Surgery, Computer-Assisted/methods; Humans; Phantoms, Imaging
6.
Phys Med Biol ; 59(14): 3761-87, 2014 Jul 21.
Article in English | MEDLINE | ID: mdl-24937093

ABSTRACT

Image-guided spine surgery (IGSS) is associated with reduced co-morbidity and improved surgical outcome. However, precise localization of target anatomy and adjacent nerves and vessels relative to planning information (e.g., device trajectories) can be challenged by anatomical deformation. Rigid registration alone fails to account for deformation associated with changes in spine curvature, and conventional deformable registration fails to account for rigidity of the vertebrae, causing unrealistic distortions in the registered image that can confound high-precision surgery. We developed and evaluated a deformable registration method capable of preserving rigidity of bones while resolving the deformation of surrounding soft tissue. The method aligns preoperative CT to intraoperative cone-beam CT (CBCT) using free-form deformation (FFD) with constraints on rigid body motion imposed according to a simple intensity threshold of bone intensities. The constraints enforced three properties of a rigid transformation, namely constraints on affinity (AC), orthogonality (OC), and properness (PC). The method also incorporated an injectivity constraint (IC) to preserve topology. Physical experiments involving phantoms, an ovine spine, and a human cadaver, as well as digital simulations, were performed to evaluate the sensitivity to registration parameters, preservation of rigid body morphology, and overall registration accuracy of constrained FFD in comparison to conventional unconstrained FFD (uFFD) and Demons registration. FFD with orthogonality and injectivity constraints (denoted FFD+OC+IC) demonstrated improved performance compared to uFFD and Demons. Affinity and properness constraints offered little or no additional improvement. The FFD+OC+IC method preserved rigid body morphology at near-ideal values of zero dilatation (D = 0.05, compared to 0.39 and 0.56 for uFFD and Demons, respectively) and shear (S = 0.08, compared to 0.36 and 0.44 for uFFD and Demons, respectively). Target registration error (TRE) was similarly improved for FFD+OC+IC (0.7 mm), compared to 1.4 and 1.8 mm for uFFD and Demons. Results were validated in human cadaver studies using CT and CBCT images, with FFD+OC+IC providing excellent preservation of rigid morphology and equivalent or improved TRE. The approach therefore overcomes distortions intrinsic to uFFD and could better facilitate high-precision IGSS.
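One way to picture the orthogonality constraint (OC) is as a penalty on the deviation of the local deformation Jacobian from a rotation, e.g. ||JᵀJ - I||² evaluated inside the bone mask. The sketch below computes such a penalty for a dense displacement field; it is a simplified stand-in for the constrained FFD optimization described above, with the field layout and mask being assumptions.

```python
import numpy as np

def orthogonality_penalty(disp: np.ndarray, bone_mask: np.ndarray,
                          spacing=(1.0, 1.0, 1.0)) -> float:
    """Mean ||J^T J - I||_F^2 over bone voxels for a displacement field of shape (3, Z, Y, X)."""
    eye = np.eye(3)
    # Jacobian of the mapping x -> x + u(x): J = I + grad(u).
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]            # grads[i][j] = du_i/dx_j
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2) + eye    # (..., 3, 3)
    dev = np.einsum('...ji,...jk->...ik', J, J) - eye                     # J^T J - I
    per_voxel = np.sum(dev ** 2, axis=(-2, -1))
    return float(per_voxel[bone_mask.astype(bool)].mean())
```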


Subjects
Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Spine/diagnostic imaging; Spine/surgery; Surgery, Computer-Assisted/methods; Humans; Movement
7.
Article in English | MEDLINE | ID: mdl-34211241

ABSTRACT

PURPOSE: A new method for accurately portraying the impact of low-dose imaging techniques in C-arm cone-beam CT (CBCT) is presented and validated, allowing identification of minimum-dose protocols suitable to a given imaging task on a patient-specific basis in scenarios that require repeat intraoperative scans. METHOD: To accurately simulate lower-dose techniques and account for object-dependent noise levels (x-ray quantum noise and detector electronics noise) and correlations (detector blur), noise of the proper magnitude and correlation was injected into the projections from an initial CBCT acquired at the beginning of a procedure. The resulting noisy projections were then reconstructed to yield low-dose preview (LDP) images that accurately depict the image quality at any level of reduced dose in both filtered backprojection and statistical image reconstruction. Validation studies were conducted on a mobile C-arm, with the noise injection method applied to images of an anthropomorphic head phantom and cadaveric torso across a range of lower-dose techniques. RESULTS: Comparison of preview and real CBCT images across a full range of techniques demonstrated accurate noise magnitude (within ~5%) and correlation (matching noise-power spectrum, NPS). Other image quality characteristics (e.g., spatial resolution, contrast, and artifacts associated with beam hardening and scatter) were also realistically presented at all levels of dose and across reconstruction methods, including statistical reconstruction. CONCLUSION: Generating low-dose preview images for a broad range of protocols gives a useful method to select minimum-dose techniques that accounts for complex factors of imaging task, patient-specific anatomy, and observer preference. The ability to accurately simulate the influence of low-dose acquisition in statistical reconstruction provides an especially valuable means of identifying low-dose limits in a manner that does not rely on a model for the nonlinear reconstruction process or a model of observer performance.
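A heavily simplified sketch of the quantum-noise portion of such an injection is given below, assuming projections are stored as line integrals with a known full-dose detected-count map per pixel, so that var(p) ≈ 1/N; the detector blur and electronic-noise correlation that the paper also models are omitted.

```python
import numpy as np

def inject_low_dose_noise(line_integrals: np.ndarray, counts_full_dose: np.ndarray,
                          dose_fraction: float, rng=None) -> np.ndarray:
    """Emulate projections acquired at dose_fraction (0 < f <= 1) of the original dose
    under an uncorrelated quantum-noise model in the line-integral domain."""
    rng = np.random.default_rng() if rng is None else rng
    # Additional variance needed so total variance matches 1/(f*N) instead of 1/N.
    extra_var = (1.0 / dose_fraction - 1.0) / np.maximum(counts_full_dose, 1.0)
    return line_integrals + rng.normal(0.0, np.sqrt(extra_var))
```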

8.
Phys Med Biol ; 59(2): 271-87, 2014 Jan 20.
Article in English | MEDLINE | ID: mdl-24351769

ABSTRACT

An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separations Δθ ranging from ∼0° to 180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
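The CMA-ES search over the six pose parameters can be sketched with the open-source `cma` package; `make_drr` below is a hypothetical projector, and `similarity` could be a metric like the gradient-information sketch given under item 5. This is a generic ask/tell loop, not the registration framework itself.

```python
import numpy as np
import cma  # pip install cma

def register_3d2d(projections, geometries, volume, make_drr, similarity,
                  x0=np.zeros(6), sigma0=5.0):
    """Estimate a 6-DOF pose (rotations [deg], translations [mm]) maximizing the
    summed similarity between measured projections and DRRs of the CT volume."""
    def cost(pose):
        # CMA-ES minimizes, so negate the similarity.
        return -sum(similarity(make_drr(volume, pose, g), p)
                    for p, g in zip(projections, geometries))

    es = cma.CMAEvolutionStrategy(x0, sigma0, {'maxiter': 200, 'verbose': -9})
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [cost(np.asarray(c)) for c in candidates])
    return np.asarray(es.result.xbest)
```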


Subjects
Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted; Tomography, X-Ray Computed/methods; Imaging, Three-Dimensional/instrumentation; Tomography, X-Ray Computed/instrumentation
9.
Phys Med Biol ; 58(14): 4951-79, 2013 Jul 21.
Article in English | MEDLINE | ID: mdl-23807549

ABSTRACT

Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base-of-tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam computed tomography (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e. volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC) and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid and Demons steps was 4.6, 2.1 and 1.7 mm, respectively. The respective ECC was 0.57, 0.70 and 0.73, and NPMI was 0.46, 0.57 and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. Since the method does not use image intensities directly, it is suitable to multi-modality registration of preoperative CT or MR with intraoperative CBCT. Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support safer, high-precision base-of-tongue robotic surgery.
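The Demons refinement described above operates on distance-map transforms of the segmented anatomy rather than on raw intensities. A minimal SimpleITK sketch of that idea is shown below, assuming binary segmentations of the structure of interest in both images; it illustrates the general approach only, not the authors' implementation.

```python
import SimpleITK as sitk

def demons_on_distance_maps(fixed_mask: sitk.Image, moving_mask: sitk.Image,
                            iterations: int = 100, smoothing_sigma: float = 1.5):
    """Register two binary segmentations by running Demons on their signed distance maps."""
    fixed_dm = sitk.SignedMaurerDistanceMap(fixed_mask, insideIsPositive=False,
                                            squaredDistance=False, useImageSpacing=True)
    moving_dm = sitk.SignedMaurerDistanceMap(moving_mask, insideIsPositive=False,
                                             squaredDistance=False, useImageSpacing=True)
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(smoothing_sigma)   # Gaussian regularization of the field
    displacement = demons.Execute(fixed_dm, moving_dm)
    return sitk.DisplacementFieldTransform(displacement)
```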


Subjects
Cone-Beam Computed Tomography/instrumentation; Image Processing, Computer-Assisted/methods; Robotics; Surgery, Computer-Assisted/instrumentation; Tongue/diagnostic imaging; Tongue/surgery; Adult; Algorithms; Humans; Male
10.
Proc SPIE Int Soc Opt Eng ; 8668: 86681L, 2013.
Article in English | MEDLINE | ID: mdl-24949189

ABSTRACT

Nonlinear partial volume (NLPV) effects can be significant for objects with large attenuation differences and fine detail structures near the spatial resolution limits of a tomographic system. This is particularly true for small metal devices like cochlear implants. While traditional model-based approaches might alleviate these artifacts through very fine sampling of the image volume and subsampling of rays to each detector element, such solutions can be extremely burdensome in terms of memory and computational requirements. The work presented in this paper leverages the model-based approach called "known-component reconstruction" (KCR), in which prior knowledge of a surgical device is integrated into the estimation. In KCR, the parameterization of the object separates the volume into an unknown background anatomy and a known component with unknown registration. Thus, one can model projections of an implant at very high spatial resolution while limiting the spatial resolution of the anatomy, in effect modeling NLPV effects where they are most significant. We present modifications of the KCR approach that can be used to largely eliminate NLPV artifacts, and demonstrate the efficacy of the modified technique (with improved image quality and accurate implant position estimates) for the cochlear implant imaging scenario.

11.
Int J Comput Assist Radiol Surg ; 8(1): 1-13, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22585463

ABSTRACT

PURPOSE: A novel electromagnetic tracking configuration was characterized and implemented for image-guided surgery incorporating C-arm fluoroscopy and/or cone-beam CT (CBCT). The tracker employed a field generator (FG) with an open rectangular aperture and a frame enclosure with two essentially hollow sides, yielding a design that presents little or no X-ray attenuation across the C-arm orbit. The "Window" FG (WFG) was characterized in comparison with a conventional "Aurora" FG (AFG), and a configuration in which the WFG was incorporated directly into the operating table was investigated in preclinical phantom studies. METHOD: The geometric accuracy and field of view (FOV) of the WFG and AFG were evaluated in terms of target registration error (TRE) using an acrylic phantom on an (electromagnetically compatible) experimental bench. The WFG design was incorporated in a prototype operating table featuring a carbon fiber top beneath which the FG could be translated for positioning under the patient. The X-ray compatibility was evaluated using a prototype mobile C-arm for fluoroscopy and CBCT in an anthropomorphic chest phantom. The susceptibility to EM field distortion associated with surgical tools (e.g., spine screws) and the C-arm itself was investigated in terms of TRE, and calibration methods were tested to provide robust image-world registration with minimal perturbation from the rotational C-arm. RESULTS: The WFG demonstrated mean TRE of 1.28 ± 0.79 mm compared to 1.13 ± 0.72 mm for the AFG, with no statistically significant difference between the two (p = 0.32, n = 250). The WFG exhibited a deeper field of view by ~10 cm, providing an equivalent degree of geometric accuracy to a depth of z ~55 cm, compared to z ~45 cm for the AFG. Although the presence of a small number of spine screws did not degrade tracker accuracy, the mobile C-arm perturbed the electromagnetic field sufficiently to degrade TRE; however, a calibration method was identified to mitigate the effect. Specifically, the average calibration between posterior-anterior and lateral orientations of the C-arm was found to yield fairly robust registration for any C-arm pose with only a slight reduction in geometric accuracy (1.43 ± 0.31 mm in comparison with 1.28 ± 0.79 mm, p = 0.05). The WFG demonstrated reasonable X-ray compatibility, although the initial design of the window frame included suboptimal material and shape of the side bars, which caused streak artifacts in CBCT reconstructions. The streak artifacts were of sufficient magnitude to degrade soft-tissue visibility in CBCT but were negligible in the context of high-contrast imaging tasks (e.g., bone visualization). CONCLUSION: The open frame of the WFG offers a potentially valuable configuration for electromagnetic trackers in image-guided surgery applications that are based on X-ray fluoroscopy and/or CBCT. The geometric accuracy and FOV are comparable to those of the conventional AFG, with increased depth (z-direction) FOV. Incorporation directly within the operating table offers a streamlined implementation in which the tracker is in place but "invisible," potentially simplifying tableside logistics, avoidance of the sterile field, and compatibility with X-ray imaging.


Subjects
Cone-Beam Computed Tomography; Fluoroscopy; Imaging, Three-Dimensional/methods; Operating Tables; Radiographic Image Enhancement/methods; Surgery, Computer-Assisted/instrumentation; Calibration; Equipment Design; Humans; Phantoms, Imaging
12.
Med Phys ; 39(10): 6484-98, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23039683

ABSTRACT

PURPOSE: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. METHODS: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method was tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. RESULTS: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all anatomical sites, including challenging scenarios involving the presence of interventional tools. The reprojection error of marker localization was independent of the distance of the ARM from isocenter, and the overall TRE was dominated by the configuration of individual fiducials and distance from the target as predicted by theory. The median TRE increased with greater ARM-to-isocenter distance (e.g., for the Free-Form method, TRE increasing from 0.78 mm to 2.04 mm at distances of ∼75 mm and 370 mm, respectively). The median TRE within ∼200 mm distance was consistently lower than that of the manual method (TRE = 0.82 mm). Registration performance was independent of anatomical site (head, thorax, and abdomen). The Free-Form method demonstrated a statistically significant improvement (p = 0.0044) in reproducibility compared to manual registration (0.22 mm versus 0.30 mm, respectively). CONCLUSIONS: Automatic image-to-world registration methods demonstrate the potential for improved accuracy, reproducibility, and workflow in CBCT-guided procedures. A Free-Form method was shown to exhibit robustness against anatomical site, with comparable or improved TRE compared to manual registration. It was also comparable or superior in performance to a Known-Model method in which the ARM configuration is specified as a predefined tool, thereby allowing configuration of fiducials on the fly or attachment to the patient.
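Marker localization in a single projection can be illustrated with a standard Hough circle transform (OpenCV); the generic sketch below returns candidate centers of roughly circular, highly attenuating markers and does not reproduce the robust, extended Hough approach used in the paper.

```python
import cv2
import numpy as np

def detect_spherical_markers(projection: np.ndarray, min_radius=3, max_radius=12):
    """Return (x, y) pixel centers of circular marker candidates in one projection."""
    # Normalize to 8-bit and invert so highly attenuating (dark) markers appear bright.
    img = cv2.normalize(projection, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    img = cv2.medianBlur(255 - img, 5)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=2 * max_radius,
                               param1=100, param2=20,
                               minRadius=min_radius, maxRadius=max_radius)
    return np.empty((0, 2)) if circles is None else circles[0, :, :2]
```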


Subjects
Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Automation; Cone-Beam Computed Tomography/instrumentation; Cone-Beam Computed Tomography/standards; Fiducial Markers; Humans; Image Processing, Computer-Assisted/instrumentation; Image Processing, Computer-Assisted/standards; Least-Squares Analysis; Linear Models
13.
Phys Med Biol ; 57(17): 5485-508, 2012 Sep 07.
Article in English | MEDLINE | ID: mdl-22864366

ABSTRACT

Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy; it is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD <5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.


Subjects
Fluoroscopy/methods; Imaging, Three-Dimensional/methods; Spine/diagnostic imaging; Spine/surgery; Surgery, Computer-Assisted/methods; Automation; Humans; Phantoms, Imaging; Tomography, X-Ray Computed
14.
Int J Comput Assist Radiol Surg ; 7(5): 647-65, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22539008

ABSTRACT

PURPOSE: Conventional tracker configurations for surgical navigation carry a variety of limitations, including limited geometric accuracy, line-of-sight obstruction, and mismatch of the view angle with the surgeon's-eye view. This paper presents the development and characterization of a novel tracker configuration (referred to as "Tracker-on-C") intended to address such limitations by incorporating the tracker directly on the gantry of a mobile C-arm for fluoroscopy and cone-beam CT (CBCT). METHODS: A video-based tracker (MicronTracker, Claron Technology Inc., Toronto, ON, Canada) was mounted on the gantry of a prototype mobile isocentric C-arm next to the flat-panel detector. To maintain registration within a dynamically moving reference frame (due to rotation of the C-arm), a reference marker consisting of 6 faces (referred to as a "hex-face marker") was developed to give visibility across the full range of C-arm rotation. Three primary functionalities were investigated: surgical tracking, generation of digitally reconstructed radiographs (DRRs) from the perspective of a tracked tool or the current C-arm angle, and augmentation of the tracker video scene with image, DRR, and planning data. Target registration error (TRE) was measured in comparison with the same tracker implemented in a conventional in-room configuration. Graphics processing unit (GPU)-accelerated DRRs were generated in real time as an assistant to C-arm positioning (i.e., positioning the C-arm such that target anatomy is in the field-of-view (FOV)), radiographic search (i.e., a virtual X-ray projection preview of target anatomy without X-ray exposure), and localization (i.e., visualizing the location of the surgical target or planning data). Video augmentation included superimposing tracker data, the X-ray FOV, DRRs, planning data, preoperative images, and/or intraoperative CBCT onto the video scene. Geometric accuracy was quantitatively evaluated in each case, and qualitative assessment of clinical feasibility was analyzed by an experienced and fellowship-trained orthopedic spine surgeon within a clinically realistic surgical setup of the Tracker-on-C. RESULTS: The Tracker-on-C configuration demonstrated improved TRE (0.87 ± 0.25 mm) in comparison with a conventional in-room tracker setup (1.92 ± 0.71 mm) (p < 0.0001), attributed primarily to improved depth resolution of the stereoscopic camera placed closer to the surgical field. The hex-face reference marker maintained registration across the 180° C-arm orbit (TRE = 0.70 ± 0.32 mm). DRRs generated from the perspective of the C-arm X-ray detector demonstrated sub-mm accuracy (0.37 ± 0.20 mm) in correspondence with the real X-ray image. Planning data and DRRs overlaid on the video scene exhibited accuracy of 0.59 ± 0.38 pixels and 0.66 ± 0.36 pixels, respectively. Preclinical assessment suggested potential utility of the Tracker-on-C in a spectrum of interventions, including improved line of sight, an assistant to C-arm positioning, and faster target localization, while reducing X-ray exposure time. CONCLUSIONS: The proposed tracker configuration demonstrated sub-mm TRE from the dynamic reference frame of a rotational C-arm through the use of the multi-face reference marker. Real-time DRRs and video augmentation from a natural perspective over the operating table assisted C-arm setup, simplified radiographic search and localization, and reduced fluoroscopy time. Incorporation of the proposed tracker configuration with C-arm CBCT guidance has the potential to simplify intraoperative registration, improve geometric accuracy, enhance visualization, and reduce radiation exposure.
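As a rough illustration of DRR generation (here a crude parallel-beam approximation rather than the GPU-accelerated perspective projections described above), one can rotate the attenuation volume to the desired view and integrate along the ray axis; SciPy is assumed.

```python
import numpy as np
from scipy import ndimage

def parallel_beam_drr(mu_volume: np.ndarray, angle_deg: float, axes=(0, 2)) -> np.ndarray:
    """Crude parallel-beam DRR: rotate the attenuation volume (1/mm, indexed Z/Y/X)
    in the plane of `axes` and sum along the depth axis to form a line-integral image."""
    rotated = ndimage.rotate(mu_volume, angle_deg, axes=axes,
                             reshape=False, order=1, mode='constant', cval=0.0)
    return rotated.sum(axis=axes[1])
```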


Subjects
Radiographic Image Enhancement/instrumentation; Surgery, Computer-Assisted/instrumentation; X-Ray Intensifying Screens; Cone-Beam Computed Tomography; Equipment Design; Fluoroscopy; Humans; Imaging, Three-Dimensional; Radiographic Image Interpretation, Computer-Assisted/methods
15.
Int J Comput Assist Radiol Surg ; 7(1): 159-73, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21744085

ABSTRACT

PURPOSE: A system architecture has been developed for integration of intraoperative 3D imaging [viz., mobile C-arm cone-beam CT (CBCT)] with surgical navigation (e.g., trackers, endoscopy, and preoperative image and planning data). The goal of this paper is to describe the architecture and its handling of a broad variety of data sources in modular tool development for streamlined use of CBCT guidance in application-specific surgical scenarios. METHODS: The architecture builds on two proven open-source software packages, namely the cisst package (Johns Hopkins University, Baltimore, MD) and 3D Slicer (Brigham and Women's Hospital, Boston, MA), and combines data sources common to image-guided procedures with intraoperative 3D imaging. Integration at the software component level is achieved through language bindings to a scripting language (Python) and an object-oriented approach to abstract and simplify the use of devices with varying characteristics. The platform aims to minimize offline data processing and to expose quantitative tools that analyze and communicate factors of geometric precision online. Modular tools are defined to accomplish specific surgical tasks, demonstrated in three clinical scenarios (temporal bone, skull base, and spine surgery) that involve a progressively increased level of complexity in toolset requirements. RESULTS: The resulting architecture (referred to as "TREK") hosts a collection of modules developed according to application-specific surgical tasks, emphasizing streamlined integration with intraoperative CBCT. These include multi-modality image display; 3D-3D rigid and deformable registration to bring preoperative image and planning data to the most up-to-date CBCT; 3D-2D registration of planning and image data to real-time fluoroscopy; infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; and real-time "virtual fluoroscopy" computed from GPU-accelerated digitally reconstructed radiographs (DRRs). Application in three preclinical scenarios (temporal bone, skull base, and spine surgery) demonstrates the utility of the modular, task-specific approach in progressively complex tasks. CONCLUSIONS: The design and development of a system architecture for image-guided surgery has been reported, demonstrating enhanced utilization of intraoperative CBCT in surgical applications with vastly different requirements. The system integrates C-arm CBCT with a broad variety of data sources in a modular fashion that streamlines the interface to application-specific tools, accommodates distinct workflow scenarios, and accelerates testing and translation of novel toolsets to clinical use. The modular architecture was shown to adapt to and satisfy the requirements of distinct surgical scenarios from a common code-base, leveraging software components arising from over a decade of effort within the imaging and computer-assisted interventions community.
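The object-oriented abstraction of heterogeneous data sources described above can be pictured as a thin interface layer from which task-specific modules are composed; the class names below are hypothetical and do not reflect the actual TREK, cisst, or 3D Slicer APIs.

```python
from abc import ABC, abstractmethod

class DataSource(ABC):
    """Common interface for intraoperative data sources (tracker, video, CBCT, ...)."""
    @abstractmethod
    def connect(self) -> None: ...
    @abstractmethod
    def latest(self):
        """Return the most recent sample (e.g., a pose, frame, or volume)."""

class TrackerSource(DataSource):
    def connect(self) -> None:
        pass                                    # placeholder: open the tracker connection
    def latest(self):
        return {"tool_pose": None}              # placeholder pose sample

class TaskModule:
    """A task-specific module composed from whichever sources it requires."""
    def __init__(self, *sources: DataSource):
        self.sources = sources
        for source in sources:
            source.connect()
    def update(self):
        return [source.latest() for source in self.sources]
```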


Subjects
Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/methods; Radiography, Interventional/methods; Surgery, Computer-Assisted/methods; Algorithms; Humans; Software; Surgery, Computer-Assisted/instrumentation; Systems Integration; Workflow
16.
Med Phys; 39(6, Part 28): 3972-3973, 2012 Jun.
Article in English | MEDLINE | ID: mdl-28519618

ABSTRACT

PURPOSE: Imaging in the presence of implants (instrumentation and prostheses) presents a notoriously difficult challenge to CT because of photon starvation and beam hardening. To alleviate these limitations, a statistical reconstruction approach that includes knowledge of implant shape and composition was previously reported. This work extends the approach to modeling of photon transport, including polychromatic x-ray beams and scatter, and evaluates the method in simulated and real data. METHODS: Previous work on Known-Component Reconstruction (KCR) is first extended to include a polyenergetic beam (KCR-POLY). The method simultaneously estimates the unknown background volume and the position of implants with known attenuation and shape. Simulations included an anthropomorphic knee with a Co-Cr-Mo implant and a system model for an extremities CT system (110 kVp + 0.2 mm Cu). Experimental validation was performed on an imaging bench in which a titanium spine fixation rod (65 mm long, 5.5 mm diameter) was imaged within a 20.5 cm diameter water cylinder (120 kVp + 0.2 mm Cu) in a geometry simulating an interventional C-arm. RESULTS: The polyenergetic system model was essential to high image quality in KCR reconstructions of large, highly attenuating implants such as knee prostheses and spine instrumentation, where standard penalized-likelihood and monoenergetic variants of KCR fail. The first application of KCR-POLY in real data demonstrates the potential of the algorithm in practice, reducing or eliminating artifacts and restoring image uniformity. CONCLUSIONS: The KCR-POLY algorithm yielded a major reduction in metal artifacts, owing both to a priori component knowledge (the implant) and to accounting for the polyenergetic beam, object attenuation, and x-ray scatter. Ongoing research focuses on improvements to the registration algorithm, scatter modeling, and experimental studies with complex, deformable implants. The work supports application of CT to a range of applications conventionally prohibited by metal implants, e.g., surgical guidance or diagnostic imaging of joints with prostheses. This work was supported in part by NIH 2R01-CA-112163.
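The polyenergetic forward model underlying such an approach can be written per ray as y = Σ_E S(E) exp(−Σ_m μ_m(E) ℓ_m); the sketch below evaluates that expression for one ray over discrete energy bins, with scatter and detector response deliberately omitted, and is illustrative rather than the KCR-POLY implementation.

```python
import numpy as np

def polyenergetic_signal(spectrum: np.ndarray, mu: np.ndarray, path_lengths: np.ndarray) -> float:
    """Expected detector signal for one ray under a polyenergetic beam.
    spectrum:     (n_E,) photon fluence per energy bin
    mu:           (n_E, n_materials) linear attenuation per bin and material (1/mm)
    path_lengths: (n_materials,) ray intersection length with each material (mm)
    """
    line_integrals = mu @ path_lengths            # (n_E,) total attenuation per energy bin
    return float(np.sum(spectrum * np.exp(-line_integrals)))
```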

17.
Proc SPIE Int Soc Opt Eng; 8316, 2012 Feb 04.
Article in English | MEDLINE | ID: mdl-26166930

ABSTRACT

This paper proposes to utilize a patient-specific prior to augment intraoperative sparse-scan data and accurately reconstruct the regions changed by a surgical procedure in image-guided surgery. When anatomical changes are introduced by a surgical procedure, only a sparse set of x-ray images is acquired, and the prior volume is registered to these data. Since all information about the patient anatomy except the surgical change is already known from the prior volume, we highlight only the change by creating difference images between the new scan and digitally reconstructed radiographs (DRRs) computed from the registered prior volume. The region of change (RoC) is reconstructed from these sparse difference images by a penalized likelihood (PL) reconstruction method regularized by a compressed sensing penalty. When the surgical changes are local and relatively small, the RoC reconstruction involves only a small volume size and a small number of projections, allowing much faster computation and lower radiation dose than is needed to reconstruct the entire surgical volume. The reconstructed RoC is then merged with the prior volume to visualize an updated surgical field. We apply this novel approach to sacroplasty phantom data obtained from a cone-beam CT (CBCT) test bench and to vertebroplasty data from a fresh cadaver acquired on a C-arm CBCT system with a flat-panel detector (FPD).

18.
Proc SPIE Int Soc Opt Eng; 8313, 2012 Feb 04.
Article in English | MEDLINE | ID: mdl-26203201

ABSTRACT

Because tomographic reconstructions are ill-conditioned, algorithms that incorporate additional knowledge about the imaging volume generally yield improved image quality. This is particularly true when measurements are noisy or have missing data. This paper presents a general reconstruction framework for including attenuation contributions from objects known to be in the field-of-view. Components such as surgical devices and tools may be modeled explicitly as part of the attenuating volume but are inexactly known with respect to their locations, poses, and possible deformations. The proposed reconstruction framework, referred to as Known-Component Reconstruction (KCR), is based on this novel parameterization of the object, a likelihood-based objective function, and alternating optimizations between registration and image parameters to jointly estimate both the underlying attenuation and the unknown registrations. A deformable KCR (dKCR) approach is introduced that adopts a control point-based warping operator to accommodate shape mismatches between the component model and the physical component, thereby allowing for a more general class of inexactly known components. The KCR and dKCR approaches are applied to low-dose cone-beam CT data with spine fixation hardware present in the imaging volume. Such data are particularly challenging due to photon starvation effects in projection data behind the metallic components. The proposed algorithms are compared with traditional filtered-backprojection and penalized-likelihood reconstructions and found to provide substantially improved image quality. Whereas traditional approaches exhibit significant artifacts that complicate detection of breaches or fractures near metal, the KCR framework tends to provide good visualization of anatomy right up to the boundary of surgical devices.

19.
Article in English | MEDLINE | ID: mdl-37621997

ABSTRACT

The ability to perform fast, accurate, deformable registration with intraoperative images featuring surgical excisions was investigated for use in cone-beam CT (CBCT) guided head and neck surgery. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions; they typically simply "move" voxels within the images, with no mechanism for tissue that is removed (or introduced) between scans. We have thus developed an approach in which an extra dimension is added during the registration process to act as a sink for voxels removed during the course of the procedure. A series of cadaveric images acquired using a prototype CBCT-capable C-arm was used to model tissue deformation and excision occurring during a surgical procedure, and the ability of deformable registration to correctly account for anatomical changes under these conditions was investigated. Using a previously developed version of the Demons deformable registration algorithm, we identify the difficulties that traditional registration algorithms encounter when faced with excised tissue and present a modified version of the algorithm better suited for use in intraoperative image-guided procedures. Studies were performed for different deformation and tissue excision tasks, and registration performance was quantified in terms of the ability to accurately account for tissue excision while avoiding spurious deformations arising around the excision.
