Results 1 - 20 of 23
1.
BMC Musculoskelet Disord ; 21(1): 103, 2020 Feb 15.
Article in English | MEDLINE | ID: mdl-32061248

ABSTRACT

BACKGROUND: Computer-assisted solutions are continuously changing surgical practice. One of the most disruptive technologies among computer-integrated surgical techniques is Augmented Reality (AR). While AR is increasingly used in several medical specialties, its potential benefit in orthopedic surgery is not yet clear. The purpose of this article is to provide a systematic review of the current state of knowledge and the applicability of AR in orthopedic surgery. METHODS: A systematic search of three databases was performed: PubMed, Cochrane Library, and Web of Science. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was registered in the international prospective register of systematic reviews (PROSPERO). RESULTS: 31 studies and reports are included and classified into the following categories: Instrument/Implant Placement, Osteotomies, Tumor Surgery, Trauma, and Surgical Training and Education. Quality assessment could be performed for 18 studies. Among the clinical studies, there were six case series with an average score of 90% and one case report, which scored 81%, according to the Joanna Briggs Institute Critical Appraisal Checklist (JBI CAC). The 11 cadaveric studies scored 81% according to the QUACS scale (Quality Appraisal for Cadaveric Studies). CONCLUSION: This manuscript provides (1) a summary of the current state of knowledge and research on Augmented Reality in orthopedic surgery as presented in the literature, and (2) a discussion by the authors of the key considerations for seamless integration of Augmented Reality into future surgical practice. TRIAL REGISTRATION: PROSPERO registration number: CRD42019128569.


Subjects
Augmented Reality; Orthopedic Procedures/methods; Surgery, Computer-Assisted/methods; Humans; Imaging, Three-Dimensional/methods; Surgeons/education; Virtual Reality
2.
J Arthroplasty ; 35(6): 1636-1641.e3, 2020 06.
Article in English | MEDLINE | ID: mdl-32063415

ABSTRACT

BACKGROUND: Malposition of the acetabular component of a hip prosthesis can lead to poor outcomes. Traditional placement with fluoroscopic guidance results in a 35% malpositioning rate. We compared the (1) accuracy and precision of component placement, (2) procedure time, (3) radiation dose, and (4) usability of a novel 3-dimensional augmented reality (AR) guidance system vs standard fluoroscopic guidance for acetabular component placement. METHODS: We simulated component placement using a radiopaque foam pelvis. Cone-beam computed tomographic data and optical data from a red-green-blue-depth camera were coregistered to create the AR environment. Eight orthopedic surgery trainees completed component placement using both methods. We measured component position (inclination, anteversion), procedure time, radiation dose, and usability (System Usability Scale score, Surgical Task Load Index value). Alpha = .05. RESULTS: Compared with the fluoroscopic technique, the AR technique was significantly more accurate for achieving target inclination (P = .01) and anteversion (P = .02) and more precise for achieving target anteversion (P < .01). The AR technique was faster (mean ± standard deviation, 1.8 ± 0.25 vs 3.9 ± 1.6 minutes; P < .01), and participants rated it as significantly easier to use according to both scales (P < .05). Radiation dose was not significantly different between techniques (P = .48). CONCLUSION: A novel 3-dimensional AR guidance system produced more accurate inclination and anteversion and more precise anteversion in the placement of the acetabular component of a hip prosthesis. AR guidance was faster and easier to use than standard fluoroscopic guidance and did not involve a greater radiation dose.


Subjects
Arthroplasty, Replacement, Hip; Augmented Reality; Hip Prosthesis; Acetabulum/diagnostic imaging; Acetabulum/surgery; Humans; Retrospective Studies
3.
IEEE Trans Med Imaging ; 40(11): 3165-3177, 2021 11.
Article in English | MEDLINE | ID: mdl-34181536

ABSTRACT

Image stitching is a prominent challenge in medical imaging, where the limited field-of-view captured by single images prohibits holistic analysis of patient anatomy. The barrier that prevents straightforward mosaicing of 2D images is depth mismatch due to parallax. In this work, we leverage the Fourier slice theorem to aggregate information from multiple transmission images in parallax-free domains using fundamental principles of X-ray image formation. The details of the stitched image are subsequently restored using a novel deep learning strategy that exploits similarity measures designed around frequency, as well as dense and sparse spatial image content. Our work provides evidence that reconstruction of orthographic mosaics is possible with realistic motions of the C-arm involving both translation and rotation. We also show that these orthographic mosaics enable metric measurements of clinically relevant quantities directly on the 2D image plane.
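For context, the Fourier slice (projection-slice) theorem that this work leverages can be stated for a 2D object as follows; the notation here is generic, not taken from the paper:

```latex
% Projection-slice theorem: the 1D Fourier transform of a parallel
% projection p_theta of f equals a central slice, at angle theta,
% of the 2D Fourier transform of f.
\begin{aligned}
p_\theta(s) &= \int_{-\infty}^{\infty}
  f\bigl(s\cos\theta - u\sin\theta,\; s\sin\theta + u\cos\theta\bigr)\,\mathrm{d}u,\\
\hat{p}_\theta(\omega) &= \int_{-\infty}^{\infty} p_\theta(s)\,
  e^{-i\omega s}\,\mathrm{d}s
  \;=\; \hat{f}\bigl(\omega\cos\theta,\; \omega\sin\theta\bigr).
\end{aligned}
```

Because each projection samples a slice of the same spectrum, information from multiple transmission images can be aggregated in a common, parallax-free frequency domain.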


Subjects
Algorithms; Humans; X-Rays
4.
IEEE Trans Med Imaging ; 40(2): 765-778, 2021 02.
Article in English | MEDLINE | ID: mdl-33166252

ABSTRACT

Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. Augmented reality (AR) has been introduced in operating rooms in the last decade; however, in image-guided interventions it has often been considered only a visualization device that improves traditional workflows. As a consequence, the technology has gained little of the maturity it requires to redefine new procedures, user interfaces, and interactions. The main contribution of this paper is to reveal how exemplary workflows are redefined by taking full advantage of head-mounted displays when entirely co-registered with the imaging system at all times. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. Our system achieved an error of 4.76 ± 2.91 mm for placing a K-wire in a fracture management procedure, and yielded errors of 1.57 ± 1.16° and 1.46 ± 1.00° in the abduction and anteversion angles, respectively, for total hip arthroplasty (THA). We compared the results with the outcomes from baseline standard operative and non-immersive AR procedures, which had yielded errors of [4.61 mm, 4.76°, 4.77°] and [5.13 mm, 1.78°, 1.43°], respectively, for wire placement, and abduction and anteversion during THA. We hope that our holistic approach to improving the interface of surgery not only augments the surgeon's capabilities but also the surgical team's experience in carrying out an effective intervention with reduced complications, and provides novel approaches for documenting procedures for training purposes.


Subjects
Augmented Reality; Surgery, Computer-Assisted; Humans
5.
Med Image Anal ; 72: 102127, 2021 08.
Article in English | MEDLINE | ID: mdl-34147832

ABSTRACT

We present a novel methodology to detect imperfect bilateral symmetry in CT of human anatomy. In this paper, the structurally symmetric nature of the pelvic bone is explored and used to provide interventional image augmentation for treatment of unilateral fractures in patients with traumatic injuries. The mathematical basis of our solution is the incorporation of attributes and characteristics that satisfy the properties of intrinsic and extrinsic symmetry and are robust to outliers. In the first step, feature points that satisfy intrinsic symmetry are automatically detected in the Möbius space defined on the CT data. These features are then pruned via a two-stage RANSAC to attain correspondences that also satisfy extrinsic symmetry. Then, a disparity function based on Tukey's biweight robust estimator is introduced and minimized to identify a symmetry plane parametrization that yields maximum contralateral similarity. Finally, a novel regularization term is introduced to enhance similarity between bone density histograms across the partial symmetry plane, relying on the important biological observation that, even if injured, the dislocated bone segments remain within the body. Our extensive evaluations on various cases of common fracture types demonstrate the validity of the novel concepts and the accuracy of the proposed method.
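Tukey's biweight estimator mentioned in this abstract bounds the influence of outlying correspondences by flattening the loss beyond a cutoff. A minimal sketch of the loss and its reweighting function, using the conventional cutoff c ≈ 4.685 for unit-variance residuals (generic illustration, not the paper's implementation):

```python
import numpy as np

def tukey_biweight(r, c=4.685):
    """Tukey's biweight rho: a bounded loss that caps outlier influence.

    Quadratic-like near zero, constant (c^2 / 6) beyond |r| > c, so a
    single gross outlier cannot dominate the objective.
    """
    r = np.asarray(r, dtype=float)
    inside = np.abs(r) <= c
    return np.where(inside, (c**2 / 6) * (1 - (1 - (r / c) ** 2) ** 3), c**2 / 6)

def tukey_weight(r, c=4.685):
    """IRLS weight w(r): smoothly decays to exactly zero beyond the cutoff."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= c, (1 - (r / c) ** 2) ** 2, 0.0)
```

In an iteratively reweighted least-squares plane fit, `tukey_weight` would down-weight candidate symmetric point pairs with large contralateral disparity, which is what makes the plane estimate robust to mismatched features.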


Subjects
Fractures, Bone; Pelvic Bones; Algorithms; Fractures, Bone/diagnostic imaging; Humans; Imaging, Three-Dimensional
6.
Mach Learn Med Imaging ; 12436: 281-291, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33145587

ABSTRACT

Traditional intensity-based 2D/3D registration requires near-perfect initialization in order for image similarity metrics to yield meaningful updates of X-ray pose and reduce the likelihood of getting trapped in a local minimum. The conventional approaches strongly depend on image appearance rather than content, and therefore, fail in revealing large pose offsets that substantially alter the appearance of the same structure. We complement traditional similarity metrics with a convolutional neural network-based (CNN-based) registration solution that captures large-range pose relations by extracting both local and contextual information, yielding meaningful X-ray pose updates without the need for accurate initialization. To register a 2D X-ray image and a 3D CT scan, our CNN accepts a target X-ray image and a digitally reconstructed radiograph at the current pose estimate as input and iteratively outputs pose updates in the direction of the pose gradient on the Riemannian Manifold. Our approach integrates seamlessly with conventional image-based registration frameworks, where long-range relations are captured primarily by our CNN-based method while short-range offsets are recovered accurately with an image similarity-based method. On both synthetic and real X-ray images of the human pelvis, we demonstrate that the proposed method can successfully recover large rotational and translational offsets, irrespective of initialization.

7.
Int J Comput Assist Radiol Surg ; 14(6): 913-922, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30863981

ABSTRACT

PURPOSE: As the trend toward minimally invasive and percutaneous interventions continues, the importance of appropriate surgical data visualization becomes more evident. Ineffective interventional data display techniques yield poor ergonomics that hinder hand-eye coordination and promote frustration, which can compromise on-task performance up to adverse outcomes. A very common example of ineffective visualization is monitors attached to the base of mobile C-arm X-ray systems. METHODS: We present a spatially and imaging-geometry-aware paradigm for visualization of fluoroscopic images using Interactive Flying Frustums (IFFs) in a mixed reality environment. We exploit the fact that the C-arm imaging geometry can be modeled as a pinhole camera giving rise to an 11-degree-of-freedom view frustum on which the X-ray image can be translated while remaining valid. Visualizing IFFs to the surgeon in an augmented reality environment intuitively unites the virtual 2D X-ray image plane and the real 3D patient anatomy. To achieve this visualization, the surgeon and C-arm are tracked relative to the same coordinate frame using image-based localization and mapping, with the augmented reality environment being delivered to the surgeon via a state-of-the-art optical see-through head-mounted display. RESULTS: The root-mean-squared error of C-arm source tracking after hand-eye calibration was determined as [Formula: see text] and [Formula: see text] in rotation and translation, respectively. Finally, we demonstrated the application of spatially aware data visualization for internal fixation of pelvic fractures and percutaneous vertebroplasty. CONCLUSION: Our spatially aware approach to transmission image visualization effectively unites patient anatomy with X-ray images by enabling spatial image manipulation that abides by the rules of image formation. Our proof-of-principle findings indicate potential applications for surgical tasks that mostly rely on orientational information, such as placing the acetabular component in total hip arthroplasty, making us confident that the proposed augmented reality concept can pave the way for improving surgical performance and visuo-motor coordination in fluoroscopy-guided surgery.
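The pinhole model underlying the frustum concept can be illustrated with a toy projection. All numbers below (focal length, principal point, pose) are made-up placeholders, not the paper's calibration:

```python
import numpy as np

# Toy pinhole model of a C-arm: intrinsics K (source-to-detector distance
# plays the role of focal length, principal point at detector center) and
# an extrinsic pose [R | t] mapping world points into the source frame.
K = np.array([[1200.0,    0.0, 256.0],
              [   0.0, 1200.0, 256.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 600.0])  # anatomy placed 600 mm from the source

P = K @ np.hstack([R, t[:, None]])  # 3x4 projection matrix

def project(X):
    """Project a 3D world point (mm) to detector pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

u, v = project(np.array([0.0, 0.0, 0.0]))
# A point on the optical axis lands at the principal point.
```

Any image plane slid along the rays of this camera yields the same pixel-to-ray assignment up to scale, which is why an X-ray image translated inside its view frustum "remains valid" in the sense described above.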


Subjects
Fluoroscopy/methods; Fracture Fixation, Internal/methods; Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted/methods; Vertebroplasty/methods; Calibration; Data Visualization; Fractures, Bone/surgery; Humans; Pelvic Bones/surgery
8.
Int J Comput Assist Radiol Surg ; 14(12): 2199-2210, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31321601

ABSTRACT

PURPOSE: For orthopedic procedures, surgeons utilize intra-operative medical images such as fluoroscopy to plan screw placement and accurately position the guide wire along the intended trajectory. The number of fluoroscopic images needed depends on the complexity of the case and the skill of the surgeon. Since more fluoroscopic images lead to more exposure and a higher radiation dose for both surgeon and patient, a solution that decreases the number of fluoroscopic images would be an improvement in clinical care. METHODS: This article describes and compares three novel navigation methods for screw placement using an attachable Inertial Measurement Unit device or a robotic arm. These methods provide projection and visualization of the surgical tool trajectory during the slipped capital femoral epiphysis procedure. RESULTS: In our phantom study, these techniques resulted in faster and more efficient preoperative calibration and setup times compared to other intra-operative navigation systems. We conducted an experiment using 120 model bones to measure the accuracy of the methods. CONCLUSION: These approaches have the potential to improve the accuracy of surgical tool navigation and decrease the number of required X-ray images without any change in the clinical workflow. The results also show a 65% decrease in total error compared to the conventional manual approach.


Subjects
Bone Screws; Fluoroscopy/methods; Orthopedic Procedures/methods; Slipped Capital Femoral Epiphyses/surgery; Surgery, Computer-Assisted/methods; Humans; Tomography, X-Ray Computed
9.
Int J Comput Assist Radiol Surg ; 14(9): 1541-1551, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31300963

ABSTRACT

PURPOSE: For a perfectly plane-symmetric object, we can find two views, mirrored at the plane of symmetry, that will yield the exact same image of that object. In consequence, given one image of a plane-symmetric object and a calibrated camera, we automatically have a second, virtual image of that object if the 3-D location of the symmetry plane is known. METHODS: We propose a method for estimating the symmetry plane from a set of projection images as the solution of a consistency maximization based on epipolar consistency. With the known symmetry plane, we can exploit symmetry to estimate in-plane motion by introducing the X-trajectory, which can be acquired with a conventional short-scan trajectory by simply tilting the acquisition plane relative to the plane of symmetry. RESULTS: We inspect the symmetry plane estimation on a real scan of an anthropomorphic human head phantom and show the robustness using a synthetic dataset. Further, we demonstrate the advantage of the proposed method for estimating in-plane motion using the acquired projection data. CONCLUSION: Symmetry breakers in the human body are widely used for the detection of tumors or strokes. We provide a fast estimation of the symmetry plane, robust to outliers, by computing it directly from a set of projections. Further, by coupling the symmetry prior with epipolar consistency, we overcome inherent limitations in the estimation of in-plane motion.


Subjects
Cone-Beam Computed Tomography; Head/diagnostic imaging; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Algorithms; Anthropometry; Humans; Imaging, Three-Dimensional; Motion
10.
Int J Comput Assist Radiol Surg ; 14(9): 1517-1528, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31187399

ABSTRACT

PURPOSE: Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and are thus unavailable for learning; even if they were available, annotation would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs, since labeling is comparably easy and potentially readily available. METHODS: We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCUDA. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve DRRs and on DeepDRRs and compare their performance on data of cadaveric specimens acquired using a clinical C-arm X-ray system. RESULTS: Our findings are consistent across both tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]). CONCLUSION: Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis and simplify surgical workflows.
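The analytic forward projection at the heart of any DRR generator is a Beer-Lambert line integral through a volume of attenuation coefficients. The didactic toy below uses parallel rays along one axis and invented attenuation values; it is not the DeepDRR API:

```python
import numpy as np

def drr_orthographic(mu, dz=1.0):
    """Minimal DRR: Beer-Lambert attenuation of parallel rays along z.

    mu: 3D array of linear attenuation coefficients (1/mm), axes (x, y, z).
    Returns transmitted intensity I/I0 = exp(-integral of mu along each ray).
    """
    line_integrals = mu.sum(axis=2) * dz  # discrete line integral per ray
    return np.exp(-line_integrals)

# A denser 'bone' block embedded in a water-like background (made-up values):
vol = np.full((8, 8, 8), 0.02)
vol[2:6, 2:6, 2:6] = 0.05
img = drr_orthographic(vol)  # rays through the block come out darker
```

A realistic simulator replaces the parallel geometry with cone-beam ray casting through CT, decomposes materials, and adds scatter and noise, but the exponential line-integral core is the same.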


Subjects
Fluoroscopy; Image Processing, Computer-Assisted/methods; Machine Learning; Neural Networks, Computer; Tomography, X-Ray Computed; Algorithms; Cadaver; Computer Simulation; Humans; Imaging, Three-Dimensional; Models, Anatomic; Scattering, Radiation; X-Rays
11.
Int J Comput Assist Radiol Surg ; 14(9): 1553-1563, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31350704

ABSTRACT

PURPOSE: Image-guided percutaneous interventions are safer alternatives to conventional orthopedic and trauma surgeries. To advance surgical tools within complex bony structures with confidence during these procedures, a large number of images is acquired. While image guidance is the de facto standard for guaranteeing an acceptable outcome, when these images are presented on monitors far from the surgical site, their information content cannot easily be associated with the 3D patient anatomy. METHODS: In this article, we propose a collaborative augmented reality (AR) surgical ecosystem to jointly co-localize the C-arm X-ray and surgeon viewer. The technical contributions of this work include (1) joint calibration of a visual tracker on a C-arm scanner and its X-ray source via a hand-eye calibration strategy, and (2) inside-out co-localization of human and X-ray observers in shared tracking and augmentation environments using vision-based simultaneous localization and mapping. RESULTS: We present a thorough evaluation of the hand-eye calibration procedure. Results suggest convergence when using 50 pose pairs or more. The mean translation and rotation errors at convergence are 5.7 mm and [Formula: see text], respectively. Further, user-in-the-loop studies were conducted to estimate the end-to-end target augmentation error. The mean distance between landmarks in the real and virtual environments was 10.8 mm. CONCLUSIONS: The proposed AR solution provides a shared augmented experience between the human and X-ray viewers. The collaborative surgical AR system has the potential to simplify hand-eye coordination for surgeons and to intuitively inform C-arm technologists for prospective X-ray view-point planning.


Subjects
Augmented Reality; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography/methods; X-Rays; Algorithms; Calibration; Equipment Design; Fluoroscopy; Humans; Imaging, Three-Dimensional; Models, Statistical; Motor Skills; Prospective Studies; Surgery, Computer-Assisted/methods; Tomography, X-Ray Computed
12.
Int J Comput Assist Radiol Surg ; 14(9): 1463-1473, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31006106

ABSTRACT

PURPOSE: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could highly benefit from additional information, such as the anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views, leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not yet been investigated. METHODS: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of [Formula: see text]. RESULTS: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping. CONCLUSION: We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need for calibration. As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.
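Sampling viewing angles on a spherical segment, as used for the augmentation above, amounts to uniform-area sampling on a spherical cap. A generic sketch (the 60° cap is an arbitrary example, not the paper's range):

```python
import numpy as np

def sample_view_directions(n, max_angle_deg=60.0, seed=None):
    """Sample n unit viewing directions uniformly over a spherical cap
    around the +z axis.

    Uniform area on the sphere corresponds to cos(theta) drawn uniformly
    in [cos(max_angle), 1]; the azimuth phi is uniform in [0, 2*pi).
    """
    rng = np.random.default_rng(seed)
    cos_max = np.cos(np.deg2rad(max_angle_deg))
    cos_t = rng.uniform(cos_max, 1.0, size=n)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
```

Each sampled direction would define a virtual X-ray source pose for rendering one synthetic training image, so the network sees the anatomy from the full range of plausible viewpoints.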


Subjects
Imaging, Three-Dimensional/methods; Pelvis/diagnostic imaging; Radiography/methods; Tomography, X-Ray Computed/methods; Algorithms; Calibration; Female; Humans; Male; Neural Networks, Computer; Reproducibility of Results; Surgery, Computer-Assisted; X-Rays
13.
J Med Imaging (Bellingham) ; 5(2): 021209, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29392161

ABSTRACT

Fluoroscopic X-ray guidance is a cornerstone of percutaneous orthopedic surgical procedures. However, two-dimensional (2-D) observations of the three-dimensional (3-D) anatomy suffer from the effects of projective simplification. Consequently, many X-ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient's anatomy and the surgical tools. We present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multimodality marker and a simultaneous localization and mapping technique to co-calibrate an optical see-through head-mounted display to a C-arm fluoroscopy system. Then, annotations on the 2-D X-ray images can be rendered as virtual objects in 3-D, providing surgical guidance. We quantitatively evaluate the components of the proposed system and, finally, present a feasibility study on a semi-anthropomorphic phantom. The accuracy of our system was comparable to the traditional image-guided technique while substantially reducing the number of acquired X-ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects, which we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed toward common orthopedic interventions.

14.
Healthc Technol Lett ; 5(5): 143-147, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30464844

ABSTRACT

Interventional C-arm imaging is crucial to percutaneous orthopedic procedures as it enables the surgeon to monitor the progress of surgery at the anatomy level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation. We propose a marker-free 'technician-in-the-loop' Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm interventionally is equipped with a head-mounted display system capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For C-arm repositioning to a target view, the recorded pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. Our proof-of-principle findings from a simulated trauma surgery indicate that the proposed system can decrease the average of 2.76 X-ray images required to re-align the scanner with an intra-operatively recorded C-arm view down to zero, suggesting substantial reductions in radiation dose. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for surgery rooms of the future.

15.
Med Phys ; 45(6): 2463-2475, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29569728

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and nonoverlapping CBCT volumes to enable 3D measurements on large anatomical structures. METHODS: A CBCT-capable mobile C-arm is augmented with a red-green-blue-depth (RGBD) camera. An offline co-calibration of the two imaging modalities results in co-registered video, infrared, and x-ray views of the surgical scene. Then, automatic stitching of multiple small, nonoverlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. RESULTS: On an animal cadaver, we show stitching errors as low as 0.33, 0.91, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. CONCLUSIONS: The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures.
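Recovering the relative pose of the C-arm from tracked 3D points, whatever the sensing modality, reduces at its core to least-squares rigid alignment of corresponding point sets. A generic Kabsch/Procrustes sketch (an illustration of the primitive, not the authors' tracking pipeline):

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det(R) = +1
    t = c_dst - R @ c_src
    return R, t
```

Chaining such pose estimates between acquisitions gives the relative C-arm motion needed to place each CBCT volume in a common frame for stitching.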


Subjects
Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/methods; Minimally Invasive Surgical Procedures/methods; Pattern Recognition, Automated/methods; Animals; Calibration; Cone-Beam Computed Tomography/instrumentation; Femur/diagnostic imaging; Femur/surgery; Fiducial Markers; Humans; Imaging, Three-Dimensional/instrumentation; Infrared Rays; Intraoperative Period; Minimally Invasive Surgical Procedures/instrumentation; Orthopedic Procedures; Phantoms, Imaging; Swine; Time Factors; Video Recording
16.
J Med Imaging (Bellingham) ; 5(2): 021205, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29322072

ABSTRACT

Reproducibly achieving proper implant alignment is a critical step in total hip arthroplasty procedures that has been shown to substantially affect patient outcome. In current practice, correct alignment of the acetabular cup is verified in C-arm x-ray images that are acquired in an anterior-posterior (AP) view. Favorable surgical outcome is, therefore, heavily dependent on the surgeon's experience in understanding the 3-D orientation of a hemispheric implant from 2-D AP projection images. This work proposes an easy-to-use intraoperative component planning system based on two C-arm x-ray images, combined with 3-D augmented reality (AR) visualization, that simplifies impactor and cup placement according to the planning by providing a real-time RGBD data overlay. We evaluate the feasibility of our system in a user study comprising four orthopedic surgeons at the Johns Hopkins Hospital and report errors in translation, anteversion, and abduction as low as 1.98 mm, 1.10 deg, and 0.53 deg, respectively. The promising performance of this AR solution shows that deploying this system could eliminate the need for excessive radiation, simplify the intervention, and enable reproducibly accurate placement of acetabular implants.

17.
Int J Comput Assist Radiol Surg ; 12(7): 1211-1219, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28343303

ABSTRACT

PURPOSE: Cone-Beam Computed Tomography (CBCT) is an important 3D imaging technology for orthopedic, trauma, radiotherapy guidance, angiography, and dental applications. The major limitation of CBCT is the poor image quality due to scattered radiation, truncation, and patient movement. In this work, we propose to incorporate information from a co-registered Red-Green-Blue-Depth (RGBD) sensor attached near the detector plane of the C-arm to improve the reconstruction quality, as well as correcting for undesired rigid patient movement. METHODS: Calibration of the RGBD and C-arm imaging devices is performed in two steps: (i) calibration of the RGBD sensor and the X-ray source using a multimodal checkerboard pattern, and (ii) calibration of the RGBD surface reconstruction to the CBCT volume. The patient surface is acquired during the CBCT scan and then used as prior information for the reconstruction using Maximum-Likelihood Expectation-Maximization. An RGBD-based simultaneous localization and mapping method is utilized to estimate the rigid patient movement during scanning. RESULTS: Performance is quantified and demonstrated using artificial data and bone phantoms with and without metal implants. Finally, we present movement-corrected CBCT reconstructions based on RGBD data on an animal specimen, where the average voxel intensity difference reduces from 0.157 without correction to 0.022 with correction. CONCLUSION: This work investigated the advantages of a C-arm X-ray imaging system used with an attached RGBD sensor. The experiments show the benefits of the opto/X-ray imaging system in: (i) improving the quality of reconstruction by incorporating the surface information of the patient, reducing the streak artifacts as well as the number of required projections, and (ii) recovering the scanning trajectory for the reconstruction in the presence of undesired patient rigid movement.
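The Maximum-Likelihood Expectation-Maximization reconstruction named above has a compact multiplicative update. The toy below runs plain MLEM on a tiny invented linear system, without the surface prior or motion correction this paper adds:

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """MLEM for y ≈ A x with nonnegativity: multiplicative updates
    x <- x / (A^T 1) * A^T (y / (A x)), preserving x >= 0 throughout."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                               # forward projection
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
    return x

# Toy system: 3 rays through 2 voxels, noise-free measurements.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = A @ np.array([2.0, 3.0])
x_hat = mlem(A, y)  # converges toward the true voxel values [2, 3]
```

In the CBCT setting, `A` is the cone-beam system matrix and the patient-surface prior constrains which voxels may carry attenuation; the multiplicative update itself is unchanged.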


Subjects
Cone-Beam Computed Tomography/methods ; Imaging, Three-Dimensional/methods ; Calibration ; Humans ; Phantoms, Imaging
18.
Int J Comput Assist Radiol Surg ; 12(7): 1221-1230, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28527025

ABSTRACT

PURPOSE: In minimally invasive interventions assisted by C-arm imaging, there is a demand to fuse the intra-interventional 2D C-arm image with pre-interventional 3D patient data to enable surgical guidance. The commonly used intensity-based 2D/3D registration has a limited capture range and is sensitive to initialization. We propose to utilize an opto/X-ray C-arm system that maintains the registration during the intervention by automating re-initialization of the 2D/3D image registration. Consequently, the surgical workflow is not disrupted and the interaction time for manual initialization is eliminated. METHODS: We utilize two distinct vision-based tracking techniques to estimate the relative poses between different C-arm arrangements: (1) global tracking using fused depth information and (2) an RGBD SLAM system for surgical-scene tracking. A highly accurate multi-view calibration between the RGBD and C-arm imaging devices is achieved using a custom-made multimodal calibration target. RESULTS: Several in vitro studies are conducted on a pelvic-femur phantom encased in gelatin and covered with drapes to simulate a clinically realistic scenario. The mean target registration errors (mTRE) for re-initialization using depth only and RGB + depth are 13.23 mm and 11.81 mm, respectively. 2D/3D registration yielded a 75% success rate using this automatic re-initialization, compared with only 23% using random initialization. CONCLUSION: The pose-aware C-arm contributes to the 2D/3D registration process by globally re-initializing the relationship between the C-arm image and pre-interventional CT data. The system performs inside-out tracking, is self-contained, and does not require any external tracking devices.
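The mean target registration error (mTRE) reported above is the average Euclidean distance between corresponding landmarks after registration. A minimal sketch, assuming paired landmark coordinates in millimetres; the helper name `mean_tre` and the values used below are illustrative, not from the paper:

```python
import numpy as np

def mean_tre(estimated, ground_truth):
    """Mean target registration error: average Euclidean distance (mm)
    between corresponding estimated and ground-truth landmark positions."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    return float(np.linalg.norm(est - gt, axis=1).mean())
```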


Subjects
Imaging, Three-Dimensional/methods ; Minimally Invasive Surgical Procedures/methods ; Calibration ; Femur ; Humans ; Multimodal Imaging ; Pelvis ; Phantoms, Imaging
19.
Healthc Technol Lett ; 4(5): 168-173, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29184659

ABSTRACT

Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures such as the pelvis. This Letter presents a mixed reality support system that incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red-green-blue-depth (RGBD) camera is rigidly attached to a mobile C-arm and calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point (ICP) algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds with synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows surgical tools to be tracked even when occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
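The ICP calibration mentioned above repeatedly solves a closed-form least-squares rigid alignment between matched point sets. A minimal sketch of that inner step (the Kabsch/SVD solution), assuming correspondences are already known; the function name `kabsch` is our own label:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    This is the closed-form alignment step used inside each ICP iteration,
    given matched rows of P and Q.
    """
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cp).T @ (Q - cq)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T                           # optimal rotation
    t = cq - R @ cp                              # optimal translation
    return R, t
```

A full ICP loop alternates this solve with nearest-neighbour matching between the RGBD surface and the CBCT-derived surface until convergence.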

20.
Int J Comput Assist Radiol Surg ; 11(6): 967-75, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27059022

ABSTRACT

PURPOSE: This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs (DRRs) overlaid on the patient's reconstructed surface without the need to move the C-arm. METHODS: An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram (FPFH) descriptors and the Iterative Closest Point (ICP) algorithm. RESULTS: Several experiments are performed to assess the repeatability and accuracy of the method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. CONCLUSION: To the best of our knowledge, this is the first calibration method that uses only tomographic and RGBD reconstructions, which means it does not impose a particular phantom shape. We demonstrate marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. The design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
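The digitally reconstructed radiographs described above integrate X-ray attenuation along rays through the reconstructed volume. A deliberately simplified parallel-beam sketch of that idea (real DRRs use the C-arm's perspective projection model; `parallel_drr` is our own illustrative name):

```python
import numpy as np

def parallel_drr(volume, axis=0):
    """Toy digitally reconstructed radiograph: integrate attenuation along
    parallel rays, i.e. sum the volume over one axis. This keeps only the
    core idea; a clinical DRR traces perspective rays from the X-ray source
    through the volume to each detector pixel."""
    vol = np.asarray(volume, dtype=float)
    return vol.sum(axis=axis)
```

Rendering such projections from the co-registered volume is what lets the system show synthetic X-ray views from arbitrary angles without moving the C-arm.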


Subjects
Algorithms ; Cone-Beam Computed Tomography/methods ; Imaging, Three-Dimensional ; Monitoring, Intraoperative/methods ; Phantoms, Imaging ; Calibration ; Humans ; Reproducibility of Results