1.
Med Image Anal; 72: 102127, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34147832

ABSTRACT

We present a novel methodology to detect imperfect bilateral symmetry in CT of human anatomy. In this paper, the structurally symmetric nature of the pelvic bone is exploited to provide interventional image augmentation for the treatment of unilateral fractures in patients with traumatic injuries. The mathematical basis of our solution is the incorporation of attributes and characteristics that satisfy the properties of intrinsic and extrinsic symmetry and are robust to outliers. In the first step, feature points that satisfy intrinsic symmetry are automatically detected in the Möbius space defined on the CT data. These features are then pruned via a two-stage RANSAC to retain correspondences that also satisfy extrinsic symmetry. A disparity function based on Tukey's biweight robust estimator is then introduced and minimized to identify a symmetry plane parametrization that yields maximum contralateral similarity. Finally, a novel regularization term is introduced to enhance the similarity between bone density histograms across the partial symmetry plane, relying on the important biological observation that, even if injured, the dislocated bone segments remain within the body. Our extensive evaluations on various cases of common fracture types demonstrate the validity of the novel concepts and the accuracy of the proposed method.
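A minimal sketch of the robust disparity term described above, assuming a spherical-coordinate plane parametrization and nearest-neighbor contralateral matching (both illustrative; the paper's exact feature detection, two-stage RANSAC, and density-histogram regularization are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def tukey_biweight(r, c=4.685):
    """Tukey's biweight rho: quadratic near zero, saturating beyond cutoff c."""
    rho = np.full_like(r, c ** 2 / 6.0)
    inside = np.abs(r) <= c
    rho[inside] = (c ** 2 / 6.0) * (1.0 - (1.0 - (r[inside] / c) ** 2) ** 3)
    return rho

def reflect(points, n, d):
    """Reflect 3D points across the plane n . x = d (n is unit length)."""
    return points - 2.0 * (points @ n - d)[:, None] * n

def disparity(params, points, tree):
    theta, phi, d = params  # plane normal in spherical coordinates, offset d
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    dists, _ = tree.query(reflect(points, n, d))  # contralateral neighbors
    return tukey_biweight(dists).sum()            # outliers barely contribute

points = np.random.rand(500, 3) * 100.0           # stand-in feature points
res = minimize(disparity, x0=[np.pi / 2, 0.0, 50.0],
               args=(points, cKDTree(points)), method="Nelder-Mead")
```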


Subjects
Fractures, Bone; Pelvic Bones; Algorithms; Fractures, Bone/diagnostic imaging; Humans; Imaging, Three-Dimensional
2.
Int J Comput Assist Radiol Surg; 15(6): 973-980, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32342258

ABSTRACT

PURPOSE: We propose a novel methodology for generating synthetic X-rays from 2D RGB images. This method creates accurate simulations for use in non-diagnostic visualization problems where the only input comes from a generic camera. Traditional methods are restricted to running simulation algorithms on 3D computer models. To solve this problem, we propose a method of synthetic X-ray generation using conditional generative adversarial networks (CGANs). METHODS: We create a custom synthetic X-ray dataset generator that produces image triplets of X-ray images, pose images, and RGB images of natural hand poses sampled from the NYU hand pose dataset. This dataset is used to train two general-purpose CGAN networks, pix2pix and CycleGAN, as well as our novel architecture, pix2xray, which expands upon the pix2pix architecture to incorporate the hand pose into the network. RESULTS: Our results demonstrate that our pix2xray architecture outperforms both pix2pix and CycleGAN in producing higher-quality X-ray images. We measure higher similarity metrics with our approach, with pix2pix coming in second and CycleGAN producing the worst results. Our network performs better in difficult cases involving severe occlusion due to self-occluded poses or large rotations. CONCLUSION: Overall, our work establishes a baseline showing that synthetic X-rays can be simulated from 2D RGB input. We establish the need for additional data, such as the hand pose, to produce clearer results, and show that future research must focus on more specialized architectures to improve overall image clarity and structure.
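For orientation, a hedged sketch of the pix2pix-style conditional-GAN objective the paper builds on; this is the published pix2pix loss in schematic form, not the authors' pix2xray code, and `generator`/`discriminator` are assumed placeholders:

```python
import torch
import torch.nn as nn

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def generator_loss(generator, discriminator, rgb, real_xray, lambda_l1=100.0):
    """pix2pix-style objective: adversarial term plus L1 reconstruction."""
    fake_xray = generator(rgb)
    # The discriminator is conditioned on the input image (channel concat).
    pred_fake = discriminator(torch.cat([rgb, fake_xray], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))  # try to fool D
    return adv + lambda_l1 * l1(fake_xray, real_xray)
```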


Subjects
Image Processing, Computer-Assisted/methods; Radiography/methods; X-Rays; Algorithms; Computer Simulation; Humans
3.
J Healthc Eng; 2019: 2163705, 2019.
Article in English | MEDLINE | ID: mdl-31015903

ABSTRACT

Unsuccessful rehabilitation therapy is a widespread issue among modern-day amputees. Of the estimated 10 million amputees worldwide, 3 million of whom are upper limb amputees, a large majority are dissatisfied with their current prosthesis and reject it during activities of daily living (ADL). Here we introduce Upbeat, an augmented reality (AR) dance game designed to improve rehabilitation therapy for upper limb amputees. In Upbeat, the patient is instructed to follow a virtual dance instructor, performing choreographed dance movements containing hand gestures used in upper limb rehabilitation therapy. The patient's position is tracked using a Microsoft Kinect sensor, while the hand gestures are analyzed using EMG data collected from a Myo Armband. Additionally, a gamified score is calculated based on how many gestures and movements were performed correctly. Upon completion of the game, a diagnostic summary of the results is shown as a graph of the collected EMG data, together with a video displaying an augmented visualization of the patient's upper arm muscle activity during gameplay. By gamifying the rehabilitation process, Upbeat has the potential to improve therapy for upper limb amputees by enabling rehabilitation to begin immediately after trauma, providing personalized feedback that professionals can use to accurately assess a patient's progress, and increasing patient engagement and thus willingness to complete rehabilitation. This paper describes and evaluates our prototype implementation of Upbeat, which will serve as the basis for clinical studies evaluating its impact on rehabilitation.
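An illustrative sketch of how such a gamified score might be computed from matched gestures; the abstract does not specify the scoring rule, so the timing window and structure below are assumptions:

```python
def session_score(expected_moves, detected_moves, tolerance_s=0.5):
    """expected/detected moves: lists of (timestamp_s, gesture_label) pairs."""
    hits = sum(
        any(abs(t_det - t_exp) <= tolerance_s and g_det == g_exp
            for t_det, g_det in detected_moves)
        for t_exp, g_exp in expected_moves)
    return 100.0 * hits / max(len(expected_moves), 1)

score = session_score([(1.0, "fist"), (2.5, "wave_out")],
                      [(1.1, "fist"), (2.9, "spread")])  # -> 50.0
```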


Subjects
Amputees; Augmented Reality; Dancing; Rehabilitation/methods; Upper Extremity/surgery; Activities of Daily Living; Artificial Limbs; Electromyography; Humans; Imaging, Three-Dimensional; Male; Movement; Prostheses and Implants; Reproducibility of Results; Severity of Illness Index; User-Computer Interface; Video Games
4.
Int J Comput Assist Radiol Surg; 14(9): 1553-1563, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31350704

ABSTRACT

PURPOSE: Image-guided percutaneous interventions are safer alternatives to conventional orthopedic and trauma surgeries. To advance surgical tools through complex bony structures with confidence during these procedures, a large number of images is acquired. While image guidance is the de facto standard for guaranteeing acceptable outcomes, these images are presented on monitors far from the surgical site, so their information content cannot easily be associated with the 3D patient anatomy. METHODS: In this article, we propose a collaborative augmented reality (AR) surgical ecosystem that jointly co-localizes the C-arm X-ray and the surgeon viewer. The technical contributions of this work include (1) joint calibration of a visual tracker on a C-arm scanner and its X-ray source via a hand-eye calibration strategy, and (2) inside-out co-localization of human and X-ray observers in shared tracking and augmentation environments using vision-based simultaneous localization and mapping. RESULTS: We present a thorough evaluation of the hand-eye calibration procedure. Results suggest convergence when using 50 pose pairs or more. The mean translation and rotation errors at convergence are 5.7 mm and [Formula: see text], respectively. Further, user-in-the-loop studies were conducted to estimate the end-to-end target augmentation error; the mean distance between landmarks in the real and virtual environments was 10.8 mm. CONCLUSIONS: The proposed AR solution provides a shared augmented experience between the human and X-ray viewers. The collaborative surgical AR system has the potential to simplify hand-eye coordination for surgeons and to intuitively inform C-arm technologists for prospective X-ray viewpoint planning.
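A sketch of the AX = XB hand-eye formulation that such a calibration strategy implies, using OpenCV's calibrateHandEye on synthetic pose pairs; in practice the inputs would be tracked C-arm poses and X-ray views of a calibration phantom, and all names here are illustrative:

```python
import cv2
import numpy as np

def rand_pose():
    """Random rigid transform used to synthesize consistent pose pairs."""
    R, _ = cv2.Rodrigues(np.random.randn(3))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, np.random.randn(3) * 100.0
    return T

X_true = rand_pose()       # unknown tracker-to-source offset (ground truth)
T_phantom = rand_pose()    # calibration phantom, fixed in the room

R_tracker, t_tracker, R_xray, t_xray = [], [], [], []
for _ in range(50):        # the paper reports convergence at >= 50 pairs
    T_g2b = rand_pose()    # tracker pose in the room at this C-arm position
    # Pose of the phantom in the X-ray source frame, consistent with X_true.
    T_t2c = np.linalg.inv(X_true) @ np.linalg.inv(T_g2b) @ T_phantom
    R_tracker.append(T_g2b[:3, :3]); t_tracker.append(T_g2b[:3, 3])
    R_xray.append(T_t2c[:3, :3]);    t_xray.append(T_t2c[:3, 3])

R_est, t_est = cv2.calibrateHandEye(
    R_tracker, t_tracker, R_xray, t_xray, method=cv2.CALIB_HAND_EYE_TSAI)
```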


Subjects
Augmented Reality; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography/methods; X-Rays; Algorithms; Calibration; Equipment Design; Fluoroscopy; Humans; Imaging, Three-Dimensional; Models, Statistical; Motor Skills; Prospective Studies; Surgery, Computer-Assisted/methods; Tomography, X-Ray Computed
5.
Int J Comput Assist Radiol Surg; 14(9): 1517-1528, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31187399

ABSTRACT

PURPOSE: Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and are thus unavailable for learning, and even if they were available, annotation would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs, since labeling is comparably easy and potentially readily available. METHODS: We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCUDA. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve DRRs and on DeepDRRs and compare their performance on data of cadaveric specimens acquired using a clinical C-arm X-ray system. RESULTS: Our findings are consistent across both tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]). CONCLUSION: Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis and simplify surgical workflows.
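A generic sketch of physics-based noise injection of the kind the framework describes (not DeepDRR's actual code): a noise-free forward projection is converted to expected photon counts, corrupted with Poisson quantum noise and Gaussian readout noise, and log-converted back; the photon budget and readout sigma are illustrative:

```python
import numpy as np

def inject_noise(line_integrals, photons_per_pixel=5e4, readout_sigma=10.0):
    """line_integrals: attenuation path integrals (dimensionless) per pixel."""
    expected = photons_per_pixel * np.exp(-line_integrals)   # Beer-Lambert
    noisy = np.random.poisson(expected).astype(float)        # quantum noise
    noisy += np.random.normal(0.0, readout_sigma, noisy.shape)  # readout noise
    noisy = np.clip(noisy, 1.0, None)                        # avoid log(0)
    return -np.log(noisy / photons_per_pixel)                # back to integrals

drr = np.random.rand(256, 256) * 5.0   # stand-in for an analytic projection
noisy_drr = inject_noise(drr)
```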


Subjects
Fluoroscopy; Image Processing, Computer-Assisted/methods; Machine Learning; Neural Networks, Computer; Tomography, X-Ray Computed; Algorithms; Cadaver; Computer Simulation; Humans; Imaging, Three-Dimensional; Models, Anatomic; Scattering, Radiation; X-Rays
6.
Med Phys; 45(6): 2463-2475, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29569728

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is constrained by a limited imaging volume, which reduces its effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and nonoverlapping CBCT volumes to enable 3D measurements on large anatomical structures. METHODS: A CBCT-capable mobile C-arm is augmented with a red-green-blue-depth (RGBD) camera. An offline cocalibration of the two imaging modalities results in coregistered video, infrared, and x-ray views of the surgical scene. Automatic stitching of multiple small, nonoverlapping CBCT volumes then becomes possible by recovering the relative motion of the C-arm with respect to the patient from the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene, which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only the depth data provided by the RGBD sensor. RESULTS: On an animal cadaver, we show stitching errors as low as 0.33, 0.91, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. CONCLUSIONS: The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures.
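A simplified sketch of the pose-composition step behind the stitching, assuming the offline cocalibration gives a camera-to-CBCT transform and tracking gives the relative C-arm motion; names and placeholder values are illustrative:

```python
import numpy as np

def stitch_transform(T_cam2cbct, T_motion):
    """CBCT-to-CBCT transform between two acquisitions, given the camera-
    recovered C-arm motion and the offline camera-to-CBCT cocalibration."""
    return T_cam2cbct @ T_motion @ np.linalg.inv(T_cam2cbct)

def map_points(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    return (np.c_[pts, np.ones(len(pts))] @ T.T)[:, :3]

T_cam2cbct = np.eye(4)  # placeholder for the offline cocalibration
T_motion = np.eye(4)    # placeholder: from marker, SLAM, or surface tracking
stitched = map_points(stitch_transform(T_cam2cbct, T_motion),
                      np.random.rand(10, 3))
```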


Subjects
Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/methods; Minimally Invasive Surgical Procedures/methods; Pattern Recognition, Automated/methods; Animals; Calibration; Cone-Beam Computed Tomography/instrumentation; Femur/diagnostic imaging; Femur/surgery; Fiducial Markers; Humans; Imaging, Three-Dimensional/instrumentation; Infrared Rays; Intraoperative Period; Minimally Invasive Surgical Procedures/instrumentation; Orthopedic Procedures; Phantoms, Imaging; Swine; Time Factors; Video Recording
7.
J Med Imaging (Bellingham); 5(2): 021205, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29322072

ABSTRACT

Reproducibly achieving proper implant alignment is a critical step in total hip arthroplasty procedures that has been shown to substantially affect patient outcome. In current practice, correct alignment of the acetabular cup is verified in C-arm x-ray images acquired in an anterior-posterior (AP) view. Favorable surgical outcome is, therefore, heavily dependent on the surgeon's experience in inferring the 3-D orientation of a hemispheric implant from 2-D AP projection images. This work proposes an easy-to-use intraoperative component planning system based on two C-arm x-ray images, combined with 3-D augmented reality (AR) visualization, that simplifies impactor and cup placement according to the plan by providing a real-time RGBD data overlay. We evaluate the feasibility of our system in a user study comprising four orthopedic surgeons at the Johns Hopkins Hospital and report errors in translation, anteversion, and abduction as low as 1.98 mm, 1.10 deg, and 0.53 deg, respectively. The promising performance of this AR solution shows that deploying this system could eliminate the need for excessive radiation, simplify the intervention, and enable reproducibly accurate placement of acetabular implants.
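A hedged sketch of how anteversion and abduction errors like those reported might be computed from planned versus achieved cup axes; Murray's radiographic definitions and the patient-frame convention are assumptions, as the abstract does not state them:

```python
import numpy as np

def cup_angles(axis):
    """axis: cup-opening direction in a frame with x = lateral, y = anterior,
    z = superior (an assumed convention; the paper does not state one)."""
    a = np.asarray(axis, float)
    a /= np.linalg.norm(a)
    anteversion = np.degrees(np.arcsin(abs(a[1])))            # out of coronal plane
    abduction = np.degrees(np.arctan2(abs(a[0]), abs(a[2])))  # within coronal plane
    return anteversion, abduction

planned = [0.60, 0.25, 0.76]   # stand-in planned cup orientation
achieved = [0.62, 0.27, 0.74]  # stand-in orientation measured from two views
errors = np.subtract(cup_angles(achieved), cup_angles(planned))
```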

8.
Healthc Technol Lett; 4(5): 168-173, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29184659

ABSTRACT

Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to create an environment supporting screw placement in orthopaedic surgery. A red-green-blue-depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds with synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows the surgical tools to be tracked even when occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring the target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
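A minimal sketch of the surface-to-CBCT alignment step, using Open3D's ICP as a stand-in for the authors' implementation; the point clouds and correspondence threshold are placeholders:

```python
import numpy as np
import open3d as o3d

rgbd_surface = o3d.geometry.PointCloud()  # surface seen by the depth camera
cbct_surface = o3d.geometry.PointCloud()  # surface extracted from the CBCT
rgbd_surface.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3))
cbct_surface.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3))

result = o3d.pipelines.registration.registration_icp(
    rgbd_surface, cbct_surface,
    max_correspondence_distance=5.0,  # depends on initial alignment quality
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration
                         .TransformationEstimationPointToPoint())
T_rgbd_to_cbct = result.transformation
```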

9.
Int J Comput Assist Radiol Surg; 12(7): 1221-1230, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28527025

ABSTRACT

PURPOSE: In minimally invasive interventions assisted by C-arm imaging, there is a demand to fuse the intra-interventional 2D C-arm image with pre-interventional 3D patient data to enable surgical guidance. The commonly used intensity-based 2D/3D registration has a limited capture range and is sensitive to initialization. We propose to utilize an opto/X-ray C-arm system that allows the registration to be maintained during the intervention by automating the re-initialization of the 2D/3D image registration. Consequently, the surgical workflow is not disrupted, and the interaction time for manual initialization is eliminated. METHODS: We utilize two distinct vision-based tracking techniques to estimate the relative poses between different C-arm arrangements: (1) global tracking using fused depth information and (2) an RGBD SLAM system for surgical scene tracking. A highly accurate multi-view calibration between the RGBD and C-arm imaging devices is achieved using a custom-made multimodal calibration target. RESULTS: Several in vitro studies are conducted on a pelvic-femur phantom that is encased in gelatin and covered with drapes to simulate a clinically realistic scenario. The mean target registration errors (mTRE) for re-initialization using depth-only and RGB [Formula: see text] depth are 13.23 mm and 11.81 mm, respectively. 2D/3D registration yielded a 75% success rate using this automatic re-initialization, compared to only 23% with random initialization. CONCLUSION: The pose-aware C-arm contributes to the 2D/3D registration process by globally re-initializing the relationship between the C-arm image and pre-interventional CT data. The system performs inside-out tracking, is self-contained, and does not require any external tracking devices.
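A short sketch of the reported metric, mean target registration error (mTRE): the mean distance between landmarks mapped by the estimated transform and by the ground-truth transform:

```python
import numpy as np

def mtre(landmarks, T_est, T_gt):
    """Mean distance between landmarks mapped by the estimated and
    ground-truth 4x4 transforms; landmarks is an (N, 3) array."""
    homo = np.c_[landmarks, np.ones(len(landmarks))]
    diff = (homo @ T_est.T)[:, :3] - (homo @ T_gt.T)[:, :3]
    return np.linalg.norm(diff, axis=1).mean()
```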


Subjects
Imaging, Three-Dimensional/methods; Minimally Invasive Surgical Procedures/methods; Calibration; Femur; Humans; Multimodal Imaging; Pelvis; Phantoms, Imaging
10.
Int J Comput Assist Radiol Surg; 11(6): 967-975, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27059022

ABSTRACT

PURPOSE: This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the patient's reconstructed surface, without the need to move the C-arm. METHODS: An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. RESULTS: Several experiments are performed to assess the repeatability and accuracy of this method. The target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. CONCLUSION: To the best of our knowledge, this is the first calibration method that uses only tomographic and RGBD reconstructions, meaning it does not impose a particular shape on the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
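A sketch of the described FPFH-then-ICP recovery using Open3D's implementations as stand-ins; radii, thresholds, and RANSAC settings are illustrative, and the exact Open3D API may vary across versions:

```python
import open3d as o3d

def fpfh(pcd, radius=5.0):
    """FPFH descriptors; radii are placeholders tuned per dataset."""
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=30))
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=2 * radius, max_nn=100))

def calibrate(rgbd_surface, cbct_surface, dist=5.0):
    reg = o3d.pipelines.registration
    coarse = reg.registration_ransac_based_on_feature_matching(
        rgbd_surface, cbct_surface, fpfh(rgbd_surface), fpfh(cbct_surface),
        True, dist, reg.TransformationEstimationPointToPoint(False),
        3, [], reg.RANSACConvergenceCriteria(100000, 0.999))
    # Refine with ICP, mirroring the paper's FPFH-then-ICP order.
    return reg.registration_icp(
        rgbd_surface, cbct_surface, dist, coarse.transformation).transformation
```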


Subjects
Algorithms; Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional; Monitoring, Intraoperative/methods; Phantoms, Imaging; Calibration; Humans; Reproducibility of Results
11.
Int J Comput Assist Radiol Surg; 11(6): 1007-1014, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26995603

ABSTRACT

PURPOSE: In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., a K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm with the medical instruments and the patient, which increases dramatically in complexity during pelvic surgeries. Current solutions rely on the continuous acquisition of many intra-operative X-ray images from various views, which results in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon's view and assist accurate placement of tools. METHOD: We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placements. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization. RESULTS: The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and surgical task load, observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that the mixed reality visualization leads to significantly improved efficiency. CONCLUSION: The 3D visualization of patient, tool, and DRR shows clear advantages over conventional X-ray imaging and provides intuitive feedback for placing the medical tools correctly and efficiently.
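An illustrative sketch of the kind of significance test such a comparison implies; the abstract does not name the test, and the durations below are stand-in values, not the study's data:

```python
from scipy.stats import mannwhitneyu

xray_only_s = [412, 388, 505, 460, 441, 390, 472]      # stand-in durations (s)
mixed_reality_s = [295, 310, 268, 330, 287, 301, 276]  # stand-in durations (s)
stat, p = mannwhitneyu(mixed_reality_s, xray_only_s, alternative="less")
print(f"U = {stat}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```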


Subjects
Bone Wires; Fracture Fixation, Internal/methods; Fractures, Bone/surgery; Pelvic Bones/surgery; Phantoms, Imaging; Radiography, Interventional/methods; Tomography, X-Ray Computed/methods; Fractures, Bone/diagnosis; Humans; Imaging, Three-Dimensional/methods; Pelvic Bones/diagnostic imaging; Pelvic Bones/injuries
12.
Int J Comput Assist Radiol Surg; 11(6): 1173-1181, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27097600

ABSTRACT

PURPOSE: Precise needle placement is an important task during several medical procedures. Ultrasound imaging is often used to guide the needle toward the target region in soft tissue. This task remains challenging due to the strong dependence on image quality, the limited field of view, a moving target, and a moving needle. In this paper, we present a novel dual-robot framework for robotic needle insertion under robotic ultrasound guidance. METHOD: We integrated force-controlled ultrasound image acquisition, registration of preoperative and intraoperative images, vision-based robot control, and target localization, in combination with a novel needle tracking algorithm. The framework allows robotic needle insertion to target a preoperatively defined region of interest while enabling real-time visualization and adaptive trajectory planning for safe and quick interactions. We assessed the framework on both static and moving targets embedded in water and in tissue-mimicking gelatin. RESULTS: The presented dual-robot tracking algorithms allow for accurate needle placement, targeting the region of interest with an error of around 1 mm. CONCLUSION: To the best of our knowledge, this is the first use of two independent robots, one for imaging and one for needle insertion, that are simultaneously controlled using image processing algorithms. Experimental results show the feasibility of the approach and demonstrate its accuracy and robustness.
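A schematic sketch (not the authors' controller) of the closed loop the framework implies: the tracked needle tip and the localized, possibly moving target drive a saturated proportional velocity command for the needle-driving robot; gain and speed limit are illustrative:

```python
import numpy as np

def velocity_command(tip_xyz, target_xyz, gain=0.5, v_max=5.0):
    """Cartesian velocity (mm/s) steering the needle tip toward the
    (possibly moving) target; saturated at v_max for safety."""
    error = np.asarray(target_xyz, float) - np.asarray(tip_xyz, float)
    v = gain * error
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)

v = velocity_command(tip_xyz=[10.0, 5.0, 30.0], target_xyz=[12.0, 5.5, 42.0])
```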


Subjects
Algorithms; Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Equipment Design; Humans; Image Processing, Computer-Assisted; Needles; Phantoms, Imaging; Software; Software Design; Ultrasonography/methods