Results 1 - 20 of 30
1.
Sensors (Basel) ; 23(6)2023 Mar 11.
Article in English | MEDLINE | ID: mdl-36991751

ABSTRACT

The adoption of extended reality solutions is growing rapidly in the healthcare world. Augmented reality (AR) and virtual reality (VR) interfaces can bring advantages to various medical and health sectors; it is thus not surprising that the medical mixed reality (MR) market is among the fastest-growing ones. The present study reports a comparison between two of the most popular MR head-mounted displays, Magic Leap 1 and Microsoft HoloLens 2, for the visualization of 3D medical imaging data. We evaluate the functionalities and performance of both devices through a user study in which surgeons and residents assessed the visualization of 3D computer-generated anatomical models. The digital content is obtained through a dedicated medical imaging suite (the Verima imaging suite) developed by the Italian start-up Witapp s.r.l. According to our performance analysis in terms of frame rate, there are no significant differences between the two devices. The surgical staff expressed a clear preference for Magic Leap 1, particularly for its better visualization quality and ease of interaction with the 3D virtual content. Nonetheless, even though the questionnaire results were slightly more positive for Magic Leap 1, the spatial understanding of the 3D anatomical model, in terms of depth relations and spatial arrangement, was positively evaluated for both devices.
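As an illustrative aside (not the study's actual analysis pipeline), a frame-rate comparison of this kind can be summarised from per-frame timestamp logs and tested non-parametrically; everything below, including the fabricated logs and device labels, is a hypothetical sketch.

```python
# Hypothetical sketch: compare per-frame rendering rates of two HMDs.
# Log format and numbers are assumptions, not the study's actual data.
import numpy as np
from scipy.stats import mannwhitneyu

def fps_from_timestamps(ts_seconds):
    """Convert a log of frame timestamps (s) into instantaneous FPS samples."""
    dt = np.diff(np.asarray(ts_seconds, dtype=float))
    return 1.0 / dt[dt > 0]

# Fabricated example logs: ~60 Hz rendering with slightly different jitter.
rng = np.random.default_rng(0)
ts_magic_leap = np.cumsum(rng.normal(1 / 60, 0.0010, 2000))
ts_hololens2 = np.cumsum(rng.normal(1 / 60, 0.0012, 2000))

fps_ml = fps_from_timestamps(ts_magic_leap)
fps_hl = fps_from_timestamps(ts_hololens2)
print(f"Magic Leap 1: {fps_ml.mean():.1f} ± {fps_ml.std():.1f} FPS")
print(f"HoloLens 2:   {fps_hl.mean():.1f} ± {fps_hl.std():.1f} FPS")

# Non-parametric test for a difference between the frame-rate distributions.
stat, p = mannwhitneyu(fps_ml, fps_hl, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p:.3f}")
```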


Subjects
Augmented Reality, Computer-Assisted Surgery, Virtual Reality, Humans, Computer Simulation, Computer-Assisted Surgery/methods, Three-Dimensional Imaging
2.
Sensors (Basel) ; 23(7)2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37050554

ABSTRACT

The growing interest in augmented reality applications has led to an in-depth look at the performance of head-mounted displays and to their testing in numerous domains. Other devices for augmenting the real world with virtual information are presented less frequently, and the related studies usually focus on describing the device rather than analysing its performance. This is the case for projected augmented reality, which, compared to head-worn AR displays, offers the advantage of being simultaneously accessible by multiple users while preserving user awareness of the environment and the feeling of immersion. This work provides a general evaluation of a custom-made head-mounted projector for the aid of precision manual tasks, through an experimental protocol designed to investigate spatial registration, temporal registration, and their combination. The results of the tests show that the accuracy (0.6 ± 0.1 mm spatial registration error) and the motion-to-photon latency (113 ± 12 ms) make the proposed solution suitable for guiding precision tasks.
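A minimal sketch of how figures such as the spatial registration error and the motion-to-photon latency above can be summarised, assuming arrays of projected vs. reference landmark positions and paired motion/photon timestamps (the data layout and names are hypothetical, not the paper's protocol):

```python
# Minimal sketch (hypothetical data layout): summarise spatial registration
# error and motion-to-photon latency as mean ± standard deviation.
import numpy as np

def registration_error_mm(projected_xy, reference_xy):
    """Euclidean distances between projected and reference landmarks (mm)."""
    d = np.linalg.norm(np.asarray(projected_xy) - np.asarray(reference_xy), axis=1)
    return d.mean(), d.std()

def motion_to_photon_ms(t_motion_s, t_photon_s):
    """Delay between each tracked motion event and its projected update (ms)."""
    lag = (np.asarray(t_photon_s) - np.asarray(t_motion_s)) * 1000.0
    return lag.mean(), lag.std()
```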

3.
Sensors (Basel) ; 21(13)2021 Jun 29.
Article in English | MEDLINE | ID: mdl-34209748

ABSTRACT

Cryosurgery is a technique of growing popularity involving tissue ablation under controlled freezing. Technological advancement of devices, along with improvements in surgical technique, has turned cryosurgery from an experimental option into an established one for treating several diseases. However, cryosurgery is still limited by inaccurate planning based primarily on 2D visualization of the patient's preoperative images. Several works have aimed at modelling cryoablation through heat transfer simulations; however, most software applications do not meet some key requirements for routine clinical use, such as high computational speed and user-friendliness. This work aims to develop an intuitive platform for anatomical understanding and preoperative planning by integrating the information content of radiological images and cryoprobe specifications either in a 3D virtual environment (desktop application) or in a hybrid simulator that combines 3D printing with the augmented reality functionalities of Microsoft HoloLens. The proposed platform was preliminarily validated for the retrospective planning/simulation of two surgical cases. Results suggest that the platform is easy and quick to learn and could be used in clinical practice to improve anatomical understanding, to make surgical planning easier than with the traditional method, and to strengthen memorization of the surgical plan.


Subjects
Augmented Reality, Cryosurgery, Computer Simulation, Humans, Retrospective Studies, Software
4.
Sensors (Basel) ; 20(5)2020 Mar 06.
Article in English | MEDLINE | ID: mdl-32155808

ABSTRACT

The increasing capability of computing power and mobile graphics has made possible the release of self-contained augmented reality (AR) headsets featuring efficient head-anchored tracking solutions. Ego-motion estimation based on well-established infrared tracking of markers ensures sufficient accuracy and robustness. By contrast, wearable visible-light stereo cameras with a short baseline, operating under uncontrolled lighting conditions, suffer from tracking failures and ambiguities in pose estimation. To improve the accuracy of optical self-tracking and its resilience to marker occlusions, degraded camera calibrations, and inconsistent lighting, in this work we propose a sensor fusion approach based on Kalman filtering that integrates optical tracking data with inertial tracking data when computing motion correlation. To measure the improvement in AR overlay accuracy, experiments were performed with a custom-made AR headset designed to support complex manual tasks performed under direct vision. The experimental results show that the proposed solution improves the head-mounted display (HMD) tracking accuracy by one third and improves robustness, since the orientation of the target scene is still captured when some of the markers are occluded and when the optical tracking yields unstable and/or ambiguous results due to the limitations of head-anchored stereo tracking cameras under uncontrollable lighting conditions.
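As a hedged illustration of the general technique (not the authors' filter design), a 1-DoF optical/inertial Kalman fusion can be sketched as follows: the gyro rate drives the prediction step, the optical marker tracker updates the angle, and the update is simply skipped while markers are occluded.

```python
# Minimal 1-DoF sketch of optical/inertial sensor fusion with a Kalman filter.
# Not the paper's actual filter: state x = [angle, angular_rate]; the gyro
# drives the prediction and optical marker tracking observes the angle.
import numpy as np

class OrientationKF:
    def __init__(self, q_gyro=1e-4, r_optical=1e-2):
        self.x = np.zeros(2)              # [angle (rad), rate (rad/s)]
        self.P = np.eye(2)                # state covariance
        self.Q = q_gyro * np.eye(2)       # process noise (gyro drift)
        self.R = np.array([[r_optical]])  # optical measurement noise
        self.H = np.array([[1.0, 0.0]])   # optical tracker observes the angle

    def predict(self, gyro_rate, dt):
        # Gyro rate acts as the control input driving the prediction.
        self.x = np.array([self.x[0] + gyro_rate * dt, gyro_rate])
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, optical_angle):
        # Skip the correction when markers are occluded (no optical reading).
        if optical_angle is None:
            return
        y = optical_angle - self.H @ self.x           # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```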

5.
Sensors (Basel) ; 20(6)2020 03 13.
Article in English | MEDLINE | ID: mdl-32183212

ABSTRACT

Augmented reality (AR) head-mounted displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Despite that, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work we show the results of a user study aimed at qualitatively and quantitatively validating a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replicas of planar structures or more elaborate bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated by using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform can be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin lower than 1 mm. The results suggest that the proposed AR platform can foster the adoption of AR HMDs to guide high-precision manual tasks in the peripersonal space.
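A figure such as "more than 94% of the traced trajectories stayed within 1 mm" can be obtained by thresholding point-to-template distances; the sketch below (hypothetical inputs, with a dense point-to-point proxy for the point-to-curve distance) shows the idea, not the paper's evaluation code.

```python
# Minimal sketch (hypothetical inputs): fraction of a traced trajectory that
# stays within a given error margin of the planned/template trajectory.
import numpy as np

def fraction_within_margin(traced_pts, template_pts, margin_mm=1.0):
    """Fraction of traced points whose nearest template point is within margin_mm.

    traced_pts: (N, 3) sampled traced trajectory (mm).
    template_pts: (M, 3) densely sampled reference trajectory (mm).
    """
    traced = np.asarray(traced_pts)[:, None, :]      # (N, 1, 3)
    template = np.asarray(template_pts)[None, :, :]  # (1, M, 3)
    nearest = np.linalg.norm(traced - template, axis=2).min(axis=1)
    return float(np.mean(nearest <= margin_mm))
```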


Subjects
Augmented Reality, Three-Dimensional Imaging/methods, Imaging Phantoms, Wearable Electronic Devices, Computer Graphics, Humans, Computer-Assisted Surgery/trends, User-Computer Interface, Video Recording
6.
Neurosurg Rev ; 40(4): 537-548, 2017 Oct.
Article in English | MEDLINE | ID: mdl-27154018

ABSTRACT

Neuronavigation has become an essential neurosurgical tool in the pursuit of minimal invasiveness and maximal safety, even though it has several technical limitations. Augmented reality (AR) neuronavigation is a significant advance, providing a real-time updated 3D virtual model of anatomical details overlaid on the real surgical field. Currently, only a few AR systems have been tested in a clinical setting, and the aim of this work is to review such devices. We performed a PubMed search of reports restricted to human studies of in vivo applications of AR in any neurosurgical procedure, using the search terms "Augmented reality" and "Neurosurgery." Eligibility assessment was performed independently by two reviewers in an unblinded, standardized manner. The systems were qualitatively evaluated on the basis of the following: neurosurgical subspecialty of application, pathology and location of the treated lesions, real data source, virtual data source, tracking modality, registration technique, visualization processing, display type, and perception location. Eighteen studies published between 1996 and September 30, 2015 were included. The AR systems were grouped by real data source: microscope (8), hand- or head-held cameras (4), direct patient view (2), endoscope (1), X-ray fluoroscopy (1), and head-mounted display (1). A total of 195 lesions were treated: 75 (38.46%) were neoplastic, 77 (39.48%) neurovascular, 1 (0.51%) hydrocephalus, and 42 (21.53%) undetermined. The current literature confirms that AR is a reliable and versatile tool for minimally invasive approaches in a wide range of neurosurgical diseases, although prospective randomized studies are not yet available and technical improvements are needed.


Subjects
Neuronavigation/instrumentation, Computer-Assisted Surgery/instrumentation, Humans, Neuronavigation/methods, Computer-Assisted Surgery/methods
7.
Healthc Technol Lett ; 11(2-3): 101-107, 2024.
Article in English | MEDLINE | ID: mdl-38638490

ABSTRACT

Recent research studies have reported that the use of wearable augmented reality (AR) systems, such as head-mounted displays, for the in situ visualisation of ultrasound (US) images can improve the outcomes of US-guided biopsies through reduced procedure completion times and improved accuracy. Here, the authors continue in the direction of these recent developments and present the first AR system for guiding an in-depth tumour enucleation procedure under US guidance. The system features an innovative visualisation modality, with cutting trajectories that 'sink' into the tissue according to the depth reached by the electric scalpel, tracked in real time, and a virtual-to-virtual alignment between the scalpel's tip and the trajectory. The system has high accuracy in estimating the scalpel's tip position (mean depth error of 0.4 mm and mean radial error of 1.34 mm). Furthermore, a preliminary user study demonstrated that the system allows an in-depth tumour enucleation procedure to be successfully guided (i.e., preserving the safety margin around the lesion).
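Depth and radial error figures of the kind above can be obtained by decomposing the tip-position error along and across the planned cutting direction; a minimal sketch under that assumption (variable names hypothetical):

```python
# Minimal sketch (hypothetical inputs): split the tracked-tip error into a
# depth component (along the planned cutting direction) and a radial one.
import numpy as np

def depth_radial_error(estimated_tip, true_tip, axis_dir):
    """estimated_tip, true_tip: 3D points (mm); axis_dir: direction along depth."""
    axis_dir = np.asarray(axis_dir, dtype=float)
    axis_dir /= np.linalg.norm(axis_dir)
    err = np.asarray(estimated_tip, dtype=float) - np.asarray(true_tip, dtype=float)
    depth_err = float(np.dot(err, axis_dir))                  # signed, along axis
    radial_err = float(np.linalg.norm(err - depth_err * axis_dir))
    return depth_err, radial_err
```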

8.
IEEE J Transl Eng Health Med ; 12: 258-267, 2024.
Article in English | MEDLINE | ID: mdl-38410181

ABSTRACT

Achieving and maintaining proper image registration accuracy is an open challenge of image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC), based on the visual inspection of virtual 3D models of landmarks. We analyze the AR-RSC sensitivity and specificity by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set in order to simulate different registration errors. This study analyzes the performance of AR-RSC when varying (1) the virtual models selected for misalignment evaluation (e.g., the models of brackets, incisor teeth, and gingival margins in our experiment), (2) the type (translation/rotation) of registration error, and (3) the level of user experience with AR technologies. Results show that: (1) the sensitivity and specificity of the AR-RSC depend on the virtual models (globally, a median true positive rate of up to 79.2% was reached with brackets, and a median true negative rate of up to 64.3% with incisor teeth); (2) some error components are more difficult to identify visually; (3) the level of user experience does not affect the method's performance. In conclusion, the proposed AR-RSC, tested also in the operating room, could represent an efficient method to monitor and optimize registration accuracy during the intervention, but special attention should be paid to the selection of the AR data chosen for the visual inspection of the registration accuracy.
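The reported true-positive and true-negative rates follow from standard contingency counts over the raters' responses; a minimal sketch with a hypothetical data layout (not the study's analysis code):

```python
# Minimal sketch: sensitivity/specificity of the visual registration check from
# per-image responses. "Flagged" means the rater reported a misregistration;
# ground truth is whether an artificial error was actually injected.
import numpy as np

def sensitivity_specificity(flagged, error_injected):
    flagged = np.asarray(flagged, dtype=bool)
    error_injected = np.asarray(error_injected, dtype=bool)
    tp = np.sum(flagged & error_injected)
    tn = np.sum(~flagged & ~error_injected)
    fp = np.sum(flagged & ~error_injected)
    fn = np.sum(~flagged & error_injected)
    sensitivity = tp / (tp + fn)    # true positive rate
    specificity = tn / (tn + fp)    # true negative rate
    return sensitivity, specificity
```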


Subjects
Augmented Reality, Computer-Assisted Surgery, Humans, Computer-Assisted Surgery/methods, Operating Rooms, Imaging Phantoms
10.
IEEE Trans Vis Comput Graph ; 28(8): 3069, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35771835

ABSTRACT

In the original article, there was a mistake in the content of Table 2, page 8, column 1, as published. The values of the mean and standard deviation of the virtual-to-real overlay error in visual angles, which are reported for different checkerboard distances, need to be corrected. Due to a typing error within the data analysis code, we mistakenly considered an erroneous value of the average angular resolution of the eye-replacement camera. This scale factor is used to convert the original registration errors (expressed in pixels) into angular registration errors (in arcmin). The correct value of the average angular resolution is ≈2.67 arcmin/pixel. The corrected Table 2 appears below.
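For context, the scale factor mentioned in the erratum is the camera's average angular resolution; assuming a pinhole model with a known focal length in pixels, the conversion can be sketched as follows (not the authors' analysis code):

```python
# Sketch of the correction described above: convert registration errors from
# pixels to visual angle (arcmin) using the camera's average angular resolution.
import numpy as np

def arcmin_per_pixel(focal_length_px):
    """Average angular size of one pixel for a pinhole camera (arcmin/pixel)."""
    return np.degrees(np.arctan(1.0 / focal_length_px)) * 60.0

def pixels_to_arcmin(error_px, focal_length_px):
    return np.asarray(error_px) * arcmin_per_pixel(focal_length_px)

# With the value reported in the erratum (≈2.67 arcmin/pixel), a 3-pixel
# overlay error corresponds to roughly 8 arcmin of visual angle.
print(3 * 2.67)  # ≈ 8.0 arcmin
```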

11.
IEEE Trans Vis Comput Graph ; 28(3): 1608-1618, 2022 03.
Article in English | MEDLINE | ID: mdl-32881688

ABSTRACT

Egocentric augmented reality (AR) interfaces are quickly becoming a key asset for assisting high-precision activities in the peripersonal space in several application fields. In these applications, accurate and robust registration of computer-generated information to the real scene is hard to achieve with traditional optical see-through (OST) displays, given that it relies on the accurate calibration of the combined eye-display projection model. The calibration is required to efficiently estimate the projection parameters of the pinhole model that encapsulate the optical features of the display and whose values vary according to the position of the user's eye. In this article, we describe an approach that prevents any parallax-related AR misregistration at a pre-defined working distance in OST displays with infinity focus. Our strategy relies on a magnifier placed in front of the OST display and on a proper parameterization of the virtual rendering camera, achieved through a dedicated calibration procedure that accounts for the contribution of the magnifier. We model the registration error due to the viewpoint parallax outside the ideal working distance. Finally, we validate our strategy on an OST display and show that sub-millimetric registration accuracy can be achieved for working distances within ±100 mm of the focal length of the magnifier.
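A simplified first-order version of the parallax error modelled in the article can be written as follows, where $\Delta e$ is the lateral shift of the eye from the calibrated viewpoint, $d_0$ the working distance at which real and virtual content are aligned, and $d$ the actual distance of the observed point (notation and numbers are illustrative, not taken from the paper):

```latex
% First-order parallax misregistration for an OST display calibrated at d_0,
% observed from a viewpoint laterally shifted by \Delta e (small-angle regime).
\[
  \varepsilon_{\text{ang}}(d) \;\approx\; \Delta e \left| \frac{1}{d_0} - \frac{1}{d} \right|,
  \qquad
  \delta_{\text{lin}}(d) \;=\; d\,\varepsilon_{\text{ang}}(d) \;\approx\; \Delta e \,\frac{|d - d_0|}{d_0}.
\]
% Illustrative numbers: \Delta e = 3\,\mathrm{mm}, d_0 = 350\,\mathrm{mm},
% d = 450\,\mathrm{mm} give \delta_{\text{lin}} \approx 3 \cdot 100 / 350
% \approx 0.9\,\mathrm{mm}; the error vanishes as d \to d_0.
```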


Subjects
Augmented Reality, Computer-Assisted Surgery, Calibration, Computer Graphics, Personal Space, Computer-Assisted Surgery/methods, User-Computer Interface
12.
IEEE J Biomed Health Inform ; 26(2): 910-921, 2022 02.
Article in English | MEDLINE | ID: mdl-34115600

ABSTRACT

Visual augmented reality (AR) has the potential to improve the accuracy, efficiency, and reproducibility of computer-assisted orthopaedic surgery (CAOS). AR head-mounted displays (HMDs) additionally allow target observation without eye shifts and an egocentric view. Recently, a markerless tracking and registration (MTR) algorithm was proposed to avoid the artificial markers that are conventionally pinned into the target anatomy for tracking, as their use prolongs the surgical workflow, introduces human-induced errors, and requires additional surgical invasion of the patient. However, such an MTR-based method has neither been explored for surgical applications nor integrated into current AR HMDs, making ergonomic HMD-based markerless AR CAOS navigation hard to achieve. To these ends, we present a versatile, device-agnostic, and accurate HMD-based AR platform. Our software platform, supporting both video see-through (VST) and optical see-through (OST) modes, integrates two proposed fast calibration procedures using a specially designed calibration tool. According to the camera-based evaluation, our AR platform achieves a display error of 6.31 ± 2.55 arcmin for VST and 7.72 ± 3.73 arcmin for OST. A proof-of-concept markerless surgical navigation system to assist in femoral bone drilling was then developed on top of the platform and Microsoft HoloLens 1. According to the user study, both the VST and OST markerless navigation systems are reliable, with the OST system providing the best usability. The measured navigation error is 4.90 ± 1.04 mm and 5.96 ± 2.22° for the VST system, and 4.36 ± 0.80 mm and 5.65 ± 1.42° for the OST system.


Subjects
Augmented Reality, Orthopedics, Computer-Assisted Surgery, Calibration, Humans, Reproducibility of Results, Computer-Assisted Surgery/methods
13.
Article in English | MEDLINE | ID: mdl-35627884

ABSTRACT

In recent years, huge progress has been made in the management of brain tumors, due to the availability of imaging devices, which provide fundamental anatomical and pathological information not only for diagnostic purposes [...].


Subjects
Augmented Reality, Brain Neoplasms, Computer-Assisted Surgery, Brain Neoplasms/diagnostic imaging, Forecasting, Humans
14.
Bioengineering (Basel) ; 8(10)2021 Sep 25.
Article in English | MEDLINE | ID: mdl-34677204

ABSTRACT

Augmented Reality (AR) headsets have become the most ergonomic and efficient visualization devices to support complex manual tasks performed under direct vision. Their ability to provide hands-free interaction with the augmented scene makes them perfect for manual procedures such as surgery. This study demonstrates the reliability of an AR head-mounted display (HMD), conceived for surgical guidance, in navigating in-depth high-precision manual tasks guided by a 3D ultrasound imaging system. The integration between the AR visualization system and the ultrasound imaging system provides the surgeon with real-time intra-operative information on unexposed soft tissues that are spatially registered with the surrounding anatomic structures. The efficacy of the AR guiding system was quantitatively assessed with an in vitro study simulating a biopsy intervention aimed at determining the level of accuracy achievable. In the experiments, 10 subjects were asked to perform the biopsy on four spherical lesions of decreasing sizes (10, 7, 5, and 3 mm). The experimental results showed that 80% of the subjects were able to successfully perform the biopsy on the 5 mm lesion, with a 2.5 mm system accuracy. The results confirmed that the proposed integrated system can be used for navigation during in-depth high-precision manual tasks.

15.
Article in English | MEDLINE | ID: mdl-34639256

ABSTRACT

BACKGROUND: This report discusses the utility of a wearable augmented reality platform in neurosurgery for parasagittal and convexity en plaque meningiomas with bone flap removal and custom-made cranioplasty. METHODS: A real patient with an en plaque cranial vault meningioma with diffuse and extensive dural involvement, extracranial extension into the calvarium, and homogeneous contrast enhancement on gadolinium-enhanced T1-weighted MRI was selected for this case study. A patient-specific manikin was designed, starting from the segmentation of the patient's preoperative MRI images, to simulate a craniotomy procedure. Surgical planning was performed according to the segmented anatomy, and customized bone flaps were designed accordingly. During the surgical simulation stage, the VOSTARS head-mounted display was used to accurately display the planned craniotomy trajectory over the manikin skull. The precision of the craniotomy was assessed based on the evaluation of the previously prepared custom-made bone flaps. RESULTS: A bone flap with a radius 0.5 mm smaller than that of an ideal craniotomy fitted perfectly over the performed craniotomy, demonstrating an error of less than ±1 mm in task execution. The results of this laboratory-based experiment suggest that the proposed augmented reality platform helps in simulating convexity en plaque meningioma resection and custom-made cranioplasty, as carefully planned in the preoperative phase. CONCLUSIONS: Augmented reality head-mounted displays have the potential to be a useful adjunct in surgical tumor resection, cranial vault lesion craniotomy, and skull base surgery, but further studies with larger series are needed.


Subjects
Augmented Reality, Meningeal Neoplasms, Meningioma, Craniotomy, Humans, Laboratories, Meningeal Neoplasms/diagnostic imaging, Meningeal Neoplasms/surgery, Meningioma/diagnostic imaging, Meningioma/surgery
16.
Ann Biomed Eng ; 49(9): 2590-2605, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34297263

ABSTRACT

Today, neuronavigation is widely used in daily clinical routine to perform safe and efficient surgery. Augmented reality (AR) interfaces can provide anatomical models and preoperative planning contextually blended with the real surgical scenario, overcoming the limitations of traditional neuronavigators. This study aims to demonstrate the reliability of a new-concept AR headset in navigating complex craniotomies. Moreover, we aim to prove the efficacy of a patient-specific, template-based methodology for fast, non-invasive, and fully automatic planning-to-patient registration. The navigation performance of the AR platform was assessed with an in vitro study whose goal was twofold: to measure the real-to-virtual 3D target visualization error (TVE), and to assess the navigation accuracy through a user study involving 10 subjects tracing a complex craniotomy. The feasibility of the template-based registration was preliminarily tested on a volunteer. The TVE mean and standard deviation were 1.3 and 0.6 mm. The results of the user study, over 30 traced craniotomies, showed that 97% of the trajectory length was traced within an error margin of 1.5 mm, and 92% within a margin of 1 mm. The in vivo test confirmed the feasibility and reliability of the patient-specific registration template. The proposed AR headset allows ergonomic and intuitive use of the preoperative planning, and it can represent a valid option to support neurosurgical tasks.


Subjects
Augmented Reality, Craniotomy/methods, Neurosurgery/methods, Wearable Electronic Devices, Adult, Craniotomy/instrumentation, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Neurosurgery/instrumentation, Imaging Phantoms, Skull/diagnostic imaging, Skull/surgery
17.
J Imaging ; 7(8)2021 Aug 05.
Article in English | MEDLINE | ID: mdl-34460773

ABSTRACT

Wearable video see-through (VST) devices for augmented reality (AR) and for obtaining a magnified view are taking hold in the medical and surgical fields. However, these devices are not yet usable in daily clinical practice due to focusing problems and a limited depth of field. This study investigates the use of liquid-lens optics to create an autofocus system for wearable VST visors. The autofocus system is based on a time-of-flight (TOF) distance sensor and an active autofocus control system. The autofocus system integrated in the wearable VST visors showed good potential in terms of providing rapid focus at various distances and a magnified view.
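The control logic of such an autofocus loop can be sketched as mapping the TOF-measured working distance to the optical power of the liquid lens; the driver API, offset term, and loop parameters below are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a TOF-driven autofocus loop for a liquid-lens VST visor.
# The sensor/lens driver calls (read_distance_m, set_optical_power) and the
# offset term are assumptions used only to illustrate the control flow.
import time

def target_optical_power(distance_m, static_offset_dpt=0.0):
    """Optical power (diopters) needed to focus a scene at distance_m metres."""
    return 1.0 / max(distance_m, 0.1) + static_offset_dpt

def autofocus_loop(tof_sensor, liquid_lens, period_s=0.02, smoothing=0.3):
    power = 0.0
    while True:
        d = tof_sensor.read_distance_m()                        # TOF working distance
        target = target_optical_power(d)
        power = (1 - smoothing) * power + smoothing * target    # low-pass filter
        liquid_lens.set_optical_power(power)                    # hypothetical driver call
        time.sleep(period_s)
```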

18.
Front Robot AI ; 7: 572001, 2020.
Article in English | MEDLINE | ID: mdl-33501331

ABSTRACT

Optical see-through (OST) augmented reality head-mounted displays are quickly emerging as a key asset in several application fields, but their ability to effectively assist high-precision activities in the peripersonal space is still sub-optimal, owing to the calibration procedure required to properly model the user's viewpoint through the see-through display. In this work, we demonstrate the beneficial impact, on the parallax-related AR misregistration, of using optical see-through displays whose optical engines collimate the computer-generated image at a depth close to the fixation point of the user in the peripersonal space. To estimate the projection parameters of the OST display for a generic viewpoint position, our strategy relies on a dedicated parameterization of the virtual rendering camera based on a calibration routine that exploits photogrammetry techniques. We model the registration error due to the viewpoint shift and validate it on an OST display with a short focal distance. The results of the tests demonstrate that, with our strategy, the parallax-related registration error is sub-millimetric provided that the scene under observation stays within a suitable view volume that falls in a ±10 cm depth range around the focal plane of the display. This finding paves the way for new multi-focal models of OST HMDs specifically conceived to aid high-precision manual tasks in the peripersonal space.

19.
J Clin Med ; 9(11)2020 Nov 05.
Article in English | MEDLINE | ID: mdl-33167432

ABSTRACT

BACKGROUND: In the context of guided surgery, augmented reality (AR) represents a groundbreaking improvement. The Video and Optical See-Through Augmented Reality Surgical System (VOSTARS) is a new AR wearable head-mounted display (HMD), recently developed as an advanced navigation tool for maxillofacial and plastic surgery and other non-endoscopic surgeries. In this study, we report the results of phantom tests with VOSTARS aimed at evaluating its feasibility and accuracy in performing maxillofacial surgical tasks. METHODS: An early prototype of VOSTARS was used. A Le Fort 1 osteotomy was selected as the experimental task to be performed under VOSTARS guidance. A dedicated set-up was prepared, including the design of a maxillofacial phantom, an ad hoc tracker anchored to the occlusal splint, and cutting templates for accuracy assessment. Both qualitative and quantitative assessments were carried out. RESULTS: VOSTARS, used in combination with the designed maxilla tracker, showed excellent tracking robustness under operating room lighting. Accuracy tests showed that 100% of Le Fort 1 trajectories were traced with an accuracy of ±1.0 mm, and, on average, 88% of the trajectory length was within ±0.5 mm accuracy. CONCLUSIONS: Our preliminary results suggest that the VOSTARS system can be a feasible and accurate solution for guiding maxillofacial surgical tasks, paving the way for its validation in clinical trials and for a wide spectrum of maxillofacial applications.

20.
J Healthc Eng ; 2019: 5613931, 2019.
Article in English | MEDLINE | ID: mdl-31316742

ABSTRACT

Aortic valve replacement is the only definitive treatment for aortic stenosis, a highly prevalent condition in the elderly population. Minimally invasive surgery brought numerous benefits to this intervention, and robotics recently provided additional improvements in terms of telemanipulation, motion scaling, and smaller incisions. Difficulty in obtaining a clear and wide field of vision is a major challenge in minimally invasive aortic valve surgery: the surgeon orients with difficulty because of the lack of direct view and the limited space. This work focuses on the development of a computer vision methodology, for a three-eyed endoscopic vision system, to ease minimally invasive instrument guidance during aortic valve surgery. Specifically, it presents an efficient image stitching method to improve spatial awareness and overcome the orientation problems that arise when the cameras are decentralized with respect to the main axis of the aorta and are oriented non-parallel to one another. The proposed approach was tested for the navigation of an innovative robotic system for minimally invasive valve surgery. Based on the specific geometry of the setup and the intrinsic parameters of the three cameras, we estimate the proper plane-induced homographic transformation that merges the views of the operative site plane into a single stitched image. To evaluate the deviation from correct image alignment, we performed quantitative tests by stitching a chessboard pattern. The tests showed a minimum error, with respect to the image size, of 0.46 ± 0.15% measured at the homography distance of 40 mm and a maximum error of 6.09 ± 0.23% at the maximum offset of 10 mm. Three surgeons experienced in aortic valve replacement by mini-sternotomy and mini-thoracotomy performed experimental tests based on the comparison of navigation and orientation capabilities in a silicone aorta with and without the stitched image. The tests showed that the stitched image allows for good orientation and navigation within the aorta and, furthermore, provides more safety while releasing the valve than driving from the three separate views. The average processing time for stitching three views into one image is 12.6 ms, proving that the method is not computationally expensive and thus leaves room for further real-time processing.
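The plane-induced homography mentioned above has a standard closed form, H = K_j (R - t n^T / d) K_i^{-1}, for two cameras related by rotation R and translation t and a scene plane with unit normal n at distance d from the first camera; a minimal warping sketch with hypothetical camera parameters (not the authors' implementation):

```python
# Minimal sketch of plane-induced homography stitching (parameters hypothetical).
# For cameras i -> j related by (R, t) and a scene plane with unit normal n at
# distance d from camera i: H = K_j (R - t n^T / d) K_i^{-1}.
import cv2
import numpy as np

def plane_induced_homography(K_i, K_j, R, t, n, d):
    H = K_j @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_i)
    return H / H[2, 2]                      # normalise so H[2, 2] = 1

def warp_into_reference(img_side, H, canvas_size_wh):
    """Warp a lateral camera view into the reference (central) camera frame."""
    return cv2.warpPerspective(img_side, H, canvas_size_wh)

# The warped lateral views can then be blended with the central view (e.g. by
# simple feathering) to obtain a single stitched image for navigation.
```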


Subjects
Endoscopes, Endoscopy, Computer-Assisted Image Processing/methods, Algorithms, Aortic Valve/surgery, Endoscopy/instrumentation, Endoscopy/methods, Equipment Design, Heart Valve Prosthesis Implantation/methods, Humans, Sternotomy/methods, Thoracotomy/methods