Results 1 - 9 of 9
1.
Surg Endosc ; 38(5): 2483-2496, 2024 May.
Article in English | MEDLINE | ID: mdl-38456945

ABSTRACT

OBJECTIVE: Evaluation of the benefits of a virtual reality (VR) environment with a head-mounted display (HMD) for decision-making in liver surgery. BACKGROUND: Training in liver surgery involves appraising radiologic images and considering the patient's clinical information. Accurate assessment of 2D-tomography images is complex and requires considerable experience, and often the images are divorced from the clinical information. We present a comprehensive and interactive tool for visualizing operation planning data in a VR environment using a head-mounted display and compare it to 3D visualization and 2D-tomography. METHODS: Ninety medical students were randomized into three groups (1:1:1 ratio). All participants analyzed three liver surgery patient cases of increasing difficulty. The cases were analyzed using 2D-tomography data (group "2D"), a 3D visualization on a 2D display (group "3D") or within a VR environment (group "VR"). The VR environment was displayed using the "Oculus Rift™" HMD. Participants answered 11 questions on anatomy, tumor involvement and surgical decision-making and 18 evaluative questions (Likert scale). RESULTS: The sum of correct answers was significantly higher in the 3D (7.1 ± 1.4, p < 0.001) and VR (7.1 ± 1.4, p < 0.001) groups than in the 2D group (5.4 ± 1.4), with no difference between 3D and VR (p = 0.987). Time to answer was significantly shorter in the 3D (6:44 ± 02:22 min, p < 0.001) and VR (6:24 ± 02:43 min, p < 0.001) groups than in the 2D group (09:13 ± 03:10 min), again with no difference between 3D and VR (p = 0.419). In the questionnaire, the VR environment was rated most useful for identifying anatomic anomalies, risk and target structures, and for transferring anatomical and pathological information to the intraoperative situation.
CONCLUSIONS: A VR environment with 3D visualization using an HMD is a useful surgical training tool for accurately and quickly determining liver anatomy and tumor involvement in surgery.


Subjects
Imaging, Three-Dimensional , Tomography, X-Ray Computed , Virtual Reality , Humans , Tomography, X-Ray Computed/methods , Female , Male , Hepatectomy/methods , Hepatectomy/education , Adult , Young Adult , Clinical Decision-Making , User-Computer Interface , Liver Neoplasms/surgery , Liver Neoplasms/diagnostic imaging
2.
Surg Endosc ; 36(1): 126-134, 2022 Jan.
Article in English | MEDLINE | ID: mdl-33475848

ABSTRACT

BACKGROUND: Virtual reality (VR) with head-mounted displays (HMD) may improve medical training and patient care by improving display and integration of different types of information. The aim of this study was to evaluate among different healthcare professions the potential of an interactive and immersive VR environment for liver surgery that integrates all relevant patient data from different sources needed for planning and training of procedures. METHODS: 3D-models of the liver, other abdominal organs, vessels, and tumors of a sample patient with multiple hepatic masses were created. 3D-models, clinical patient data, and other imaging data were visualized in a dedicated VR environment with an HMD (IMHOTEP). Users could interact with the data using head movements and a computer mouse. Structures of interest could be selected and viewed individually or grouped. IMHOTEP was evaluated in the context of preoperative planning and training of liver surgery and for the potential of broader surgical application. A standardized questionnaire was voluntarily answered by four groups (students, nurses, resident and attending surgeons). RESULTS: In the evaluation by 158 participants (57 medical students, 35 resident surgeons, 13 attending surgeons and 53 nurses), 89.9% found the VR system agreeable to work with. Participants generally agreed that complex cases in particular could be assessed better (94.3%) and faster (84.8%) with VR than with traditional 2D display methods. The highest potential was seen in student training (87.3%), resident training (84.6%), and clinical routine use (80.3%). Least potential was seen in nursing training (54.8%). CONCLUSIONS: The present study demonstrates that using VR with HMD to integrate all available patient data for the preoperative planning of hepatic resections is a viable concept. VR with HMD promises great potential to improve medical training and operation planning and thereby to achieve improvement in patient care.


Subjects
Surgeons , Virtual Reality , Humans , Liver , User-Computer Interface
3.
Int J Comput Assist Radiol Surg ; 19(6): 985-993, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38407730

ABSTRACT

PURPOSE: In surgical computer vision applications, data privacy and expert annotation challenges impede the acquisition of labeled training data. Unpaired image-to-image translation techniques have been explored to automatically generate annotated datasets by translating synthetic images into a realistic domain. The preservation of structure and semantic consistency, i.e., per-class distribution during translation, poses a significant challenge, particularly in cases of semantic distributional mismatch. METHOD: This study empirically investigates various translation methods for generating data in surgical applications, explicitly focusing on semantic consistency. Through our analysis, we introduce a novel and simple combination of effective approaches, which we call ConStructS. The defined losses within this approach operate on multiple image patches and spatial resolutions during translation. RESULTS: Various state-of-the-art models were extensively evaluated on two challenging surgical datasets. With two different evaluation schemes, the semantic consistency and the usefulness of the translated images on downstream semantic segmentation tasks were evaluated. The results demonstrate the effectiveness of the ConStructS method in minimizing semantic distortion, with images generated by this model showing superior utility for downstream training. CONCLUSION: In this study, we tackle semantic inconsistency in unpaired image translation for surgical applications with minimal labeled data. The simple model (ConStructS) enhances consistency during translation and serves as a practical way of generating fully labeled and semantically consistent datasets at minimal cost. Our code is available at https://gitlab.com/nct_tso_public/constructs .
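The abstract describes losses that operate on multiple image patches and spatial resolutions. The following is a minimal sketch of that general idea only, assuming a plain L1 patch distance and average-pool downsampling; the function names and loss form are illustrative assumptions, not the actual ConStructS losses:

```python
def downsample(img, factor):
    """Average-pool a 2D image (list of lists of floats) by an integer factor."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h - h % factor, factor):
        row = []
        for j in range(0, w - w % factor, factor):
            block = [img[i + di][j + dj] for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def patch_l1(a, b, patch, stride):
    """Mean absolute difference accumulated over corresponding patches."""
    h, w = len(a), len(a[0])
    total, count = 0.0, 0
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            for di in range(patch):
                for dj in range(patch):
                    total += abs(a[i + di][j + dj] - b[i + di][j + dj])
                    count += 1
    return total / count

def multiscale_patch_loss(src, gen, factors=(1, 2)):
    """Sum patch losses over several resolutions of the same image pair."""
    loss = 0.0
    for f in factors:
        loss += patch_l1(downsample(src, f), downsample(gen, f), patch=2, stride=2)
    return loss
```

In a real translation model this kind of term would be computed on feature maps and combined with adversarial losses; the sketch only shows how patch-level comparison at several scales penalizes local semantic drift.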


Subjects
Semantics , Humans , Surgery, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Algorithms
4.
IEEE Trans Biomed Eng ; PP, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39008390

ABSTRACT

A major challenge in image-guided laparoscopic surgery is that structures of interest often deform and go out of view, even if only momentarily. Methods which rely on an up-to-date impression of those structures, such as registration or localisation, are undermined in these circumstances. This is particularly true for soft-tissue structures that continually change shape: in registration, they must often be re-mapped. Furthermore, methods which require 'revisiting' previously seen areas cannot in principle function reliably in dynamic contexts, drastically weakening their uptake in the operating room. We present a novel approach for learning to estimate the deformed states of previously seen soft-tissue surfaces from currently observable regions, using a combined approach that includes a Graph Neural Network (GNN). The training data is generated semi-automatically from stereo laparoscopic surgery videos with minimal labelling effort. Trackable segments are first identified using a feature-detection algorithm, from which surface meshes are produced using depth estimation and Delaunay triangulation. We show the method can predict the displacements of previously visible soft-tissue structures connected to currently visible regions with observed displacements, both on our own data and on porcine data. Our approach learns to compensate for non-rigidity in abdominal endoscopic scenes directly from stereo laparoscopic videos through a new problem formulation, and stands to benefit a variety of target applications in dynamic environments. Project page for this work: https://gitlab.com/nct_tso_public/seesaw-soft-tissue-deformation.
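The core task here, estimating displacements of out-of-view mesh vertices from currently observed ones, can be illustrated with a much simpler stand-in for the learned GNN: iterative neighbour averaging on the mesh graph, with observed vertices held fixed. The graph format and function name are assumptions for illustration only:

```python
def propagate_displacements(adjacency, observed, iterations=50):
    """adjacency: {vertex: [neighbours]}; observed: {vertex: (dx, dy, dz)}.
    Returns an estimated displacement for every vertex; observed vertices
    keep their measured values (Dirichlet-style boundary conditions)."""
    disp = {v: observed.get(v, (0.0, 0.0, 0.0)) for v in adjacency}
    for _ in range(iterations):
        new = {}
        for v, nbrs in adjacency.items():
            if v in observed:          # visible vertex: keep the measurement
                new[v] = observed[v]
                continue
            # occluded vertex: average its neighbours' current estimates
            sums = [0.0, 0.0, 0.0]
            for n in nbrs:
                for k in range(3):
                    sums[k] += disp[n][k]
            new[v] = tuple(s / len(nbrs) for s in sums)
        disp = new
    return disp
```

A trained GNN replaces the fixed averaging rule with learned message functions, which is what lets it capture tissue-specific, non-uniform deformation rather than smooth interpolation.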

5.
Sci Data ; 11(1): 242, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409278

ABSTRACT

Endoscopic optical coherence tomography (OCT) offers a non-invasive approach to perform the morphological and functional assessment of the middle ear in vivo. However, interpreting such OCT images is challenging and time-consuming due to the shadowing of preceding structures. Deep neural networks have emerged as a promising tool to enhance this process in multiple aspects, including segmentation, classification, and registration. Nevertheless, the scarcity of annotated datasets of OCT middle ear images poses a significant hurdle to the performance of neural networks. We introduce the Dresden in vivo OCT Dataset of the Middle Ear (DIOME) featuring 43 OCT volumes from both healthy and pathological middle ears of 29 subjects. DIOME provides semantic segmentations of five crucial anatomical structures (tympanic membrane, malleus, incus, stapes and promontory), and sparse landmarks delineating the salient features of the structures. The availability of these data facilitates the training and evaluation of algorithms regarding various analysis tasks with middle ear OCT images, e.g. diagnostics.


Subjects
Ear, Middle , Tomography, Optical Coherence , Humans , Algorithms , Ear, Middle/diagnostic imaging , Neural Networks, Computer , Tomography, Optical Coherence/methods
6.
Int J Comput Assist Radiol Surg ; 17(1): 167-176, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34697757

ABSTRACT

PURPOSE: The initial registration of a 3D pre-operative CT model to a 2D laparoscopic video image in augmented reality systems for liver surgery needs to be fast, intuitive to perform and with minimal interruptions to the surgical intervention. Several recent methods have focussed on using easily recognisable landmarks across modalities. However, these methods still need manual annotation or manual alignment. We propose a novel, fully automatic pipeline for 3D-2D global registration in laparoscopic liver interventions. METHODS: Firstly, we train a fully convolutional network for the semantic detection of liver contours in laparoscopic images. Secondly, we propose a novel contour-based global registration algorithm to estimate the camera pose without any manual input during surgery. The contours used are the anterior ridge and the silhouette of the liver. RESULTS: We show excellent generalisation of the semantic contour detection on test data from 8 clinical cases. In quantitative experiments, the proposed contour-based registration can successfully estimate a global alignment with as little as 30% of the liver surface, a visibility ratio which is characteristic of laparoscopic interventions. Moreover, the proposed pipeline showed very promising results in clinical data from 5 laparoscopic interventions. CONCLUSIONS: Our proposed automatic global registration could make augmented reality systems more intuitive and usable for surgeons and easier to translate to operating rooms. Yet, as the liver is deformed significantly during surgery, it will be very beneficial to incorporate deformation into our method for more accurate registration.
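To give a flavour of contour-based alignment, here is a toy closed-form least-squares rigid alignment of corresponding 2D contour points. This is a deliberate simplification: the paper estimates a full 3D camera pose from detected liver contours without known correspondences, which this sketch does not attempt; all names are illustrative:

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rotation angle and translation mapping src onto dst,
    assuming known point correspondences between the two 2D contours."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]   # source centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]   # target centroid
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]
        bx, by = dx - cd[0], dy - cd[1]
        num += ax * by - ay * bx        # accumulated cross products
        den += ax * bx + ay * by        # accumulated dot products
    theta = math.atan2(num, den)        # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # translation that maps the rotated source centroid onto the target centroid
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)
```

The 3D-2D problem additionally involves camera projection and contour classes (anterior ridge vs. silhouette), so the real registration is an iterative pose optimization rather than a single closed-form step.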


Subjects
Augmented Reality , Laparoscopy , Surgery, Computer-Assisted , Algorithms , Humans , Imaging, Three-Dimensional , Liver/diagnostic imaging , Liver/surgery
7.
Sci Rep ; 11(1): 13440, 2021 Jun 29.
Article in English | MEDLINE | ID: mdl-34188080

ABSTRACT

Recent technological advances have made Virtual Reality (VR) attractive in both research and real-world applications such as training, rehabilitation, and gaming. Although these fields have benefited from VR technology, it remains unclear whether VR contributes to better spatial understanding and training in the context of surgical planning. In this study, we evaluated the use of VR by comparing the recall of spatial information in two learning conditions: a head-mounted display (HMD) and a desktop screen (DT). Specifically, we explored (a) a scene-understanding task and then (b) a direction-estimation task using two 3D models (i.e., a liver and a pyramid). In the scene-understanding task, participants had to navigate the rendered 3D models by means of rotation, zoom and transparency in order to identify the spatial relationships among their internal objects. In the subsequent direction-estimation task, participants had to point at a previously identified target object, i.e., an internal sphere, on a 3D-printed version of the model using a tracked pointing tool. Results showed that the learning condition (HMD or DT) did not influence participants' memory and confidence ratings of the models. In contrast, the model type, that is, whether the model to be recalled was a liver or a pyramid, significantly affected participants' memory of the internal structure of the model. Furthermore, localizing the internal position of the target sphere was also unaffected by participants' previous experience of the model via HMD or DT. Overall, the results provide novel insights into the use of VR in a surgical planning scenario and have important implications for medical learning by shedding light on the mental models we build to recall spatial structures.

8.
Int J Comput Assist Radiol Surg ; 14(7): 1147-1155, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30993520

ABSTRACT

PURPOSE: In surgical navigation, pre-operative organ models are presented to surgeons during the intervention to help them find their target efficiently. In the case of soft tissue, these models need to be deformed and adapted to the current situation using intra-operative sensor data. A promising way to realize this is with real-time capable biomechanical models. METHODS: We train a fully convolutional neural network to estimate a displacement field for all points inside an organ when given only the displacement of a part of the organ's surface. The network trains on entirely synthetic data of random organ-like meshes, which allows us to use much more data than is otherwise available. The input and output data are discretized into a regular grid, allowing us to fully utilize the capabilities of convolutional operators and to train and infer in a highly parallelized manner. RESULTS: The system is evaluated on in-silico liver models, phantom liver data and human in-vivo breathing data. We test the performance with varying material parameters, organ shapes and amounts of visible surface. Even though the network is trained only on synthetic data, it adapts well to the various cases and gives a good estimation of the internal organ displacement. Inference runs at over 50 frames per second. CONCLUSION: We present a novel method for training a data-driven, real-time capable deformation model. Its accuracy is comparable to other registration methods, it adapts very well to previously unseen organs, and it does not need to be re-trained for every patient. The high inference speed makes this method useful for many applications such as surgical navigation and real-time simulation.
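The discretization step described above, turning sparse surface-point displacements into grid-shaped network input, can be sketched as follows. Cell size, grid shape and the averaging of points that share a cell are assumptions for illustration, not the paper's exact scheme:

```python
def rasterize_displacements(points, disps, grid_shape, cell_size):
    """points: (x, y, z) positions; disps: matching (dx, dy, dz) vectors.
    Returns a dict mapping occupied (i, j, k) grid cells to the mean
    displacement of the points that fall inside them."""
    acc = {}   # cell -> [sum_dx, sum_dy, sum_dz, count]
    for (x, y, z), d in zip(points, disps):
        cell = tuple(int(c // cell_size) for c in (x, y, z))
        if all(0 <= cell[i] < grid_shape[i] for i in range(3)):
            e = acc.setdefault(cell, [0.0, 0.0, 0.0, 0])
            for k in range(3):
                e[k] += d[k]
            e[3] += 1
    # average the accumulated displacements per occupied cell
    return {cell: tuple(e[k] / e[3] for k in range(3)) for cell, e in acc.items()}
```

A dense array over `grid_shape` (zeros for empty cells, plus an occupancy channel) would be the natural tensor form to feed a fully convolutional network.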


Subjects
Liver/surgery , Neural Networks, Computer , Computer Simulation , Computer Systems , Humans , Organ Motion , Phantoms, Imaging
9.
Int J Comput Assist Radiol Surg ; 13(5): 741-748, 2018 May.
Article in English | MEDLINE | ID: mdl-29551011

ABSTRACT

PURPOSE: The data available to surgeons before, during and after surgery is steadily increasing in both quantity and diversity. When planning a patient's treatment, this large amount of information can be difficult to interpret. To aid in processing the information, new methods need to be found to present multimodal patient data, ideally combining textual, image, temporal and 3D data in a holistic and context-aware system. METHODS: We present an open-source framework which allows handling of patient data in a virtual reality (VR) environment. By using VR technology, the workspace available to the surgeon is maximized and 3D patient data is rendered in stereo, which increases depth perception. The framework organizes the data into workspaces and contains tools which allow users to control, manipulate and enhance the data. Due to the framework's modular design, it can easily be adapted and extended for various clinical applications. RESULTS: The framework was evaluated by clinical personnel (77 participants). The majority of the group stated that a complex surgical situation is easier to comprehend using the framework, and that it is very well suited for education. Furthermore, the application to various clinical scenarios, including the simulation of excitation propagation in the human atrium, demonstrated the framework's adaptability. As a feasibility study, the framework was used during the planning phase of the surgical removal of a large central carcinoma from a patient's liver. CONCLUSION: The clinical evaluation showed large potential and high acceptance for the VR environment in a medical context. The various applications confirmed that the framework is easily extended and can be used for real-time simulation as well as for the manipulation of complex anatomical structures.


Subjects
Carcinoma, Hepatocellular/surgery , Hepatectomy/education , Liver Neoplasms/surgery , Virtual Reality , Aged , Computer Simulation , Feasibility Studies , Female , General Surgery/education , Humans , Imaging, Three-Dimensional , Internship and Residency , Pilot Projects , Simulation Training/methods , Students, Medical , Surgical Procedures, Operative/education , User-Computer Interface