Results 1 - 20 of 35
1.
J Med Syst ; 48(1): 25, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38393660

ABSTRACT

Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring their presentation with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.


Subject(s)
Brain Neoplasms , Surgery, Computer-Assisted , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/surgery , Brain Neoplasms/pathology , Neuronavigation/methods , Surgery, Computer-Assisted/methods , Neurosurgical Procedures/methods , Ultrasonography , Magnetic Resonance Imaging/methods
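The patient-registration step named in this entry is typically a paired-point rigid alignment of tracked fiducials to the virtual patient model, with accuracy reported as a fiducial registration error (much like the 0.5 mm tracking accuracy above). As a minimal illustrative sketch (2D for brevity, closed-form Kabsch/Horn solution; function names are hypothetical and not from the paper):

```python
import math

def rigid_register_2d(src, dst):
    """Closed-form 2D rigid alignment (rotation + translation) of paired points."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Accumulate cross and dot products of the centred point pairs (2D Kabsch/Horn).
    s_cross = sum((sx - csx) * (dy - cdy) - (sy - csy) * (dx - cdx)
                  for (sx, sy), (dx, dy) in zip(src, dst))
    s_dot = sum((sx - csx) * (dx - cdx) + (sy - csy) * (dy - cdy)
                for (sx, sy), (dx, dy) in zip(src, dst))
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (cdx - (c * csx - s * csy), cdy - (s * csx + c * csy))

def apply_transform(theta, t, p):
    """Apply the estimated rotation and translation to a point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

def fre(src, dst, theta, t):
    """Root-mean-square fiducial registration error after alignment."""
    sq = sum((apply_transform(theta, t, p)[0] - q[0]) ** 2 +
             (apply_transform(theta, t, p)[1] - q[1]) ** 2
             for p, q in zip(src, dst))
    return math.sqrt(sq / len(src))
```

Registering a set of points against a rotated and translated copy of itself should recover the transform with an FRE near zero; real systems report the same error metric in millimetres.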
2.
Surg Endosc ; 36(11): 8568-8591, 2022 11.
Article in English | MEDLINE | ID: mdl-36171451

ABSTRACT

BACKGROUND: Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integration of intraoperative surgical data and their analysis with machine learning methods to leverage the potential of this data in analogy to Radiomics and Genomics. METHODS: We defined Surgomics as the entirety of surgomic features that are process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources like endoscopic videos, vital sign monitoring, medical devices and instruments and respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers for rating the features' clinical relevance and technical feasibility. RESULTS: In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance" for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION: Surgomics is a promising concept for the analysis of intraoperative data. 
Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.


Subject(s)
Machine Learning , Surgeons , Humans , Morbidity
3.
Laryngorhinootologie ; 101(10): 805-813, 2022 10.
Article in German | MEDLINE | ID: mdl-35724676

ABSTRACT

BACKGROUND: Endoscopic surgical procedures have been established as the gold standard in sinus surgery. Challenges in surgical training have been addressed by the use of virtual reality (VR) simulators, and a number of simulators have been developed to date. However, previous studies of their training effects investigated only subjects with prior medical training, or did not report the time course of training outcomes. METHODS: A computed tomography (CT) dataset was segmented manually. A three-dimensional polygonal surface model was generated and textured using original photographic material. Interaction with the virtual environment was performed via a haptic input device. To investigate training outcomes with the simulator, the duration of the intervention and the number of errors were recorded. Ten subjects completed a training consisting of five runs on ten consecutive days. RESULTS: Over the whole exercise period, four subjects reduced the duration of the intervention by more than 60%, and four subjects reduced the number of errors by more than 60%. Eight out of ten subjects improved with respect to both parameters. The median reduction was 46 seconds in procedure duration and 191 in the number of errors. Statistical analysis showed a positive correlation between the two parameters. CONCLUSION: Our data suggest that training on the FESS simulator considerably improves performance even in inexperienced subjects, both in terms of duration and accuracy of the procedure.


Subject(s)
Endoscopy , Virtual Reality , Clinical Competence , Computer Simulation , Endoscopy/methods , Humans
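The positive correlation between procedure duration and error count reported in this entry is a standard Pearson correlation over the per-subject measurements. A minimal sketch (pure Python; the data values in the usage below are invented for illustration, not taken from the study):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-run durations (s) and error counts for one trainee:
durations = [310, 290, 255, 240, 200]
errors = [220, 180, 150, 120, 90]
r = pearson_r(durations, errors)  # strongly positive: both improve together
```

A value of r near +1 here would mean that runs which got faster also accumulated fewer errors, which is the pattern the abstract describes.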
4.
Int J Comput Assist Radiol Surg ; 19(1): 69-82, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37620748

ABSTRACT

PURPOSE: For the modeling, execution, and control of complex, non-standardized intraoperative processes, a modeling language is needed that reflects the variability of interventions. As the established Business Process Model and Notation (BPMN) reaches its limits in terms of flexibility, the Case Management Model and Notation (CMMN) was considered as it addresses weakly structured processes. METHODS: To analyze the suitability of the modeling languages, BPMN and CMMN models of a Robot-Assisted Minimally Invasive Esophagectomy and Cochlea Implantation were derived and integrated into a situation recognition workflow. Test cases were used to contrast the differences and compare the advantages and disadvantages of the models concerning modeling, execution, and control. Furthermore, the impact on transferability was investigated. RESULTS: Compared to BPMN, CMMN allows flexibility for modeling intraoperative processes while remaining understandable. Although more effort and process knowledge are needed for execution and control within a situation recognition system, CMMN enables better transferability of the models and therefore the system. Concluding, CMMN should be chosen as a supplement to BPMN for flexible process parts that can only be covered insufficiently by BPMN, or otherwise as a replacement for the entire process. CONCLUSION: CMMN offers the flexibility for variable, weakly structured process parts, and is thus suitable for surgical interventions. A combination of both notations could allow optimal use of their advantages and support the transferability of the situation recognition system.


Subject(s)
Case Management , Humans , Workflow
5.
Sci Rep ; 14(1): 3713, 2024 02 14.
Article in English | MEDLINE | ID: mdl-38355678

ABSTRACT

Accurate localization of gliomas, the most common malignant primary brain cancer, and of their different sub-regions from multimodal magnetic resonance imaging (MRI) volumes is highly important for interventional procedures. Recently, deep learning models have been applied widely to assist automatic lesion segmentation tasks for neurosurgical interventions. However, these models are often complex and represented as "black box" models, which limits their applicability in clinical practice. This article introduces new hybrid vision Transformers and convolutional neural networks for accurate and robust glioma segmentation in brain MRI scans. Our proposed method, TransXAI, provides surgeon-understandable heatmaps to make the neural networks transparent. TransXAI employs a post-hoc explanation technique that provides visual interpretation after the brain tumor localization is made, without any network architecture modifications or accuracy tradeoffs. Our experimental findings showed that TransXAI achieves competitive performance in extracting both local and global contexts, in addition to generating explainable saliency maps that help understand the prediction of the deep network. Further, visualization maps are obtained to trace the flow of information in the internal layers of the encoder-decoder network and to understand the contribution of the MRI modalities to the final prediction. The explainability process could provide medical professionals with additional information about the tumor segmentation results and therefore aid in understanding how the deep learning model processes MRI data successfully. Thus, it fosters physicians' trust in such deep learning systems and supports their clinical application. To facilitate TransXAI model development and results reproducibility, we will share the source code and the pre-trained models after acceptance at https://github.com/razeineldin/TransXAI .


Subject(s)
Brain Neoplasms , Glioma , Humans , Reproducibility of Results , Image Processing, Computer-Assisted/methods , Glioma/diagnostic imaging , Glioma/pathology , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Brain/pathology
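Post-hoc explanation, as used by TransXAI above, means probing a trained model from the outside rather than changing its architecture. One common model-agnostic variant is occlusion sensitivity: mask each image patch, re-run the model, and record the score drop. The sketch below uses a toy intensity-based "model" purely for illustration; it is not the paper's network or method, and all names are hypothetical:

```python
def occlusion_saliency(image, model, patch=2, baseline=0.0):
    """Model-agnostic saliency map: per-patch score drop when the patch is masked."""
    h, w = len(image), len(image[0])
    base_score = model(image)
    sal = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = [row[:] for row in image]  # copy, then zero out one patch
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    masked[di][dj] = baseline
            drop = base_score - model(masked)  # large drop => patch mattered
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    sal[di][dj] = drop
    return sal

def toy_model(img):
    """Stand-in 'detector': total intensity in the upper-left 2x2 quadrant."""
    return sum(img[i][j] for i in range(2) for j in range(2))
```

Masking the quadrant the toy model looks at produces a large score drop there and zero elsewhere, which is exactly the heatmap behaviour a surgeon-facing saliency overlay relies on.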
6.
Stud Health Technol Inform ; 302: 149-150, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203635

ABSTRACT

This project aims to evaluate existing big data infrastructures for their applicability in the operating room to support medical staff with context-sensitive systems. Requirements for the system design were generated. The project compares different data mining technologies, interfaces, and software system infrastructures with a focus on their usefulness in the peri-operative setting. The lambda architecture was chosen for the proposed system design, which will provide data for both postoperative analysis and real-time support during surgery.


Subject(s)
Operating Rooms , Software , Humans , Big Data , Data Mining , Cognition
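The lambda architecture chosen in this entry combines a precomputed batch view (for postoperative analysis) with a speed layer of recent events (for real-time support), merged at query time. A minimal sketch of that merge, with hypothetical names and toy event counts rather than any real OR data model:

```python
class LambdaView:
    """Minimal lambda-architecture serving layer: batch view plus speed layer."""

    def __init__(self):
        self.batch_view = {}   # counts precomputed from the master data set
        self.speed_layer = {}  # incremental counts since the last batch run

    def rebuild_batch(self, master_events):
        """Batch layer: recompute the view from the full master data set."""
        self.batch_view = {}
        for key in master_events:
            self.batch_view[key] = self.batch_view.get(key, 0) + 1
        self.speed_layer = {}  # superseded by the fresh batch view

    def ingest_realtime(self, key):
        """Speed layer: absorb one new event with low latency."""
        self.speed_layer[key] = self.speed_layer.get(key, 0) + 1

    def query(self, key):
        """Serving layer: merge precomputed and real-time results."""
        return self.batch_view.get(key, 0) + self.speed_layer.get(key, 0)
```

The same query answers both use cases named in the abstract: after a batch rebuild it reflects historical data, and between rebuilds the speed layer keeps it current during surgery.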
7.
Pediatr Surg Int ; 28(4): 357-62, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22200733

ABSTRACT

PURPOSE: Surgical robots are designed to facilitate dissection and suturing, although objective data on their superiority are lacking. This study compares conventional laparoscopic Nissen fundoplication (CLNF) to robot-assisted Nissen fundoplication (RANF) using computer-based workflow analysis in an infant pig model. METHODS: CLNF and RANF were performed in 12 pigs. Surgical workflow was segmented into phases. Time required to perform specific actions was compared by t test. The quality of knot-tying was evaluated by a skill scoring system. Cardia yield pressure (CYP) was determined to test the efficacy of the fundoplications, and the incidence of complications was compared. RESULTS: There was no difference in average times to complete the various phases, despite faster robotic knot-tying (p = 0.001). Suturing quality was superior in CLNF (p = 0.02). CYP increased similarly in both groups. Workflow-interrupting hemorrhage and pneumothorax occurred more frequently during CLNF (p = 0.040 and 0.044, respectively), while more sutures broke during RANF (p = 0.001). CONCLUSION: The robot provides no clear temporal advantage compared to conventional laparoscopy for fundoplication, although suturing was faster in RANF. Fewer complications were noted using the robot. RANF and CLNF were equally efficient anti-reflux procedures. For robotic surgery to manifest its full potential, more complex operations may have to be evaluated.


Subject(s)
Fundoplication/methods , Laparoscopy , Robotics , Animals , Models, Animal , Sus scrofa
8.
J Digit Imaging ; 25(3): 352-8, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21858592

ABSTRACT

Surgeons have to deal with many devices from different vendors within the operating room during surgery. Independent communication standards are necessary for the system integration of these devices. For implantations, three new extensions of the Digital Imaging and Communications in Medicine (DICOM) standard make use of a common communication standard that may optimise one of the surgeon's presently very time-consuming daily tasks. The paper provides a brief description of these DICOM Supplements and gives recommendations to their application in practice based on workflows that are proposed to be covered by the new standard extension. Two of the workflows are described in detail and separated into phases that are supported by the new data structures. Examples for the application of the standard within these phases give an impression of the potential usage. Even if the presented workflows are from different domains, we identified a generic core that may benefit from the surgical DICOM Supplements. In some steps of the workflows, the surgical DICOM Supplements are able to replace or optimise conventional methods. Standardisation can only be a means for integration and interoperability. Thus, it can be used as the basis for new applications and system architectures. The influence on current applications and communication processes is limited. Additionally, the supplements provide the basis for further applications, such as the support of surgical navigation systems. Given the support of all involved stakeholders, it is possible to provide a benefit for surgeons and patients.


Subject(s)
Arthroplasty, Replacement, Hip , Dental Implants , Hip Prosthesis , Surgery, Computer-Assisted/methods , Surgery, Oral , Diagnostic Imaging , Humans , Patient Care Planning , Radiology Information Systems , Software
9.
Int J Comput Assist Radiol Surg ; 17(11): 2161-2171, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35593986

ABSTRACT

PURPOSE: Context awareness in the operating room (OR) is important to realize targeted assistance to support actors during surgery. A situation recognition system (SRS) is used to interpret intraoperative events and derive an intraoperative situation from these. To achieve a modular system architecture, it is desirable to de-couple the SRS from other system components. This leads to the need for an interface between such an SRS and context-aware systems (CAS). This work aims to provide an open standardized interface to enable loose coupling of the SRS with varying CAS to allow vendor-independent device orchestrations. METHODS: A requirements analysis investigated limiting factors that currently prevent the integration of CAS in today's ORs. These elicited requirements enabled the selection of a suitable base architecture. We examined how to specify this architecture within the constraints of an interoperability standard. The resulting middleware was integrated into a prototypic SRS and into our system for intraoperative support, the OR-Pad, as an exemplary CAS, to evaluate whether our solution can enable context-aware assistance during simulated orthopedic interventions. RESULTS: The emerging Service-oriented Device Connectivity (SDC) standard series was selected to specify and implement a middleware that provides the interpreted contextual information while the SRS and CAS remain loosely coupled. The results were verified within a proof-of-concept study using the OR-Pad demonstration scenario. The fulfillment of the CAS' requirements to act context-aware, conformity to the SDC standard series, and the effort for integrating the middleware into individual systems were evaluated. The semantically unambiguous encoding of contextual information depends on the further standardization process of the SDC nomenclature. The discussion of the validity of these results proved the applicability and transferability of the middleware.
CONCLUSION: The specified and implemented SDC-based middleware shows the feasibility of loose coupling an SRS with unknown CAS to realize context-aware assistance in the OR.


Subject(s)
Operating Rooms , Surgery, Computer-Assisted , Humans
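The loose coupling described in this entry means the SRS publishes interpreted situations without knowing which context-aware systems consume them. A generic publish/subscribe sketch of that decoupling (this is an illustrative pattern only, not the SDC API; class and topic names are hypothetical):

```python
class SituationBroker:
    """Generic publish/subscribe broker decoupling an SRS from context-aware systems."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """A context-aware system registers interest in a topic."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, situation):
        """The SRS emits an interpreted situation; all subscribers are notified."""
        for callback in self.subscribers.get(topic, []):
            callback(situation)
```

Because the SRS only sees the broker, a CAS can be added or replaced without touching the SRS, which is the vendor-independence goal the abstract states.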
10.
Stud Health Technol Inform ; 294: 809-810, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35612211

ABSTRACT

Physicians in interventional radiology are exposed to high physical stress. To avoid negative long-term effects resulting from unergonomic working conditions, we developed a system, based on the Azure Kinect camera, that gives feedback about unergonomic situations arising during an intervention, and demonstrated the overall feasibility of the approach.


Subject(s)
Ergonomics , Radiologists , Humans , Posture , Radiology, Interventional
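Camera-based posture feedback of the kind described in this entry typically reduces to joint-angle thresholds computed from 3D body keypoints. A minimal sketch (the 160° neck-flexion threshold and all names are assumptions for illustration, not values from the paper):

```python
import math

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) formed by 3D keypoints a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_ang = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp against float error
    return math.degrees(math.acos(cos_ang))

def neck_flexion_warning(head, neck, torso, threshold_deg=160.0):
    """Flag when the head-neck-torso angle drops below an (assumed) ergonomic threshold."""
    return joint_angle_deg(head, neck, torso) < threshold_deg
```

An upright posture (collinear keypoints, 180°) raises no warning, while a bent neck does; a real system would apply such checks continuously to the camera's body-tracking stream.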
11.
Int J Comput Assist Radiol Surg ; 17(9): 1673-1683, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35460019

ABSTRACT

PURPOSE: Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal restriction before applying these methods in clinical practice. METHODS: In this study, we propose a NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent. RESULTS: NeuroXAI has been applied to two applications of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN. CONCLUSION: Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI .


Subject(s)
Artificial Intelligence , Brain Neoplasms , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
12.
Biomed Tech (Berl) ; 66(4): 413-421, 2021 Aug 26.
Article in English | MEDLINE | ID: mdl-33655738

ABSTRACT

Uncontrolled movements of laparoscopic instruments can lead to inadvertent injury of adjacent structures. The risk becomes evident when the dissecting instrument is located outside the field of view of the laparoscopic camera. Technical solutions that ensure patient safety are therefore desirable. The present work evaluated the feasibility of automated binary classification of laparoscopic image data using convolutional neural networks (CNNs) to determine whether the dissecting instrument is located within the laparoscopic image section. A unique set of images was generated from six laparoscopic cholecystectomies in a surgical training environment to configure and train the CNN. By using a temporary version of the neural network, the annotation of the training image files could be automated and accelerated. A combination of oversampling and selective data augmentation was used to enlarge the fully labeled image data set and to prevent loss of accuracy due to imbalanced class volumes. Subsequently, the same approach was applied to the comprehensive, fully annotated Cholec80 database. The described process led to the generation of extensive and balanced training image data sets. The performance of the CNN-based binary classifiers was evaluated on separate test records from both databases. On our recorded data, an accuracy of 0.88 with regard to the safety-relevant classification was achieved; the subsequent evaluation on the Cholec80 data set yielded an accuracy of 0.84. These results demonstrate the feasibility of binary classification of laparoscopic image data for the detection of adverse events in a surgical training environment using a specifically configured CNN architecture.


Subject(s)
Cholecystectomy, Laparoscopic/adverse effects , Algorithms , Cholecystectomy, Laparoscopic/methods , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Patient Safety
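The oversampling step mentioned in this entry balances class volumes by duplicating minority-class samples before training. A minimal sketch of that idea (illustrative function names, not the paper's pipeline; augmentation is omitted):

```python
import random

def oversample_balance(samples, labels, seed=0):
    """Duplicate minority-class samples at random until all classes are equal in size."""
    rng = random.Random(seed)
    by_label = {}
    for sample, label in zip(samples, labels):
        by_label.setdefault(label, []).append(sample)
    target = max(len(group) for group in by_label.values())
    out_samples, out_labels = [], []
    for label, group in by_label.items():
        # Keep every original sample, then pad with random duplicates.
        picks = group + [rng.choice(group) for _ in range(target - len(group))]
        out_samples.extend(picks)
        out_labels.extend([label] * target)
    return out_samples, out_labels
```

In the actual study the duplicated images would additionally pass through selective augmentation (flips, crops, colour jitter) so the network does not see exact copies.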
13.
Int J Comput Assist Radiol Surg ; 15(6): 909-920, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32372386

ABSTRACT

PURPOSE: Gliomas are the most common and aggressive type of brain tumor due to their infiltrative nature and rapid progression. Distinguishing tumor boundaries from healthy cells is still a challenging task in the clinical routine. The fluid-attenuated inversion recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, DeepSeg, for fully automated detection and segmentation of brain lesions using FLAIR MRI data. METHODS: DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding and decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is passed to the decoder part to obtain the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as residual neural network (ResNet), dense convolutional network (DenseNet), and NASNet have been utilized in this study. RESULTS: The proposed deep learning architectures have been successfully tested and evaluated online on the MRI datasets of the brain tumor segmentation (BraTS 2019) challenge, including 336 training cases and 125 validation cases. The Dice and Hausdorff distance scores of the obtained segmentation results are about 0.81-0.84 and 9.8-19.7, respectively. CONCLUSION: This study showed the feasibility and comparative performance of applying different deep learning models in a new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. The proposed DeepSeg is open source and freely available at https://github.com/razeineldin/DeepSeg/.


Subject(s)
Brain Neoplasms/diagnostic imaging , Deep Learning , Glioma/diagnostic imaging , Neural Networks, Computer , Brain Neoplasms/pathology , Disease Progression , Glioma/pathology , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
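The Dice scores of 0.81-0.84 reported for DeepSeg use the standard Dice similarity coefficient, the ratio of twice the overlap to the total size of the predicted and ground-truth masks. A minimal sketch over flat binary masks (illustrative code, not the paper's evaluation script):

```python
def dice_score(pred, truth, eps=1e-8):
    """Dice similarity coefficient for binary masks given as flat 0/1 sequences.

    eps keeps the empty-vs-empty case defined (it evaluates to ~1).
    """
    inter = sum(p * t for p, t in zip(pred, truth))
    return (2.0 * inter + eps) / (sum(pred) + sum(truth) + eps)
```

Identical masks score 1.0, disjoint masks score ~0, and a half-overlapping pair scores 0.5; BraTS additionally reports the Hausdorff distance, which measures the worst-case boundary error rather than overlap.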
14.
J Am Med Inform Assoc ; 16(1): 72-80, 2009.
Article in English | MEDLINE | ID: mdl-18952942

ABSTRACT

OBJECTIVE: Surgical Process Models (SPMs) are models of surgical interventions. The objectives of this study were to validate acquisition methods for Surgical Process Models and to assess the performance of different observer populations. DESIGN: The study examined 180 SPMs of simulated Functional Endoscopic Sinus Surgeries (FESS), recorded with observation software. In total, about 150,000 single measurements were analyzed. MEASUREMENTS: Validation metrics were used to assess the granularity, content accuracy, and temporal accuracy of the structures of SPMs. RESULTS: Differences between live observations and video observations were not statistically significant. Observations performed by subjects with medical backgrounds gave better results than observations performed by subjects with technical backgrounds. Granularity was reconstructed correctly in 90% of cases, content in 91%, and the mean temporal accuracy was 1.8 s. CONCLUSION: The study shows the validity of both video and live observations for modeling Surgical Process Models. For routine use, the authors recommend live observations due to their flexibility and effectiveness. If high precision is needed, or the SPM parameters are altered during the study, video observations are the preferable approach.


Subject(s)
Models, Anatomic , Software , Task Performance and Analysis , Humans , Software Design , User-Computer Interface , Video Recording
15.
J Laparoendosc Adv Surg Tech A ; 19 Suppl 1: S117-22, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19021467

ABSTRACT

BACKGROUND: Many fields use workflow analysis to assess and improve performance of complex tasks. In pediatric endosurgery, workflow analysis may help optimize operative planning and motor skills by breaking down the procedure into particular phases, evaluating these steps individually, and supplying feedback to the surgeon. OBJECTIVE: To develop a module for computer-based surgical workflow analysis of laparoscopic Nissen fundoplication (LNF) and to evaluate its applicability in an infant pig model. METHODS: LNF was performed in 12 pigs (weight, 7-10 kg) by a single surgeon. Based on synchronized intra- and extracorporeal movie recordings, the surgical workflow was segmented into temporal operative phases (preparation, dissection, reconstruction, and conclusion). During each stage, all actions were recorded on a virtual timeline using a customized workflow editor. Specific tasks, such as knot-tying, were evaluated in detail. The time necessary to perform these actions was compared throughout the study. RESULTS: While the time required for preparation decreased by more than 70%, from 4577 to 1379 seconds, and the dissection phase decreased from 2359 to 399 seconds (pig 1 and pig 12, respectively), the other two phases remained relatively stable. Mean time to perform the entire suture and a 5-throw knot remained constant as well. CONCLUSION: Our workflow analysis model allows the quantitative evaluation of dynamic actions related to LNF. These data can be used to define average benchmark criteria for the procedures that comprise this operation, thereby permitting task-oriented refinement of surgical technique as well as monitoring of the efficacy of training. Although preoperative preparation time decreased substantially and dissection became faster, the time required for the reconstruction and conclusion phases remained relatively constant for a surgeon with moderate experience. Likewise, knot-tying did not accelerate in this setting.


Subject(s)
Fundoplication/methods , Laparoscopy/methods , Workflow , Animals , Computer Simulation , Feedback , General Surgery/education , Swine
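The per-phase timing analysis in this entry amounts to accumulating durations from a timeline of timestamped phase-change events. A minimal sketch (hypothetical function and phase names; the numbers in the usage are invented, not the study's measurements):

```python
def phase_durations(events):
    """Total seconds per phase from (start_time_s, phase) events.

    Each phase runs until the next event; the final event only terminates
    the recording and accrues no duration of its own.
    """
    durations = {}
    for (t0, phase), (t1, _next_phase) in zip(events, events[1:]):
        durations[phase] = durations.get(phase, 0) + (t1 - t0)
    return durations
```

Summing durations per phase across animals is exactly what allows comparisons such as "preparation decreased from 4577 to 1379 seconds" between the first and last procedure.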
16.
Article in English | MEDLINE | ID: mdl-19929296

ABSTRACT

The presented approach introduces a method for estimating the potential benefit of a surgical assist system prior to its actual development or clinical use. The central research question is: What minimal requirements must a future system meet so that its use would be more advantageous than a conventional or already existent method or system, and how can these requirements be obtained from routine clinical data? Forty-three cases of lumbar discectomies were analyzed with regard to activities related to bone ablation in order to predict the temporal requirements for an alternative strategy of using a surgical assist system for bone ablation. The study recorded and analyzed surgical process models (SPM), which are progression models with detailed and exact-to-the-second representations of surgical work steps, as a sensible means for the detailed quantification of the temporal needs of the system. The presented methods can be used for a systematic analysis of such requirements. Implementation of these methods will prove very useful in the future from a medical, technical, and administrative point of view. Manufacturers can use this analytical procedure to derive parameters for their systems that indicate success criteria. Additionally, hospitals can decide, before making actual capital expenditure decisions, if the system of interest is superior to the conventional strategy and therefore worth the investment.


Subject(s)
Diskectomy/methods , Intervertebral Disc Displacement/surgery , Surgery, Computer-Assisted/methods , Ablation Techniques/methods , Adult , Female , Humans , Intervertebral Disc/surgery , Lumbar Vertebrae/surgery , Male , Middle Aged , Needs Assessment
17.
Stud Health Technol Inform ; 132: 62-7, 2008.
Article in English | MEDLINE | ID: mdl-18391258

ABSTRACT

Very few of the medical virtual reality systems that are developed are applied in real surgical scenarios. One reason for this is that the system solutions resulting from research projects often address a single research question and are not embedded in an overall design. This paper presents a DICOM-based approach to standardizing the data structures, i.e., surface meshes, that are required for supporting surgical workflows with virtual reality applications.


Subject(s)
Computer Simulation/standards , Surgical Procedures, Operative/education , User-Computer Interface , Germany , Humans , Image Processing, Computer-Assisted/standards , Radiographic Image Enhancement
18.
Surg Laparosc Endosc Percutan Tech ; 17(5): 402-6, 2007 Oct.
Article in English | MEDLINE | ID: mdl-18049401

ABSTRACT

BACKGROUND: The use of a telemanipulator requires special training, and surgical performance is associated with a learning curve. The aim of this study was to demonstrate the potential value of haptic-visual over visual-only passive training in telemanipulator-assisted surgery. METHODS: Two telemanipulator consoles (da Vinci, Intuitive Surgical) were linked through an Application Programmer's Interface, allowing the applicant at the training console to register the position and passively follow the motions of the instructor's master telemanipulators (MTMs) at the master console (Haptic-Visual Learning group, HVL). The applicant could not actively interfere with the MTM movements. Both the trainee and the instructor shared the same 3-dimensional vision. Alternatively, subjects received only standard visual training without touching the MTMs (Visual-Only Learning group, VL). A standardized demonstration of the tasks and the system was given for both groups. Participants (n=20) without previous experience with telemanipulation performed a set of various tasks in a randomized order. Study end points were the time and accuracy required to perform the different tasks. RESULTS: The first task, moving items to appropriate locations, showed differences in the time to perform the task [mean: 4:06 min (HVL) vs. 5:16 min (VL), P=0.2], and accuracy differed among groups [mean number of errors: 1.7 (VL) vs. 1.3 (HVL), P=0.38]. With more challenging tasks [cutting out round figures (cut) and performing double dot suture lines (sti)], the number of errors was lower in the HVL group [mean: 1.1 errors (cut), P=0.05, and 1.8 errors (sti), P=0.26] compared with the VL group [mean: 1.8 errors (cut) and 2.3 errors (sti)]. In addition, the time to perform the tasks decreased in the HVL group, with means of 5:42 min (cut) (P=0.26) and 9:41 min (sti) (P=0.36), compared with the VL group, with means of 7:09 min (cut) and 11:43 min (sti).
CONCLUSIONS: This study demonstrated the impact of haptic-visual passive learning in telemanipulator-assisted surgery which may alter the training for telemanipulator-assisted endoscopic procedures.


Subject(s)
General Surgery/education , Man-Machine Systems , Robotics , Surgical Procedures, Operative/methods , Telemetry/instrumentation , Educational Measurement , Humans , Learning , Models, Anatomic , Models, Structural , Reproducibility of Results , Surgical Procedures, Operative/education , Video Recording
19.
Stud Health Technol Inform ; 125: 58-63, 2007.
Article in English | MEDLINE | ID: mdl-17377234

ABSTRACT

Surgical simulators are normally developed in a cycle of continuous refinement. This leads to high costs in simulator design and, as a result, to a very limited number of simulators being used in clinical training scenarios. We propose using Surgical Workflow Analysis for a goal-oriented specification of surgical simulators. Based on Surgical Workflows, the required interaction scenarios and properties of a simulator can be derived easily. It is also possible to compare an existing simulator with the real workflow to determine whether it behaves realistically. We are currently using this method, with good success, for the design of a new simulator for transnasal neurosurgery.


Subject(s)
Computer Simulation/standards , Surgical Procedures, Operative , Germany , Humans , Nasal Cavity , Neurosurgical Procedures
20.
Stud Health Technol Inform ; 235: 33-37, 2017.
Article in English | MEDLINE | ID: mdl-28423750

ABSTRACT

Clinical reading centers provide expertise for consistent, centralized analysis of medical data gathered in a distributed context. Accordingly, appropriate software solutions are required for the involved communication and data management processes. In this work, an analysis of general requirements and essential architectural and software design considerations for reading center information systems is provided. The identified patterns have been applied to the implementation of the reading center platform which is currently operated at the Center of Ophthalmology of the University Hospital of Tübingen.


Subject(s)
Biomedical Research , Medical Records Systems, Computerized , Software Design , Software , Humans , Medical Informatics Applications , Medical Record Linkage