Results 1 - 20 of 80
1.
Surg Endosc ; 38(7): 3917-3928, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38834723

ABSTRACT

BACKGROUND: Tissue handling is a crucial skill for surgeons and is challenging to learn. The aim of this study was to develop laparoscopic instruments with different integrated tactile vibration feedback by varying different tactile modalities and to assess their effect on tissue handling skills. METHODS: Standard laparoscopic instruments were equipped with a vibration effector, which was controlled by a microcomputer attached to a force sensor platform. One of three different vibration feedbacks (F1: double vibration > 2 N; F2: increasing vibration relative to force; F3: one vibration > 1.5 N and double vibration > 2 N) was applied to the instruments. In this multicenter crossover trial, surgical novices and expert surgeons performed two laparoscopic tasks (peg transfer; laparoscopic suture and knot), each with all three vibration feedback modalities and once without any feedback, in a randomized order. The primary endpoint was force exertion. RESULTS: A total of 57 subjects (15 surgeons, 42 surgical novices) were included in the trial. In the peg transfer task, there were no differences between the tactile feedback modalities in terms of force application. However, in subgroup analysis, the use of F2 resulted in a significantly lower mean force application (p = 0.02) in the student group. In the laparoscopic suture and knot task, all participants exerted significantly lower mean and peak forces using F2 (p < 0.01). These findings remained significant in subgroup analyses of both the student and surgeon groups. The condition without tactile feedback led to the highest mean and peak force exertion compared to the three other feedback modalities. CONCLUSION: Continuous tactile vibration feedback decreases the mean and peak force applied during laparoscopic training tasks. This effect is more pronounced in demanding tasks such as laparoscopic suturing and knot tying and might be more beneficial for students.
Laparoscopic tasks without feedback lead to increased force application.
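The three feedback modalities described in the abstract can be sketched as a force-to-pulse mapping. This is a hypothetical illustration only: the thresholds (1.5 N, 2 N) come from the abstract, while the function name, pulse counts, and the amplitude scaling of F2 are assumptions.

```python
def vibration_feedback(force_n, modality):
    """Map a force-sensor reading (newtons) to (pulse_count, amplitude).

    Sketch of the three modalities; thresholds follow the abstract,
    the amplitude scaling for F2 is an assumption.
    """
    if modality == "F1":  # double vibration above 2 N
        return (2, 1.0) if force_n > 2.0 else (0, 0.0)
    if modality == "F2":  # continuous vibration increasing with force
        return (1, min(force_n / 5.0, 1.0)) if force_n > 0 else (0, 0.0)
    if modality == "F3":  # single vibration > 1.5 N, double vibration > 2 N
        if force_n > 2.0:
            return (2, 1.0)
        if force_n > 1.5:
            return (1, 1.0)
        return (0, 0.0)
    raise ValueError(f"unknown modality: {modality}")
```

Under this sketch, F2 is the only modality that signals continuously at low forces, which is consistent with the continuous-feedback interpretation of the study's main finding.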


Subject(s)
Clinical Competence; Cross-Over Studies; Laparoscopy; Touch; Vibration; Humans; Laparoscopy/education; Female; Male; Suture Techniques/education; Adult; Feedback, Sensory
2.
Surg Endosc ; 38(5): 2900-2910, 2024 May.
Article in English | MEDLINE | ID: mdl-38632120

ABSTRACT

BACKGROUND: Virtual reality is a frequently chosen method for learning the basics of robotic surgery. However, it is unclear whether tissue handling is adequately trained in VR compared to training on a real robotic system. METHODS: In this randomized controlled trial, participants were split into two groups for "Fundamentals of Robotic Surgery (FRS)" training on either a DaVinci VR simulator (VR group) or a DaVinci robotic system (Robot group). All participants completed four tasks on the DaVinci robotic system before training (Baseline test), after reaching proficiency in three FRS tasks (Midterm test), and after reaching proficiency in all FRS tasks (Final test). Primary endpoints were the forces applied across tests. RESULTS: This trial included 87 robotic novices, of whom 43 and 44 participants received FRS training in the VR group and Robot group, respectively. The Baseline test showed no significant differences in force application between the groups, indicating sufficient randomization. In the Midterm and Final tests, force application did not differ between groups. Both groups displayed sufficient learning curves with significant improvement in force application. However, the Robot group needed significantly fewer repetitions to reach proficiency in the three FRS tasks Ring tower (Robot: 2.48 vs. VR: 5.45; p < 0.001), Knot Tying (Robot: 5.34 vs. VR: 8.13; p = 0.006), and Vessel Energy Dissection (Robot: 2 vs. VR: 2.38; p = 0.001). CONCLUSION: Robotic tissue handling skills improve significantly and comparably after both VR training and training on a real robotic system, but training on a VR simulator might be less efficient.


Subject(s)
Clinical Competence; Robotic Surgical Procedures; Virtual Reality; Humans; Robotic Surgical Procedures/education; Female; Male; Prospective Studies; Adult; Simulation Training/methods; Learning Curve; Young Adult
3.
Surg Endosc ; 38(5): 2483-2496, 2024 May.
Article in English | MEDLINE | ID: mdl-38456945

ABSTRACT

OBJECTIVE: Evaluation of the benefits of a virtual reality (VR) environment with a head-mounted display (HMD) for decision-making in liver surgery. BACKGROUND: Training in liver surgery involves appraising radiologic images and considering the patient's clinical information. Accurate assessment of 2D tomography images is complex and requires considerable experience, and often the images are divorced from the clinical information. We present a comprehensive and interactive tool for visualizing operation planning data in a VR environment using a head-mounted display and compare it to 3D visualization and 2D tomography. METHODS: Ninety medical students were randomized into three groups (1:1:1 ratio). All participants analyzed three liver surgery patient cases of increasing difficulty. The cases were analyzed using 2D tomography data (group "2D"), a 3D visualization on a 2D display (group "3D"), or within a VR environment (group "VR"). The VR environment was displayed using the "Oculus Rift™" HMD. Participants answered 11 questions on anatomy, tumor involvement, and surgical decision-making, and 18 evaluative questions (Likert scale). RESULTS: The sum of correct answers was significantly higher in the 3D (7.1 ± 1.4, p < 0.001) and VR (7.1 ± 1.4, p < 0.001) groups than in the 2D group (5.4 ± 1.4), with no difference between 3D and VR (p = 0.987). Times to answer were significantly shorter in the 3D (6:44 ± 02:22 min, p < 0.001) and VR (6:24 ± 02:43 min, p < 0.001) groups than in the 2D group (09:13 ± 03:10 min), with no difference between 3D and VR (p = 0.419). In the questionnaire, the VR environment was rated most useful for identifying anatomic anomalies, risk and target structures, and for transferring anatomical and pathological information to the intraoperative situation.
CONCLUSIONS: A VR environment with 3D visualization using an HMD is a useful surgical training tool for accurately and quickly determining liver anatomy and tumor involvement in surgery.


Subject(s)
Imaging, Three-Dimensional; Tomography, X-Ray Computed; Virtual Reality; Humans; Tomography, X-Ray Computed/methods; Female; Male; Hepatectomy/methods; Hepatectomy/education; Adult; Young Adult; Clinical Decision-Making; User-Computer Interface; Liver Neoplasms/surgery; Liver Neoplasms/diagnostic imaging
4.
Surg Endosc ; 37(4): 2439-2452, 2023 04.
Article in English | MEDLINE | ID: mdl-36303044

ABSTRACT

BACKGROUND: Force feedback is a critical element for performing and learning surgical suturing skills. It is impoverished or absent altogether in non-open surgery (i.e., in simulation, laparoscopic, and robotic-assisted surgery), but it can be augmented using different modalities. This rapid systematic review examines how the modality of delivering force feedback influences the performance and learning of surgical suturing skills. METHODS: An electronic search was performed on the PubMed/MEDLINE, Web of Science, and Embase databases to identify relevant articles. The results were synthesized using vote counting based on direction of effect. RESULTS: A total of nine studies of medium-to-low quality were included. The synthesis suggests that the visual modality could be more beneficial than the tactile and auditory modalities in improving force control, and that the auditory and tactile modalities could be more beneficial than the visual modality in improving suturing performance. Results are mixed and unclear with regard to how modality affects the reduction of force magnitude, and unclear when unimodal was compared to multimodal feedback. The studies have a generally low level of evidence. CONCLUSION: The low number of studies, with low methodological quality and a low level of evidence (most were proofs of concept), prevents us from drawing any meaningful conclusion; as such, it is currently unknown whether and how force feedback modality influences surgical suturing skill. Speculatively, the visual modality may be more beneficial for improving the control of exerted force, while the auditory and tactile modalities may be more effective in improving overall suturing performance.
We consider the issue of feedback modality to be highly relevant in this field, and we encourage future research to conduct further investigation integrating principles from learning psychology and neuroscience: identify feedback goal, context, and skill level and then design and compare feedback modalities accordingly.
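Vote counting based on direction of effect, the synthesis method named in the abstract, amounts to tallying per-study effect directions. The sketch below is an illustrative assumption (function name and labels invented), not the review's actual procedure.

```python
from collections import Counter

def vote_count(directions):
    """Synthesize study results by direction of effect.

    directions: one label per study, '+' (favors the intervention),
    '-' (favors the comparator), '0' (no effect).
    Returns the dominant direction, or 'unclear' on a tie.
    """
    tally = Counter(directions)
    if tally["+"] > tally["-"]:
        return "+"
    if tally["-"] > tally["+"]:
        return "-"
    return "unclear"
```

Note that the method weighs every study equally regardless of size or quality, which is one reason the review's conclusions remain guarded.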


Subject(s)
Internship and Residency; Robotic Surgical Procedures; Humans; Feedback; Learning; Motivation
5.
Surg Endosc ; 37(11): 8690-8707, 2023 11.
Article in English | MEDLINE | ID: mdl-37516693

ABSTRACT

BACKGROUND: Surgery generates a vast amount of data from each procedure. Video data in particular provide significant value for surgical research, clinical outcome assessment, quality control, and education. The data lifecycle is influenced by various factors, including data structure, acquisition, storage, and sharing; data use and exploration; and finally data governance, which encompasses all ethical and legal regulations associated with the data. There is a universal need among stakeholders in surgical data science to establish standardized frameworks that address all aspects of this lifecycle to ensure data quality and fitness for purpose. METHODS: Working groups were formed among 48 representatives from academia and industry, including clinicians, computer scientists, and industry representatives. The working groups focused on Data Use, Data Structure, Data Exploration, and Data Governance. After working group and panel discussions, a modified Delphi process was conducted. RESULTS: The resulting Delphi consensus provides conceptualized and structured recommendations for each domain related to surgical video data. We identified the key stakeholders within the data lifecycle and formulated comprehensive, easily understandable, and widely applicable guidelines for data utilization. Standardization of data structure should encompass format and quality, data sources, documentation, and metadata, and account for biases within the data. To foster scientific data exploration, datasets should reflect diversity and remain adaptable to future applications. Data governance must be transparent to all stakeholders, addressing the legal and ethical considerations surrounding the data. CONCLUSION: This consensus presents essential recommendations on the generation of standardized and diverse surgical video databanks, accounting for the multiple stakeholders involved in data generation and use throughout the lifecycle.
Following the SAGES annotation framework, we lay the foundation for standardization of data use, structure, and exploration. A detailed exploration of requirements for adequate data governance will follow.


Subject(s)
Artificial Intelligence; Quality Improvement; Humans; Consensus; Data Collection
6.
Surg Endosc ; 37(11): 8577-8593, 2023 11.
Article in English | MEDLINE | ID: mdl-37833509

ABSTRACT

BACKGROUND: With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial but still a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS: To establish a process for the development of surgomic features, ten video-based features related to bleeding, a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset consisting of 22 videos from two centers. RESULTS: In total, 14,004 frames were tag-annotated. A mean F1-score of 0.75 ± 0.16 was achieved across all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments, with a significant difference in the McNemar test comparing the correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION: We presented ten surgomic features relevant for bleeding events in esophageal surgery, automatically extracted from surgical video using ML.
AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.
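The contrast between equidistant sampling and active learning can be sketched with two frame-selection strategies. Entropy of the model's predicted class distribution is used here as the acquisition function; this is one common choice and an assumption, not necessarily the one used in the study.

```python
import math

def equidistant_frames(n_frames, k):
    """EQS baseline: k frame indices spread evenly over the video."""
    return [int(i * n_frames / k) for i in range(k)]

def entropy(probs):
    """Shannon entropy of a predicted class distribution (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertain_frames(frame_probs, k):
    """AL selection: the k frames the model is least certain about,
    i.e. those with the highest predictive entropy."""
    ranked = sorted(range(len(frame_probs)),
                    key=lambda i: entropy(frame_probs[i]),
                    reverse=True)
    return sorted(ranked[:k])
```

Because rare instruments tend to produce uncertain predictions, uncertainty-driven selection naturally surfaces more frames of the less common classes, in line with the 1512 vs. 607 frames reported above.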


Subject(s)
Esophagectomy; Robotics; Humans; Bayes Theorem; Esophagectomy/methods; Machine Learning; Minimally Invasive Surgical Procedures/methods; Prospective Studies
7.
Surg Endosc ; 36(1): 126-134, 2022 01.
Article in English | MEDLINE | ID: mdl-33475848

ABSTRACT

BACKGROUND: Virtual reality (VR) with head-mounted displays (HMD) may improve medical training and patient care by improving the display and integration of different types of information. The aim of this study was to evaluate, among different healthcare professions, the potential of an interactive and immersive VR environment for liver surgery that integrates all relevant patient data from different sources needed for planning and training of procedures. METHODS: 3D models of the liver, other abdominal organs, vessels, and tumors of a sample patient with multiple hepatic masses were created. The 3D models, clinical patient data, and other imaging data were visualized in a dedicated VR environment with an HMD (IMHOTEP). Users could interact with the data using head movements and a computer mouse. Structures of interest could be selected and viewed individually or grouped. IMHOTEP was evaluated in the context of preoperative planning and training of liver surgery and for its potential for broader surgical application. A standardized questionnaire was voluntarily answered by four groups (students, nurses, resident surgeons, and attending surgeons). RESULTS: In the evaluation by 158 participants (57 medical students, 35 resident surgeons, 13 attending surgeons, and 53 nurses), 89.9% found the VR system agreeable to work with. Participants generally agreed that complex cases in particular could be assessed better (94.3%) and faster (84.8%) with VR than with traditional 2D display methods. The highest potential was seen in student training (87.3%), resident training (84.6%), and clinical routine use (80.3%); the least potential was seen in nursing training (54.8%). CONCLUSIONS: The present study demonstrates that using VR with an HMD to integrate all available patient data for the preoperative planning of hepatic resections is a viable concept. VR with an HMD promises great potential to improve medical training and operation planning and thereby to improve patient care.


Subject(s)
Surgeons; Virtual Reality; Humans; Liver; User-Computer Interface
8.
Surg Endosc ; 36(6): 4359-4368, 2022 06.
Article in English | MEDLINE | ID: mdl-34782961

ABSTRACT

BACKGROUND: Coffee can increase vigilance and performance, especially during sleep deprivation. The hypothetical downside of caffeine in the surgical field is its potential interaction with the ergonomics of movement and the central nervous system. The objective of this trial was to investigate the influence of caffeine on laparoscopic performance. METHODS: Fifty laparoscopic novices participated in this prospective, randomized, blinded crossover trial and were trained in a modified FLS curriculum until reaching predefined proficiency. Subsequently, all participants performed four laparoscopic tasks twice, once after consumption of a placebo and once after a caffeinated (200 mg) beverage. Comparative analysis was performed between the cohorts. Primary endpoint analysis included task time, task errors, OSATS score, and a performance analysis with an instrument motion analysis (IMA) system. RESULTS: Fifty participants completed the study; 68% of participants drank coffee daily. Time to completion was comparable between the caffeine and placebo cohorts for each task: peg transfer (119 s vs. 121 s; p = 0.73), precise cutting (157 s vs. 163 s; p = 0.74), gallbladder resection (190 s vs. 173 s; p = 0.6), and surgical knot (171 s vs. 189 s; p = 0.68). Instrument motion analysis showed no significant differences between the caffeine and placebo groups in any parameter: instrument volume, path length, idle, velocity, acceleration, and instrument out of view. Additionally, OSATS scores did not differ between groups, regardless of task. Major errors occurred similarly in both groups, except for one error criterion during the circle cutting task, which occurred significantly more often in the caffeine group (34% vs. 16%, p < 0.05). CONCLUSION: Objective IMA and performance scores of laparoscopic skills revealed that caffeine consumption neither enhances nor impairs the overall laparoscopic performance of surgical novices.
The finding on major errors is not conclusive, but their occurrence could in part be negatively influenced by caffeine intake.


Subject(s)
Caffeine; Laparoscopy; Clinical Competence; Coffee; Cross-Over Studies; Humans; Laparoscopy/education; Prospective Studies
9.
Surg Endosc ; 36(11): 8568-8591, 2022 11.
Article in English | MEDLINE | ID: mdl-36171451

ABSTRACT

BACKGROUND: Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integration of intraoperative surgical data and their analysis with machine learning methods to leverage the potential of this data in analogy to Radiomics and Genomics. METHODS: We defined Surgomics as the entirety of surgomic features that are process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources like endoscopic videos, vital sign monitoring, medical devices and instruments and respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers for rating the features' clinical relevance and technical feasibility. RESULTS: In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance" for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION: Surgomics is a promising concept for the analysis of intraoperative data. 
Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.


Subject(s)
Machine Learning; Surgeons; Humans; Morbidity
10.
Sensors (Basel) ; 22(4)2022 Feb 11.
Article in English | MEDLINE | ID: mdl-35214306

ABSTRACT

In the early 2020s, the coronavirus pandemic brought the notion of remotely connected care to the general population across the globe. Oftentimes, the timely provisioning of access to and the implementation of affordable care are drivers behind tele-healthcare initiatives. Tele-healthcare has already garnered significant momentum in research and implementations in the years preceding the worldwide challenge of 2020, supported by the emerging capabilities of communication networks. The Tactile Internet (TI) with human-in-the-loop is one of those developments, leading to the democratization of skills and expertise that will significantly impact the long-term developments of the provisioning of care. However, significant challenges remain that require today's communication networks to adapt to support the ultra-low latency required. The resulting latency challenge necessitates trans-disciplinary research efforts combining psychophysiological as well as technological solutions to achieve one millisecond and below round-trip times. The objective of this paper is to provide an overview of the benefits enabled by solving this network latency reduction challenge by employing state-of-the-art Time-Sensitive Networking (TSN) devices in a testbed, realizing the service differentiation required for the multi-modal human-machine interface. With completely new types of services and use cases resulting from the TI, we describe the potential impacts on remote surgery and remote rehabilitation as examples, with a focus on the future of tele-healthcare in rural settings.


Subject(s)
Coronavirus Infections; Telemedicine; Delivery of Health Care; Humans; Internet; Pandemics; Telemedicine/methods; Touch
11.
Ann Surg ; 273(4): 684-693, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33201088

ABSTRACT

OBJECTIVE: To provide an overview of ML models and data streams utilized for automated surgical phase recognition. BACKGROUND: Phase recognition identifies the different steps and phases of an operation. ML is an evolving technology that allows the analysis and interpretation of huge data sets. Automation of phase recognition based on data inputs is essential for optimization of workflow, surgical training, intraoperative assistance, patient safety, and efficiency. METHODS: A systematic review was performed according to the Cochrane recommendations and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. PubMed, Web of Science, IEEE Xplore, Google Scholar, and CiteSeerX were searched. Literature describing phase recognition based on ML models and the capture of intraoperative signals during general surgery procedures was included. RESULTS: A total of 2254 titles/abstracts were screened, and 35 full texts were included. The most commonly used ML models were Hidden Markov Models and Artificial Neural Networks, with a trend towards higher complexity over time. The most frequently used data types were feature learning from surgical videos and manual annotation of instrument use. Laparoscopic cholecystectomy was the most commonly studied procedure, often with accuracy rates over 90%, though there was no consistent standardization of defined phases. CONCLUSIONS: ML for surgical phase recognition can be performed with high accuracy, depending on the model, data type, and complexity of the surgery. Different intraoperative data inputs such as video and instrument type can successfully be used. Most ML models still require significant amounts of manual expert annotation for training. ML models may drive surgical workflow towards standardization, efficiency, and objectivity to improve patient outcomes in the future. PROSPERO REGISTRATION: CRD42018108907.
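As an illustration of the Hidden Markov Models the review found most common, phase recognition from a stream of instrument observations can be sketched with a minimal Viterbi decoder. The phases, probabilities, and observation symbols below are invented for demonstration and are not taken from any reviewed study.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden phase sequence for an observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        V.append({s: max(((prev[p][0] * trans_p[p][s] * emit_p[s][o], p)
                          for p in states), key=lambda t: t[0])
                  for s in states})
    # Backtrack from the most probable final state.
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for layer in reversed(V[1:]):
        path.append(layer[path[-1]][1])
    return path[::-1]

# Toy two-phase model: which instrument is visible hints at the phase.
phases = ["preparation", "dissection"]
start_p = {"preparation": 0.9, "dissection": 0.1}
trans_p = {"preparation": {"preparation": 0.7, "dissection": 0.3},
           "dissection": {"preparation": 0.1, "dissection": 0.9}}
emit_p = {"preparation": {"grasper": 0.8, "hook": 0.2},
          "dissection": {"grasper": 0.2, "hook": 0.8}}

decoded = viterbi(["grasper", "grasper", "hook", "hook"],
                  phases, start_p, trans_p, emit_p)
```

The sticky transition probabilities smooth over noisy per-frame predictions, which is the main appeal of HMMs for workflow analysis.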


Subject(s)
Algorithms; Cholecystectomy, Laparoscopic/methods; Machine Learning; Surgery, Computer-Assisted/methods; Humans; Workflow
12.
BMC Med Imaging ; 21(1): 119, 2021 08 05.
Article in English | MEDLINE | ID: mdl-34353290

ABSTRACT

BACKGROUND: Object detection and image segmentation of regions of interest provide the foundation for numerous pipelines across disciplines. Robust and accurate computer vision methods are needed to properly solve image-based tasks. Multiple algorithms have been developed solely to detect edges in images. Constrained to the problem of creating a thin, one-pixel-wide edge from a predicted object boundary, we require an algorithm that removes pixels while preserving the topology. Skeletonization algorithms transform an object boundary into such an edge, replacing positional uncertainty with exact positions. METHODS: To extract edges from boundaries generated by different algorithms, we present a computational pipeline that relies on: a novel skeletonization algorithm, a non-exhaustive discrete parameter search to find the optimal parameter combination of a specific post-processing pipeline, and an extensive evaluation using three data sets from the medical and natural image domains (kidney boundaries, NYU-Depth V2, BSDS 500). The skeletonization algorithm was compared to classical topological skeletons, while the validity of our post-processing algorithm was evaluated by integrating the original post-processing methods from six different works. RESULTS: Using state-of-the-art metrics, the precision- and recall-based Signed Distance Error (SDE) and the Intersection-over-Union bounding box (IoU-box), our results indicate that the SDE for these edges is improved up to 2.3 times. CONCLUSIONS: Our work provides guidance for parameter tuning and algorithm selection in the post-processing of predicted object boundaries.
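Of the two evaluation metrics named above, the IoU bounding-box score is simple enough to sketch. The function below is an illustrative implementation of the standard IoU definition, not the authors' code.

```python
def iou_box(a, b):
    """Intersection over Union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)  # overlap area (0 if disjoint)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

IoU penalizes both missed extent and spurious extent of a predicted edge's bounding box, which complements the distance-based SDE metric.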


Subject(s)
Algorithms; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Surgery, Computer-Assisted; Humans
13.
Surg Endosc ; 32(9): 4052-4061, 2018 09.
Article in English | MEDLINE | ID: mdl-29508142

ABSTRACT

BACKGROUND: This study aimed to develop and evaluate a tool for computer-assisted 3D bowel length measurement (BMS) to improve objective measurement in minimally invasive surgery. Standardization and quality of surgery, as well as its documentation, are currently limited by the lack of objective intraoperative measurements. To solve this problem, we developed BMS as a clinical application of Quantitative Laparoscopy (QL). METHODS: BMS processes images from a conventional 3D laparoscope. Computer vision algorithms are used to measure the distance between laparoscopic instruments along a 3D reconstruction of the bowel surface. Preclinical evaluation was performed in phantom, ex vivo porcine, and in vivo porcine models. First, a bowel length of 70 cm was measured with BMS and compared to a manually obtained ground truth; afterwards, 70 cm of bowel (manual ground truth) was measured and compared to the BMS reading. RESULTS: When 70 cm was measured with BMS, the ground truth was 66.1 ± 2.7 cm (relative error + 5.8%) in the phantom, 65.8 ± 2.5 cm (relative error + 6.4%) in the ex vivo, and 67.5 ± 6.6 cm (relative error + 3.7%) in the in vivo porcine evaluation. For 70 cm of bowel, BMS measured 75.0 ± 2.9 cm (relative error + 7.2%) in the phantom and 74.4 ± 2.8 cm (relative error + 6.3%) in the ex vivo porcine evaluation. After thorough preclinical evaluation, BMS was successfully used in a patient undergoing laparoscopic Roux-en-Y gastric bypass for morbid obesity. CONCLUSIONS: QL using BMS was shown to be feasible and was successfully translated from studies on phantom, ex vivo, and in vivo porcine bowel to a clinical feasibility study.
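At its core, the comparison above is a length accumulated along a reconstructed 3-D path set against a manual ground truth. A minimal sketch, with the surface reconstruction itself omitted and the point format and units assumed:

```python
import math

def path_length(points):
    """Length of a polyline through 3-D points, e.g. instrument positions
    sampled along the reconstructed bowel surface (same unit as input)."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def relative_error(measured, ground_truth):
    """Signed relative measurement error in percent."""
    return (measured - ground_truth) / ground_truth * 100.0
```

For example, `relative_error(75.0, 70.0)` gives about +7.1%, the same order as the +7.2% phantom error reported (the published figure likely averages per-trial errors).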


Subject(s)
Intestines/anatomy & histology; Intestines/diagnostic imaging; Laparoscopy; Animals; Gastric Bypass; Humans; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Laparoscopes; Phantoms, Imaging; Swine
14.
Surg Endosc ; 31(5): 2155-2165, 2017 05.
Article in English | MEDLINE | ID: mdl-27604368

ABSTRACT

INTRODUCTION: Training and assessment outside of the operating room is crucial for minimally invasive surgery due to steep learning curves. Thus, we have developed and validated the sensor- and expert model-based laparoscopic training system, the iSurgeon. MATERIALS: Participants of different experience levels (novice, intermediate, expert) performed four standardized laparoscopic knots. Instruments and surgeons' joint motions were tracked with an NDI Polaris camera and Microsoft Kinect v1. With frame-by-frame image analysis, the key steps of suturing and knot tying were identified and registered with motion data. Construct validity, concurrent validity, and test-retest reliability were analyzed. The Objective Structured Assessment of Technical Skills (OSATS) was used as the gold standard for concurrent validity. RESULTS: The system showed construct validity by discrimination between experience levels by parameters such as time (novice = 442.9 ± 238.5 s; intermediate = 190.1 ± 50.3 s; expert = 115.1 ± 29.1 s; p < 0.001), total path length (novice = 18,817 ± 10318 mm; intermediate = 9995 ± 3286 mm; expert = 7265 ± 2232 mm; p < 0.001), average speed (novice = 42.9 ± 8.3 mm/s; intermediate = 52.7 ± 11.2 mm/s; expert = 63.6 ± 12.9 mm/s; p < 0.001), angular path (novice = 20,573 ± 12,611°; intermediate = 8652 ± 2692°; expert = 5654 ± 1746°; p < 0.001), number of movements (novice = 2197 ± 1405; intermediate = 987 ± 367; expert = 743 ± 238; p < 0.001), number of movements per second (novice = 5.0 ± 1.4; intermediate = 5.2 ± 1.5; expert = 6.6 ± 1.6; p = 0.025), and joint angle range (for different axes and joints all p < 0.001). Concurrent validity of OSATS and iSurgeon parameters was established. Test-retest reliability was given for 7 out of 8 parameters. The key steps "wrapping the thread around the instrument" and "needle positioning" were most difficult to learn. 
CONCLUSION: Validity and reliability of the self-developed sensor- and expert model-based laparoscopic training system iSurgeon were established. Using multiple parameters proved more reliable than single metric parameters. Wrapping the thread around the instrument and needle positioning were identified as difficult key steps of laparoscopic suturing and knot tying. The iSurgeon could generate automated real-time feedback based on expert models, which may result in shorter learning curves for laparoscopic tasks. Our next steps will be the implementation and evaluation of full procedural training in an experimental model.
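One of the discriminating metrics above, angular path, can be sketched as the accumulated change of direction between successive displacement vectors of the tracked instrument. This is an illustrative implementation under assumed 3-D position input (non-zero displacements); it is not the iSurgeon code.

```python
import math

def angular_path(positions):
    """Accumulated absolute direction change, in degrees, between
    successive instrument displacement vectors."""
    def unit(p, q):
        # Unit direction vector of the displacement p -> q.
        d = [b - a for a, b in zip(p, q)]
        n = math.hypot(*d)
        return [x / n for x in d]
    dirs = [unit(p, q) for p, q in zip(positions, positions[1:])]
    total = 0.0
    for u, v in zip(dirs, dirs[1:]):
        # Clamp the dot product to guard against floating-point drift.
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
        total += math.degrees(math.acos(dot))
    return total
```

Smooth, economical expert motion accumulates little direction change, which is why novices show roughly fourfold higher angular path values in the results above.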


Subject(s)
Laparoscopy/education; Simulation Training; Clinical Competence; Feedback; Humans; Reproducibility of Results; Suture Techniques/education
15.
Surg Endosc ; 28(3): 933-40, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24178862

ABSTRACT

BACKGROUND: Laparoscopic liver surgery is particularly challenging owing to restricted access, risk of bleeding, and lack of haptic feedback. Navigation systems have the potential to improve information on the exact position of intrahepatic tumors, and thus facilitate oncological resection. This study aims to evaluate the feasibility of a commercially available augmented reality (AR) guidance system employing intraoperative robotic C-arm cone-beam computed tomography (CBCT) for laparoscopic liver surgery. METHODS: A human liver-like phantom with 16 target fiducials was used to evaluate the Syngo iPilot(®) AR system. Subsequently, the system was used for the laparoscopic resection of a hepatocellular carcinoma in segment 7 of a 50-year-old male patient. RESULTS: In the phantom experiment, the AR system showed a mean target registration error of 0.96 ± 0.52 mm, with a maximum error of 2.49 mm. The patient successfully underwent the operation and showed no postoperative complications. CONCLUSION: The use of intraoperative CBCT and AR for laparoscopic liver resection is feasible and could be considered an option for future liver surgery in complex cases.
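Target registration error, the accuracy measure used in the phantom experiment, reduces to Euclidean distances between registered and true fiducial positions, summarized by their mean and maximum. A minimal sketch (function name and input format are assumptions):

```python
import math

def registration_errors(registered, true_positions):
    """Per-fiducial target registration error (Euclidean distance),
    with the mean and maximum reported as in the phantom study.
    Units follow the input coordinates (mm in the experiment above)."""
    errors = [math.dist(r, t) for r, t in zip(registered, true_positions)]
    return {"per_target": errors,
            "mean": sum(errors) / len(errors),
            "max": max(errors)}
```

Applied to the 16 phantom fiducials, this summary yields figures of the kind reported above (mean 0.96 ± 0.52 mm, maximum 2.49 mm).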


Subject(s)
Carcinoma, Hepatocellular/surgery, Cone-Beam Computed Tomography/methods, Fiducial Markers, Hepatectomy/methods, Laparoscopy/methods, Liver Neoplasms/surgery, Phantoms, Imaging, Surgery, Computer-Assisted/instrumentation, Carcinoma, Hepatocellular/diagnostic imaging, Equipment Design, Humans, Liver Neoplasms/diagnostic imaging, Male, Middle Aged, Reproducibility of Results, Time Factors
16.
Med Image Anal ; 94: 103126, 2024 May.
Article in English | MEDLINE | ID: mdl-38452578

ABSTRACT

Batch Normalization's (BN) unique property of depending on other samples in a batch is known to cause problems in several tasks, including sequence modeling. Yet, BN-related issues are hardly studied for long video understanding, despite the ubiquitous use of BN in CNNs (Convolutional Neural Networks) for feature extraction. Especially in surgical workflow analysis, where the lack of pretrained feature extractors has led to complex, multi-stage training pipelines, limited awareness of BN issues may have hidden the benefits of training CNNs and temporal models end to end. In this paper, we analyze pitfalls of BN in video learning, including issues specific to online tasks such as a 'cheating' effect in anticipation. We observe that BN's properties create major obstacles for end-to-end learning. However, using BN-free backbones, even simple CNN-LSTMs beat the state of the art on three surgical workflow benchmarks by utilizing adequate end-to-end training strategies which maximize temporal context. We conclude that awareness of BN's pitfalls is crucial for effective end-to-end learning in surgical tasks. By reproducing results on natural-video datasets, we hope our insights will benefit other areas of video learning as well. Code is available at: https://gitlab.com/nct_tso_public/pitfalls_bn.
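The central pitfall, that a BN layer's output for one sample depends on the other samples in its batch, can be shown in a few lines. This is a toy one-feature batch norm, not the paper's code:

```python
# Toy batch normalization over a 1-D feature: each sample is normalized with
# the *batch* mean and variance, so one sample's output depends on its
# batchmates -- the property that can leak future frames ("cheating") when
# batches span time in online video tasks.
def batch_norm(batch, eps=1e-5):
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [(x - mean) / (var + eps) ** 0.5 for x in batch]

out_a = batch_norm([1.0, 2.0, 3.0])  # the sample 1.0 alongside 2.0 and 3.0
out_b = batch_norm([1.0, 8.0, 9.0])  # the same sample with different batchmates
print(out_a[0], out_b[0])  # 1.0 maps to two different normalized values
```

With a batch-independent normalization (e.g. per-sample statistics), the same input would always produce the same output, which is why BN-free backbones behave better under end-to-end temporal training.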


Subject(s)
Neural Networks, Computer, Humans, Workflow
17.
Chirurgie (Heidelb) ; 95(6): 429-435, 2024 Jun.
Article in German | MEDLINE | ID: mdl-38443676

ABSTRACT

As the surgeon's central workplace, the operating room is being digitalized in ways that have particular consequences for surgical work. From intraoperative cross-sectional imaging and sonography, through functional imaging, minimally invasive and robot-assisted surgery, to digital surgical and anesthesiological documentation, the vast majority of operating rooms are now at least partially digitalized. The increasing digitalization of the whole process chain enables not only the collection but also the analysis of big data. Current research focuses on artificial intelligence for the analysis of intraoperative data as the prerequisite for assistance systems that support surgical decision making or warn of risks; however, these technologies raise new ethical questions for the surgical community that affect the core of surgical work.


Subject(s)
Artificial Intelligence, Operating Rooms, Humans, Surgery, Computer-Assisted/ethics, Surgery, Computer-Assisted/methods, Surgery, Computer-Assisted/instrumentation, Robotic Surgical Procedures/ethics
18.
Int J Comput Assist Radiol Surg ; 19(6): 1233-1241, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38678102

ABSTRACT

PURPOSE: Understanding surgical scenes is crucial for computer-assisted surgery systems to provide intelligent assistance functionality. One way of achieving this is via scene segmentation using machine learning (ML). However, such ML models require large amounts of annotated training data, containing examples of all relevant object classes, which are rarely available. In this work, we propose a method to combine multiple partially annotated datasets, providing complementary annotations, into one model, enabling better scene segmentation and the use of multiple readily available datasets. METHODS: Our method aims to combine available data with complementary labels by leveraging mutually exclusive properties to maximize information. Specifically, we propose to use positive annotations of other classes as negative samples and to exclude background pixels of these binary annotations, as we cannot tell if a positive prediction by the model there is correct. RESULTS: We evaluate our method by training a DeepLabV3 model on the publicly available Dresden Surgical Anatomy Dataset, which provides multiple subsets of binary segmented anatomical structures. Our approach successfully combines 6 classes into one model, significantly increasing the overall Dice Score by 4.4% compared to an ensemble of models trained on the classes individually. By including information on multiple classes, we were able to reduce the confusion between classes, e.g. a 24% drop for stomach and colon. CONCLUSION: By leveraging multiple datasets and applying mutual exclusion constraints, we developed a method that improves surgical scene segmentation performance without the need for fully annotated datasets. Our results demonstrate the feasibility of training a model on multiple complementary datasets. This paves the way for future work that further alleviates the need for a single large, fully segmented dataset by instead making use of multiple existing datasets.
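The label-combination idea can be sketched on flat per-pixel labels: a pixel positively annotated for one class counts as a negative for a mutually exclusive class, while unannotated background pixels are masked out of the loss. A simplified illustration, not the paper's implementation:

```python
# Per-pixel binary annotations from two partially labeled datasets
# (1 = annotated as the class, 0 = unlabeled background); toy 5-pixel image.
stomach = [1, 1, 0, 0, 0]
colon   = [0, 0, 1, 1, 0]

# Build the training target for one class: its own positives stay positive,
# the positives of a mutually exclusive class become hard negatives, and the
# remaining background pixels are masked (None = excluded from the loss),
# since they might contain an unannotated instance of the class.
def combine(own, other):
    target = []
    for o, x in zip(own, other):
        if o == 1:
            target.append(1)      # positive annotation
        elif x == 1:
            target.append(0)      # other class's positive -> negative sample
        else:
            target.append(None)   # unknown background -> masked out
    return target

print(combine(stomach, colon))  # [1, 1, 0, 0, None]
```

In a real training loop the `None` entries would become a loss mask, so only pixels with known labels contribute gradients.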


Subject(s)
Machine Learning, Humans, Surgery, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Datasets as Topic, Databases, Factual
19.
Int J Comput Assist Radiol Surg ; 19(6): 985-993, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38407730

ABSTRACT

PURPOSE: In surgical computer vision applications, data privacy and expert annotation challenges impede the acquisition of labeled training data. Unpaired image-to-image translation techniques have been explored to automatically generate annotated datasets by translating synthetic images into a realistic domain. The preservation of structure and semantic consistency, i.e., per-class distribution during translation, poses a significant challenge, particularly in cases of semantic distributional mismatch. METHOD: This study empirically investigates various translation methods for generating data in surgical applications, explicitly focusing on semantic consistency. Through our analysis, we introduce a novel and simple combination of effective approaches, which we call ConStructS. The defined losses within this approach operate on multiple image patches and spatial resolutions during translation. RESULTS: Various state-of-the-art models were extensively evaluated on two challenging surgical datasets. With two different evaluation schemes, the semantic consistency and the usefulness of the translated images on downstream semantic segmentation tasks were evaluated. The results demonstrate the effectiveness of the ConStructS method in minimizing semantic distortion, with images generated by this model showing superior utility for downstream training. CONCLUSION: In this study, we tackle semantic inconsistency in unpaired image translation for surgical applications with minimal labeled data. The simple model (ConStructS) enhances consistency during translation and serves as a practical way of generating fully labeled and semantically consistent datasets at minimal cost. Our code is available at https://gitlab.com/nct_tso_public/constructs .
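The multi-patch, multi-resolution idea behind the losses can be illustrated with a plain patch sampler; this is only a schematic sketch of how an image is seen at several spatial scales, not the ConStructS losses themselves, which compare input and translated patches:

```python
# Extract square patches of several sizes from a 2-D "image" (nested lists),
# mimicking how patch-wise losses observe the image at multiple spatial scales.
def patches(img, size, stride):
    h, w = len(img), len(img[0])
    return [
        [row[x:x + size] for row in img[y:y + size]]
        for y in range(0, h - size + 1, stride)
        for x in range(0, w - size + 1, stride)
    ]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy image
small = patches(img, size=2, stride=2)  # four non-overlapping 2x2 patches
large = patches(img, size=4, stride=4)  # one full-resolution patch
print(len(small), len(large))  # 4 1
```

Computing a consistency loss per patch and per scale, then averaging, penalizes local semantic drift that a single global loss can miss.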


Subject(s)
Semantics, Humans, Surgery, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods, Algorithms
20.
Int J Comput Assist Radiol Surg ; 19(1): 139-145, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37328716

ABSTRACT

PURPOSE: Middle ear infection is the most prevalent inflammatory disease, especially among the pediatric population. Current diagnostic methods are subjective and depend on visual cues from an otoscope, which limits otologists' ability to identify pathology. To address this shortcoming, endoscopic optical coherence tomography (OCT) provides both morphological and functional in vivo measurements of the middle ear. However, due to the shadow of prior structures, interpretation of OCT images is challenging and time-consuming. To facilitate fast diagnosis and measurement, we improve the readability of OCT data by merging morphological knowledge from ex vivo middle ear models with OCT volumetric data, so that OCT applications can be further promoted in daily clinical settings. METHODS: We propose C2P-Net: a two-staged non-rigid registration pipeline for complete to partial point clouds, which are sampled from ex vivo and in vivo OCT models, respectively. To overcome the lack of labeled training data, a fast and effective generation pipeline in Blender3D is designed to simulate middle ear shapes and extract in vivo noisy and partial point clouds. RESULTS: We evaluate the performance of C2P-Net through experiments on both synthetic and real OCT datasets. The results demonstrate that C2P-Net generalizes to unseen middle ear point clouds and is capable of handling realistic noise and incompleteness in synthetic and real OCT data. CONCLUSIONS: In this work, we aim to enable diagnosis of middle ear structures with the assistance of OCT images. We propose C2P-Net: a two-staged non-rigid registration pipeline for point clouds to support the interpretation of in vivo noisy and partial OCT images for the first time. Code is available at: https://gitlab.com/nct_tso_public/c2p-net.
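Registering a partial, noisy scan to a complete model rests on establishing point correspondences. A minimal nearest-neighbour correspondence step on toy 2-D data, far simpler than C2P-Net's two-stage non-rigid pipeline:

```python
from math import dist

# Complete ex vivo model points and a partial, noisy in vivo scan (toy 2-D data).
model   = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
partial = [(0.05, 0.02), (0.98, 1.03)]

# For each scanned point, find its nearest neighbour in the complete model --
# the correspondence step at the heart of ICP-style registration. A non-rigid
# pipeline would additionally estimate a deformation from these matches.
def correspondences(scan, model):
    return [min(model, key=lambda m: dist(p, m)) for p in scan]

matches = correspondences(partial, model)
print(matches)  # [(0.0, 0.0), (1.0, 1.0)]
```

Because the scan is partial, only a subset of model points receives a match, which is exactly the complete-to-partial asymmetry the pipeline is built around.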


Subject(s)
Ear, Middle, Tomography, Optical Coherence, Humans, Child, Tomography, Optical Coherence/methods, Ear, Middle/diagnostic imaging, Ear, Middle/pathology, Endoscopy