Results 1 - 20 of 39
1.
Surg Endosc; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38834723

ABSTRACT

BACKGROUND: Tissue handling is a crucial skill for surgeons and is challenging to learn. The aim of this study was to develop laparoscopic instruments with integrated tactile vibration feedback, varying different tactile modalities, and to assess their effect on tissue handling skills. METHODS: Standard laparoscopic instruments were equipped with a vibration effector, which was controlled by a microcomputer attached to a force sensor platform. One of three different vibration feedback modes (F1: double vibration > 2 N; F2: increasing vibration relative to force; F3: one vibration > 1.5 N and double vibration > 2 N) was applied to the instruments. In this multicenter crossover trial, surgical novices and expert surgeons performed two laparoscopic tasks (peg transfer; laparoscopic suture and knot), each with all three vibration feedback modalities and once without any feedback, in randomized order. The primary endpoint was force exertion. RESULTS: A total of 57 subjects (15 surgeons, 42 surgical novices) were included in the trial. In the peg transfer task, there were no differences between the tactile feedback modalities in terms of force application. However, in subgroup analysis, the use of F2 resulted in significantly lower mean force application (p = 0.02) in the student group. In the laparoscopic suture and knot task, all participants exerted significantly lower mean and peak forces using F2 (p < 0.01). These findings remained significant in subgroup analyses of both the student and surgeon groups individually. The condition without tactile feedback led to the highest mean and peak force exertion compared to the three other feedback modalities. CONCLUSION: Continuous tactile vibration feedback decreases the mean and peak force applied during laparoscopic training tasks. This effect is more pronounced in demanding tasks such as laparoscopic suturing and knot tying and might be more beneficial for students. Laparoscopic tasks without feedback lead to increased force application.
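
A minimal Python sketch of the three force-to-vibration mappings compared above (illustrative only: function and threshold names are hypothetical, and the study's actual microcomputer firmware is not described in the abstract). Each returned list stands for the pulse amplitudes sent to the vibration effector in one control cycle.

    def feedback_f1(force_n: float) -> list[float]:
        """F1: a double vibration pulse once force exceeds 2 N."""
        return [1.0, 1.0] if force_n > 2.0 else []

    def feedback_f2(force_n: float, max_force_n: float = 5.0) -> list[float]:
        """F2: continuous vibration, intensity proportional to force.
        The 5 N normalization constant is an assumption."""
        return [min(force_n / max_force_n, 1.0)]

    def feedback_f3(force_n: float) -> list[float]:
        """F3: one pulse above 1.5 N, a double pulse above 2 N."""
        if force_n > 2.0:
            return [1.0, 1.0]
        if force_n > 1.5:
            return [1.0]
        return []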

2.
Surg Endosc; 38(5): 2900-2910, 2024 May.
Article in English | MEDLINE | ID: mdl-38632120

ABSTRACT

BACKGROUND: Virtual reality (VR) is a frequently chosen method for learning the basics of robotic surgery. However, it is unclear whether tissue handling is adequately trained in VR compared to training on a real robotic system. METHODS: In this randomized controlled trial, participants were split into two groups for Fundamentals of Robotic Surgery (FRS) training on either a DaVinci VR simulator (VR group) or a DaVinci robotic system (Robot group). All participants completed four tasks on the DaVinci robotic system before training (Baseline test), after reaching proficiency in three FRS tasks (Midterm test), and after reaching proficiency in all FRS tasks (Final test). The primary endpoints were the forces applied across tests. RESULTS: This trial included 87 robotic novices, of whom 43 and 44 received FRS training in the VR group and Robot group, respectively. The Baseline test showed no significant differences in force application between the groups, indicating sufficient randomization. In the Midterm and Final tests, force application did not differ between groups. Both groups displayed sufficient learning curves with significant improvement in force application. However, the Robot group needed significantly fewer repetitions to reach proficiency in the three FRS tasks Ring tower (Robot: 2.48 vs. VR: 5.45; p < 0.001), Knot Tying (Robot: 5.34 vs. VR: 8.13; p = 0.006), and Vessel Energy Dissection (Robot: 2 vs. VR: 2.38; p = 0.001). CONCLUSION: Robotic tissue handling skills improve significantly and comparably after both VR training and training on a real robotic system, but training on a VR simulator might be less efficient.


Subjects
Clinical Competence, Robotic Surgical Procedures, Virtual Reality, Humans, Robotic Surgical Procedures/education, Female, Male, Prospective Studies, Adult, Simulation Training/methods, Learning Curve, Young Adult
3.
Int J Comput Assist Radiol Surg; 19(6): 1233-1241, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38678102

ABSTRACT

PURPOSE: Understanding surgical scenes is crucial for computer-assisted surgery systems to provide intelligent assistance functionality. One way of achieving this is via scene segmentation using machine learning (ML). However, such ML models require large amounts of annotated training data, containing examples of all relevant object classes, which are rarely available. In this work, we propose a method to combine multiple partially annotated datasets, providing complementary annotations, into one model, enabling better scene segmentation and the use of multiple readily available datasets. METHODS: Our method combines available data with complementary labels by leveraging mutual exclusivity to maximize the information gained. Specifically, we propose to use positive annotations of other classes as negative samples and to exclude the background pixels of these binary annotations from training, as we cannot tell whether a positive prediction by the model on these pixels is correct. RESULTS: We evaluate our method by training a DeepLabV3 model on the publicly available Dresden Surgical Anatomy Dataset, which provides multiple subsets of binary segmented anatomical structures. Our approach successfully combines six classes into one model, significantly increasing the overall Dice score by 4.4% compared to an ensemble of models trained on the classes individually. By including information on multiple classes, we were able to reduce confusion between classes, e.g. a 24% drop between stomach and colon. CONCLUSION: By leveraging multiple datasets and applying mutual exclusion constraints, we developed a method that improves surgical scene segmentation performance without the need for fully annotated datasets. Our results demonstrate the feasibility of training a model on multiple complementary datasets, paving the way for future work that further reduces the need for a single large, fully segmented dataset by using already existing datasets instead.


Subjects
Machine Learning, Humans, Computer-Assisted Surgery/methods, Computer-Assisted Image Processing/methods, Datasets as Topic, Factual Databases
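
The masking idea described in this abstract, using positive annotations of other classes as negatives while excluding unlabeled background from the loss, can be sketched as a masked per-class binary cross-entropy. A minimal PyTorch illustration under assumed tensor layouts (not the authors' published code):

    import torch
    import torch.nn.functional as F

    def masked_bce_loss(logits, labels, label_mask):
        """Per-class binary cross-entropy over trusted pixels only.
        logits:     (B, C, H, W) raw model outputs, one channel per class.
        labels:     (B, C, H, W) binary targets.
        label_mask: (B, C, H, W) 1 where the label is trusted, 0 where the
                    pixel is unlabeled and must be excluded from the loss."""
        loss = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
        return (loss * label_mask).sum() / label_mask.sum().clamp(min=1)

    def build_targets(binary_ann, ann_class, num_classes):
        """Expand one binary annotation into multi-class targets and mask.
        binary_ann: (B, H, W) 0/1 mask for the annotated class ann_class.
        Its positive pixels become negatives for every other class
        (mutual exclusivity); its background pixels are masked out for all
        other classes, since we cannot tell which classes they contain."""
        binary_ann = binary_ann.float()
        b, h, w = binary_ann.shape
        labels = torch.zeros(b, num_classes, h, w)
        labels[:, ann_class] = binary_ann
        mask = binary_ann.unsqueeze(1).expand(-1, num_classes, -1, -1).clone()
        mask[:, ann_class] = 1.0  # the annotated class is fully supervised
        return labels, mask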
4.
Chirurgie (Heidelb); 95(6): 429-435, 2024 Jun.
Article in German | MEDLINE | ID: mdl-38443676

ABSTRACT

As the surgeon's central workplace, the operating room is particularly affected by digitalization. Starting with intraoperative cross-sectional imaging and sonography, through functional imaging, minimally invasive and robot-assisted surgery, up to digital surgical and anesthesiological documentation, the vast majority of operating rooms are now at least partially digitalized. The increasing digitalization of the whole process chain enables not only the collection but also the analysis of big data. Current research focuses on artificial intelligence for the analysis of intraoperative data as a prerequisite for assistance systems that support surgical decision-making or warn of risks; however, these technologies raise new ethical questions for the surgical community that affect the core of surgical work.


Subjects
Artificial Intelligence, Operating Rooms, Humans, Computer-Assisted Surgery/ethics, Computer-Assisted Surgery/methods, Computer-Assisted Surgery/instrumentation, Robotic Surgical Procedures/ethics
5.
Int J Comput Assist Radiol Surg; 19(6): 1045-1052, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38526613

ABSTRACT

PURPOSE: Efficient and precise surgical skills are essential in ensuring positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, this cannot be said for open surgery skills. Open surgery generally has more degrees of freedom than minimally invasive surgery, making it more difficult to interpret. In this paper, we present novel approaches to skill assessment for open surgery. METHODS: We analyzed a novel video dataset for open suturing training, provide a detailed analysis of the dataset, and define evaluation guidelines using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset. RESULTS: The dataset is composed of 314 videos of approximately five minutes each. Model benchmarking yields an accuracy and F1 score of up to 75% and 72%, respectively, similar to the performance achieved by the individual raters with regard to inter-rater agreement and rater variability. We present the first end-to-end trained approach to skill assessment for open surgery training. CONCLUSION: We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the door to new advances in skill assessment by enabling video-based skill assessment for classic surgical techniques, with the potential to improve patients' surgical outcomes.


Subjects
Clinical Competence, Suture Techniques, Video Recording, Humans, Suture Techniques/education, Benchmarking
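
As a rough illustration of the video-based three-class skill classification described above, the following sketch uses torchvision's r3d_18 as a lightweight stand-in for the paper's temporal segment network with I3D or Video Swin backbones (clip shape and class count are assumptions):

    import torch
    import torch.nn as nn
    from torchvision.models.video import r3d_18

    class SkillClassifier(nn.Module):
        """Classifies a short clip into one of three skill levels."""
        def __init__(self, num_classes: int = 3):
            super().__init__()
            self.backbone = r3d_18(weights=None)
            # replace the final layer with a three-class head
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

        def forward(self, clips: torch.Tensor) -> torch.Tensor:
            # clips: (B, 3, T, H, W), e.g. 16 frames at 112x112
            return self.backbone(clips)

    model = SkillClassifier()
    logits = model(torch.randn(2, 3, 16, 112, 112))  # -> (2, 3) class scores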
6.
Sci Data; 11(1): 242, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409278

ABSTRACT

Endoscopic optical coherence tomography (OCT) offers a non-invasive approach to morphological and functional assessment of the middle ear in vivo. However, interpreting such OCT images is challenging and time-consuming due to the shadowing of preceding structures. Deep neural networks have emerged as a promising tool to enhance this process in multiple respects, including segmentation, classification, and registration. Nevertheless, the scarcity of annotated datasets of OCT middle ear images poses a significant hurdle to the performance of neural networks. We introduce the Dresden in vivo OCT Dataset of the Middle Ear (DIOME), featuring 43 OCT volumes from both healthy and pathological middle ears of 29 subjects. DIOME provides semantic segmentations of five crucial anatomical structures (tympanic membrane, malleus, incus, stapes and promontory) and sparse landmarks delineating the salient features of these structures. The availability of these data facilitates the training and evaluation of algorithms for various analysis tasks on middle ear OCT images, e.g. diagnostics.


Subjects
Middle Ear, Optical Coherence Tomography, Humans, Algorithms, Middle Ear/diagnostic imaging, Neural Networks (Computer), Optical Coherence Tomography/methods
7.
Int J Comput Assist Radiol Surg; 19(1): 139-145, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37328716

ABSTRACT

PURPOSE: Middle ear infection is the most prevalent inflammatory disease, especially among the pediatric population. Current diagnostic methods are subjective and depend on visual cues from an otoscope, which limits otologists' ability to identify pathology. To address this shortcoming, endoscopic optical coherence tomography (OCT) provides both morphological and functional in vivo measurements of the middle ear. However, due to the shadowing of preceding structures, interpretation of OCT images is challenging and time-consuming. To facilitate fast diagnosis and measurement, we improve the readability of OCT data by merging morphological knowledge from ex vivo middle ear models with OCT volumetric data, so that OCT applications can be further promoted in daily clinical settings. METHODS: We propose C2P-Net: a two-stage non-rigid registration pipeline for complete-to-partial point clouds, sampled from ex vivo and in vivo OCT models, respectively. To overcome the lack of labeled training data, a fast and effective generation pipeline in Blender3D is designed to simulate middle ear shapes and extract in vivo noisy and partial point clouds. RESULTS: We evaluate the performance of C2P-Net through experiments on both synthetic and real OCT datasets. The results demonstrate that C2P-Net generalizes to unseen middle ear point clouds and is capable of handling realistic noise and incompleteness in synthetic and real OCT data. CONCLUSIONS: In this work, we aim to enable diagnosis of middle ear structures with the assistance of OCT images. We propose C2P-Net, a two-stage non-rigid registration pipeline for point clouds, to support the interpretation of in vivo noisy and partial OCT images for the first time. Code is available at: https://gitlab.com/nct_tso_public/c2p-net.


Subjects
Middle Ear, Optical Coherence Tomography, Humans, Child, Optical Coherence Tomography/methods, Middle Ear/diagnostic imaging, Middle Ear/pathology, Endoscopy
8.
Surg Endosc; 37(11): 8577-8593, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37833509

ABSTRACT

BACKGROUND: With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial but remain a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS: To establish a process for the development of surgomic features, ten video-based features related to bleeding, as a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset consisting of 22 videos from two centers. RESULTS: In total, 14,004 frames were tag annotated. A mean F1-score of 0.75 ± 0.16 was achieved across all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments, with a significant difference in the McNemar test comparing correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION: We presented ten surgomic features relevant for bleeding events in esophageal surgery, automatically extracted from surgical video using ML. AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.


Subjects
Esophagectomy, Robotics, Humans, Bayes Theorem, Esophagectomy/methods, Machine Learning, Minimally Invasive Surgical Procedures/methods, Prospective Studies
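
A common way to realize the active-learning frame selection described above is to rank unlabeled frames by the predictive uncertainty of a Bayesian network, e.g. via Monte Carlo dropout. A minimal sketch of such an acquisition step (the study's exact acquisition function is not specified in the abstract; predictive entropy is assumed here):

    import torch

    def mc_dropout_entropy(model, frames, n_samples: int = 20):
        """Predictive entropy per frame via Monte Carlo dropout.
        model:  a network with dropout layers (e.g. a Bayesian ResNet18).
        frames: (N, C, H, W) batch of unlabeled video frames."""
        model.train()  # keep dropout active at inference time
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(frames), dim=1) for _ in range(n_samples)]
            ).mean(dim=0)  # (N, num_classes) mean predictive distribution
        return -(probs * probs.clamp(min=1e-8).log()).sum(dim=1)  # (N,)

    def select_frames(model, frames, k: int = 100):
        """Pick the k most uncertain frames for expert annotation."""
        scores = mc_dropout_entropy(model, frames)
        return scores.topk(k).indices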
9.
Int J Surg; 109(10): 2962-2974, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37526099

ABSTRACT

BACKGROUND: Lack of anatomy recognition represents a clinically relevant risk in abdominal surgery. Machine learning (ML) methods can help identify visible patterns and risk structures; however, their practical value remains largely unclear. MATERIALS AND METHODS: Based on a novel dataset of 13,195 laparoscopic images with pixel-wise segmentations of 11 anatomical structures, we developed specialized segmentation models for each structure and combined models for all anatomical structures, using two state-of-the-art model architectures (DeepLabv3 and SegFormer), and compared the segmentation performance of the algorithms to a cohort of 28 physicians, medical students, and medical laypersons using the example of pancreas segmentation. RESULTS: Mean Intersection-over-Union for semantic segmentation of intra-abdominal structures ranged from 0.28 to 0.83 and from 0.23 to 0.77 for the DeepLabv3-based structure-specific and combined models, and from 0.31 to 0.85 and from 0.26 to 0.67 for the SegFormer-based structure-specific and combined models, respectively. Both the structure-specific and the combined DeepLabv3-based models are capable of near-real-time operation, while the SegFormer-based models are not. All four models outperformed at least 26 out of 28 human participants in pancreas segmentation. CONCLUSIONS: These results demonstrate that ML methods have the potential to provide relevant assistance in anatomy recognition in minimally invasive surgery in near real time. Future research should investigate the educational value and subsequent clinical impact of the respective assistance systems.


Subjects
Laparoscopy, Machine Learning, Humans, Algorithms, Computer-Assisted Image Processing/methods
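
The reported metric, Intersection-over-Union, is straightforward to reproduce from binary masks; a minimal NumPy sketch (the convention for empty-vs-empty masks is a choice, not taken from the paper):

    import numpy as np

    def iou(pred: np.ndarray, gt: np.ndarray) -> float:
        """IoU between two binary masks of equal shape."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        union = np.logical_or(pred, gt).sum()
        if union == 0:
            return 1.0  # both masks empty: counted as perfect agreement
        return float(np.logical_and(pred, gt).sum() / union)

    def mean_iou(preds, gts):
        """Mean IoU over pairs of (prediction, ground truth) masks."""
        return float(np.mean([iou(p, g) for p, g in zip(preds, gts)]))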
10.
Eur J Surg Oncol; 106996, 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37591704

ABSTRACT

INTRODUCTION: Complex oncological procedures pose various surgical challenges, including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, violation of dissection planes increases the risk of local recurrence and autonomous nerve damage resulting in incontinence and sexual dysfunction. This work explores the feasibility of phase recognition and target structure segmentation in robot-assisted rectal resection (RARR) using machine learning. MATERIALS AND METHODS: A total of 57 RARR were recorded, and subsets of these were annotated with respect to surgical phases and exact locations of target structures (anatomical structures, tissue types, static structures, and dissection areas). For surgical phase recognition, three machine learning models were trained: LSTM, MSTCN, and Trans-SVNet. Based on pixel-wise annotations of target structures in 9037 images, individual segmentation models based on DeepLabv3 were trained. Model performance was evaluated using F1 score, Intersection-over-Union (IoU), accuracy, precision, recall, and specificity. RESULTS: The best results for phase recognition were achieved with the MSTCN model (F1 score: 0.82 ± 0.01, accuracy: 0.84 ± 0.03). Mean IoUs for target structure segmentation ranged from 0.14 ± 0.22 to 0.80 ± 0.14 for organs and tissue types and from 0.11 ± 0.11 to 0.44 ± 0.30 for dissection areas. Image quality, distorting factors (e.g. blood, smoke), and technical challenges (e.g. lack of depth perception) considerably impacted segmentation performance. CONCLUSION: Machine learning-based phase recognition and segmentation of selected target structures are feasible in RARR. In the future, such functionalities could be integrated into a context-aware surgical guidance system for rectal surgery.
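
Of the three phase-recognition models compared above, MSTCN is built from stacks of dilated temporal convolutions over per-frame features. A single-stage PyTorch sketch of that building block (layer sizes and the phase count are illustrative; the full multi-stage model adds refinement stages):

    import torch
    import torch.nn as nn

    class DilatedResidualLayer(nn.Module):
        """One dilated temporal convolution block, as used in (MS-)TCN."""
        def __init__(self, channels: int, dilation: int):
            super().__init__()
            self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                                  padding=dilation, dilation=dilation)
            self.out = nn.Conv1d(channels, channels, kernel_size=1)

        def forward(self, x):
            return x + self.out(torch.relu(self.conv(x)))  # residual connection

    class SingleStageTCN(nn.Module):
        """Maps per-frame features (B, F, T) to phase logits (B, P, T)."""
        def __init__(self, in_dim=2048, channels=64, num_phases=7, layers=10):
            super().__init__()
            self.inp = nn.Conv1d(in_dim, channels, kernel_size=1)
            self.blocks = nn.Sequential(
                *[DilatedResidualLayer(channels, 2 ** i) for i in range(layers)])
            self.head = nn.Conv1d(channels, num_phases, kernel_size=1)

        def forward(self, feats):
            return self.head(self.blocks(self.inp(feats)))

    logits = SingleStageTCN()(torch.randn(1, 2048, 3000))  # 3000-frame video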

11.
Updates Surg; 75(5): 1103-1115, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37160843

ABSTRACT

Training improves skills in minimally invasive surgery. This study aimed to investigate the learning curves of complex motion parameters for both hands during a standardized training course using a novel measurement tool. An additional focus was placed on the parameters representing surgical safety and precision. Fifty-six laparoscopic novices participated in a training course on the basic skills of minimally invasive surgery based on a modified Fundamentals of Laparoscopic Surgery (FLS) curriculum. Before, twice during, and once after the practical lessons, all participants had to perform four laparoscopic tasks (peg transfer, precision cut, balloon resection, and laparoscopic suture and knot), which were recorded and analyzed using an instrument motion analysis system. Participants significantly improved the time per task for all four tasks (all p < 0.001). The individual instrument path length decreased significantly for the dominant and non-dominant hands in all four tasks. Similarly, both hands became significantly faster in all tasks, with the exception of the non-dominant hand in the precision cut task. In terms of relative idle time, only in the peg transfer task did both hands improve significantly, while in the precision cut task, only the dominant hand performed better. In contrast, the motion volume of both hands combined was reduced in only one task (precision cut, p = 0.01), whereas no significant improvement in the relative time of instruments being out of view was observed. FLS-based skills training increases motion efficiency primarily by increasing speed and reducing idle time and path length. Parameters relevant for surgical safety and precision (motion volume and relative time of instruments being out of view) are minimally affected by short-term training. Consequently, surgical training should also focus on safety and precision-related parameters, and assessment of these parameters should be incorporated into basic skill training accordingly.


Subjects
Laparoscopy, Humans, Prospective Studies, Laparoscopy/education, Curriculum, Minimally Invasive Surgical Procedures, Learning Curve, Clinical Competence
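
The motion parameters analyzed above (path length, velocity, idle time) follow directly from time-stamped instrument tip positions. A minimal NumPy sketch under assumed units, sampling rate, and idle threshold (none of these values are taken from the study):

    import numpy as np

    def motion_metrics(tip_xyz: np.ndarray, hz: float = 30.0,
                       idle_thresh_mm_s: float = 5.0) -> dict:
        """Basic instrument-motion parameters from tracked tip positions.
        tip_xyz: (T, 3) instrument tip coordinates in millimetres,
                 sampled at `hz` frames per second.
        The 5 mm/s idle threshold is illustrative, not the study's value."""
        steps = np.diff(tip_xyz, axis=0)       # per-sample displacement (T-1, 3)
        dists = np.linalg.norm(steps, axis=1)  # mm travelled per sample
        speeds = dists * hz                    # instantaneous speed in mm/s
        return {
            "path_length_mm": float(dists.sum()),
            "mean_velocity_mm_s": float(speeds.mean()),
            "relative_idle_time": float((speeds < idle_thresh_mm_s).mean()),
        }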
12.
Med Image Anal; 86: 102803, 2023 May.
Article in English | MEDLINE | ID: mdl-37004378

ABSTRACT

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as instrument-verb-target triplets delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.


Subjects
Benchmarking, Laparoscopy, Humans, Algorithms, Operating Rooms, Workflow, Deep Learning
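
The challenge metric, mean average precision over triplet classes, treats each frame as a multi-label sample. A minimal sketch with scikit-learn (array shapes and the handling of classes without positives are assumptions):

    import numpy as np
    from sklearn.metrics import average_precision_score

    def triplet_map(y_true: np.ndarray, y_score: np.ndarray) -> float:
        """Mean AP over triplet classes for multi-label frame predictions.
        y_true:  (N, K) binary matrix, one column per triplet class.
        y_score: (N, K) predicted confidence per frame and class.
        Classes without any positive frame are skipped, as AP is
        undefined for them (one possible convention)."""
        aps = [average_precision_score(y_true[:, k], y_score[:, k])
               for k in range(y_true.shape[1]) if y_true[:, k].any()]
        return float(np.mean(aps))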
13.
Med Image Anal; 86: 102770, 2023 May.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge in the sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS: F1-scores were achieved for phase recognition between 23.9% and 67.7% (n = 9 teams) and for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.


Subjects
Artificial Intelligence, Benchmarking, Humans, Workflow, Algorithms, Machine Learning
14.
Sci Data; 10(1): 3, 2023 Jan 12.
Article in English | MEDLINE | ID: mdl-36635312

ABSTRACT

Laparoscopy is an imaging technique that enables minimally invasive procedures in various medical disciplines including abdominal surgery, gynaecology and urology. To date, publicly available laparoscopic image datasets are mostly limited to general classifications of data, semantic segmentations of surgical instruments, and low-volume weak annotations of specific abdominal organs. The Dresden Surgical Anatomy Dataset provides semantic segmentations of eight abdominal organs (colon, liver, pancreas, small intestine, spleen, stomach, ureter, vesicular glands), the abdominal wall, and two vessel structures (inferior mesenteric artery, intestinal veins) in laparoscopic view. In total, this dataset comprises 13,195 laparoscopic images. For each anatomical structure, we provide over a thousand images with pixel-wise segmentations. Annotations comprise semantic segmentations of single organs and one multi-organ segmentation dataset including segments for all eleven anatomical structures. Moreover, we provide weak annotations of organ presence for every single image. This dataset markedly expands the horizon for surgical data science applications of computer vision in laparoscopic surgery and could thereby contribute to a reduction of risks and a faster translation of artificial intelligence into surgical practice.


Subjects
Abdomen, Artificial Intelligence, Abdomen/anatomy & histology, Abdomen/surgery, Algorithms, Data Science, X-Ray Computed Tomography/methods, Germany
15.
Surg Endosc; 36(11): 8568-8591, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36171451

ABSTRACT

BACKGROUND: Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analyzing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics. METHODS: We defined Surgomics as the entirety of surgomic features, i.e. process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team, we discussed potential data sources such as endoscopic videos, vital-sign monitoring, medical devices and instruments, and the respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers to rate the features' clinical relevance and technical feasibility. RESULTS: In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants), the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance", both for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) and for long-term (oncological) outcome (8.2 ± 1.8). The feature category rated by (computer) scientists as most feasible to extract automatically was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective categories were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION: Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.


Subjects
Machine Learning, Surgeons, Humans, Morbidity
16.
Surg Endosc; 36(6): 4359-4368, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34782961

ABSTRACT

BACKGROUND: Coffee can increase vigilance and performance, especially during sleep deprivation. The hypothetical downside of caffeine in the surgical field is its potential interaction with the ergonomics of movement and the central nervous system. The objective of this trial was to investigate the influence of caffeine on laparoscopic performance. METHODS: Fifty laparoscopic novices participated in this prospective randomized, blinded crossover trial and were trained in a modified FLS curriculum until reaching a predefined proficiency. Subsequently, all participants performed four laparoscopic tasks twice, once after consumption of a placebo and once after a caffeinated (200 mg) beverage. Comparative analysis was performed between the cohorts. Primary endpoint analysis included task time, task errors, OSATS score, and a performance analysis with an instrument motion analysis (IMA) system. RESULTS: Fifty participants completed the study; 68% of participants drank coffee daily. The time to completion for each task was comparable between the caffeine and placebo cohorts for peg transfer (119 s vs 121 s; p = 0.73), precise cutting (157 s vs 163 s; p = 0.74), gallbladder resection (190 s vs 173 s; p = 0.6) and surgical knot (171 s vs 189 s; p = 0.68). The instrument motion analysis showed no significant differences between the caffeine and placebo groups in any parameter: instrument volume, path length, idle, velocity, acceleration, and instrument out of view. Additionally, OSATS scores did not differ between groups, regardless of task. Major errors occurred similarly in both groups, except for one error criterion during the circle cutting task, which occurred significantly more often in the caffeine group (34% vs. 16%, p < 0.05). CONCLUSION: The objective IMA and performance scores of laparoscopic skills revealed that caffeine consumption neither enhances nor impairs the overall laparoscopic performance of surgical novices. The evidence regarding major errors is not conclusive, but their occurrence could in part be negatively influenced by caffeine intake.


Subjects
Caffeine, Laparoscopy, Clinical Competence, Coffee, Cross-Over Studies, Humans, Laparoscopy/education, Prospective Studies
17.
Front Hum Neurosci; 15: 675700, 2021.
Article in English | MEDLINE | ID: mdl-34675789

ABSTRACT

The ability to perceive differences in depth is important in many daily life situations. It is also of relevance in laparoscopic surgical procedures that require the extrapolation of three-dimensional visual information from two-dimensional planar images. Besides visual-motor coordination, laparoscopic skills and binocular depth perception are demanding visual tasks for which learning is important. This study explored potential relations between binocular depth perception and individual variations in performance gains during laparoscopic skill acquisition in medical students naïve to such procedures. Individual differences in perceptual learning of binocular depth discrimination when performing a random dot stereogram (RDS) task were measured as variations in the slope changes of the logistic disparity psychometric curves from the first to the last block of the experiment. The results showed that not only did individuals differ in their depth discrimination; the extent to which this performance changed across blocks also differed substantially between individuals. Of note, individual differences in perceptual learning of depth discrimination were associated with performance gains from laparoscopic skill training, both with respect to movement speed and to an efficiency score that considered both speed and precision. These results indicate that learning-related benefits for enhancing demanding visual processes are, in part, shared between these two tasks. Future studies that include a broader selection of task-varying monocular and binocular cues as well as visual-motor coordination are needed to further investigate potential mechanistic relations between depth perceptual learning and laparoscopic skill acquisition. A deeper understanding of these mechanisms would be important for applied research that aims at designing behavioral interventions for enhancing technology-assisted laparoscopic skills.
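
The slopes referred to above come from logistic psychometric functions fitted to disparity-discrimination data. A minimal SciPy sketch of fitting such a curve and taking the first-to-last-block slope change as a learning index (the functional form and the illustrative data are assumptions, not the study's):

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(disparity, threshold, slope):
        """2AFC psychometric function: chance level 0.5, ceiling 1.0."""
        return 0.5 + 0.5 / (1.0 + np.exp(-slope * (disparity - threshold)))

    def fit_slope(disparities, prop_correct):
        """Fit the logistic and return its slope parameter."""
        (thr, slope), _ = curve_fit(logistic, disparities, prop_correct,
                                    p0=[np.median(disparities), 1.0])
        return slope

    # Perceptual-learning index: slope change from first to last block
    # (disparity levels and proportions correct below are made up).
    disp = np.array([0.5, 1, 2, 4, 8, 16])
    first = np.array([0.50, 0.55, 0.65, 0.80, 0.90, 0.97])
    last = np.array([0.52, 0.62, 0.78, 0.92, 0.98, 1.00])
    learning = fit_slope(disp, last) - fit_slope(disp, first)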

18.
Obes Surg; 31(11): 4692-4700, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34331186

ABSTRACT

PURPOSE: Accurate laparoscopic bowel length measurement (LBLM), which is used primarily in metabolic surgery, remains a challenge. This study compares three conventional methods for LBLM, namely visual judgment (VJ), instrument markings (IM), and premeasured tape (PT), with a novel computer-assisted 3D measurement system (BMS). MATERIALS AND METHODS: LBLM methods were compared using a 3D laparoscope on bowel phantoms regarding accuracy (relative error in percent, %), time in seconds (s), and number of bowel grasps. Seventy centimeters were measured seven times. As a control, the first, third, fifth, and seventh measurements were performed with VJ. The interventions IM, PT, and BMS were performed in randomized order as the second, fourth, and sixth measurements. RESULTS: In total, 63 people participated. BMS showed better accuracy (2.1±3.7%) compared to VJ (8.7±13.7%, p=0.001), PT (4.3±6.8%, p=0.002), and IM (11±15.3%, p<0.001). Participants performed LBLM in a similar amount of time with BMS (175.7±59.7 s) and PT (166.5±63.6 s, p=0.35), but VJ (64.0±24.0 s, p<0.001) and IM (144.9±55.4 s, p=0.002) were faster. The number of bowel grasps, as a proxy for the risk of bowel lesions, was similar for BMS (15.8±3.0) and PT (15.9±4.6, p=0.861), whereas VJ required fewer (14.1±3.4, p=0.004) and IM more (22.2±6.9, p<0.001) than BMS. CONCLUSIONS: PT had higher accuracy than VJ and IM, and a lower number of bowel grasps than IM. BMS shows great potential for more reliable LBLM. Until BMS is available in clinical routine, PT should be preferred for LBLM.


Subjects
Laparoscopy, Morbid Obesity, Computers, Humans, Intestines, Morbid Obesity/surgery
19.
Sci Data; 8(1): 101, 2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33846356

ABSTRACT

Image-based tracking of medical instruments is an integral part of surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the proposed methods still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on method robustness and generalization capabilities. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all video frames as well as information on instrument presence and corresponding instance-wise segmentation masks for surgical instruments (if any) in more than 10,000 individual frames. The data has successfully been used to organize international competitions within the Endoscopic Vision Challenges 2017 and 2019.


Subjects
Sigmoid Colon/surgery, Restorative Proctocolectomy/instrumentation, Rectum/surgery, Surgical Navigation Systems, Data Science, Humans, Laparoscopy
20.
Surg Endosc; 35(9): 5365-5374, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33904989

ABSTRACT

BACKGROUND: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. The majority of surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance, but they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons. METHODS: The methodology presented herein allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. A VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance after completion of the surgeon's learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR. RESULTS: The duration of each operation decreased with the robot's increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s, and finally 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%. CONCLUSIONS: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.


Subjects
Laparoscopy, Robotics, Cognition, Humans, Learning Curve, Minimally Invasive Surgical Procedures