Results 1 - 20 of 89
1.
Surg Endosc ; 38(10): 5793-5802, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39148005

ABSTRACT

BACKGROUND: Routine surgical video recording has multiple benefits. Video acts as an objective record of the operative procedure, allows video-based coaching, and is integral to the development of digital technologies. Despite these benefits, adoption is not widespread. To date, only questionnaire studies have explored this failure in adoption. This study aims to determine the barriers and provide recommendations for the implementation of routine surgical video recording. MATERIALS AND METHODS: A pre- and post-pilot questionnaire surrounding a real-world implementation of C-SATS©, an educational recording and surgical analytics platform, was conducted in a university teaching hospital trust. Usage metrics from the pilot study and descriptive analyses of questionnaire responses were used with the non-adoption, abandonment, scale-up, spread, and sustainability (NASSS) framework to create topic guides for semi-structured interviews. Transcripts of the interviews were evaluated in an inductive thematic analysis. RESULTS: Engagement with the C-SATS© platform failed to reach consistent levels, with only 57 videos uploaded. Three attending surgeons, four surgical residents, one scrub nurse, three patients, one lawyer, and one industry representative were interviewed, all of whom perceived value in recording. Barriers of 'change,' 'resource,' and 'governance' were identified as the main themes. Resistance was centred on patient misinterpretation of videos. Participants believed availability of infrastructure would facilitate adoption, but integration into the surgical workflow is required. Regulatory uncertainty was centred on anonymity and data ownership. CONCLUSION: Barriers to the adoption of routine surgical video recording exist beyond technological barriers alone. Priorities for implementation include integrating recording into the patient record, engaging all stakeholders to ensure buy-in, and formalising consent processes to establish patient trust.


Subjects
Qualitative Research, Video Recording, Humans, Pilot Projects, Attitude of Health Personnel, Surveys and Questionnaires, Internship and Residency
2.
Surg Endosc ; 38(9): 4894-4905, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38958719

ABSTRACT

BACKGROUND: Laparoscopic pancreatoduodenectomy (LPD) is one of the most challenging operations and has a long learning curve. Artificial intelligence (AI)-based automated surgical phase recognition in intraoperative videos has many potential applications in surgical education and could help shorten the learning curve, but no study has made this breakthrough in LPD. Herein, we aimed to build AI models to recognize the surgical phase in LPD and explore the performance characteristics of these models. METHODS: Among 69 LPD videos from a single surgical team, we used 42 in the building group to establish the models and the remaining 27 videos in the analysis group to assess the models' performance characteristics. We annotated 13 surgical phases of LPD, including 4 key phases and 9 necessary phases. Two minimally invasive pancreatic surgeons annotated all the videos. We built two AI models, based on convolutional neural networks, for key phase and necessary phase recognition. The overall performance of the AI models was determined mainly by mean average precision (mAP). RESULTS: Overall mAPs of the AI models in the test set of the building group were 89.7% and 84.7% for key phases and necessary phases, respectively. In the 27-video analysis group, overall mAPs were 86.8% and 71.2%, with maximum mAPs of 98.1% and 93.9%. We found commonalities between the errors of model recognition and the differences between surgeons' annotations, and the AI models performed poorly in cases with anatomic variation or lesion involvement of adjacent organs. CONCLUSIONS: AI-based automated surgical phase recognition can be achieved in LPD, with outstanding performance in selected cases. This breakthrough may be the first step toward AI- and video-based surgical education in more complex surgeries.
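For readers unfamiliar with the headline metric, the following minimal sketch (toy per-frame scores and labels, not the study's data or code) shows how average precision is computed for one phase and averaged into mAP:

```python
def average_precision(scores, labels):
    """AP for one class: rank frames by score, average precision at each hit."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    hits, precisions = 0, []
    for rank, (_, is_pos) in enumerate(ranked, start=1):
        if is_pos:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(per_class):
    """mAP: mean of per-class APs, e.g. one AP per surgical phase."""
    aps = [average_precision(s, l) for s, l in per_class]
    return sum(aps) / len(aps)

# Toy example: two "phases", per-frame scores and ground-truth labels.
phase_a = ([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])   # perfectly ranked -> AP 1.0
phase_b = ([0.9, 0.8, 0.3, 0.1], [0, 1, 0, 1])   # AP = (1/2 + 2/4)/2 = 0.5
print(mean_average_precision([phase_a, phase_b]))  # 0.75
```

This is the same quantity that object- and phase-recognition benchmarks report, averaged over classes rather than over a single detection threshold.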


Subjects
Artificial Intelligence, Laparoscopy, Pancreaticoduodenectomy, Video Recording, Pancreaticoduodenectomy/methods, Pancreaticoduodenectomy/education, Humans, Laparoscopy/methods, Laparoscopy/education, Learning Curve
3.
Surg Endosc ; 38(7): 3758-3772, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38789623

ABSTRACT

BACKGROUND: Hyperspectral imaging (HSI), combined with machine learning, can help to identify characteristic tissue signatures enabling automatic tissue recognition during surgery. This study aims to develop the first HSI-based automatic abdominal tissue recognition with human data in a prospective bi-center setting. METHODS: Data were collected from patients undergoing elective open abdominal surgery at two international tertiary referral hospitals from September 2020 to June 2021. HS images were captured at various time points throughout the surgical procedure. Resulting RGB images were annotated with 13 distinct organ labels. Convolutional Neural Networks (CNNs) were employed for the analysis, with both external and internal validation settings utilized. RESULTS: A total of 169 patients were included, 73 (43.2%) from Strasbourg and 96 (56.8%) from Verona. The internal validation within centers combined patients from both centers into a single cohort, randomly allocated to the training (127 patients, 75.1%, 585 images) and test sets (42 patients, 24.9%, 181 images). This validation setting showed the best performance. The highest true positive rate was achieved for the skin (100%) and the liver (97%). Misclassifications included tissues with a similar embryological origin (omentum and mesentery: 32%) or with overlapping boundaries (liver and hepatic ligament: 22%). The median Dice score for ten tissue classes exceeded 80%. CONCLUSION: To improve automatic surgical scene segmentation and to drive clinical translation, multicenter accurate HSI datasets are essential, but further work is needed to quantify the clinical value of HSI. HSI might be included in a new omics science, namely surgical optomics, which uses light to extract quantifiable tissue features during surgery.
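The Dice score used above measures the overlap between a predicted and an annotated segmentation mask. A minimal sketch with invented toy masks (not the study's data):

```python
def dice_score(pred, truth):
    """Dice coefficient between two sets of pixel indices for one tissue class."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy 'liver' masks: predicted vs. annotated pixel indices.
pred = {0, 1, 2, 3}
truth = {1, 2, 3, 4}
print(dice_score(pred, truth))  # 2*3 / (4+4) = 0.75
```

In practice the masks are per-pixel arrays; the set formulation above is the same formula restricted to the foreground pixels of one class.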


Subjects
Deep Learning, Hyperspectral Imaging, Humans, Prospective Studies, Hyperspectral Imaging/methods, Male, Female, Middle Aged, Aged, Abdomen/surgery, Abdomen/diagnostic imaging, Surgery, Computer-Assisted/methods
4.
Surg Endosc ; 38(8): 4316-4328, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38872018

ABSTRACT

BACKGROUND: Laparoscopic cholecystectomy is a very frequent surgical procedure. However, in an ageing society, fewer surgical staff will have to perform surgery on more patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, intraoperative action workflow recognition is a key challenge. METHODS: A surgical process model was developed for intraoperative surgical activities, covering actor, instrument, action, and target in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using a fivefold cross-validation approach. RESULTS: In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. The machine learning algorithm trained and validated on our own dataset scored a mean average precision (mAP) of 25.7% and a top-5 accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS: An activity model was developed and applied for the fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A machine learning algorithm trained on our own annotated dataset and CholecT45 achieved higher performance than training on CholecT45 alone and can recognize frequently occurring activities well, but not infrequent activities. The analysis of the annotated dataset allowed quantification of the potential of collaborative surgical robots to address the workload of surgical staff. If collaborative surgical robots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e., excluding camera guidance) could be performed by robots.
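The top-K accuracy reported above counts a prediction as correct when the true activity is among the model's K highest-scoring classes. An illustrative sketch with invented probabilities:

```python
def top_k_accuracy(prob_rows, labels, k=5):
    """Fraction of samples whose true class is among the k highest-scoring classes."""
    correct = 0
    for probs, label in zip(prob_rows, labels):
        top_k = sorted(range(len(probs)), key=lambda c: -probs[c])[:k]
        correct += label in top_k
    return correct / len(labels)

# Toy example with 4 activity classes and k=2.
probs = [[0.5, 0.3, 0.1, 0.1],
         [0.1, 0.2, 0.6, 0.1]]
print(top_k_accuracy(probs, labels=[1, 3], k=2))  # 0.5: first sample hits, second misses
```

The gap between the 25.7% mAP and the 85.3% top-5 accuracy is typical for fine-grained activity vocabularies: the correct activity is usually near the top of the ranking even when it is not ranked first.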


Subjects
Cholecystectomy, Laparoscopic, Machine Learning, Robotic Surgical Procedures, Cholecystectomy, Laparoscopic/methods, Robotic Surgical Procedures/methods, Humans, Swine, Animals, Algorithms, Video Recording, Workflow
5.
Lang Resour Eval ; 58(3): 1043-1071, 2024.
Article in English | MEDLINE | ID: mdl-39323984

ABSTRACT

Robot-assisted minimally invasive surgery is the gold standard for the surgical treatment of many pathological conditions, since it guarantees the patient a shorter hospital stay and quicker recovery. Several manuals and academic papers describe how to perform these interventions and thus contain important domain-specific knowledge. This information, if automatically extracted and processed, can be used to extract or summarize surgical practices, or to develop decision-making systems that help the surgeon or nurses optimize the patient's management before, during, and after surgery by providing theoretically grounded suggestions. However, general English natural language understanding algorithms suffer from lower efficacy and coverage issues when applied to domains other than those they are typically trained on, and a domain-specific annotated textual corpus has been missing. To overcome this problem, we annotated the first robotic-surgery procedural corpus with PropBank-style semantic labels. Starting from the original PropBank framebank, we enriched it by adding the new lemmas, frames, and semantic arguments required to cover information absent from general English but needed in procedural surgical language, releasing the Robotic-Surgery Procedural Framebank (RSPF). We then collected as-is sentences from robotic-surgery textbooks, totalling 32,448 tokens, and annotated them with RSPF labels. We thus obtained and publicly released the first annotated corpus of the robotic-surgery domain, which can be used to foster further research on language understanding and on the extraction of procedural entities and relations from the clinical and surgical scientific literature.
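A PropBank-style annotation pairs each predicate with labeled semantic arguments. The sketch below shows what such an annotation could look like for a procedural sentence; the sentence, frame name, and role assignments are invented for illustration and are not taken from RSPF:

```python
# One procedural sentence with a PropBank-style predicate-argument annotation.
# Frame and roles here are illustrative; RSPF defines its own lemmas, frames,
# and argument inventory.
annotation = {
    "sentence": "The assistant retracts the gallbladder fundus cephalad.",
    "predicate": {"lemma": "retract", "frame": "retract.01"},
    "arguments": {
        "ARG0": "The assistant",            # agent
        "ARG1": "the gallbladder fundus",   # thing retracted
        "ARGM-DIR": "cephalad",             # direction modifier
    },
}

def surface_roles(ann):
    """List (role, span) pairs, e.g. for export to a CoNLL-style table."""
    return sorted(ann["arguments"].items())

print(surface_roles(annotation))
```

Corpora annotated this way let a semantic-role labeler learn who does what to what, which is the backbone of the entity and relation extraction the authors envision.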

6.
J Surg Res ; 283: 500-506, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36436286

ABSTRACT

INTRODUCTION: Video-based review of surgical procedures has proven to be useful in training by enabling efficiency in the qualitative assessment of surgical skill and intraoperative decision-making. Current video segmentation protocols focus largely on procedural steps. Although some operations are more complex than others, many of the steps in any given procedure involve an intricate choreography of basic maneuvers such as suturing, knot tying, and cutting. The use of these maneuvers at certain procedural steps can convey information that aids in the assessment of the complexity of the procedure, surgical preference, and skill. Our study aims to develop and evaluate an algorithm to identify these maneuvers. METHODS: A standard deep learning architecture was used to differentiate between suture throws, knot ties, and suture cutting on a data set composed of videos from practicing clinicians (N = 52) who participated in a simulated enterotomy repair. The perceived added value over traditional artificial intelligence segmentation was explored by qualitatively examining the utility of identifying maneuvers in a subset of steps for an open colon resection. RESULTS: An accuracy of 84% was reached in differentiating maneuvers. The precision in detecting the basic maneuvers was 87.9%, 60%, and 90.9% for suture throws, knot ties, and suture cutting, respectively. The qualitative concept mapping confirmed realistic scenarios that could benefit from basic maneuver identification. CONCLUSIONS: Basic maneuvers can indicate error management activity or safety measures and allow for the assessment of skill. Our deep learning algorithm identified basic maneuvers with reasonable accuracy. Such models can aid in artificial intelligence-assisted video review by providing additional information that can complement traditional video segmentation protocols.
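Per-maneuver precision, as reported above, is the fraction of predictions for a maneuver that are correct. A minimal sketch over invented (predicted, actual) label pairs, not the study's data:

```python
def per_class_precision(pairs, classes):
    """Precision per maneuver: TP / (TP + FP) over (predicted, actual) pairs."""
    out = {}
    for c in classes:
        tp = sum(1 for pred, actual in pairs if pred == c and actual == c)
        fp = sum(1 for pred, actual in pairs if pred == c and actual != c)
        out[c] = tp / (tp + fp) if tp + fp else 0.0
    return out

# Toy (predicted, actual) maneuver labels.
pairs = [("suture", "suture"), ("suture", "suture"), ("suture", "tie"),
         ("tie", "tie"), ("cut", "cut"), ("cut", "tie")]
print(per_class_precision(pairs, ["suture", "tie", "cut"]))
```

Reporting precision per class, as the authors do, exposes imbalances (such as the weaker 60% for knot ties) that a single overall accuracy would hide.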


Subjects
Artificial Intelligence, Clinical Competence, Algorithms, Neurosurgical Procedures, Colon, Suture Techniques/education
7.
Surg Endosc ; 37(11): 8690-8707, 2023 11.
Article in English | MEDLINE | ID: mdl-37516693

ABSTRACT

BACKGROUND: Surgery generates a vast amount of data from each procedure. Video data in particular provides significant value for surgical research, clinical outcome assessment, quality control, and education. The data lifecycle is influenced by various factors, including data structure, acquisition, storage, and sharing; data use and exploration; and finally data governance, which encompasses all ethical and legal regulations associated with the data. There is a universal need among stakeholders in surgical data science to establish standardized frameworks that address all aspects of this lifecycle to ensure data quality and purpose. METHODS: Working groups were formed among 48 representatives from academia and industry, including clinicians, computer scientists, and industry representatives. These working groups focused on Data Use, Data Structure, Data Exploration, and Data Governance. After working group and panel discussions, a modified Delphi process was conducted. RESULTS: The resulting Delphi consensus provides conceptualized and structured recommendations for each domain related to surgical video data. We identified the key stakeholders within the data lifecycle and formulated comprehensive, easily understandable, and widely applicable guidelines for data utilization. Standardization of data structure should encompass format and quality, data sources, documentation, and metadata, and account for biases within the data. To foster scientific data exploration, datasets should reflect diversity and remain adaptable to future applications. Data governance must be transparent to all stakeholders, addressing the legal and ethical considerations surrounding the data. CONCLUSION: This consensus presents essential recommendations for the generation of standardized and diverse surgical video databanks, accounting for the multiple stakeholders involved in data generation and use throughout the data lifecycle. Following the SAGES annotation framework, we lay the foundation for the standardization of data use, structure, and exploration. A detailed exploration of the requirements for adequate data governance will follow.


Subjects
Artificial Intelligence, Quality Improvement, Humans, Consensus, Data Collection
8.
Surg Endosc ; 37(11): 8577-8593, 2023 11.
Article in English | MEDLINE | ID: mdl-37833509

ABSTRACT

BACKGROUND: With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial but still a bottleneck, we prospectively investigated active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS: To establish a process for the development of surgomic features, ten video-based features related to bleeding, as a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset consisting of 22 videos from two centers. RESULTS: In total, 14,004 frames were tag annotated. A mean F1-score of 0.75 ± 0.16 was achieved across all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments, with a significant difference in the McNemar test comparing the correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION: We presented ten surgomic features relevant to bleeding events in esophageal surgery, automatically extracted from surgical video using ML. AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.
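The McNemar test used above compares two methods on the same frames using only the discordant counts. A minimal exact-test sketch with invented counts (the study's actual counts are not given here):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test.

    b = frames one method got right and the other wrong, c = the reverse;
    the cells where both methods agree drop out of the statistic.
    Returns the two-sided p-value under H0: b and c are equally likely.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)

# Toy counts: AL correct / EQS wrong on 15 frames, the reverse on 3.
print(mcnemar_exact(15, 3))  # small p -> systematic difference between methods
```

The exact binomial form is preferable to the chi-square approximation when the discordant counts are small, as they often are in paired annotation studies.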


Subjects
Esophagectomy, Robotics, Humans, Bayes Theorem, Esophagectomy/methods, Machine Learning, Minimally Invasive Surgical Procedures/methods, Prospective Studies
9.
Surg Endosc ; 37(11): 8540-8551, 2023 11.
Article in English | MEDLINE | ID: mdl-37789179

ABSTRACT

BACKGROUND: The increased digitization of robotic surgical procedures today enables surgeons to quantify their movements through data captured directly from the robotic system. These calculations, called objective performance indicators (OPIs), offer unprecedented detail on surgical performance. In this study, we link case- and surgical-step-specific OPIs to case complexity, surgical experience and console utilization, and post-operative clinical complications across 87 robotic cholecystectomy (RC) cases. METHODS: Videos of RCs performed by a principal surgeon with and without fellows were segmented into eight surgical steps and linked to patients' clinical data. Data for OPI calculations were extracted from an Intuitive Data Recorder and the da Vinci® robotic system. RC cases were each assigned Nassar and Parkland grading scores and categorized as standard or complex. OPIs were compared across complexity groups, console attributions, and post-surgical complication severities to determine objective relationships across variables. RESULTS: Across cases, differences in camera control and head-positioning metrics of the principal surgeon were observed when comparing standard and complex cases. Further, OPI differences between the principal surgeon and the fellow(s) were observed in standard cases, including differences in arm swapping, camera control, and clutching behaviors. Differences in monopolar coagulation energy usage were also observed. Differences in the duration of selected surgical steps were observed across complexities and console attributions, and additional surgical task analyses determined the adhesion removal and liver bed hemostasis steps to be the most impactful for case complexity and post-surgical complications, respectively. CONCLUSION: This is the first study to establish the association between OPIs, case complexity, and clinical complications in RC. We identified OPI differences in intra-operative behaviors and post-surgical complications dependent on surgeon expertise and case complexity, opening the door to more standardized assessments of teaching cases, surgical behaviors, and case complexities.


Subjects
Robotic Surgical Procedures, Robotics, Surgeons, Humans, Robotic Surgical Procedures/methods, Cholecystectomy/adverse effects, Surgeons/education
10.
Surg Endosc ; 37(6): 4298-4314, 2023 06.
Article in English | MEDLINE | ID: mdl-37157035

ABSTRACT

BACKGROUND: Annotated data are foundational to applications of supervised machine learning. However, there seems to be a lack of common language used in the field of surgical data science. The aim of this study was to review the process of annotation and the semantics used in the creation of surgical process models (SPMs) for minimally invasive surgery videos. METHODS: For this systematic review, we reviewed articles indexed in the MEDLINE database from January 2000 until March 2022. We selected articles using surgical video annotations to describe a surgical process model in the field of minimally invasive surgery. We excluded studies focusing solely on instrument detection or recognition of anatomical areas. The risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were visually presented in tables using the SPIDER tool. RESULTS: Of the 2806 articles identified, 34 were selected for review. Twenty-two were in the field of digestive surgery, six in ophthalmologic surgery, one in neurosurgery, three in gynecologic surgery, and two in mixed fields. Thirty-one studies (88.2%) were dedicated to phase, step, or action recognition and mainly relied on a very simple formalization (29, 85.2%). Clinical information was lacking in the datasets of studies using available public datasets. The process of annotation for surgical process models was poorly described, and descriptions of the surgical procedures were highly variable between studies. CONCLUSION: Surgical video annotation lacks a rigorous and reproducible framework. This leads to difficulties in sharing videos between institutions and hospitals because of the different languages used. There is a need to develop and use a common ontology to improve libraries of annotated surgical videos.


Subjects
Gynecologic Surgical Procedures, Minimally Invasive Surgical Procedures, Humans, Female, Minimally Invasive Surgical Procedures/methods
11.
Surg Endosc ; 37(8): 6153-6162, 2023 08.
Article in English | MEDLINE | ID: mdl-37145173

ABSTRACT

BACKGROUND: Laparoscopic videos are increasingly being used for surgical artificial intelligence (AI) and big data analysis. The purpose of this study was to ensure data privacy in video recordings of laparoscopic surgery by censoring extraabdominal parts. An inside-outside-discrimination algorithm (IODA) was developed to ensure privacy protection while maximizing the remaining video data. METHODS: IODA's neural network architecture was based on a pretrained AlexNet augmented with a long short-term memory network. The data set for algorithm training and testing contained a total of 100 laparoscopic surgery videos of 23 different operations with a total video length of 207 h (124 ± 100 min per video), resulting in 18,507,217 frames (185,965 ± 149,718 frames per video). Each video frame was tagged either as abdominal cavity, trocar, operation site, outside for cleaning, or translucent trocar. For algorithm testing, a stratified fivefold cross-validation was used. RESULTS: The distribution of annotated classes was abdominal cavity 81.39%, trocar 1.39%, outside operation site 16.07%, outside for cleaning 1.08%, and translucent trocar 0.07%. Algorithm training on binary or all five classes showed similarly excellent results for classifying outside frames, with a mean F1-score of 0.96 ± 0.01 and 0.97 ± 0.01, sensitivity of 0.97 ± 0.02 and 0.97 ± 0.01, and a false positive rate of 0.99 ± 0.01 and 0.99 ± 0.01, respectively. CONCLUSION: IODA is able to discriminate between inside and outside with high certainty. In particular, only a few outside frames are misclassified as inside and therefore at risk of a privacy breach. The anonymized videos can be used for multi-centric development of surgical AI, quality management, or educational purposes. In contrast to expensive commercial solutions, IODA is made open source and can be improved by the scientific community.


Subjects
Artificial Intelligence, Laparoscopy, Humans, Privacy, Laparoscopy/methods, Algorithms, Neural Networks, Computer, Video Recording
12.
Surg Endosc ; 37(10): 7412-7424, 2023 10.
Article in English | MEDLINE | ID: mdl-37584774

ABSTRACT

BACKGROUND: Technical skill assessment in surgery relies on expert opinion. Therefore, it is time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS: A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS: In total, 1958 articles were identified, 50 articles met eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36) with good accuracies in simulated settings. However, all proposed models were in development stage, only 4 studies were externally validated and 8 showed a low RoB. CONCLUSION: AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Therefore, models should be benchmarked using predefined performance metrics and tested in clinical implementation studies.


Subjects
Artificial Intelligence, Minimally Invasive Surgical Procedures, Humans, Academies and Institutes, Benchmarking, Checklist
13.
Surg Endosc ; 37(3): 2070-2077, 2023 03.
Article in English | MEDLINE | ID: mdl-36289088

ABSTRACT

BACKGROUND: Phase and step annotation in surgical videos is a prerequisite for surgical scene understanding and for downstream tasks like intraoperative feedback or assistance. However, most ontologies are applied on small monocentric datasets and lack external validation. To overcome these limitations, an ontology for phases and steps of laparoscopic Roux-en-Y gastric bypass (LRYGB) is proposed and validated on a multicentric dataset in terms of inter- and intra-rater reliability (inter-/intra-RR). METHODS: The proposed LRYGB ontology consists of 12 phase and 46 step definitions that are hierarchically structured. Two board-certified surgeons (raters) with > 10 years of clinical experience applied the proposed ontology on two datasets: (1) StraBypass40, consisting of 40 LRYGB videos from Nouvel Hôpital Civil, Strasbourg, France, and (2) BernBypass70, consisting of 70 LRYGB videos from Inselspital, Bern University Hospital, Bern, Switzerland. To assess inter-RR, the two raters' annotations of ten randomly chosen videos each from StraBypass40 and BernBypass70 were compared. To assess intra-RR, ten randomly chosen videos were annotated twice by the same rater and the annotations were compared. Inter-RR was calculated using Cohen's kappa. Additionally, accuracy, precision, recall, F1-score, and application-dependent metrics were computed for inter- and intra-RR. RESULTS: The mean ± SD video duration was 108 ± 33 min and 75 ± 21 min in StraBypass40 and BernBypass70, respectively. The proposed ontology shows an inter-RR of 96.8 ± 2.7% for phases and 85.4 ± 6.0% for steps on StraBypass40 and 94.9 ± 5.8% for phases and 76.1 ± 13.9% for steps on BernBypass70. The overall Cohen's kappa of inter-RR was 95.9 ± 4.3% for phases and 80.8 ± 10.0% for steps. Intra-RR showed an accuracy of 98.4 ± 1.1% for phases and 88.1 ± 8.1% for steps. CONCLUSION: The proposed ontology shows excellent inter- and intra-RR and should therefore be implemented routinely in phase and step annotation of LRYGB.
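Cohen's kappa, used here for inter-rater reliability, corrects observed agreement for the agreement expected by chance. A minimal sketch with invented frame-level phase labels (not the study's annotations):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' frame-level phase labels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of the raters' marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy phase labels from two raters over ten frames.
a = ["P1", "P1", "P1", "P2", "P2", "P2", "P3", "P3", "P3", "P3"]
b = ["P1", "P1", "P2", "P2", "P2", "P2", "P3", "P3", "P3", "P1"]
print(round(cohens_kappa(a, b), 3))  # 0.701: 80% raw agreement, 33% by chance
```

Because it discounts chance agreement, kappa is a stricter figure than raw percentage agreement, which is why the study reports both.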


Subjects
Gastric Bypass, Laparoscopy, Obesity, Morbid, Humans, Obesity, Morbid/surgery, Reproducibility of Results, Treatment Outcome, Postoperative Complications/surgery
14.
Sensors (Basel) ; 23(8)2023 Apr 11.
Article in English | MEDLINE | ID: mdl-37112231

ABSTRACT

Clinical alarm and decision support systems that lack clinical context may create non-actionable nuisance alarms that are not clinically relevant and can cause distractions during the most difficult moments of a surgery. We present a novel, interoperable, real-time system for adding contextual awareness to clinical systems by monitoring the heart-rate variability (HRV) of clinical team members. We designed an architecture for real-time capture, analysis, and presentation of HRV data from multiple clinicians and implemented this architecture as an application and device interfaces on the open-source OpenICE interoperability platform. In this work, we extend OpenICE with new capabilities to support the needs of the context-aware operating room, including a modularized data pipeline for simultaneously processing real-time electrocardiographic (ECG) waveforms from multiple clinicians to create estimates of their individual cognitive load. The system is built with standardized interfaces that allow for the free interchange of software and hardware components, including sensor devices, ECG filtering and beat detection algorithms, HRV metric calculations, and individual and team alerts based on changes in metrics. By integrating contextual cues and team member state into a unified process model, we believe future clinical applications will be able to emulate some of these behaviors to provide context-aware information that improves the safety and quality of surgical interventions.
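Two standard time-domain HRV metrics such a pipeline might compute from detected beat-to-beat (RR) intervals are SDNN and RMSSD. The sketch below uses invented RR values and is not the OpenICE implementation:

```python
from math import sqrt

def sdnn(rr_ms):
    """Standard deviation of RR (beat-to-beat) intervals, in ms."""
    mean = sum(rr_ms) / len(rr_ms)
    return sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d ** 2 for d in diffs) / len(diffs))

# Toy RR series (ms) from detected ECG beats; reduced variability is the
# usual proxy for increased cognitive load in systems like the one described.
rr = [812, 790, 804, 776, 798, 810]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # 12.4 20.5
```

A real pipeline would compute these over sliding windows per clinician and feed the trend, not a single value, into the alerting logic.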


Subjects
Algorithms, Software, Monitoring, Physiologic, Heart Rate Determination, Cognition
15.
Sensors (Basel) ; 23(6)2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36991854

ABSTRACT

The direct tactile assessment of surface textures during palpation is an essential component of open surgery that is impeded in minimally invasive and robot-assisted surgery. When indirectly palpating with a surgical instrument, the structural vibrations from this interaction contain tactile information that can be extracted and analysed. This study investigates the influence of the contact angle α and palpation velocity v on the vibro-acoustic signals from this indirect palpation. A 7-DOF robotic arm, a standard surgical instrument, and a vibration measurement system were used to palpate three different materials with varying α and v. The signals were processed based on the continuous wavelet transform. They showed material-specific signatures in the time-frequency domain that retained their general characteristics for varying α and v. Energy-related and statistical features were extracted, and supervised classification was performed, where the testing data comprised only signals acquired with different palpation parameters than the training data. The support vector machine and k-nearest neighbours classifiers provided 99.67% and 96.00% accuracy, respectively, for the differentiation of the materials. The results indicate the robustness of the features against variations in the palpation parameters. This is a prerequisite for application in minimally invasive surgery but needs to be confirmed in realistic experiments with biological tissues.
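The k-nearest-neighbours classification step can be illustrated with a minimal sketch; the 2-D features and material labels below are invented stand-ins for the energy-related and statistical features used in the study:

```python
from math import dist
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote over the k nearest (feature_vector, material) pairs."""
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2-D features (e.g. signal energy, spectral centroid) per material.
train = [((0.20, 1.0), "silicone"), ((0.25, 1.1), "silicone"),
         ((0.80, 3.0), "wood"),     ((0.85, 2.9), "wood"),
         ((0.50, 2.0), "foam"),     ((0.55, 2.1), "foam")]
print(knn_predict(train, (0.22, 1.05)))  # silicone
```

The study's train/test split across different α and v settings is the important part: a classifier that still votes correctly on features from unseen palpation parameters demonstrates the robustness the authors report.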


Subjects
Robotic Surgical Procedures , Robotics , Robotic Surgical Procedures/methods , Robotics/methods , Touch , Minimally Invasive Surgical Procedures/methods , Palpation , Acoustics
16.
Sensors (Basel) ; 23(23)2023 Nov 21.
Article in English | MEDLINE | ID: mdl-38067671

ABSTRACT

This article provides a comprehensive analysis of the feature extraction methods applied to vibro-acoustic signals (VA signals) in the context of robot-assisted interventions. The primary objective is to extract valuable information from these signals to understand tissue behaviour better and build upon prior research. This study is divided into three key stages: feature extraction using the Cepstrum Transform (CT), Mel-Frequency Cepstral Coefficients (MFCCs), and Fast Chirplet Transform (FCT); dimensionality reduction employing techniques such as Principal Component Analysis (PCA), t-Distributed Stochastic Neighbour Embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP); and, finally, classification using a nearest neighbours classifier. The results demonstrate that using feature extraction techniques, especially the combination of CT and MFCC with dimensionality reduction algorithms, yields highly efficient outcomes. The classification metrics (Accuracy, Recall, and F1-score) approach 99%, and the clustering metric is 0.61. The performance of the CT-UMAP combination stands out in the evaluation metrics.
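The Cepstrum Transform step mentioned above can be sketched as the inverse DFT of the log power spectrum. The naive DFT below is for illustration only (a real implementation would use an FFT library), and the small epsilon guard against log(0) is an assumption, not a detail from the article.

```python
# Sketch of the real cepstrum of a signal: IDFT of the log power
# spectrum. Naive O(n^2) DFT for self-containment; use an FFT library
# in practice. The 1e-12 epsilon avoiding log(0) is an assumption.
from cmath import exp, pi, log

def dft(x, inverse=False):
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * exp(sign * 2j * pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def real_cepstrum(signal):
    spectrum = dft(signal)
    log_power = [log(abs(v) ** 2 + 1e-12) for v in spectrum]
    return [v.real for v in dft(log_power, inverse=True)]

c = real_cepstrum([0.0, 1.0, 0.0, -1.0])
print(c)
```

The resulting cepstral coefficients (or MFCCs, which add a mel filter bank and a cosine transform) would then be fed to the dimensionality-reduction stage (PCA, t-SNE, or UMAP) before nearest-neighbours classification.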


Subjects
Robotics , Algorithms , Acoustics , Cluster Analysis , Principal Component Analysis
17.
J Biomed Inform ; 136: 104240, 2022 12.
Article in English | MEDLINE | ID: mdl-36368631

ABSTRACT

BACKGROUND: Surgical context-aware systems can adapt to the current situation in the operating room and thus provide computer-aided assistance functionalities and intraoperative decision support. To interact with the surgical team perceptively and assist the surgical process, the system needs to monitor intraoperative activities, understand the current situation in the operating room at any time, and anticipate possible subsequent situations. METHODS: A structured representation of surgical process knowledge is a prerequisite for any application in the intelligent operating room. For this purpose, a surgical process ontology, formally based on standard medical terminology (SNOMED CT) and an upper-level ontology (GFO), was developed and instantiated for a neurosurgical use case. A new ontology-based surgical workflow recognition method and a novel prediction method are presented, utilizing ontological reasoning, abstraction, and explication. In this way, a surgical situation representation with combined phase, high-level task, and low-level task recognition and prediction was realized, based on the currently used instrument as the only input information. RESULTS: The ontology-based approach performed efficiently, and good accuracy was achieved for situation recognition and prediction. During situation recognition in particular, missing sensor information was inferred from the situation representation provided by the process ontology, which improved recognition results compared to the state of the art. CONCLUSIONS: In this work, a reference ontology was developed that provides workflow support and a knowledge base for further applications in the intelligent operating room, for instance context-aware medical device orchestration, (semi-)automatic documentation, and surgical simulation, education, and training.
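A toy, rule-based sketch of the core idea: recognize the phase from the currently used instrument as the only input, and predict the next phase from a linear process model. The instrument and phase names are hypothetical and vastly simpler than the SNOMED CT/GFO-based ontology described in the article.

```python
# Toy instrument-driven situation recognition and prediction.
# A real system would reason over an ontology; here a dictionary maps
# instrument -> phase and a fixed phase order serves as process model.
# All names are hypothetical.
PHASE_OF_INSTRUMENT = {
    "scalpel": "opening",
    "drill": "approach",
    "dissector": "resection",
    "suture": "closure",
}
PHASE_ORDER = ["opening", "approach", "resection", "closure"]

def recognize_and_predict(instrument):
    """Return (current phase, predicted next phase or None)."""
    phase = PHASE_OF_INSTRUMENT[instrument]
    i = PHASE_ORDER.index(phase)
    nxt = PHASE_ORDER[i + 1] if i + 1 < len(PHASE_ORDER) else None
    return phase, nxt

print(recognize_and_predict("drill"))
```

The ontological reasoning in the article goes further than this lookup, e.g. inferring missing sensor information and combining phase with high- and low-level task recognition, but the input/output shape is the same: instrument in, situation and prediction out.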


Subjects
Knowledge Bases , Operating Rooms , Workflow , Computer Simulation
18.
Surg Endosc ; 36(11): 8568-8591, 2022 11.
Article in English | MEDLINE | ID: mdl-36171451

ABSTRACT

BACKGROUND: Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analysing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics. METHODS: We defined Surgomics as the entirety of surgomic features, i.e. process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources, such as endoscopic videos, vital-sign monitoring, and medical devices and instruments, together with the respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers to rate the features' clinical relevance and technical feasibility. RESULTS: In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants), the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance", both for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) and for long-term (oncological) outcome (8.2 ± 1.8). The feature category rated most feasible to extract automatically by (computer) scientists was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective categories were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION: Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality, and long-term outcome, as well as to provide tailored feedback for surgeons.
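The survey summaries above (e.g. 9.0 ± 1.3) are means and sample standard deviations of expert ratings on a 1-10 scale. A minimal sketch of that aggregation, using made-up ratings rather than the study's data:

```python
# Sketch of per-category survey aggregation: mean +/- sample standard
# deviation of expert ratings on a 1-10 numerical rating scale.
# The ratings below are invented for illustration.
from math import sqrt

def mean_sd(ratings):
    n = len(ratings)
    mean = sum(ratings) / n
    sd = sqrt(sum((r - mean) ** 2 for r in ratings) / (n - 1))
    return round(mean, 1), round(sd, 1)

print(mean_sd([9, 10, 8, 9, 7, 10, 9]))
```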


Subjects
Machine Learning , Surgeons , Humans , Morbidity
19.
Surg Endosc ; 36(10): 7444-7452, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35266049

ABSTRACT

BACKGROUND: Surgical process modeling automatically identifies surgical phases, and further improvements in recognition accuracy are expected with deep learning. Surgical tool or time-series information has been used to improve the recognition accuracy of such models; however, it is difficult to collect this information continuously during surgery. The present study aimed to develop a deep convolutional neural network (CNN) model that correctly identifies the surgical phase during laparoscopic cholecystectomy (LC). METHODS: We divided LC into six surgical phases (P1-P6) and one redundant phase (P0). We prepared 115 LC videos and converted them to image frames at 3 fps. Three experienced doctors labeled the surgical phase in all image frames. Our deep CNN model was trained with 106 of the 115 annotated datasets and evaluated with the remaining datasets. By considering both the prediction probability and the frequency of predicted phases over a certain period, we aimed for highly accurate surgical phase recognition in the operating room. RESULTS: Nine full LC videos were converted into image frames and fed to our deep CNN model. The average accuracy, precision, and recall were 0.970, 0.855, and 0.863, respectively. CONCLUSION: The deep CNN model in this study successfully identified both the six surgical phases and the redundant phase, P0, which may increase the versatility of the surgical process recognition model for clinical use. We believe that this model can be used in artificial intelligence for medical devices. The recognition accuracy is expected to improve further with developments in advanced deep learning algorithms.
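The idea of combining prediction probability and frequency over a period can be sketched as a thresholded sliding-window majority vote over per-frame CNN outputs. The window size, probability threshold, and labels below are illustrative assumptions, not the study's actual post-processing.

```python
# Sketch: stabilize per-frame phase predictions by keeping only
# high-probability frames and taking a sliding-window majority vote.
# Window size and threshold are illustrative assumptions.
from collections import Counter, deque

def smooth_phases(frame_preds, window=5, min_prob=0.6):
    # frame_preds: list of (phase_label, probability) per frame
    recent, out, current = deque(maxlen=window), [], "P0"
    for label, prob in frame_preds:
        if prob >= min_prob:                 # drop low-confidence frames
            recent.append(label)
        if recent:
            candidate, count = Counter(recent).most_common(1)[0]
            if count > len(recent) // 2:     # strict majority in window
                current = candidate
        out.append(current)
    return out

preds = [("P1", 0.9), ("P2", 0.4), ("P1", 0.8), ("P2", 0.95), ("P2", 0.9)]
print(smooth_phases(preds, window=3))
```

A single low-confidence or transient misprediction thus cannot flip the recognized phase, which matters when the output drives anything in the operating room.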


Subjects
Artificial Intelligence , Cholecystectomy, Laparoscopic , Algorithms , Humans , Neural Networks, Computer , Software
20.
Surg Endosc ; 36(11): 8379-8386, 2022 11.
Article in English | MEDLINE | ID: mdl-35171336

ABSTRACT

BACKGROUND: A computer vision (CV) platform named EndoDigest was recently developed to facilitate the use of surgical videos. Specifically, EndoDigest automatically provides short video clips to effectively document the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). The aim of the present study was to validate EndoDigest on a multicentric dataset of LC videos. METHODS: LC videos from 4 centers were manually annotated with the time of the cystic duct division and an assessment of the CVS criteria. Incomplete recordings, bailout procedures, and procedures with an intraoperative cholangiogram were excluded. EndoDigest leveraged predictions of deep learning models for workflow analysis in a rule-based inference system designed to estimate the time of the cystic duct division. Performance was assessed by computing the error in estimating the manually annotated time of the cystic duct division. To provide concise video documentation of CVS, EndoDigest extracted video clips showing the 2 min preceding and the 30 s following the predicted cystic duct division. The relevance of the documentation was evaluated by assessing CVS in the automatically extracted 2.5-min-long video clips. RESULTS: 144 of the 174 LC videos from the 4 centers were analyzed. EndoDigest located the time of the cystic duct division with a mean error of 124.0 ± 270.6 s, despite the use of fluorescent cholangiography in 27 procedures and great variation in surgical workflows across centers. The surgical evaluation found that 108 (75.0%) of the automatically extracted short video clips documented CVS effectively. CONCLUSIONS: EndoDigest was robust enough to reliably locate the time of the cystic duct division and to provide efficient video documentation of CVS despite the highly variable workflows. Training specifically on data from each center could improve results; however, this multicentric validation shows the potential for clinical translation of this surgical data science tool to efficiently document surgical safety.
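The clip-extraction rule described above (2 min before to 30 s after the predicted cystic duct division) can be sketched as a simple frame-range computation, clamped to the video bounds. The function and parameter names, and the 25 fps default, are illustrative assumptions, not EndoDigest internals.

```python
# Sketch of EndoDigest-style clip extraction: given a predicted event
# time, return the (start_frame, end_frame) range covering the 2 min
# before and 30 s after it, clamped to the video. Names and the 25 fps
# default are illustrative assumptions.
def clip_frames(event_time_s, video_len_s, fps=25,
                before_s=120, after_s=30):
    start = max(0.0, event_time_s - before_s)       # clamp at video start
    end = min(video_len_s, event_time_s + after_s)  # clamp at video end
    return int(start * fps), int(end * fps)

print(clip_frames(event_time_s=1500, video_len_s=2400))
```

The quality of the documentation then hinges entirely on how accurately the event time is predicted, which is why the mean localization error is the study's central metric.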


Subjects
Cholecystectomy, Laparoscopic , Humans , Cholecystectomy, Laparoscopic/methods , Video Recording , Cholangiography , Documentation , Computers