1.
Surg Endosc; 37(11): 8577-8593, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37833509

ABSTRACT

BACKGROUND: With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial but still a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS: To establish a process for the development of surgomic features, ten video-based features related to bleeding, a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset consisting of 22 videos from two centers. RESULTS: In total, 14,004 frames were tag-annotated. A mean F1-score of 0.75 ± 0.16 was achieved across all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater agreement (1-rater kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments, with a significant difference in the McNemar test comparing correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION: We presented ten surgomic features relevant for bleeding events in esophageal surgery, automatically extracted from surgical video using ML. AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.
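
To illustrate the active-learning idea described in this abstract, the following is a minimal sketch of uncertainty-driven frame selection with an MC-dropout ResNet18, contrasted with equidistant sampling. The model, dropout placement, feature count, and tensors are illustrative assumptions, not the authors' published pipeline.

```python
# Minimal sketch (not the authors' published pipeline): uncertainty-driven
# active-learning frame selection with an MC-dropout ResNet18 versus an
# equidistant sampling (EQS) baseline. Model, dropout placement, and data
# are illustrative assumptions.
import torch
import torchvision

def enable_mc_dropout(model):
    """Keep dropout stochastic at inference time while BatchNorm stays in eval mode."""
    model.eval()
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()

def predictive_entropy(model, frames, n_passes=10):
    """Per-frame mean binary entropy over Monte-Carlo dropout passes."""
    enable_mc_dropout(model)
    with torch.no_grad():
        probs = torch.stack(
            [torch.sigmoid(model(frames)) for _ in range(n_passes)]
        ).mean(dim=0)                                    # (N, num_features)
    eps = 1e-8
    entropy = -(probs * (probs + eps).log() + (1 - probs) * (1 - probs + eps).log())
    return entropy.mean(dim=1)                           # one uncertainty score per frame

def select_frames_al(model, frames, budget):
    """Active learning: propose the most uncertain frames for expert annotation."""
    return predictive_entropy(model, frames).topk(budget).indices

def select_frames_eqs(num_frames, budget):
    """Equidistant sampling baseline: evenly spaced frame indices."""
    return torch.linspace(0, num_frames - 1, steps=budget).long()

# Multi-label head for ten bleeding-related surgomic features, with dropout
# inserted so that MC-dropout uncertainty can be estimated.
model = torchvision.models.resnet18(num_classes=10)
model.fc = torch.nn.Sequential(torch.nn.Dropout(p=0.5), model.fc)

frames = torch.randn(64, 3, 224, 224)                    # placeholder video frames
print(select_frames_al(model, frames, budget=8))
print(select_frames_eqs(len(frames), budget=8))
```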


Subjects
Esophagectomy, Robotics, Humans, Bayes Theorem, Esophagectomy/methods, Machine Learning, Minimally Invasive Surgical Procedures/methods, Prospective Studies
2.
Surg Endosc; 36(11): 8568-8591, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36171451

ABSTRACT

BACKGROUND: Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analyzing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics. METHODS: We defined Surgomics as the entirety of surgomic features, i.e., process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team, we discussed potential data sources such as endoscopic videos, vital sign monitoring, and medical devices and instruments, together with the respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers to rate the features' clinical relevance and technical feasibility. RESULTS: In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants), the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance", both for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) and for long-term (oncological) outcome (8.2 ± 1.8). The feature category rated by (computer) scientists as most feasible to extract automatically was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION: Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality, and long-term outcome, as well as to provide tailored feedback for surgeons.
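
As a small illustration of how the expert ratings reported above can be aggregated into the mean ± SD values per feature category, here is a minimal sketch; the category names and individual ratings are placeholders, not the study data.

```python
# Minimal sketch: aggregating expert survey ratings (numerical rating scale
# 1-10) into mean ± SD per feature category. Placeholder ratings only.
from statistics import mean, stdev

clinical_relevance = {
    "surgical skill and quality of performance": [9, 10, 8, 9, 9],
    "Instrument": [8, 7, 9, 8, 8],
}

for category, ratings in clinical_relevance.items():
    print(f"{category}: {mean(ratings):.1f} ± {stdev(ratings):.1f}")
```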


Subjects
Machine Learning, Surgeons, Humans, Morbidity
3.
Sci Rep; 13(1): 7506, 2023 May 9.
Article in English | MEDLINE | ID: mdl-37161007

ABSTRACT

Clinically relevant postoperative pancreatic fistula (CR-POPF) can significantly affect the treatment course and outcome in pancreatic cancer patients. Preoperative prediction of CR-POPF can aid the surgical decision-making process and lead to better perioperative management of patients. In this retrospective study of 108 pancreatic head resection patients, we present risk models for the prediction of CR-POPF that use combinations of preoperative computed tomography (CT)-based radiomic features, mesh-based volumes of annotated intra- and peripancreatic structures, and preoperative clinical data. The risk signatures were evaluated and analysed in detail by visualising feature expression maps and by comparing significant features to established CR-POPF risk measures. Of the risk models developed in this study, the combined radiomic and clinical signature performed best, with an average area under the receiver operating characteristic curve (AUC) of 0.86 and a balanced accuracy score of 0.76 on validation data. The following preoperative features showed significant correlation with outcome in this signature ([Formula: see text]): texture and morphology of the healthy pancreatic segment, an intensity-volume-histogram-based feature of the pancreatic duct segment, morphology of the combined segment, and BMI. The predictions of this preoperative signature showed strong correlation (Spearman correlation coefficient, [Formula: see text]) with the intraoperative updated alternative fistula risk score (ua-FRS), the clinical gold standard for intraoperative CR-POPF risk stratification. These results indicate that the proposed combined radiomic and clinical signature, developed solely on preoperatively available clinical and routine imaging data, can perform on par with current state-of-the-art intraoperative models for CR-POPF risk stratification.
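
The following is a minimal sketch of how a combined radiomic and clinical signature can be trained and evaluated with ROC-AUC, balanced accuracy, and a Spearman check against an intraoperative score. The data, feature counts, and model choice (logistic regression) are synthetic assumptions, not the authors' published model.

```python
# Minimal sketch (synthetic data, not the authors' model): combined
# radiomic + clinical risk model for CR-POPF, evaluated by ROC-AUC and
# balanced accuracy, plus a Spearman correlation against a placeholder
# intraoperative score standing in for the ua-FRS.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 108
radiomic = rng.normal(size=(n, 20))            # placeholder CT radiomic features
clinical = rng.normal(size=(n, 3))             # placeholder clinical data (e.g., BMI)
X = np.hstack([radiomic, clinical])
y = rng.integers(0, 2, size=n)                 # CR-POPF yes/no (synthetic labels)

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_val)[:, 1]
print("AUC:", roc_auc_score(y_val, prob))
print("Balanced accuracy:", balanced_accuracy_score(y_val, model.predict(X_val)))

ua_frs = rng.uniform(0, 100, size=len(y_val))  # placeholder intraoperative score
rho, p = spearmanr(prob, ua_frs)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```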


Subjects
Pancreatic Fistula, Pancreatic Neoplasms, Humans, Pancreatic Fistula/diagnostic imaging, Pancreatic Fistula/etiology, Retrospective Studies, Pancreas/diagnostic imaging, Pancreas/surgery, Postoperative Complications/diagnostic imaging, Postoperative Complications/etiology, Pancreatic Neoplasms/diagnostic imaging, Pancreatic Neoplasms/surgery
4.
Med Image Anal; 86: 102803, 2023 May.
Article in English | MEDLINE | ID: mdl-37004378

ABSTRACT

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps, or events, leaving out the fine-grained interaction details of the surgical activity; yet these are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as ⟨instrument, verb, target⟩ triplets delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented for recognizing surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
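
To make the evaluation and ensembling ideas above concrete, here is a minimal sketch of frame-level mean average precision (mAP) for multi-label triplet recognition together with a naive score-averaging ensemble of two models. The predictions are synthetic and this is not the official challenge evaluation code; the assumed 100 triplet classes and the averaging scheme are illustrative choices.

```python
# Minimal sketch (synthetic predictions, not the official challenge scorer):
# frame-level mAP for multi-label surgical action triplet recognition and a
# naive probability-averaging ensemble of two models.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_frames, n_triplets = 200, 100                # 100 triplet classes assumed
y_true = rng.integers(0, 2, size=(n_frames, n_triplets))
pred_a = rng.random((n_frames, n_triplets))    # scores from model A
pred_b = rng.random((n_frames, n_triplets))    # scores from model B

def mean_ap(y_true, y_score):
    """Average precision per triplet class, averaged over classes with positives."""
    aps = [average_precision_score(y_true[:, k], y_score[:, k])
           for k in range(y_true.shape[1]) if y_true[:, k].any()]
    return float(np.mean(aps))

ensemble = (pred_a + pred_b) / 2               # simple score-averaging ensemble
for name, score in [("model A", pred_a), ("model B", pred_b), ("ensemble", ensemble)]:
    print(name, round(mean_ap(y_true, score), 3))
```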


Subjects
Benchmarking, Laparoscopy, Humans, Algorithms, Operating Rooms, Workflow, Deep Learning