Results 1 - 8 of 8
1.
Surg Endosc ; 37(11): 8577-8593, 2023 11.
Article in English | MEDLINE | ID: mdl-37833509

ABSTRACT

BACKGROUND: With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial but still a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS: To establish a process for the development of surgomic features, ten video-based features related to bleeding, as a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset consisting of 22 videos from two centers. RESULTS: In total, 14,004 frames were tag-annotated. A mean F1-score of 0.75 ± 0.16 was achieved across all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments, with a significant difference in the McNemar test comparing the correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION: We presented ten surgomic features relevant to bleeding events in esophageal surgery, automatically extracted from surgical video using ML. AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.
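The frame-selection idea can be illustrated with a short, hedged sketch: a ResNet18 classifier with a dropout head is run several times per frame (Monte Carlo dropout, one common way to obtain a "Bayesian" ResNet18), and the most uncertain frames are chosen for expert annotation, while the EQS baseline simply picks evenly spaced frames. All names (the dropout head, `select_frames_al`, `budget`) are illustrative assumptions, not the published implementation.

```python
# Illustrative sketch only: uncertainty-based frame selection for active
# learning (AL) versus equidistant sampling (EQS). The model head, function
# names and the budget parameter are assumptions, not the released code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

def build_mc_dropout_resnet18(num_classes: int) -> nn.Module:
    # A dropout layer before the classifier makes repeated forward passes
    # stochastic, approximating a Bayesian predictive distribution.
    model = resnet18(weights=None)
    model.fc = nn.Sequential(nn.Dropout(p=0.5), nn.Linear(512, num_classes))
    return model

def predictive_entropy(model: nn.Module, frames: torch.Tensor, passes: int = 10) -> torch.Tensor:
    model.train()  # keep dropout active at inference time (MC dropout)
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(frames), dim=1) for _ in range(passes)]
        ).mean(dim=0)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)

def select_frames_al(model: nn.Module, frames: torch.Tensor, budget: int) -> torch.Tensor:
    # AL: send the frames the model is least certain about to the annotators.
    return predictive_entropy(model, frames).topk(budget).indices

def select_frames_eqs(num_frames: int, budget: int) -> torch.Tensor:
    # EQS baseline: annotate evenly spaced frames.
    return torch.linspace(0, num_frames - 1, budget).long()
```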


Subjects
Esophagectomy, Robotics, Humans, Bayes Theorem, Esophagectomy/methods, Machine Learning, Minimally Invasive Surgical Procedures/methods, Prospective Studies
2.
Med Image Anal ; 86: 102770, 2023 05.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision Challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS: F1-scores ranged between 23.9% and 67.7% for phase recognition (n = 9 teams) and between 38.5% and 63.8% for instrument presence detection (n = 8 teams), but only between 21.8% and 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
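For orientation, framewise phase-recognition scores of this kind can in principle be computed with a few lines: a macro-averaged F1-score over the phases per video, then averaged across videos. The array names below are placeholders, not the challenge's evaluation toolkit.

```python
# Hedged sketch of a framewise, macro-averaged F1 evaluation for surgical
# phase recognition; inputs are placeholder arrays, not the HeiChole toolkit.
import numpy as np
from sklearn.metrics import f1_score

def video_f1(true_phases: np.ndarray, predicted_phases: np.ndarray) -> float:
    # One label per frame, e.g. seven possible phases; macro-averaging
    # weights short and long phases equally.
    return f1_score(true_phases, predicted_phases, average="macro")

def benchmark_f1(videos: list[tuple[np.ndarray, np.ndarray]]) -> float:
    # Average the per-video scores so long operations do not dominate.
    return float(np.mean([video_f1(t, p) for t, p in videos]))
```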


Subjects
Artificial Intelligence, Benchmarking, Humans, Workflow, Algorithms, Machine Learning
3.
Surg Endosc ; 36(11): 8568-8591, 2022 11.
Article in English | MEDLINE | ID: mdl-36171451

ABSTRACT

BACKGROUND: Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analyzing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics. METHODS: We defined Surgomics as the entirety of surgomic features, i.e., process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team, we discussed potential data sources such as endoscopic videos, vital sign monitoring, medical devices and instruments, and the respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers to rate the features' clinical relevance and technical feasibility. RESULTS: In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants), the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance", both for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) and for long-term (oncological) outcome (8.2 ± 1.8). The feature category rated by (computer) scientists as most feasible to extract automatically was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION: Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.


Subjects
Machine Learning, Surgeons, Humans, Morbidity
4.
Med Image Anal ; 76: 102306, 2022 02.
Article in English | MEDLINE | ID: mdl-34879287

ABSTRACT

Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.


Subjects
Data Science, Machine Learning, Humans
5.
Minim Invasive Ther Allied Technol ; 31(1): 34-41, 2022 Jan.
Article in English | MEDLINE | ID: mdl-32491933

ABSTRACT

INTRODUCTION: The methods employed to document cystoscopic findings in bladder cancer patients lack accuracy and are subject to observer variability. We propose a novel endoimaging system and an online documentation platform that provide post-procedural 3D bladder reconstructions for improved diagnosis, management and follow-up. MATERIAL AND METHODS: The RaVeNNA4pi consortium comprises five industrial partners, two university hospitals and two technical institutes, grouped into hardware, software and clinical partners according to their professional expertise. The envisaged endoimaging system consists of an innovative cystoscope that generates 3D bladder reconstructions, allowing users to remotely access a cloud-based centralized database and visualize individualized 3D bladder models from previous cystoscopies archived in DICOM format. RESULTS: Preliminary investigations successfully tracked the endoscope's rotational and translational movements. The structure-from-motion pipeline was tested in a bladder phantom and satisfactorily demonstrated 3D reconstructions of the processing sequence. AI-based semantic image segmentation achieved a Dice score coefficient of 0.67 over all classes. An online platform allows physicians and patients to digitally visualize endoscopic findings by navigating a 3D bladder model. CONCLUSIONS: Our work demonstrates the current development of a novel endoimaging system with the potential to generate 3D bladder reconstructions from cystoscopy videos and AI-assisted automated detection of bladder tumors.
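As a point of reference, the Dice score coefficient mentioned above measures the overlap between predicted and ground-truth segmentations. A minimal per-class sketch, with hypothetical label-map inputs, looks like this:

```python
# Minimal sketch of a multi-class Dice score coefficient; `pred` and `target`
# are label maps of equal shape with one class index per pixel (assumption).
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    scores = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        denom = p.sum() + t.sum()
        if denom == 0:
            continue  # class absent in both masks: skip rather than divide by zero
        scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores))
```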


Subjects
Urinary Bladder Neoplasms, Cystoscopy, Humans, Computer-Assisted Image Processing, Three-Dimensional Imaging, Urinary Bladder/diagnostic imaging, Urinary Bladder Neoplasms/diagnostic imaging
6.
Sci Data ; 8(1): 101, 2021 04 12.
Article in English | MEDLINE | ID: mdl-33846356

ABSTRACT

Image-based tracking of medical instruments is an integral part of surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the proposed methods still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on method robustness and generalization capabilities. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all video frames as well as information on instrument presence and corresponding instance-wise segmentation masks for surgical instruments (if any) in more than 10,000 individual frames. The data has successfully been used to organize international competitions within the Endoscopic Vision Challenges 2017 and 2019.
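Instance-wise segmentation masks are typically evaluated by matching predicted instrument instances to ground-truth instances via their overlap. The following is a hedged sketch of such a greedy IoU matching step; the mask format and threshold are assumptions, not the HeiCo data layout or the official challenge metric.

```python
# Hypothetical sketch: greedy matching of predicted and ground-truth instrument
# instance masks by IoU. Masks are boolean arrays of equal shape (assumption).
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def match_instances(pred_masks, gt_masks, iou_threshold: float = 0.5):
    """Greedy one-to-one matching; returns (pred_idx, gt_idx, iou) triples."""
    matches, used = [], set()
    for i, p in enumerate(pred_masks):
        best_j, best_iou = None, iou_threshold
        for j, g in enumerate(gt_masks):
            if j in used:
                continue
            iou = mask_iou(p, g)
            if iou >= best_iou:
                best_j, best_iou = j, iou
        if best_j is not None:
            used.add(best_j)
            matches.append((i, best_j, best_iou))
    return matches
```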


Subjects
Sigmoid Colon/surgery, Restorative Proctocolectomy/instrumentation, Rectum/surgery, Surgical Navigation Systems, Data Science, Humans, Laparoscopy
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5649-5652, 2020 07.
Article in English | MEDLINE | ID: mdl-33019258

ABSTRACT

To translate recent advances in medical device interoperability research into clinical practice, standards are being developed that specify precise requirements for the network representation of particular medical devices connecting through ISO/IEEE 11073 SDC. The present contribution supplements this protocol standard with specific models for endoscopic camera systems, light sources, insufflators, and pumps. Established through industry consensus, these new standards provide modular means to describe the devices' capabilities and modes of interaction in a service-oriented medical device communication architecture. This enables seamless data exchange and opens the potential for new assistive systems to support the caregiver.
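To make the idea of a modular capability description more concrete, the sketch below models a device and its remotely readable or settable metrics as plain data structures. This is purely illustrative: the field names and values are assumptions and do not reproduce the BICEPS/MDIB schema of ISO/IEEE 11073 SDC, the new particular standards, or any vendor implementation.

```python
# Purely illustrative sketch of a modular, service-oriented device description
# in the spirit of ISO/IEEE 11073 SDC; all field names and values are assumptions.
from dataclasses import dataclass, field

@dataclass
class Metric:
    handle: str   # stable identifier used to address the value over the network
    unit: str
    access: str   # e.g. "read" for measurements, "set" for remotely controllable targets

@dataclass
class DeviceDescription:
    device_type: str   # e.g. "insufflator", "endoscopic camera", "light source", "pump"
    manufacturer: str
    metrics: list[Metric] = field(default_factory=list)

# Hypothetical insufflator description a consuming system could discover and query.
insufflator = DeviceDescription(
    device_type="insufflator",
    manufacturer="ExampleMed",  # fictitious manufacturer
    metrics=[
        Metric("current_pressure", "mmHg", "read"),
        Metric("target_pressure", "mmHg", "set"),
        Metric("gas_flow", "l/min", "read"),
    ],
)
```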


Subjects
Endoscopy
8.
Int J Comput Assist Radiol Surg ; 14(6): 1089-1095, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30968352

ABSTRACT

PURPOSE: The course of surgical procedures is often unpredictable, making it difficult to estimate their duration beforehand. This uncertainty makes scheduling surgical procedures a difficult task. A context-aware method that analyzes the workflow of an intervention online and automatically predicts the remaining duration would alleviate these problems. As the basis for such an estimate, information regarding the current state of the intervention is required. METHODS: Today, the operating room contains a diverse range of sensors. During laparoscopic interventions, the endoscopic video stream is an ideal source of such information. Extracting quantitative information from the video is challenging, though, due to its high dimensionality. Other surgical devices (e.g., insufflator, lights, etc.) provide data streams which are, in contrast to the video stream, more compact and easier to quantify. However, whether such streams offer sufficient information for estimating the duration of surgery is uncertain. In this paper, we propose and compare methods, based on convolutional neural networks, for continuously predicting the duration of laparoscopic interventions from unlabeled data, such as endoscopic images and surgical device streams. RESULTS: The methods are evaluated on 80 recorded laparoscopic interventions of various types, for which surgical device data and the endoscopic video streams are available. The combined method performs best, with an overall average error of 37% and an average halftime error of approximately 28%. CONCLUSION: In this paper, we present, to our knowledge, the first approach for online procedure duration prediction using unlabeled endoscopic video data and surgical device data in a laparoscopic setting. Furthermore, we show that a method incorporating both vision and device data performs better than methods based only on vision, while methods based only on tool usage and surgical device data perform poorly, showing the importance of the visual channel.
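The reported numbers can be read as relative errors of the predicted remaining duration. Below is a hedged sketch of such a metric, averaged over a whole procedure and evaluated at its halftime point; the exact definition used in the paper may differ, and all variable names are illustrative.

```python
# Hedged sketch of relative-error metrics for online remaining-duration
# prediction; the paper's exact metric definition is not reproduced here.
import numpy as np

def remaining_duration_errors(predicted_remaining, elapsed, total_duration):
    """predicted_remaining[t] is the model output after elapsed[t] minutes of surgery."""
    elapsed = np.asarray(elapsed, dtype=float)
    true_remaining = total_duration - elapsed
    rel_error = np.abs(np.asarray(predicted_remaining) - true_remaining) / total_duration
    overall_error = rel_error.mean()                                   # averaged over the procedure
    halftime_idx = np.argmin(np.abs(elapsed - total_duration / 2))     # frame closest to halftime
    return overall_error, rel_error[halftime_idx]
```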


Subjects
Laparoscopy, Operative Time, Workflow, Humans, Neural Networks (Computer), Operating Rooms