Results 1 - 6 of 6
1.
Med Image Anal; 86: 102770, 2023 May.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open, single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, a dataset of 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, in the sub-challenge for surgical workflow and skill analysis, where 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS: F1-scores for phase recognition ranged between 23.9% and 67.7% (n = 9 teams), for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
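The comparison above summarizes frame-wise phase recognition with F1-scores. A minimal sketch of how such a per-frame evaluation can be computed; the labels, video length and macro averaging are illustrative assumptions, not the HeiChole evaluation code:

```python
# Sketch of frame-wise phase-recognition evaluation with an F1-score.
# Ground truth and predictions are random placeholders (assumption).
import numpy as np
from sklearn.metrics import f1_score

N_PHASES = 7  # seven surgical phases, per the dataset description

rng = np.random.default_rng(0)
ground_truth = rng.integers(0, N_PHASES, size=5000)  # one label per frame
predictions = rng.integers(0, N_PHASES, size=5000)

# Macro-averaged F1 over the seven phases (averaging scheme is an assumption)
print(f1_score(ground_truth, predictions, average="macro"))
```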


Subjects
Artificial Intelligence, Benchmarking, Humans, Workflow, Algorithms, Machine Learning
2.
Sci Rep; 12(1): 9990, 2022 Jun 15.
Article in English | MEDLINE | ID: mdl-35705550

ABSTRACT

Machine learning-based risk prediction models have the potential to improve patient outcomes by assessing risk more accurately than clinicians. Significant additional value lies in these models providing feedback about the factors that amplify an individual patient's risk. Identification of risk factors enables more informed decisions on interventions to mitigate or ameliorate modifiable factors. For these reasons, risk prediction models must be explainable and grounded in medical knowledge. Current machine learning-based risk prediction models are frequently 'black-box' models whose inner workings cannot be understood easily, making it difficult to define risk drivers. Since machine learning models follow patterns in the data rather than looking for medically relevant relationships, possible risk factors identified by these models do not necessarily translate into actionable insights for clinicians. Here, we use the example of risk assessment for postoperative complications to demonstrate how explainable and medically grounded risk prediction models can be developed. Pre- and postoperative risk prediction models are trained on clinically relevant inputs extracted from electronic medical record data. We show that these models have predictive performance similar to that of models that incorporate a wider range of inputs, and we explain the models' decision-making process by visualizing how different model inputs and their values affect the models' predictions.
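A minimal sketch of the kind of explainable, clinically grounded risk model described above, assuming tabular inputs and a plain logistic regression whose per-feature contributions can be inspected. The feature names and data are hypothetical, and the authors' actual models and visualizations are not reproduced here:

```python
# Sketch: train a simple risk model on clinically relevant inputs and list
# per-feature contributions to one patient's predicted risk.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical preoperative features (assumption, not the paper's input set)
features = ["age", "asa_score", "hemoglobin", "creatinine", "op_duration_min"]
X = pd.DataFrame(np.random.default_rng(1).normal(size=(500, 5)), columns=features)
y = (X["asa_score"] + 0.5 * X["creatinine"]
     + np.random.default_rng(2).normal(size=500) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# Contribution of each feature to one patient's risk (coefficient * value),
# a simple, inspectable stand-in for the visual explanations described.
patient = X.iloc[0]
contributions = dict(zip(features, model.coef_[0] * patient.values))
print(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))
```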


Subjects
Electronic Health Records, Machine Learning, Humans, Postoperative Complications/etiology, Risk Assessment, Risk Factors
3.
Int J Comput Assist Radiol Surg; 17(1): 5-13, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34297269

ABSTRACT

PURPOSE: Cholecystectomy is one of the most common laparoscopic procedures. A critical phase of laparoscopic cholecystectomy consists of clipping the cystic duct and artery before cutting them. Surgeons can improve clipping safety by ensuring full visibility of the clipper while enclosing the artery or the duct with the clip applier jaws. This can prevent unintentional interaction with neighboring tissues or clip misplacement. In this article, we present a novel real-time feedback mechanism to ensure safe visibility of the instrument during this critical phase. This feedback encourages surgeons to keep the tip of their clip applier visible while operating. METHODS: We present a new dataset of 300 laparoscopic cholecystectomy videos with frame-wise annotation of clipper tip visibility. We further present ClipAssistNet, a neural network-based image classifier which detects clipper tip visibility in single frames. ClipAssistNet ensembles predictions from five neural networks trained on different subsets of the dataset. RESULTS: Our model learns to classify clipper tip visibility by detecting its presence in the image. Measured on a separate test set, ClipAssistNet classifies clipper tip visibility with an AUROC of 0.9107 and 66.15% specificity at 95% sensitivity. Additionally, it can perform real-time inference (16 FPS) on an embedded computing board, which enables its deployment in operating room settings. CONCLUSION: This work presents a new application of computer-assisted surgery for laparoscopic cholecystectomy, namely real-time feedback on adequate visibility of the clip applier. We believe this feedback can increase surgeons' attentiveness when departing from safe visibility during the critical clipping of the cystic duct and artery.
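A minimal sketch of the evaluation reported above: averaging frame-level visibility scores from five ensemble members and reading off specificity at the operating point of 95% sensitivity. The scores and labels are synthetic assumptions, not ClipAssistNet outputs:

```python
# Sketch: ensemble averaging plus AUROC and specificity at 95% sensitivity.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=2000)        # 1 = clipper tip visible (synthetic)
member_scores = rng.uniform(size=(5, 2000))   # five ensemble members (synthetic)
scores = member_scores.mean(axis=0)           # average the five predictions

print("AUROC:", roc_auc_score(labels, scores))

# Specificity at the first threshold that reaches 95% sensitivity (TPR)
fpr, tpr, thresholds = roc_curve(labels, scores)
idx = np.argmax(tpr >= 0.95)
print("Specificity at 95% sensitivity:", 1 - fpr[idx])
```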


Subjects
Laparoscopic Cholecystectomy, Operating Rooms, Cholecystectomy, Feedback, Humans, Surgical Instruments
5.
Sci Rep; 11(1): 5197, 2021 Mar 4.
Article in English | MEDLINE | ID: mdl-33664317

ABSTRACT

Surgical skills are associated with clinical outcomes. To improve surgical skills and thereby reduce adverse outcomes, continuous surgical training and feedback are required. Currently, assessment of surgical skills is a manual and time-consuming process that is prone to subjective interpretation. This study aims to automate surgical skill assessment in laparoscopic cholecystectomy videos using machine learning algorithms. To address this, a three-stage machine learning method is proposed: first, a convolutional neural network was trained to identify and localize surgical instruments. Second, motion features were extracted from the detected instrument localizations over time. Third, a linear regression model was trained on the extracted motion features to predict surgical skill. This three-stage modeling approach achieved an accuracy of 87 ± 0.2% in distinguishing good versus poor surgical skill. While the technique cannot yet reliably quantify the degree of surgical skill, it represents an important advance towards automation of surgical skill assessment.
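A minimal sketch of stages two and three of the pipeline described above: turning detected instrument tip positions into motion features and fitting a linear regression for skill. The trajectories, chosen features and skill scores are synthetic assumptions, and the CNN detection stage is omitted:

```python
# Sketch: motion features from instrument trajectories, then linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

def motion_features(track: np.ndarray) -> np.ndarray:
    """Simple per-video features from an (n_frames, 2) tip trajectory."""
    steps = np.diff(track, axis=0)
    speed = np.linalg.norm(steps, axis=1)
    return np.array([speed.sum(), speed.mean(), speed.std()])  # path length, mean/std speed

rng = np.random.default_rng(0)
tracks = [rng.normal(size=(300, 2)).cumsum(axis=0) for _ in range(40)]  # 40 synthetic videos
X = np.stack([motion_features(t) for t in tracks])
y = rng.uniform(0, 1, size=40)  # hypothetical skill scores

model = LinearRegression().fit(X, y)
print(model.predict(X[:3]))
```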

6.
Med Image Anal; 70: 101920, 2021 May.
Article in English | MEDLINE | ID: mdl-33676097

ABSTRACT

Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer- and robotic-assisted interventions. While numerous methods for detecting, segmenting and tracking medical instruments in endoscopic video images have been proposed in the literature, key limitations remain to be addressed: firstly, robustness, that is, the reliable performance of state-of-the-art methods when run on challenging images (e.g. in the presence of blood, smoke or motion artifacts); secondly, generalization, meaning that algorithms trained for a specific intervention in a specific hospital should generalize to other interventions or institutions. In an effort to promote solutions for these limitations, we organized the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge as an international benchmarking competition with a specific focus on the robustness and generalization capabilities of algorithms. For the first time in the field of endoscopic image processing, our challenge included a task on binary segmentation and also addressed multi-instance detection and segmentation. The challenge was based on a surgical dataset comprising 10,040 annotated images acquired from a total of 30 surgical procedures from three different types of surgery. The validation of the competing methods for the three tasks (binary segmentation, multi-instance detection and multi-instance segmentation) was performed in three different stages with an increasing domain gap between the training and the test data. The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap. While the average detection and segmentation quality of the best-performing algorithms is high, future research should concentrate on detection and segmentation of small, crossing, moving and transparent instrument(s) (parts).
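A minimal sketch of a per-image Dice similarity coefficient, a common metric for the binary instrument-segmentation task discussed above. The masks are synthetic assumptions, and the ROBUST-MIS challenge used its own multi-instance evaluation, which is not reproduced here:

```python
# Sketch: Dice coefficient between a predicted and a reference binary mask.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(0)
pred_mask = rng.integers(0, 2, size=(256, 256))   # synthetic prediction
true_mask = rng.integers(0, 2, size=(256, 256))   # synthetic reference
print(dice(pred_mask, true_mask))
```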


Subjects
Computer-Assisted Image Processing, Laparoscopy, Algorithms, Artifacts