1.
Oper Neurosurg (Hagerstown); 23(3): 235-240, 2022 Sep 1.
Article in English | MEDLINE | ID: mdl-35972087

ABSTRACT

BACKGROUND: Intraoperative tool movement data have been demonstrated to be clinically useful in quantifying surgical performance. However, collecting this information from intraoperative video requires laborious hand annotation. The ability to automatically annotate tools in surgical video would advance surgical data science by eliminating a time-intensive step in research. OBJECTIVE: To identify whether machine learning (ML) can automatically identify surgical instruments contained within neurosurgical video. METHODS: An ML model that automatically identifies surgical instruments in each frame was developed and trained on multiple publicly available surgical video data sets with instrument location annotations. A total of 39,693 frames from 4 data sets were used (endoscopic endonasal surgery [EEA] [30,015 frames], cataract surgery [4670], laparoscopic cholecystectomy [2532], and microscope-assisted brain/spine tumor removal [2476]). A second model trained only on EEA video was also developed. Intraoperative EEA videos from YouTube were used as test data (3 videos, 1239 frames). RESULTS: The YouTube data set contained 2169 total instruments. Mean average precision (mAP) for instrument detection on the YouTube data set was 0.74. The mAP for each individual video was 0.65, 0.74, and 0.89. The second model, trained only on EEA video, also had an overall mAP of 0.74 (0.62, 0.84, and 0.88 for the individual videos). Development costs were $130 for manual video annotation and under $100 for computation. CONCLUSION: Surgical instruments contained within endoscopic endonasal intraoperative video can be detected using a fully automated ML model. The addition of disparate surgical data sets did not improve model performance, although these data sets may improve the generalizability of the model in other use cases.
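To make the pipeline concrete, here is a minimal sketch of the kind of frame-by-frame instrument detection the abstract describes, assuming a detector already fine-tuned on instrument bounding boxes. The model weights, file names, class count, and confidence threshold are hypothetical placeholders, not the authors' artifacts.

```python
# Hypothetical sketch: auto-annotating surgical video frames with a
# fine-tuned object detector. Paths and num_classes are placeholders.
import cv2
import torch
import torchvision

# Assume a Faster R-CNN fine-tuned on surgical-instrument boxes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=3)
model.load_state_dict(torch.load("instrument_detector.pt"))  # placeholder weights
model.eval()

cap = cv2.VideoCapture("eea_case.mp4")  # placeholder video file
annotations = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # BGR (OpenCV) -> RGB float tensor in [0, 1], CHW layout
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        det = model([tensor])[0]
    keep = det["scores"] > 0.5  # assumed confidence threshold
    annotations.append({
        "frame": frame_idx,
        "boxes": det["boxes"][keep].tolist(),
        "labels": det["labels"][keep].tolist(),
    })
    frame_idx += 1
cap.release()
```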


Subject(s)
Machine Learning; Surgical Instruments; Humans; Video Recording
2.
Sci Rep; 12(1): 8137, 2022 May 17.
Article in English | MEDLINE | ID: mdl-35581213

ABSTRACT

Major vascular injury resulting in uncontrolled bleeding is a catastrophic and often fatal complication of minimally invasive surgery. At the outset of these events, surgeons do not know how much blood will be lost or whether they will successfully control the hemorrhage (achieve hemostasis). We evaluate the ability of a deep learning neural network (DNN) to predict hemostasis control from the first minute of surgical video and compare model performance with that of human experts viewing the same video. The publicly available SOCAL dataset contains 147 videos of attending and resident surgeons managing hemorrhage in a validated, high-fidelity cadaveric simulator. Videos are labeled with outcome and blood loss (mL). The first minute of 20 videos was shown to four blinded, fellowship-trained skull-base neurosurgery instructors and to SOCALNet (a DNN trained on SOCAL videos). The SOCALNet architecture comprises a convolutional network (ResNet) identifying spatial features and a recurrent network (LSTM) identifying temporal features. Experts independently assessed surgeon skill and predicted outcome and blood loss (mL); their predictions were compared with SOCALNet's. Expert inter-rater reliability was 0.95. Experts correctly predicted 14/20 trials (sensitivity 82%, specificity 55%, positive predictive value [PPV] 69%, negative predictive value [NPV] 71%). SOCALNet correctly predicted 17/20 trials (sensitivity 100%, specificity 66%, PPV 79%, NPV 100%) and correctly identified all successful attempts. Expert predictions of the highest- and lowest-skill surgeons and expert predictions reported with maximum confidence were more accurate. Experts systematically underestimated blood loss (mean error −131 mL, RMSE 350 mL, R² 0.70), and fewer than half of expert predictions identified blood loss > 500 mL (47.5%, 19/40). SOCALNet had superior performance (mean error −57 mL, RMSE 295 mL, R² 0.74) and detected most episodes of blood loss > 500 mL (80%, 8/10). In validation experiments, SOCALNet evaluations of a critical on-screen surgical maneuver and of high/low-skill composite videos were concordant with expert evaluation. Using only the first minute of video, experts and SOCALNet can predict outcome and blood loss during surgical hemorrhage. Experts systematically underestimated blood loss, and SOCALNet had no false negatives. DNNs can provide accurate, meaningful assessments of surgical video. We call for the creation of datasets of surgical adverse events for quality improvement research.
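Below is a minimal sketch of a SOCALNet-style architecture as described above: a ResNet extracts per-frame spatial features and an LSTM aggregates them over time, with separate heads for the hemostasis outcome and blood loss. The backbone choice, layer sizes, frame rate, and head design are illustrative assumptions, not the published configuration.

```python
# Minimal CNN+LSTM sketch under assumed sizes; not the authors' exact model.
import torch
import torch.nn as nn
import torchvision

class CnnLstmPredictor(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep 512-d frame embeddings
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.outcome_head = nn.Linear(hidden, 1)     # hemostasis-success logit
        self.blood_loss_head = nn.Linear(hidden, 1)  # blood loss (mL) regression

    def forward(self, clip):                 # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))    # (b*t, 512)
        feats = feats.view(b, t, -1)                 # (b, t, 512)
        _, (h_n, _) = self.lstm(feats)
        last = h_n[-1]                       # final hidden state per clip
        return self.outcome_head(last), self.blood_loss_head(last)

# E.g., one minute of video sampled at 1 frame/s -> 60 frames per clip.
model = CnnLstmPredictor()
outcome_logit, blood_loss_ml = model(torch.randn(2, 60, 3, 224, 224))
```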


Subject(s)
Deep Learning; Surgeons; Blood Loss, Surgical; Clinical Competence; Humans; Reproducibility of Results; Video Recording
3.
Neurosurg Focus; 52(4): E11, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35364576

ABSTRACT

OBJECTIVE: While the use of machine learning (ML) for data analysis typically requires significant technical expertise, novel platforms can deploy ML methods without requiring the user to have any coding experience (termed AutoML). The potential for these methods to be applied to neurosurgical video and surgical data science is unknown. METHODS: AutoML, a code-free ML (CFML) system, was used to identify surgical instruments contained within each frame of endoscopic endonasal intraoperative video obtained from a previously validated internal carotid injury training exercise performed on a high-fidelity cadaver model. Instrument-detection performance using CFML was compared with that of two state-of-the-art ML models built using the Python coding language on the same intraoperative video data set. RESULTS: The CFML system successfully ingested surgical video without the use of any code. A total of 31,443 images were used to develop this model: 27,223 images were uploaded for training, 2292 for validation, and 1928 for testing. The mean average precision on the test set across all instruments was 0.708. The CFML model outperformed two standard object detection networks, RetinaNet and YOLOv3, which had mean average precisions of 0.669 and 0.527, respectively, on the same data set. Significant advantages of the CFML system included ease of use, relatively low cost, display of true/false positives and negatives in a user-friendly interface, and the ability to easily deploy models for further analysis. Significant drawbacks of the CFML model included the inability to view the structure of the trained model, the inability to update the trained model with new examples, and the inability to perform robust downstream analysis of model performance and error modes. CONCLUSIONS: This first report describes the baseline performance of CFML in an object detection task using a publicly available surgical video data set as a test bed. CFML exceeded the performance of standard code-based object detection networks. This finding is encouraging for surgeon-scientists seeking to perform object detection tasks to answer clinical questions, perform quality improvement, and develop novel research ideas. The limited interpretability and customization of CFML models remain ongoing challenges. With the further development of code-free platforms, CFML will become increasingly important across biomedical research. Using CFML, surgeons without significant coding experience can perform exploratory ML analyses rapidly and efficiently.
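For context, this is roughly how a mean-average-precision comparison across detectors can be scored: evaluate each model's detections against shared ground truth with a standard mAP metric. The torchmetrics implementation is one common choice; the boxes, scores, and labels below are toy stand-ins for real model outputs and hand annotations, not the study's data.

```python
# Hedged sketch: scoring one model's detections with mAP via torchmetrics.
import torch
from torchmetrics.detection import MeanAveragePrecision

metric = MeanAveragePrecision(box_format="xyxy")

preds = [{                        # one frame's detections from a model
    "boxes": torch.tensor([[100.0, 80.0, 220.0, 300.0]]),
    "scores": torch.tensor([0.91]),
    "labels": torch.tensor([1]),  # assumed class id, e.g. 1 = suction
}]
target = [{                       # the frame's hand-annotated ground truth
    "boxes": torch.tensor([[95.0, 75.0, 225.0, 310.0]]),
    "labels": torch.tensor([1]),
}]
metric.update(preds, target)      # repeat over all test frames
print(metric.compute()["map"])    # overall mAP, as reported per model
```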


Subject(s)
Benchmarking; Surgeons; Algorithms; Feasibility Studies; Humans; Machine Learning
4.
AMIA Annu Symp Proc; 2021: 1039-1048, 2021.
Article in English | MEDLINE | ID: mdl-35308958

ABSTRACT

Burn wounds are most commonly evaluated through visual inspection to determine surgical candidacy, taking into account burn depth and individualized patient factors. This process, though cost-effective, is subjective and varies by provider experience. Deep learning models can assist in determining burn wound surgical candidacy by making predictions based on wound and patient characteristics. To this end, we present a multimodal deep learning approach and a complementary mobile application, DL4Burn, for predicting burn surgical candidacy, to emulate the multi-factored approach used by clinicians. Specifically, we propose a ResNet50-based multimodal model and validate it using retrospectively obtained patient burn images and demographic and injury data.
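A hedged sketch of the multimodal fusion design the abstract outlines: a ResNet50 branch encodes the burn photo, a small MLP encodes tabular demographic/injury fields, and a fused head scores surgical candidacy. The tabular feature count and layer sizes are assumptions, not the DL4Burn specification.

```python
# Illustrative image + tabular fusion model under assumed sizes.
import torch
import torch.nn as nn
import torchvision

class MultimodalBurnModel(nn.Module):
    def __init__(self, n_tabular=8):           # assumed number of tabular fields
        super().__init__()
        cnn = torchvision.models.resnet50(weights=None)
        cnn.fc = nn.Identity()                  # 2048-d image embedding
        self.image_branch = cnn
        self.tabular_branch = nn.Sequential(    # encodes demographic/injury data
            nn.Linear(n_tabular, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(2048 + 64, 1)     # surgical-candidacy logit

    def forward(self, image, tabular):
        z = torch.cat([self.image_branch(image),
                       self.tabular_branch(tabular)], dim=1)
        return self.head(z)

model = MultimodalBurnModel()
logit = model(torch.randn(4, 3, 224, 224), torch.randn(4, 8))
```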


Subject(s)
Burns; Deep Learning; Burns/surgery; Humans; Retrospective Studies