Results 1 - 13 of 13
1.
J Chem Phys ; 155(20): 204103, 2021 Nov 28.
Article in English | MEDLINE | ID: mdl-34852495

ABSTRACT

We present OrbNet Denali, a machine learning model for electronic structure that is designed as a drop-in replacement for ground-state density functional theory (DFT) energy calculations. The model is a message-passing graph neural network that uses symmetry-adapted atomic orbital features from a low-cost quantum calculation to predict the energy of a molecule. OrbNet Denali is trained on a vast dataset of 2.3 × 10⁶ DFT calculations on molecules and geometries. This dataset covers the most common elements in biochemistry and organic chemistry (H, Li, B, C, N, O, F, Na, Mg, Si, P, S, Cl, K, Ca, Br, and I) as well as charged molecules. OrbNet Denali is demonstrated on several well-established benchmark datasets, and we find that it provides accuracy on par with modern DFT methods while offering a speedup of up to three orders of magnitude. For the GMTKN55 benchmark set, OrbNet Denali achieves WTMAD-1 and WTMAD-2 scores of 7.19 and 9.84, on par with modern DFT functionals. For several GMTKN55 subsets, which contain chemical problems that are not present in the training set, OrbNet Denali produces mean absolute errors comparable to those of DFT methods. For the Hutchison conformer benchmark set, OrbNet Denali has a median correlation coefficient of R² = 0.90 compared to the reference DLPNO-CCSD(T) calculation and R² = 0.97 compared to the method used to generate the training data (ωB97X-D3/def2-TZVP), exceeding the performance of any other method with a similar cost. Similarly, the model reaches chemical accuracy for non-covalent interactions in the S66x10 dataset. For torsional profiles, OrbNet Denali reproduces the torsion profiles of ωB97X-D3/def2-TZVP with an average mean absolute error of 0.12 kcal/mol over the potential energy surfaces of the diverse fragments in the TorsionNet500 dataset.
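OrbNet Denali's actual architecture, with its symmetry-adapted atomic-orbital features, is far richer than can be shown here; purely as an illustration of the message-passing idea the abstract describes, a minimal NumPy sketch on a toy molecular graph (all weights, sizes, and names are hypothetical):

```python
import numpy as np

def message_passing_step(node_feats, adjacency, w_msg, w_update):
    """One round of neighborhood aggregation on a molecular graph.

    node_feats: (n_atoms, d) per-atom feature vectors
    adjacency:  (n_atoms, n_atoms) 0/1 connectivity
    """
    messages = adjacency @ (node_feats @ w_msg)        # sum messages from bonded neighbors
    return np.tanh(node_feats @ w_update + messages)   # update per-atom states

def predict_energy(node_feats, adjacency, w_msg, w_update, w_readout, n_steps=3):
    h = node_feats
    for _ in range(n_steps):
        h = message_passing_step(h, adjacency, w_msg, w_update)
    # sum-pool over atoms, then a linear readout to a scalar energy
    return float(h.sum(axis=0) @ w_readout)

rng = np.random.default_rng(0)
d = 8
# toy 3-atom "molecule": atom 0 bonded to atoms 1 and 2
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
feats = rng.normal(size=(3, d))
e = predict_energy(feats, adj, rng.normal(size=(d, d)),
                   rng.normal(size=(d, d)), rng.normal(size=d))
```

Sum-pooling over atoms makes the prediction invariant to atom ordering, one reason graph networks suit molecular energies.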

2.
J Chem Phys ; 153(12): 124111, 2020 Sep 28.
Article in English | MEDLINE | ID: mdl-33003742

ABSTRACT

We introduce a machine learning method in which energy solutions from the Schrödinger equation are predicted using symmetry-adapted atomic orbital features and a graph neural network architecture. OrbNet is shown to outperform existing methods in terms of learning efficiency and transferability for the prediction of density functional theory results, while employing low-cost features obtained from semi-empirical electronic structure calculations. For applications to datasets of drug-like molecules, including QM7b-T, QM9, GDB-13-T, DrugBank, and the conformer benchmark dataset of Folmsbee and Hutchison [Int. J. Quantum Chem. (published online) (2020)], OrbNet predicts energies within chemical accuracy of density functional theory at a computational cost that is reduced 1000-fold or more.

3.
Commun Med (Lond) ; 3(1): 42, 2023 Mar 30.
Article in English | MEDLINE | ID: mdl-36997578

ABSTRACT

BACKGROUND: Surgeons who receive reliable feedback on their performance quickly master the skills necessary for surgery. Such performance-based feedback can be provided by a recently-developed artificial intelligence (AI) system that assesses a surgeon's skills based on a surgical video while simultaneously highlighting aspects of the video most pertinent to the assessment. However, it remains an open question whether these highlights, or explanations, are equally reliable for all surgeons. METHODS: Here, we systematically quantify the reliability of AI-based explanations on surgical videos from three hospitals across two continents by comparing them to explanations generated by human experts. To improve the reliability of AI-based explanations, we propose the strategy of training with explanations (TWIX), which uses human explanations as supervision to explicitly teach an AI system to highlight important video frames. RESULTS: We show that while AI-based explanations often align with human explanations, they are not equally reliable for different sub-cohorts of surgeons (e.g., novices vs. experts), a phenomenon we refer to as an explanation bias. We also show that TWIX enhances the reliability of AI-based explanations, mitigates the explanation bias, and improves the performance of AI systems across hospitals. These findings extend to a training environment where medical students can be provided with feedback today. CONCLUSIONS: Our study informs the impending implementation of AI-augmented surgical training and surgeon credentialing programs, and contributes to the safe and fair democratization of surgery.


Surgeons aim to master the skills necessary for surgery. One such skill is suturing, which involves connecting objects together through a series of stitches. Mastering these surgical skills can be improved by providing surgeons with feedback on the quality of their performance. However, such feedback is often absent from surgical practice. Although performance-based feedback can be provided, in theory, by recently-developed artificial intelligence (AI) systems that use a computational model to assess a surgeon's skill, the reliability of this feedback remains unknown. Here, we compare AI-based feedback to that provided by human experts and demonstrate that they often overlap with one another. We also show that explicitly teaching an AI system to align with human feedback further improves the reliability of AI-based feedback on new videos of surgery. Our findings outline the potential of AI systems to support the training of surgeons by providing feedback that is reliable and focused on a particular skill. They can also guide surgeon credentialing programs by complementing skill assessments with explanations that increase the trustworthiness of those assessments.

4.
Nat Biomed Eng ; 7(6): 780-796, 2023 06.
Article in English | MEDLINE | ID: mdl-36997732

ABSTRACT

The intraoperative activity of a surgeon has a substantial impact on postoperative outcomes. However, for most surgical procedures, the details of intraoperative surgical actions, which can vary widely, are not well understood. Here we report a machine learning system leveraging a vision transformer and supervised contrastive learning for the decoding of elements of intraoperative surgical activity from videos commonly collected during robotic surgeries. The system accurately identified surgical steps, actions performed by the surgeon, the quality of these actions, and the relative contribution of individual video frames to the decoding of the actions. Through extensive testing on data from three different hospitals located on two different continents, we show that the system generalizes across videos, surgeons, hospitals, and surgical procedures, and that it can provide information on surgical gestures and skills from unannotated videos. Decoding intraoperative activity via accurate machine learning systems could be used to provide surgeons with feedback on their operating skills, and may allow for the identification of optimal surgical behaviour and for the study of relationships between intraoperative factors and postoperative outcomes.


Subjects
Robotic Surgical Procedures; Surgeons; Humans; Robotic Surgical Procedures/methods
5.
NPJ Digit Med ; 6(1): 54, 2023 Mar 30.
Article in English | MEDLINE | ID: mdl-36997642

ABSTRACT

Artificial intelligence (AI) systems can now reliably assess surgeon skills through videos of intraoperative surgical activity. With such systems informing future high-stakes decisions such as whether to credential surgeons and grant them the privilege to operate on patients, it is critical that they treat all surgeons fairly. However, it remains an open question whether surgical AI systems exhibit bias against surgeon sub-cohorts, and, if so, whether such bias can be mitigated. Here, we examine and mitigate the bias exhibited by a family of surgical AI systems (SAIS) deployed on videos of robotic surgeries from three geographically-diverse hospitals (USA and EU). We show that SAIS exhibits an underskilling bias, erroneously downgrading surgical performance, and an overskilling bias, erroneously upgrading surgical performance, at different rates across surgeon sub-cohorts. To mitigate such bias, we leverage a strategy (TWIX) which teaches an AI system to provide a visual explanation for its skill assessment that otherwise would have been provided by human experts. We show that whereas baseline strategies inconsistently mitigate algorithmic bias, TWIX can effectively mitigate the underskilling and overskilling bias while simultaneously improving the performance of these AI systems across hospitals. We discovered that these findings carry over to the training environment where we assess medical students' skills today. Our study is a critical prerequisite to the eventual implementation of AI-augmented global surgeon credentialing programs, ensuring that all surgeons are treated fairly.
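The core idea behind TWIX as described in these abstracts, a task loss plus a term that aligns the model's per-frame importance with human highlights, can be sketched as follows. The loss form, weighting, and all names here are assumptions for illustration, not the papers' exact objective:

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-9):
    """Mean binary cross-entropy between predicted probabilities and 0/1 targets."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def twix_style_loss(skill_pred, skill_label, frame_importance, human_highlights, lam=0.5):
    """Task loss (skill assessment) plus an explanation loss that pushes
    per-frame importance toward human-highlighted frames, weighted by lam."""
    task = binary_cross_entropy(np.array([skill_pred]), np.array([skill_label]))
    explain = binary_cross_entropy(frame_importance, human_highlights)
    return task + lam * explain

# toy example: 5 video frames; human experts highlighted frames 1 and 3
importance = np.array([0.1, 0.9, 0.2, 0.8, 0.1])
highlights = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
aligned = twix_style_loss(0.9, 1.0, importance, highlights)
misaligned = twix_style_loss(0.9, 1.0, 1 - importance, highlights)
```

Under this toy objective, a model whose importance scores agree with the human highlights incurs a strictly lower loss, which is the supervision signal the strategy relies on.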

6.
Sci Rep ; 12(1): 8137, 2022 05 17.
Article in English | MEDLINE | ID: mdl-35581213

ABSTRACT

Major vascular injury resulting in uncontrolled bleeding is a catastrophic and often fatal complication of minimally invasive surgery. At the outset of these events, surgeons do not know how much blood will be lost or whether they will successfully control the hemorrhage (achieve hemostasis). We evaluate the ability of a deep learning neural network (DNN) to predict hemostasis control using the first minute of surgical video and compare model performance with human experts viewing the same video. The publicly available SOCAL dataset contains 147 videos of attending and resident surgeons managing hemorrhage in a validated, high-fidelity cadaveric simulator. Videos are labeled with outcome and blood loss (mL). The first minute of 20 videos was shown to four blinded, fellowship-trained skull-base neurosurgery instructors and to SOCALNet (a DNN trained on SOCAL videos). SOCALNet's architecture comprised a convolutional network (ResNet) identifying spatial features and a recurrent network (LSTM) identifying temporal features. Experts independently assessed surgeon skill and predicted outcome and blood loss (mL). Outcome and blood loss predictions were compared with SOCALNet. Expert inter-rater reliability was 0.95. Experts correctly predicted 14/20 trials (Sensitivity: 82%, Specificity: 55%, Positive Predictive Value (PPV): 69%, Negative Predictive Value (NPV): 71%). SOCALNet correctly predicted 17/20 trials (Sensitivity 100%, Specificity 66%, PPV 79%, NPV 100%) and correctly identified all successful attempts. Expert predictions of the highest- and lowest-skill surgeons and expert predictions reported with maximum confidence were more accurate. Experts systematically underestimated blood loss (mean error −131 mL, RMSE 350 mL, R² 0.70) and fewer than half of expert predictions identified blood loss > 500 mL (47.5%, 19/40). SOCALNet had superior performance (mean error −57 mL, RMSE 295 mL, R² 0.74) and detected most episodes of blood loss > 500 mL (80%, 8/10).
In validation experiments, SOCALNet evaluation of a critical on-screen surgical maneuver and high/low-skill composite videos were concordant with expert evaluation. Using only the first minute of video, experts and SOCALNet can predict outcome and blood loss during surgical hemorrhage. Experts systematically underestimated blood loss, and SOCALNet had no false negatives. DNNs can provide accurate, meaningful assessments of surgical video. We call for the creation of datasets of surgical adverse events for quality improvement research.
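The sensitivity, specificity, PPV, and NPV figures above are standard confusion-matrix quantities. As a sketch, the counts below are inferred from the reported SOCALNet percentages (17/20 correct, no false negatives); they are a reconstruction, not the paper's raw tallies:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),        # true positive rate
        "specificity": tn / (tn + fp),        # true negative rate
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# counts consistent with the SOCALNet row above (assumed, not published tallies)
m = confusion_metrics(tp=11, fp=3, fn=0, tn=6)
```

With zero false negatives, sensitivity and NPV are both exactly 100%, matching the claim that SOCALNet identified all successful attempts.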


Subjects
Deep Learning; Surgeons; Blood Loss, Surgical; Clinical Competence; Humans; Reproducibility of Results; Video Recording
7.
Neurosurgery ; 90(6): 823-829, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35319539

ABSTRACT

BACKGROUND: Deep neural networks (DNNs) have not been proven to detect blood loss (BL) or predict surgeon performance from video. OBJECTIVE: To train a DNN using video from cadaveric training exercises of surgeons controlling simulated internal carotid hemorrhage to predict clinically relevant outcomes. METHODS: Video was input as a series of images; deep learning networks were developed, which predicted BL and task success from images alone (automated model) and images plus human-labeled instrument annotations (semiautomated model). These models were compared against 2 reference models, which used average BL across all trials as the prediction (control 1) and a linear regression with time to hemostasis (a metric with known association with BL) as input (control 2). The root-mean-square error (RMSE) and correlation coefficients were used to compare the models; lower RMSE indicates superior performance. RESULTS: One hundred forty-three trials were used (123 for training and 20 for testing). Deep learning models outperformed controls (control 1: RMSE 489 mL; control 2: RMSE 431 mL, R² = 0.35) at BL prediction. The automated model predicted BL with an RMSE of 358 mL (R² = 0.4) and correctly classified outcome in 85% of trials. The RMSE and classification performance of the semiautomated model improved to 260 mL and 90%, respectively. CONCLUSION: BL and task outcome classification are important components of an automated assessment of surgical performance. DNNs can predict BL and outcome of hemorrhage control from video alone; their performance is improved with surgical instrument presence data. The generalizability of DNNs trained on hemorrhage control tasks should be investigated.
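The models above are ranked by root-mean-square error and R². For reference, both quantities can be computed in a few lines; the blood-loss values below are hypothetical, chosen only to exercise the functions:

```python
import math

def rmse(preds, targets):
    """Root-mean-square error between predictions and ground truth."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds))

def r_squared(preds, targets):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean_t = sum(targets) / len(targets)
    ss_res = sum((p - t) ** 2 for p, t in zip(preds, targets))
    ss_tot = sum((t - mean_t) ** 2 for t in targets)
    return 1 - ss_res / ss_tot

# toy blood-loss predictions vs. measured values, in mL (hypothetical)
preds = [300.0, 450.0, 500.0, 250.0]
targets = [320.0, 400.0, 520.0, 260.0]
err = rmse(preds, targets)
r2 = r_squared(preds, targets)
```

Note that RMSE carries the units of the outcome (mL here), while R² is unitless, which is why the abstract reports both.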


Subjects
Neural Networks, Computer; Surgeons; Carotid Arteries; Hemorrhage; Humans; Linear Models
8.
Oper Neurosurg (Hagerstown) ; 23(3): 235-240, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35972087

ABSTRACT

BACKGROUND: Intraoperative tool movement data have been demonstrated to be clinically useful in quantifying surgical performance. However, collecting this information from intraoperative video requires laborious hand annotation. The ability to automatically annotate tools in surgical video would advance surgical data science by eliminating a time-intensive step in research. OBJECTIVE: To identify whether machine learning (ML) can automatically identify surgical instruments contained within neurosurgical video. METHODS: An ML model that automatically identifies surgical instruments in frame was developed and trained on multiple publicly available surgical video data sets with instrument location annotations. A total of 39 693 frames from 4 data sets were used (endoscopic endonasal surgery [EEA] [30 015 frames], cataract surgery [4670], laparoscopic cholecystectomy [2532], and microscope-assisted brain/spine tumor removal [2476]). A second model trained only on EEA video was also developed. Intraoperative EEA videos from YouTube were used as test data (3 videos, 1239 frames). RESULTS: The YouTube data set contained 2169 total instruments. Mean average precision (mAP) for instrument detection on the YouTube data set was 0.74. The mAP for each individual video was 0.65, 0.74, and 0.89. The second model trained only on EEA video also had an overall mAP of 0.74 (0.62, 0.84, and 0.88 for individual videos). Development costs were $130 for manual video annotation and under $100 for computation. CONCLUSION: Surgical instruments contained within endoscopic endonasal intraoperative video can be detected using a fully automated ML model. The addition of disparate surgical data sets did not improve model performance, although these data sets may improve generalizability of the model in other use cases.
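Mean average precision, the metric reported throughout this entry and the next, summarizes detector precision across confidence thresholds and classes. A simplified single-class AP computation is sketched below; real mAP implementations also match predicted boxes to ground truth by IoU, which is assumed precomputed here:

```python
def average_precision(detections, total_gt):
    """Simplified, non-interpolated average precision for one class.

    detections: list of (confidence, is_true_positive) per predicted box
    total_gt:   number of ground-truth instances of the class
    """
    dets = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap = 0.0
    for _conf, is_tp in dets:
        if is_tp:
            tp += 1
            ap += tp / (tp + fp)   # precision at this recall step
        else:
            fp += 1
    return ap / total_gt

# toy detections: (confidence, matched a ground-truth box?)
dets = [(0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.5, False)]
ap = average_precision(dets, total_gt=4)
```

mAP is then the mean of these per-class AP values, so an mAP of 0.74 reflects precision averaged over instrument classes and over the recall range.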


Subjects
Machine Learning; Surgical Instruments; Humans; Video Recording
9.
JAMA Netw Open ; 5(3): e223177, 2022 03 01.
Article in English | MEDLINE | ID: mdl-35311962

ABSTRACT

Importance: Surgical data scientists lack video data sets that depict adverse events, which may affect model generalizability and introduce bias. Hemorrhage may be particularly challenging for computer vision-based models because blood obscures the scene. Objective: To assess the utility of the Simulated Outcomes Following Carotid Artery Laceration (SOCAL) data set, a publicly available surgical video data set of hemorrhage complication management with instrument annotations and task outcomes, to provide benchmarks for surgical data science techniques, including computer vision instrument detection, instrument use metrics and outcome associations, and validation of a SOCAL-trained neural network using real operative video. Design, Setting, and Participants: For this quality improvement study, a total of 75 surgeons with 1 to 30 years' experience (mean, 7 years) were filmed from January 1, 2017, to December 31, 2020, managing catastrophic surgical hemorrhage in a high-fidelity cadaveric training exercise at nationwide training courses. Videos were annotated from January 1 to June 30, 2021. Interventions: Surgeons received expert coaching between 2 trials. Main Outcomes and Measures: Hemostasis within 5 minutes (task success, dichotomous), time to hemostasis (in seconds), and blood loss (in milliliters) were recorded. Deep neural networks (DNNs) were trained to detect surgical instruments in view. Model performance was measured using mean average precision (mAP), sensitivity, and positive predictive value. Results: SOCAL contains 31 443 frames with 65 071 surgical instrument annotations from 147 trials with associated surgeon demographic characteristics, time to hemostasis, and recorded blood loss for each trial. Computer vision-based instrument detection methods using DNNs trained on SOCAL achieved a mAP of 0.67 overall and 0.91 for the most common surgical instrument (suction).
Hemorrhage control challenges standard object detectors: detection of some surgical instruments remained poor (mAP, 0.25). On real intraoperative video, the model achieved a sensitivity of 0.77 and a positive predictive value of 0.96. Instrument use metrics derived from the SOCAL video were significantly associated with performance (blood loss). Conclusions and Relevance: Hemorrhage control is a high-stakes adverse event that poses unique challenges for video analysis, but no data sets of hemorrhage control exist. The use of SOCAL, the first data set to depict hemorrhage control, allows the benchmarking of data science applications, including object detection, performance metric development, and identification of metrics associated with outcomes. In the future, SOCAL may be used to build and validate surgical data science models.


Subjects
Lacerations; Surgeons; Carotid Arteries; Humans; Lacerations/surgery; Machine Learning; Neural Networks, Computer
10.
NPJ Digit Med ; 5(1): 187, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36550203

ABSTRACT

How well a surgery is performed impacts a patient's outcomes; however, objective quantification of performance remains an unsolved challenge. Deconstructing a procedure into discrete instrument-tissue "gestures" is an emerging way to understand surgery. To establish this paradigm in a procedure where performance is the most important factor for patient outcomes, we identify 34,323 individual gestures performed in 80 nerve-sparing robot-assisted radical prostatectomies from two international medical centers. Gestures are classified into nine distinct dissection gestures (e.g., hot cut) and four supporting gestures (e.g., retraction). Our primary outcome is to identify factors impacting a patient's 1-year erectile function (EF) recovery after radical prostatectomy. We find that less use of hot cut and more use of peel/push are statistically associated with better chance of 1-year EF recovery. Our results also show interactions between surgeon experience and gesture types: similar gesture selection resulted in different EF recovery rates depending on surgeon experience. To further validate this framework, two teams independently constructed distinct machine learning models using gesture sequences vs. traditional clinical features to predict 1-year EF. In both models, gesture sequences are able to better predict 1-year EF (Team 1: AUC 0.77, 95% CI 0.73-0.81; Team 2: AUC 0.68, 95% CI 0.66-0.70) than traditional clinical features (Team 1: AUC 0.69, 95% CI 0.65-0.73; Team 2: AUC 0.65, 95% CI 0.62-0.68). Our results suggest that gestures provide a granular method to objectively indicate surgical performance and outcomes. Application of this methodology to other surgeries may lead to discoveries on methods to improve surgery.
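The AUC values compared above have a direct probabilistic reading: AUC is the probability that a randomly chosen positive case (here, a patient who recovered EF) is scored above a randomly chosen negative one. A minimal Mann-Whitney-style sketch, with hypothetical model scores:

```python
def auc(scores_pos, scores_neg):
    """P(score_pos > score_neg), counting ties as 1/2.

    Equivalent to the Mann-Whitney U statistic divided by n_pos * n_neg.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical model scores: patients who did vs. did not recover EF at 1 year
recovered = [0.9, 0.8, 0.75, 0.6]
not_recovered = [0.7, 0.5, 0.4]
value = auc(recovered, not_recovered)
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the gesture-sequence models' 0.68-0.77 range represents a meaningful, if moderate, signal.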

11.
Surgery ; 169(5): 1240-1244, 2021 05.
Article in English | MEDLINE | ID: mdl-32988620

ABSTRACT

BACKGROUND: Our previous work classified a taxonomy of suturing gestures during the vesicourethral anastomosis of robotic radical prostatectomy in association with tissue tears and patient outcomes. Herein, we train deep learning-based computer vision to automate the identification and classification of suturing gestures for needle driving attempts. METHODS: Using two independent raters, we manually annotated live suturing video clips to label timepoints and gestures. Identification (2,395 videos) and classification (511 videos) datasets were compiled to train computer vision models to produce 2- and 5-class label predictions, respectively. Networks were trained on inputs of raw red/green/blue (RGB) pixels as well as optical flow for each frame. Each model was trained on 80/20 train/test splits. RESULTS: In this study, all models were able to reliably predict either the presence of a gesture (identification, area under the curve: 0.88) or the type of gesture (classification, area under the curve: 0.87) at significantly above chance levels. For both gesture identification and classification datasets, we observed no effect of recurrent classification model choice (long short-term memory unit versus convolutional long short-term memory unit) on performance. CONCLUSION: Our results demonstrate computer vision's ability to recognize features that not only identify the action of suturing but also distinguish between different classifications of suturing gestures. This demonstrates the potential to utilize deep learning computer vision toward future automation of surgical skill assessment.


Subjects
Artificial Intelligence; Deep Learning; Robotic Surgical Procedures; Suture Techniques; Humans
12.
J Neurosurg ; : 1-10, 2021 Dec 31.
Article in English | MEDLINE | ID: mdl-34972086

ABSTRACT

OBJECTIVE: Experts can assess surgeon skill using surgical video, but a limited number of expert surgeons are available. Automated performance metrics (APMs) are a promising alternative but have not been created from operative videos in neurosurgery to date. The authors aimed to evaluate whether video-based APMs can predict task success and blood loss during endonasal endoscopic surgery in a validated cadaveric simulator of vascular injury of the internal carotid artery. METHODS: Videos of cadaveric simulation trials by 73 neurosurgeons and otorhinolaryngologists were analyzed and manually annotated with bounding boxes to identify the surgical instruments in the frame. APMs in five domains were defined, on the basis of expert analysis and task-specific surgical progressions: instrument usage, time-to-phase, instrument disappearance, instrument movement, and instrument interactions. Bounding-box data of instrument position were then used to generate APMs for each trial. Multivariate linear regression was used to test for associations between APMs and blood loss and task success (hemorrhage control in less than 5 minutes). The APMs of 93 successful trials were compared with the APMs of 49 unsuccessful trials. RESULTS: In total, 29,151 frames of surgical video were annotated. Successful simulation trials had superior APMs in each domain, including proportionately more time spent with the key instruments in view (p < 0.001) and less time without hemorrhage control (p = 0.002). APMs in all domains improved in subsequent trials after the participants received personalized expert instruction. Attending surgeons had superior instrument usage, time-to-phase, and instrument disappearance metrics compared with resident surgeons (p < 0.01). APMs predicted surgeon performance better than surgeon training level or prior experience. A regression model that included APMs predicted blood loss with an R² value of 0.87 (p < 0.001).
CONCLUSIONS: Video-based APMs predicted simulation trial success and blood loss better than surgeon characteristics such as case volume and attending status. Surgeon educators can use APMs to assess competency, quantify performance, and provide actionable, structured feedback in order to improve patient outcomes. Validation of APMs provides a benchmark for further development of fully automated video assessment pipelines that utilize machine learning and computer vision.
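The multivariate linear regression reported above (APMs as predictors, blood loss as outcome, R² on the fit) can be sketched with ordinary least squares. The data below are synthetic and the feature names are hypothetical stand-ins for APM domains:

```python
import numpy as np

def fit_and_r2(X, y):
    """Least-squares fit of y ~ X with an intercept; returns coefficients and R^2."""
    X1 = np.column_stack([np.ones(len(X)), X])        # prepend intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return coef, float(r2)

rng = np.random.default_rng(1)
# toy APM matrix: columns might be instrument-in-view time, time-to-phase, movement
X = rng.normal(size=(40, 3))
true_coef = np.array([2.0, -1.0, 0.5])
y = 100 + X @ true_coef + rng.normal(scale=0.1, size=40)  # synthetic blood loss (mL)
coef, r2 = fit_and_r2(X, y)
```

Because the synthetic outcome is nearly linear in the features, the recovered R² is close to 1; the study's reported R² of 0.87 indicates a strong but imperfect linear relationship between APMs and measured blood loss.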
