1.
Bioengineering (Basel) ; 10(8)2023 Aug 05.
Article in English | MEDLINE | ID: mdl-37627817

ABSTRACT

Acute kidney injury (AKI) is a major postoperative complication that lacks established intraoperative predictors. Our objective was to develop a prediction model for postoperative AKI using preoperative and high-frequency intraoperative data. In this retrospective cohort study, we evaluated 77,428 operative cases at a single academic center between 2016 and 2022. A total of 11,212 cases with serum creatinine (sCr) data were included in the analysis; 8519 cases were randomly assigned to the training set and the remainder to the validation set. Fourteen preoperative and twenty intraoperative variables were evaluated using elastic net followed by hierarchical group least absolute shrinkage and selection operator (LASSO) regression. The training set was 56% male, with a median [IQR] age of 62 [51-72] and a 6% AKI rate. Retained model variables were the preoperative sCr value, the number of intraoperative minutes meeting cutoffs for urine output, heart rate, and perfusion index, and the total estimated blood loss. The area under the receiver operating characteristic curve was 0.81 (95% CI, 0.77-0.85). At a score threshold of 0.767, specificity was 77% and sensitivity was 74%. A web application that calculates the model score is available online. Our findings demonstrate the utility of intraoperative time series data for prediction problems, including a new potential use of the perfusion index. Further research is needed to evaluate the model in clinical settings.
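As a rough illustration of the modeling approach this abstract describes, the sketch below fits an elastic-net-penalized logistic regression and evaluates it at a fixed score threshold. The hierarchical group LASSO stage is omitted (scikit-learn has no built-in group LASSO), and the data, feature counts, and threshold behavior are synthetic assumptions, not the study's pipeline.

```python
# Minimal sketch: elastic-net logistic regression for AKI risk, evaluated
# at a fixed score threshold. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(11212, 34))             # 14 preop + 20 intraop features
y = (rng.random(11212) < 0.06).astype(int)   # ~6% AKI rate

X_tr, X_val, y_tr, y_val = train_test_split(X, y, train_size=8519, random_state=0)

# Elastic net blends L1 (sparsity) and L2 (shrinkage) penalties.
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
model.fit(X_tr, y_tr)

scores = model.predict_proba(X_val)[:, 1]
print("AUC:", roc_auc_score(y_val, scores))

threshold = 0.767  # the abstract's reported operating point
pred = (scores >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_val, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```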

2.
JMIR AI ; 2: e44909, 2023 Sep 08.
Article in English | MEDLINE | ID: mdl-38875567

ABSTRACT

BACKGROUND: Accurate projections of procedural case durations are complex but critical to the planning of perioperative staffing, operating room resources, and patient communication. Nonlinear prediction models using machine learning methods may provide opportunities for hospitals to improve upon current estimates of procedure duration. OBJECTIVE: The aim of this study was to determine whether a machine learning algorithm scalable across multiple centers could estimate case duration within a tolerance limit, given the substantial operating room resources whose planning depends on case duration. METHODS: Deep learning, gradient boosting, and ensemble machine learning models were generated using perioperative data available at 3 distinct time points: the time of scheduling, the time of patient arrival to the operating or procedure room (primary model), and the time of surgical incision or procedure start. The primary outcome was procedure duration, defined by the time between the arrival and the departure of the patient from the procedure room. Model performance was assessed by mean absolute error (MAE), the proportion of predictions falling within 20% of the actual duration, and other standard metrics. Performance was compared with a baseline method of historical means within a linear regression model. Model features driving predictions were assessed using Shapley additive explanations (SHAP) values and permutation feature importance. RESULTS: A total of 1,177,893 procedures from 13 academic and private hospitals between 2016 and 2019 were used. Across all procedures, the median procedure duration was 94 (IQR 50-167) minutes. In estimating the procedure duration, the gradient boosting machine was the best-performing model, demonstrating an MAE of 34 (SD 47) minutes, with 46% of the predictions falling within 20% of the actual duration in the test data set. This represented a statistically and clinically significant improvement in predictions compared with a baseline linear regression model (MAE 43 min; P<.001; 39% of the predictions falling within 20% of the actual duration). The most important features in model training were historical procedure duration by surgeon, the word "free" within the procedure text, and the time of day. CONCLUSIONS: Nonlinear models using machine learning techniques may be used to generate high-performing, automatable, explainable, and scalable prediction models for procedure duration.
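For readers unfamiliar with the headline metrics, this hedged sketch trains a gradient-boosted regressor on synthetic data and computes MAE and the proportion of predictions within 20% of actual duration. The feature semantics and duration distribution are illustrative assumptions, not the study's dataset.

```python
# Sketch: gradient-boosted case-duration regression with the abstract's
# two headline metrics. Data are synthetic and right-skewed, like real
# procedure durations.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(20_000, 10))               # e.g. surgeon history, time of day
y = np.exp(rng.normal(4.5, 0.6, size=20_000))   # skewed durations in minutes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
gbm = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)

pred = gbm.predict(X_te)
print("MAE (min):", mean_absolute_error(y_te, pred))
print("within 20% of actual:", np.mean(np.abs(pred - y_te) <= 0.2 * y_te))
```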

3.
Front Cardiovasc Med ; 9: 969325, 2022.
Article in English | MEDLINE | ID: mdl-36505372

ABSTRACT

Background: Women continue to have worse coronary artery disease (CAD) outcomes than men, and the causes of this discrepancy have yet to be fully elucidated. The main objective of this study was to detect gender discrepancies in the diagnosis and treatment of CAD. Methods: We used data analytics to risk-stratify ~32,000 patients with CAD out of the 960,129 patients treated at the UCSF Medical Center over an 8-year period. We implemented a multidimensional data analytics framework to trace patients from admission through treatment, creating a path of events, where events are any medications or noninvasive and invasive procedures. The time between events for a similar set of paths was calculated, and the average waiting time for each step of the treatment was derived. Finally, we applied statistical analysis to determine differences in time between diagnosis and treatment steps for men and women. Results: There was a significant gender difference in the time from first admission to diagnostic cardiac catheterization (p = 0.000119), while the time difference from diagnostic cardiac catheterization to coronary artery bypass grafting (CABG) was not statistically significant. Conclusion: Women had a significantly longer interval between their first physician encounter indicative of CAD and their first diagnostic cardiac catheterization compared to men. Avoiding this delay in diagnosis may provide more timely treatment and better outcomes for patients at risk. We conclude by discussing the impact of the study on improving patient care through early detection and management of individual patients at risk of rapid progression of CAD.
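A minimal sketch of the between-group waiting-time comparison described here, using a rank-based test on synthetic intervals. The DataFrame columns and distributions are assumptions standing in for the study's event-path data.

```python
# Sketch: compare diagnosis-to-catheterization intervals by gender with a
# Mann-Whitney U test. Column names and values are illustrative.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=2000),
    "days_to_cath": rng.exponential(30, size=2000),  # admission -> diagnostic cath
})

women = df.loc[df.gender == "F", "days_to_cath"]
men = df.loc[df.gender == "M", "days_to_cath"]

# Waiting times are skewed, so a rank-based test is a reasonable choice.
stat, p = mannwhitneyu(women, men, alternative="two-sided")
print(f"median F: {women.median():.1f} d, median M: {men.median():.1f} d, p={p:.4g}")
```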

4.
Anaesth Crit Care Pain Med ; 41(5): 101126, 2022 10.
Article in English | MEDLINE | ID: mdl-35811037

ABSTRACT

BACKGROUND: The field of machine learning is being employed more and more in medicine. However, studies have shown that published work frequently lacks completeness and adherence to published reporting guidelines. This assessment has not been done in the subspecialty of anesthesiology. METHODS: We appraised the quality of reporting of a convenience sample of 67 peer-reviewed publications sourced from the scoping review by Hashimoto et al. Each publication was appraised for the presence of reporting elements (reporting compliance) selected from 4 peer-reviewed guidelines for reporting on machine learning studies. Results are described in several cross-sections, including by section of manuscript (e.g., abstract, introduction), year of publication, impact factor of journal, and impact of publication. RESULTS: On average, reporting compliance was 64% ± 13%. There was marked heterogeneity of reporting by section of manuscript. There was a mild trend toward increased quality of reporting with increasing journal impact factor and increasing average number of citations per year since publication, and no trend with recency of publication. CONCLUSION: The quality of reporting of machine learning studies in anesthesiology is on par with other fields but can benefit from improvement, especially in presenting methodology, results, and discussion points, including interpretation of models and their pitfalls. Clinicians in today's learning health systems will benefit from skills in appraisal of evidence. Several reporting guidelines have been released, and updates to mainstream guidelines are under development, which we hope will usher in improved reporting quality.
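As a small illustration of the appraisal bookkeeping described above, the sketch below tabulates per-publication reporting compliance and breaks it out by manuscript section. The column names and toy records are assumptions for illustration only.

```python
# Sketch: each row records whether one reporting element was present in
# one publication; compliance is the fraction present, summarized overall
# and by manuscript section.
import pandas as pd

records = pd.DataFrame({
    "publication": [1, 1, 1, 2, 2, 2],
    "section": ["abstract", "methods", "results"] * 2,
    "element_present": [True, False, True, True, True, False],
})

# Overall reporting compliance (the abstract reports 64% +/- 13% per paper).
per_paper = records.groupby("publication")["element_present"].mean()
print("mean:", per_paper.mean(), "sd:", per_paper.std())

# Heterogeneity by manuscript section.
print(records.groupby("section")["element_present"].mean())
```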


Subject(s)
Anesthesiology , Anesthesiology/methods , Cohort Studies , Humans , Machine Learning , Research Design
5.
Stud Health Technol Inform ; 290: 1080-1081, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673215

ABSTRACT

Early detection plays a key role in enhancing outcomes for coronary artery disease (CAD). We utilized a big data analytics platform on ∼32,000 patients to trace patients from the first encounter to CAD treatment. There were significant gender-based differences among patients younger than 60 in the time from first encounter to coronary artery bypass grafting (p = 0.03). Recognizing this disparity can significantly change outcomes by avoiding delays in treatment.


Subject(s)
Coronary Artery Disease , Coronary Artery Bypass/adverse effects , Coronary Artery Disease/diagnosis , Coronary Artery Disease/surgery , Data Science , Electronic Health Records , Female , Humans , Risk Factors , Time-to-Treatment , Treatment Outcome
6.
BMC Anesthesiol ; 22(1): 141, 2022 05 11.
Article in English | MEDLINE | ID: mdl-35546657

ABSTRACT

BACKGROUND: The Centers for Disease Control and Prevention's (CDC) March 2016 opioid prescribing guideline did not include prescribing recommendations for surgical pain. Although opioid over-prescription for surgical patients has been well-documented, the potential effects of the CDC guideline on providers' opioid prescribing practices for surgical patients in the United States remain unclear. METHODS: We conducted an interrupted time series analysis (ITSA) of 37,009 opioid-naïve adult patients undergoing inpatient surgery from 2013-2019 at an academic medical center. We assessed quarterly changes in the discharge opioid prescription days' supply, daily and total doses in oral morphine milligram equivalents (OME), and the proportion of patients requiring opioid refills within 30 days of discharge. RESULTS: The discharge opioid prescription declined at a rate of -0.021 (95% CI, -0.045 to 0.003) days per quarter pre-guideline versus -0.201 (95% CI, -0.223 to -0.179) days per quarter post-guideline (p < 0.0001). Likewise, the mean daily and total doses of the discharge opioid prescription declined at rates of -0.387 (95% CI, -0.661 to -0.112) and -7.124 (95% CI, -9.287 to -4.962) OME per quarter pre-guideline versus -2.307 (95% CI, -2.560 to -2.055) and -20.68 (95% CI, -22.66 to -18.69) OME per quarter post-guideline, respectively (p < 0.0001). Opioid refill prescription rates remained unchanged from baseline. CONCLUSIONS: The release of the CDC opioid guideline was associated with a significant reduction in discharge opioid prescriptions without a concomitant increase in the proportion of surgical patients requiring refills within 30 days. By 2019, the mean prescription for opioid-naïve surgical patients had decreased to less than a 3 days' supply and less than 50 OME per day.
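A minimal sketch of the segmented regression underlying an ITSA like this one: a baseline trend plus level and slope changes at the interruption. The toy quarterly series and coefficient values are assumptions, not the study's data.

```python
# Sketch: interrupted time series via segmented OLS regression. The fitted
# 't' coefficient is the pre-guideline slope; 't' + 't_after' gives the
# post-guideline slope.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

quarters = np.arange(28)          # 2013Q1 .. 2019Q4
guideline_q = 13                  # first quarter after the March 2016 release
post = (quarters >= guideline_q).astype(int)

rng = np.random.default_rng(3)
days_supply = (5 - 0.02 * quarters
               - 0.2 * post * (quarters - guideline_q)
               + rng.normal(0, 0.05, size=28))

df = pd.DataFrame({"t": quarters, "post": post,
                   "t_after": post * (quarters - guideline_q),
                   "y": days_supply})

# y ~ baseline trend + level change + slope change after the interruption
fit = smf.ols("y ~ t + post + t_after", data=df).fit()
print(fit.params)
```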


Subject(s)
Analgesics, Opioid , Patient Discharge , Adult , Analgesics, Opioid/therapeutic use , Centers for Disease Control and Prevention, U.S. , Hospitals , Humans , Pain, Postoperative/drug therapy , Pain, Postoperative/prevention & control , Practice Patterns, Physicians' , United States/epidemiology
7.
NPJ Digit Med ; 5(1): 66, 2022 May 31.
Article in English | MEDLINE | ID: mdl-35641814

ABSTRACT

Machine learning (ML) and artificial intelligence (AI) algorithms have the potential to derive insights from clinical data and improve patient outcomes. However, these highly complex systems are sensitive to changes in the environment and liable to performance decay. Even after their successful integration into clinical practice, ML/AI algorithms should be continuously monitored and updated to ensure their long-term safety and effectiveness. To bring AI into maturity in clinical care, we advocate for the creation of hospital units responsible for quality assurance and improvement of these algorithms, which we refer to as "AI-QI" units. We discuss how tools that have long been used in hospital quality assurance and quality improvement can be adapted to monitor static ML algorithms. On the other hand, procedures for continual model updating are still nascent. We highlight key considerations when choosing between existing methods and opportunities for methodological innovation.
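As one concrete example of adapting a classic hospital QI tool to monitoring a static ML model, the sketch below runs a one-sided CUSUM chart on a deployed model's error indicator to flag performance decay. The target rate, slack, and alarm threshold are illustrative assumptions.

```python
# Sketch: CUSUM surveillance of a model's error rate. The statistic
# accumulates excess error beyond an allowance and signals when it
# crosses a threshold, suggesting performance drift.
import numpy as np

rng = np.random.default_rng(4)
errors = np.concatenate([rng.random(300) < 0.10,   # in-control error rate
                         rng.random(100) < 0.20])  # drift after deployment

target, slack, h = 0.10, 0.02, 4.0  # acceptable rate, allowance, alarm threshold
cusum, alarms = 0.0, []
for t, e in enumerate(errors):
    cusum = max(0.0, cusum + (float(e) - target - slack))
    if cusum > h:
        alarms.append(t)
        cusum = 0.0  # reset after signaling for continued surveillance

print("first alarm at observation:", alarms[0] if alarms else None)
```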

8.
Br Med Bull ; 141(1): 15-32, 2022 03 21.
Article in English | MEDLINE | ID: mdl-35107127

ABSTRACT

INTRODUCTION: Management of patients in the acute care setting requires accurate diagnosis and rapid initiation of validated treatments; this setting is therefore one in which cognitive augmentation of the clinician's provision of care with technology rooted in artificial intelligence, such as machine learning (ML), is likely to eventuate. SOURCES OF DATA: PubMed and Google Scholar with search terms that included ML, intensive/critical care unit, electronic health records (EHR), anesthesia information management systems and clinical decision support were the primary sources for this report. AREAS OF AGREEMENT: Different categories of learning of large clinical datasets, often contained in EHRs, are used for training in ML. Supervised learning uses algorithm-based models, including support vector machines, to pair patients' attributes with an expected outcome. Unsupervised learning uses clustering algorithms to define to which disease grouping a patient's attributes most closely approximates. Reinforcement learning algorithms use ongoing environmental feedback to deterministically pursue a likely patient outcome. AREAS OF CONTROVERSY: Application of ML can result in undesirable outcomes over concerns related to fairness, transparency, privacy and accountability. Whether these ML technologies irrevocably change the healthcare workforce remains unresolved. GROWING POINTS: Well-resourced Learning Health Systems are likely to exploit ML technology to gain the fullest benefits for their patients. How these clinical advantages can be extended to patients in health systems that are neither well-endowed, nor have the necessary data gathering technologies, needs to be urgently addressed to avoid further disparities in healthcare.
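To make the learning categories concrete, here is a brief sketch that pairs attributes with an outcome via a support vector machine (supervised) and groups unlabeled patients via k-means clustering (unsupervised); reinforcement learning is omitted since it requires an interactive environment. The features and outcome are synthetic assumptions.

```python
# Sketch: supervised (SVM) vs. unsupervised (k-means) learning on
# EHR-style features. Data are toy stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 6))             # e.g. vitals, labs
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # e.g. deterioration outcome
Xs = StandardScaler().fit_transform(X)

svm = SVC(kernel="rbf").fit(Xs, y)        # supervised: attributes -> outcome
print("train accuracy:", svm.score(Xs, y))

km = KMeans(n_clusters=3, n_init=10, random_state=5).fit(Xs)
print("cluster sizes:", np.bincount(km.labels_))  # unsupervised groupings
```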


Subject(s)
Artificial Intelligence , Machine Learning , Algorithms , Critical Care , Electronic Health Records , Humans
9.
BMC Anesthesiol ; 22(1): 8, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34979919

ABSTRACT

BACKGROUND: Accurate, pragmatic risk stratification for postoperative delirium (POD) is necessary to target preventative resources toward high-risk patients. Machine learning (ML) offers a novel approach to leveraging electronic health record (EHR) data for POD prediction. We sought to develop and internally validate a ML-derived POD risk prediction model using preoperative risk features, and to compare its performance to models developed with traditional logistic regression. METHODS: This was a retrospective analysis of preoperative EHR data from 24,885 adults undergoing a procedure requiring anesthesia care, recovering in the main post-anesthesia care unit, and staying in the hospital at least overnight between December 2016 and December 2019 at either of two hospitals in a tertiary care health system. One hundred fifteen preoperative risk features including demographics, comorbidities, nursing assessments, surgery type, and other preoperative EHR data were used to predict postoperative delirium (POD), defined as any instance of Nursing Delirium Screening Scale ≥2 or positive Confusion Assessment Method for the Intensive Care Unit within the first 7 postoperative days. Two ML models (Neural Network and XGBoost), two traditional logistic regression models ("clinician-guided" and "ML hybrid"), and a previously described delirium risk stratification tool (AWOL-S) were evaluated using the area under the receiver operating characteristic curve (AUC-ROC), sensitivity, specificity, positive likelihood ratio, and positive predictive value. Model calibration was assessed with a calibration curve. Patients with no POD assessments charted or at least 20% of input variables missing were excluded. RESULTS: POD incidence was 5.3%. The AUC-ROC for Neural Net was 0.841 [95% CI 0.816-0.863] and for XGBoost was 0.851 [95% CI 0.827-0.874], which was significantly better than the clinician-guided (AUC-ROC 0.763 [0.734-0.793], p < 0.001) and ML hybrid (AUC-ROC 0.824 [0.800-0.849], p < 0.001) regression models and AWOL-S (AUC-ROC 0.762 [95% CI 0.713-0.812], p < 0.001). Neural Net, XGBoost, and ML hybrid models demonstrated excellent calibration, while calibration of the clinician-guided and AWOL-S models was moderate; they tended to overestimate delirium risk in those already at highest risk. CONCLUSION: Using pragmatically collected EHR data, two ML models predicted POD in a broad perioperative population with high discrimination. Optimal application of the models would provide automated, real-time delirium risk stratification to improve perioperative management of surgical patients at risk for POD.
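A hedged sketch of the evaluation battery this abstract lists: AUC-ROC, sensitivity, specificity, positive likelihood ratio, positive predictive value, and a calibration curve, all computed on synthetic risk scores. The decision threshold and data are assumptions, not the study's models.

```python
# Sketch: discrimination, threshold metrics, and calibration for a binary
# risk model. Labels and scores are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(6)
y = (rng.random(5000) < 0.053).astype(int)                    # ~5.3% POD incidence
p = np.clip(0.05 + 0.3 * y + rng.normal(0, 0.1, 5000), 0, 1)  # model risk scores

print("AUC-ROC:", roc_auc_score(y, p))
pred = (p >= 0.15).astype(int)  # illustrative operating threshold
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
sens, spec = tp / (tp + fn), tn / (tn + fp)
print("sens:", sens, "spec:", spec,
      "LR+:", sens / (1 - spec), "PPV:", tp / (tp + fp))

# Calibration: observed event fraction vs. mean predicted risk per bin.
frac_pos, mean_pred = calibration_curve(y, p, n_bins=10)
print(np.c_[mean_pred, frac_pos])
```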


Subject(s)
Delirium/diagnosis , Electronic Health Records/statistics & numerical data , Machine Learning , Postoperative Complications/diagnosis , Aged , Cohort Studies , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Preoperative Period , Reproducibility of Results , Retrospective Studies
10.
J Clin Monit Comput ; 36(5): 1367-1377, 2022 10.
Article in English | MEDLINE | ID: mdl-34837585

ABSTRACT

Opal is the first published example of a full-stack platform infrastructure for implementation science of ML in anesthesia, built to address the problem of leveraging ML for clinical decision support. Users interact with a secure online Opal web application to select a desired operating room (OR) case cohort for data extraction, visualize datasets with built-in graphing techniques, and run in-client ML or extract data for external use. Opal was used to obtain data from 29,004 unique OR cases from a single academic institution for pre-operative prediction of post-operative acute kidney injury (AKI) based on creatinine KDIGO criteria, using predictors that included pre-operative demographics, past medical history, medications, and flowsheet information. To demonstrate utility with unsupervised learning, Opal was also used to extract intra-operative flowsheet data from 2995 unique OR cases, and patients were clustered using principal component analysis (PCA) and k-means clustering. A gradient boosting machine model was developed using an 80/20 train-to-test ratio and yielded an area under the receiver operating characteristic curve (ROC-AUC) of 0.85 with 95% CI [0.80-0.90]. At the default probability decision threshold of 0.5, the model sensitivity was 0.9 and the specificity was 0.8. K-means clustering was performed to partition the cases into two clusters for hypothesis generation about potential groups of outcomes related to intraoperative vitals. Opal's design has created streamlined ML functionality for researchers and clinicians in the perioperative setting and opens the door for many future clinical applications, including data mining, clinical simulation, high-frequency prediction, and quality improvement.
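A minimal sketch of the unsupervised arm described here: standardize intraoperative features, project with PCA, and partition cases into two k-means clusters. The feature matrix is a synthetic stand-in for Opal's extracted flowsheet data.

```python
# Sketch: PCA projection followed by two-cluster k-means on per-case
# intraoperative vitals summaries. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
vitals = rng.normal(size=(2995, 40))  # per-case intraop vitals summaries

Z = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(vitals))
labels = KMeans(n_clusters=2, n_init=10, random_state=7).fit_predict(Z)
print("cluster sizes:", np.bincount(labels))  # hypothesis-generating groups
```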


Subject(s)
Anesthesia , Decision Support Systems, Clinical , Creatinine , Humans , Implementation Science , Machine Learning
11.
Transplant Direct ; 7(10): e771, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34604507

ABSTRACT

Early prediction of whether a liver allograft will be utilized for transplantation may allow better resource deployment during donor management and improve organ allocation. The national donor management goals (DMG) registry contains critical care data collected during donor management. We developed a machine learning model to predict transplantation of a liver graft based on data from the DMG registry. METHODS: Several machine learning classifiers were trained to predict transplantation of a liver graft. We utilized 127 variables available in the DMG dataset and included data from potential deceased organ donors between April 2012 and January 2019. The outcome was defined as liver recovery for transplantation in the operating room. The prediction was made based on data available 12-18 h after the time of authorization for transplantation. The data were randomly separated into training (60%), validation (20%), and test (20%) sets. We compared the performance of our models to the Liver Discard Risk Index. RESULTS: Of 13,629 donors in the dataset, 9255 (68%) livers were recovered and transplanted, 1519 were recovered but used for research or discarded, and 2855 were not recovered. The optimized gradient boosting machine classifier achieved an area under the receiver operating characteristic curve of 0.84 on the test set, outperforming all other classifiers. CONCLUSIONS: This model predicts successful liver recovery for transplantation in the operating room using data available early during donor management. It performs favorably when compared to existing models and may provide real-time decision support during organ donor management and transplant logistics.
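The 60/20/20 train/validation/test partition described in the methods can be implemented with two successive splits, as in this sketch. The shapes and labels are illustrative assumptions matching the abstract's counts.

```python
# Sketch: 60/20/20 stratified split via two calls to train_test_split.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.normal(size=(13629, 127))            # 127 DMG registry variables
y = (rng.random(13629) < 0.68).astype(int)   # transplanted vs. not

X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4,
                                              random_state=8, stratify=y)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            random_state=8, stratify=y_rest)
print([a.shape[0] for a in (X_tr, X_val, X_te)])  # ~60% / 20% / 20%
```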

12.
Surg Endosc ; 35(1): 182-191, 2021 01.
Article in English | MEDLINE | ID: mdl-31953733

ABSTRACT

BACKGROUND: Postoperative gastrointestinal leak and venous thromboembolism (VTE) are devastating complications of bariatric surgery. The performance of currently available predictive models for these complications remains wanting, while machine learning has shown promise to improve on traditional modeling approaches. The purpose of this study was to compare the ability of two machine learning strategies, artificial neural networks (ANNs) and gradient boosting machines (XGBs), with conventional models using logistic regression (LR) in predicting leak and VTE after bariatric surgery. METHODS: ANN, XGB, and LR prediction models for leak and VTE among adults undergoing initial elective weight loss surgery were trained and validated using preoperative data from 2015 to 2017 from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program database. Data were randomly split into training, validation, and testing populations. Model performance was measured by the area under the receiver operating characteristic curve (AUC) on the testing data for each model. RESULTS: The study cohort contained 436,807 patients. The incidences of leak and VTE were 0.70% and 0.46%, respectively. ANN (AUC 0.75, 95% CI 0.73-0.78) was the best-performing model for predicting leak, followed by XGB (AUC 0.70, 95% CI 0.68-0.72) and then LR (AUC 0.63, 95% CI 0.61-0.65; p < 0.001 for all comparisons). In detecting VTE, ANN, XGB, and LR achieved similar AUCs of 0.65 (95% CI 0.63-0.68), 0.67 (95% CI 0.64-0.70), and 0.64 (95% CI 0.61-0.66), respectively; the performance difference between XGB and LR was statistically significant (p = 0.001). CONCLUSIONS: ANN and XGB outperformed traditional LR in predicting leak. These results suggest that machine learning has the potential to improve risk stratification for bariatric surgery, especially as techniques to extract more granular data from medical records improve. Further studies investigating the merits of machine learning to improve patient selection and risk management in bariatric surgery are warranted.
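One common way to test whether an AUC difference such as ANN versus LR is statistically meaningful is a paired bootstrap over test cases, sketched below on synthetic scores. The abstract does not state which comparison method was used; the bootstrap here and the score distributions are assumptions.

```python
# Sketch: paired bootstrap 95% CI for the difference between two models'
# test-set AUCs on the same cases. Scores are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
y = (rng.random(10_000) < 0.007).astype(int)   # ~0.7% leak incidence
p_ann = np.clip(0.01 + 0.05 * y + rng.normal(0, 0.02, y.size), 0, 1)
p_lr = np.clip(0.01 + 0.02 * y + rng.normal(0, 0.02, y.size), 0, 1)

diffs = []
for _ in range(1000):
    idx = rng.integers(0, y.size, y.size)      # resample test cases with replacement
    if y[idx].sum() == 0:                      # AUC needs both classes present
        continue
    diffs.append(roc_auc_score(y[idx], p_ann[idx]) - roc_auc_score(y[idx], p_lr[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```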


Subject(s)
Anastomotic Leak/etiology , Bariatric Surgery/adverse effects , Machine Learning , Postoperative Complications/etiology , Venous Thromboembolism/etiology , Adult , Cohort Studies , Databases, Factual , Diagnosis, Computer-Assisted , Humans , Logistic Models , Neural Networks, Computer
13.
Comput Biol Med ; 128: 104095, 2021 01.
Article in English | MEDLINE | ID: mdl-33217660

ABSTRACT

Coronary angiography is the gold standard diagnostic tool for coronary artery disease (CAD), but it carries procedural risk: it is an invasive technique requiring arterial puncture, and it subjects the patient to radiation and iodinated contrast exposure. Artificial intelligence (AI) can provide a pretest probability of disease that can be used to triage patients for angiography. This review comprehensively investigates papers published in the domain of CAD detection using different AI techniques from 1991 to 2020, in order to discern broad trends and geographical differences. Moreover, key decision factors affecting CAD diagnosis are identified for different parts of the world by aggregating results across studies. All datasets that have been used for CAD detection, their properties, and the performances achieved using various AI techniques are presented, compared, and analyzed. In particular, the effectiveness of machine learning (ML) and deep learning (DL) techniques to diagnose and predict CAD is reviewed. From PubMed, Scopus, Ovid MEDLINE, and Google Scholar searches, 500 papers were selected for investigation; 256 of these met our criteria and were included in this study. Our findings demonstrate that AI-based techniques have been increasingly applied for the detection of CAD since 2008. AI-based techniques that utilized electrocardiography (ECG), demographic characteristics, symptoms, physical examination findings, and heart rate signals reported high accuracy for the detection of CAD. In these papers, the authors ranked features based on their assessed clinical importance with ML techniques, and the results demonstrate that the relative importance attributed to ML features for CAD diagnosis differs among countries. More recently, DL methods have yielded high CAD detection performance using ECG signals, driving their burgeoning adoption.


Subject(s)
Coronary Artery Disease , Artificial Intelligence , Coronary Angiography , Coronary Artery Disease/diagnostic imaging , Coronary Artery Disease/epidemiology , Electrocardiography , Humans , Machine Learning
14.
J Vis Exp ; (93): e51743, 2014 Nov 13.
Article in English | MEDLINE | ID: mdl-25490614

ABSTRACT

Until recently, astronaut blood samples were collected in-flight, transported to Earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition towards space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory-scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to approach when seeking to test new technology. To help fill the void, we present a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight-testing onboard a parabolic-flight, reduced-gravity aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs, and some custom components, such as the microvolume sample loader and the micromixer, may be of particular interest. The method then shifts focus to flight preparation, offering guidelines and suggestions for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described.


Subject(s)
Aerospace Medicine/instrumentation , Blood Chemical Analysis/instrumentation , Flow Cytometry/instrumentation , Microfluidics/instrumentation , Weightlessness Simulation/instrumentation , Aerospace Medicine/methods , Blood Chemical Analysis/methods , Flow Cytometry/methods , Humans , Hypogravity , Microfluidics/methods , Point-of-Care Systems , Space Flight , Weightlessness Simulation/methods