ABSTRACT
Importance: Machine learning tools are increasingly deployed for risk prediction and clinical decision support in surgery. Class imbalance adversely impacts predictive performance, especially for low-incidence complications. Objective: To evaluate risk-prediction model performance when trained on risk-specific cohorts. Design, Setting, and Participants: This cross-sectional study, performed from February 2024 to July 2024, deployed a deep learning model that generated risk scores for common postoperative complications. A total of 109,445 inpatient operations performed at 2 University of Florida Health hospitals from June 1, 2014, to May 5, 2021, were examined. Exposures: The model was trained de novo on separate cohorts of high-risk, medium-risk, and low-risk Current Procedural Terminology (CPT) codes defined empirically by the incidence of 5 postoperative complications: (1) in-hospital mortality; (2) prolonged intensive care unit (ICU) stay (≥48 hours); (3) prolonged mechanical ventilation (≥48 hours); (4) sepsis; and (5) acute kidney injury (AKI). Low-risk and high-risk cutoffs for complications were defined by the lower-third and upper-third prevalence in the dataset, except for mortality, for which cutoffs were set at 1% or less and greater than 3%, respectively. Main Outcomes and Measures: Model performance metrics were assessed for each risk-specific cohort alongside the baseline model. Metrics included area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), F1 score, and accuracy for each model. Results: A total of 109,445 inpatient operations were examined among patients treated at 2 University of Florida Health hospitals in Gainesville (77,921 procedures [71.2%]) and Jacksonville (31,524 procedures [28.8%]). Median (IQR) patient age was 58 (43-68) years, and median (IQR) Charlson Comorbidity Index score was 2 (0-4). Among the 109,445 operations, 55,646 patients were male (50.8%), and 66,495 patients (60.8%) underwent a nonemergent inpatient operation. Training on the high-risk cohort had variable impact on AUROC but significantly improved AUPRC (as assessed by nonoverlapping 95% confidence intervals) for predicting mortality (0.53; 95% CI, 0.43-0.64), AKI (0.61; 95% CI, 0.58-0.65), and prolonged ICU stay (0.91; 95% CI, 0.89-0.92). It also significantly improved the F1 score for mortality (0.42; 95% CI, 0.36-0.49), prolonged mechanical ventilation (0.55; 95% CI, 0.52-0.58), sepsis (0.46; 95% CI, 0.43-0.49), and AKI (0.57; 95% CI, 0.54-0.59). After controlling for baseline model performance on high-risk cohorts, AUPRC increased significantly for in-hospital mortality only (0.53; 95% CI, 0.42-0.65 vs 0.29; 95% CI, 0.21-0.40). Conclusions and Relevance: In this cross-sectional study, training separate models on a priori, procedure-specific risk classes improved performance on standard evaluation metrics, especially for low-prevalence complications such as in-hospital mortality. Used cautiously, this approach may represent an optimal training strategy for surgical risk-prediction models.
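To make the cohorting rule concrete, below is a minimal sketch of tercile-based risk-tier assignment as described in the Exposures section; the CPT codes, incidence values, and function names are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: assign CPT codes to low/medium/high risk tiers by complication
# incidence. Terciles are used for most complications; mortality uses the
# fixed cutoffs (<=1%, >3%) quoted in the abstract. Toy data throughout.
import numpy as np

def assign_risk_tiers(incidence_by_cpt: dict, mortality: bool = False):
    rates = np.array(list(incidence_by_cpt.values()))
    if mortality:
        lo_cut, hi_cut = 0.01, 0.03
    else:
        lo_cut, hi_cut = np.percentile(rates, [100 / 3, 200 / 3])
    return {cpt: ("low" if r <= lo_cut else "high" if r > hi_cut else "medium")
            for cpt, r in incidence_by_cpt.items()}

toy = {"44140": 0.002, "33533": 0.06, "47562": 0.01, "35301": 0.03}
print(assign_risk_tiers(toy, mortality=True))
# {'44140': 'low', '33533': 'high', '47562': 'low', '35301': 'medium'}
```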
ABSTRACT
The objective of this project was to develop a standardized list of renally eliminated and potentially nephrotoxic drugs to help inform initiatives to improve medication safety. Several available medication lists from the published literature (original research articles and reviews), regulatory agencies, tertiary references, and clinical decision support systems were compiled, consolidated, and compared. Only systemically administered medications were included. Medication combinations were included if at least 1 active ingredient was considered renally dosed or potentially nephrotoxic. The medication list was reviewed for completeness and clinical appropriateness by a multidisciplinary team with expertise in critical care, nephrology, and pharmacy. An initial list of renally dosed and nephrotoxic drugs was created. After reconciliation and consensus from clinical experts, a standardized list of 681 drugs is proposed. This evidence-based standardized list of renally dosed and potentially nephrotoxic drugs will be useful for harmonizing epidemiologic and medication quality improvement studies. In addition, the list can be used clinically for surveillance in nephrotoxin stewardship programs. We suggest an iterative re-evaluation of the list, on an approximately annual basis, as new literature and new medications emerge.
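As an illustration of the compile-consolidate-compare step, here is a minimal sketch that merges candidate drug names from several sources into one deduplicated list while tracking provenance; the source names and normalization rule are illustrative assumptions, not the project's actual method.

```python
# Sketch: consolidate drug lists from multiple sources and record which
# source(s) contributed each entry. Names and sources are toy examples.
def normalize(name: str) -> str:
    return name.strip().lower()

sources = {
    "literature":   ["Vancomycin", "ibuprofen", "Acyclovir"],
    "tertiary_ref": ["vancomycin", "Gentamicin"],
    "cds_system":   ["Ibuprofen", "Tenofovir"],
}

consolidated = sorted({normalize(d) for drugs in sources.values() for d in drugs})
provenance = {d: [s for s, drugs in sources.items()
                  if d in map(normalize, drugs)] for d in consolidated}
print(len(consolidated), provenance["vancomycin"])
# 5 ['literature', 'tertiary_ref']
```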
ABSTRACT
BACKGROUND: Acute kidney injury (AKI) is a multifaceted disease characterized by diverse clinical presentations and mechanisms. Advances in artificial intelligence have propelled the identification of AKI subphenotypes, enhancing our capacity to customize treatments and predict disease trajectories. METHODS: We conducted a systematic review of the literature from 2017 to 2022, focusing on studies that utilized machine learning techniques to identify AKI subphenotypes in adult patients. Data were extracted from the selected studies regarding patient demographics, clustering methodologies, discriminators, and validation efforts. RESULTS: The review highlights significant variability in subphenotype identification across different populations. All studies utilized clinical data such as comorbidities and laboratory variables to group patients. Two studies incorporated biomarkers of endothelial activation and inflammation into the clinical data to identify subphenotypes. The primary discriminators were comorbidities and laboratory trajectories. The association of AKI subphenotypes with mortality, renal recovery, and treatment response was heterogeneous across studies. The use of diverse clustering techniques contributed to variability, complicating the application of findings across different patient populations. CONCLUSIONS: Identifying AKI subphenotypes enables clinicians to better understand and manage individual patient trajectories. Future research should focus on validating these phenotypes in larger, more diverse cohorts to enhance their clinical applicability and support personalized medicine in AKI management.
ABSTRACT
On average, more than 5 million patients are admitted to intensive care units (ICUs) in the US annually, with mortality rates ranging from 10% to 29%. The acuity state of patients in the ICU can quickly change from stable to unstable, sometimes leading to life-threatening conditions. Early detection of deteriorating conditions can assist in more timely interventions and improved survival rates. While Artificial Intelligence (AI)-based models show potential for assessing acuity in a more granular and automated manner, they typically use mortality as a proxy for acuity in the ICU. Furthermore, these methods do not determine the acuity state of a patient (i.e., stable or unstable), the transitions between acuity states, or the need for life-sustaining therapies. In this study, we propose APRICOT-M (Acuity Prediction in Intensive Care Unit-Mamba), a 1M-parameter state-space neural network to predict acuity state, transitions, and the need for life-sustaining therapies in real time among ICU patients. The model integrates ICU data from the preceding four hours (including vital signs, laboratory results, assessment scores, and medications) and patient characteristics (age, sex, race, and comorbidities) to predict the acuity outcomes in the next four hours. Our state-space model can process sparse and irregularly sampled data without manual imputation, thus reducing the noise in input data and increasing inference speed. The model was trained on data from 107,473 patients (142,062 ICU admissions) from 55 hospitals between 2014 and 2017 and validated externally on data from 74,901 patients (101,356 ICU admissions) from 143 hospitals. Additionally, it was validated temporally on data from 12,927 patients (15,940 ICU admissions) from one hospital in 2018-2019 and prospectively on data from 215 patients (369 ICU admissions) from one hospital in 2021-2023. Three datasets were used for training and evaluation: the University of Florida Health (UFH) dataset, the electronic ICU Collaborative Research Database (eICU), and the Medical Information Mart for Intensive Care (MIMIC)-IV dataset. APRICOT-M significantly outperforms the baseline acuity assessment, Sequential Organ Failure Assessment (SOFA), for mortality prediction in both external (AUROC 0.95, CI 0.94-0.95, compared to 0.78, CI 0.78-0.79) and prospective (AUROC 0.99, CI 0.97-1.00, compared to 0.80, CI 0.65-0.92) cohorts, as well as for instability prediction (external AUROC 0.75, CI 0.74-0.75, compared to 0.51, CI 0.51-0.51; prospective AUROC 0.69, CI 0.64-0.74, compared to 0.53, CI 0.50-0.57). This tool has the potential to help clinicians make timely interventions by predicting transitions between acuity states and the need for life-sustaining therapies within the next four hours in the ICU.
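As a rough illustration of how sparse, irregularly sampled ICU data can be fed to a sequence model without manual imputation, the sketch below packs observations into (time, variable, value) triplets over a four-hour window; the variable vocabulary and windowing are assumptions for illustration, not APRICOT-M's actual input format.

```python
# Sketch: tokenize irregular ICU observations as (time, variable, value)
# triplets within one 4-hour window, with no resampling or imputation.
import numpy as np

VOCAB = {"heart_rate": 0, "map": 1, "creatinine": 2, "spo2": 3}

def to_triplets(events, window_start, window_hours=4.0):
    """events: list of (timestamp_hours, variable_name, value)."""
    keep = [(t - window_start, VOCAB[v], x)
            for t, v, x in events
            if window_start <= t < window_start + window_hours and v in VOCAB]
    keep.sort(key=lambda r: r[0])             # chronological order
    return np.array(keep, dtype=np.float32)   # shape (n_events, 3)

obs = [(0.2, "heart_rate", 92), (1.7, "map", 63), (3.9, "spo2", 96)]
print(to_triplets(obs, window_start=0.0))
```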
ABSTRACT
The degree to which artificial intelligence healthcare research is informed by data and stakeholders from community settings has not been previously described. As communities are the principal location of healthcare delivery, engaging them could represent an important opportunity to improve scientific quality. This scoping review systematically maps what is known and unknown about community-engaged artificial intelligence research and identifies opportunities to optimize the generalizability of these applications through involvement of community stakeholders and data throughout model development, validation, and implementation. Embase, PubMed, and MEDLINE databases were searched for articles describing artificial intelligence or machine learning healthcare applications with community involvement in model development, validation, or implementation. Model architecture and performance, the nature of community engagement, and barriers or facilitators to community engagement were reported according to PRISMA extension for Scoping Reviews guidelines. Of approximately 10,880 articles describing artificial intelligence healthcare applications, 21 (0.2%) described community involvement. All articles derived data from community settings, most commonly by leveraging existing datasets and sources that included community subjects, and often bolstered by internet-based data acquisition and subject recruitment. Only one article described inclusion of community stakeholders in designing an application: a natural language processing model that detected cases of likely child abuse with 90% accuracy using harmonized electronic health record notes from both hospital and community practice settings. The primary barrier to including community-derived data was small sample sizes, which may have affected 11 of the 21 studies (53%), introducing substantial risk for overfitting that threatens generalizability. Community engagement in artificial intelligence healthcare application development, validation, or implementation is rare. As healthcare delivery occurs primarily in community settings, investigators should consider engaging community stakeholders in user-centered design, usability, and clinical implementation studies to optimize generalizability.
ABSTRACT
BACKGROUND: Surrogates, proxies, and clinicians making shared treatment decisions for patients who have lost decision-making capacity often fail to honor patients' wishes, due to stress, time pressures, misunderstanding patient values, and projecting personal biases. Advance directives intend to align care with patient values but are limited by low completion rates and application to only a subset of medical decisions. Here, we investigate the potential of large language models (LLMs) to incorporate patient values in supporting critical care clinical decision-making for incapacitated patients in a proof-of-concept study. METHODS: We simulated text-based scenarios for 50 decisionally incapacitated patients for whom a medical condition required imminent clinical decisions regarding specific interventions. For each patient, we also simulated five unique value profiles captured in alternative formats: numeric ranking questionnaires, text-based questionnaires, and free-text narratives. We used pre-trained generative LLMs for two tasks: (1) text extraction of the treatments under consideration and (2) prompt-based question-answering to generate a recommendation in response to the scenario information, extracted treatment, and patient value profiles. Model outputs were compared with adjudications by three domain experts who independently evaluated each scenario and decision. RESULTS AND CONCLUSIONS: Automated extraction of the treatment in question was accurate for 88% (n = 44/50) of scenarios. LLM treatment recommendations received an average adjudicator Likert score of 3.92 out of 5.00 (5 being best) across all patients for being medically plausible and reasonable, and 3.58 out of 5.00 for reflecting the documented values of the patient. Scores were highest when patient values were captured as short, unstructured, free-text narratives based on simulated patient profiles. This proof-of-concept study demonstrates the potential for LLMs to function as support tools for surrogates, proxies, and clinicians aiming to honor the wishes and values of decisionally incapacitated patients.
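A minimal sketch of the two LLM tasks (treatment extraction and value-conditioned recommendation) follows; call_llm is a hypothetical stand-in for any chat-completion API, and the prompts are illustrative assumptions, not the study's actual wording.

```python
# Sketch: the extraction and recommendation steps as two prompt templates.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def extract_treatment(scenario: str) -> str:
    # Task 1: name the intervention under consideration.
    return call_llm(
        "Read the clinical scenario below and name the specific "
        "intervention being considered, in a short phrase.\n\n" + scenario)

def recommend(scenario: str, treatment: str, values: str) -> str:
    # Task 2: recommendation conditioned on scenario + value profile.
    return call_llm(
        f"Scenario:\n{scenario}\n\nTreatment under consideration: {treatment}\n"
        f"Documented patient values:\n{values}\n\n"
        "Acting as a decision-support aid for a surrogate, state whether "
        "the treatment aligns with the patient's documented values, and why.")
```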
Subject(s)
Proxy , Humans , Advance Directives , Decision Making , Clinical Decision-Making/methods , Proof of Concept Study , Surveys and Questionnaires , Language , Critical Care/methods
ABSTRACT
With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the Transformer neural network architecture is rapidly changing many applications. The Transformer is a type of deep learning architecture initially developed to solve general-purpose Natural Language Processing (NLP) tasks and subsequently adapted in many fields, including healthcare. In this survey paper, we provide an overview of how this architecture has been adopted to analyze various forms of healthcare data, including clinical NLP, medical imaging, structured Electronic Health Records (EHR), social media, bio-physiological signals, and biomolecular sequences. We also include articles that used the Transformer architecture for generating surgical instructions and predicting adverse outcomes after surgery under the umbrella of critical care. Under diverse settings, these models have been used for clinical diagnosis, report generation, data reconstruction, and drug/protein synthesis. Finally, we discuss the benefits and limitations of using Transformers in healthcare and examine issues such as computational cost, model interpretability, fairness, alignment with human values, ethical implications, and environmental impact.
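For readers new to the architecture, the following is a minimal sketch of the scaled dot-product self-attention at the Transformer's core, with toy shapes and random weights; it is a didactic illustration, not any model surveyed here.

```python
# Sketch: single-head scaled dot-product self-attention in NumPy.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
    return softmax(scores) @ V                # attention-weighted values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, d_model = 16
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```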
Subject(s)
Deep Learning , Natural Language Processing , Humans , Artificial Intelligence , Delivery of Health Care/organization & administration , Neural Networks, Computer , Electronic Health Records
ABSTRACT
Objective: To determine whether certain patients are vulnerable to errant triage decisions immediately after major surgery and whether there are unique sociodemographic phenotypes within overtriaged and undertriaged cohorts. Background: In a fair system, overtriage of low-acuity patients to intensive care units (ICUs) and undertriage of high-acuity patients to general wards would affect all sociodemographic subgroups equally. Methods: This multicenter, longitudinal cohort study of hospital admissions immediately after major surgery compared hospital mortality and value of care (risk-adjusted mortality/total costs) across 4 cohorts: overtriage (N = 660), risk-matched overtriage controls admitted to general wards (N = 3077), undertriage (N = 2335), and risk-matched undertriage controls admitted to ICUs (N = 4774). K-means clustering identified sociodemographic phenotypes within overtriage and undertriage cohorts. Results: Compared with controls, overtriaged admissions had a predominance of male patients (56.2% vs 43.1%, P < 0.001) and commercial insurance (6.4% vs 2.5%, P < 0.001); undertriaged admissions had a predominance of Black patients (28.4% vs 24.4%, P < 0.001) and greater socioeconomic deprivation. Overtriage was associated with increased total direct costs [$16.2K ($11.4K-$23.5K) vs $14.1K ($9.1K-$20.7K), P < 0.001] and low value of care; undertriage was associated with increased hospital mortality (1.5% vs 0.7%, P = 0.002) and hospice care (2.2% vs 0.6%, P < 0.001) and low value of care. Unique sociodemographic phenotypes within both overtriage and undertriage cohorts had similar outcomes and value of care, suggesting that triage decisions, rather than patient characteristics, drive outcomes and value of care. Conclusions: Postoperative triage decisions should ensure equality across sociodemographic groups by anchoring triage decisions to objective patient acuity assessments, circumventing cognitive shortcuts and mitigating bias.
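A minimal sketch of K-means phenotyping over sociodemographic features, in the spirit of the clustering used here, follows; the features, scaling, and choice of k = 4 are illustrative assumptions rather than the study's specification.

```python
# Sketch: standardize mixed sociodemographic features, then cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# toy feature matrix: age, deprivation index, sex (0/1), insurance (0/1)
X = np.column_stack([
    rng.normal(60, 15, 500),
    rng.normal(0, 1, 500),
    rng.integers(0, 2, 500),
    rng.integers(0, 2, 500),
]).astype(float)

Xs = StandardScaler().fit_transform(X)        # put features on one scale
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xs)
labels = km.labels_                           # phenotype label per admission
print(np.bincount(labels))                    # cluster sizes
```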
ABSTRACT
Acuity assessments are vital for timely interventions and fair resource allocation in critical care settings. Conventional acuity scoring systems depend heavily on subjective patient assessments, leaving room for implicit bias and errors. These assessments are often manual, time-consuming, intermittent, and challenging to interpret consistently across healthcare providers. The risk of bias and error is likely most pronounced in time-constrained and high-stakes environments, such as critical care settings. Furthermore, such scores do not incorporate other information, such as patients' mobility level, which can indicate recovery or deterioration in the intensive care unit (ICU), especially at a granular level. We hypothesized that wearable sensor data could assist in assessing patient acuity at a granular level, especially in conjunction with clinical data from electronic health records (EHR). In this prospective study, we evaluated the impact of integrating mobility data collected from wrist-worn accelerometers with clinical data obtained from EHR for estimating acuity. Accelerometry data were collected from 87 patients wearing accelerometers on their wrists in an academic hospital setting. The data were evaluated using five deep neural network models: VGG, ResNet, MobileNet, SqueezeNet, and a custom Transformer network. These models outperformed a rule-based clinical score (the Sequential Organ Failure Assessment, SOFA) used as a baseline when predicting acuity state (for ground truth, patients were labeled unstable if they needed life-supporting therapies and stable otherwise), particularly regarding precision, sensitivity, and F1 score. The results demonstrate that integrating accelerometer data with demographics and clinical variables improves predictive performance compared to traditional scoring systems in healthcare. Deep learning models consistently outperformed the SOFA baseline across various scenarios, showing notable enhancements in metrics such as the area under the receiver operating characteristic (ROC) curve (AUC), precision, sensitivity, specificity, and F1 score. The most comprehensive scenario, leveraging accelerometer, demographic, and clinical data, achieved the highest AUC of 0.73, compared to 0.53 when using the SOFA score as the baseline, with significant improvements in precision (0.80 vs. 0.23), specificity (0.79 vs. 0.73), and F1 score (0.77 vs. 0.66). This study demonstrates a novel approach beyond the simplistic differentiation between stable and unstable conditions. By incorporating mobility and comprehensive patient information, we distinguish between these states in critically ill patients and capture essential nuances in physiology and functional status. Unlike rudimentary definitions, such as equating low blood pressure with instability, our methodology delves deeper, offering a more holistic understanding and potentially valuable insights for acuity assessment.
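As a simplified illustration of the multimodal fusion idea, the sketch below combines summary accelerometry features with EHR-style variables in a linear classifier; the features, labels, and model are toy assumptions, not the deep architectures evaluated in the study.

```python
# Sketch: fuse wrist-accelerometer summaries with EHR features and fit a
# simple stable/unstable classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def actigraphy_features(acc):                 # acc: (n_samples, 3) in g-units
    mag = np.linalg.norm(acc, axis=1)         # movement magnitude per sample
    return np.array([mag.mean(), mag.std(), np.percentile(mag, 95)])

rng = np.random.default_rng(0)
acc_feats = np.vstack([actigraphy_features(rng.normal(0, 0.3, (600, 3)))
                       for _ in range(200)])
ehr_feats = rng.normal(size=(200, 5))         # e.g., labs, vitals, age
X = np.hstack([acc_feats, ehr_feats])
y = rng.integers(0, 2, 200)                   # 1 = unstable (toy labels)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```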
ABSTRACT
Objective: Patients and clinicians rarely experience healthcare decisions as snapshots in time, but clinical decision support (CDS) systems often represent decisions as snapshots. This scoping review systematically maps challenges and facilitators to longitudinal CDS that are applied at two or more timepoints for the same decision made by the same patient or clinician. Methods: We searched Embase, PubMed, and Medline databases for articles describing development, validation, or implementation of patient- or clinician-facing longitudinal CDS. Validated quality assessment tools were used for article selection. Challenges and facilitators to longitudinal CDS are reported according to PRISMA-ScR guidelines. Results: Eight articles met inclusion criteria; each article described a unique CDS. None used entirely automated data entry, none used living guidelines for updating the evidence base or knowledge engine as new evidence emerged during the longitudinal study, and one included formal readiness for change assessments. Seven of eight CDS were implemented and evaluated prospectively. Challenges were primarily related to suboptimal study design (with unique challenges for each study) or user interface. Facilitators included use of randomized trial designs for prospective enrollment, increased CDS uptake during longitudinal exposure, and machine-learning applications that are tailored to the CDS use case. Conclusions: Despite the intuitive advantages of representing healthcare decisions longitudinally, peer-reviewed literature on longitudinal CDS is sparse. Existing reports suggest opportunities to incorporate longitudinal CDS frameworks, automated data entry, living guidelines, and user readiness assessments. Generating best practice guidelines for longitudinal CDS would require a greater depth and breadth of published work and expert opinion.
ABSTRACT
BACKGROUND: Perhaps nowhere else in the healthcare system than in the intensive care unit environment are the challenges to create useful models with direct time-critical clinical applications more relevant, and the obstacles to achieving those goals more massive. Machine learning-based artificial intelligence (AI) techniques to define states and predict future events are commonplace activities in modern life. However, their penetration into acute care medicine has been slow, stuttering, and uneven. Major obstacles to widespread effective application of AI approaches to the real-time care of the critically ill patient exist and need to be addressed. MAIN BODY: Clinical decision support systems (CDSSs) in acute and critical care environments support clinicians, not replace them at the bedside. As discussed in this review, the reasons are many and include the immaturity of AI-based systems in situational awareness, the fundamental bias in many large databases that do not reflect the target population of patients being treated (making fairness an important issue to address), and technical barriers to timely access to valid data and its display in a fashion useful for clinical workflow. The inherent "black-box" nature of many predictive algorithms and CDSSs makes trustworthiness and acceptance by the medical community difficult. Logistically, collating and curating in real time the multidimensional data streams from various sources needed to inform the algorithms, and ultimately displaying relevant clinical decision support in a format that adapts to individual patient responses and signatures, represent the efferent limb of these systems and are often ignored during initial validation efforts. Similarly, legal and commercial barriers to accessing many existing clinical databases limit studies that address the fairness and generalizability of predictive models and management tools. CONCLUSIONS: AI-based CDSSs are evolving and are here to stay. It is our obligation to be good shepherds of their use and further development.
Subject(s)
Algorithms , Artificial Intelligence , Humans , Critical Care , Intensive Care Units , Delivery of Health Care
ABSTRACT
Clustering analysis of early vital signs may reveal unique patient phenotypes with distinct pathophysiological signatures and clinical outcomes, supporting early clinical decision-making. Phenotyping using early vital signs has proven challenging, as vital signs are typically sampled sporadically. We proposed a novel deep temporal interpolation and clustering network to simultaneously extract latent representations from irregularly sampled vital signs and derive phenotypes. Four distinct clusters were identified. Phenotype A (18%) had the greatest prevalence of comorbid disease, with increased prevalence of prolonged respiratory insufficiency, acute kidney injury, sepsis, and long-term (3-year) mortality. Phenotypes B (33%) and C (31%) had a diffuse pattern of mild organ dysfunction. Phenotype B's favorable short-term clinical outcomes were tempered by the second-highest rate of long-term mortality. Phenotype C had favorable clinical outcomes. Phenotype D (17%) exhibited early and persistent hypotension, a high incidence of early surgery, and substantial biomarker evidence of inflammation. Despite early and severe illness, phenotype D had the second-lowest long-term mortality. Comparison with Sequential Organ Failure Assessment scores showed that the clustering results were not simply a recapitulation of previous acuity assessments. This tool may impact triage decisions and has significant implications for clinical decision support under time constraints and uncertainty.
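A classical analogue of the interpolate-then-cluster idea is sketched below: sporadically sampled vitals are linearly interpolated onto a regular grid and then clustered. The actual method learns the representation end to end with a deep temporal interpolation network, so this is only an illustrative approximation on toy data.

```python
# Sketch: regularize irregular vital-sign samples, then cluster encounters.
import numpy as np
from sklearn.cluster import KMeans

def to_grid(times, values, grid):
    return np.interp(grid, times, values)     # linear temporal interpolation

grid = np.linspace(0, 6, 13)                  # first 6 h, 30-min steps
rng = np.random.default_rng(0)
reps = []
for _ in range(300):                          # one encounter per row
    t = np.sort(rng.uniform(0, 6, rng.integers(3, 9)))
    v = rng.normal(80, 15, t.size)            # e.g., sporadic MAP readings
    reps.append(to_grid(t, v, grid))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(np.array(reps))
print(np.bincount(labels))                    # phenotype sizes
```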
Subject(s)
Organ Dysfunction Scores , Sepsis , Humans , Acute Disease , Phenotype , Biomarkers , Cluster Analysis
ABSTRACT
Standard race adjustments for estimating glomerular filtration rate (GFR) and reference creatinine can yield a lower acute kidney injury (AKI) and chronic kidney disease (CKD) prevalence among African American patients than non-race-adjusted estimates. We developed two race-agnostic computable phenotypes that assess kidney health among 139,152 subjects admitted to University of Florida Health between January 2012 and August 2019, first by removing the race modifier from the estimated GFR and estimated creatinine formulas used by the race-adjusted algorithm (race-agnostic algorithm 1) and second by utilizing the 2021 CKD-EPI refit-without-race formula (race-agnostic algorithm 2) to calculate estimated GFR and estimated creatinine. We compared results using these algorithms to the race-adjusted algorithm in African American patients. Using clinical adjudication, we validated the race-agnostic computable phenotypes developed for preadmission CKD and AKI presence on 300 cases. Race adjustment reclassified 2,113 patients (8%) to no CKD and 7,901 (29%) to a less severe CKD stage compared with race-agnostic algorithm 1, and reclassified 1,208 (5%) to no CKD and 4,606 (18%) to a less severe CKD stage compared with race-agnostic algorithm 2. Of 12,451 AKI encounters based on race-agnostic algorithm 1, race adjustment reclassified 591 to no AKI and 305 to a less severe AKI stage. Of 12,251 AKI encounters based on race-agnostic algorithm 2, race adjustment reclassified 382 to no AKI and 196 (1.6%) to a less severe AKI stage. The phenotyping algorithm based on the refit-without-race formula performed well in identifying patients with CKD and AKI, with sensitivities of 100% (95% confidence interval [CI] 97%-100%) and 99% (95% CI 97%-100%) and specificities of 88% (95% CI 82%-93%) and 98% (95% CI 93%-100%), respectively. Race-agnostic algorithms identified substantial proportions of additional patients with CKD and AKI compared with the race-adjusted algorithm in African American patients. The phenotyping algorithm is promising for identifying patients with kidney disease and improving clinical decision-making.
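For reference, a sketch of the 2021 CKD-EPI creatinine equation ("refit without race") underlying race-agnostic algorithm 2 follows, using the published 2021 coefficients; this is an illustration, not a validated clinical implementation.

```python
# Sketch: 2021 CKD-EPI creatinine equation (refit without race).
# kappa/alpha are sex-specific published constants.
def ckd_epi_2021(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age_years)
    return egfr * 1.012 if female else egfr

print(round(ckd_epi_2021(1.1, 60, female=True), 1))  # ~57.5
```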
Subject(s)
Acute Kidney Injury , Black or African American , Glomerular Filtration Rate , Hospitalization , Renal Insufficiency, Chronic , Adult , Aged , Female , Humans , Male , Middle Aged , Acute Kidney Injury/diagnosis , Acute Kidney Injury/epidemiology , Algorithms , Creatinine/blood , Kidney/physiopathology , Phenotype , Renal Insufficiency, Chronic/physiopathology , Renal Insufficiency, Chronic/epidemiology , Renal Insufficiency, Chronic/diagnosis
ABSTRACT
Acute kidney injury (AKI) often complicates sepsis and is associated with high morbidity and mortality. In recent years, several important clinical trials have improved our understanding of sepsis-associated AKI (SA-AKI) and impacted clinical care. Advances in sub-phenotyping of sepsis and AKI and in clinical trial design offer unprecedented opportunities to fill gaps in knowledge and generate better evidence for improving the outcome of critically ill patients with SA-AKI. In this manuscript, we review the recent literature on clinical trials in sepsis, with a focus on studies that explore SA-AKI as a primary or secondary outcome. We discuss lessons learned and potential opportunities to improve the design of clinical trials and generate actionable evidence in future research. We specifically discuss the role of enrichment strategies to target populations that are most likely to derive benefit, as well as the importance of patient-centered clinical trial endpoints and appropriate trial designs, with the aim of providing guidance for designing future trials.
Subject(s)
Acute Kidney Injury , Sepsis , Humans , Acute Kidney Injury/therapy , Acute Kidney Injury/complications , Critical Illness/therapy , Sepsis/complications , Sepsis/therapy , Clinical Trials as Topic
ABSTRACT
BACKGROUND: There is no consensus regarding safe intraoperative blood pressure thresholds that protect against postoperative acute kidney injury (AKI). This review examines the existing literature to delineate safe intraoperative hypotension (IOH) parameters to prevent postoperative AKI. METHODS: PubMed, Cochrane Central, and Web of Science were systematically searched for articles published between 2015 and 2022 relating the effects of IOH to postoperative AKI. RESULTS: Our search yielded 19 articles. IOH risk thresholds ranged from <50 to <75 mmHg for mean arterial pressure (MAP) and from <70 to <100 mmHg for systolic blood pressure (SBP). MAP below 65 mmHg for over 5 min was the most cited threshold (N = 13) consistently associated with increased postoperative AKI. Greater magnitude and duration of MAP and SBP below these thresholds were generally associated with a dose-dependent increase in postoperative AKI incidence. CONCLUSIONS: While a consistent definition for IOH remains elusive, the evidence suggests that MAP below 65 mmHg for over 5 min is strongly associated with postoperative AKI, with risk increasing with the magnitude and duration of IOH.
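As a concrete illustration of the threshold finding, the sketch below flags IOH exposure from a series of MAP readings: total minutes below 65 mmHg and whether any continuous run exceeds 5 minutes; the one-reading-per-minute sampling interval is an assumption.

```python
# Sketch: quantify intraoperative hypotension exposure from MAP readings.
import numpy as np

def ioh_exposure(map_mmhg, interval_min=1.0, threshold=65.0, run_min=5.0):
    below = np.asarray(map_mmhg) < threshold
    total_min = below.sum() * interval_min
    longest = run = 0
    for b in below:                    # longest continuous run below threshold
        run = run + 1 if b else 0
        longest = max(longest, run)
    return total_min, longest * interval_min >= run_min

maps = [72, 68, 64, 63, 62, 61, 60, 66, 70]   # one reading per minute (toy)
print(ioh_exposure(maps))                      # (5.0, True)
```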
Subject(s)
Acute Kidney Injury , Hypotension , Intraoperative Complications , Postoperative Complications , Humans , Acute Kidney Injury/etiology , Acute Kidney Injury/epidemiology , Acute Kidney Injury/prevention & control , Hypotension/etiology , Hypotension/epidemiology , Hypotension/prevention & control , Postoperative Complications/epidemiology , Postoperative Complications/prevention & control , Postoperative Complications/etiology , Intraoperative Complications/prevention & control , Intraoperative Complications/epidemiology , Intraoperative Complications/etiology
ABSTRACT
BACKGROUND: Postoperative acute kidney injury (AKI) is common after major surgery and is associated with increased morbidity, mortality, and cost. Additionally, recent studies demonstrate that time to renal recovery may have a substantial impact on clinical outcomes. We hypothesized that patients with delayed renal recovery after major vascular surgery would have increased complications, mortality, and hospital cost. METHODS: A single-center retrospective cohort of patients undergoing nonemergent major vascular surgery between June 1, 2014, and October 1, 2020, was analyzed. Development of postoperative AKI (defined using Kidney Disease: Improving Global Outcomes (KDIGO) criteria as a >50% relative or >0.3 mg/dl absolute increase in serum creatinine relative to reference after surgery and before discharge) was evaluated. Patients were divided into 3 groups: no AKI, rapidly reversed AKI (<48 hours), and persistent AKI (≥48 hours). Multivariable generalized linear models were used to evaluate the association between AKI groups and postoperative complications, 90-day mortality, and hospital cost. RESULTS: A total of 1,881 patients undergoing 1,980 vascular procedures were included. Thirty-five percent of patients developed postoperative AKI. Patients with persistent AKI had longer intensive care unit and hospital stays, as well as more mechanical ventilation days. In multivariable logistic regression analysis, persistent AKI was a major predictor of 90-day mortality (odds ratio 4.1, 95% confidence interval 2.4-7.1). Adjusted average cost was higher for patients with any type of AKI. The incremental cost of having any AKI ranged from $3,700 to $9,100, even after adjustment for comorbidities and other postoperative complications. The adjusted average cost was higher among patients with persistent AKI than among those with no or rapidly reversed AKI. CONCLUSIONS: Persistent AKI after vascular surgery is associated with increased complications, mortality, and cost. Strategies to prevent and aggressively treat AKI, specifically persistent AKI, in the perioperative setting are imperative to optimize care for this population.
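A minimal sketch of the KDIGO-style AKI flag used in this cohort definition follows; staging and the 48-hour persistence logic are omitted, so this captures only the binary criterion quoted above.

```python
# Sketch: AKI flag per the quoted KDIGO-based definition; AKI if postop
# creatinine rises >50% relative to, or >0.3 mg/dl above, the reference.
def has_aki(reference_scr: float, postop_scr: float) -> bool:
    relative = postop_scr > 1.5 * reference_scr
    absolute = postop_scr - reference_scr > 0.3
    return relative or absolute

print(has_aki(1.0, 1.6))   # True: >50% relative rise
print(has_aki(1.0, 1.25))  # False: neither criterion met
```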
Subject(s)
Acute Kidney Injury , Hospital Costs , Humans , Retrospective Studies , Risk Factors , Treatment Outcome , Postoperative Complications , Vascular Surgical Procedures/adverse effects , Acute Kidney Injury/diagnosis , Acute Kidney Injury/etiology , Hospital Mortality
ABSTRACT
Drug-induced kidney disease (DIKD) accounts for about one-fourth of all cases of acute kidney injury (AKI) in hospitalized patients, especially in the critical care setting. There is no standard definition or classification system for DIKD. To address this, a phenotype definition of DIKD using expert consensus was introduced in 2015. Recently, a novel framework for DIKD classification was proposed that incorporates functional change and tissue damage biomarkers. Using this framework, medications were stratified into four categories, "dysfunction without damage," "damage without dysfunction," "both dysfunction and damage," and "neither dysfunction nor damage," along with the predominant mechanism(s) of nephrotoxicity for drugs and drug classes. Here, we briefly describe these mechanisms and provide examples of drugs and drug classes in each category of the proposed framework. In addition, we consider the possible movement of a patient's kidney disease between certain categories under specific conditions. Finally, we discuss opportunities for and barriers to the adoption of this framework for DIKD classification in routine clinical practice. This new classification system aligns DIKD with the proposed categorization of AKI, offering clarity as well as consistency for clinicians and researchers.
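To make the two-axis framework concrete, a minimal sketch of the four-way categorization follows; the inputs are boolean flags for functional change and damage-biomarker positivity, with the underlying thresholds left as assumptions outside the sketch.

```python
# Sketch: the proposed DIKD framework as a two-axis lookup over
# dysfunction (functional change) and damage (tissue biomarkers).
def dikd_category(dysfunction: bool, damage: bool) -> str:
    if dysfunction and damage:
        return "both dysfunction and damage"
    if dysfunction:
        return "dysfunction without damage"
    if damage:
        return "damage without dysfunction"
    return "neither dysfunction nor damage"

print(dikd_category(dysfunction=True, damage=False))
# dysfunction without damage
```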