1.
Article in English | MEDLINE | ID: mdl-38470976

ABSTRACT

BACKGROUND: Estimating the risk of revision after arthroplasty could inform patient and surgeon decision-making. However, there is a lack of well-performing prediction models assisting in this task, which may be due to current conventional modeling approaches such as traditional survivorship estimators (such as Kaplan-Meier) or competing risk estimators. Recent advances in machine learning survival analysis might improve decision support tools in this setting. Therefore, this study aimed to assess the performance of machine learning compared with that of conventional modeling to predict revision after arthroplasty. QUESTION/PURPOSE: Does machine learning perform better than traditional regression models for estimating the risk of revision for patients undergoing hip or knee arthroplasty? METHODS: Eleven datasets from published studies from the Dutch Arthroplasty Register reporting on factors associated with revision or survival after partial or total knee and hip arthroplasty between 2018 and 2022 were included in our study. The 11 datasets were observational registry studies, with a sample size ranging from 3038 to 218,214 procedures. We developed a set of time-to-event models for each dataset, leading to 11 comparisons. A set of predictors (factors associated with revision surgery) was identified based on the variables that were selected in the included studies. We assessed the predictive performance of two state-of-the-art statistical time-to-event models for 1-, 2-, and 3-year follow-up: a Fine and Gray model (which models the cumulative incidence of revision) and a cause-specific Cox model (which models the hazard of revision). These were compared with a machine-learning approach (a random survival forest model, which is a decision tree-based machine-learning algorithm for time-to-event analysis). 
Performance was assessed according to discriminative ability (time-dependent area under the receiver operating characteristic curve), calibration (slope and intercept), and overall prediction error (scaled Brier score). Discrimination, known as the area under the receiver operating characteristic curve, measures the model's ability to distinguish patients who achieved the outcomes from those who did not and ranges from 0.5 to 1.0, with 1.0 indicating the highest discrimination score and 0.50 the lowest. Calibration plots the predicted versus the observed probabilities; a perfect plot has an intercept of 0 and a slope of 1. The Brier score calculates a composite of discrimination and calibration, with 0 indicating perfect prediction and 1 the poorest. A scaled version of the Brier score, 1 - (model Brier score/null model Brier score), can be interpreted as the amount of overall prediction error. RESULTS: We found no differences between the machine learning approach and the traditional regression models (the competing risks estimator and the cause-specific Cox model) in discriminative ability for patients undergoing arthroplasty, that is, in distinguishing patients who received a revision from those who did not. We found no consistent differences between the validated performance (time-dependent area under the receiver operating characteristic curve) of the different modeling approaches; differences ranged between -0.04 and 0.03 across the 11 datasets (the time-dependent area under the receiver operating characteristic curve of the models across the 11 datasets ranged between 0.52 and 0.68). In addition, the calibration metrics and scaled Brier scores produced comparable estimates, showing no advantage of machine learning over traditional regression models. CONCLUSION: Machine learning did not outperform traditional regression models. 
CLINICAL RELEVANCE: Neither machine learning modeling nor traditional regression methods were sufficiently accurate to offer prognostic information when predicting revision arthroplasty. The benefit of these modeling approaches may be limited in this context.
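The scaled Brier score defined in this abstract, 1 - (model Brier score/null model Brier score), can be sketched in a few lines. The outcome labels and predicted risks below are invented for illustration, not data from the study:

```python
# Hedged sketch of the scaled Brier score described above, computed on toy
# predicted 1-year revision risks. Data are illustrative, not from the study.

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probability and outcome."""
    return sum((p - y) ** 2 for p, y in zip(y_prob, y_true)) / len(y_true)

def scaled_brier(y_true, y_prob):
    """1 - (model Brier / null-model Brier); the null model predicts the
    observed event rate for everyone. 1 is perfect prediction, 0 means no
    better than the null model."""
    prevalence = sum(y_true) / len(y_true)
    null = brier_score(y_true, [prevalence] * len(y_true))
    return 1 - brier_score(y_true, y_prob) / null

y = [0, 0, 1, 0, 1, 0, 0, 0]                   # 1 = revision within 1 year
p = [0.1, 0.2, 0.7, 0.1, 0.6, 0.2, 0.1, 0.3]   # model-predicted risks

print(round(scaled_brier(y, p), 3))
```

Because the score is scaled against the null model, it is comparable across datasets with different event rates, which matters when revision is rare.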

2.
Crit Care Med ; 51(2): 291-300, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36524820

ABSTRACT

OBJECTIVES: Many machine learning (ML) models have been developed for application in the ICU, but few models have been subjected to external validation. The performance of these models in new settings therefore remains unknown. The objective of this study was to assess the performance of an existing decision support tool based on a ML model predicting readmission or death within 7 days after ICU discharge before, during, and after retraining and recalibration. DESIGN: A gradient boosted ML model was developed and validated on electronic health record data from 2004 to 2021. We performed an independent validation of this model on electronic health record data from 2011 to 2019 from a different tertiary care center. SETTING: Two ICUs in tertiary care centers in The Netherlands. PATIENTS: Adult patients who were admitted to the ICU and stayed for longer than 12 hours. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: We assessed discrimination by area under the receiver operating characteristic curve (AUC) and calibration (slope and intercept). We retrained and recalibrated the original model and assessed performance via a temporal validation design. The final retrained model was cross-validated on all data from the new site. Readmission or death within 7 days after ICU discharge occurred in 577 of 10,052 ICU admissions (5.7%) at the new site. External validation revealed moderate discrimination with an AUC of 0.72 (95% CI 0.67-0.76). Retrained models showed improved discrimination with AUC 0.79 (95% CI 0.75-0.82) for the final validation model. Calibration was poor initially and good after recalibration via isotonic regression. CONCLUSIONS: In this era of expanding availability of ML models, external validation and retraining are key steps to consider before applying ML models to new settings. Clinicians and decision-makers should take this into account when considering applying new ML models to their local settings.
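The recalibration step described here, isotonic regression mapping the original model's probabilities onto outcomes at the new site, can be sketched as follows. The data are synthetic and scikit-learn's `IsotonicRegression` is assumed available; this is an illustration of the technique, not the authors' pipeline:

```python
# Illustrative recalibration of a miscalibrated model via isotonic regression.
# Synthetic "new site" data: predictions discriminate well but run too high.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
true_risk = rng.uniform(0.0, 0.3, size=2000)
y = rng.binomial(1, true_risk)                  # readmission/death within 7 days
p_original = np.clip(true_risk + 0.2, 0, 1)     # systematically overestimated risks

recal = IsotonicRegression(out_of_bounds="clip")
recal.fit(p_original, y)                        # learn a monotone correction
p_recal = recal.predict(p_original)

# After recalibration, mean predicted risk should track the observed event rate.
print(round(y.mean(), 3), round(p_original.mean(), 3), round(p_recal.mean(), 3))
```

Because the mapping is monotone, recalibration fixes the calibration intercept and slope without changing the model's ranking of patients, so discrimination (AUC) is preserved.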


Subjects
Patient Discharge, Patient Readmission, Adult, Humans, Intensive Care Units, Hospitalization, Machine Learning
3.
Health Qual Life Outcomes ; 18(1): 240, 2020 Jul 20.
Article in English | MEDLINE | ID: mdl-32690011

ABSTRACT

BACKGROUND: Cost-effectiveness models require quality of life utilities calculated from generic preference-based questionnaires, such as EQ-5D. We evaluated the performance of available algorithms for QLQ-C30 conversion into EQ-5D-3L based utilities in a metastatic colorectal cancer (mCRC) patient population and subsequently developed an mCRC-specific algorithm. The influence of mapping on cost-effectiveness outcomes was evaluated. METHODS: Three available algorithms were compared with observed utilities from the CAIRO3 study. Six models were developed using 5-fold cross-validation: predicting EQ-5D-3L tariffs from QLQ-C30 functional scale scores, continuous QLQ-C30 scores or dummy levels with a random effects model (RE), a most likely probability method on EQ-5D-3L functional scale scores, a beta regression model on QLQ-C30 functional scale scores and a separate equations subgroup approach on QLQ-C30 functional scale scores. Performance was assessed, and algorithms were tested on incomplete QLQ-C30 questionnaires. The influence of utility mapping on the incremental cost per QALY gained (ICER) was evaluated in an existing Dutch mCRC cost-effectiveness model. RESULTS: The available algorithms yielded mean utilities of 0.87 (SD 0.14) for algorithm 1 and 0.81 (SD 0.15) for algorithm 2 (both Dutch tariff), and 0.81 (SD 0.19) for algorithm 3. Algorithms 1 and 3 differed significantly from the mean observed utility (0.83 ± 0.17 with Dutch tariff, 0.80 ± 0.20 with U.K. tariff). All new models yielded predicted utilities close to the observed utilities; differences were not statistically significant. The existing algorithms yielded ICERs €10,140 below and €1765 above the observed EQ-5D-3L based ICER (€168,048). The ICER from the preferred newly developed algorithm was €5094 higher than the observed EQ-5D-3L based ICER. The disparity was explained by minimal differences in incremental QALYs between models. CONCLUSION: Available mapping algorithms predict utilities with sufficient accuracy. 
With the commonly used statistical methods, we did not succeed in developing an improved mapping algorithm. Importantly, cost-effectiveness outcomes in this study were comparable to the original model outcomes between different mapping algorithms. Therefore, mapping can be an adequate solution for cost-effectiveness studies using either a previously designed and validated algorithm or an algorithm developed in this study.
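The core mapping idea, predicting an EQ-5D utility from QLQ-C30 functional scale scores with a regression model validated by 5-fold cross-validation, can be sketched minimally. The scores, coefficients, and noise level below are invented; the study's actual models (random effects, beta regression, subgroup equations) are richer than this linear stand-in:

```python
# Hypothetical sketch: cross-validated linear mapping from QLQ-C30 functional
# scores (0-100, higher = better functioning) to an EQ-5D-3L-style utility.
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.uniform(0, 100, size=(n, 3))              # three invented scale scores
utility = np.clip(0.3 + X @ [0.002, 0.003, 0.002]
                  + rng.normal(0, 0.05, n), None, 1.0)  # utility capped at 1.0

X1 = np.column_stack([np.ones(n), X])             # add an intercept column
folds = np.arange(n) % 5                          # 5-fold CV assignment
preds = np.empty(n)
for k in range(5):
    train, test = folds != k, folds == k
    beta, *_ = np.linalg.lstsq(X1[train], utility[train], rcond=None)
    preds[test] = X1[test] @ beta                 # out-of-fold predictions

rmse = np.sqrt(np.mean((preds - utility) ** 2))
print(round(rmse, 3))
```

Out-of-fold prediction error is the quantity a mapping study reports, since in-sample error would flatter the algorithm.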


Subjects
Algorithms, Colorectal Neoplasms/psychology, Quality of Life, Cost-Benefit Analysis, Female, Humans, Male, Middle Aged, Quality-Adjusted Life Years, Surveys and Questionnaires/standards
4.
JMIR Med Inform ; 12: e51925, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38236635

ABSTRACT

BACKGROUND: Patients with cancer starting systemic treatment programs, such as chemotherapy, often develop depression. A prediction model may assist physicians and health care workers in the early identification of these vulnerable patients. OBJECTIVE: This study aimed to develop a prediction model for depression risk within the first month of cancer treatment. METHODS: We included 16,159 patients diagnosed with cancer starting chemo- or radiotherapy treatment between 2008 and 2021. Machine learning models (eg, least absolute shrinkage and selection operator [LASSO] logistic regression) and natural language processing models (Bidirectional Encoder Representations from Transformers [BERT]) were used to develop multimodal prediction models using both electronic health record data and unstructured text (patient emails and clinician notes). Model performance was assessed in an independent test set (n=5387, 33%) using area under the receiver operating characteristic curve (AUROC), calibration curves, and decision curve analysis to assess potential initial clinical impact. RESULTS: Among 16,159 patients, 437 (2.7%) received a depression diagnosis within the first month of treatment. The LASSO logistic regression models based on the structured data (AUROC 0.74, 95% CI 0.71-0.78) and structured data with email classification scores (AUROC 0.74, 95% CI 0.71-0.78) had the best discriminative performance. The BERT models based on clinician notes and structured data with email classification scores had AUROCs around 0.71. The logistic regression model based on email classification scores alone performed poorly (AUROC 0.54, 95% CI 0.52-0.56), and the model based solely on clinician notes had the worst performance (AUROC 0.50, 95% CI 0.49-0.52). Calibration was good for the logistic regression models, whereas the BERT models produced overly extreme risk estimates even after recalibration. 
There was a small range of decision thresholds for which the best-performing model showed promising clinical effectiveness. The risks were underestimated for female and Black patients. CONCLUSIONS: The results demonstrated the potential and limitations of machine learning and multimodal models for predicting depression risk in patients with cancer. Future research is needed to further validate these models, refine the outcome label and predictors related to mental health, and address biases across subgroups.
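The best-performing approach here, an L1-penalised (LASSO) logistic regression on structured data evaluated by AUROC on a held-out test split, can be sketched with synthetic stand-ins for the EHR features. None of the variables below are the study's actual predictors:

```python
# Hedged sketch: LASSO logistic regression on synthetic structured features,
# evaluated by AUROC on a held-out test set. Only two of twenty features carry
# signal, so the L1 penalty should shrink most noise coefficients to zero.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 4000, 20
X = rng.normal(size=(n, p))
logit = -3.5 + 1.2 * X[:, 0] + 0.8 * X[:, 1]          # rare outcome, like depression here
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
n_zero = int((model.coef_ == 0).sum())
print(round(auc, 2), n_zero, "coefficients shrunk to zero")
```

The sparsity is the practical appeal of LASSO in this setting: it doubles as variable selection over many candidate EHR predictors.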

5.
J Clin Epidemiol ; 172: 111387, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38729274

ABSTRACT

Clinical prediction models provide risks of health outcomes that can inform patients and support medical decisions. However, most models never make it to actual implementation in practice. A commonly heard reason for this lack of implementation is that prediction models are often not externally validated. While we generally encourage external validation, we argue that an external validation is often neither sufficient nor required as an essential step before implementation. As such, any available external validation should not be perceived as a license for model implementation. We clarify this argument by discussing 3 common misconceptions about external validation. We argue that there is not one type of recommended validation design, not always a necessity for external validation, and sometimes a need for multiple external validations. The insights from this paper can help readers to consider, design, interpret, and appreciate external validation studies.

6.
EBioMedicine ; 92: 104632, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37269570

ABSTRACT

BACKGROUND: Machine learning (ML) predictions are becoming increasingly integrated into medical practice. One commonly used method, ℓ1-penalised logistic regression (LASSO), can estimate patient risk for disease outcomes but is limited by only providing point estimates. Instead, Bayesian logistic LASSO regression (BLLR) models provide distributions for risk predictions, giving clinicians a better understanding of predictive uncertainty, but they are not commonly implemented. METHODS: This study evaluates the predictive performance of different BLLRs compared to standard logistic LASSO regression, using real-world, high-dimensional, structured electronic health record (EHR) data from cancer patients initiating chemotherapy at a comprehensive cancer centre. Multiple BLLR models were compared against a LASSO model using an 80-20 random split and 10-fold cross-validation to predict the risk of acute care utilization (ACU) after starting chemotherapy. FINDINGS: This study included 8439 patients. The LASSO model predicted ACU with an area under the receiver operating characteristic curve (AUROC) of 0.806 (95% CI: 0.775-0.834). BLLR with a Horseshoe+ prior and a posterior approximated by Metropolis-Hastings sampling showed similar performance: 0.807 (95% CI: 0.780-0.834) and offers the advantage of uncertainty estimation for each prediction. In addition, BLLR could identify predictions too uncertain to be automatically classified. BLLR uncertainties were stratified by different patient subgroups, demonstrating that predictive uncertainties significantly differ across race, cancer type, and stage. INTERPRETATION: BLLRs are a promising yet underutilised tool that increases explainability by providing risk estimates while offering a similar level of performance to standard LASSO-based models. Additionally, these models can identify patient subgroups with higher uncertainty, which can augment clinical decision-making. 
FUNDING: This work was supported in part by the National Library of Medicine of the National Institutes of Health under Award Number R01LM013362. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
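The key idea of this abstract, a posterior distribution of risk for each patient rather than a point estimate, can be illustrated with a toy Bayesian logistic regression sampled by Metropolis-Hastings. A single predictor and a plain normal prior are used for brevity; the study's Horseshoe+ prior and high-dimensional EHR data are far richer:

```python
# Toy illustration: random-walk Metropolis-Hastings over a 2-parameter
# Bayesian logistic regression, yielding a risk *distribution* per patient.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=300)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.5 * x))))  # synthetic ACU labels

def log_post(beta):
    """Log posterior: Bernoulli likelihood + N(0, 5^2) prior on both parameters."""
    logits = beta[0] + beta[1] * x
    ll = np.sum(y * logits - np.log1p(np.exp(logits)))
    return ll - np.sum(beta ** 2) / (2 * 5.0 ** 2)

beta, lp, samples = np.zeros(2), log_post(np.zeros(2)), []
for step in range(4000):
    prop = beta + rng.normal(scale=0.15, size=2)      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
        beta, lp = prop, lp_prop
    if step >= 1000:                                  # discard burn-in
        samples.append(beta.copy())
samples = np.array(samples)

# Posterior risk distribution for a hypothetical new patient with x = 1.0
risk = 1 / (1 + np.exp(-(samples[:, 0] + samples[:, 1] * 1.0)))
lo, hi = np.percentile(risk, [2.5, 97.5])
print(f"risk ~ {risk.mean():.2f} (95% interval {lo:.2f}-{hi:.2f})")
```

A wide interval flags a prediction as "too uncertain to be automatically classified", which is the triage behaviour the authors highlight.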


Subjects
Clinical Decision-Making, Humans, Bayes Theorem, Uncertainty, Logistic Models
7.
Stud Health Technol Inform ; 302: 817-818, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203503

ABSTRACT

When patients with cancer develop depression, it is often left untreated. We developed a prediction model for depression risk within the first month after starting cancer treatment using machine learning and Natural Language Processing (NLP) models. The LASSO logistic regression model based on structured data performed well, whereas the NLP model based on only clinician notes did poorly. After further validation, prediction models for depression risk could lead to earlier identification and treatment of vulnerable patients, ultimately improving cancer care and treatment adherence.


Subjects
Depression, Neoplasms, Humans, Depression/diagnosis, Patients, Machine Learning, Risk Assessment, Natural Language Processing, Electronic Health Records, Neoplasms/complications
8.
JMIR Hum Factors ; 10: e39114, 2023 Jan 05.
Article in English | MEDLINE | ID: mdl-36602843

ABSTRACT

BACKGROUND: Artificial intelligence-based clinical decision support (AI-CDS) tools have great potential to benefit intensive care unit (ICU) patients and physicians. There is a gap between the development and implementation of these tools. OBJECTIVE: We aimed to investigate physicians' perspectives and their current decision-making behavior before implementing a discharge AI-CDS tool for predicting readmission and mortality risk after ICU discharge. METHODS: We conducted a survey of physicians involved in decision-making on discharge of patients at two Dutch academic ICUs between July and November 2021. Questions were divided into four domains: (1) physicians' current decision-making behavior with respect to discharging ICU patients, (2) perspectives on the use of AI-CDS tools in general, (3) willingness to incorporate a discharge AI-CDS tool into daily clinical practice, and (4) preferences for using a discharge AI-CDS tool in daily workflows. RESULTS: Most of the 64 respondents (of 93 contacted, 69%) were familiar with AI (62/64, 97%) and had positive expectations of AI, with 55 of 64 (86%) believing that AI could support them in their work as a physician. The respondents disagreed on whether the decision to discharge a patient was complex (23/64, 36% agreed and 22/64, 34% disagreed); nonetheless, most (59/64, 92%) agreed that a discharge AI-CDS tool could be of value. Significant differences were observed between physicians from the 2 academic sites, which may be related to different levels of involvement in the development of the discharge AI-CDS tool. CONCLUSIONS: ICU physicians showed a favorable attitude toward the integration of AI-CDS tools into the ICU setting in general, and in particular toward a tool to predict a patient's risk of readmission and mortality within 7 days after discharge. The findings of this questionnaire will be used to improve the implementation process and training of end users.

9.
J Am Med Inform Assoc ; 29(12): 2178-2181, 2022 11 14.
Article in English | MEDLINE | ID: mdl-36048021

ABSTRACT

The lack of diversity, equity, and inclusion continues to hamper the artificial intelligence (AI) field and is especially problematic for healthcare applications. In this article, we expand on the need for diversity, equity, and inclusion, specifically focusing on the composition of AI teams. We call to action leaders at all levels to make team inclusivity and diversity the centerpieces of AI development, not the afterthought. These recommendations take into consideration mitigation at several levels, including outreach programs at the local level, diversity statements at the academic level, and regulatory steps at the federal level.


Subjects
Artificial Intelligence, Physicians, Humans, Delivery of Health Care
10.
Sci Rep ; 12(1): 20363, 2022 11 27.
Article in English | MEDLINE | ID: mdl-36437306

ABSTRACT

Early detection of severe asthma exacerbations through home monitoring data in patients with stable mild-to-moderate chronic asthma could help to timely adjust medication. We evaluated the potential of machine learning methods compared to a clinical rule and logistic regression to predict severe exacerbations. We used daily home monitoring data from two studies in asthma patients (development: n = 165 and validation: n = 101 patients). Two ML models (XGBoost, one-class SVM) and a logistic regression model provided predictions based on peak expiratory flow and asthma symptoms. These models were compared with an asthma action plan rule. Severe exacerbations occurred in 0.2% of all daily measurements in the development (154/92,787 days) and validation cohorts (94/40,185 days). In the validation cohort, the AUC was 0.85 (0.82-0.87) for the best-performing XGBoost model and 0.88 (0.86-0.90) for logistic regression. The XGBoost model provided overly extreme risk estimates, whereas the logistic regression underestimated predicted risks. Sensitivity and specificity were better overall for XGBoost and logistic regression compared to the one-class SVM and the clinical rule. We conclude that ML models did not beat logistic regression in predicting short-term severe asthma exacerbations based on home monitoring data. Clinical application remains challenging in settings with low event incidence, where high sensitivity comes at the cost of high false alarm rates.
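The low-incidence problem flagged in the conclusion is easy to make concrete: with severe exacerbations on only ~0.2% of days, even a well-discriminating model raises many false alarms per true alarm. The sensitivity and specificity values below are illustrative, not the study's:

```python
# A small numeric sketch of the base-rate problem described above.

def false_alarms_per_true_alarm(sensitivity, specificity, incidence):
    """Expected false alarms raised for every true alarm, given the daily
    incidence of the event being predicted."""
    tp = sensitivity * incidence                  # true-positive rate per day
    fp = (1 - specificity) * (1 - incidence)      # false-positive rate per day
    return fp / tp

# At 90% sensitivity and 95% specificity with a 0.2% daily incidence:
print(round(false_alarms_per_true_alarm(0.90, 0.95, 0.002), 1))  # ≈ 27.7
```

Roughly 28 false alarms per detected exacerbation would quickly erode trust in daily home-monitoring alerts, regardless of which model produced them.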


Subjects
Asthma, Humans, Logistic Models, Time Factors, Asthma/diagnosis, Machine Learning, Sensitivity and Specificity
11.
NPJ Digit Med ; 5(1): 2, 2022 Jan 10.
Article in English | MEDLINE | ID: mdl-35013569

ABSTRACT

While the opportunities of ML and AI in healthcare are promising, the growth of complex data-driven prediction models requires careful quality and applicability assessment before they are applied and disseminated in daily practice. This scoping review aimed to identify actionable guidance for those closely involved in AI-based prediction model (AIPM) development, evaluation and implementation including software engineers, data scientists, and healthcare professionals and to identify potential gaps in this guidance. We performed a scoping review of the relevant literature providing guidance or quality criteria regarding the development, evaluation, and implementation of AIPMs using a comprehensive multi-stage screening strategy. PubMed, Web of Science, and the ACM Digital Library were searched, and AI experts were consulted. Topics were extracted from the identified literature and summarized across the six phases at the core of this review: (1) data preparation, (2) AIPM development, (3) AIPM validation, (4) software development, (5) AIPM impact assessment, and (6) AIPM implementation into daily healthcare practice. From 2683 unique hits, 72 relevant guidance documents were identified. Substantial guidance was found for data preparation, AIPM development and AIPM validation (phases 1-3), while later phases clearly have received less attention (software development, impact assessment and implementation) in the scientific literature. The six phases of the AIPM development, evaluation and implementation cycle provide a framework for responsible introduction of AI-based prediction models in healthcare. Additional domain and technology specific research may be necessary and more practical experience with implementing AIPMs is needed to support further guidance.

12.
Crit Care Explor ; 3(5): e0402, 2021 May.
Article in English | MEDLINE | ID: mdl-34079945

ABSTRACT

BACKGROUND: Acute respiratory failure occurs frequently in hospitalized patients and often begins outside the ICU, associated with increased length of stay, cost, and mortality. Delays in decompensation recognition are associated with worse outcomes. OBJECTIVES: The objective of this study is to predict acute respiratory failure requiring any advanced respiratory support (including noninvasive ventilation). With the advent of the coronavirus disease pandemic, concern regarding acute respiratory failure has increased. DERIVATION COHORT: All admission encounters from January 2014 to June 2017 from three hospitals in the Emory Healthcare network (82,699). VALIDATION COHORT: External validation cohort: all admission encounters from January 2014 to June 2017 from a fourth hospital in the Emory Healthcare network (40,143). Temporal validation cohort: all admission encounters from February to April 2020 from four hospitals in the Emory Healthcare network coronavirus disease tested (2,564) and coronavirus disease positive (389). PREDICTION MODEL: All admission encounters had vital signs, laboratory, and demographic data extracted. Exclusion criteria included invasive mechanical ventilation started within the operating room or advanced respiratory support within the first 8 hours of admission. Encounters were discretized into hour intervals from 8 hours after admission to discharge or advanced respiratory support initiation and binary labeled for advanced respiratory support. Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment, our eXtreme Gradient Boosting-based algorithm, was compared against Modified Early Warning Score. 
RESULTS: Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment had significantly better discrimination than Modified Early Warning Score (area under the receiver operating characteristic curve 0.85 vs 0.57 [test], 0.84 vs 0.61 [external validation]). Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment maintained a positive predictive value (0.31-0.21) similar to that of Modified Early Warning Score greater than 4 (0.29-0.25) while identifying 6.62 (validation) to 9.58 (test) times more true positives. Furthermore, Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment performed more effectively in temporal validation (area under the receiver operating characteristic curve 0.86 [coronavirus disease tested], 0.93 [coronavirus disease positive]), while identifying 4.25 to 4.51 times more true positives. CONCLUSIONS: Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment is more effective than Modified Early Warning Score in predicting respiratory failure requiring advanced respiratory support at external validation and in coronavirus disease 2019 patients. Silent prospective validation is necessary before local deployment.
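The labelling scheme described in the prediction-model section, encounters discretised into hourly intervals from 8 hours after admission until discharge or respiratory-support initiation, can be sketched as below. This is a simplified reading of the scheme (the final interval is labelled positive when support starts), with invented field names, not the authors' code:

```python
# Simplified sketch of hourly discretisation and binary labelling as
# described above. Hours are counted from admission.

def hourly_labels(admit_hr, discharge_hr, support_hr=None):
    """Yield (hour, label) pairs from 8 h after admission until discharge or
    advanced-respiratory-support initiation; the final interval is labelled
    1 if support started then, else every interval is 0."""
    stop = support_hr if support_hr is not None else discharge_hr
    return [
        (h, 1 if (support_hr is not None and h == support_hr) else 0)
        for h in range(admit_hr + 8, stop + 1)
    ]

# Encounter admitted at hour 0, advanced respiratory support begun at hour 12:
print(hourly_labels(0, 48, support_hr=12))
# Encounter discharged at hour 10 without support (all-negative intervals):
print(hourly_labels(0, 10))
```

Each hourly row would then carry that hour's vital signs and laboratory values as features, which is what makes a gradient-boosting model applicable to the stream.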

13.
Int J Med Inform ; 152: 104496, 2021 08.
Article in English | MEDLINE | ID: mdl-34020171

ABSTRACT

OBJECTIVE: Early identification of emergency department (ED) patients who need hospitalization is essential for quality of care and patient safety. We aimed to compare machine learning (ML) models predicting the hospitalization of ED patients and conventional regression techniques at three points in time after ED registration. METHODS: We analyzed consecutive ED patients of three hospitals using the Netherlands Emergency Department Evaluation Database (NEED). We developed prediction models for hospitalization using an increasing number of data available at triage, ∼30 min (including vital signs) and ∼2 h (including laboratory tests) after ED registration, using ML (random forest, gradient boosted decision trees, deep neural networks) and multivariable logistic regression analysis (including spline transformations for continuous predictors). Demographics, urgency, presenting complaints, disease severity and proxies for comorbidity, and complexity were used as covariates. We compared the performance using the area under the ROC curve in independent validation sets from each hospital. RESULTS: We included 172,104 ED patients of whom 66,782 (39%) were hospitalized. The AUC of the multivariable logistic regression model was 0.82 (0.78-0.86) at triage, 0.84 (0.81-0.86) at ∼30 min and 0.83 (0.75-0.92) after ∼2 h. The best performing ML model over time was the gradient boosted decision trees model with an AUC of 0.84 (0.77-0.88) at triage, 0.86 (0.82-0.89) at ∼30 min and 0.86 (0.74-0.93) after ∼2 h. CONCLUSIONS: Our study showed that machine learning models had excellent but similar predictive performance compared with the logistic regression model for predicting hospital admission. In comparison to the 30-min model, the 2-h model did not show a performance improvement. After further validation, these prediction models could support management decisions by real-time feedback to medical personnel.


Subjects
Emergency Service, Hospital, Triage, Hospitalization, Hospitals, Humans, Machine Learning
14.
JAMA Netw Open ; 4(11): e2131674, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34730820

ABSTRACT

Importance: Discrepancies in oxygen saturation measured by pulse oximetry (Spo2), when compared with arterial oxygen saturation (Sao2) measured by arterial blood gas (ABG), may differentially affect patients according to race and ethnicity. However, the association of these disparities with health outcomes is unknown. Objective: To examine racial and ethnic discrepancies between Sao2 and Spo2 measures and their associations with clinical outcomes. Design, Setting, and Participants: This multicenter, retrospective, cross-sectional study included 3 publicly available electronic health record (EHR) databases (ie, the Electronic Intensive Care Unit-Clinical Research Database and Medical Information Mart for Intensive Care III and IV) as well as Emory Healthcare (2014-2021) and Grady Memorial (2014-2020) databases, spanning 215 hospitals and 382 ICUs. From 141,600 hospital encounters with recorded ABG measurements, 87,971 participants with first ABG measurements and an Spo2 of at least 88% within 5 minutes before the ABG test were included. Exposures: Patients with hidden hypoxemia (ie, Spo2 ≥88% but Sao2 <88%). Main Outcomes and Measures: Outcomes, stratified by race and ethnicity, were Sao2 for each Spo2, hidden hypoxemia prevalence, initial demographic characteristics (age, sex), clinical outcomes (in-hospital mortality, length of stay), organ dysfunction by scores (Sequential Organ Failure Assessment [SOFA]), and laboratory values (lactate and creatinine levels) before and 24 hours after the ABG measurement. Results: The first Spo2-Sao2 pairs from 87,971 patient encounters (27,713 [42.9%] women; mean [SE] age, 62.2 [17.0] years; 1919 [2.3%] Asian patients; 26,032 [29.6%] Black patients; 2397 [2.7%] Hispanic patients, and 57,632 [65.5%] White patients) were analyzed, with 4859 (5.5%) having hidden hypoxemia. 
Hidden hypoxemia was observed in all subgroups with varying incidence (Black: 1785 [6.8%]; Hispanic: 160 [6.0%]; Asian: 92 [4.8%]; White: 2822 [4.9%]) and was associated with greater organ dysfunction 24 hours after the ABG measurement, as evidenced by higher mean (SE) SOFA scores (7.2 [0.1] vs 6.29 [0.02]) and higher in-hospital mortality (eg, among Black patients: 369 [21.1%] vs 3557 [15.0%]; P < .001). Furthermore, patients with hidden hypoxemia had higher mean (SE) lactate levels before (3.15 [0.09] mg/dL vs 2.66 [0.02] mg/dL) and 24 hours after (2.83 [0.14] mg/dL vs 2.27 [0.02] mg/dL) the ABG test, with less lactate clearance (-0.54 [0.12] mg/dL vs -0.79 [0.03] mg/dL). Conclusions and Relevance: In this study, there was greater variability in oxygen saturation levels for a given Spo2 level in patients who self-identified as Black, followed by Hispanic, Asian, and White. Patients with and without hidden hypoxemia were demographically and clinically similar at baseline ABG measurement by SOFA scores, but those with hidden hypoxemia subsequently experienced higher organ dysfunction scores and higher in-hospital mortality.
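The exposure definition used in this study is a simple paired-reading rule, and a minimal sketch makes it explicit. The reading pairs below are invented for illustration:

```python
# Minimal sketch of the "hidden hypoxemia" exposure defined above: a pulse
# oximetry reading that looks acceptable (SpO2 >= 88%) while the arterial
# blood gas shows true hypoxemia (SaO2 < 88%). Pairs are illustrative.

def hidden_hypoxemia(spo2, sao2):
    """True when SpO2 >= 88 but SaO2 < 88, per the study's definition."""
    return spo2 >= 88 and sao2 < 88

pairs = [(95, 92), (90, 85), (88, 87), (86, 84)]   # (SpO2, SaO2) readings
flags = [hidden_hypoxemia(spo2, sao2) for spo2, sao2 in pairs]
print(flags)
```

Note the study also required SpO2 ≥88% within 5 minutes before the ABG for inclusion, so the last pair above (SpO2 86) would not have entered the cohort at all.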


Subjects
Ethnicity/statistics & numerical data, Hypoxia/complications, Hypoxia/ethnology, Multiple Organ Failure/complications, Multiple Organ Failure/epidemiology, Racial Groups/statistics & numerical data, Aged, Creatinine/blood, Cross-Sectional Studies, Female, Georgia/epidemiology, Humans, Male, Middle Aged, Multiple Organ Failure/mortality, Oximetry, Oxygen Saturation, Retrospective Studies
15.
Soc Sci Med ; 222: 180-187, 2019 02.
Article in English | MEDLINE | ID: mdl-30658291

ABSTRACT

Chronic diseases and functional limitations may have serious and persistent consequences for one's quality of life (QoL). Over time, however, their negative impact on QoL may diminish because of adaptation. Understanding how much people adapt helps to correctly separate the effects attributable to interventions from those arising from adaptation and thus facilitates a better estimation of the effects of disease and treatment on QoL. To date, however, there is little empirical evidence on adaptation in older populations. In particular, it is unclear to what extent dimensions of QoL like health and overall experience with life are influenced by adaptation. This paper studies adaptation to functional limitations in 5000 respondents of the Survey of Health, Ageing and Retirement in Europe (SHARE) who develop disabilities during the span of the 5 waves of data collection between 2004 and 2015. To examine the association between time since the onset of functional limitations and self-perceived health and life satisfaction, a fixed effects ordered logit model is used. We found evidence supporting adaptation in life satisfaction, corresponding to a return to pre-onset levels of life satisfaction. Also in the self-perceived health dimension, adaptation does occur, but it does not occur fast enough to offset the negative changes in underlying health. This means that observational studies that measure one of these two outcome measures should be aware that part or all of the effects found may be due to adaptation.


Subjects
Activities of Daily Living/psychology, Health Status, Personal Satisfaction, Quality of Life/psychology, Age Factors, Aged, Aged, 80 and over, Chronic Disease, Female, Humans, Male, Resilience, Psychological, Sex Factors, Socioeconomic Factors, Time Factors