1.
BMJ Open ; 14(4): e074604, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609314

ABSTRACT

RATIONALE: Intensive care units (ICUs) admit the most severely ill patients. Once these patients are discharged from the ICU to a step-down ward, they continue to have their vital signs monitored by nursing staff, with Early Warning Score (EWS) systems being used to identify those at risk of deterioration. OBJECTIVES: We report the development and validation of an enhanced continuous scoring system for predicting adverse events, which combines vital signs measured routinely on acute care wards (as used by most EWS systems) with a risk score of a future adverse event calculated on discharge from the ICU. DESIGN: A modified Delphi process identified candidate variables commonly available in electronic records as the basis for a 'static' score of the patient's condition immediately after discharge from the ICU. L1-regularised logistic regression was used to estimate the in-hospital risk of a future adverse event. We then constructed a model of physiological normality using vital sign data from the day of hospital discharge. This model is combined with the static score and used continuously to quantify and update the patient's risk of deterioration throughout their hospital stay. SETTING: Data from two National Health Service Foundation Trusts (UK) were used to develop and (externally) validate the model. PARTICIPANTS: A total of 12 394 vital sign measurements were acquired from 273 patients after ICU discharge for the development set, and 4831 from 136 patients in the validation cohort. RESULTS: Outcome validation of our model yielded an area under the receiver operating characteristic curve of 0.724 for predicting ICU readmission or in-hospital death within 24 hours. It showed improved performance compared with other risk scoring systems, including the National EWS (0.653). CONCLUSIONS: We showed that a scoring system incorporating data from a patient's stay in the ICU has better performance than commonly used EWS systems based on vital signs alone. TRIAL REGISTRATION NUMBER: ISRCTN32008295.
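The static discharge score described above is, in essence, an L1-regularised (lasso) logistic regression over routinely recorded variables. A minimal sketch of that modelling step, using scikit-learn and hypothetical feature and outcome names rather than the study's actual variable set:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: one row per patient at the moment of ICU discharge.
df = pd.DataFrame({
    "age": rng.normal(65, 10, 300),
    "icu_days": rng.gamma(2.0, 3.0, 300),
    "last_lactate": rng.gamma(1.5, 1.0, 300),
    "adverse_event": rng.integers(0, 2, 300),   # ICU readmission or in-hospital death
})
features = ["age", "icu_days", "last_lactate"]

# L1 (lasso) penalty shrinks uninformative coefficients to zero,
# giving an embedded form of variable selection.
static_model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
static_model.fit(df[features], df["adverse_event"])

# The "static score": predicted in-hospital risk at ICU discharge, which the
# paper then combines with a continuously updated vital-sign normality model.
df["static_risk"] = static_model.predict_proba(df[features])[:, 1]
print(df["static_risk"].describe())
```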


Subject(s)
Patient Readmission, State Medicine, Humans, Hospital Mortality, Intensive Care Units, Critical Care
2.
J Crit Care ; 74: 154218, 2023 04.
Article in English | MEDLINE | ID: mdl-36494257

ABSTRACT

PURPOSE: Many intensive care units (ICUs) have transitioned from systemic heparin anticoagulation (SHA) to regional citrate anticoagulation (RCA) for continuous kidney replacement therapy (CKRT). We evaluated the clinical and health economic impacts of ICU transition to RCA. MATERIALS AND METHODS: We surveyed all adult general ICUs in England and Wales to identify transition dates and conducted a micro-costing study in eight ICUs. We then conducted an interrupted time-series analysis of linked, routinely collected health records. RESULTS: In 69,001 patients who received CKRT (8585 RCA, 60,416 SHA) in 181 ICUs between 2009 and 2017, transition to RCA was not associated with a change in 90-day mortality (adjusted odds ratio 0.98, 95% CI 0.89-1.08) but was associated with step-increases in duration of kidney support (0.53 days, 95% CI 0.28-0.79), advanced cardiovascular support (0.23 days, 95% CI 0.09-0.38) and ICU length of stay (0.86 days, 95% CI 0.24-1.49). The estimated one-year incremental net monetary benefit per patient was -£2376 (95% CI -£3841 to -£911), with an estimated likelihood of cost-effectiveness of <0.1%. CONCLUSIONS: Transition to RCA was associated with significant increases in healthcare resource use, without corresponding clinical benefit, and is highly unlikely to be cost-effective over a one-year time horizon.
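The interrupted time-series design used here models each outcome over calendar time with a step (level) change at the unit's transition date. A simplified segmented-regression sketch with statsmodels, using invented column names and synthetic data rather than the study's linked records:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical unit-level monthly data: months since start, an indicator for the
# post-transition period, and mean ICU length of stay in that month.
rng = np.random.default_rng(42)
n = 96
df = pd.DataFrame({"month": np.arange(n)})
df["post_rca"] = (df["month"] >= 48).astype(int)          # transition at month 48
df["icu_los"] = 6 + 0.01 * df["month"] + 0.9 * df["post_rca"] + rng.normal(0, 1, n)

# Segmented regression: an underlying time trend plus a step change at transition.
model = smf.ols("icu_los ~ month + post_rca", data=df).fit()
print(model.params["post_rca"])          # estimated step change (days)
print(model.conf_int().loc["post_rca"])  # 95% confidence interval
```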


Subject(s)
Acute Kidney Injury, Heparin, Adult, Humans, Heparin/therapeutic use, Citric Acid/therapeutic use, Anticoagulants/therapeutic use, Citrates, Renal Replacement Therapy, Critical Care, Acute Kidney Injury/therapy
4.
Health Technol Assess ; 26(13): 1-58, 2022 02.
Article in English | MEDLINE | ID: mdl-35212260

ABSTRACT

BACKGROUND: In the UK, 10% of admissions to intensive care units receive continuous renal replacement therapy. Over the last decade, regional citrate anticoagulation has been replacing systemic heparin anticoagulation and is now used in > 50% of intensive care units, despite little evidence of safety or effectiveness. AIM: The aim of the Renal Replacement Anticoagulant Management study was to evaluate the clinical and health economic impacts of intensive care units moving from systemic heparin anticoagulation to regional citrate anticoagulation for continuous renal replacement therapy. DESIGN: This was an observational comparative effectiveness study. SETTING: The setting was NHS adult general intensive care units in England and Wales. PARTICIPANTS: Participants were adults receiving continuous renal replacement therapy in an intensive care unit participating in the Intensive Care National Audit & Research Centre Case Mix Programme national clinical audit between 1 April 2009 and 31 March 2017. INTERVENTIONS: Exposure - continuous renal replacement therapy in an intensive care unit after completion of transition to regional citrate anticoagulation. Comparator - continuous renal replacement therapy in an intensive care unit before starting transition to regional citrate anticoagulation or in a unit that had not transitioned. OUTCOME MEASURES: Primary effectiveness - all-cause mortality at 90 days. Primary economic - incremental net monetary benefit at 1 year. Secondary outcomes - mortality at hospital discharge, 30 days and 1 year; days of renal, cardiovascular and advanced respiratory support in intensive care unit; length of stay in intensive care unit and hospital; bleeding and thromboembolic events; prevalence of end-stage renal disease at 1 year; and estimated lifetime incremental net monetary benefit. DATA SOURCES: Individual patient data from the Intensive Care National Audit & Research Centre Case Mix Programme were linked with the UK Renal Registry, Hospital Episode Statistics (for England), Patient Episodes Data for Wales and Civil Registrations (Deaths) data sets, and combined with identified periods of systemic heparin anticoagulation and regional citrate anticoagulation (survey of intensive care units). Staff time and consumables were obtained from micro-costing. Continuous renal replacement therapy system failures were estimated from the Post-Intensive Care Risk-adjusted Alerting and Monitoring data set. EuroQol-3 Dimensions, three-level version, health-related quality of life was obtained from the Intensive Care Outcomes Network study. RESULTS: Out of the 188 (94.9%) units that responded to the survey, 182 (96.8%) use continuous renal replacement therapy. After linkage, data were available from 69,001 patients across 181 intensive care units (60,416 during periods of systemic heparin anticoagulation use and 8585 during regional citrate anticoagulation use). The change to regional citrate anticoagulation was not associated with a step change in 90-day mortality (odds ratio 0.98, 95% confidence interval 0.89 to 1.08).
Secondary outcomes showed step increases in days of renal support (difference in means 0.53 days, 95% confidence interval 0.28 to 0.79 days), advanced cardiovascular support (difference in means 0.23 days, 95% confidence interval 0.09 to 0.38 days) and advanced respiratory support (difference in means 0.53 days, 95% confidence interval 0.03 to 1.03 days), with a trend toward fewer bleeding episodes (odds ratio 0.90, 95% confidence interval 0.76 to 1.06) with transition to regional citrate anticoagulation. The micro-costing study indicated that regional citrate anticoagulation was more expensive and was associated with an estimated incremental net monetary loss (step change) of -£2376 (95% confidence interval -£3841 to -£911). The estimated likelihood of cost-effectiveness at 1 year was less than 0.1%. LIMITATIONS: Lack of patient-level treatment data means that the results represent average effects of changing to regional citrate anticoagulation in intensive care units. Administrative data are subject to variation in data quality over time, which may contribute to observed trends. CONCLUSIONS: The introduction of regional citrate anticoagulation has not improved outcomes for patients and is likely to have substantially increased costs. This study demonstrates the feasibility of evaluating effects of changes in practice using routinely collected data. FUTURE WORK: (1) Prioritise other changes in clinical practice for evaluation and (2) undertake methodological research to understand potential implications of trends in data quality. TRIAL REGISTRATION: This trial is registered as ClinicalTrials.gov NCT03545750. FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 26, No. 13. See the NIHR Journals Library website for further project information.
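The incremental net monetary benefit reported above follows the standard definition NMB = (incremental QALYs × willingness-to-pay threshold) − incremental cost. A toy calculation with made-up QALY and cost increments (not figures taken from the study) illustrates how a negative value arises when extra cost is not offset by health gain:

```python
# Net monetary benefit: NMB = dQALY * lambda - dCost
# Illustrative inputs only; the study's own estimates are not reproduced here.
wtp_threshold = 20_000       # £ per QALY, a commonly used NHS threshold
incremental_qalys = -0.01    # slightly fewer QALYs with the new strategy
incremental_cost = 2_176     # £ extra cost per patient

nmb = incremental_qalys * wtp_threshold - incremental_cost
print(f"Incremental net monetary benefit: £{nmb:,.0f}")  # negative => not cost-effective
```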


Acute kidney injury, which prevents kidneys from working properly, is common in critically ill patients being treated in an intensive care unit. Patients with acute kidney injury are treated with a machine that takes over kidney functions, a process called continuous renal replacement therapy. Traditionally, as part of continuous renal replacement therapy, heparin (an anticoagulant that stops the blood from clotting) is added to the blood as it enters the continuous renal replacement therapy machine. Recently, citrate anticoagulation (an alternative to heparin) has been increasingly used in intensive care units in the UK. However, the increased use of citrate is happening without evidence that this is better for patients and cost-effective for the NHS. We aimed to find out whether or not changing to citrate anticoagulation for continuous renal replacement therapy is more beneficial than heparin anticoagulation for patients with acute kidney injury treated in an intensive care unit. We also looked at whether or not changing to citrate is cost-effective for the NHS. We used routinely collected data from the Intensive Care National Audit & Research Centre Case Mix Programme national clinical audit to identify 69,001 patients who received continuous renal replacement therapy in an intensive care unit in England or Wales between 1 April 2009 and 31 March 2017. To get a more comprehensive view of the long-term effects of changing to citrate, we 'linked' data from the 69,001 patients together with other routinely collected data sets to get information on their hospital admissions, longer-term kidney problems and survival after leaving the intensive care unit. We combined this information with a survey of anticoagulant use in intensive care units in England and Wales to compare patients who received continuous renal replacement therapy with heparin and citrate. We found that the change to citrate was not associated with a significant change in the death rate at 90 days, but that it was more expensive for hospitals. Our findings suggest that the change to citrate-based anticoagulation may have been premature and should cause clinicians in intensive care units that are still using systemic heparin anticoagulation to pause before making this change.


Subject(s)
Continuous Renal Replacement Therapy, Heparin, Adult, Anticoagulants/adverse effects, Citric Acid, Cost-Benefit Analysis, Critical Care, Heparin/adverse effects, Humans, Quality of Life
5.
J Crit Care ; 67: 149-156, 2022 02.
Article in English | MEDLINE | ID: mdl-34798373

ABSTRACT

BACKGROUND: New-onset atrial fibrillation (NOAF) is common in patients on an intensive care unit (ICU). Evidence guiding treatments is limited, though recent reports suggest beta blocker (BB) therapy is associated with reduced mortality. METHODS: We conducted a multicentre cohort study of adult patients admitted to 3 ICUs in the UK and 5 ICUs in the USA. We analysed the haemodynamic changes associated with NOAF. We analysed rate control, rhythm control, and hospital mortality associated with common NOAF treatments. We balanced admission and post-NOAF, pre-treatment covariates across treatment groups. RESULTS: NOAF was followed by a systolic blood pressure reduction of 5 mmHg (p < 0.001). After adjustment, digoxin therapy was associated with inferior rate control versus amiodarone (adjusted hazard ratio (aHR) 0.56 [95% CI 0.34-0.92]). Calcium channel blocker (CCB) therapy was associated with inferior rhythm control versus amiodarone (aHR 0.59 [0.37-0.92]). No difference was detected between BBs and amiodarone in rate control (aHR 1.15 [0.91-1.46]), rhythm control (aHR 0.85 [0.69-1.05]), or hospital mortality (aHR 1.03 [0.53-2.03]). CONCLUSIONS: NOAF in ICU patients is followed by decreases in blood pressure. BBs and amiodarone are associated with similar cardiovascular control and appear superior to digoxin and CCBs. Accounting for key confounders removes previously reported mortality benefits associated with BB treatment.
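Adjusted hazard ratios of this kind come from time-to-event models of achieving rate or rhythm control, with baseline and pre-treatment covariates included. A bare-bones sketch with the lifelines library, using fabricated variable names rather than the study's covariate set or balancing scheme:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "hours_to_rate_control": rng.exponential(12, n),   # follow-up time from treatment start
    "rate_control_achieved": rng.integers(0, 2, n),    # 1 = event observed
    "digoxin_vs_amiodarone": rng.integers(0, 2, n),    # 1 = digoxin, 0 = amiodarone
    "age": rng.normal(68, 10, n),
    "noradrenaline_dose": rng.gamma(1.5, 0.1, n),
})

# Cox model for time to rate control, adjusted for the other columns; an
# exp(coef) < 1 for digoxin_vs_amiodarone would indicate slower (inferior)
# rate control relative to amiodarone.
cph = CoxPHFitter()
cph.fit(df, duration_col="hours_to_rate_control", event_col="rate_control_achieved")
cph.print_summary()
```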


Subject(s)
Atrial Fibrillation, Atrial Fibrillation/drug therapy, Cohort Studies, Humans, Intensive Care Units
6.
Br J Anaesth ; 128(2): 272-282, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34872717

ABSTRACT

BACKGROUND: Anaemia is common and associated with poor outcomes in survivors of critical illness. However, the optimal treatment strategy is unclear. METHODS: We conducted a multicentre, feasibility RCT to compare a single dose of ferric carboxymaltose 1000 mg i.v. with usual care in patients being discharged from the ICU with moderate or severe anaemia (haemoglobin ≤100 g L⁻¹). We collected data on feasibility (recruitment, randomisation, follow-up), biological efficacy, and clinical outcomes. RESULTS: Ninety-eight participants were randomly allocated (49 in each arm). The overall recruitment rate was 34%, with 6.5 participants recruited on average per month. Forty-seven of 49 (96%) participants received the intervention. Patient-reported outcome measures were available for 79/93 (85%) survivors at 90 days. Intravenous iron resulted in a higher mean (standard deviation [sd]) haemoglobin at 28 days (119.8 [13.3] vs 106.7 [14.9] g L⁻¹) and 90 days (130.5 [15.1] vs 122.7 [17.3] g L⁻¹), adjusted mean difference 10.98 g L⁻¹ (95% confidence interval [CI], 4.96-17.01; P<0.001) over 90 days after randomisation. Infection rates were similar in both groups. Hospital readmissions at 90 days post-ICU discharge were lower in the i.v. iron group (7/40 vs 15/39; risk ratio=0.46; 95% CI, 0.21-0.99; P=0.037). The median (inter-quartile range) post-ICU hospital stay was shorter in the i.v. iron group but did not reach statistical significance (5.0 [3.0-13.0] vs 9.0 [5.0-16.0] days, P=0.15). CONCLUSION: A large, multicentre RCT of i.v. iron to treat anaemia in survivors of critical illness appears feasible and is necessary to determine the effects on patient-centred outcomes. CLINICAL TRIAL REGISTRATION: ISRCTN13721808 (www.isrctn.com).
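As a sanity check on the readmission result above, the risk ratio can be recomputed directly from the reported counts (7/40 vs 15/39). The sketch below also derives a conventional unadjusted log-scale confidence interval, which in this case reproduces the reported 0.21-0.99:

```python
import math

readmit_iron, n_iron = 7, 40
readmit_usual, n_usual = 15, 39

risk_iron = readmit_iron / n_iron        # 0.175
risk_usual = readmit_usual / n_usual     # ~0.385
rr = risk_iron / risk_usual              # ~0.46, matching the reported risk ratio

# Approximate 95% CI on the log scale (Katz method); unadjusted, for illustration only.
se_log_rr = math.sqrt(1/readmit_iron - 1/n_iron + 1/readmit_usual - 1/n_usual)
lo, hi = (math.exp(math.log(rr) - 1.96 * se_log_rr),
          math.exp(math.log(rr) + 1.96 * se_log_rr))
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```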


Subject(s)
Anemia/drug therapy, Ferric Compounds/administration & dosage, Hematinics/administration & dosage, Maltose/analogs & derivatives, Administration, Intravenous, Adolescent, Adult, Aged, Aged, 80 and over, Critical Care, Feasibility Studies, Female, Follow-Up Studies, Hemoglobins/analysis, Humans, Length of Stay, Male, Maltose/administration & dosage, Middle Aged, Patient Readmission/statistics & numerical data, Patient Reported Outcome Measures, Young Adult
7.
J Intensive Care Soc ; 23(4): 414-424, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36751347

ABSTRACT

Background: New-onset atrial fibrillation (NOAF) is common during critical illness and is associated with poor outcomes. Many risk factors for NOAF during critical illness have been identified, overlapping with risk factors for atrial fibrillation in patients in community settings. To develop interventions to prevent NOAF during critical illness, modifiable risk factors must be identified. These have not been studied in detail and it is not clear which variables warrant further study. Methods: We undertook an international three-round Delphi process using an expert panel to identify important predictors of NOAF risk during critical illness. Results: Of 22 experts invited, 12 agreed to participate. Participants were located in Europe, North America and South America and shared 110 publications on the subject of atrial fibrillation. All 12 completed the three Delphi rounds. Potentially modifiable risk factors identified include 15 intervention-related variables. Conclusions: We present the results of the first Delphi process to identify important predictors of NOAF risk during critical illness. These results support further research into modifiable risk factors including optimal plasma electrolyte concentrations, rates of change of these electrolytes, fluid balance, choice of vasoactive medications and the use of preventative medications in high-risk patients. We also hope our findings will aid the development of predictive models for NOAF.

8.
Int J Med Inform ; 153: 104538, 2021 09.
Article in English | MEDLINE | ID: mdl-34343956

ABSTRACT

BACKGROUND: Intensive care units (ICUs) are busy round the clock and it is difficult to maintain low sound levels that support patient rest. To help ICU staff manage activities, we developed a visual display that monitors and reports sound levels in real time. This facilitates immediate feedback, encouraging proactive behavior change to limit disturbances. METHODS: Following the principles of user-centered design, we created our 'user persona' to understand the needs and goals of potential users of the system. We then conducted iterative user testing with current members of the ICU team, primarily using the 'think aloud' method, to refine the design and functionality of our novel system. Ethnography was used to evaluate team use of the display. RESULTS: The final design was simple, clear, and efficient, and both functional and aesthetically pleasing for the key user demographic. We identified challenges in the implementation and adoption process that were separate from the 'usability' of the system itself. CONCLUSIONS: Embedding the design process within the core user demographic ensured that the final product delivered relevant information for key users, and that this information was intuitive to interpret. Initiating sustainable change is not straightforward. It requires recognition of cultural practices within teams, departments, professions, and organizations, and strategies to maximize engagement.


Subject(s)
Delivery of Health Care, Intensive Care Units, Cultural Anthropology, Electronics, Humans, Research Design
9.
J Eval Clin Pract ; 27(6): 1403-1416, 2021 12.
Article in English | MEDLINE | ID: mdl-33982356

ABSTRACT

BACKGROUND AND OBJECTIVES: Electronic healthcare records have become central to patient care. Evaluation of new systems includes a variety of usability evaluation methods or usability metrics (often referred to interchangeably as usability components or usability attributes). This study reviews the breadth of usability evaluation methods, metrics, and associated measurement techniques that have been reported for systems designed to help hospital staff assess inpatient clinical condition. METHODS: Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, we searched Medline, EMBASE, CINAHL, the Cochrane Database of Systematic Reviews, and Open Grey from 1986 to 2019. For included studies, we recorded usability evaluation methods or usability metrics as appropriate, and any measurement techniques applied to illustrate these. We classified and described all usability evaluation methods, usability metrics, and measurement techniques. Study quality was evaluated using a modified Downs and Black checklist. RESULTS: The search identified 1336 studies. After abstract screening, 130 full texts were reviewed. In the 51 included studies, 11 distinct usability evaluation methods were identified. Within these usability evaluation methods, seven usability metrics were reported. The most common metrics were ISO 9241-11 and Nielsen's components. An additional "usefulness" metric was reported in almost 40% of included studies. We identified 70 measurement techniques used to evaluate systems. Overall study quality was reflected in a mean modified Downs and Black checklist score of 6.8/10 (range 1-9), with 33% of studies classified as "high-quality" (scoring eight or higher), 51% as "moderate-quality" (scoring 6-7), and the remaining 16% (scoring below five) as "low-quality". CONCLUSION: There is little consistency within the field of electronic health record system evaluation. This review highlights the variability within usability methods, metrics, and reporting. Standardized processes may improve the evaluation and comparison of electronic health record systems and improve their development and implementation.


Subject(s)
Benchmarking, Telemedicine, Electronics, Hospitals, Humans, Software
10.
Health Technol Assess ; 25(14): 1-90, 2021 02.
Article in English | MEDLINE | ID: mdl-33648623

ABSTRACT

BACKGROUND: Vasopressors are administered to critical care patients to avoid hypotension, which is associated with myocardial injury, kidney injury and death. However, they work by causing vasoconstriction, which may reduce blood flow and cause other adverse effects. A mean arterial pressure target typically guides administration. An individual patient data meta-analysis (Lamontagne F, Day AG, Meade MO, Cook DJ, Guyatt GH, Hylands M, et al. Pooled analysis of higher versus lower blood pressure targets for vasopressor therapy septic and vasodilatory shock. Intensive Care Med 2018;44:12-21) suggested that greater exposure, through higher mean arterial pressure targets, may increase risk of death in older patients. OBJECTIVE: To estimate the clinical effectiveness and cost-effectiveness of reduced vasopressor exposure through permissive hypotension (i.e. a lower mean arterial pressure target of 60-65 mmHg) in older critically ill patients. DESIGN: A pragmatic, randomised clinical trial with integrated economic evaluation. SETTING: Sixty-five NHS adult general critical care units. PARTICIPANTS: Critically ill patients aged ≥ 65 years receiving vasopressors for vasodilatory hypotension. INTERVENTIONS: Intervention - permissive hypotension (i.e. a mean arterial pressure target of 60-65 mmHg). Control (usual care) - a mean arterial pressure target at the treating clinician's discretion. MAIN OUTCOME MEASURES: The primary clinical outcome was 90-day all-cause mortality. The primary cost-effectiveness outcome was 90-day incremental net monetary benefit. Secondary outcomes included receipt and duration of advanced respiratory and renal support, mortality at critical care and acute hospital discharge, and questionnaire assessment of cognitive decline and health-related quality of life at 90 days and 1 year. RESULTS: Of 2600 patients randomised, 2463 (permissive hypotension, n = 1221; usual care, n = 1242) were analysed for the primary clinical outcome. Permissive hypotension resulted in lower exposure to vasopressors than usual care [mean duration 46.0 vs. 55.9 hours, difference -9.9 hours (95% confidence interval -14.3 to -5.5 hours); total noradrenaline-equivalent dose 31.5 mg vs. 44.3 mg, difference -12.8 mg (95% confidence interval -18.0 mg to -7.6 mg)]. By 90 days, 500 (41.0%) patients in the permissive hypotension group and 544 (43.8%) patients in the usual-care group had died (absolute risk difference -2.85%, 95% confidence interval -6.75% to 1.05%; p = 0.154). Adjustment for prespecified baseline variables resulted in an odds ratio for 90-day mortality of 0.82 (95% confidence interval 0.68 to 0.98) favouring permissive hypotension. There were no significant differences in prespecified secondary outcomes or subgroups; however, patients with chronic hypertension showed a mortality difference favourable to permissive hypotension. At 90 days, permissive hypotension showed similar costs to usual care. However, with higher incremental life-years and quality-adjusted life-years in the permissive hypotension group, the incremental net monetary benefit was positive, but with high statistical uncertainty (£378, 95% confidence interval -£1347 to £2103). LIMITATIONS: The intervention was unblinded, with risk of bias minimised through central allocation concealment and a primary outcome not subject to observer bias. The control group event rate was higher than anticipated.
CONCLUSIONS: In critically ill patients aged ≥ 65 years receiving vasopressors for vasodilatory hypotension, permissive hypotension did not significantly reduce 90-day mortality compared with usual care. The absolute treatment effect on 90-day mortality, based on the 95% confidence interval, was between a 6.8-percentage-point reduction and a 1.1-percentage-point increase in mortality. FUTURE WORK: Future work should (1) update the individual patient data meta-analysis, (2) explore approaches for evaluating heterogeneity of treatment effect and (3) explore the conduct of the 65 trial, including use of deferred consent, to inform future trials. TRIAL REGISTRATION: Current Controlled Trials ISRCTN10580502. FUNDING: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 25, No. 14. See the NIHR Journals Library website for further project information.
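A quick reconstruction of the unadjusted 90-day comparison from the reported counts shows where the -2.85% absolute risk difference comes from (the confidence interval and the adjusted odds ratio in the abstract are from the trial's own analysis):

```python
deaths_permissive, n_permissive = 500, 1221
deaths_usual, n_usual = 544, 1242

risk_permissive = deaths_permissive / n_permissive   # ~0.4095 (41.0%)
risk_usual = deaths_usual / n_usual                  # ~0.4380 (43.8%)

abs_risk_diff = (risk_permissive - risk_usual) * 100
print(f"Absolute risk difference: {abs_risk_diff:.2f} percentage points")  # ~ -2.85
```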


Low blood pressure is common in patients in intensive care. It is associated with a high risk of death. It can be treated with drugs called vasopressors. These drugs raise blood pressure, but also come with risks and side effects. Usually, a blood pressure target is used to guide how much of the drugs to give to patients. Two previous clinical trials suggested that using a lower blood pressure target (and therefore giving less of the drugs) might reduce the number of deaths among older patients. However, although these results were promising, more research was needed to find out if they were correct. The 65 trial was carried out to test if using a lower blood pressure target really did improve outcomes for older patients. The trial also looked at whether or not it would provide value for money for the NHS. A total of 2600 patients aged ≥ 65 years who had low blood pressure in intensive care joined the trial. Half were randomly assigned to the new lower blood pressure target (and so lower doses of vasopressors). The other half were assigned to usual care (control group). As we had hoped, patients in the lower blood pressure target group received lower doses of vasopressors than the usual-care group. After 90 days, 41% of patients in the new lower blood pressure target group had died, compared with 44% in the usual-care group. Although fewer patients died in the lower blood pressure target group, the difference was small and may have occurred by chance. On average, the new target saved a small amount of money for the NHS. Although we could not prove that use of a lower blood pressure target saves lives for older patients in intensive care, our trial suggests that it might. Receiving lower doses of vasopressors appeared safe for patients.


Subject(s)
Critical Illness, Hypotension, Adult, Aged, Cost-Benefit Analysis, Humans, Hypotension/drug therapy, Quality of Life, Quality-Adjusted Life Years, Randomized Controlled Trials as Topic
12.
Am J Respir Crit Care Med ; 204(1): 44-52, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33525997

ABSTRACT

Rationale: Late recognition of patient deterioration in hospital is associated with worse outcomes, including higher mortality. Despite the widespread introduction of early warning score (EWS) systems and electronic health records, deterioration still goes unrecognized. Objectives: To develop and externally validate a Hospital-wide Alerting via Electronic Noticeboard (HAVEN) system to identify hospitalized patients at risk of reversible deterioration. Methods: This was a retrospective cohort study of patients 16 years of age or above admitted to four UK hospitals. The primary outcome was cardiac arrest or unplanned admission to the ICU. We used patient data (vital signs, laboratory tests, comorbidities, and frailty) from one hospital to train a machine-learning model (gradient boosting trees). We internally and externally validated the model and compared its performance with existing scoring systems (including the National EWS, laboratory-based acute physiology score, and electronic cardiac arrest risk triage score). Measurements and Main Results: We developed the HAVEN model using 230,415 patient admissions to a single hospital. We validated HAVEN on 266,295 admissions to four hospitals. HAVEN showed substantially higher discrimination (c-statistic, 0.901 [95% confidence interval, 0.898-0.903]) for the primary outcome within 24 hours of each measurement than other published scoring systems (which range from 0.700 [0.696-0.704] to 0.863 [0.860-0.865]). With a precision of 10%, HAVEN was able to identify 42% of cardiac arrests or unplanned ICU admissions up to 48 hours in advance, compared with 22% by the next best system. Conclusions: The HAVEN machine-learning algorithm for early identification of in-hospital deterioration significantly outperforms other published scores such as the National EWS.
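The HAVEN model is a gradient-boosted tree classifier evaluated by discrimination (the c-statistic). A compact sketch of that train-then-validate pattern, using scikit-learn and synthetic data in place of the study's electronic health record features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-observation features (vital signs, labs, comorbidities,
# frailty) and a rare binary label: cardiac arrest / unplanned ICU admission within 24 h.
X, y = make_classification(n_samples=20_000, n_features=30, n_informative=10,
                           weights=[0.97, 0.03], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)

model = HistGradientBoostingClassifier(max_iter=300, learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)

# c-statistic (AUROC) on held-out data; external validation would repeat this step
# on admissions from hospitals not used for training.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"c-statistic: {auc:.3f}")
```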


Subject(s)
Clinical Deterioration, Early Warning Score, Guidelines as Topic, Risk Assessment/standards, Vital Signs/physiology, Adolescent, Adult, Aged, Aged, 80 and over, Algorithms, Cohort Studies, Female, Humans, Machine Learning, Male, Middle Aged, Reproducibility of Results, Retrospective Studies, Risk Factors, United Kingdom, Young Adult
14.
Crit Care ; 25(1): 10, 2021 01 06.
Article in English | MEDLINE | ID: mdl-33407702

ABSTRACT

BACKGROUND: Over 138,000 patients are discharged to hospital wards from intensive care units (ICUs) in England, Wales and Northern Ireland annually. More than 8000 die before leaving hospital. In hospital-wide populations, 6.7-18% of deaths have some degree of avoidability. For patients discharged from ICU, neither the proportion of avoidable deaths nor the reasons underlying avoidability have been determined. We undertook a retrospective case record review within the REFLECT study, examining how post-ICU ward care might be improved. METHODS: A multi-centre retrospective case record review of 300 consecutive post-ICU in-hospital deaths, between January 2015 and March 2018, in 3 English hospitals. Trained multi-professional researchers assessed the degree to which each death was avoidable and determined care problems using the established Structured Judgement Review method. RESULTS: Agreement between reviewers was good (weighted Kappa 0.77, 95% CI 0.64-0.88). Discharge from an ICU for end-of-life care occurred in 50/300 patients. Of the remaining 250 patients, death was probably avoidable in 20 (8%, 95% CI 5.0-12.1) and had some degree of avoidability in 65 (26%, 95% CI 20.7-31.9). Common problems included out-of-hours discharge from ICU (168/250, 67.2%), suboptimal rehabilitation (167/241, 69.3%), absent nutritional planning (76/185, 41.1%) and incomplete sepsis management (50/150, 33.3%). CONCLUSIONS: The proportion of deaths in hospital with some degree of avoidability is higher in patients discharged from an ICU than reported in hospital-wide populations. Extrapolating our findings suggests around 550 probably avoidable deaths occur annually in hospital following ICU discharge in England, Wales and Northern Ireland. This avoidability occurs in an elderly frail population with complex needs that current strategies struggle to meet. Problems in post-ICU care are rectifiable but multi-disciplinary. TRIAL REGISTRATION: ISRCTN14658054.
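One plausible reconstruction of the "around 550 probably avoidable deaths" extrapolation, assuming the review's proportions are applied to the roughly 8000 annual post-ICU in-hospital deaths (the authors' exact calculation may differ):

```python
annual_post_icu_deaths = 8000   # reported annual in-hospital deaths after ICU discharge
not_end_of_life = 250 / 300     # share of reviewed deaths not discharged for end-of-life care
probably_avoidable = 20 / 250   # 8% judged probably avoidable

estimate = annual_post_icu_deaths * not_end_of_life * probably_avoidable
print(round(estimate))          # ~533, in line with "around 550 probably avoidable deaths"
```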


Subject(s)
Intensive Care Units/statistics & numerical data, Mortality/trends, Patient Discharge/standards, Aged, Aged, 80 and over, Critical Illness/epidemiology, Critical Illness/mortality, Female, Humans, Intensive Care Units/organization & administration, Male, Middle Aged, Patient Discharge/statistics & numerical data, Retrospective Studies, United Kingdom/epidemiology
15.
IEEE Trans Biomed Eng ; 68(1): 276-288, 2021 01.
Article in English | MEDLINE | ID: mdl-32746016

ABSTRACT

Skin temperature has long been used as a natural indicator of vascular diseases in the extremities. Considerable correlation between oscillations in skin surface temperature and oscillations of skin blood flow has previously been demonstrated. We hypothesised that the impairment of blood flow in stenotic (subcutaneous) peripheral arteries would influence cutaneous temperature such that, by measuring gradients in the temperature distribution over skin surfaces, one may be able to diagnose or quantify the progression of vascular conditions in whose pathogenesis a reduction in subcutaneous blood perfusion plays a critical role (e.g. peripheral artery disease). As proof of principle, this study investigates the local changes in the skin temperature of healthy humans (15 male, [Formula: see text] years old, BMI [Formula: see text] kg/m²) undergoing two physical challenges designed to vary their haemodynamic status. Skin temperature was measured in four central regions (forehead, neck, chest, and left shoulder) and four peripheral regions (left upper arm, forearm, wrist, and hand) using an infrared thermal camera. We compare inter-region patterns. Median temperature over the peripheral regions decreased from baseline after both challenges (maximum decrease: [Formula: see text] °C at 60 s after exercise; [Formula: see text] and [Formula: see text] °C at 180 s of cold-water immersion; [Formula: see text]). Median temperature over the central regions showed no significant changes. Our results show that the non-contact measurement of perfusion-related changes in peripheral temperature from infrared video data is feasible. Further research will be directed towards the thermographic study of patients with symptomatic peripheral vascular disease.


Subject(s)
Skin Temperature, Thermography, Arteries, Exercise, Hemodynamics, Humans, Male
16.
Crit Care Med ; 49(1): 102-111, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33116052

ABSTRACT

OBJECTIVES: To identify characteristics that predict 30-day mortality among patients critically ill with coronavirus disease 2019 in England, Wales, and Northern Ireland. DESIGN: Observational cohort study. SETTING: A total of 258 adult critical care units. PATIENTS: A total of 10,362 patients with confirmed coronavirus disease 2019 with a start of critical care between March 1, 2020, and June 22, 2020, of whom 9,990 were eligible (excluding patients with a duration of critical care less than 24 hr or missing core variables). MEASUREMENTS AND MAIN RESULTS: The main outcome measure was time to death within 30 days of the start of critical care. Of 9,990 eligible patients (median age 60 yr, 70% male), 3,933 died within 30 days of the start of critical care. As of July 22, 2020, 189 patients were still receiving critical care and a further 446 were still in acute hospital. Data were missing for between 0.1% and 7.2% of patients across prognostic factors. We imputed missing data ten-fold using fully conditional specification, and continuous variables were modeled using restricted cubic splines. Associations between the candidate prognostic factors and time to death within 30 days of the start of critical care were determined after adjustment for multiple variables with Cox proportional hazards modeling. Significant associations were identified for age, ethnicity, deprivation, body mass index, prior dependency, immunocompromise, lowest systolic blood pressure, highest heart rate, highest respiratory rate, PaO2/FiO2 ratio (and interaction with mechanical ventilation), highest blood lactate concentration, highest serum urea, and lowest platelet count over the first 24 hours of critical care. Nonsignificant associations were found for sex, sedation, highest temperature, and lowest hemoglobin concentration. CONCLUSIONS: We identified patient characteristics that predict an increased likelihood of death within 30 days of the start of critical care for patients with coronavirus disease 2019. These findings may support development of a prediction model for benchmarking critical care providers.
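The analysis above combines multiple imputation with Cox proportional hazards regression. A stripped-down sketch of that workflow (a single chained-equation imputation shown for brevity, with invented variable names; the study imputed ten data sets under fully conditional specification, used restricted cubic splines, and pooled results):

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "lactate": rng.gamma(2.0, 1.0, n),
    "platelets": rng.normal(220, 60, n),
    "time_days": np.minimum(rng.exponential(20, n), 30),  # follow-up capped at 30 days
    "died_30d": rng.integers(0, 2, n),
})
df.loc[rng.random(n) < 0.07, "lactate"] = np.nan   # ~7% missing, as in the cohort

# Fully conditional specification is approximated here by chained-equation imputation.
imputed = df.copy()
imputed[["age", "lactate", "platelets"]] = IterativeImputer(random_state=0).fit_transform(
    df[["age", "lactate", "platelets"]]
)

cph = CoxPHFitter()
cph.fit(imputed, duration_col="time_days", event_col="died_30d")
cph.print_summary()
```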


Subject(s)
COVID-19/mortality, Critical Illness/mortality, Severity of Illness Index, Adult, COVID-19/therapy, Cohort Studies, Critical Illness/therapy, England, Female, Hospital Mortality, Humans, Male, Middle Aged, Northern Ireland, Prognosis, Artificial Respiration/mortality, Wales
17.
F1000Res ; 9: 172, 2020.
Article in English | MEDLINE | ID: mdl-33299545

ABSTRACT

Background: Patients with chronic obstructive pulmonary disease (COPD) are at increased risk of complications and death following surgery. Pulmonary complications are particularly prominent. Pulmonary rehabilitation is a course of physical exercise and education that helps people with COPD manage their condition. Although proven to improve health outcomes in patients with stable COPD, it has never been formally tested as a pre-surgical intervention in patients scheduled for non-cardiothoracic surgery. If a beneficial effect were to be demonstrated, pulmonary rehabilitation for pre-surgical patients with COPD might be rapidly implemented across the National Health Service, as pulmonary rehabilitation courses are already well established across much of the United Kingdom (UK). Methods: We performed a feasibility study to test study procedures and barriers to identification and recruitment to a randomised controlled trial testing whether pulmonary rehabilitation, delivered before major abdominal surgery in a population of people with COPD, would reduce the incidence of post-operative pulmonary complications. This study was run in two UK centres (Oxford and Newcastle upon Tyne). Results: We determined that a full randomised controlled trial would not be feasible, due to failure to identify and recruit participants. We identified an unmet need to identify patients with COPD more effectively, earlier in the surgical pathway. Service evaluations suggested that barriers to identification and recruitment would likely be the same across other UK hospitals. Conclusions: Although pulmonary rehabilitation is a potentially beneficial intervention to prevent post-operative pulmonary complications, a randomised controlled trial is unlikely to recruit sufficient participants to answer our study question conclusively at the present time, when spirometry is not automatically conducted in all patients planned for surgery. As pulmonary rehabilitation is a recommended treatment for all people with COPD, alternative study methods combined with earlier identification of candidate patients in the surgical pathway should be considered. Trial registration: ISRCTN29696295, 31/08/2017.


Subject(s)
Digestive System Surgical Procedures/adverse effects, Postoperative Complications/prevention & control, Preoperative Care/methods, Pulmonary Disease, Chronic Obstructive/rehabilitation, Adult, Early Termination of Clinical Trials, Feasibility Studies, Humans, Patient Selection, Prospective Studies, Pulmonary Disease, Chronic Obstructive/complications, State Medicine, United Kingdom
18.
Resuscitation ; 156: 99-106, 2020 11.
Article in English | MEDLINE | ID: mdl-32918984

ABSTRACT

BACKGROUND: The global pandemic of coronavirus disease 2019 (COVID-19) has placed a huge strain on UK hospitals. Early studies suggest that patients can deteriorate quickly after admission to hospital. The aim of this study was to model changes in vital signs for patients hospitalised with COVID-19. METHODS: This was a retrospective observational study of adult patients with COVID-19 admitted to one acute hospital trust in the UK (CV) and a cohort of patients admitted to the same hospital between 2013 and 2017 with viral pneumonia (VI). The primary outcome was the start of continuous positive airway pressure/non-invasive positive pressure ventilation, ICU admission or death in hospital. We used non-linear mixed-effects models to compare changes in vital sign observations prior to the primary outcome. Using observations and FiO2 measured at discharge in the VI cohort as the model of normality, we also combined individual vital signs into a single novelty score. RESULTS: There were 497 cases of COVID-19, of whom 373 had been discharged from hospital. 135 (36.2%) of these patients experienced the primary outcome, of whom 99 died in hospital. In-hospital mortality was over 4 times higher in the CV cohort than in the VI cohort (26.5% vs 6%). For those patients who experienced the primary outcome, CV patients became increasingly hypoxaemic, with a median estimated FiO2 (0.75) higher than that of the VI cohort (estimated FiO2 of 0.35). Prior to the primary outcome, blood pressure remained within the normal range, and there was only a small rise in heart rate. The novelty score showed that patients with COVID-19 deteriorated more rapidly than patients with viral pneumonia. CONCLUSIONS: Patients with COVID-19 who deteriorate in hospital experience rapidly worsening respiratory failure, with low SpO2 and high FiO2, but only minor abnormalities in other vital signs. This has potential implications for the ability of early warning scores to identify deteriorating patients.
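The single "novelty score" combines several vital signs into one measure of distance from a reference model of normality. One common way to do this (not necessarily the construction used in the study) is a Mahalanobis distance from the discharge-day reference distribution:

```python
import numpy as np

rng = np.random.default_rng(11)

# Reference model of normality: vital signs (HR, RR, SpO2) at discharge in the
# comparison (viral pneumonia) cohort -- synthetic values for illustration.
reference = np.column_stack([
    rng.normal(80, 10, 1000),    # heart rate
    rng.normal(16, 2, 1000),     # respiratory rate
    rng.normal(96, 1.5, 1000),   # SpO2
])
mu = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def novelty_score(obs: np.ndarray) -> float:
    """Mahalanobis distance of a vital-sign vector from the reference distribution."""
    d = obs - mu
    return float(np.sqrt(d @ cov_inv @ d))

print(novelty_score(np.array([82, 17, 96])))   # near normal -> small score
print(novelty_score(np.array([110, 30, 88])))  # deteriorating pattern -> large score
```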


Subject(s)
Betacoronavirus, Coronavirus Infections/diagnosis, Pneumonia, Viral/diagnosis, Triage/methods, Vital Signs, Aged, Aged, 80 and over, COVID-19, Coronavirus Infections/epidemiology, Female, Hospital Mortality/trends, Humans, Male, Middle Aged, Pandemics, Pneumonia, Viral/epidemiology, Retrospective Studies, SARS-CoV-2, Survival Rate/trends, United Kingdom/epidemiology
19.
J Crit Care ; 60: 72-78, 2020 12.
Article in English | MEDLINE | ID: mdl-32763777

ABSTRACT

PURPOSE: New onset atrial fibrillation (NOAF) in critically ill patients has been associated with increased short-term mortality. Analyses that do not take into account the time-varying nature of NOAF can underestimate its association with hospital outcomes. We investigated the prognostic association of NOAF with hospital outcomes using competing risks methods. MATERIALS AND METHODS: We undertook a retrospective cohort study in three general adult intensive care units (ICUs) in the UK from June 2008 to December 2015. We excluded patients with known prior atrial fibrillation or an arrhythmia within four hours of ICU admission. To account for the effect of NOAF on the rate of death per unit time and the rate of discharge alive per unit time, we calculated subdistribution hazard ratios (SDHRs). RESULTS: Of 7541 patients that fulfilled our inclusion criteria, 831 (11.0%) developed NOAF during their ICU admission. NOAF was associated with an increased duration of hospital stay (cause-specific hazard ratio (CSHR) for discharge alive 0.68 (95% CI 0.63-0.73)) and an increased rate of in-hospital death per unit time (CSHR 1.57 (95% CI 1.37-1.81)). This resulted in a strong prognostic association with dying in hospital (adjusted SDHR 2.04 (1.79-2.32)). NOAF lasting over 30 min was associated with increased hospital mortality. CONCLUSIONS: Using robust methods we demonstrate a stronger prognostic association between NOAF and hospital outcomes than previously reported.
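Competing-risks analyses like this treat discharge alive and in-hospital death as competing events rather than censoring one for the other. A minimal sketch of estimating the cumulative incidence of in-hospital death with lifelines' Aalen-Johansen estimator, on synthetic data (the subdistribution hazard regression itself is usually done with a Fine-Gray model, for example in R's cmprsk package):

```python
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(5)
n = 1000
time_to_event = rng.exponential(10, n)            # days from ICU admission
# Event codes: 1 = died in hospital, 2 = discharged alive (competing event)
event = np.where(rng.random(n) < 0.2, 1, 2)

ajf = AalenJohansenFitter()
ajf.fit(time_to_event, event, event_of_interest=1)

# Cumulative incidence of in-hospital death, properly accounting for the
# competing risk of being discharged alive.
print(ajf.cumulative_density_.tail())
```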


Subject(s)
Atrial Fibrillation/mortality, Critical Care/methods, Hospital Mortality, Intensive Care Units, Patient Admission, Aged, Atrial Fibrillation/epidemiology, Comorbidity, Critical Illness, Female, Humans, Length of Stay, Male, Middle Aged, Patient Discharge, Prognosis, Proportional Hazards Models, Retrospective Studies, Risk Assessment, Risk Factors, United Kingdom/epidemiology
20.
BMJ Open ; 10(6): e037762, 2020 06 07.
Article in English | MEDLINE | ID: mdl-32513895

ABSTRACT

OBJECTIVE: To investigate the short-term mortality effect of discharge from an intensive care unit (ICU) with a tracheostomy in place in comparison to delaying discharge until after tracheostomy removal. DESIGN: A propensity score matched cohort study using data from the TracMan study. SETTING: Seventy-two UK ICUs taking part in the TracMan study, a randomised controlled trial comparing early tracheostomy (within 4 days of critical care admission) with deferred tracheostomy (after 10 days if still indicated). PARTICIPANTS: 622 patients who underwent a tracheostomy while in the TracMan study between November 2004 and November 2008. 144 patients left ICU with a tracheostomy. 999 days of observation from 294 patients were included in the control pool. INTERVENTIONS: We matched patients discharged with a tracheostomy in place 1:1 with patients who remained in an ICU until either their tracheostomy was removed or they died with the tracheostomy in place. Propensity models were developed according to discharge destination, accounting for likely confounding factors. PRIMARY OUTCOME MEASURE: The primary outcome was 30-day mortality from the matching day. For the 'discharged with a tracheostomy' group, this was death within 30 days after the discharge day. For the 'remained in ICU' group, this was death within 30 days after the matched day. RESULTS: 22 (15.3%) patients who left ICU with a tracheostomy died within 30 days compared with 26 (18.1%) who remained in ICU (relative risk 0.98, 95% CI 0.43 to 2.23). CONCLUSION: Keeping patients on an ICU to provide tracheostomy care was not found to affect mortality. Tracheostomy presence may indicate a higher risk of mortality due to underlying diseases and conditions rather than posing a risk in itself. The TracMan trial was registered on the ISRCTN database (ISRCTN28588190).
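Propensity score matching of the kind described here estimates each patient's probability of being discharged with a tracheostomy from baseline covariates and then pairs each such patient with the closest-scoring comparator. A schematic sketch with scikit-learn, using invented covariates rather than the TracMan variables, and nearest-neighbour matching with replacement for simplicity:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(9)
n = 800
df = pd.DataFrame({
    "age": rng.normal(62, 13, n),
    "apache_ii": rng.normal(18, 6, n),
    "icu_days": rng.gamma(3.0, 4.0, n),
    "discharged_with_trache": rng.integers(0, 2, n),
})
covariates = ["age", "apache_ii", "icu_days"]

# 1. Estimate the propensity score (probability of discharge with a tracheostomy).
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["discharged_with_trache"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. 1:1 nearest-neighbour matching on the propensity score (with replacement).
treated = df[df["discharged_with_trache"] == 1]
control = df[df["discharged_with_trache"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]

# 3. The outcome comparison (30-day mortality) would then be run on these matched groups.
print(len(treated), "treated patients matched to", len(matched_controls), "controls")
```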


Subject(s)
Length of Stay, Patient Discharge, Propensity Score, Tracheostomy/mortality, Cohort Studies, Humans, Intensive Care Units, State Medicine, United Kingdom