ABSTRACT
BACKGROUND: This study aimed to assess the differences between the United States and the United Kingdom in the characteristics and posttransplant survival of patients who received donation after circulatory death (DCD) liver allografts from donors aged >60 y. METHODS: Data were collected from the UK Transplant Registry and the United Network for Organ Sharing databases. Cohorts were dichotomized into donor age subgroups (donor >60 y [D >60]; donor ≤60 y [D ≤60]). The study period was January 1, 2001, to December 31, 2015. RESULTS: A total of 1157 DCD liver transplants (LTs) were performed in the United Kingdom versus 3394 in the United States. Only 13.8% of US DCD donors were aged >50 y, compared with 44.3% in the United Kingdom. D >60 accounted for 22.6% of donors in the United Kingdom versus 2.4% in the United States. In the United Kingdom, 64.2% of D >60 grafts clustered in 2 metropolitan centers. In the United States, there was marked interregional variation, and 78.3% of DCD allografts were used locally. One- and 5-y unadjusted DCD graft survival was higher in the United Kingdom than in the United States (87.3% versus 81.4% and 78.0% versus 71.3%, respectively; P < 0.001). One- and 5-y D >60 graft survival was also higher in the United Kingdom (87.3% versus 68.1% and 77.9% versus 51.4%, United Kingdom versus United States, respectively; P < 0.001). In both cohorts, grafts from donors aged ≤30 y had the best survival, and survival was similar for donors aged 41 to 50 versus 51 to 60. CONCLUSIONS: Compared with the United Kingdom, utilization of older DCD liver allografts remained low in the United States, with worse D >60 survival. Nonetheless, the present data indicate similar survival for older donors up to age 60, supporting an extension of the current US DCD donor age cutoff.
Subject(s)
Liver Transplantation , Tissue and Organ Procurement , Allografts , Death , Graft Survival , Humans , Liver , Liver Transplantation/adverse effects , Retrospective Studies , Tissue Donors , Treatment Outcome , United Kingdom , United States
ABSTRACT
OBJECTIVES: To measure the frequency of withdrawal of life-sustaining therapy for perceived poor neurologic prognosis among decedents in hospitals of different sizes and teaching statuses. DESIGN: We performed a multicenter, retrospective cohort study. SETTING: Four large teaching hospitals, four affiliated small teaching hospitals, and nine affiliated nonteaching hospitals in the United States. PATIENTS: We included a sample of all adult inpatient decedents between August 2017 and August 2019. MEASUREMENTS AND MAIN RESULTS: We reviewed inpatient notes and categorized the immediately preceding circumstances as withdrawal of life-sustaining therapy for perceived poor neurologic prognosis, withdrawal of life-sustaining therapy for nonneurologic reasons, limitations or withholding of life support or resuscitation, cardiac death despite full treatment, or brain death. Of 2,100 patients, median age was 71 years (interquartile range, 60-81 yr), median hospital length of stay was 5 days (interquartile range, 2-11 d), and 1,326 (63%) were treated at four large teaching hospitals. Withdrawal of life-sustaining therapy for perceived poor neurologic prognosis occurred in 516 patients (25%) and was the sole contributing factor to death in 331 (15%). Withdrawal of life-sustaining therapy for perceived poor neurologic prognosis was common in all hospitals: 30% of deaths at large teaching hospitals, 19% of deaths at small teaching hospitals, and 15% of deaths at nonteaching hospitals. Withdrawal of life-sustaining therapy for perceived poor neurologic prognosis happened frequently across all hospital units and contributed to one in 12 deaths in patients without a primary neurologic diagnosis. After accounting for patient and hospital characteristics, significant between-hospital variability in the odds of withdrawal of life-sustaining therapy for perceived poor neurologic prognosis persisted.
CONCLUSIONS: A quarter of inpatient deaths in this cohort occurred after withdrawal of life-sustaining therapy for perceived poor neurologic prognosis. Withdrawal of life-sustaining therapy for perceived poor neurologic prognosis occurred commonly in all types of hospital settings. We observed significant unexplained variation in the odds of withdrawal of life-sustaining therapy for perceived poor neurologic prognosis across participating hospitals.
Subject(s)
Critical Illness , Decision Making , Cost-Benefit Analysis , Health Behavior , Humans , Intensive Care Units
ABSTRACT
OBJECTIVE: To determine the geographic accessibility of emergency departments (EDs) with high pediatric readiness by assessing the percentage of US children living within a 30-minute drive time of an ED with high pediatric readiness, as defined by collaboratively developed published guidelines. STUDY DESIGN: In this cross-sectional analysis, we examined geographic access to an ED with high pediatric readiness among US children. Pediatric readiness was assessed using the weighted pediatric readiness score (WPRS) of US hospitals based on the 2013 National Pediatric Readiness Project (NPRP) survey. A WPRS of 100 indicates that the ED meets the essential guidelines for pediatric readiness. Using estimated drive times from ZIP code centroids, we determined the proportions of US children living within a 30-minute drive time of an ED with a WPRS of 100 (maximum), 94.3 (90th percentile), and 83.6 (75th percentile). RESULTS: Although 93.7% of children could travel to any ED within 30 minutes, only 33.7% of children could travel to an ED with a WPRS of 100, 55.3% could travel to an ED with a WPRS at or above the 90th percentile, and 70.2% could travel to an ED with a WPRS at or above the 75th percentile. Among children within a 30-minute drive of an ED with the maximum WPRS, 90.9% lived closer to at least 1 alternative ED with a WPRS below the maximum. Access varied across census divisions, ranging from 14.9% of children in the East South Central division to 56.2% in the Mid-Atlantic division for EDs scoring the maximum WPRS. CONCLUSION: A significant proportion of US children do not have timely access to EDs with high pediatric readiness.
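The population-weighted access measure described above can be sketched in a few lines: for each ZIP centroid, check whether any ED within the drive-time threshold meets the WPRS cutoff, then sum the covered child population. The ZIP codes, drive times, and scores below are invented for illustration; they are not the study's data.

```python
# Illustrative sketch of the accessibility calculation: the share of children
# living within a 30-minute drive of an ED meeting a WPRS cutoff.
# All ZIP centroids, populations, drive times, and WPRS values are hypothetical.

zips = [  # (zip_code, child_population, {ed_name: drive_minutes})
    ("00001", 5000, {"ED_A": 12, "ED_B": 40}),
    ("00002", 3000, {"ED_A": 50, "ED_B": 25}),
    ("00003", 2000, {"ED_A": 90, "ED_B": 75}),
]
wprs = {"ED_A": 100.0, "ED_B": 83.6}  # hypothetical readiness scores

def share_with_access(zips, wprs, cutoff_score, max_minutes=30):
    """Population-weighted share within max_minutes of an ED with WPRS >= cutoff."""
    total = sum(pop for _, pop, _ in zips)
    covered = sum(
        pop
        for _, pop, times in zips
        if any(t <= max_minutes and wprs[ed] >= cutoff_score
               for ed, t in times.items())
    )
    return covered / total

print(f"{share_with_access(zips, wprs, 100.0):.0%} within 30 min of a max-WPRS ED")   # 50%
print(f"{share_with_access(zips, wprs, 83.6):.0%} within 30 min of a 75th-pct ED")    # 80%
```

Lowering the WPRS cutoff expands coverage, mirroring the abstract's 33.7% (maximum) versus 70.2% (75th percentile) pattern.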
Subject(s)
Emergency Service, Hospital/statistics & numerical data , Health Services Accessibility/statistics & numerical data , Adolescent , Automobile Driving , Censuses , Child , Child, Preschool , Cross-Sectional Studies , Health Surveys , Humans , Infant , Time Factors , Travel/statistics & numerical data , United States
ABSTRACT
OBJECTIVE: To use machine-learning algorithms to classify alerts as real or artifacts in online noninvasive vital sign data streams, with the goal of reducing alarm fatigue and missed true instability. DESIGN: Observational cohort study. SETTING: Twenty-four-bed trauma step-down unit. PATIENTS: Two thousand one hundred fifty-three patients. INTERVENTION: Noninvasive vital sign monitoring data (heart rate, respiratory rate, peripheral oximetry) were recorded on all admissions at 1/20 Hz, with noninvasive blood pressure recorded less frequently; data were partitioned into training/validation (294 admissions; 22,980 monitoring hours) and test sets (2,057 admissions; 156,177 monitoring hours). Alerts were vital sign deviations beyond stability thresholds. A four-member expert committee annotated a subset of alerts, selected by active learning, as real or artifact (576 in the training/validation set, 397 in the test set), upon which we trained machine-learning algorithms. The best model was evaluated on test set alerts to enact online alert classification over time. MEASUREMENTS AND MAIN RESULTS: The Random Forest model discriminated between real and artifact alerts as they evolved online in the test set, with area under the curve performance of 0.79 (95% CI, 0.67-0.93) for peripheral oximetry at the instant the vital sign first crossed threshold, increasing to 0.87 (95% CI, 0.71-0.95) at 3 minutes into the alerting period. Blood pressure area under the curve started at 0.77 (95% CI, 0.64-0.95) and increased to 0.87 (95% CI, 0.71-0.98), whereas respiratory rate area under the curve started at 0.85 (95% CI, 0.77-0.95) and increased to 0.97 (95% CI, 0.94-1.00). Heart rate alerts were too few for model development. CONCLUSIONS: Machine-learning models can discern clinically relevant peripheral oximetry, blood pressure, and respiratory rate alerts from artifacts in an online monitoring dataset (area under the curve > 0.87).
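The modeling approach described above, a random forest trained on expert-annotated alerts and evaluated by area under the curve, can be sketched as follows. The features (window mean, variance, and missingness of the oximetry signal) and the synthetic data are illustrative assumptions, not the study's actual variables.

```python
# Illustrative sketch: classify vital-sign alerts as real vs artifact with a
# random forest, loosely following the abstract's setup. Features and data
# below are invented for demonstration; they are not the study's variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-alert features computed over the alerting window:
# mean SpO2, SpO2 variance, and fraction of samples missing.
real = rng.integers(0, 2, n)  # 1 = real alert, 0 = artifact
spo2_mean = np.where(real, rng.normal(85, 3, n), rng.normal(88, 8, n))
spo2_var = np.where(real, rng.normal(2, 0.5, n), rng.normal(8, 2, n))
frac_missing = np.where(real, rng.uniform(0, 0.1, n), rng.uniform(0, 0.5, n))
X = np.column_stack([spo2_mean, spo2_var, frac_missing])

X_tr, X_te, y_tr, y_te = train_test_split(X, real, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

In the study, this kind of classifier was re-applied as each alert evolved, which is why discrimination improved from threshold crossing to 3 minutes into the alerting period.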
Subject(s)
Artifacts , Clinical Alarms/classification , Monitoring, Physiologic/methods , Supervised Machine Learning , Vital Signs , Blood Pressure Determination , Cohort Studies , Heart Rate , Humans , Oximetry , Respiratory Rate
ABSTRACT
OBJECTIVE: To understand hospital-level variation in triage practices for patients with moderate-to-severe injuries presenting initially to nontrauma centers. BACKGROUND: Many patients with moderate-to-severe traumatic injuries receive care at nontrauma hospitals, despite evidence of a survival benefit from treatment at trauma centers. METHODS: We used claims from the Centers for Medicare and Medicaid Services to identify patients with moderate-to-severe injuries who presented initially to nontrauma centers. We determined whether they were transferred to a level I or II trauma center within 24 hours of presentation, and used multivariate regression to assess the influence of hospital-level factors on triage practices, after adjusting for differences in case mix. RESULTS: Transfer of patients with moderate-to-severe injuries to trauma centers occurred infrequently, with significant variation among hospitals (median 2%; interquartile range 1%-6%). Greater resource availability at nontrauma centers was associated with lower rates of successful triage, including the presence of neurosurgeons (relative reduction in transfer rate: 76%, P < 0.01), more than 20 intensive care unit beds (relative reduction 30%, P < 0.01), and a high resident-to-bed ratio (relative reduction 23%, P < 0.01). However, patients were more likely to survive if they presented to hospitals with higher triage rates (odds of death for patients cared for at hospitals in the highest tercile of triage rates, compared with the lowest tercile: 0.92; 95% confidence interval: 0.85-0.99, P = 0.02). CONCLUSIONS: Injured Medicare beneficiaries presenting to nontrauma centers experience high rates of undertriage, driven in part by greater local resource availability. Care at hospitals with low rates of successful triage is associated with worse outcomes.
Subject(s)
Emergency Service, Hospital/statistics & numerical data , Medicare , Patient Transfer/statistics & numerical data , Triage/statistics & numerical data , Wounds and Injuries/therapy , Aged , Aged, 80 and over , Cohort Studies , Emergency Service, Hospital/organization & administration , Female , Humans , Injury Severity Score , Logistic Models , Male , Multivariate Analysis , Retrospective Studies , Trauma Centers/organization & administration , Trauma Centers/statistics & numerical data , Treatment Outcome , United States , Wounds and Injuries/mortality
ABSTRACT
BACKGROUND: ICUs are increasingly staffed with nurse practitioners/physician assistants (NPs/PAs), but it is unclear how NPs/PAs influence quality of care. We examined the association between NP/PA staffing and in-hospital mortality for patients in the ICU. METHODS: We used retrospective cohort data from the 2009 to 2010 APACHE (Acute Physiology and Chronic Health Evaluation) clinical information system and an ICU-level survey. We included patients aged ≥ 17 years admitted to one of 29 adult medical and mixed medical/surgical ICUs in 22 US hospitals. Because this survey could not assign NPs/PAs to individual patients, the primary exposure was admission to an ICU where NPs/PAs participated in patient care. The primary outcome was patient-level in-hospital mortality. We used multivariable relative risk regression to examine the effect of NPs/PAs on in-hospital mortality, accounting for differences in case mix, ICU characteristics, and clustering of patients within ICUs. We also examined this relationship in the following subgroups: patients on mechanical ventilation, patients with the highest quartile of Acute Physiology Score (> 55), and ICUs with low-intensity physician staffing and with physician trainees. RESULTS: Twenty-one ICUs (72.4%) reported NP/PA participation in direct patient care. Patients in ICUs with NPs/PAs had lower mean Acute Physiology Scores (42.4 vs 46.7, P < .001) and mechanical ventilation rates (38.8% vs 44.2%, P < .001) than ICUs without NPs/PAs. Unadjusted and risk-adjusted mortality was similar between groups (adjusted relative risk, 1.10; 95% CI, 0.92-1.31). This result was consistent in all examined subgroups. CONCLUSIONS: NPs/PAs appear to be a safe adjunct to the ICU team. The findings support NP/PA management of critically ill patients.
Subject(s)
Critical Illness/mortality , Hospital Mortality/trends , Intensive Care Units , Nurse Practitioners/statistics & numerical data , Personnel Staffing and Scheduling , Physician Assistants/statistics & numerical data , APACHE , Adolescent , Adult , Aged , Aged, 80 and over , Benchmarking , Critical Care/methods , Critical Illness/nursing , Databases, Factual , Female , Health Care Surveys , Humans , Male , Middle Aged , Outcome Assessment, Health Care , Patient Care Team/organization & administration , Patient Safety/statistics & numerical data , Retrospective Studies , Risk Assessment , United States , Workforce , Young Adult
ABSTRACT
OBJECTIVES: Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. METHODS: This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as was the proportion of estimated times falling within specified error thresholds. Based on the primary results, a regression-based estimate incorporating population density, time of day, and season was assessed for improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. RESULTS: The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p < 0.01). Estimation accuracy was lower for each method among transports longer than 20 minutes (mean [±SD] absolute error was 12.7 [±11.7] minutes for linear arc, 9.8 [±10.5] minutes for Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79.0% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. CONCLUSIONS: Route-based transport time estimates demonstrate moderate accuracy.
These methods can be valuable for informing a host of decisions related to system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
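The simplest of the three estimators, linear arc distance, can be sketched as great-circle distance between two coordinates divided by an assumed average speed. The haversine formula below is standard; the 40 mph average speed and the example coordinates are illustrative assumptions, not parameters from the study.

```python
# Illustrative sketch of a linear arc (great-circle) transport time estimate.
# The assumed average speed and example coordinates are hypothetical.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle ("linear arc") distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def linear_arc_transport_minutes(lat1, lon1, lat2, lon2, avg_mph=40.0):
    """Estimated transport time: straight-line distance at an assumed speed."""
    return haversine_miles(lat1, lon1, lat2, lon2) / avg_mph * 60

# Example: a Seattle-area ZIP centroid to a hospital (coordinates approximate)
t = linear_arc_transport_minutes(47.6097, -122.3331, 47.6740, -122.1215)
print(f"estimated transport time: {t:.1f} min")
```

Because the straight-line path ignores the road network, this estimator systematically understates travel distance, which is consistent with its larger mean absolute error relative to the route-based Google Maps and ArcGIS methods.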