Results 1 - 20 of 89
1.
J Neurosurg ; : 1-7, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38996404

ABSTRACT

OBJECTIVE: Previous studies of neurosurgical transfers indicate that substantial numbers of patients may not need to be transferred, suggesting an opportunity to provide more patient-centered care by treating patients in their communities, while probably saving thousands of dollars in transport and duplicative workup. This study of neurosurgical transfers, the largest to date, aimed to better characterize how often transfers were potentially avoidable and which patient factors might affect whether transfer is needed. METHODS: This was a retrospective cohort study of neurosurgical transfers to an urban, tertiary-care, level I trauma center between October 1, 2017, and October 1, 2022. Prior to data analysis, the authors devised criteria to differentiate necessary neurosurgical transfers from potentially avoidable ones. A transfer was considered necessary if 1) the patient went to the operating room within 12 hours of arrival at the emergency department (ED); 2) a neurological MRI study was conducted in the ED; 3) the patient was admitted to the ICU from the ED; or 4) the patient was admitted to either neurology or a surgical service (including neurosurgery). Transfers not meeting any of the above criteria were deemed potentially avoidable. Patient and clinical characteristics, including diagnostic groupings from Clinical Classification Software categories, were collected retrospectively via electronic health record data abstraction and stratified by whether the transfer was necessary or potentially avoidable. Statistical differences were assessed with a chi-square test. RESULTS: A total of 5113 neurosurgical transfers were included in the study, of which 1701 (33.3%) were classified as potentially avoidable. 
Of all transferred patients, 4.0% went to the operating room within 12 hours of reaching the receiving ED, 23.4% were admitted to the ICU from the ED, 26.6% had a neurological MRI study performed in the ED, and 54.4% were admitted to a surgical service or to neurology. Potentially avoidable transfers had a higher proportion of traumatic brain injury, headache, and syncope (p < 0.0001), as well as of spondylopathies/spondyloarthropathies (p = 0.0402), whereas patients needing transfer had a higher proportion of acute hemorrhagic cerebrovascular disease and cerebral infarction (p < 0.0001). CONCLUSIONS: This study demonstrates that many patients transferred for neurosurgical care could probably be treated at their home hospitals and highlights that the vast majority of patients transferred for neurosurgical conditions do not receive emergency neurosurgery. Further research is needed to better guide transferring and receiving facilities in reducing the burden of excessive transfers.
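The necessary-versus-avoidable comparison above relies on a chi-square test of independence over diagnosis counts. A minimal stdlib sketch of that test for a single diagnosis, using illustrative counts rather than the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    observed = (a, b, c, d)
    expected = (rows[0] * cols[0] / n, rows[0] * cols[1] / n,
                rows[1] * cols[0] / n, rows[1] * cols[1] / n)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative only: a diagnosis present/absent among 1701 potentially
# avoidable and 3412 necessary transfers (hypothetical counts)
stat = chi_square_2x2(300, 1401, 350, 3062)
```

A statistic above 3.84 corresponds to p < 0.05 at one degree of freedom; in practice a statistics package such as SciPy's `chi2_contingency` would report the exact p-value.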

2.
Neuromodulation ; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38819342

ABSTRACT

OBJECTIVES: This study aimed to demonstrate the feasibility of a prototype electrical neuromodulation system using a closed-loop, energy-efficient, ultrasound-based mechanism for communication, data transmission, and recharging. MATERIALS AND METHODS: Closed-loop deep brain stimulation (DBS) prototypes were designed and fabricated with ultrasonic wideband (UsWB) communication technology and miniaturized custom electronics. Two devices were implanted short term in anesthetized Göttingen minipigs (N = 2). Targeting was performed using preoperative magnetic resonance imaging, and locations were confirmed postoperatively by computed tomography. DBS systems were tested over a wide range of stimulation settings to mimic minimal, typical, and/or aggressive clinical settings, and evaluated for their ability to transmit data through scalp tissue and to recharge the DBS system using UsWB. RESULTS: Stimulation, communication, reprogramming, and recharging protocols were successfully achieved in both subjects across amplitude (1-6 V), frequency (50-250 Hz), and pulse width (60-200 µs) settings and maintained for ≥6 hours. The precision of pulse settings was verified with <5% error. Communication rates of 64 kbit/s with an error rate of 0.05% were demonstrated, with no meaningful throughput degradation observed. Time to recharge to 80% capacity was <9 minutes. Two DBS systems were also implanted in the second test animal, and independent bilateral stimulation was successfully demonstrated. CONCLUSIONS: The system performed at clinically relevant implant depths and settings. Independent bilateral stimulation for the duration of the study, with 4 F of energy storage and full rapid recharge, was achieved. Continuous function extrapolates to six days of continuous stimulation in future design iterations implementing application-specific integrated circuit-level efficiency and 15 F of storage capacitance.
UsWB increases energy efficiency, reducing storage requirements and thereby enabling device miniaturization. The device can enable intelligent closed-loop stimulation, remote system monitoring, and optimization and can serve as a power/data gateway to interconnect the intrabody network with the Internet of Medical Things.

3.
J Am Coll Emerg Physicians Open ; 5(3): e13154, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38721036

ABSTRACT

Objectives: This study aimed to compare the different respiratory rate (RR) monitoring methods used in the emergency department (ED): manual documentation, telemetry, and capnography. Methods: This is a retrospective study using recorded patient monitoring data. The study population comprised patients who presented to a tertiary care ED between January 2020 and December 2022. Patients with simultaneously recorded RR data from all three methods were included; encounters with less than 10 min of recording were excluded. Linear regression and Bland-Altman analyses were performed between the different methods. Results: A total of 351 patient encounters met study criteria. Linear regression yielded an R-value of 0.06 (95% confidence interval [CI] 0.00-0.12) between manual documentation and telemetry, 0.07 (95% CI 0.01-0.13) between manual documentation and capnography, and 0.82 (95% CI 0.79-0.85) between telemetry and capnography. The Bland-Altman analysis yielded a bias of -0.8 (95% limits of agreement [LOA] -12.2 to 10.6) between manual documentation and telemetry, a bias of -0.6 (95% LOA -13.5 to 12.3) between manual documentation and capnography, and a bias of 0.2 (95% LOA -6.2 to 6.6) between telemetry and capnography. Conclusion: There was poor correlation between manual documentation and both automated methods, whereas there was relatively good agreement between the automated methods. This finding highlights the need to further investigate the methodology used by ED staff in monitoring and documenting RR, and ways to improve its reliability, given that many important clinical decisions are based on these assessments.
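The Bland-Altman figures reported above (bias and 95% limits of agreement) are simple summary statistics of the paired differences between two methods; a stdlib-only sketch with hypothetical RR pairs:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Bias (mean paired difference) and 95% limits of agreement
    (bias +/- 1.96 SD of the differences) between two methods."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical respiratory rates measured by telemetry and capnography
telemetry = [16, 18, 20, 14, 22, 17]
capnography = [15, 19, 21, 14, 20, 18]
bias, (lower, upper) = bland_altman(telemetry, capnography)
```

Narrow limits of agreement, as seen between telemetry and capnography in the study, indicate that two methods track each other closely across the measurement range.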

4.
JAMA Netw Open ; 7(5): e2414213, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38819823

ABSTRACT

Importance: Emergency department (ED) visits by older adults with life-limiting illnesses are a critical opportunity to establish patient care end-of-life preferences, but little is known about the optimal screening criteria for resource-constrained EDs. Objectives: To externally validate the Geriatric End-of-Life Screening Tool (GEST) in an independent population and compare it with commonly used serious illness diagnostic criteria. Design, Setting, and Participants: This prognostic study assessed a cohort of patients aged 65 years and older who were treated in a tertiary care ED in Boston, Massachusetts, from 2017 to 2021. Patients arriving in cardiac arrest or who died within 1 day of ED arrival were excluded. Data analysis was performed from August 1, 2023, to March 27, 2024. Exposure: GEST, a logistic regression algorithm that uses commonly available electronic health record (EHR) datapoints and was developed and validated across 9 EDs, was compared with serious illness diagnoses as documented in the EHR. Serious illnesses included stroke/transient ischemic attack, liver disease, cancer, lung disease, and age greater than 80 years, among others. Main Outcomes and Measures: The primary outcome was 6-month mortality following an ED encounter. Statistical analyses included area under the receiver operating characteristic curve, calibration analyses, Kaplan-Meier survival curves, and decision curves. Results: This external validation included 82 371 ED encounters by 40 505 unique individuals (mean [SD] age, 76.8 [8.4] years; 54.3% women, 13.8% 6-month mortality rate). GEST had an external validation area under the receiver operating characteristic curve of 0.79 (95% CI, 0.78-0.79) that was stable across years and demographic subgroups. Of included encounters, 53.4% had a serious illness, with a sensitivity of 77.4% (95% CI, 76.6%-78.2%) and specificity of 50.5% (95% CI, 50.1%-50.8%). 
Varying GEST cutoffs from 5% to 30% increased specificity (5%: 49.1% [95% CI, 48.7%-49.5%]; 30%: 92.2% [95% CI, 92.0%-92.4%]) at the cost of sensitivity (5%: 89.3% [95% CI, 88.8%-89.9%]; 30%: 36.2% [95% CI, 35.3%-37.1%]). In a decision curve analysis, GEST outperformed serious illness criteria across all tested thresholds. When comparing patients referred for intervention by GEST with those identified by serious illness criteria, GEST reclassified 45.1% of patients with serious illness as having low risk of mortality (observed mortality rate, 8.1%) and 2.6% of patients without serious illness as having high mortality risk (observed mortality rate, 34.3%), for a total reclassification rate of 25.3%. Conclusions and Relevance: The findings of this study suggest that both serious illness criteria and GEST identified older ED patients at risk for 6-month mortality, but GEST offered more useful screening characteristics. Future trials of serious illness interventions for high mortality risk in older adults may consider transitioning from diagnosis code criteria to GEST, an automatable EHR-based algorithm.
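The cutoff sweep described above is a sensitivity/specificity computation at successive probability thresholds. A minimal sketch with toy risk scores (not GEST outputs):

```python
def sens_spec_at_cutoff(scores, labels, cutoff):
    """Sensitivity and specificity of a risk score at a cutoff.
    labels: 1 = died within 6 months, 0 = survived."""
    tp = sum(s >= cutoff and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < cutoff and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < cutoff and y == 0 for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Toy scores: raising the cutoff trades sensitivity for specificity
scores = [0.02, 0.07, 0.12, 0.25, 0.40, 0.65]
labels = [0, 0, 1, 0, 1, 1]
low = sens_spec_at_cutoff(scores, labels, 0.05)   # high sensitivity
high = sens_spec_at_cutoff(scores, labels, 0.30)  # high specificity
```

Sweeping the cutoff over a grid and plotting sensitivity against (1 - specificity) yields the ROC curve whose area is the AUC reported in the abstract.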


Subject(s)
Emergency Service, Hospital, Terminal Care, Humans, Aged, Female, Male, Aged, 80 and over, Terminal Care/statistics & numerical data, Emergency Service, Hospital/statistics & numerical data, Geriatric Assessment/methods, Geriatric Assessment/statistics & numerical data, Boston/epidemiology, Prognosis, Mortality
5.
J Am Coll Emerg Physicians Open ; 5(2): e13162, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38659596

ABSTRACT

Objectives: One of the most pivotal decisions an emergency physician (EP) makes is whether to admit or discharge a patient. The emergency department (ED) work-up leading to this decision involves several resource-intensive tests. Previous studies have demonstrated significant differences in EP resource utilization, measured by lab tests, advanced imaging (magnetic resonance imaging [MRI], computed tomography [CT], ultrasound), consultations, and propensity to admit a patient. However, how an EP's years of experience may impact their resource utilization and propensity to admit patients has not been well characterized. This study seeks to better understand how EPs' years of experience, post-residency, relates to their use of advanced imaging and patient disposition. Methods: Ten years of ED visits were analyzed for this study from a single, academic tertiary care center in the urban Northeast United States. The primary outcomes were utilization of advanced imaging during the visit (CT, MRI, or formal ultrasound) and whether the patient was admitted. EP years of experience was categorized into 0-2 years, 3-5 years, 6-8 years, 9-11 years, and 12 or more years. Patient age, sex, Emergency Severity Index (ESI), and the attending EP's years of experience were collected. The relationship between EP years of experience and each outcome was assessed with a linear mixed model with a random effect for provider and patient age, sex, and ESI as covariates. Results: A total of 460,937 visits seen by 65 EPs were included in the study. Over one-third (37.6%) of visits had an advanced imaging study ordered and nearly half (49.5%) resulted in admission. Compared to visits with EPs with 0-2 years of experience, visits with EPs with 3-5 or 6-8 years of experience had significantly lower odds of advanced imaging occurring. Visits seen by EPs with more than 2 years of experience had lower odds of admission than visits by EPs with 0-2 years of experience. 
Conclusion: More junior EPs tend to order more advanced imaging studies and have a higher propensity to admit patients. This may reflect less comfort with decision-making in the absence of advanced imaging or a lower risk tolerance. Conversely, the additional clinical experience of the most senior EPs (greater than 9 years of experience) likely shapes their resource utilization patterns such that their use of advanced imaging does not significantly differ from that of the most junior EPs.
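The study estimates odds of imaging and admission with a linear mixed model adjusting for covariates and a provider random effect; as a simplified illustration of the underlying contrast only, an unadjusted odds ratio between two hypothetical experience groups can be computed directly (the counts below are invented):

```python
def odds_ratio(events_a, n_a, events_b, n_b):
    """Unadjusted odds ratio of an outcome in group A relative to group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Hypothetical: imaging ordered in 60 of 140 visits seen by EPs with
# 0-2 years of experience vs 50 of 150 visits seen by EPs with 3-5 years
or_junior_vs_mid = odds_ratio(60, 140, 50, 150)
```

An odds ratio above 1 here would mirror the reported direction (more imaging among the most junior EPs), but the study's estimates additionally adjust for patient age, sex, and ESI.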

6.
J Am Geriatr Soc ; 72(5): 1442-1452, 2024 May.
Article in English | MEDLINE | ID: mdl-38546202

ABSTRACT

BACKGROUND: There has been a marked rise in the use of observation care for Medicare beneficiaries visiting the emergency department (ED) in recent years. Whether trends in observation use differ for people with Alzheimer's disease and Alzheimer's disease-related dementias (AD/ADRD) is unknown. METHODS: Using a national 20% sample of Medicare beneficiaries ages 68+ from 2012 to 2018, we compared trends in ED visits and observation stays by AD/ADRD status for beneficiaries visiting the ED. We then examined the degree to which trends differed by nursing home (NH) residency status, assigning beneficiaries to four groups: AD/ADRD residing in NH (AD/ADRD+ NH+), AD/ADRD not residing in NH (AD/ADRD+ NH-), no AD/ADRD residing in NH (AD/ADRD- NH+), and no AD/ADRD not residing in NH (AD/ADRD- NH-). RESULTS: Of 7,489,780 unique beneficiaries, 18.6% had an AD/ADRD diagnosis. Beneficiaries with AD/ADRD had more than double the number of ED visits per 1000 beneficiaries in all years compared to those without AD/ADRD and saw a faster adjusted increase over time (+26.7 vs. +8.2 visits/year; p < 0.001 for interaction). The annual increase in the adjusted proportion of ED visits ending in observation was also greater among people with AD/ADRD (+0.78%/year, 95% CI 0.77%-0.80%) compared to those without AD/ADRD (+0.63%/year, 95% CI 0.59%-0.66%; p < 0.001 for interaction). Observation utilization was greatest for the AD/ADRD+ NH+ population and lowest for the AD/ADRD- NH- population, but the AD/ADRD+ NH- group saw the greatest increase in observation stays over time (+15.4 stays per 1000 people per year, 95% CI 15.0-15.7). CONCLUSIONS: Medicare beneficiaries with AD/ADRD have seen a disproportionate increase in observation utilization in recent years, driven by both an increase in ED visits and an increase in the proportion of ED visits ending in observation.


Subject(s)
Alzheimer Disease, Emergency Service, Hospital, Medicare, Nursing Homes, Humans, Medicare/statistics & numerical data, United States/epidemiology, Male, Female, Alzheimer Disease/epidemiology, Aged, Emergency Service, Hospital/statistics & numerical data, Emergency Service, Hospital/trends, Aged, 80 and over, Nursing Homes/statistics & numerical data, Dementia/epidemiology, Hospitalization/statistics & numerical data, Hospitalization/trends
7.
JAMA Netw Open ; 7(2): e2356189, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38363570

ABSTRACT

Importance: Much remains unknown about the extent of and factors that influence clinician-level variation in rates of admission from the emergency department (ED). In particular, emergency clinician risk tolerance is a potentially important attribute, but it is not well defined in terms of its association with the decision to admit. Objective: To further characterize this variation in rates of admission from the ED and to determine whether clinician risk attitudes are associated with the propensity to admit. Design, Setting, and Participants: In this observational cohort study, data were analyzed from the Massachusetts All Payer Claims Database to identify all ED visits from October 2015 through December 2017 with any form of commercial insurance or Medicaid. ED visits were then linked to treating clinicians and their risk tolerance scores obtained in a separate statewide survey to examine the association between risk tolerance and the decision to admit. Statistical analysis was performed from 2022 to 2023. Main Outcomes and Measures: The ratio between observed and projected admission rates was computed, controlling for hospital, and then plotted against the projected admission rates to find the extent of variation. Pearson correlation coefficients were then used to examine the association between the mean projected rate of admission and the difference between actual and projected rates of admission. The consistency of clinician admission practices across a range of the most common conditions resulting in admission were then assessed to understand whether admission decisions were consistent across different conditions. Finally, an assessment was made as to whether the extent of deviation from the expected admission rates at an individual level was associated with clinician risk tolerance. Results: The study sample included 392 676 ED visits seen by 691 emergency clinicians. 
Among patients seen for ED visits, 221 077 (56.3%) were female, and 236 783 (60.3%) were 45 years of age or older; 178 890 visits (46.5%) were for patients insured by Medicaid, 96 947 (25.2%) for those with commercial insurance, 71 171 (18.5%) for those with Medicare Part B or Medicare Advantage, and the remaining 37 702 (9.8%) for those in another insurance category. Of the 691 clinicians, 429 (62.6%) were male; mean (SD) age was 46.5 (9.8) years; and 72 (10.4%) were Asian, 13 (1.9%) were Black, 577 (83.5%) were White, and 29 (4.2%) were of another race. Admission rates across the clinicians included ranged from 36.3% at the 25th percentile to 48.0% at the 75th percentile (median, 42.1%). Overall, there was substantial variation in admission rates across clinicians; physicians were just as likely to overadmit as to underadmit across the range of projected rates of admission (Pearson correlation coefficient, 0.046 [P = .23]). There was also weak consistency in admission rates across the most common clinical conditions, with intraclass correlations ranging from 0.09 (95% CI, 0.02-0.17) for genitourinary/syncope to 0.48 (95% CI, 0.42-0.53) for cardiac/syncope. Greater clinician risk tolerance (as measured by the Risk Tolerance Scale) was associated with a statistically significant tendency to admit less than the projected admission rate (coefficient, -0.09 [P = .04]). The other scales studied revealed no significant associations. Conclusions and Relevance: In this cohort study of ED visits from Massachusetts, there was statistically significant variation between ED clinicians in admission rates and little consistency in admission tendencies across different conditions.
Admission tendencies were minimally associated with clinician innate risk tolerance as assessed by this study's measures; further research relying on a broad range of measures of risk tolerance is needed to better understand the role of clinician attitudes toward risk in explaining practice patterns and to identify additional factors that may be associated with variation at the clinician level.
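The overadmit/underadmit check above uses a Pearson correlation between projected admission rates and the observed-minus-projected deviation; a stdlib sketch with hypothetical clinician-level numbers:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical projected admission rates and observed-minus-projected gaps
projected = [0.35, 0.40, 0.42, 0.45, 0.50]
gap = [0.02, -0.03, 0.01, -0.02, 0.01]
r = pearson_r(projected, gap)  # weak correlation, as in the study
```

A correlation near zero, as the study reports (0.046), means clinicians with high projected rates were no more likely to deviate in one direction than clinicians with low projected rates.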


Subject(s)
Hospitalization, Medicare, Humans, Male, Female, Aged, United States/epidemiology, Middle Aged, Cohort Studies, Emergency Service, Hospital, Syncope
8.
J Heart Lung Transplant ; 43(7): 1118-1125, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38373559

ABSTRACT

BACKGROUND: Endomyocardial biopsy (EMB)-based traditional microscopy remains the gold standard for the detection of cardiac allograft rejection, despite its limitation of inherent subjectivity leading to inter-reader variability. Alternative techniques now exist to surveil for allograft injury and classify rejection. Donor-derived cell-free DNA (dd-cfDNA) testing is now a validated blood-based assay used to surveil for allograft injury. The molecular microscope diagnostic system (MMDx) utilizes intragraft rejection-associated transcripts (RATs) to classify allograft rejection and identify injury. The use of dd-cfDNA and MMDx together provides objective molecular insight into allograft injury and rejection. The aim of this study was to measure the diagnostic agreement between dd-cfDNA and MMDx and assess the relationship between dd-cfDNA and MMDx-derived RATs, which may provide further insight into the pathophysiology of allograft rejection and injury. METHODS: This is a retrospective observational study of 156 EMB evaluated with traditional microscopy and MMDx. All samples were paired with dd-cfDNA from peripheral blood before EMB (up to 9 days). Diagnostic agreement between traditional histopathology, MMDx, and dd-cfDNA (threshold of 0.20%) was compared for assessment of allograft injury. In addition, the relationship between dd-cfDNA and individual RAT expression levels from MMDx was evaluated. RESULTS: MMDx characterized allograft tissue as no rejection (62.8%), antibody-mediated rejection (ABMR) (26.9%), T-cell-mediated rejection (TCMR) (5.8%), and mixed ABMR/TCMR (4.5%). For the diagnosis of any type of rejection (TCMR, ABMR, and mixed rejection), there was substantial agreement between MMDx and dd-cfDNA (76.3% agreement). All transcript clusters (group of gene sets designated by MMDx) and individual transcripts considered abnormal from MMDx had significantly elevated dd-cfDNA. 
In addition, a positive correlation between dd-cfDNA levels and certain MMDx-derived RATs was observed. Tissue transcript clusters were correlated with dd-cfDNA scores, including DSAST, GRIT, HT1, QCMAT, and S4. For individual transcripts, tissue ROBO4 was significantly correlated with dd-cfDNA in both nonrejection and rejection as assessed by MMDx. CONCLUSIONS: Collectively, we have shown substantial diagnostic agreement between dd-cfDNA and MMDx. Furthermore, based on the findings presented, we postulate a common pathway between the release of dd-cfDNA and the expression of ROBO4 (a vascular endothelial-specific gene that stabilizes the vasculature) in the setting of antibody-mediated rejection, which may provide a mechanistic rationale for the observed elevations in dd-cfDNA in ABMR compared with acute cellular rejection.
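The 76.3% figure reported above is raw percent agreement between two binary calls (dd-cfDNA above/below the 0.20% threshold vs MMDx rejection/no rejection). A minimal sketch with hypothetical paired calls:

```python
def percent_agreement(calls_a, calls_b):
    """Raw percent agreement between two paired binary classifications."""
    matches = sum(x == y for x, y in zip(calls_a, calls_b))
    return 100 * matches / len(calls_a)

# Hypothetical: 1 = rejection (MMDx) / dd-cfDNA >= 0.20%, 0 = otherwise
mmdx = [1, 0, 0, 1, 0, 0, 1, 0]
ddcfdna = [1, 0, 1, 1, 0, 0, 0, 0]
agreement = percent_agreement(mmdx, ddcfdna)  # 75.0
```

Raw agreement does not correct for chance; a chance-corrected statistic such as Cohen's kappa is the usual complement when the classes are imbalanced.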


Subject(s)
Cell-Free Nucleic Acids, Graft Rejection, Heart Transplantation, Tissue Donors, Graft Rejection/diagnosis, Cell-Free Nucleic Acids/blood, Retrospective Studies, Male, Humans, Middle Aged, Female, Adult, Biopsy, Myocardium/pathology, Myocardium/metabolism
9.
PeerJ ; 12: e16777, 2024.
Article in English | MEDLINE | ID: mdl-38274324

ABSTRACT

Background: Based on emerging evidence that brief periods of cessation from resistance training (RT) may re-sensitize muscle to anabolic stimuli, we aimed to investigate the effects of a 1-week deload interval at the midpoint of a 9-week RT program on muscular adaptations in resistance-trained individuals. Methods: Thirty-nine young men (n = 29) and women (n = 10) were randomly assigned to 1 of 2 experimental, parallel groups: an experimental group that abstained from RT for 1 week at the midpoint of a 9-week, high-volume RT program (DELOAD) or a traditional training group that performed the same RT program continuously over the study period (TRAD). The lower body routines were directly supervised by the research staff, while upper body training was carried out in an unsupervised fashion. Muscle growth outcomes included assessments of muscle thickness along the proximal, mid, and distal regions of the middle and lateral quadriceps femoris as well as the mid-region of the triceps surae. Adaptations in lower body isometric and dynamic strength, local muscular endurance of the quadriceps, and lower body muscle power were also assessed. Results: No appreciable differences were found between groups in increases in lower body muscle size, local endurance, or power. Alternatively, TRAD showed greater improvements in both isometric and dynamic lower body strength compared to DELOAD. Additionally, TRAD showed slight psychological benefits over DELOAD, as assessed by the readiness-to-train questionnaire. Conclusion: Our findings suggest that a 1-week deload period at the midpoint of a 9-week RT program appears to negatively influence measures of lower body muscle strength but has no effect on lower body hypertrophy, power, or local muscular endurance.


Subject(s)
Resistance Training, Male, Humans, Female, Resistance Training/methods, Muscle, Skeletal/physiology, Quadriceps Muscle/physiology, Muscle Strength/physiology, Adaptation, Physiological
10.
Neurology ; 102(4): e208031, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38295353

ABSTRACT

BACKGROUND AND OBJECTIVES: Intubation for acute stroke is common in the United States, with few established guidelines. METHODS: This is a retrospective observational study of acute stroke admissions from 2011 to 2018 among fee-for-service Medicare beneficiaries aged 65-100 years. Patient demographics and chronic conditions as well as hospital characteristics were identified. We identified patient intubation, stroke subtype (ischemic vs intracerebral hemorrhage), and thrombectomy. Factors associated with intubation were identified by a linear probability model with intubation as the outcome and patient characteristics, stroke subtype, and thrombectomy as predictors, adjusting for within-hospital correlation. We compared hospital characteristics between adjusted intubation rate quartiles. We specified a linear probability model with 30-day mortality as the patient-level outcome and hospital intubation rate quartile as the categorical predictor, again adjusting for patient characteristics. We specified an analogous model for quartiles of hospital referral regions. RESULTS: There were 800,467 stroke hospitalizations at 3,581 hospitals. Among 2,588 hospitals with 25 or more stroke hospitalizations, the median intubation rate was 4.8%; a quarter had intubation rates below 2.4%, and 10% had rates above 12.5%. Ischemic strokes had a 21.1% lower adjusted intubation risk than intracerebral hemorrhages (risk difference [RD] -21.1%, 95% CI -21.3% to -20.9%; p < 0.001), whereas thrombectomy was associated with a 19.2% higher adjusted risk (95% CI 18.8%-19.6%; p < 0.001). Women and older patients had lower intubation rates. Large, urban hospitals and academic medical centers were overrepresented in the top quartile of hospital adjusted intubation rates. Even after adjusting for available characteristics, intubated patients had a 44% higher mortality risk than non-intubated patients (p < 0.001).
Hospitals in the highest intubation quartile had higher adjusted 30-day mortality (19.3%) than hospitals in the lowest quartile (16.7%), a finding that was similar when restricting to major teaching hospitals (22.3% vs 18.1% in the 4th vs 1st quartiles, respectively). There was no association between market quartile of intubation and patient 30-day mortality. DISCUSSION: Intubation for acute stroke varied by patient and hospital characteristics. Hospitals with higher adjusted rates of intubation had higher patient-level 30-day mortality, but much of the difference may be due to unmeasured patient severity given that no such association was observed for health care markets.


Subject(s)
Medicare, Stroke, Aged, Humans, Female, United States, Stroke/epidemiology, Stroke/therapy, Hospitalization, Hospitals, Teaching, Retrospective Studies, Intubation
11.
Healthc (Amst) ; 11(4): 100718, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37913606

ABSTRACT

BACKGROUND: United States healthcare has increasingly transitioned to outpatient care delivery. The degree to which Academic Medical Centers (AMCs) have been able to shift surgical procedures from inpatient to outpatient settings despite higher patient complexity is unknown. METHODS: This observational study used a 20% sample of fee-for-service Medicare beneficiaries aged 65 and older undergoing eight elective procedures from 2011 to 2018 to model trends in procedure site (hospital outpatient vs. inpatient) and 30-day standardized Medicare costs, overall and by hospital teaching status. RESULTS: Of the 1,222,845 procedures, 15.9% occurred at AMCs. There was a 2.42% per-year adjusted increase (95% CI 2.39%-2.45%; p < .001) in the proportion of outpatient hospital procedures, from 68.9% in 2011 to 85.4% in 2018. Adjusted 30-day standardized costs declined from $18,122 to $14,353 (-$560/year, 95% CI -$573 to -$547; p < .001). Patients at AMCs had more chronic conditions and higher predicted annual mortality. AMCs had a lower proportion of outpatient procedures in all years compared to non-AMCs, a difference that was statistically significant but small in magnitude. AMCs had higher costs compared to non-AMCs and a lesser decline over time (p < .001 for the interaction). AMCs and non-AMCs saw a similar decline in 30-day mortality. CONCLUSIONS: There has been a substantial shift toward outpatient procedures among Medicare beneficiaries, with a decrease in total 30-day Medicare spending as well as 30-day mortality. Despite a higher-complexity population, AMCs shifted procedures to the outpatient hospital setting at a similar rate as non-AMCs. IMPLICATIONS: The trend toward outpatient procedural care and lower spending has been observed broadly across AMCs and non-AMCs, suggesting that Medicare beneficiaries have benefited from more efficient delivery of procedural care across academic and community hospitals.


Subject(s)
Health Expenditures, Outpatients, Humans, Aged, United States, Medicare, Costs and Cost Analysis, Hospitals, Teaching
12.
J Emerg Med ; 65(6): e568-e579, 2023 12.
Article in English | MEDLINE | ID: mdl-37879972

ABSTRACT

BACKGROUND: Incidental finding (IF) follow-up is of critical importance for patient safety and is a source of malpractice risk. Laboratory, imaging, and other types of IFs are often missed, not addressed, or only finalized after hospital discharge. Despite a growing IF notification literature, a need remains to study cost-effective solutions that are not tied to a specific electronic health record (EHR) and can be used across different types of IFs and EHRs. OBJECTIVE: The objective of this study was to evaluate the utility and cost-effectiveness of an EHR-independent, emergency medicine-based quality assurance (QA) follow-up program in which an experienced nurse reviewed laboratory and imaging studies and ensured appropriate follow-up of results. METHODS: A QA nurse reviewed the preceding day's abnormal studies from a tertiary care hospital, a community hospital, and an urgent care center. Laboratory values outside preset parameters or radiology over-reads resulting in clinically actionable changes triggered contact with an on-call emergency physician to determine an appropriate intervention and its implementation. RESULTS: Of 104,125 visits with 1,351,212 laboratory studies and 95,000 imaging studies, 6530 visits had IFs, including 2659 laboratory and 4004 imaging results. The most common intervention was contacting a primary care physician (5783 cases [88.6%]). Twenty-one cases resulted in a patient returning to the ED, at an average cost of $28,000 per potential life- or limb-saving intervention. CONCLUSIONS: Although abnormalities in laboratory results and imaging are often incidental to patient care, a dedicated emergency department QA follow-up program identified and communicated numerous laboratory and imaging abnormalities and may change patients' subsequent clinical course, potentially increasing patient safety.


Subject(s)
Incidental Findings , Patient Discharge , Humans , Follow-Up Studies , Emergency Service, Hospital , Costs and Cost Analysis , Ambulatory Care
13.
J Sports Sci ; 41(12): 1207-1217, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37789670

ABSTRACT

This study compared the effects of supervised versus unsupervised resistance training (RT) on measures of muscle strength and hypertrophy in resistance-trained individuals. Thirty-six young men and women were randomly assigned to one of two experimental, parallel groups to complete an 8-week RT programme: One group received direct supervision for their RT sessions (SUP); the other group performed the same RT programme in an unsupervised manner (UNSUP). Programme variables were kept constant between groups. We obtained pre- and post-study assessments of body composition via multi-frequency bioelectrical impedance analysis (MF-BIA), muscle thickness of the upper and lower limbs via ultrasound, 1 repetition maximum (RM) in the back squat and bench press, isometric knee extension strength, and countermovement jump (CMJ) height. Results showed the SUP group generally achieved larger increases in muscle thickness for the triceps brachii, all sites of the rectus femoris, and the proximal region of the vastus lateralis. MF-BIA indicated increases in lean mass favoured SUP. Squat 1RM was greater for SUP; bench press 1RM and isometric knee extension were similar between conditions. CMJ increases modestly favoured UNSUP. In conclusion, our findings suggest that supervised RT promotes greater muscular adaptations and enhances exercise adherence in young, resistance-trained individuals.


Subject(s)
Resistance Training , Male , Humans , Female , Resistance Training/methods , Quadriceps Muscle/physiology , Muscle, Skeletal/physiology , Lower Extremity , Arm , Muscle Strength/physiology , Adaptation, Physiological
14.
Acad Emerg Med ; 30(12): 1237-1245, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37682564

ABSTRACT

OBJECTIVE: The objective was to evaluate available characteristics and financial costs of malpractice cases among advanced practice providers (APPs; nurse practitioners [NPs] and physician assistants [PAs]), trainees (medical students, residents, and fellows), and attending physicians. METHODS: This study was a retrospective analysis of claims occurring in the emergency department (ED) from January 1, 2010, to December 31, 2019, contained in the Candello database. Cases were classified according to the provider type(s) involved: NP, PA, trainee, or cases that did not identify an extender as being substantially involved in the adverse event that resulted in the case ("no extender"). RESULTS: There were 5854 cases identified, with a total gross indemnity paid of $1,007,879,346. Of these cases, 193 (3.3%) involved an NP, 513 (8.8%) involved a PA, 535 (9.1%) involved a trainee, and 4568 (78.0%) were no extender. Cases involving a trainee accounted for the highest average gross indemnity paid, whereas no-extender cases accounted for the lowest. NP and PA cases differed from no-extender cases by contributing factors: clinical judgment (NP 89.1% vs. no extender 76.8%, p < 0.0001; PA 84.6% vs. no extender, p < 0.0001), documentation (NP 23.3% vs. no extender 17.8%, p = 0.0489; PA 25.9% vs. no extender, p < 0.0001), and supervision (NP 22.3% vs. no extender 1.8%, p < 0.0001; PA 25.7% vs. no extender, p < 0.0001). Cases involving NPs and PAs included a lower percentage of high-severity outcomes such as loss of limb or death (NP 45.6% vs. no extender 50.2%, p = 0.0004; PA 48.3% vs. no extender, p < 0.0001). CONCLUSIONS: APPs and trainees were involved in approximately 21% of malpractice cases and 33% of total gross indemnity paid in this large national ED data set. Understanding differences in the characteristics of malpractice claims arising in emergency care settings can help mitigate provider risk.
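The group comparisons above are tests of proportions on 2x2 tables; p-values like those reported can be reproduced with a Pearson chi-square test. A minimal sketch, with cell counts reconstructed approximately from the reported percentages (the exact counts are an assumption, and no continuity correction is applied):

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (chi2, two-sided p-value)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi2 with 1 df
    return chi2, p

# Clinical judgment as a contributing factor: ~89.1% of 193 NP cases
# vs. ~76.8% of 4568 no-extender cases (counts rounded from percentages).
chi2, p = chi_square_2x2(172, 21, 3508, 1060)
```

With these approximate counts the test statistic is large (chi2 around 16) and p falls well below conventional thresholds, consistent with the p < 0.0001 reported for this comparison.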


Subject(s)
Malpractice , Nurse Practitioners , Physicians , Humans , United States , Retrospective Studies , Health Personnel , Emergency Service, Hospital
15.
Nutrients ; 15(9), 2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37432300

ABSTRACT

The purpose of this paper was to carry out a systematic review with meta-analysis of randomized controlled trials that examined the combined effects of resistance training (RT) and creatine supplementation on regional changes in muscle mass, using direct imaging measures of hypertrophy. Moreover, we performed regression analyses to determine the potential influence of covariates. We included trials that had a duration of at least 6 weeks and examined the combined effects of creatine supplementation and RT on site-specific direct measures of hypertrophy (magnetic resonance imaging [MRI], computed tomography [CT], or ultrasound) in healthy adults. A total of 44 outcomes were analyzed across 10 studies that met the inclusion criteria. A univariate analysis of all the standardized outcomes showed a pooled mean estimate of 0.11 (95% credible interval [CrI]: -0.02 to 0.25), providing evidence for a very small effect favoring creatine supplementation combined with RT compared to RT with a placebo. Multivariate analyses found similarly small benefits of combining creatine supplementation and RT on changes in upper and lower body muscle thickness (0.10-0.16 cm). Analyses of moderating effects indicated a small superior benefit of creatine supplementation in younger compared to older adults (0.17 [95% CrI: -0.09 to 0.45]). In conclusion, the results suggest that creatine supplementation combined with RT promotes a small increase in direct measures of skeletal muscle hypertrophy in both the upper and lower body.
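The pooled estimate above comes from a Bayesian multilevel model (hence the credible interval); a simpler frequentist analogue is inverse-variance (fixed-effect) pooling of the standardized effects. A minimal sketch with hypothetical effect sizes and variances, not the review's actual data:

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical standardized mean differences and their sampling variances
# (illustrative only): studies with larger variance get less weight.
est, ci = pool_fixed_effect([0.25, 0.05, 0.10, 0.02], [0.04, 0.02, 0.03, 0.05])
```

The Bayesian model used in the paper additionally shares information across the multiple outcomes reported per study, which this simple sketch does not attempt.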


Subject(s)
Creatine , Resistance Training , Humans , Aged , Hypertrophy , Muscles , Dietary Supplements
16.
Circ Res ; 133(3): 271-287, 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37409456

ABSTRACT

BACKGROUND: Cardiomyopathy is characterized by the pathological accumulation of resident cardiac fibroblasts that deposit ECM (extracellular matrix) and generate a fibrotic scar. However, the mechanisms that control the timing and extent of cardiac fibroblast proliferation and ECM production are not known, hampering the development of antifibrotic strategies to prevent heart failure. METHODS: We used the Tcf21 (transcription factor 21) MerCreMer mouse line for fibroblast-specific lineage tracing and p53 (tumor protein p53) gene deletion. We characterized cardiac physiology and used single-cell RNA sequencing and in vitro studies to investigate the p53-dependent mechanisms regulating the cardiac fibroblast cell cycle and fibrosis in left ventricular pressure overload induced by transaortic constriction. RESULTS: Cardiac fibroblast proliferation occurs primarily between days 7 and 14 following transaortic constriction in mice, correlating with alterations in p53-dependent gene expression. p53 deletion in fibroblasts led to a striking accumulation of Tcf21-lineage cardiac fibroblasts within the normal proliferative window and precipitated a robust fibrotic response to left ventricular pressure overload. However, excessive interstitial and perivascular fibrosis does not develop until after cardiac fibroblasts exit the cell cycle. Single-cell RNA sequencing revealed that p53-null fibroblasts unexpectedly express lower levels of genes encoding important ECM proteins while exhibiting an inappropriately proliferative phenotype. In vitro studies establish a role for p53 in suppressing the proliferative fibroblast phenotype, which facilitates the expression and secretion of ECM proteins. Importantly, Cdkn2a (cyclin-dependent kinase inhibitor 2a) expression and the p16Ink4a-retinoblastoma cell cycle control pathway are induced in p53-null cardiac fibroblasts, which may eventually contribute to cell cycle exit and fulminant scar formation.
CONCLUSIONS: This study reveals a mechanism regulating cardiac fibroblast accumulation and ECM secretion, orchestrated in part by p53-dependent cell cycle control that governs the timing and extent of fibrosis in left ventricular pressure overload.


Subject(s)
Cicatrix , Heart Ventricles , Mice , Animals , Heart Ventricles/pathology , Cicatrix/metabolism , Tumor Suppressor Protein p53/genetics , Tumor Suppressor Protein p53/metabolism , Fibrosis , Fibroblasts/metabolism , Cell Proliferation , Myocardium/metabolism
18.
J Am Geriatr Soc ; 71(10): 3122-3133, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37300394

ABSTRACT

BACKGROUND: Older adults, particularly those with Alzheimer's Disease and Alzheimer's Disease Related Dementias (AD/ADRD), have high rates of emergency department (ED) visits and are at risk for poor outcomes. How best to measure quality of care for this population has been debated. Healthy Days at Home (HDAH) is a broad outcome measure reflecting mortality and time spent in facility-based healthcare settings versus home. We examined trends in 30-day HDAH for Medicare beneficiaries after visiting the ED and compared trends by AD/ADRD status. METHODS: We identified all ED visits among a national 20% sample of Medicare beneficiaries ages 68 and older from 2012 to 2018. For each visit, we calculated 30-day HDAH by subtracting mortality days and days spent in facility-based healthcare settings within 30 days of an ED visit. We calculated adjusted rates of HDAH using linear regression, accounting for hospital random effects, visit diagnosis, and patient characteristics. We compared rates of HDAH among beneficiaries with and without AD/ADRD, including accounting for nursing home (NH) residency status. RESULTS: We found fewer adjusted 30-day HDAH after ED visits among patients with AD/ADRD compared to those without AD/ADRD (21.6 vs. 23.0). This difference was driven by a greater number of mortality days, SNF days, and, to a lesser degree, hospital observation days, ED visits, and long-term hospital days. From 2012 to 2018, individuals living with AD/ADRD had fewer HDAH each year but a greater mean annual increase over time (p < 0.001 for the interaction between year and AD/ADRD status). Being a NH resident was associated with fewer adjusted 30-day HDAH for beneficiaries with and without AD/ADRD. CONCLUSIONS: Beneficiaries with AD/ADRD had fewer HDAH following an ED visit but saw moderately greater increases in HDAH over time compared to those without AD/ADRD. This trend was driven by declining mortality and declining utilization of inpatient and post-acute care.
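The HDAH outcome described in the methods is a simple subtraction: days in the 30-day window not spent dead or in facility-based care. A minimal sketch of that calculation (the function name and the clamping at zero are assumptions, not the study's code):

```python
def healthy_days_at_home(mortality_days, facility_days, window=30):
    """30-day HDAH: days in the follow-up window not spent dead or in
    facility-based care (hospital, SNF, observation stay, etc.)."""
    return max(window - mortality_days - facility_days, 0)

# A patient who survives the full window but spends 7 days hospitalized:
example = healthy_days_at_home(mortality_days=0, facility_days=7)
```

In the study these per-visit values were then adjusted via linear regression with hospital random effects; the sketch covers only the raw outcome.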


Subject(s)
Alzheimer Disease , Humans , Aged , United States/epidemiology , Alzheimer Disease/therapy , Alzheimer Disease/epidemiology , Medicare , Patient Acceptance of Health Care , Emergency Service, Hospital , Health Facilities
19.
J Funct Morphol Kinesiol ; 8(2), 2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37218848

ABSTRACT

Emerging evidence indicates that the use of low-load resistance training in combination with blood flow restriction (LL-BFR) can be an effective method to elicit increases in muscle size, with most research showing similar whole muscle development of the extremities compared to high-load (HL) training. It is conceivable that properties unique to LL-BFR such as greater ischemia, reperfusion, and metabolite accumulation may enhance the stress on type I fibers during training compared to the use of LLs without occlusion. Accordingly, the purpose of this paper was to systematically review the relevant literature on the fiber-type-specific response to LL-BFR and provide insights into future directions for research. A total of 11 studies met inclusion criteria. Results of the review suggest that the magnitude of type I fiber hypertrophy is at least as great, and sometimes greater, than type II hypertrophy when performing LL-BFR. This finding is in contrast to HL training, where the magnitude of type II fiber hypertrophy tends to be substantially greater than that of type I myofibers. However, limited data directly compare training with LL-BFR to nonoccluded LL or HL conditions, thus precluding the ability to draw strong inferences as to whether the absolute magnitude of type I hypertrophy is indeed greater in LL-BFR vs. traditional HL training. Moreover, it remains unclear as to whether combining LL-BFR with traditional HL training may enhance whole muscle hypertrophy via greater increases in type I myofiber cross-sectional area.

20.
J Funct Morphol Kinesiol ; 8(2), 2023 May 8.
Article in English | MEDLINE | ID: mdl-37218855

ABSTRACT

The present paper aimed to systematically review case studies on physique athletes to evaluate longitudinal changes in measures of body composition, neuromuscular performance, chronic hormonal levels, physiological adaptations, and psychometric outcomes during pre-contest preparation. We included studies that (1) were classified as case studies involving physique athletes during the pre-contest phase of their competitive cycle; (2) involved adults (18+ years of age) as participants; (3) were published in an English-language peer-reviewed journal; (4) had a pre-contest duration of at least 3 months; (5) reported changes across contest preparation relating to measures of body composition (fat mass, lean mass, and bone mineral density), neuromuscular performance (strength and power), chronic hormonal levels (testosterone, estrogen, cortisol, leptin, and ghrelin), physiological adaptations (maximal aerobic capacity, resting energy expenditure, heart rate, blood pressure, menstrual function, and sleep quality), and/or psychometric outcomes (mood states and food desire). Our review ultimately included 11 case studies comprising 15 ostensibly drug-free athletes (male = 8, female = 7) who competed in various physique-oriented divisions including bodybuilding, figure, and bikini. The results indicated marked alterations across the array of analyzed outcomes, sometimes with high inter-individual variability and divergent sex-specific responses. The complexities and implications of these findings are discussed herein.
