Results 1 - 20 of 97
1.
Article in English | MEDLINE | ID: mdl-38692485

ABSTRACT

BACKGROUND: Oral immunotherapy (OIT) is a promising treatment for food allergy. Prior studies demonstrate significant differences among food-allergic individuals across race, ethnicity, and socioeconomic groups. Disparities in OIT have not been evaluated. OBJECTIVE: We assessed disparities in the use of OIT in patients with peanut allergy based on race, ethnicity, and socioeconomic status at a single academic medical center. METHODS: We identified 1028 peanut-allergic patients under 18 years of age receiving care in the University of Michigan food allergy clinics. A total of 148 patients undergoing peanut OIT (treatment group) were compared with the 880 patients avoiding peanut (control group). Pertinent demographic and socioeconomic characteristics were compared. RESULTS: There were no differences in gender or ethnicity between the OIT and control groups. However, Black patients comprised 18% of the control group but only 4.1% of the OIT treatment group (p<0.0001). The proportion of patients with private insurance was significantly higher in the treatment group than in the control group, 93.2% vs 82.2% (p=0.0004). Finally, the Neighborhood Affluence Index, a Census-based measure of the relative socioeconomic prosperity of a neighborhood, was significantly higher in the OIT group vs. the control group (0.51±0.18 vs 0.47±0.19) (p=0.015), while the Neighborhood Disadvantage Index, a Census-based measure of the relative socioeconomic disadvantage of a neighborhood, was significantly lower (0.082±0.062 vs 0.10±0.093) (p=0.020). CONCLUSION: Significant racial and economic disparities exist at our institution between peanut-allergic individuals who receive OIT and those who do not. Efforts to understand the basis for these disparities are important to ensure patients have equitable access to OIT.

2.
Am J Sports Med ; 52(6): 1527-1534, 2024 May.
Article in English | MEDLINE | ID: mdl-38600806

ABSTRACT

BACKGROUND: Patellofemoral instability commonly occurs during sports activities. The return to sports (RTS) rate for pediatric patients after bilateral medial patellofemoral ligament reconstruction (MPFLR) is unknown. PURPOSE/HYPOTHESIS: The purpose of this study was to evaluate RTS outcomes for pediatric patients undergoing bilateral MPFLR. It was hypothesized that (1) fewer pediatric patients would RTS after bilateral MPFLR compared with unilateral MPFLR and that (2) for those in the bilateral cohort who were able to RTS, fewer patients would attain the same or a higher level of play than the preinjury level. STUDY DESIGN: Cohort study; Level of evidence, 3. METHODS: We prospectively collected RTS data on retrospectively identified matched cohorts of patients aged ≤18 years who underwent unilateral and bilateral MPFLR. We matched each participant with bilateral MPFLR in a 1:2 ratio with participants with unilateral MPFLR by concomitant procedure, age, and sex. Postoperative complications and preoperative imaging measurements were collected from medical records. Patient-reported outcomes were obtained using a current Single Assessment Numeric Evaluation score collected at the time of primary outcome data collection. RESULTS: We matched 16 participants (mean age, 14 years) who underwent bilateral MPFLR to 32 participants (mean age, 14.3 years) in a corresponding unilateral MPFLR cohort. We found a significant decrease in RTS rates for pediatric patients after bilateral MPFLR when compared with unilateral MPFLR (69% vs 94%; P = .03). Among those who returned to sports, there was no difference in the level of play achieved. For participants who did not RTS or returned at a lower level of play after bilateral MPFLR, 57% cited fear of reinjury as the primary reason. There were no differences in postoperative complications or current Single Assessment Numeric Evaluation scores between cohorts. The bilateral cohort had a significantly higher Caton-Deschamps index compared with the unilateral cohort, although the absolute difference was small (1.3 vs 1.2; P = .005). CONCLUSION: We found that pediatric patients have a lower RTS rate after bilateral MPFLR when compared with a matched unilateral MPFLR cohort. There were no differences in the level of play achieved among those who returned to sports. Fear of reinjury was a commonly cited reason for not returning to sports.


Subjects
Patellofemoral Joint, Return to Sport, Humans, Adolescent, Male, Female, Child, Retrospective Studies, Patellofemoral Joint/surgery, Joint Instability/surgery, Athletic Injuries/surgery, Plastic Surgery Procedures, Patient Reported Outcome Measures, Ligaments, Articular/surgery
3.
Alcohol Clin Exp Res (Hoboken) ; 48(4): 680-691, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38546532

ABSTRACT

BACKGROUND: While sleep and circadian rhythms are recognized contributors to the risk for alcohol use and related problems, few studies have examined whether objective sleep and circadian measures can predict future alcohol use in humans, and no such studies have been conducted in adults. This study examined whether any baseline sleep and/or circadian characteristics of otherwise healthy adults predicted their alcohol use over the subsequent 12 months. METHODS: Participants (21-42 years) included 28 light and 50 heavy drinkers. At baseline, a comprehensive range of self-reported and objective sleep/circadian measures was assessed via questionnaires, wrist actigraphy, and measurement of dim light melatonin onset and circadian photoreceptor responsivity. Following this, the number of alcoholic drinks per week and binge drinking episodes per month were assessed quarterly over the subsequent 12 months. Anticipated effects of alcohol (stimulation, sedation, and rewarding aspects) were also assessed quarterly over the 12 months. Analyses included generalized linear mixed-effects models and causal mediation analysis. RESULTS: Across the range of measures, only self-reported insomnia symptoms and a longer total sleep time at baseline predicted more drinks per week and binges per month (ps <0.02). There was a trend for the anticipated alcohol effect of wanting more alcohol at the 6-month timepoint to mediate the relationship between insomnia symptoms at baseline and drinks per week at 12 months (p = 0.069). CONCLUSIONS: These results suggest that in otherwise healthy adults, insomnia symptoms, even if subclinical, are a significant predictor of future drinking and appear to outweigh the influence of circadian factors on future drinking. Insomnia symptoms may be a modifiable target for reducing the risk of alcohol misuse.
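
As a rough illustration of the longitudinal modeling described above, the sketch below fits a linear mixed-effects model relating baseline insomnia symptoms and total sleep time to quarterly drinking outcomes. It is not the study's code: the file and column names are assumed, and a Gaussian mixed model stands in for the generalized mixed-effects and mediation models reported in the abstract.

```python
# Hedged sketch: linear mixed-effects model of quarterly drinking outcomes
# on baseline sleep measures. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant per quarterly follow-up; baseline predictors are
# repeated on each row (assumed layout, not the study's dataset).
df = pd.read_csv("drinking_followup.csv")

model = smf.mixedlm(
    "drinks_per_week ~ insomnia_score + total_sleep_time + quarter",
    data=df,
    groups=df["participant_id"],  # random intercept per participant
)
result = model.fit()
print(result.summary())
```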

4.
J Cyst Fibros ; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38490920

ABSTRACT

BACKGROUND: Iron deficiency (ID) is a common extrapulmonary manifestation in cystic fibrosis (CF). CF transmembrane conductance regulator (CFTR) modulator therapies, particularly highly-effective modulator therapy (HEMT), have drastically improved health status in a majority of people with CF. We hypothesize that CFTR modulator use is associated with improved markers of ID. METHODS: In a multicenter retrospective cohort study across 4 United States CF centers 2012-2022, the association between modulator therapies and ID laboratory outcomes was estimated using multivariable linear mixed effects models overall and by key subgroups. Summary statistics describe the prevalence and trends of ID, defined a priori as transferrin saturation (TSAT) <20 % or serum iron <60 µg/dL (<10.7 µmol/L). RESULTS: A total of 568 patients with 2571 person-years of follow-up were included in analyses. Compared to off modulator therapy, HEMT was associated with +8.4 % TSAT (95 % confidence interval [CI], +6.3-10.6 %; p < 0.0001) and +34.4 µg/dL serum iron (95 % CI, +26.7-42.1 µg/dL; p < 0.0001) overall; +5.4 % TSAT (95 % CI, +2.8-8.0 %; p = 0.0001) and +22.1 µg/dL serum iron (95 % CI, +13.5-30.8 µg/dL; p < 0.0001) in females; and +11.4 % TSAT (95 % CI, +7.9-14.8 %; p < 0.0001) and +46.0 µg/dL serum iron (95 % CI, +33.3-58.8 µg/dL; p < 0.0001) in males. Ferritin was not different in those taking modulator therapy relative to off modulator therapy. Hemoglobin was overall higher with use of modulator therapy. The prevalence of ID was high throughout the study period (32.8 % in those treated with HEMT). CONCLUSIONS: ID remains a prevalent comorbidity in CF, despite availability of HEMT. Modulator use, particularly of HEMT, is associated with improved markers for ID (TSAT, serum iron) and anemia (hemoglobin).
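
For concreteness, the a priori iron-deficiency definition above (TSAT <20% or serum iron <60 µg/dL, i.e., <10.7 µmol/L) can be applied to a lab table as in the sketch below; the data frame and its column names are hypothetical.

```python
# Hedged sketch: flag iron deficiency per the a priori definition.
# Columns and values are invented for illustration.
import pandas as pd

labs = pd.DataFrame({
    "tsat_pct": [15.0, 28.0, 22.0],
    "serum_iron_ug_dl": [45.0, 80.0, 55.0],
})

UG_DL_PER_UMOL_L = 5.585  # 1 µmol/L of iron ≈ 5.585 µg/dL (MW ≈ 55.85 g/mol)

labs["serum_iron_umol_l"] = labs["serum_iron_ug_dl"] / UG_DL_PER_UMOL_L
labs["iron_deficient"] = (labs["tsat_pct"] < 20) | (labs["serum_iron_ug_dl"] < 60)
print(labs)
```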

5.
Am J Kidney Dis ; 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38452919

ABSTRACT

RATIONALE & OBJECTIVE: Glomerular disorders have a highly variable clinical course, and biomarkers that reflect the molecular mechanisms underlying their progression are needed. Based on our previous work identifying plasminogen as a direct cause of podocyte injury, we designed this study to test the association between urine plasmin(ogen) (ie, plasmin and its precursor plasminogen) and end-stage kidney disease (ESKD). STUDY DESIGN: Multicenter cohort study. SETTING & PARTICIPANTS: 1,010 patients enrolled in the CureGN Cohort with biopsy-proven glomerular disease (focal segmental glomerulosclerosis, membranous nephropathy, and immunoglobulin A nephropathy). PREDICTORS: The main predictor was urine plasmin(ogen) at baseline. Levels were measured by an electrochemiluminescent immunoassay developed de novo. Traditional clinical and analytical characteristics were used for adjustment. The ratio of urine plasmin(ogen)/expected plasmin(ogen) was evaluated as a predictor in a separate model. OUTCOME: Progression to ESKD. ANALYTICAL APPROACH: Cox regression was used to examine the association between urinary plasmin(ogen) and time to ESKD. Urinary markers were log2 transformed to approximate normal distribution and normalized to urinary creatinine (Log2uPlasminogen/cr, Log2 urinary protein/cr [UPCR]). Expected plasmin(ogen) was calculated by multiple linear regression. RESULTS: Adjusted Log2uPlasminogen/cr was significantly associated with ESKD (HR per doubling Log2 uPlasminogen/cr 1.31 [95% CI, 1.22-1.40], P<0.001). Comparison of the predictive performance of the models including Log2 uPlasminogen/cr, Log2 UPCR, or both markers showed the plasmin(ogen) model superiority. The ratio of measured/expected urine plasmin(ogen) was independently associated with ESKD: HR, 0.41 (95% CI, 0.22-0.77) if ratio<0.8 and HR 2.42 (95% CI, 1.54-3.78) if ratio>1.1 (compared with ratio between 0.8 and 1.1). LIMITATIONS: Single plasmin(ogen) determination does not allow for the study of changes over time. The use of a cohort of mostly white patients and the restriction to patients with 3 glomerular disorders limits the external validity of our analysis. CONCLUSIONS: Urinary plasmin(ogen) and the ratio of measured/expected plasmin(ogen) are independently associated with ESKD in a cohort of patients with glomerular disease. Taken together with our previous experimental findings, urinary plasmin(ogen) could be a useful biomarker in prognostic decision making and a target for the development of novel therapies in patients with proteinuria and glomerular disease. PLAIN-LANGUAGE SUMMARY: Glomerular diseases are an important cause of morbidity and mortality in patients of all ages. Knowing the individual risk of progression to dialysis or transplantation would help to plan the follow-up and treatment of these patients. Our work studies the usefulness of urinary plasminogen as a marker of progression in this context, since previous studies indicate that plasminogen may be involved in the mechanisms responsible for the progression of these disorders. Our work in a sample of 1,010 patients with glomerular disease demonstrates that urinary plasminogen (as well as the ratio of measured to expected plasminogen) is associated with the risk of progression to end-stage kidney disease. Urine plasminogen exhibited good performance and, if further validated, could enable risk stratification for timely interventions in patients with proteinuria and glomerular disease.
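
The survival analysis described above (time to ESKD regressed on log2-transformed, creatinine-normalized urine plasmin(ogen)) can be sketched as below. This is illustrative only, not the CureGN analysis: the lifelines library, file name, column names, and covariate set are assumptions.

```python
# Hedged sketch: Cox model of time to ESKD on log2(urine plasminogen / creatinine).
# File and column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("glomerular_cohort.csv")

# Normalize to urine creatinine and log2-transform, so exp(coef) for this
# covariate reads as the hazard ratio per doubling of plasmin(ogen).
df["log2_uplasminogen_cr"] = np.log2(df["urine_plasminogen"] / df["urine_creatinine"])
df["log2_upcr"] = np.log2(df["urine_protein"] / df["urine_creatinine"])

cph = CoxPHFitter()
cph.fit(
    df[["time_to_eskd", "eskd_event", "log2_uplasminogen_cr", "log2_upcr", "age", "egfr"]],
    duration_col="time_to_eskd",
    event_col="eskd_event",
)
cph.print_summary()  # exp(coef) for log2_uplasminogen_cr = HR per doubling
```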

6.
Abdom Radiol (NY) ; 2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38400982

ABSTRACT

PURPOSE: Radiologists with diverse training, specialization, and habits interpret imaging in the Emergency Department. It is necessary to understand if their variation predicts differential value. The purpose of this study was to determine whether attending radiologist variation predicts major clinical outcomes in adult Emergency Department patients imaged with ultrasound for right upper quadrant pain. METHODS: Consecutive ED patients imaged with ultrasound for RUQ pain from 10/8/2016 to 8/10/2022 were included (N = 7097). The primary outcome was prediction of hospital admission by signing attending radiologist. Secondary outcomes included: ED and hospital length of stay (LOS), 30-day mortality, 30-day re-presentation rate, subspecialty consultation, advanced imaging follow up (HIDA, MRI, CT), and intervention (ERCP, drainage or surgery). Sample size was determined a priori (detectable effect size: w = 0.06). Data were adjusted for demographic data, Elixhauser comorbidities, number of ED visits in prior year, clinical data, and system factors (38 covariates). P-values were corrected for multiple comparisons (false discovery rate-adjusted p-values). RESULTS: The included ultrasounds were read by 35 radiologists (median exams/radiologist: 145 [74.5-241.5]). Signing radiologist did not predict hospitalization (p = 0.85), abdominopelvic surgery or intervention within 30 days, re-presentation to the Emergency Department within 30 days, or subspecialty consultation. Radiologist did predict difference in Emergency Department length of stay (p < 0.001) although this difference was small and imprecise. HIDA was mentioned variably by radiologists (range 0-19%, p < 0.001), and mention of HIDA in the ultrasound report increased 10-fold the odds of HIDA being performed in the next 72 h (odds ratio 10.4 [8.0-13.4], p < 0.001). CONCLUSION: Radiologist variability did not predict meaningful outcome differences for patients with right upper quadrant pain undergoing ultrasound in the Emergency Department, but when radiologists mention HIDA in their reports, it predicts a 10-fold increase in the odds a HIDA is performed. Radiologists are relied on for interpretation that shapes subsequent patient care, and it is important to consider how radiologist variability can influence both outcome and resource utilization.
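
The false discovery rate adjustment mentioned above can be reproduced generically with the Benjamini-Hochberg procedure, as in this sketch; the p-values are invented, not the study's results.

```python
# Hedged sketch: Benjamini-Hochberg FDR adjustment across several outcomes.
from statsmodels.stats.multitest import multipletests

raw_p = [0.85, 0.0004, 0.03, 0.12, 0.51]  # one invented p-value per outcome
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for p, q, r in zip(raw_p, p_adj, reject):
    print(f"raw p = {p:.4f}   FDR-adjusted p = {q:.4f}   significant: {r}")
```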

7.
Clin Microbiol Infect ; 30(4): 499-506, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38163481

ABSTRACT

OBJECTIVES: Diagnostic error in the use of respiratory cultures for ventilator-associated pneumonia (VAP) fuels misdiagnosis and antibiotic overuse within intensive care units. In this prospective quasi-experimental study (NCT05176353), we aimed to evaluate the safety, feasibility, and efficacy of a novel VAP-specific bundled diagnostic stewardship intervention (VAP-DSI) to mitigate VAP over-diagnosis/overtreatment. METHODS: We developed and implemented a VAP-DSI using an interruptive clinical decision support tool and modifications to clinical laboratory workflows. Interventions included gatekeeping access to respiratory culture ordering, preferential use of non-bronchoscopic bronchoalveolar lavage for culture collection, and suppression of culture results for samples with minimal alveolar neutrophilia. Rates of adverse safety outcomes, positive respiratory cultures, and antimicrobial utilization were compared between mechanically ventilated patients (MVPs) in the 1-year post-intervention study cohort (2022-2023) and 5-year pre-intervention MVP controls (2017-2022). RESULTS: VAP-DSI implementation was not associated with an increase in adverse safety outcomes but was associated with a 20% rate reduction in positive respiratory cultures per 1000 MVP days (pre-intervention rate 127 [95% CI: 122-131], post-intervention rate 102 [95% CI: 92-112], p < 0.01). Significant reductions in broad-spectrum antibiotic days of therapy per 1000 MVP days were noted after VAP-DSI implementation (pre-intervention rate 1199 [95% CI: 1177-1205], post-intervention rate 1149 [95% CI: 1116-1184], p = 0.03). DISCUSSION: Implementation of a VAP-DSI was safe and associated with significant reductions in rates of positive respiratory cultures and broad-spectrum antimicrobial use. This innovative trial of a VAP-DSI represents a novel avenue for intensive care unit antimicrobial stewardship. Multicentre trials of VAP-DSIs are warranted.
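
The headline rates above are simple exposure-adjusted counts; the sketch below shows the arithmetic for positive cultures per 1000 mechanically-ventilated-patient (MVP) days, using invented counts chosen so the rates roughly match those reported.

```python
# Hedged sketch: rate per 1000 MVP days and relative reduction.
# Counts are invented; only the arithmetic is the point.
pre_cultures, pre_mvp_days = 5000, 39500
post_cultures, post_mvp_days = 900, 8800

pre_rate = pre_cultures / pre_mvp_days * 1000     # ≈ 127 per 1000 MVP days
post_rate = post_cultures / post_mvp_days * 1000  # ≈ 102 per 1000 MVP days

print(f"pre: {pre_rate:.0f}, post: {post_rate:.0f} positive cultures per 1000 MVP days")
print(f"relative reduction: {1 - post_rate / pre_rate:.0%}")  # ≈ 20%
```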


Subjects
Pneumonia, Ventilator-Associated, Humans, Anti-Bacterial Agents/therapeutic use, Intensive Care Units, Pneumonia, Ventilator-Associated/diagnosis, Pneumonia, Ventilator-Associated/drug therapy, Pneumonia, Ventilator-Associated/microbiology, Prospective Studies, Feasibility Studies
8.
JMIR Form Res ; 7: e43099, 2023 Sep 14.
Article in English | MEDLINE | ID: mdl-37707948

ABSTRACT

BACKGROUND: Caregivers of people with chronic illnesses often face negative stress-related health outcomes and are unavailable for traditional face-to-face interventions due to the intensity and constraints of their caregiver role. Just-in-time adaptive interventions (JITAIs) have emerged as a design framework that is particularly suited for interventional mobile health studies that deliver in-the-moment prompts that aim to promote healthy behavioral and psychological changes while minimizing user burden and expense. While JITAIs have the potential to improve caregivers' health-related quality of life (HRQOL), their effectiveness for caregivers remains poorly understood. OBJECTIVE: The primary objective of this study is to evaluate the dose-response relationship of a fully automated JITAI-based self-management intervention involving personalized mobile app notifications targeted at decreasing the level of caregiver strain, anxiety, and depression. The secondary objective is to investigate whether the effectiveness of this mobile health intervention was moderated by the caregiver group. We also explored whether the effectiveness of this intervention was moderated by (1) previous HRQOL measures, (2) the number of weeks in the study, (3) step count, and (4) minutes of sleep. METHODS: We examined 36 caregivers from 3 disease groups (10 from spinal cord injury, 11 from Huntington disease, and 25 from allogeneic hematopoietic cell transplantation) in the intervention arm of a larger randomized controlled trial (subjects in the other arm received no prompts from the mobile app) designed to examine the acceptability and feasibility of this intensive type of trial design. A series of multivariate linear models implementing a weighted and centered least squares estimator were used to assess the JITAI efficacy and effect. RESULTS: We found preliminary support for a positive dose-response relationship between the number of administered JITAI messages and JITAI efficacy in improving caregiver strain, anxiety, and depression; while most of these associations did not meet conventional levels of significance, there was a significant association between high-frequency JITAI and caregiver strain. Specifically, administering 5-6 messages per week as opposed to no messages resulted in a significant decrease in the HRQOL score of caregiver strain with an estimate of -6.31 (95% CI -11.76 to -0.12; P=.046). In addition, we found that the caregiver groups and the participants' levels of depression in the previous week moderated JITAI efficacy. CONCLUSIONS: This study provides preliminary evidence to support the effectiveness of the self-management JITAI and offers practical guidance for designing future personalized JITAI strategies for diverse caregiver groups. TRIAL REGISTRATION: ClinicalTrials.gov NCT04556591; https://clinicaltrials.gov/ct2/show/NCT04556591.

9.
J Clin Invest ; 133(16)2023 08 15.
Article in English | MEDLINE | ID: mdl-37402149

ABSTRACT

BACKGROUND: Food allergy (FA) is a growing health problem requiring physiologic confirmation via the oral food challenge (OFC). Many OFCs result in clinical anaphylaxis, causing discomfort and risk while limiting OFC utility. Transepidermal water loss (TEWL) measurement provides a potential solution to detect food anaphylaxis in real time prior to clinical symptoms. We evaluated whether TEWL changes during an OFC could predict anaphylaxis onset. METHODS: Physicians and nurses blinded to the TEWL results conducted and adjudicated the results of all 209 OFCs in this study. A study coordinator measured TEWL throughout the OFC and had no input on the OFC conduct. TEWL was measured 2 ways in 2 separate groups. First, TEWL was measured using static, discrete measurements. Second, TEWL was measured using continuous monitoring. Participants who consented provided blood samples before and after the OFCs for biomarker analyses. RESULTS: TEWL rose significantly (2.93 g/m2/h) during reactions and did not rise during nonreacting OFCs (-1.00 g/m2/h). Systemic increases in tryptase and IL-3 were also detected during reactions, providing supporting biochemical evidence of anaphylaxis. The TEWL rise occurred 48 minutes earlier than clinically evident anaphylaxis. Continuous monitoring detected a significant rise in TEWL that presaged positive OFCs, but no rise was seen in the OFCs that resulted in no reaction, providing high predictive specificity (96%) for anaphylaxis against nonreactions 38 minutes prior to anaphylaxis onset. CONCLUSIONS: During OFCs, a TEWL rise anticipated a positive clinical challenge. TEWL presents a monitoring modality that may predict food anaphylaxis and facilitate improvements in OFC safety and tolerability.


Subjects
Anaphylaxis, Food Hypersensitivity, Humans, Anaphylaxis/diagnosis, Anaphylaxis/etiology, Food Hypersensitivity/diagnosis, Food, Allergens
10.
J Oral Maxillofac Surg ; 81(10): 1301-1310, 2023 10.
Article in English | MEDLINE | ID: mdl-37507104

ABSTRACT

PURPOSE: Penicillins are potent antibiotics for managing odontogenic infections, but 10% of the population is labeled as allergic to these drugs. This has limited their use and resulted in increased utilization of health care resources as well as complications associated with alternative antibiotics. The purpose of the study was to measure the association between patients labeled as penicillin allergic and treatment outcomes in a sample of patients treated for complicated odontogenic infections. Additionally, we sought to investigate antibiotic resistance patterns in these patients. MATERIALS AND METHODS: A retrospective cohort study was performed at the Michigan Medicine health care system to include patients who were treated for complicated odontogenic infections by oral and maxillofacial surgery between 2016 and 2020. Complicated odontogenic infection was defined as any odontogenic infection requiring admission and surgical management in the operating room. The primary predictor variable was the penicillin allergy label, which was determined by chart review and not confirmed with formal testing. Outcomes were measures of disease severity. The primary outcome variable was hospital length of stay. Secondary outcome variables were ICU admission (yes/no), repeat computed tomography scan(s), repeat surgery (yes/no), and re-admission (yes/no). Covariates included age, sex (male/female), tobacco use status, diabetes, immunocompromised state, number of spaces involved, white blood cell count upon admission, and insurance status. For our secondary aim, the primary predictor variable was again penicillin allergy and the outcome variable was antibiotic resistance as determined by wound culture results following surgical intervention. Negative binomial regression and logistic regression analyses were performed. P < .05 was considered significant. RESULTS: A total of 150 patients met the inclusion criteria and of those 17.3% reported as penicillin allergic. Patients labeled as penicillin allergic did not differ significantly from patients without a penicillin allergy label in terms of treatment outcomes. Age, diabetes, and immunosuppression were associated with an increased length of stay. Patients labeled as penicillin allergic were at significantly higher risk for antibiotic resistance (relative risk = 2.34; 95% confidence interval, 1.66 to 3.32; P < .001), specifically clindamycin resistance (relative risk = 3.17; 95% confidence interval, 1.93 to 5.18; P < .001). CONCLUSIONS: Penicillin allergy was significantly associated with clindamycin resistance. There were similar outcomes among patients with and without a penicillin allergy label despite antibiotic differences. Delabeling efforts for patients with a reported penicillin allergy must be considered and local nomograms for antibiotic selection should be used by providers when seeking alternative antibiotics.
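
As a sketch of the modeling named above, a negative binomial regression of hospital length of stay on the allergy label and covariates could look like the following; the library choice, file, and column names are assumptions, and the dispersion parameter is left at the statsmodels default rather than estimated.

```python
# Hedged sketch: negative binomial regression of length of stay (count outcome)
# on penicillin-allergy label plus covariates. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("odontogenic_infections.csv")

model = smf.glm(
    "length_of_stay_days ~ penicillin_allergy + age + diabetes + immunocompromised",
    data=df,
    family=sm.families.NegativeBinomial(),  # dispersion alpha fixed at default
).fit()
print(model.summary())  # exponentiated coefficients are LOS rate ratios
```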


Subjects
Diabetes Mellitus, Drug Hypersensitivity, Hypersensitivity, Humans, Male, Female, Clindamycin, Retrospective Studies, Anti-Bacterial Agents/therapeutic use, Penicillins/adverse effects, Drug Hypersensitivity/drug therapy, Drug Hypersensitivity/epidemiology, Hypersensitivity/drug therapy
11.
J Palliat Med ; 26(9): 1188-1197, 2023 09.
Article in English | MEDLINE | ID: mdl-37022771

ABSTRACT

Aim: Our aim was to examine how code status orders for patients hospitalized with COVID-19 changed over time as the pandemic progressed and outcomes improved. Methods: This retrospective cohort study was performed at a single academic center in the United States. Adults admitted between March 1, 2020, and December 31, 2021, who tested positive for COVID-19, were included. The study period included four institutional hospitalization surges. Demographic and outcome data were collected and code status orders during admission were trended. Data were analyzed with multivariable analysis to identify predictors of code status. Results: A total of 3615 patients were included, with full code (62.7%) being the most common final code status order followed by do-not-attempt-resuscitation (DNAR) (18.1%). Time of admission (per every six months) was an independent predictor of final full versus DNAR/partial code status (p = 0.04). Limited resuscitation preference (DNAR or partial) decreased from over 20% in the first two surges to 10.8% and 15.6% of patients in the last two surges. Other independent predictors of final code status included body mass index (p < 0.05), Black versus White race (0.64, p = 0.01), time spent in the intensive care unit (4.28, p < 0.001), age (2.11, p < 0.001), and Charlson comorbidity index (1.05, p < 0.001). Conclusions: Over time, adults admitted to the hospital with COVID-19 were less likely to have a DNAR or partial code status order, with a persistent decrease occurring after March 2021. A trend toward decreased code status documentation as the pandemic progressed was observed.
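
The multivariable analysis of code status predictors could be approximated by a logistic regression like the sketch below; it is illustrative only, with hypothetical column names and a simplified binary outcome (any DNAR/partial order vs. full code).

```python
# Hedged sketch: logistic regression of limited (DNAR/partial) vs. full code status.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("covid_admissions.csv")
df["limited_code"] = (df["final_code_status"] != "full").astype(int)

model = smf.logit(
    "limited_code ~ admission_halfyear + age + bmi + black_race + icu_days + charlson_index",
    data=df,
).fit()
print(model.summary())  # exponentiated coefficients are odds ratios
```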


Subjects
COVID-19, Humans, Adult, United States, Retrospective Studies, Resuscitation Orders, Pandemics, Hospitalization
12.
Palliat Med Rep ; 4(1): 79-88, 2023.
Article in English | MEDLINE | ID: mdl-36969738

ABSTRACT

Objective: With Huntington disease (HD), a fatal neurodegenerative disease where the prevalence of suicidal thoughts and behavior (STB) remains elevated as compared to other neurological disorders, it is unknown whether STB and health-related quality of life (HRQoL) affect plans for the end of life or more broadly, advance care planning (ACP). Conversely, it is unknown whether ACP would provoke future changes to STB and HRQoL. Therefore, we sought to evaluate whether STB and HRQoL patient-reported outcomes (PROs) contribute to ACP and whether ACP relates to changes in STB and HRQoL at 24 months. Methods: HD-validated clinician- and patient-assessments (i.e., HRQoL PROs) were obtained at baseline enrollment, 12 and 24 months through our multi-center study (HDQLIFE™) throughout the United States among people with premanifest, early-stage, and late-stage manifest HD. We used linear mixed-effects models to determine the relationships between STB and HRQoL at baseline and HDQLIFE End of Life Planning at follow-up. Separate linear mixed-effects models were used to assess the relationship between HDQLIFE End of Life Planning at baseline, and HRQoL and STB at 12 and 24 months. False discovery rate adjustments were used to account for multiple comparisons. Results: At baseline enrollment, STB and HRQoL were not related to HDQLIFE End of Life Planning at 12 or 24 months. Similarly, at baseline, HDQLIFE End of Life Planning demonstrated no association with STB or HRQoL at 12 or 24 months. Interpretation: STB and HRQoL PROs do not significantly affect patient engagement with ACP. Most importantly, engaging in ACP does not cause untoward effects on HRQoL or STB for this rare neurodegenerative disease where the lifetime prevalence of STB approaches 30%.

13.
J Gen Intern Med ; 38(9): 2164-2178, 2023 07.
Article in English | MEDLINE | ID: mdl-36964423

ABSTRACT

BACKGROUND: Housing security is a key social determinant of behavior related to health outcomes. OBJECTIVE: The purpose of this study was to develop a new patient-reported outcome measure that evaluates aspects of housing security for use in the Re-Engineered Discharge for Diabetes-Computer Adaptive Test (REDD-CAT) measurement system. DESIGN: Qualitative data, literature reviews, and cross-sectional survey study. PARTICIPANTS: A total of 225 people with T2DM provided responses to the items in this item pool. MAIN MEASURES: A new item pool that evaluates important aspects of housing security was developed using stakeholder data from focus groups of persons with T2DM. KEY RESULTS: For the Housing Affordability scale, factor analysis (both exploratory and confirmatory) supported the retention of six items. Of these items, none exhibited sparse cells or problems with monotonicity; no items were deleted due to low item-adjusted total score correlations. For the six affordability items, a constrained graded response model indicated no items exhibited misfit; thus, all were retained. No items indicated differential item functioning (examined for age, sex, education, race, and socioeconomic status). Thus, the final Affordability item bank comprised six items. A Housing Safety index (three items) and a Home Features index (eight items) were also developed. Reliability (i.e., internal consistency and test-retest reliability) and validity (i.e., convergent, discriminant, and known-groups) of the new measures were also supported. CONCLUSIONS: The REDD-CAT Housing Security Measure provides a reliable and valid assessment of housing affordability, safety, and home features in people with type 2 diabetes mellitus. Future work is needed to establish the clinical utility of this measure in other clinical populations.


Subjects
Diabetes Mellitus, Type 2, Housing, Humans, Computers, Conservation of Natural Resources, Cross-Sectional Studies, Psychometrics, Reproducibility of Results, Security Measures, Surveys and Questionnaires, Male, Female
14.
Front Immunol ; 14: 1055429, 2023.
Article in English | MEDLINE | ID: mdl-36845123

ABSTRACT

Importance: The degree of immune protection against severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) variants provided by infection versus vaccination with wild-type virus remains unresolved, which could influence future vaccine strategies. The gold standard for assessing immune protection is viral neutralization; however, few studies involve a large-scale analysis of viral neutralization against the Omicron variant by sera from individuals infected with wild-type virus. Objectives: 1) To define the degree to which infection versus vaccination with wild-type SARS-CoV-2 induced neutralizing antibodies against Delta and Omicron variants. 2) To determine whether clinically available data, such as infection/vaccination timing or antibody status, can predict variant neutralization. Methods: We examined a longitudinal cohort of 653 subjects with sera collected three times at 3-to-6-month intervals from April 2020 to June 2021. Individuals were categorized according to SARS-CoV-2 infection and vaccination status. Spike and nucleocapsid antibodies were detected via ADVIA Centaur® (Siemens) and Elecsys® (Roche) assays, respectively. The Healgen Scientific® lateral flow assay was used to detect IgG and IgM spike antibody responses. Pseudoviral neutralization assays were performed on all samples using human ACE2 receptor-expressing HEK-293T cells infected with SARS-CoV-2 spike protein pseudotyped lentiviral particles for wild-type (WT), B.1.617.2 (Delta), and B.1.1.529 (Omicron) variants. Results: Vaccination after infection led to the highest neutralization titers at all timepoints for all variants. Neutralization was also more durable in the setting of prior infection versus vaccination alone. Spike antibody clinical testing effectively predicted neutralization for wild-type and Delta. However, nucleocapsid antibody presence was the best independent predictor of Omicron neutralization. Neutralization of Omicron was lower than neutralization of either wild-type or Delta virus across all groups and timepoints, with significant activity only present in patients who were first infected and later immunized. Conclusions: Participants having both infection and vaccination with wild-type virus had the highest neutralizing antibody levels against all variants and had persistence of activity. Neutralization of WT and Delta virus correlated with spike antibody levels against wild-type and Delta variants, but Omicron neutralization was better correlated with evidence of prior infection. These data help explain why 'breakthrough' Omicron infections occurred in previously vaccinated individuals and suggest better protection is observed in those with both vaccination and previous infection. This study also supports the concept of future SARS-CoV-2 Omicron-specific vaccine boosters.


Subjects
COVID-19, SARS-CoV-2, Humans, COVID-19/prevention & control, Diagnostic Techniques and Procedures, Antibodies, Neutralizing, Breakthrough Infections, COVID-19 Vaccines, Immunoglobulin M, COVID-19 Testing
15.
Mhealth ; 9: 5, 2023.
Article in English | MEDLINE | ID: mdl-36760786

ABSTRACT

Background: The Roadmap mobile health (mHealth) app was developed to provide health-related quality of life (HRQOL) support for family caregivers of patients with cancer. Methods: Eligibility included: family caregivers (age ≥18 years) who self-reported as the primary caregiver of their pediatric patient with cancer; patients (age ≥5 years) who were receiving cancer care at the University of Michigan. Feasibility was calculated as the percentage of caregivers who logged into ONC Roadmap and engaged with it at least twice weekly for at least 50% of the 120-day study duration. Feasibility and acceptability were also assessed through a Feasibility and Acceptability questionnaire and the Mobile App Rating Scale to specifically assess app quality. Exploratory analyses were also conducted to assess HRQOL self- or parent proxy assessments and physiological data capture. Results: Between September 2020 and September 2021, 100 participants (or 50 caregiver-patient dyads) consented and enrolled in the ONC Roadmap study for 120 days. Feasibility of the study was met, wherein the majority of caregivers (N=32; 65%) logged into ONC Roadmap and engaged with it at least twice weekly for at least 50% of the study duration (defined a priori in the protocol). The Feasibility and Acceptability questionnaire responses indicated that the study was feasible and acceptable, with the majority (>50%) reporting Agree or Strongly Agree with positive Net Favorability [(Agree + Strongly Agree) - (Disagree + Totally Disagree)] in each of the domains (e.g., Fitbit use, ONC Roadmap use, completing longitudinal assessments, engaging in a similar future study, study expectations). Improvements were seen across the majority of the mental HRQOL domains across all groups; even though underpowered, there were significant improvements in caregiver-specific aspects of HRQOL and anxiety and in depression and fatigue for children (ages 8-17 years), and a trend toward improvement in depression for children ages 8-17 years and in fatigue for adult patients. Conclusions: This study suggests that mHealth technology may be a promising platform to provide HRQOL support for caregivers of pediatric patients with cancer. Importantly, the findings suggest that the study protocol was feasible, and participants were open to participating in future studies of this intervention alongside routine cancer care delivery.
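
Two quantities defined above, the per-caregiver engagement criterion and the net favorability score, are simple computations; the sketch below shows one hedged interpretation with made-up data (the exact operationalization of "twice weekly" is an assumption).

```python
# Hedged sketch: feasibility criterion and net-favorability arithmetic.
# Numbers and the weekly operationalization are assumptions.
import pandas as pd

# Engagement: weeks (out of ~17 in 120 days) with at least two app sessions.
weeks_in_study = 120 // 7            # ≈ 17 full weeks
weeks_engaged_2plus = 10             # hypothetical caregiver
meets_threshold = weeks_engaged_2plus >= 0.5 * weeks_in_study
print("engaged at least twice weekly for >= 50% of study:", meets_threshold)

# Net favorability for one questionnaire domain, in percentage points:
# (Agree + Strongly Agree) - (Disagree + Totally Disagree).
responses = pd.Series(["Agree", "Strongly Agree", "Neutral", "Disagree", "Agree"])
pct = responses.value_counts(normalize=True) * 100
net_favorability = (
    pct.get("Agree", 0.0) + pct.get("Strongly Agree", 0.0)
    - pct.get("Disagree", 0.0) - pct.get("Totally Disagree", 0.0)
)
print(f"net favorability: {net_favorability:.0f} percentage points")
```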

16.
J Palliat Med ; 26(7): 907-914, 2023 07.
Article in English | MEDLINE | ID: mdl-36607769

ABSTRACT

Objective: Death anxiety, represented by the HDQLIFE™ Concern with Death and Dying (CwDD) patient-reported outcome (PRO) questionnaire, captures a person's worry about the death and dying process. Previous work suggests that death anxiety remains an unremitting burden throughout all stages of Huntington disease (HD). Although palliative interventions have lessened death anxiety among people with advanced cancer, none has yet been tested in the HD population. An account of how death anxiety is associated with longitudinal changes to aspects of health-related quality of life (HRQoL) would help optimize neuropalliative interventions for people with HD. Methods: HDQLIFE collected PROs concerning physical, mental, social, and cognitive HRQoL domains and clinician-rated assessments from people with HD at baseline and 12 and 24 months. Linear mixed-effects models were created to determine how baseline death anxiety was associated with follow-up changes in HRQoL PROs after controlling for baseline death anxiety and other disease and sociodemographic covariates. Results: Higher baseline HDQLIFE CwDD is associated with 12- and 24-month declines in HDQLIFE Speech Difficulties, neurology quality of life (NeuroQoL) Depression, Suicidality, HDQLIFE Meaning and Purpose, and NeuroQoL Positive Affect and Well-being. Interpretation: Death anxiety may be a risk factor for worsening mental health and speech difficulty. A further prospective study is required to evaluate whether interventions on death anxiety or mental health generally can reduce declines in HRQoL for people with HD over time.


Subjects
Huntington Disease, Humans, Huntington Disease/complications, Huntington Disease/psychology, Quality of Life/psychology, Surveys and Questionnaires, Patient Reported Outcome Measures, Anxiety
17.
Qual Life Res ; 32(3): 797-811, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36282447

ABSTRACT

PURPOSE: The purpose of this study was to develop a new measure, the Re-Engineered Discharge for Diabetes Computer Adaptive Test (REDD-CAT) Illness Burden item bank, to evaluate the impact that a chronic condition has on independent living, the ability to work (including working at home), social activities, and relationships. METHODS: Semi-structured interviews were used to inform the development of an item pool (47 items) that captured patients' beliefs about how a diagnosis of type 2 diabetes interferes with different aspects of their lives. The Illness Burden item bank was developed and tested in 225 people with type 2 diabetes mellitus. RESULTS: No items had sparse response option cells or problems with monotonicity; two items were deleted due to low item-rest correlations. Factor analyses supported the retention of 29 items. With those 29 remaining items, a constrained (common slope) graded response model fit assessment indicated that two items had misfit; they were excluded. No items displayed differential item functioning by age, sex, education, or socio-economic status. The final item bank is comprised of 27 items. Preliminary data supported the reliability (internal consistency and test-retest reliability) and validity (convergent, discriminant, and known-groups) of the new bank. CONCLUSION: The Illness Burden item bank can be administered as a computer adaptive test or a 6-item short form. This new measure captures patients' perceptions of the impact that having type 2 diabetes has on their daily lives; it can be used in conjunction with the REDD-CAT measurement system to evaluate important social determinants of health in persons with type 2 diabetes mellitus.


Subjects
Diabetes Mellitus, Type 2, Quality of Life, Humans, Quality of Life/psychology, Calibration, Reproducibility of Results, Cost of Illness, Computers
18.
Qual Life Res ; 32(3): 781-796, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36315318

ABSTRACT

PURPOSE: The purpose of this study was to develop a new measure to evaluate the ability to receive medical services when needed among persons with type 2 diabetes mellitus. METHODS: The Healthcare Access measure was developed using data from 225 persons with type 2 diabetes mellitus who completed an item pool composed of 54 questions pertaining to their experience accessing healthcare services. RESULTS: Exploratory and confirmatory factor analyses supported the retention of 45 items. In addition, a constrained graded response model (GRM), as well as analyses that examined item misfit and differential item functioning (investigated for age, sex, education, race, and socioeconomic status), supported the retention of 44 items in the final item bank. Expert review and GRM item calibration products were used to inform the selection of a 6-item static short form and to program the Healthcare Access computer adaptive test (CAT). Preliminary data supported the reliability (i.e., internal consistency and test-retest reliability) and validity (i.e., convergent, discriminant, and known-groups) of the new measure. CONCLUSIONS: The new Healthcare Access item bank can be used to examine the experiences that persons with type 2 diabetes mellitus have with healthcare access, to better target treatment improvements and mitigate disparities; it will be available as a part of the Neuro-QoL measurement system through healthmeasures.net and the PROMIS Application Programming Interface (API) in early 2023.


Subjects
Diabetes Mellitus, Type 2, Quality of Life, Humans, Quality of Life/psychology, Calibration, Reproducibility of Results, Surveys and Questionnaires, Computers, Psychometrics
19.
Arch Phys Med Rehabil ; 104(3): 430-437, 2023 03.
Article in English | MEDLINE | ID: mdl-35944601

ABSTRACT

OBJECTIVE: To provide reliability and validity data to support the clinical utility of Economic Quality of Life Measure (Econ-QOL) scores in caregivers of civilians and service members/veterans with traumatic brain injury (TBI). DESIGN: Cross-sectional survey study. SETTING: Three academic medical centers and a Veterans Affairs treatment facility. PARTICIPANTS: 376 caregivers of civilians (n=213) and service members/veterans (n=163) with TBI (N=376). INTERVENTIONS: N/A. MAIN OUTCOME MEASURES: Econ-QOL and several patient-reported outcome measures (Traumatic Brain Injury Caregiver Quality of Life Caregiver-Specific Anxiety and Caregiver Strain, Patient-Reported Outcomes Measurement Information System sleep-related impairment, Neurological Quality of Life Measurement System positive affect and well-being) and measures of financial status (self-reported income). RESULTS: Internal consistency reliability of the Econ-QOL Short Form scores was excellent (all Cronbach's alphas ≥.92). There were no floor or ceiling effects for scores. There was evidence of convergent and discriminant validity, with the Econ-QOL scores having the strongest relationships with self-reported income (convergent validity evidence) and weak relationships with the other measures (discriminant validity evidence). Individuals with scores that were "below or possibly below" the poverty line (according to 2016 federal government poverty level thresholds) reported worse economic quality of life relative to those individuals who were definitely above the poverty line, supporting known-groups validity. CONCLUSIONS: This article establishes the clinical utility of scores on the Econ-QOL Short Form in caregivers of persons with TBI and provides evidence that it is valid and appropriate to use such scores not only in a variety of different disability populations (eg, spinal cord injury, stroke) but also in caregivers.
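
The internal consistency figures quoted above (Cronbach's alpha ≥.92) come from a standard formula; the sketch below computes it from an item-response matrix, using random placeholder data rather than Econ-QOL responses.

```python
# Hedged sketch: Cronbach's alpha for a respondents-by-items score matrix.
# The data here are random placeholders, so the resulting alpha is meaningless.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scored items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(100, 10)).astype(float)  # 100 caregivers, 10 items
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```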


Subjects
Brain Injuries, Traumatic, Military Personnel, Humans, Quality of Life, Caregivers, Reproducibility of Results, Cross-Sectional Studies, Psychometrics, Surveys and Questionnaires
20.
Child Obes ; 19(1): 34-45, 2023 01.
Article in English | MEDLINE | ID: mdl-35447044

ABSTRACT

Background: The COVID-19 pandemic has brought profound changes to the health of families worldwide. Yet, there is limited research regarding its impact on children. The pandemic may exacerbate factors associated with excess weight, which is particularly concerning due to the potential association between excess weight and severity of COVID-19 infection. This study investigates parental perspectives of changes in fruit/vegetable (FV) intake, processed food (PF) intake, outdoor playtime (OP), physical activity (PA) levels, and recreational screen time (RST) among children living in Michigan during the pandemic. Methods: The study team and community partners developed and distributed a survey using snowball sampling to reach families living largely in Central and Southeastern Michigan. Nonlinear mixed-effects proportional odds models were used to examine associations between child weight status along with demographic/household factors and changes in five weight-related behaviors. Results: Parents (n = 1313; representing 2469 children) reported a decrease in OP, FV, and PA levels, while there was an increase in RST and PF intake among their children. Household income was protective against a decrease in OP, PA, and FV but was associated with increased RST. Children's weight status was associated with decreased FV. Age was negatively associated with OP and PA, and positively associated with RST. Conclusions: These findings suggest an adverse influence of the pandemic on weight-related behaviors, particularly among adolescents in families with lower incomes and those with excess weight. Further work is needed to measure any impact on BMI trajectory and to identify interventions to reverse negative effects.
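
The ordinal outcome modeling named above can be approximated, without the mixed-effects component, by a plain proportional odds (ordered logit) model as in this sketch; the statsmodels OrderedModel class, file, and column names are assumptions, and the household-level random effects used in the study are omitted here.

```python
# Hedged sketch: proportional odds (ordered logit) model of reported change in
# fruit/vegetable intake. Simplified: no household random effect is included.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("parent_survey.csv")  # hypothetical file and columns
df["fv_change"] = pd.Categorical(
    df["fv_change"],
    categories=["decreased", "no change", "increased"],
    ordered=True,
)

model = OrderedModel(
    df["fv_change"],
    df[["household_income", "child_age", "excess_weight"]],
    distr="logit",
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```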


Subjects
COVID-19, Pediatric Obesity, Adolescent, Humans, Child, Pandemics, COVID-19/epidemiology, Pediatric Obesity/epidemiology, Parents, Eating