ABSTRACT
RATIONALE & OBJECTIVE: Optimal approaches to treating secondary hyperparathyroidism (SHPT) in patients on maintenance hemodialysis (HD) have yet to be established in randomized controlled trials (RCTs). STUDY DESIGN: Two observational clinical trial emulations. SETTING & PARTICIPANTS: Both emulations included adults receiving in-center HD from a national dialysis organization who had SHPT between 2009 and 2014, were insured for ≥180 days by Medicare as primary payer, and did not have contraindications or poor health status limiting theoretical trial participation. EXPOSURE: The parathyroid hormone (PTH) Target Trial emulation included patients with new-onset SHPT (first PTH 300-600 pg/mL), with 2 arms defined as up-titration of either vitamin D sterols or cinacalcet within 30 days (lower target) or no up-titration (higher target). The Agent Trial emulation included patients with a PTH ≥300 pg/mL while on ≥6 µg weekly of vitamin D sterol (paricalcitol-equivalent dose) and no prior history of cinacalcet; arms were defined by the first dose or agent change within 30 days (vitamin D-favoring [vitamin D was up-titrated] vs cinacalcet-favoring [cinacalcet was added] vs nondefined [neither applies]). Multiple trials per patient were allowed in the Agent Trial. OUTCOME: The primary outcome was all-cause death over 24 months; secondary outcomes included cardiovascular (CV) hospitalization and the composite of CV hospitalization or death. ANALYTICAL APPROACH: Pooled logistic regression. RESULTS: There were 1,152 patients in the PTH Target Trial (635 lower target and 517 higher target) and 2,726 unique patients with 6,727 patient-trials in the Agent Trial (6,268 vitamin D-favoring and 459 cinacalcet-favoring). The lower PTH target approach was associated with a reduced adjusted hazard of death (HR, 0.71 [95% CI, 0.52-0.93]), CV hospitalization (HR, 0.78 [95% CI, 0.63-0.98]), and their composite (HR, 0.74 [95% CI, 0.61-0.89]).
The cinacalcet-favoring approach was associated with a lower adjusted hazard of death than the vitamin D-favoring approach (HR, 0.79 [95% CI, 0.62-0.99]), but not of CV hospitalization or the composite outcome. LIMITATIONS: Potential for residual confounding; low use of cinacalcet, limiting statistical power. CONCLUSIONS: SHPT management focused on lower PTH targets may lower mortality and CV disease in patients receiving HD. These findings should be confirmed in a pragmatic randomized trial. PLAIN-LANGUAGE SUMMARY: Optimal approaches to treating secondary hyperparathyroidism (SHPT) have not been established in randomized controlled trials. Data from a national dialysis organization were used to identify patients with SHPT in whom escalated treatment may be indicated. The approach to treatment was defined based on observed upward titration of SHPT-controlling medications: earlier titration (lower target) versus delayed titration (higher target), and the choice of medication (cinacalcet vs vitamin D sterols). In the first trial emulation, we estimated a 29% lower rate of death and a 26% lower rate of cardiovascular disease or death for patients managed with a lower versus higher target approach. Cinacalcet-favoring versus vitamin D-favoring approaches were not consistently associated with outcomes in the second trial emulation. This observational study suggests the need for additional clinical trials of SHPT treatment intensity.
Subject(s)
Cardiovascular Diseases , Hyperparathyroidism, Secondary , Adult , Humans , Cinacalcet/therapeutic use , Naphthalenes/therapeutic use , Treatment Outcome , Hyperparathyroidism, Secondary/drug therapy , Hyperparathyroidism, Secondary/etiology , Vitamin D/therapeutic use , Renal Dialysis/adverse effects , Vitamins/therapeutic use , Parathyroid Hormone , Sterols/therapeutic use , Cardiovascular Diseases/etiology
ABSTRACT
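The pooled logistic regression named under ANALYTICAL APPROACH is a discrete-time survival method: each patient's follow-up is expanded into one row per interval at risk, and a logistic model fit to those rows approximates the hazard. A minimal sketch of the expansion step, with hypothetical field names and toy data (not the study's dataset):

```python
def expand_person_periods(patients, max_periods=24):
    """Expand patient-level records into person-period rows for
    discrete-time (pooled logistic) survival analysis.

    patients: list of dicts with 'id', 'arm' (0/1), 'months' of
              follow-up, and 'event' (1 if the outcome occurred at
              the end of follow-up; 0 if censored).
    Returns one row per patient-month; event=1 only in the final
    month of patients who had the event.
    """
    rows = []
    for p in patients:
        months = min(p["months"], max_periods)
        for t in range(1, months + 1):
            last_month = (t == months)
            rows.append({
                "id": p["id"],
                "month": t,
                "arm": p["arm"],
                "event": int(last_month and p["event"] == 1),
            })
    return rows

# Toy example: one patient with an event at month 3, one censored at month 2.
rows = expand_person_periods([
    {"id": 1, "arm": 1, "months": 3, "event": 1},
    {"id": 2, "arm": 0, "months": 2, "event": 0},
])
```

Censoring is handled naturally: a censored patient simply contributes event-free rows up to the last month observed.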
OBJECTIVES: To characterize delivery of goal-concordant end-of-life (EOL) care among children with complex chronic conditions and to determine factors associated with goal-concordance. STUDY DESIGN: This was a retrospective review of goals of care discussions for 272 children with at least 1 complex chronic condition who died at a tertiary care hospital between January 1, 2014, and December 31, 2017. Goals of care and code status were assessed before and within the last 72 hours of life. Goals of care discussions were coded as full interventions; considering withdrawal of interventions (palliation); planned transition to palliation; or actively transitioning/transitioned to palliation. RESULTS: In total, 158 children had documented goals of care discussions both before and within the last 72 hours of life, 18 had discussions only >72 hours before death, 54 only in the last 72 hours of life, and 42 had no documented goals of care. For children with documented goals of care, EOL care was goal-concordant for 82.2%, discordant for 7%, and unclear for 10.8%. Black children had more than 8-fold greater odds of discordant care compared with White children (OR 8.34, P = .007). Comparison of goals of care and code status before and within the last 72 hours of life revealed trends toward nonescalation of care: rates of active palliation increased from 11.7% to 63.0%, and the proportion of children with do-not-resuscitate status increased from 32.6% to 65.2% (P < .001). CONCLUSIONS: In this cohort, a majority of children had documented goals of care discussions and received goal-concordant EOL care. However, Black children had greater odds of receiving goal-discordant care. Goals of care and code status shifted toward palliation during the last 72 hours of life.
Subject(s)
Hospice Care , Terminal Care , Humans , Child , Goals , Resuscitation Orders , Chronic Disease
ABSTRACT
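The odds ratio reported for discordant care (OR 8.34) is the standard 2x2-table odds ratio with a Wald confidence interval. A sketch of the textbook computation, using made-up counts purely for illustration (the abstract does not report the underlying cell counts):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen for illustration only.
or_, lo, hi = odds_ratio_ci(8, 42, 10, 440)
```

With small cell counts, as here, the Wald interval is wide; exact or profile-likelihood intervals are common alternatives.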
OBJECTIVES: Infection-related childhood hearing loss is one of the few preventable chronic health conditions that can affect a child's lifelong trajectory. This study sought to quantify relationships of infection-mediated hearing loss and middle ear disease with environmental factors, such as exposure to wood smoke, cigarette smoke, household crowding, and lack of access to plumbed (running) water, in a northwest region of rural Alaska. DESIGN: This cross-sectional analysis estimated associations between environmental factors and infection-related hearing loss in children aged 3 to 21 years. School hearing screenings were performed as part of two cluster randomized trials in rural Alaska over two academic years (2017-2018 and 2018-2019). The first available screening for each child was used for this analysis. Sociodemographic questionnaires were completed by parents/guardians upon entry into the study. Multivariable regression was performed to estimate prevalence differences and prevalence ratios (PR). A priori knowledge about the prevalence of middle ear disease and the difficulty inherent in obtaining objective hearing loss data in younger children led to analysis of children by age (3 to 6 years versus 7 years and older) and a separate multiple imputation sensitivity analysis for pure-tone average (PTA)-based infection-related hearing loss measures. RESULTS: A total of 1634 children participated. Hearing loss was present in 11.1% of children sampled based on otoacoustic emission as the primary indicator of hearing loss and was not associated with exposure to cigarette smoke (PR = 1.07; 95% confidence interval [CI], 0.48 to 2.38), use of a wood-burning stove (PR = 0.85; 95% CI, 0.55 to 1.32), number of persons living in the household (PR = 1.06; 95% CI, 0.97 to 1.16), or lack of access to running water (PR = 1.38; 95% CI, 0.80 to 2.39). Using PTA as a secondary indicator of hearing loss also showed no association with environmental factors.
Middle ear disease was present in 17.4% of children. There was a higher prevalence of middle ear disease in homes without running water than in those with access to running water (PR = 1.53; 95% CI, 1.03 to 2.27). There was little evidence to support any cumulative effects of environmental factors. Heterogeneity-of-effect models by age found the sample prevalence of hearing loss was higher for children aged 3 to 6 years (12.2%; 95% CI, 9.3 to 15.7) than for children 7 years and older (10.6%; 95% CI, 8.9 to 12.6), as was the sample prevalence of middle ear disease (22.7%; 95% CI, 18.9 to 26.9 and 15.3%; 95% CI, 13.3 to 17.5, respectively). CONCLUSIONS: Lack of access to running water in the home was associated with increased prevalence of middle ear disease in this rural, Alaska Native population, particularly among younger children (aged 3 to 6 years). There was little evidence in this study that cigarette smoke, wood-burning stoves, and greater numbers of persons in the household were associated with infection-mediated hearing loss or middle ear disease. Future research with larger sample sizes and more sensitive measures of environmental exposure is necessary to further evaluate these relationships. Children who live in homes without access to running water may benefit from earlier and more frequent hearing health visits.
Subject(s)
Alaska Natives , Deafness , Hearing Loss , Child , Humans , Adolescent , Cross-Sectional Studies , Crowding , Family Characteristics , Hearing Loss/epidemiology , Water
ABSTRACT
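Prevalence ratios (PR) like those above compare prevalence in an exposed group to prevalence in an unexposed group, with the confidence interval computed on the log scale. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def prevalence_ratio(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Prevalence ratio with a 95% CI on the log scale.

    cases_exp / n_exp:     cases and total in the exposed group.
    cases_unexp / n_unexp: cases and total in the unexposed group.
    """
    p_exp = cases_exp / n_exp
    p_unexp = cases_unexp / n_unexp
    pr = p_exp / p_unexp
    # Standard error of log(PR) for a ratio of two proportions.
    se = math.sqrt((1 - p_exp) / cases_exp + (1 - p_unexp) / cases_unexp)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts: 30/120 with the outcome among exposed,
# 150/1300 among unexposed.
pr, lo, hi = prevalence_ratio(30, 120, 150, 1300)
```

In practice the study's PRs come from multivariable regression, which adjusts for covariates; this unadjusted version only shows the underlying quantity.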
OBJECTIVE: Childhood hearing loss has well-known, lifelong consequences. Infection-related hearing loss disproportionately affects underserved communities yet can be prevented with early identification and treatment. This study evaluates the utility of machine learning in automating tympanogram classifications of the middle ear to facilitate layperson-guided tympanometry in resource-constrained communities. DESIGN: Diagnostic performance of a hybrid deep learning model for classifying narrow-band tympanometry tracings was evaluated. Using 10-fold cross-validation, a machine learning model was trained and evaluated on 4810 pairs of tympanometry tracings acquired by an audiologist and a layperson. The model was trained to classify tracings into types A (normal), B (effusion or perforation), and C (retraction), with the audiologist interpretation serving as the reference standard. Tympanometry data were collected from 1635 children from October 10, 2017, to March 28, 2019, from two previous cluster-randomized hearing screening trials (NCT03309553, NCT03662256). Participants were school-aged children from an underserved population in rural Alaska with a high prevalence of infection-related hearing loss. Two-level classification performance statistics were calculated by treating type A as pass and types B and C as refer. RESULTS: For layperson-acquired data, the machine learning model achieved a sensitivity of 95.2% (93.3, 97.1), specificity of 92.3% (91.5, 93.1), and area under the curve (AUC) of 0.968 (0.955, 0.978). The model's sensitivity was greater than that of the tympanometer's built-in classifier [79.2% (75.5, 82.8)] and a decision tree based on clinically recommended normative values [56.9% (52.4, 61.3)]. For audiologist-acquired data, the model achieved a higher AUC of 0.987 (0.980, 0.993), an equivalent sensitivity of 95.2% (93.3, 97.1), and a higher specificity of 97.7% (97.3, 98.2).
CONCLUSIONS: Machine learning can detect middle ear disease with comparable performance to an audiologist using tympanograms acquired either by an audiologist or a layperson. Automated classification enables the use of layperson-guided tympanometry in hearing screening programs in rural and underserved communities, where early detection of treatable pathology in children is crucial to prevent the lifelong adverse effects of childhood hearing loss.
Subject(s)
Deafness , Deep Learning , Hearing Loss , Child , Humans , Hearing Loss/diagnosis , Acoustic Impedance Tests , Ear, Middle
ABSTRACT
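The two-level performance statistics above (sensitivity, specificity) reduce to counts from a pass/refer confusion table once types B and C are collapsed into "refer". A small sketch with a synthetic result list (not the study's 4810 tracings):

```python
def screen_metrics(results):
    """Sensitivity and specificity for a pass/refer screen.

    results: list of (predicted_refer, truly_abnormal) booleans,
    treating type A as pass and types B/C as refer, per the
    two-level scheme described above.
    """
    tp = sum(p and t for p, t in results)            # correct refers
    fn = sum((not p) and t for p, t in results)      # missed disease
    tn = sum((not p) and (not t) for p, t in results)  # correct passes
    fp = sum(p and (not t) for p, t in results)      # false refers
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Synthetic example: 10 abnormal ears (9 caught), 100 normal (90 passed).
results = ([(True, True)] * 9 + [(False, True)] * 1 +
           [(False, False)] * 90 + [(True, False)] * 10)
sens, spec = screen_metrics(results)
```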
OBJECTIVES: Diagnostic accuracy was evaluated for various screening tools, including mobile health (mHealth) pure-tone screening, tympanometry, distortion product otoacoustic emissions (DPOAE), and inclusion of high frequencies, to determine the most accurate screening protocol for identifying children with hearing loss in rural Alaska, where the prevalence of middle ear disease is high. DESIGN: Hearing screening data were collected as part of two cluster randomized trials conducted in 15 communities in rural northwest Alaska. All children enrolled in school from preschool to 12th grade were eligible. Analysis was limited to data collected 2018 to 2019 (n = 1449), when both trials were running and measurement of high frequencies was included in the protocols. Analyses included estimates of diagnostic accuracy for each screening tool, as well as exploration of performance by age and grade. Multiple imputation was used to assess diagnostic accuracy in younger children, where missing data were more prevalent due to requirements for conditioned responses. The audiometric reference standard included otoscopy, tympanometry, and high frequencies to ensure detection of infection-related and noise-induced hearing loss. RESULTS: Both the mHealth pure-tone screen and DPOAE screen performed better when tympanometry was added to the protocol (increase in sensitivity of 19.9%, 95% Confidence Interval (CI): 15.9 to 24.1 for mHealth screen, 17.9%, 95% CI: 14.0 to 21.8 for high-frequency mHealth screen, and 10.4%, 95% CI: 7.5 to 13.9 for DPOAE). The addition of 6 kHz to the mHealth pure-tone screen provided an 8.7 percentage point improvement in sensitivity (95% CI: 6.5 to 11.3). Completeness of data for both the reference standard and the mHealth screening tool differed substantially by age, due to difficulty with behavioral testing in young children.
By age 7, children were able to complete behavioral testing, and data indicated that the high-frequency mHealth pure-tone screen with tympanometry was the superior tool for children 7 years and older. For children 3 to 6 years of age, DPOAE plus tympanometry performed best, both for complete data and multiply imputed data, which better approximates accuracy for children with missing data. CONCLUSIONS: This study directly evaluated pure-tone, DPOAE, and tympanometry tools as part of school hearing screening in rural Alaskan children (3 to 18+ years). Results from this study indicate that tympanometry is a key component of the hearing screening protocol, particularly in environments with a higher prevalence of infection-related hearing loss. DPOAE is the preferred hearing screening tool when evaluating children younger than 7 years of age (below 2nd grade in the United States) due to the frequency of missing data with behavioral testing in this age group. For children 7 years and older, the addition of high frequencies to pure-tone screening increased the accuracy of screening, likely due to improved identification of hearing loss from noise exposure. The lack of a consistent reference standard in the literature makes comparing across studies challenging. In our study with a reference standard inclusive of otoscopy, tympanometry, and high frequencies, less than ideal sensitivities were found even for the most sensitive screening protocols, suggesting more investigation is necessary to ensure screening programs are appropriately identifying noise- and infection-related hearing loss in rural, low-resource settings.
Asunto(s)
Sordera , Pérdida Auditiva Provocada por Ruido , Niño , Humanos , Preescolar , Alaska , Emisiones Otoacústicas Espontáneas/fisiología , Audiometría de Tonos Puros , Ensayos Clínicos Controlados Aleatorios como Asunto , Instituciones AcadémicasRESUMEN
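The sensitivity gain from adding tympanometry to a pure-tone or DPOAE screen can be reasoned about with the classic parallel-testing formula: a child is referred if either test refers. The sketch below assumes the two tests err independently, which real screens generally do not, so it gives an upper bound on the gain rather than the study's measured values (the example sensitivities and specificities are made up):

```python
def parallel_refer(se_a, sp_a, se_b, sp_b):
    """Approximate combined sensitivity/specificity when a child is
    referred if EITHER of two screens refers, assuming the tests err
    independently (a simplifying assumption; correlated errors
    shrink the sensitivity gain in practice)."""
    se = 1 - (1 - se_a) * (1 - se_b)  # missed only if both screens miss
    sp = sp_a * sp_b                  # passed only if both screens pass
    return se, sp

# Hypothetical screens: pure-tone (se 0.60, sp 0.95) plus
# tympanometry (se 0.70, sp 0.90).
se, sp = parallel_refer(0.60, 0.95, 0.70, 0.90)
```

Note the trade-off visible in the formula: parallel combination raises sensitivity but necessarily lowers specificity.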
OBJECTIVES: Preschool programs provide essential preventive services, such as hearing screening, but in rural regions, limited access to specialists and loss to follow-up compound rural health disparities. We conducted a parallel-arm cluster-randomized controlled trial to evaluate telemedicine specialty referral for preschool hearing screening. The goal of this trial was to improve timely identification and treatment of early childhood infection-related hearing loss, a preventable condition with lifelong implications. We hypothesized that telemedicine specialty referral would improve time to follow-up and the number of children receiving follow-up compared with the standard primary care referral. DESIGN: We conducted a cluster-randomized controlled trial in K-12 schools in 15 communities over two academic years. Community randomization occurred within four strata using location and school size. In the second academic year (2018-2019), an ancillary trial was performed in the 14 communities that had preschools to compare telemedicine specialty referral (intervention) to standard primary care referral (comparison) for preschool hearing screening. Randomization of communities from the main trial was used for this ancillary trial. All children enrolled in preschool were eligible. Masking was not possible because of timing in the second year of the main trial, but referral assignment was not openly disclosed. Study team members and school staff were masked throughout data collection, and statisticians were blinded to allocation during analysis. Preschool screening occurred once, and children who were referred for possible hearing loss or ear disease were monitored for follow-up for 9 months from the screening date. The primary outcome was time to ear/hearing-related follow-up from the date of screening. The secondary outcome was any ear/hearing follow-up from screening to 9 months. Analyses were conducted using an intention-to-treat approach. 
RESULTS: A total of 153 children were screened between September 2018 and March 2019. Of the 14 communities, 8 were assigned to the telemedicine specialty referral pathway (90 children), and 6 to the standard primary care referral pathway (63 children). Seventy-one children (46.4%) were referred for follow-up: 39 (43.3%) in the telemedicine specialty referral communities and 32 (50.8%) in the standard primary care referral communities. Of children referred, 30 (76.9%) children in telemedicine specialty referral communities and 16 (50.0%) children in standard primary care referral communities received follow-up within 9 months (Risk Ratio = 1.57; 95% confidence interval [CI], 1.22 to 2.01). Among children who received follow-up, median time to follow-up was 28 days (interquartile range [IQR]: 15 to 71) in telemedicine specialty referral communities compared with 85 days (IQR: 26 to 129) in standard primary care referral communities. Follow-up for all referred children occurred 4.5 times faster in telemedicine specialty referral communities than in standard primary care referral communities within the 9-month follow-up window (event time ratio = 4.5; 95% CI, 1.8 to 11.4; p = 0.045). CONCLUSIONS: Telemedicine specialty referral significantly improved follow-up and reduced time to follow-up after preschool hearing screening in rural Alaska. Telemedicine referrals could extend to other preventive school-based services to improve access to specialty care for rural preschool children.
Subject(s)
Deafness , Hearing Loss , Telemedicine , Humans , Child, Preschool , Alaska , Hearing Loss/diagnosis , School Health Services , Referral and Consultation
ABSTRACT
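The reported risk ratio of 1.57 is a cluster-adjusted estimate; the crude version can be reproduced directly from the counts in the abstract (30 of 39 referred children followed up under telemedicine referral vs 16 of 32 under standard referral), and differs slightly from the adjusted figure:

```python
# Counts taken from the RESULTS section above.
followed_tele, referred_tele = 30, 39
followed_std, referred_std = 16, 32

risk_tele = followed_tele / referred_tele   # proportion followed up, telemedicine arm
risk_std = followed_std / referred_std      # proportion followed up, standard arm
crude_rr = risk_tele / risk_std             # crude (unadjusted) risk ratio
```

The gap between this crude value (about 1.54) and the published 1.57 reflects adjustment for community-level clustering in the trial's analysis.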
OBJECTIVES: Childhood hearing loss has well-known lifelong consequences. Certain rural populations are at higher risk for infection-related hearing loss. For Alaska Native children, historical data on hearing loss prevalence suggest a higher burden of infection-related hearing loss, but updated prevalence data are urgently needed in this high-risk population. DESIGN: Hearing data were collected as part of two school-based cluster-randomized trials in 15 communities in rural northwest Alaska over two academic years (2017-2019). All enrolled children from preschool to 12th grade were eligible. Pure-tone thresholds were obtained using standard audiometry and conditioned play when indicated. The analysis included the first available audiometric assessment for each child (n = 1634 participants, aged 3 to 21 years), except for the high-frequency analysis, which was limited to year 2, when higher frequencies were collected. Multiple imputation was used to quantify the prevalence of hearing loss in younger children, where missing data were more frequent due to the need for behavioral responses. Hearing loss in either ear was evaluated using both the former World Health Organization (WHO) definition (pure-tone average [PTA] > 25 dB) and the new WHO definition (PTA ≥ 20 dB), which was published after the study. Analyses with the new definition were limited to children 7 years and older due to incomplete data obtained on younger children at lower thresholds. RESULTS: The overall prevalence of hearing loss (PTA > 25 dB; 0.5, 1, 2, 4 kHz) was 10.5% (95% confidence interval [CI], 8.9 to 12.1). Hearing loss was predominantly mild (PTA >25 to 40 dB; 8.9%, 95% CI, 7.4 to 10.5). The prevalence of unilateral hearing loss was 7.7% (95% CI, 6.3 to 9.0). Conductive hearing loss (air-bone gap of ≥ 10 dB) was the most common hearing loss type (9.1%, 95% CI, 7.6 to 10.7).
Stratified by age, hearing loss (PTA >25 dB) was more common in children 3 to 6 years (14.9%, 95% CI, 11.4 to 18.5) compared to children 7 years and older (8.7%, 95% CI, 7.1 to 10.4). In children 7 years and older, the new WHO definition increased the prevalence of hearing loss to 23.4% (95% CI, 21.0 to 25.8) compared to the former definition (8.7%, 95% CI, 7.1 to 10.4). Middle ear disease prevalence was 17.6% (95% CI, 15.7 to 19.4) and was higher in younger children (23.6%, 95% CI, 19.7 to 27.6) compared to older children (15.2%, 95% CI, 13.2 to 17.3). High-frequency hearing loss (4, 6, and 8 kHz) was present in 20.5% (95% CI, 18.4 to 22.7 [PTA >25 dB]) of all children and 22.8% (95% CI, 20.3 to 25.3 [PTA >25 dB]) and 29.7% (95% CI, 27.0 to 32.4 [PTA ≥ 20 dB]) of children 7 years and older (limited to year 2). CONCLUSIONS: This analysis represents the first prevalence study on childhood hearing loss in Alaska in over 60 years and is the largest cohort with hearing data ever collected in rural Alaska. Our results highlight that hearing loss continues to be common in rural Alaska Native children, with middle ear disease more prevalent in younger children and high-frequency hearing loss more prevalent with increasing age. Prevention efforts may benefit from managing hearing loss type by age. Lastly, continued research is needed on the impact of the new WHO definition of hearing loss on field studies.
Subject(s)
Deafness , Hearing Loss, High-Frequency , Child , Humans , Child, Preschool , Adolescent , Alaska/epidemiology , Prevalence , Rural Population , Audiometry, Pure-Tone/methods
ABSTRACT
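Prevalence estimates with confidence intervals, such as the overall 10.5% (95% CI, 8.9 to 12.1), can be approximated with a Wilson score interval. The abstract does not state which interval the authors used, so the sketch below is illustrative; 172 of 1634 is the approximate count behind 10.5%:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion (k events in n trials).
    Better behaved than the Wald interval near 0 or 1."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Approximate counts consistent with 10.5% of 1634 children.
lo, hi = wilson_ci(172, 1634)
```

Survey-design features (clustering by community, imputation) would widen the study's actual intervals relative to this simple calculation.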
Statins failed to reduce cardiovascular (CV) events in trials of patients on dialysis. However, trial populations used criteria that often excluded those with atherosclerotic heart disease (ASHD), in whom statins have the greatest benefit, and included outcome composites with high rates of nonatherosclerotic CV events that may not be modified by statins. Here, we study whether statin use associates with lower atherosclerotic CV risk among patients with known ASHD on dialysis, including in those likely to receive a kidney transplant, a group excluded within trials but with lower competing mortality risks. METHODS: Using data from the United States Renal Data System, including Medicare claims, we identified adults initiating dialysis with ASHD. We matched statin users 1:1 to statin nonusers with propensity scores incorporating hard matches for age and kidney transplant listing status. Using Cox models, we evaluated associations of statin use with the primary composite of fatal/nonfatal myocardial infarction and stroke (including within prespecified subgroups of younger age [<50 years] and waitlisting status); secondary outcomes included all-cause mortality and the composite of all-cause mortality, nonfatal myocardial infarction, or stroke. RESULTS: Of 197,716 patients with ASHD, 47,562 (24%) were consistent statin users, from whom we created 46,186 matched pairs. Over a median of 662 days, statin users had similar risk of fatal/nonfatal myocardial infarction or stroke overall (hazard ratio [HR] 1.00, 95% CI 0.97-1.02) and in subgroups (age <50 years [HR = 1.05, 95% CI 0.95-1.17]; waitlisted for kidney transplant [HR 0.99, 95% CI 0.97-1.02]). Statin use was modestly associated with lower all-cause mortality (HR 0.96, 95% CI 0.94-0.98; E-value = 1.21) and, similarly, with a modestly lower composite risk of all-cause mortality, nonfatal myocardial infarction, or stroke over the first 2 years (HR 0.90, 95% CI 0.88-0.91) that attenuated thereafter (HR 0.98, 95% CI 0.96-1.01).
CONCLUSIONS: Our large observational analyses are consistent with trials in more selected populations and suggest that statins may not meaningfully reduce atherosclerotic CV events even among incident dialysis patients with established ASHD and those likely to receive kidney transplants.
Subject(s)
Atherosclerosis/drug therapy , Coronary Disease/drug therapy , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Kidney Failure, Chronic/therapy , Renal Dialysis , Adult , Age Factors , Aged , Aged, 80 and over , Atherosclerosis/epidemiology , Cause of Death , Coronary Disease/epidemiology , Female , Humans , Kaplan-Meier Estimate , Kidney Transplantation , Male , Middle Aged , Myocardial Infarction/epidemiology , Propensity Score , Stroke/epidemiology
ABSTRACT
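The 1:1 propensity-score matching described in METHODS can be sketched as greedy nearest-neighbor matching without replacement within a caliper. This simplified version omits the study's hard matching on age and transplant listing status, and all IDs and scores below are made up:

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity score,
    without replacement, within an absolute caliper.

    treated, controls: lists of (id, propensity_score).
    Returns list of (treated_id, control_id) pairs; treated units
    with no control inside the caliper are left unmatched.
    """
    pairs = []
    available = list(controls)
    for tid, tscore in sorted(treated, key=lambda t: t[1]):
        best, best_dist = None, caliper
        for control in available:
            dist = abs(control[1] - tscore)
            if dist <= best_dist:
                best, best_dist = control, dist
        if best is not None:
            pairs.append((tid, best[0]))
            available.remove(best)  # without replacement
    return pairs

# Hypothetical scores for illustration.
pairs = greedy_match(
    treated=[("t1", 0.30), ("t2", 0.50)],
    controls=[("c1", 0.31), ("c2", 0.52), ("c3", 0.90)],
)
```

Hard matching (as in the study) would be implemented by running this within strata of age band and listing status; optimal rather than greedy matching is another common refinement.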
HIV incidence among young men who have sex with men (YMSM) is disproportionally high. Youth living with HIV demonstrate low rates of sustained virologic suppression (VS). Epic Allies, a theory-based behavioral intervention mobile app, utilizes self-management tools, gamification, and social support to improve engagement in care and antiretroviral adherence among YMSM living with HIV. A two-arm, individually randomized controlled trial enrolled 146 participants aged 16 to 24 years to test the efficacy of Epic Allies in achieving VS. Both study arms showed improved VS at 26 weeks (62.9% intervention vs 73.5% control; adjusted risk ratio [ARR] = 0.93 [95% CI 0.73, 1.18]) and improved antiretroviral adherence; intervention effects were amplified in regular app users. Issues with recruitment and app usage metrics limit the ability to conclude definitively that the app caused behavior changes resulting in improved health outcomes. (ClinicalTrials.gov Identifier: NCT02782130).
Subject(s)
HIV Infections , Mobile Applications , Sexual and Gender Minorities , Adolescent , Adult , Anti-Retroviral Agents/therapeutic use , HIV Infections/drug therapy , Homosexuality, Male , Humans , Male , Young Adult
ABSTRACT
BACKGROUND: More than half of artemisinin combination therapies (ACTs) consumed globally are dispensed in the retail sector, where diagnostic testing is uncommon, leading to overconsumption and poor targeting. In many malaria-endemic countries, ACTs sold over the counter are available at heavily subsidized prices, further contributing to their misuse. Inappropriate use of ACTs can have serious implications for the spread of drug resistance and leads to poor outcomes for nonmalaria patients treated with incorrect drugs. We evaluated the public health impact of an innovative strategy that targets ACT subsidies to confirmed malaria cases by coupling free diagnostic testing with a diagnosis-dependent ACT subsidy. METHODS AND FINDINGS: We conducted a cluster-randomized controlled trial in 32 community clusters in western Kenya (population approximately 160,000). Eligible clusters had retail outlets selling ACTs and existing community health worker (CHW) programs and were randomly assigned 1:1 to control and intervention arms. In intervention areas, CHWs were available in their villages to perform malaria rapid diagnostic tests (RDTs) on demand for any individual >1 year of age experiencing a malaria-like illness. Malaria RDT-positive individuals received a voucher for a discount on a quality-assured ACT, redeemable at a participating retail medicine outlet. In control areas, CHWs offered a standard package of health education, prevention, and referral services. We conducted 4 population-based surveys (at baseline and at 6, 12, and 18 months) of a random sample of households with fever in the last 4 weeks to evaluate predefined, individual-level outcomes. The primary outcome was uptake of malaria diagnostic testing at 12 months. The main secondary outcome was rational ACT use, defined as the proportion of ACTs used by test-positive individuals.
Analyses followed the intention-to-treat principle using generalized estimating equations (GEEs) to account for clustering, with prespecified adjustment for gender, age, education, and wealth. All descriptive statistics and regressions were weighted to account for the sampling design. Between July 2015 and May 2017, 32,404 participants were tested for malaria, and 10,870 vouchers were issued. A total of 7,416 randomly selected participants with recent fever from all 32 clusters were surveyed. The majority of recent fevers were in children under 18 years (62.9%, n = 4,653). The gender of enrolled participants was balanced in children (49.8%, n = 2,318 boys versus 50.2%, n = 2,335 girls), but more adult women were enrolled than men (78.0%, n = 2,139 versus 22.0%, n = 604). At baseline, 67.6% (n = 1,362) of participants took an ACT for their illness, and 40.3% (n = 810) of all participants took an ACT purchased from a retail outlet. At 12 months, 50.5% (n = 454) in the intervention arm and 43.4% (n = 389) in the control arm had a malaria diagnostic test for their recent fever (adjusted risk difference [RD] = 9 percentage points [pp]; 95% CI 2-15 pp; p = 0.015; adjusted risk ratio [ARR] = 1.20; 95% CI 1.05-1.38; p = 0.015). By 18 months, the ARR had increased to 1.25 (95% CI 1.09-1.44; p = 0.005). Rational use of ACTs in the intervention area increased from 41.7% (n = 279) at baseline to 59.6% (n = 403) and was 40% higher in the intervention arm at 18 months (ARR 1.40; 95% CI 1.19-1.64; p < 0.001). While intervention effects increased between 12 and 18 months, we were not able to estimate the longer-term impact of the intervention and could not independently evaluate the effects of the free testing and the voucher on uptake of testing. CONCLUSIONS: Diagnosis-dependent ACT subsidies and community-based interventions that include the private sector can have an important impact on diagnostic testing and population-wide rational use of ACTs.
Targeting of the ACT subsidy itself to those with a positive malaria diagnostic test may also improve sustainability and reduce the cost of retail-sector ACT subsidies. TRIAL REGISTRATION: ClinicalTrials.gov NCT02461628.
Subject(s)
Antimalarials/economics , Antimalarials/therapeutic use , Artemisinins/economics , Artemisinins/therapeutic use , Drug Costs , Malaria/drug therapy , Medication Adherence , Nonprescription Drugs/economics , Nonprescription Drugs/therapeutic use , Point-of-Care Testing , Adolescent , Adult , Child , Child, Preschool , Community Health Workers , Drug Combinations , Female , Healthcare Financing , Humans , Infant , Kenya/epidemiology , Malaria/diagnosis , Malaria/economics , Malaria/parasitology , Male , Predictive Value of Tests , Private Sector/economics , Public-Private Sector Partnerships/economics , Time Factors , Treatment Outcome
ABSTRACT
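The 12-month primary-outcome contrast (50.5% vs 43.4% tested) can be expressed as a crude risk difference in percentage points and a crude risk ratio. The abstract's 9 pp and 1.20 figures are covariate- and cluster-adjusted GEE estimates, so the crude values computed directly from the proportions differ slightly:

```python
# Proportions tested at 12 months, taken from the abstract.
p_intervention, p_control = 0.505, 0.434

crude_rd_pp = (p_intervention - p_control) * 100  # risk difference, percentage points
crude_rr = p_intervention / p_control             # crude risk ratio
```

The crude values (about 7.1 pp and 1.16) versus the adjusted ones (9 pp and 1.20) illustrate why cluster trials report model-based rather than raw contrasts.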
BACKGROUND: Insecticide-treated bed nets (ITN) have been shown to be efficacious in reducing malaria morbidity and mortality in many regions. Unfortunately, in some areas malaria has persisted despite the scale-up of ITNs. Recent reports indicate that human behaviour and mosquito behaviour are potential threats to the efficacy of ITNs. However, these concerns are likely highly heterogeneous, even at very small scales. This study aimed to develop, test, and validate a rapid assessment tool to collect actionable information at local levels for quick evaluation of potential barriers to malaria prevention. METHODS: The study was conducted at the Webuye Health and Demographic Surveillance Site in Bungoma East Sub-County, Kenya. Based on findings from a prior case-control study, 12 primary surveillance components encompassing the major impediments to successful prevention were identified and used to develop a rapid assessment tool. Twenty community health volunteers were trained to identify patients with laboratory-confirmed malaria in six peripheral health facilities located within six sub-locations and subsequently followed them to their homes to conduct a rapid assessment. Sampling and analysis of the survey results were based on Lot Quality Assurance Sampling. RESULTS: The tool was able to detect local heterogeneity in bed net coverage, bed net use, and larval site abundance in the six health facility catchment areas. Nearly all the catchment areas met the action threshold for incomplete household coverage (i.e., not all household members used a net the previous night), except the peri-urban area. Although the threshold for nets not in good condition was set very high (≥50%), only two catchment areas failed to meet the action threshold. On the indicator for "net not used every day last week", half of the areas failed, while for net ownership, only two areas met the action threshold.
CONCLUSION: The rapid assessment tool was able to detect marked heterogeneity in key indicators for malaria prevention between patients attending health facilities and can distinguish between priority areas for intervention. There is a need to validate it for use in other contexts.
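Lot Quality Assurance Sampling reduces each catchment-area survey to a pass/fail decision: sample a small "lot" of households and flag the area for action when the number failing an indicator reaches a decision threshold. A minimal sketch of that decision rule follows, with a hypothetical lot size and threshold (the study's actual sample sizes and thresholds are not reported in the abstract):

```python
def lqas_flag(failures: int, lot_size: int, decision_threshold: int) -> bool:
    """Return True when a sampled 'lot' of households fails an indicator
    often enough that the catchment area should be flagged for action."""
    if not 0 <= failures <= lot_size:
        raise ValueError("failures must be between 0 and lot_size")
    return failures >= decision_threshold

# Hypothetical lot of 19 households with a decision threshold of 7 failures:
flagged = lqas_flag(failures=9, lot_size=19, decision_threshold=7)      # True
not_flagged = lqas_flag(failures=3, lot_size=19, decision_threshold=7)  # False
```

In practice, the lot size and decision threshold are chosen from binomial error probabilities so that areas well above or well below the coverage target are misclassified with acceptably low probability.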
Subject(s)
Disease Transmission, Infectious/prevention & control , Epidemiologic Methods , Insecticide-Treated Bednets/statistics & numerical data , Malaria/prevention & control , Mosquito Control/methods , Case-Control Studies , Child , Female , Humans , Kenya , Male , Time Factors , Volunteers
ABSTRACT
This study assesses why some individuals are re-arrested for driving while intoxicated (DWI). Using longitudinal data from North Carolina containing information on arrests and arrest outcomes, we test hypotheses that individuals prosecuted and convicted of DWI are less likely to be re-arrested for DWI. We allow for possible endogeneity of prosecution and conviction outcomes by using instrumental variables for the prosecutor's prosecution rate and the judge's conviction rate. With a three-year follow-up, the probability of DWI re-arrest was reduced by 6.6 percent if the person was prosecuted for DWI and, for those prosecuted, by 24.5 percent if convicted on this charge. Prosecution and conviction for DWI deter re-arrest for DWI.
ABSTRACT
BACKGROUND: Inappropriate treatment of non-malaria fevers with artemisinin-based combination therapies (ACTs) is a growing concern, particularly in light of emerging artemisinin resistance, but it is a behavior that has proven difficult to change. Pay for performance (P4P) programs have generated interest as a mechanism to improve health service delivery and accountability in resource-constrained health systems. However, there has been little experimental evidence to establish the effectiveness of P4P in developing countries. We tested a P4P strategy that emphasized parasitological diagnosis and appropriate treatment of suspected malaria, in particular reduction of unnecessary consumption of ACTs. METHODS: A random sample of 18 health centers was selected and received a refresher workshop on malaria case management. Pre-intervention baseline data was collected from August to September 2012. Facilities were subsequently randomized to either the comparison (n = 9) or intervention arm (n = 9). Between October 2012 and November 2013, facilities in the intervention arm received quarterly incentive payments based on seven performance indicators. Incentives were for use by facilities rather than as payments to individual providers. All non-pregnant patients older than 1 year of age who presented to a participating facility and received either a malaria test or artemether-lumefantrine (AL) were eligible to be included in the analysis. Our primary outcome was prescription of AL to patients with a negative malaria diagnostic test (n = 11,953). Our secondary outcomes were prescription of AL to patients with laboratory-confirmed malaria (n = 2,993) and prescription of AL to patients without a malaria diagnostic test (analyzed at the cluster level, n = 178 facility-months). RESULTS: In the final quarter of the intervention period, the proportion of malaria-negative patients in the intervention arm who received AL was lower than in the comparison arm (7.3% versus 10.9%). 
The improvement from baseline to quarter 4 in the intervention arm was nearly three times that of the comparison arm (ratio of adjusted odds ratios for baseline to quarter 4 = 0.36, 95% CI: 0.24-0.57). The rate of prescription of AL to patients without a test in the intervention arm was about one-fifth that of the comparison arm (adjusted incidence rate ratio = 0.18, 95% CI: 0.07-0.48). Prescription of AL to patients with confirmed infection was not significantly different between the groups over the study period. CONCLUSIONS: Facility-based incentives coupled with training may be more effective than training alone and could complement other quality improvement approaches. TRIAL REGISTRATION: This study was registered with ClinicalTrials.gov (NCT01809873) on 11 March 2013.
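The headline effect estimate is a difference-in-differences on the odds scale: the odds ratio for the baseline-to-quarter-4 change in the intervention arm, divided by the same quantity in the comparison arm. A minimal sketch of the unadjusted version of that calculation follows, using hypothetical baseline proportions (the published 0.36 estimate is covariate-adjusted and cannot be reproduced from the abstract alone):

```python
def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1.0 - p)

def odds_ratio(p_after: float, p_before: float) -> float:
    """Odds ratio for a within-arm change from p_before to p_after."""
    return odds(p_after) / odds(p_before)

# Proportion of malaria-negative patients prescribed AL, as (baseline, quarter 4).
# Quarter-4 values are from the abstract; baseline values are hypothetical.
intervention = (0.20, 0.073)
comparison = (0.15, 0.109)

ratio_of_ors = (odds_ratio(intervention[1], intervention[0])
                / odds_ratio(comparison[1], comparison[0]))
# A ratio below 1 means the intervention arm improved more than the comparison arm.
```

Regression adjustment (as in the study's analysis) would shift this estimate, but the interpretation of the ratio is the same.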
Subject(s)
Fever/diagnosis , Malaria/diagnosis , Reimbursement, Incentive/statistics & numerical data , Adult , Disease Management , Female , Humans , Kenya , Male , Middle Aged , Motivation , Rural Population
ABSTRACT
Hospital length of stay (LOS) in the USA has been increasing since the start of the COVID-19 pandemic, with numerous negative outcomes, including decreased quality of care, worsened patient satisfaction and negative financial impacts on hospitals. While many proposed factors contributing to prolonged LOS are challenging to modify, poor coordination of care and communication among clinical teams can be improved. Geographical cohorting of provider teams, patients and other clinical staff is proposed as a solution to prolonged LOS and readmissions. However, many studies on geographical cohorting alone have shown no significant impact on LOS or readmissions. Other potential benefits of geographical cohorting include improved quality of care, learning experience, communication, teamwork and efficiency. This paper presents a retrospective study at Duke University Hospital (DUH) on the General Medicine service, deploying a bundled intervention of geographical cohorting of patients and their care teams, twice daily multidisciplinary rounds and incremental case management support. The quality improvement study found that patients in the intervention arm had 16%-17% shorter LOS than those in the control arms, and there was a reduction in 30-day hospital readmissions compared with the concurrent control arm. Moreover, there was some evidence of improved accuracy of estimated discharge dates in the intervention arm. Based on these findings, the health system at DUH recognised the value of geographical cohorting and implemented additional geographically based medicine units with multidisciplinary rounds. Future studies will confirm the sustained impact of these care transformations on hospital throughput and patient outcomes, aiming to reduce LOS and enhance the quality of care provided to patients.
Subject(s)
COVID-19 , Case Management , Length of Stay , Patient Readmission , Humans , Patient Readmission/statistics & numerical data , Length of Stay/statistics & numerical data , COVID-19/therapy , Retrospective Studies , Case Management/statistics & numerical data , Case Management/standards , Quality Improvement , Male , Female , SARS-CoV-2 , Middle Aged , Patient Care Team/statistics & numerical data , Patient Care Team/standards , Propensity Score , Pandemics , Aged , North Carolina , Teaching Rounds/methods , Teaching Rounds/statistics & numerical data , Teaching Rounds/standards
ABSTRACT
Chronic stress undermines psychological and physiological health. We tested three remotely delivered stress management interventions among clergy, accounting for intervention preferences. United Methodist clergy in North Carolina enrolled in a partially randomized, preference-based waitlist control trial. The interventions were: mindfulness-based stress reduction (MBSR), Daily Examen prayer practice, and Stress Proofing (stress inoculation plus breathing skills). Co-primary outcomes were symptoms of stress (Calgary Symptoms of Stress Inventory) and 48-hour ambulatory heart rate variability (HRV) at 12 weeks compared to waitlist control. Survey data were collected at 0, 12, and 24 weeks and 48-hour ambulatory HRV at 0 and 12 weeks. The 255 participants were 91% White and 48% female. Forty-nine participants (22%) without a preference were randomly assigned between the three interventions (n = 40) and waitlist control (n = 9). Two hundred six participants (78%) with a preference were randomly assigned to waitlist control (n = 62) or their preferred intervention (n = 144). Compared to waitlist control, MBSR [mean difference (MD) = -0.30, 95% CI: -0.41, -0.20; P < .001] and Stress Proofing (MD = -0.27, 95% CI: -0.40, -0.14; P < .001) participants had lower stress symptoms at 12 weeks; Daily Examen participants did not until 24 weeks (MD = -0.24, 95% CI: -0.41, -0.08). MBSR participants demonstrated improvement in HRV at 12 weeks (MD = +3.32 ms; 95% CI: 0.21, 6.44; P = .036). MBSR demonstrated robust improvement in self-reported and objective physical correlates of stress; Stress Proofing and Daily Examen resulted in improvements in self-reported correlates of stress. These brief practices were sustainable and beneficial for United Methodist clergy during the heightened stressors of the COVID pandemic. ClinicalTrials.gov identifier: NCT04625777.
A common source of stress, which can harm physical and mental health, is work. Clergy engage in a profession that requires toggling between varied and interpersonally complex tasks, providing emotional labor, and experiencing stressors such as public criticism. Practical, brief practices are needed to manage occupational stress. We invited all United Methodist clergy in North Carolina to enroll in a stress management study. Participants chose their preferred option among three interventions: mindfulness-based stress reduction (MBSR), the Daily Examen prayer practice, or Stress Proofing (a combination of stress inoculation and breathing skills). Clergy without a preference were randomly assigned to one of the three interventions or a waiting group. Clergy with a preference were randomly assigned to either begin the intervention or wait at least 6 months and provide data while waiting. Participants practiced each of the three interventions at high levels across 24 weeks. Compared to clergy who waited for an intervention, MBSR participants evidenced robust improvement in self-reported (stress and anxiety symptoms) and physiological (heart rate variability measured across 48 hours) outcomes, whereas Stress Proofing and the Daily Examen resulted in improvements only in self-reported outcomes. The three brief practices were sustainable and beneficial for United Methodist clergy during the heightened stressors of the COVID pandemic.
ABSTRACT
Data-intensive research continues to expand with the goal of improving healthcare delivery, clinical decision-making, and patient outcomes. Quantitative scientists, such as biostatisticians, epidemiologists, and informaticists, are tasked with turning data into health knowledge. In academic health centres, quantitative scientists are critical to the missions of biomedical discovery and improvement of health. Many academic health centres have developed centralized Quantitative Science Units which foster dual goals of professional development of quantitative scientists and producing high quality, reproducible domain research. Such units then develop teams of quantitative scientists who can collaborate with researchers. However, existing literature does not provide guidance on how such teams are formed or how to manage and sustain them. Leaders of Quantitative Science Units across six institutions formed a working group to examine common practices and tools that can serve as best practices for Quantitative Science Units that wish to achieve these dual goals through building long-term partnerships with researchers. The results of this working group are presented to provide tools and guidance for Quantitative Science Units challenged with developing, managing, and evaluating Quantitative Science Teams. This guidance aims to help Quantitative Science Units effectively participate in and enhance the research that is conducted throughout the academic health centre, shaping their resources to fit evolving research needs.
ABSTRACT
BACKGROUND: The majority of maternal deaths, stillbirths, and neonatal deaths are concentrated in a few countries, many of which have weak health systems, poor access to health services, and low coverage of key health interventions. Early and consistent antenatal care (ANC) attendance could significantly reduce maternal and neonatal morbidity and mortality. Despite this, most Kenyan mothers initiate ANC care late in pregnancy and attend fewer than the recommended visits. METHODS: We used survey data from 6,200 pregnant women across six districts in western Kenya to understand demand-side factors related to use of ANC. Bayesian multi-level models were developed to explore the relative importance of individual, household, and village-level factors in relation to ANC use. RESULTS: There is significant spatial autocorrelation of ANC attendance in three of the six districts and considerable heterogeneity in factors related to ANC use between districts. Working outside the home limited ANC attendance. Maternal age, the number of small children in the household, and ownership of livestock were important in some districts, but not all. The village-level proportion of women of child-bearing age who were pregnant was significantly correlated with ANC use in three of the six districts. Geographic distance to health facilities and the type of the nearest facility were not correlated with ANC use. After incorporating individual, household, and village-level covariates, no residual spatial autocorrelation remained in the outcome. CONCLUSIONS: ANC attendance was consistently low across all the districts, but factors related to poor attendance varied. This heterogeneity is expected for an outcome that is highly influenced by socio-cultural values and local context. Interventions to improve use of ANC must be tailored to local context and should include explicit approaches to reach women who work outside the home.
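The abstract reports significant spatial autocorrelation of ANC attendance in three districts but does not name the statistic used; Moran's I is a common choice for village-level outcomes, and a pure-Python sketch of it (with a hypothetical chain of six villages as illustration) looks like this:

```python
def morans_i(values, weights):
    """Moran's I for spatial autocorrelation of a village-level outcome.
    values: one outcome value per village (e.g., ANC attendance rate).
    weights: n x n list of lists; weights[i][j] > 0 if villages i, j are neighbours."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]                      # deviations from the mean
    num = sum(weights[i][j] * z[i] * z[j]
              for i in range(n) for j in range(n))      # cross-products of neighbours
    w_sum = sum(weights[i][j] for i in range(n) for j in range(n))
    den = sum(zi * zi for zi in z)
    return (n / w_sum) * (num / den)

# Hypothetical clustered pattern along a chain of six villages:
values = [1, 1, 1, 0, 0, 0]
weights = [[1 if abs(i - j) == 1 else 0 for j in range(6)] for i in range(6)]
i_stat = morans_i(values, weights)  # positive: similar villages are adjacent
```

Values near +1 indicate that similar villages cluster together, values near 0 indicate no spatial pattern, and negative values indicate that neighbouring villages tend to differ; significance is usually assessed by permutation.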