ABSTRACT
BACKGROUND: There is limited evidence on the use of high-sensitivity C-reactive protein (hsCRP) as a biomarker for selecting patients for advanced cardiovascular (CV) therapies in the modern era. The prognostic value of mildly elevated hsCRP beyond troponin in a large real-world cohort of unselected patients presenting with suspected acute coronary syndrome (ACS) is unknown. We evaluated whether a mildly elevated hsCRP (up to 15 mg/L) was associated with mortality risk, beyond troponin level, in patients with suspected ACS. METHODS AND FINDINGS: We conducted a retrospective cohort study based on the National Institute for Health Research Health Informatics Collaborative data of 257,948 patients with suspected ACS who had a troponin measured at 5 cardiac centres in the United Kingdom between 2010 and 2017. Patients were divided into 4 hsCRP groups (<2, 2 to 4.9, 5 to 9.9, and 10 to 15 mg/L). The main outcome measure was mortality within 3 years of index presentation. The association between hsCRP levels and all-cause mortality was assessed using multivariable Cox regression analysis adjusted for age, sex, haemoglobin, white cell count (WCC), platelet count, creatinine, and troponin. Following the exclusion criteria, there were 102,337 patients included in the analysis (hsCRP <2 mg/L (n = 38,390), 2 to 4.9 mg/L (n = 27,397), 5 to 9.9 mg/L (n = 26,957), and 10 to 15 mg/L (n = 9,593)). On multivariable Cox regression analysis, there was a positive and graded relationship between hsCRP level and mortality at baseline, which remained at 3 years (hazard ratio (HR) [95% CI] of 1.32 [1.18 to 1.48] for those with hsCRP 2.0 to 4.9 mg/L, and 1.40 [1.26 to 1.57] and 2.00 [1.75 to 2.28] for those with hsCRP 5 to 9.9 mg/L and 10 to 15 mg/L, respectively). This relationship was independent of troponin in all suspected ACS patients and was further verified in those who were confirmed to have an ACS diagnosis by clinical coding. The main limitation of our study is that we did not have data on underlying cause of death; however, the exclusion of those with abnormal WCC or hsCRP levels >15 mg/L makes it unlikely that sepsis was a major contributor. CONCLUSIONS: These multicentre, real-world data from a large cohort of patients with suspected ACS suggest that mildly elevated hsCRP (up to 15 mg/L) may be a clinically meaningful prognostic marker beyond troponin and point to its potential utility in selecting patients for novel treatments targeting inflammation. TRIAL REGISTRATION: ClinicalTrials.gov - NCT03507309.
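The hazard ratios above come from a multivariable Cox model with hsCRP as a categorical exposure. As a rough illustration of that type of analysis (not the study's actual code; the lifelines library and all column names are assumptions), a minimal sketch might look like this:

```python
# Minimal sketch of a multivariable Cox model with hsCRP groups (hypothetical column names).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("suspected_acs_cohort.csv")  # hypothetical one-row-per-patient extract

# hsCRP as a categorical exposure; <2 mg/L becomes the reference group after drop_first=True
df["hscrp_group"] = pd.cut(df["hscrp"], bins=[0, 2, 5, 10, 15],
                           labels=["<2", "2-4.9", "5-9.9", "10-15"], right=False)
model_df = pd.get_dummies(
    df[["time_to_death_days", "died", "age", "sex", "haemoglobin", "wcc",
        "platelets", "creatinine", "troponin", "hscrp_group"]],
    columns=["sex", "hscrp_group"], drop_first=True)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="time_to_death_days", event_col="died")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])  # HRs and 95% CIs
```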
Subject(s)
Acute Coronary Syndrome/blood , Acute Coronary Syndrome/mortality , C-Reactive Protein/metabolism , Acute Coronary Syndrome/diagnosis , Aged , Aged, 80 and over , Biomarkers/blood , Cohort Studies , Female , Follow-Up Studies , Humans , Longitudinal Studies , Male , Middle Aged , Mortality/trends , Predictive Value of Tests , Retrospective Studies , Risk Factors , United Kingdom/epidemiology
ABSTRACT
OBJECTIVE: To characterise factors associated with COVID-19 vaccine uptake among people with kidney disease in England. DESIGN: Retrospective cohort study using the OpenSAFELY-TPP platform, performed with the approval of NHS England. SETTING: Individual-level routine clinical data from 24 million people registered at general practices in England that use TPP software. Primary care data were linked directly with COVID-19 vaccine records up to 31 August 2022 and with renal replacement therapy (RRT) status via the UK Renal Registry (UKRR). PARTICIPANTS: A cohort of adults with stage 3-5 chronic kidney disease (CKD) or receiving RRT at the start of the COVID-19 vaccine roll-out was identified based on evidence of reduced estimated glomerular filtration rate (eGFR) or inclusion in the UKRR. MAIN OUTCOME MEASURES: Dose-specific vaccine coverage over time was determined from 1 December 2020 to 31 August 2022. Individual-level factors associated with receipt of a 3-dose or 4-dose vaccine series were explored via Cox proportional hazards models. RESULTS: 992 205 people with stage 3-5 CKD or receiving RRT were included. Cumulative vaccine coverage as of 31 August 2022 was 97.5%, 97.0% and 93.9% for doses 1, 2 and 3, respectively, and 81.9% for dose 4 among individuals with one or more indications for eligibility. Delayed 3-dose vaccine uptake was associated with younger age, minority ethnicity, social deprivation and severe mental illness; these associations were consistent across CKD severity subgroups, dialysis patients and kidney transplant recipients. Similar associations were observed for 4-dose uptake. CONCLUSION: Although high primary vaccine and booster dose coverage has been achieved among people with kidney disease in England, key disparities in vaccine uptake remain across clinical and demographic groups, and 4-dose coverage is suboptimal. Targeted interventions are needed to identify barriers to vaccine uptake among the under-vaccinated subgroups identified in the present study.
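Dose-specific cumulative coverage of the kind reported here is essentially one minus a Kaplan-Meier curve for time to receipt of each dose (the factor analysis was done separately in Cox models). A hedged sketch, assuming the lifelines library and hypothetical column names rather than the OpenSAFELY codebase:

```python
# Sketch of dose-specific cumulative vaccine coverage curves (hypothetical data layout).
import pandas as pd
from lifelines import KaplanMeierFitter

cohort = pd.read_csv("ckd_cohort.csv")  # hypothetical: days from 1 Dec 2020 to each dose, plus event flags

kmf = KaplanMeierFitter()
for dose in ["dose1", "dose2", "dose3"]:
    # follow-up censored at death, deregistration or 31 August 2022
    kmf.fit(cohort[f"days_to_{dose}"], event_observed=cohort[f"received_{dose}"], label=dose)
    coverage = 1 - kmf.survival_function_      # cumulative proportion vaccinated by each day
    print(dose, round(float(coverage.iloc[-1, 0]) * 100, 1), "% by end of follow-up")
```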
Subject(s)
COVID-19 , Kidney Diseases , Kidney Failure, Chronic , Adult , Humans , COVID-19 Vaccines , Cohort Studies , Retrospective Studies , Renal Dialysis , COVID-19/prevention & control , Kidney Failure, Chronic/therapy
ABSTRACT
BACKGROUND: Pesticide self-poisoning is the most commonly used suicide method worldwide, but few studies have investigated the national epidemiology of pesticide suicide in countries where it is a major public health problem. This study aims to investigate geographic variations in pesticide suicide and their impact on the spatial distribution of suicide in Taiwan. METHODS: Smoothed standardized mortality ratios for pesticide suicide (2002-2009) were mapped across Taiwan's 358 districts (median population aged 15 or above = 27 000), and their associations with the size of agricultural workforce were investigated using Bayesian hierarchical models. RESULTS: In 2002-2009 pesticide poisoning was the third most common suicide method in Taiwan, accounting for 13.6% (4913/36 110) of all suicides. Rates were higher in agricultural East and Central Taiwan and lower in major cities. Almost half (47%) of all pesticide suicides occurred in areas where only 13% of Taiwan's population lived. The geographic distribution of overall suicides was more similar to that of pesticide suicides than non-pesticide suicides. Rural-urban differences in suicide were mostly due to pesticide suicide. Areas where a higher proportion of people worked in agriculture showed higher pesticide suicide rates (adjusted rate ratio [ARR] per standard deviation increase in the proportion of agricultural workers = 1.58, 95% Credible Interval [CrI] 1.44-1.74) and overall suicide rates (ARR = 1.06, 95% CrI 1.03-1.10) but lower non-pesticide suicide rates (ARR = 0.91, 95% CrI 0.87-0.95). CONCLUSION: Easy access to pesticides appears to influence the geographic distribution of suicide in Taiwan, highlighting the potential benefits of targeted prevention strategies such as restricting access to highly toxic pesticides.
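The "Bayesian hierarchical models" referred to here are, in outline, Poisson log-linear models for district counts with random effects that smooth the standardized mortality ratios. A generic formulation (our notation; the study's exact priors and spatial structure may differ) is:

```latex
O_i \sim \mathrm{Poisson}(E_i \,\rho_i), \qquad
\log \rho_i = \alpha + \beta x_i + u_i + v_i ,
```

where $O_i$ and $E_i$ are the observed and expected pesticide suicide counts in district $i$, $\rho_i$ is the smoothed SMR, $x_i$ is the standardized proportion of agricultural workers, $u_i$ and $v_i$ are spatially structured and unstructured district random effects, and $\exp(\beta)$ corresponds to the adjusted rate ratio per standard-deviation increase reported above (e.g., 1.58 for pesticide suicide).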
Subject(s)
Occupational Exposure/adverse effects , Suicide/statistics & numerical data , Agriculture , Female , Geographic Information Systems , Humans , Male , Pesticide Residues , Population Density , Risk Factors , Taiwan/epidemiology
ABSTRACT
BACKGROUND: Observational studies of pregnant women in sub-Saharan Africa have shown that low serum vitamin A levels are associated with an increased risk of mother-to-child transmission (MTCT) of HIV. Vitamin A is cheap and easily provided through existing health services in low-income settings. It is thus important to determine the effect of routine supplementation of HIV-positive pregnant or breastfeeding women with this vitamin on the risk of MTCT of HIV, which currently results in more than 1000 new HIV infections each day worldwide. OBJECTIVES: We aimed to assess the effect of antenatal and/or postpartum vitamin A supplementation on the risk of MTCT of HIV, as well as infant and maternal mortality and morbidity. SEARCH STRATEGY: In June 2010 we searched the Cochrane Central Register of Controlled Trials, PubMed, EMBASE, AIDS Education Global Information System, and the WHO International Clinical Trials Registry Platform, and checked reference lists of identified articles for any studies published after the earlier version of this review was updated in 2008. SELECTION CRITERIA: We selected randomised controlled trials conducted in any setting that compared vitamin A supplementation with placebo in known HIV-infected pregnant or breastfeeding women. DATA COLLECTION AND ANALYSIS: At least two authors independently assessed trial eligibility and quality and extracted data. We calculated relative risks (RR) or mean differences (MD), with their 95% confidence intervals (CI), for each study. We conducted meta-analysis using a fixed-effects method (when there was no significant heterogeneity between study results, i.e. P>0.1) or the random-effects method (when there was significant heterogeneity), and report Higgins' I² statistic for all pooled effect measures. MAIN RESULTS: Five randomised controlled trials which enrolled 7,528 HIV-infected women (either during pregnancy or the immediate postpartum period) met our inclusion criteria. These trials were conducted in Malawi, South Africa, Tanzania, and Zimbabwe between 1995 and 2005. We combined the results of these trials and found no evidence that vitamin A supplementation has an effect on the risk of MTCT of HIV (4 trials, 6517 women: RR 1.04, 95% CI 0.87 to 1.24; I² = 68%). However, antenatal vitamin A supplementation significantly improved birth weight (3 trials, 1809 women: MD 89.78, 95% CI 84.73 to 94.83; I² = 33.0%), but there was no evidence of an effect on preterm births (3 trials, 2110 women: RR 0.88, 95% CI 0.65 to 1.19; I² = 58.1%), stillbirths (4 trials, 2855 women: RR 0.99, 95% CI 0.68 to 1.43; I² = 0%), deaths by 24 months (2 trials, 1635 women: RR 1.03, 95% CI 0.88 to 1.20; I² = 0%), postpartum CD4 levels (1 trial, 727 women: MD -4.00, 95% CI -51.06 to 43.06), and maternal death (1 trial, 728 women: RR 0.49, 95% CI 0.04 to 5.37). AUTHORS' CONCLUSIONS: Current best evidence shows that antenatal or postpartum vitamin A supplementation probably has little or no effect on mother-to-child transmission of HIV. According to the GRADE classification, the quality of this evidence is moderate, implying that the true effect of vitamin A supplementation on the risk of mother-to-child transmission of HIV is likely to be close to the findings of this review, but that there is also a possibility that it is substantially different.
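The pooling rule described in the analysis section (inverse-variance fixed-effect pooling when heterogeneity is not significant, with Higgins' I² reported) can be illustrated with a short sketch; the numbers below are invented and are not the review's data:

```python
# Inverse-variance (fixed-effect) pooling of log relative risks, with Cochran's Q and Higgins' I^2.
import numpy as np

log_rr = np.array([0.10, -0.05, 0.20, 0.02])   # hypothetical per-trial log relative risks
se = np.array([0.12, 0.15, 0.10, 0.20])        # hypothetical standard errors

w = 1 / se**2                                  # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

q = np.sum(w * (log_rr - pooled) ** 2)         # Cochran's Q
i2 = max(0.0, (q - (len(log_rr) - 1)) / q) * 100   # Higgins' I^2 as a percentage

rr, lo, hi = np.exp([pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
print(f"Pooled RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}); I^2 = {i2:.0f}%")
```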
Subject(s)
HIV Infections/transmission , Infectious Disease Transmission, Vertical/prevention & control , Pregnancy Complications, Infectious , Vitamin A Deficiency/complications , Vitamin A/administration & dosage , Vitamins/administration & dosage , Female , HIV Infections/prevention & control , Humans , Infant, Newborn , Pregnancy , Randomized Controlled Trials as Topic , Vitamin A Deficiency/drug therapy
ABSTRACT
Randomized clinical trials underpin evidence-based clinical practice, but flaws in their conduct may lead to biased estimates of intervention effects and hence invalid treatment recommendations. The main approach to the empirical study of bias is to collate a number of meta-analyses and, within each, compare the results of trials with and without a methodological characteristic such as blinding of participants and health professionals. Estimated within-meta-analysis differences are combined across meta-analyses, leading to an estimate of mean bias. Such "meta-epidemiological" studies are published in increasing numbers and have the potential to inform trial design, assessment of risk of bias, and reporting guidelines. However, their interpretation is complicated by issues of confounding, imprecision, and applicability. We developed a guide for interpreting meta-epidemiological studies, illustrated using MetaBLIND, a large study on the impact of blinding. Applying generally accepted principles of research methodology to meta-epidemiology, we framed 10 questions covering the main issues to consider when interpreting results of such studies, including risk of systematic error, risk of random error, issues related to heterogeneity, and theoretical plausibility. We suggest that readers of a meta-epidemiological study reflect comprehensively on the research question posed in the study, whether an experimental intervention was unequivocally identified for all included trials, the risk of misclassification of the trial characteristic, and the risk of confounding, i.e. the adequacy of any adjustment for the likely confounders. We hope that our guide to interpretation of results of meta-epidemiological studies is helpful for readers of such studies.
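The core computation in studies of this kind can be sketched simply: within each meta-analysis, express the difference between trials with and without the characteristic (e.g., blinding) as a ratio of odds ratios, then combine those within-meta-analysis differences across meta-analyses. The fixed-effect sketch below uses invented inputs and is far simpler than MetaBLIND's actual analysis:

```python
# Simplified meta-epidemiological pooling: within-meta-analysis ratios of odds ratios (ROR),
# combined across meta-analyses by inverse-variance weighting (all inputs invented).
import numpy as np

# Per meta-analysis: (pooled log OR, SE) from unblinded trials and from blinded trials.
unblinded = [(-0.45, 0.15), (-0.30, 0.20), (-0.60, 0.25)]
blinded = [(-0.35, 0.12), (-0.25, 0.18), (-0.40, 0.22)]

log_ror = np.array([lu - lb for (lu, _), (lb, _) in zip(unblinded, blinded)])
var_ror = np.array([su**2 + sb**2 for (_, su), (_, sb) in zip(unblinded, blinded)])

w = 1 / var_ror
mean = np.sum(w * log_ror) / np.sum(w)          # mean bias on the log OR scale
se = np.sqrt(1 / np.sum(w))
print(f"Mean ROR {np.exp(mean):.2f} "
      f"(95% CI {np.exp(mean - 1.96 * se):.2f} to {np.exp(mean + 1.96 * se):.2f})")
```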
Subject(s)
Bias , Epidemiologic Studies , Meta-Analysis as Topic , Empirical Research , Evidence-Based Practice , Humans , Randomized Controlled Trials as Topic , Research Design
ABSTRACT
INTRODUCTION: HIV-1 infection leads to chronic inflammation and to an increased risk of non-AIDS mortality. Our objective was to determine whether AIDS-defining events (ADEs) were associated with increased overall and cause-specific non-AIDS related mortality after antiretroviral therapy (ART) initiation. METHODS: We included HIV treatment-naïve adults from the Antiretroviral Therapy Cohort Collaboration (ART-CC) who initiated ART from 1996 to 2014. Causes of death were assigned using the Coding Causes of Death in HIV (CoDe) protocol. The adjusted hazard ratio (aHR) for overall and cause-specific non-AIDS mortality among those with an ADE (all ADEs, tuberculosis (TB), Pneumocystis jiroveci pneumonia (PJP), and non-Hodgkin's lymphoma (NHL)) compared to those without an ADE was estimated using a marginal structural model. RESULTS: The adjusted hazard of overall non-AIDS mortality was higher among those with any ADE compared to those without any ADE (aHR 2.21, 95% confidence interval (CI) 2.00 to 2.43). The adjusted hazard of each of the cause-specific non-AIDS related deaths was higher among those with any ADE compared to those without, except metabolic deaths (malignancy aHR 2.59 (95% CI 2.13 to 3.14), accident/suicide/overdose aHR 1.37 (95% CI 1.05 to 1.79), cardiovascular aHR 1.95 (95% CI 1.54 to 2.48), infection aHR (95% CI 1.68 to 2.81), hepatic aHR 2.09 (95% CI 1.61 to 2.72), respiratory aHR 4.28 (95% CI 2.67 to 6.88), renal aHR 5.81 (95% CI 2.69 to 12.56) and central nervous system aHR 1.53 (95% CI 1.18 to 5.44)). The risk of overall and cause-specific non-AIDS mortality differed depending on the specific ADE of interest (TB, PJP, NHL). CONCLUSIONS: In this large multi-centre cohort collaboration with standardized assignment of causes of death, non-AIDS mortality was twice as high among patients with an ADE compared to those without an ADE. However, non-AIDS related mortality after an ADE depended on the ADE of interest. Although there may be unmeasured confounders, these findings suggest that a common pathway may be independently driving both ADEs and NADE mortality. While prevention of ADEs may reduce subsequent death due to NADEs following ART initiation, modification of risk factors for NADE mortality remains important after ADE survival.
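Marginal structural models of the kind used here are typically estimated with inverse-probability-of-treatment weights. The fragment below shows the idea for a simplified point-exposure version only (stabilized weights from a logistic model for ADE, then a weighted Cox model); the ART-CC analysis used time-updated exposures and covariates, and every library and column name here is an assumption:

```python
# Simplified point-exposure sketch of stabilized inverse-probability weighting for an MSM.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("artcc_cohort.csv")  # hypothetical: ade (0/1), baseline covariates, follow-up, outcome

# Conditional probability of experiencing an ADE given baseline covariates
ps = smf.logit("ade ~ age + sex + baseline_cd4 + year_of_art_start", data=df).fit().predict(df)

# Stabilized weights: marginal exposure probability over the conditional probability
p_ade = df["ade"].mean()
df["sw"] = df["ade"] * (p_ade / ps) + (1 - df["ade"]) * ((1 - p_ade) / (1 - ps))

cph = CoxPHFitter()
cph.fit(df[["followup_years", "non_aids_death", "ade", "sw"]],
        duration_col="followup_years", event_col="non_aids_death",
        weights_col="sw", robust=True)
print(cph.summary.loc["ade", ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```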
Subject(s)
Acquired Immunodeficiency Syndrome/drug therapy , Anti-HIV Agents/therapeutic use , Acquired Immunodeficiency Syndrome/complications , Adult , Cohort Studies , Female , Humans , Lymphoma, Non-Hodgkin/mortality , Male , Middle Aged , Pneumonia, Pneumocystis/mortality , Tuberculosis/mortality
ABSTRACT
OBJECTIVE: Little is known about persistence of or recovery from chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) in adolescents. Previous studies have had small sample sizes or short follow-up, or have focused on fatigue rather than CFS/ME or, equivalently, chronic disabling fatigue. This work aimed to describe the epidemiology and natural course of CFS/ME in adolescents aged 13-18 years. DESIGN: Longitudinal follow-up of adolescents enrolled in the Avon Longitudinal Study of Parents and Children. SETTING: Avon, UK. PARTICIPANTS: We identified adolescents who had disabling fatigue of >6 months duration without a known cause at ages 13, 16 and 18 years. We use the term 'chronic disabling fatigue' (CDF) because CFS/ME was not verified by clinical diagnosis. We used multiple imputation to obtain unbiased estimates of prevalence and persistence. RESULTS: The estimated prevalence of CDF was 1.47% (95% CI 1.05% to 1.89%) at age 13, 2.22% (1.67% to 2.78%) at age 16 and 2.99% (2.24% to 3.75%) at age 18. Among adolescents with CDF of 6 months duration at 13 years, 75.3% (64.0% to 86.6%) were not classified as such at age 16. Similar change was observed between 16 and 18 years (75.0% (62.8% to 87.2%)). Of those with CDF at age 13, 8.02% (0.61% to 15.4%) presented with CDF throughout the duration of adolescence. CONCLUSIONS: The prevalence of CDF lasting 6 months or longer (a proxy for clinically diagnosed CFS/ME) increases from 13 to 18 years. However, persistent CDF is rare in adolescents, with approximately 75% recovering after 2-3 years.
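The prevalence estimates above were pooled over multiply imputed datasets. For reference, combining an estimate across $m$ imputations follows Rubin's rules (the standard formulas, not notation taken from the paper):

```latex
\bar{Q} = \frac{1}{m}\sum_{k=1}^{m}\hat{Q}_k, \qquad
T = \bar{W} + \Bigl(1 + \frac{1}{m}\Bigr)B, \qquad
\bar{W} = \frac{1}{m}\sum_{k=1}^{m} W_k, \qquad
B = \frac{1}{m-1}\sum_{k=1}^{m}\bigl(\hat{Q}_k - \bar{Q}\bigr)^2 ,
```

where $\hat{Q}_k$ is, for example, the prevalence estimate from the $k$-th imputed dataset, $W_k$ its within-imputation variance, and $T$ the total variance used to build the confidence intervals.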
Subject(s)
Fatigue Syndrome, Chronic/diagnosis , Adolescent , Educational Status , England/epidemiology , Fatigue Syndrome, Chronic/epidemiology , Female , Follow-Up Studies , Humans , Male , Prevalence , Prognosis , Sex Distribution , Time Factors
ABSTRACT
BACKGROUND: Atrial fibrillation (AF) is a common cardiac arrhythmia that increases the risk of thromboembolic events. Anticoagulation therapy to prevent AF-related stroke has been shown to be cost-effective. A national screening programme for AF may prevent AF-related events, but would involve a substantial investment of NHS resources. OBJECTIVES: To conduct a systematic review of the diagnostic test accuracy (DTA) of screening tests for AF, update a systematic review of comparative studies evaluating screening strategies for AF, develop an economic model to compare the cost-effectiveness of different screening strategies and review observational studies of AF screening to provide inputs to the model. DESIGN: Systematic review, meta-analysis and cost-effectiveness analysis. SETTING: Primary care. PARTICIPANTS: Adults. INTERVENTION: Screening strategies, defined by screening test, age at initial and final screens, screening interval and format of screening (systematic opportunistic screening, in which individuals are offered screening if they consult their general practitioner (GP), or systematic population screening, in which all eligible individuals are invited to screening). MAIN OUTCOME MEASURES: Sensitivity, specificity and diagnostic odds ratios; the odds ratio of detecting new AF cases compared with no screening; and the mean incremental net benefit compared with no screening. REVIEW METHODS: Two reviewers screened the search results, extracted data and assessed the risk of bias. A DTA meta-analysis was performed, and a decision tree and Markov model were used to evaluate the cost-effectiveness of the screening strategies. RESULTS: Diagnostic test accuracy depended on the screening test and how it was interpreted. In general, the screening tests identified in our review had high sensitivity (> 0.9). Systematic population and systematic opportunistic screening strategies were found to be similarly effective, with an estimated 170 individuals needed to be screened to detect one additional AF case compared with no screening. Systematic opportunistic screening was more likely to be cost-effective than systematic population screening, as long as the uptake of opportunistic screening observed in randomised controlled trials translates to practice. Modified blood pressure monitors, photoplethysmography or nurse pulse palpation were more likely to be cost-effective than other screening tests. A screening strategy with an initial screening age of 65 years and repeated screens every 5 years until age 80 years was likely to be cost-effective, provided that compliance with treatment does not decline with increasing age. CONCLUSIONS: A national screening programme for AF is likely to represent a cost-effective use of resources. Systematic opportunistic screening is more likely to be cost-effective than systematic population screening. Nurse pulse palpation or modified blood pressure monitors would be appropriate screening tests, with confirmation by diagnostic 12-lead electrocardiography interpreted by a trained GP, with referral to a specialist in the case of an unclear diagnosis. Implementation strategies to operationalise uptake of systematic opportunistic screening in primary care should accompany any screening recommendations. LIMITATIONS: Many inputs for the economic model relied on a single trial [the Screening for Atrial Fibrillation in the Elderly (SAFE) study] and DTA results were based on a few studies at high risk of bias/of low applicability.
FUTURE WORK: Comparative studies measuring long-term outcomes of screening strategies and DTA studies for new, emerging technologies and to replicate the results for photoplethysmography and GP interpretation of 12-lead electrocardiography in a screening population. STUDY REGISTRATION: This study is registered as PROSPERO CRD42014013739. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
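The cost-effectiveness results rest on a decision tree feeding a discrete-time Markov model. The minimal cohort sketch below only illustrates how such a model accumulates discounted costs and QALYs and how incremental net benefit is computed; the states, probabilities, costs, utilities and threshold are all invented and are not inputs to the SAFE-based model:

```python
# Minimal annual-cycle Markov cohort model and incremental net benefit (all inputs invented).
import numpy as np

states = ["well", "af_detected", "stroke", "dead"]
# Row = current state, column = next state; each row sums to 1 (illustrative values only).
P_no_screen = np.array([[0.93, 0.03, 0.02, 0.02],
                        [0.00, 0.93, 0.04, 0.03],
                        [0.00, 0.00, 0.90, 0.10],
                        [0.00, 0.00, 0.00, 1.00]])
P_screen = P_no_screen.copy()
P_screen[0] = [0.90, 0.06, 0.02, 0.02]   # screening detects more AF...
P_screen[1] = [0.00, 0.95, 0.02, 0.03]   # ...and anticoagulation lowers stroke risk

cost = np.array([0.0, 450.0, 8000.0, 0.0])     # annual cost per state (GBP, invented)
utility = np.array([0.85, 0.80, 0.55, 0.0])    # annual QALYs per state (invented)

def run(P, upfront_cost=0.0, cycles=15, discount=0.035):
    cohort = np.array([1.0, 0.0, 0.0, 0.0])    # whole cohort starts in "well"
    total_cost, total_qaly = upfront_cost, 0.0
    for t in range(cycles):
        cohort = cohort @ P
        d = 1 / (1 + discount) ** (t + 1)
        total_cost += d * (cohort @ cost)
        total_qaly += d * (cohort @ utility)
    return total_cost, total_qaly

wtp = 20000.0                                   # willingness to pay per QALY (GBP, invented)
c0, q0 = run(P_no_screen)
c1, q1 = run(P_screen, upfront_cost=30.0)       # invented per-person screening cost
print("Incremental net benefit:", round(wtp * (q1 - q0) - (c1 - c0), 2))
```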
Subject(s)
Atrial Fibrillation/diagnosis , Mass Screening/economics , Mass Screening/methods , Primary Health Care/economics , Primary Health Care/methods , Aged , Aged, 80 and over , Blood Pressure , Cost-Benefit Analysis , Electrocardiography , Female , Humans , Male , Mass Screening/standards , Models, Econometric , Patient Acceptance of Health Care , Pulse , Quality-Adjusted Life Years , Randomized Controlled Trials as Topic , Sensitivity and Specificity
ABSTRACT
BACKGROUND: Warfarin is effective for stroke prevention in atrial fibrillation (AF), but anticoagulation is underused in clinical care. The risk of venous thromboembolic disease during hospitalisation can be reduced by low-molecular-weight heparin (LMWH): warfarin is the most frequently prescribed anticoagulant for treatment and secondary prevention of venous thromboembolism (VTE). Warfarin-related bleeding is a major reason for hospitalisation for adverse drug effects. Warfarin is cheap but therapeutic monitoring increases treatment costs. Novel oral anticoagulants (NOACs) have more rapid onset and offset of action than warfarin, and more predictable dosing requirements. OBJECTIVE: To determine the best oral anticoagulant/s for prevention of stroke in AF and for primary prevention, treatment and secondary prevention of VTE. DESIGN: Four systematic reviews, network meta-analyses (NMAs) and cost-effectiveness analyses (CEAs) of randomised controlled trials. SETTING: Hospital (VTE primary prevention and acute treatment) and primary care/anticoagulation clinics (AF and VTE secondary prevention). PARTICIPANTS: Patients eligible for anticoagulation with warfarin (stroke prevention in AF, acute treatment or secondary prevention of VTE) or LMWH (primary prevention of VTE). INTERVENTIONS: NOACs, warfarin and LMWH, together with other interventions (antiplatelet therapy, placebo) evaluated in the evidence network. MAIN OUTCOME MEASURES: Efficacy: stroke, symptomatic VTE, symptomatic deep-vein thrombosis and symptomatic pulmonary embolism. Safety: major bleeding, clinically relevant bleeding and intracranial haemorrhage. We also considered myocardial infarction and all-cause mortality and evaluated cost-effectiveness. DATA SOURCES: MEDLINE and PREMEDLINE In-Process & Other Non-Indexed Citations, EMBASE and The Cochrane Library, reference lists of published NMAs and trial registries. The stroke prevention in AF review search was run on 12 March 2014 and updated on 15 September 2014, and covered the period 2010 to September 2014. The search for the three reviews in VTE was run on 19 March 2014, updated on 15 September 2014, and covered the period 2008 to September 2014. REVIEW METHODS: Two reviewers screened search results, extracted and checked data, and assessed risk of bias. For each outcome we conducted standard meta-analysis and NMA. We evaluated cost-effectiveness using discrete-time Markov models. RESULTS: Apixaban (Eliquis®, Bristol-Myers Squibb, USA; Pfizer, USA) [5 mg bd (twice daily)] was ranked among the best interventions for stroke prevention in AF, and had the highest expected net benefit. Edoxaban (Lixiana®, Daiichi Sankyo, Japan) [60 mg od (once daily)] was ranked second for major bleeding and all-cause mortality. Neither the clinical effectiveness analysis nor the CEA provided strong evidence that NOACs should replace postoperative LMWH in primary prevention of VTE. For acute treatment and secondary prevention of VTE, we found little evidence that NOACs offer an efficacy advantage over warfarin, but the risk of bleeding complications was lower for some NOACs than for warfarin. For a willingness-to-pay threshold of > £5000, apixaban (5 mg bd) had the highest expected net benefit for acute treatment of VTE.
Aspirin or no pharmacotherapy was likely to be the most cost-effective intervention for secondary prevention of VTE: our results suggest that it is not cost-effective to prescribe NOACs or warfarin for this indication. CONCLUSIONS: NOACs have advantages over warfarin in patients with AF, but we found no strong evidence that they should replace warfarin or LMWH in primary prevention, treatment or secondary prevention of VTE. LIMITATIONS: These relate mainly to shortfalls in the primary data: in particular, there were no head-to-head comparisons between different NOAC drugs. FUTURE WORK: Calculating the expected value of sample information to clarify whether or not it would be justifiable to fund one or more head-to-head trials. STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005324, CRD42013005331 and CRD42013005330. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
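Because the primary data contained no head-to-head NOAC trials (see LIMITATIONS), the network meta-analyses rank treatments largely through indirect comparisons via common comparators such as warfarin. The basic (Bucher) form of such an indirect estimate, given here only as a reminder of the principle rather than as the study's method, is:

```latex
\log \widehat{\mathrm{OR}}^{\text{ind}}_{AB}
  = \log \widehat{\mathrm{OR}}_{AC} - \log \widehat{\mathrm{OR}}_{BC},
\qquad
\operatorname{Var}\bigl(\log \widehat{\mathrm{OR}}^{\text{ind}}_{AB}\bigr)
  = \operatorname{Var}\bigl(\log \widehat{\mathrm{OR}}_{AC}\bigr)
  + \operatorname{Var}\bigl(\log \widehat{\mathrm{OR}}_{BC}\bigr),
```

where $A$ and $B$ are two NOACs and $C$ is the common comparator (e.g., warfarin); a full NMA generalizes this to the whole evidence network.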
Subject(s)
Anticoagulants/administration & dosage , Atrial Fibrillation/diagnosis , Mass Screening/economics , Mass Screening/methods , Stroke/prevention & control , Venous Thromboembolism/prevention & control , Age Distribution , Aged , Aged, 80 and over , Blood Pressure , Cost-Benefit Analysis , Electrocardiography , Female , Humans , Male , Markov Chains , Mass Screening/standards , Models, Econometric , Network Meta-Analysis , Observational Studies as Topic , Primary Health Care , Pulse , Secondary Prevention , Sensitivity and Specificity , State Medicine/economics , United Kingdom
ABSTRACT
INTRODUCTION: Response to antiretroviral therapy (ART) among individuals infected with HIV-2 is poorly described. We compared the immunological response among patients treated with three nucleoside reverse-transcriptase inhibitors (NRTIs) to boosted protease inhibitor (PI) and unboosted PI-based regimens in West Africa. METHODS: This prospective cohort study enrolled treatment-naïve HIV-2-infected patients within the International Epidemiological Databases to Evaluate AIDS collaboration in West Africa. We used mixed models to compare the CD4 count response to treatment over 12 months between regimens. RESULTS: Of 422 HIV-2-infected patients, 285 (67.5%) were treated with a boosted PI-based regimen, 104 (24.6%) with an unboosted PI-based regimen and 33 (7.8%) with three NRTIs. Treatment groups were comparable with regard to gender (54.5% female) and median age at ART initiation (45.3 years; interquartile range 38.3 to 51.8). Treatment groups differed by clinical stage (21.2%, 16.8% and 17.3% at CDC Stage C or World Health Organization Stage IV for the triple NRTI, boosted PI and unboosted PI groups, respectively, p=0.02), median length of follow-up (12.9, 17.7 and 44.0 months for the triple NRTI, the boosted PI and the unboosted PI groups, respectively, p<0.001) and baseline median CD4 count (192, 173 and 129 cells/µl in the triple NRTI, the boosted PI and the unboosted PI-based regimen groups, respectively, p=0.003). CD4 count recovery at 12 months was higher for patients treated with boosted PI-based regimens than those treated with three NRTIs or with unboosted PI-based regimens (191 cells/µl, 95% CI 142 to 241; 110 cells/µl, 95% CI 29 to 192; 133 cells/µl, 95% CI 80 to 186, respectively, p=0.004). CONCLUSIONS: In this observational study using African data, boosted PI-containing regimens had better immunological response compared to triple NRTI combinations and unboosted PI-based regimens at 12 months. A randomized clinical trial is still required to determine the best initial regimen for treating HIV-2 infected patients.
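The 12-month CD4 comparison uses mixed models for repeated measurements. A hedged sketch of such a model, assuming statsmodels and hypothetical column and level names rather than the IeDEA analysis code:

```python
# Sketch of a linear mixed model for CD4 recovery by regimen group (hypothetical columns/levels).
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("hiv2_cd4_long.csv")  # hypothetical: one row per patient visit over 12 months

# Random intercept per patient; the regimen-by-time interaction captures differences in CD4 recovery.
model = smf.mixedlm("cd4 ~ months * C(regimen, Treatment('triple_nrti'))",
                    data=long_df, groups=long_df["patient_id"])
result = model.fit()
print(result.summary())
```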
Subject(s)
HIV Infections/drug therapy , HIV-2 , Adult , CD4 Lymphocyte Count , Cohort Studies , Female , HIV Infections/immunology , Humans , Male , Middle Aged , Prospective Studies
ABSTRACT
BACKGROUND: It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. OBJECTIVES: To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. DESIGN: Multicentre, prospective diagnostic cohort study. SETTING AND PARTICIPANTS: Children < 5 years old presenting to primary care with an acute illness and/or new urinary symptoms. METHODS: One hundred and seven clinical characteristics (index tests) were recorded from the child's past medical history, symptoms, physical examination signs and urine dipstick test. Prior to dipstick results, clinician opinion of UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10⁵ colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using area under receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinical diagnosis' AUROC. Decision-analytic models were used to identify the optimal urine sampling strategy compared with 'clinical judgement'. RESULTS: A total of 7163 children were recruited, of whom 50% were female and 49% were < 2 years old. Culture results were available for 5017 (70%); 2740 children provided clean-catch samples, 94% of whom were ≥ 2 years old, with 2.2% meeting the UTI definition. Among these, 'clinical diagnosis' correctly identified 46.6% of positive cultures, with 94.7% specificity and an AUROC of 0.77 (95% CI 0.71 to 0.83). Four symptoms, three signs and three dipstick results were independently associated with UTI, with an AUROC (95% CI; bootstrap-validated AUROC) of 0.89 (0.85 to 0.95; validated 0.88) for symptoms and signs, increasing to 0.93 (0.90 to 0.97; validated 0.90) with dipstick results. Nappy pad samples were provided by the other 2277 children, of whom 82% were < 2 years old and 1.3% met the UTI definition. 'Clinical diagnosis' correctly identified 13.3% of positive cultures, with 98.5% specificity and an AUROC of 0.63 (95% CI 0.53 to 0.72). Four symptoms and two dipstick results were independently associated with UTI, with an AUROC of 0.81 (0.72 to 0.90; validated 0.78) for symptoms, increasing to 0.87 (0.80 to 0.94; validated 0.82) with the dipstick findings. A high specificity threshold for the clean-catch model was more accurate and less costly than, and as effective as, clinical judgement. The additional diagnostic utility of dipstick testing was offset by its costs. The cost-effectiveness of the nappy pad model was not clear-cut. CONCLUSIONS: Clinicians should prioritise the use of clean-catch sampling as symptoms and signs can cost-effectively improve the identification of UTI in young children where clean catch is possible. Dipstick testing can improve targeting of antibiotic treatment, but at a higher cost than waiting for a laboratory result.
Future research is needed to distinguish pathogens from contaminants, to assess the impact of the clean-catch algorithm on patient outcomes, and to evaluate the cost-effectiveness of presumptive versus dipstick-guided versus laboratory-guided antibiotic treatment. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
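The index-test models in this study are multivariable logistic regressions whose discrimination is summarized as an AUROC with bootstrap validation. A minimal sketch of that workflow, assuming scikit-learn and hypothetical variable names (it is not the study's code, and uses a simple optimism-corrected bootstrap):

```python
# Sketch: logistic prediction rule for UTI and an optimism-corrected (bootstrap-validated) AUROC.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("clean_catch_cohort.csv")  # hypothetical extract of symptoms, signs and culture result
predictors = ["pain_passing_urine", "smelly_urine", "darker_urine", "severe_illness_sign"]
X, y = df[predictors].to_numpy(), df["uti_culture_positive"].to_numpy()

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

rng = np.random.default_rng(0)
optimism = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))           # bootstrap resample
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])  # performance on the resample
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])            # performance on original data
    optimism.append(auc_boot - auc_orig)

print(f"apparent AUROC {apparent_auc:.2f}, bootstrap-validated {apparent_auc - np.mean(optimism):.2f}")
```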
Subject(s)
Algorithms , Primary Health Care/methods , Urinary Tract Infections/diagnosis , Urine Specimen Collection/economics , Urine Specimen Collection/methods , Child, Preschool , Cost-Benefit Analysis , Female , Humans , Infant , Male , Prospective Studies , ROC Curve , Sensitivity and Specificity , Single-Blind Method , Urine Specimen Collection/standards
ABSTRACT
BACKGROUND: The added diagnostic utility of nappy pad urine samples and the proportion that are contaminated are unknown. AIM: To develop a clinical prediction rule for the diagnosis of urinary tract infection (UTI) based on sampling using the nappy pad method. DESIGN AND SETTING: Acutely unwell children <5 years presenting to 233 UK primary care sites. METHOD: Logistic regression to identify independent associations of symptoms, signs, and urine dipstick test results with UTI; diagnostic utility quantified as area under the receiver operating characteristic curve (AUROC). Nappy pad rule characteristics, AUROC and contamination rates were compared with findings from clean-catch samples. RESULTS: Nappy pad samples were obtained from 3205 children (82% aged <2 years; 48% female); culture results were available for 2277 (71.0%), and 30 (1.3%) had a UTI on culture. Female sex, smelly urine, darker urine, and the absence of nappy rash were independently associated with a UTI, with an internally validated coefficient-based model AUROC of 0.81 (0.87 for clean-catch), which increased to 0.87 (0.90 for clean-catch) with the addition of dipstick results. GPs' 'working diagnosis' had an AUROC of 0.63 (95% confidence interval [CI] = 0.53 to 0.72). A total of 12.2% of nappy pad and 1.8% of clean-catch samples were 'frankly contaminated' (risk ratio 6.66; 95% CI = 4.95 to 8.96; P<0.001). CONCLUSION: Nappy pad urine culture results, with features that can be reported by parents and dipstick tests, can be clinically useful, but are less accurate and more often contaminated compared with clean-catch urine culture.
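As a quick arithmetic check of the contamination comparison, the reported risk ratio follows directly from the two proportions:

```latex
\mathrm{RR} \approx \frac{0.122}{0.018} \approx 6.8 ,
```

which is consistent with the reported 6.66 (95% CI 4.95 to 8.96) once unrounded proportions are used.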
Subject(s)
Diapers, Infant/statistics & numerical data , Primary Health Care , Specimen Handling/methods , Urinary Tract Infections/diagnosis , Child, Preschool , Female , Humans , Infant , Male , Prospective Studies , United Kingdom , Urinalysis , Urinary Tract Infections/urine
ABSTRACT
Twin studies have long been recognized for their value in learning about the aetiology of disease and specifically for their potential for separating genetic effects from environmental effects. The recent upsurge of interest in life-course epidemiology and the study of developmental influences on later health has provided a new impetus to study twins as a source of unique insights. Twins are of special interest because they provide naturally matched pairs where the confounding effects of a large number of potentially causal factors (such as maternal nutrition or gestation length) may be removed by comparisons between twins who share them. The traditional tool of epidemiological 'risk factor analysis' is the regression model, but it is not straightforward to transfer standard regression methods to twin data, because the analysis needs to reflect the paired structure of the data, which induces correlation between twins. This paper reviews the use of more specialized regression methods for twin data, based on generalized least squares or linear mixed models, and explains the relationship between these methods and the commonly used approach of analysing within-twin-pair difference values. Methods and issues of interpretation are illustrated using an example from a recent study of the association between birth weight and cord blood erythropoietin. We focus on the analysis of continuous outcome measures but review additional complexities that arise with binary outcomes. We recommend the use of a general model that includes separate regression coefficients for within-twin-pair and between-pair effects, and provide guidelines for the interpretation of estimates obtained under this model.
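The recommended model with separate within-pair and between-pair coefficients can be written, for twin $i$ in pair $j$ (a standard formulation; the notation is ours, not the paper's), as:

```latex
y_{ij} = \alpha + \beta_B \bar{x}_j + \beta_W \bigl(x_{ij} - \bar{x}_j\bigr) + a_j + e_{ij} ,
```

where $\bar{x}_j$ is the pair mean of the exposure (e.g., birth weight), $a_j$ is a shared pair effect that induces the within-twin-pair correlation, and $e_{ij}$ is residual error. Here $\beta_W$ corresponds to the within-twin-pair difference analysis, in which shared factors such as maternal nutrition and gestation length cancel out, while $\beta_B$ captures between-pair associations that remain open to confounding by those shared factors.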
Subject(s)
Regression Analysis , Twin Studies as Topic , Birth Weight/physiology , Data Interpretation, Statistical , Erythropoietin/blood , Guidelines as Topic , Humans , Least-Squares Analysis , Likelihood Functions , Linear Models
ABSTRACT
OBJECTIVE: To compare the performance of MEDLINE searches using index test(s) and target condition (subject searches) with the same searches combined with methodological filters for test accuracy studies. STUDY DESIGN AND SETTING: We derived a reference set of 506 test accuracy studies indexed on MEDLINE from seven systematic reviews that conducted extensive searches. We compared the performance of "subject" with "filtered" searches (the same searches combined with each of 22 filters). Outcome measures were the number of reference set records missed, sensitivity, number needed to read (NNR), and precision (number of reference set studies identified for every 100 records screened). RESULTS: Subject searches missed 47 of the 506 reference studies; filtered searches missed an additional 21 to 241 studies. Sensitivity was 91% for subject searches and ranged from 43% to 87% for filtered searches. The NNR was 56 (precision 2%) for subject searches and ranged from 7 to 51 (precision 2-15%) for filtered searches. CONCLUSIONS: Filtered searches miss additional studies compared with searches based on index test and target condition. None of the existing filters provided reductions in the NNR for acceptable sensitivity; currently available methodological filters should not be used to identify studies for inclusion in test accuracy reviews.
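The headline figures follow directly from the definitions used in the abstract; for the subject searches, for example:

```latex
\text{sensitivity} = \frac{506 - 47}{506} \approx 91\%, \qquad
\text{precision} = \frac{1}{\mathrm{NNR}} = \frac{1}{56} \approx 1.8\% \approx 2\% .
```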