Results 1 - 19 of 19
1.
AIDS; 2024 May 08.
Article in English | MEDLINE | ID: mdl-38742863

ABSTRACT

OBJECTIVE: Interruptions in care of people with HIV (PWH) on antiretroviral therapy (ART) are associated with adverse outcomes, but most studies have relied on composite outcomes. We investigated whether mortality risk following care interruptions differed from mortality risk after first starting ART. DESIGN: Collaboration of 18 European and North American HIV observational cohort studies of adults with HIV starting ART between 2004 and 2019. METHODS: Care interruptions were defined as gaps in contact of ≥365 days, with a subsequent return to care (distinct from loss to follow-up), or ≥270 days and ≥545 days in sensitivity analyses. Follow-up time was allocated to no/pre-interruption or post-interruption follow-up groups. We used Cox regression to compare hazards of mortality between care interruption groups, adjusting for time-updated demographic and clinical characteristics and biomarkers upon ART initiation or re-initiation of care. RESULTS: Of 89,197 PWH, 83.4% were male and median age at ART start was 39 years (interquartile range [IQR]: 31-48). 8,654 PWH (9.7%) had ≥1 care interruption; 10,913 episodes of follow-up following a care interruption were included. There were 6,104 deaths in 536,334 person-years, a crude mortality rate of 11.4 (95% CI: 11.1-11.7) per 1000 person-years. The adjusted mortality hazard ratio (HR) for the post-interruption group was 1.72 (95% CI: 1.57-1.88) compared with the no/pre-interruption group. Results were robust to sensitivity analyses assuming ≥270-day (HR 1.49, 95% CI: 1.40-1.60) and ≥545-day (HR 1.67, 95% CI: 1.48-1.88) interruptions. CONCLUSIONS: Mortality was higher among PWH re-initiating care following an interruption than after first starting ART, indicating the importance of uninterrupted care.
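
The analysis described above can be sketched roughly as follows: a crude mortality rate per 1,000 person-years plus a Cox model comparing post-interruption with no/pre-interruption follow-up. The episode-level data frame, its column names and the simulated values are hypothetical rather than the ART-CC data, and the study's time-updated covariates would in practice need a counting-process (start/stop) layout rather than the one-row-per-episode simplification used here.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000  # hypothetical follow-up episodes, one row per episode

# Simulated episode-level data: years of follow-up, death indicator,
# a post-interruption flag and two baseline covariates (all invented).
df = pd.DataFrame({
    "years": rng.exponential(5.0, n),
    "death": rng.binomial(1, 0.08, n),
    "post_interruption": rng.binomial(1, 0.10, n),
    "age_at_start": rng.normal(39, 10, n),
    "male": rng.binomial(1, 0.83, n),
})

# Crude mortality rate per 1000 person-years, as in the abstract.
rate = 1000 * df["death"].sum() / df["years"].sum()
print(f"crude rate: {rate:.1f} per 1000 person-years")

# Hazard ratio for post-interruption vs no/pre-interruption follow-up,
# adjusted for the other columns in the data frame.
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="death")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```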

3.
BMJ Open; 13(1): e066164, 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36720568

ABSTRACT

OBJECTIVE: To characterise factors associated with COVID-19 vaccine uptake among people with kidney disease in England. DESIGN: Retrospective cohort study using the OpenSAFELY-TPP platform, performed with the approval of NHS England. SETTING: Individual-level routine clinical data from 24 million people registered at general practices in England that use TPP software. Primary care data were linked directly with COVID-19 vaccine records up to 31 August 2022 and with renal replacement therapy (RRT) status via the UK Renal Registry (UKRR). PARTICIPANTS: A cohort of adults with stage 3-5 chronic kidney disease (CKD) or receiving RRT at the start of the COVID-19 vaccine roll-out was identified based on evidence of reduced estimated glomerular filtration rate (eGFR) or inclusion in the UKRR. MAIN OUTCOME MEASURES: Dose-specific vaccine coverage over time was determined from 1 December 2020 to 31 August 2022. Individual-level factors associated with receipt of a 3-dose or 4-dose vaccine series were explored via Cox proportional hazards models. RESULTS: 992,205 people with stage 3-5 CKD or receiving RRT were included. Cumulative vaccine coverage as of 31 August 2022 was 97.5%, 97.0% and 93.9% for doses 1, 2 and 3, respectively, and 81.9% for dose 4 among individuals with one or more indications for eligibility. Delayed 3-dose vaccine uptake was associated with younger age, minority ethnicity, social deprivation and severe mental illness; these associations were consistent across CKD severity subgroups, dialysis patients and kidney transplant recipients. Similar associations were observed for 4-dose uptake. CONCLUSION: Although high primary vaccine and booster dose coverage has been achieved among people with kidney disease in England, key disparities in vaccine uptake remain across clinical and demographic groups, and 4-dose coverage is suboptimal. Targeted interventions are needed to identify barriers to vaccine uptake among the under-vaccinated subgroups identified in the present study.
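
Dose-specific cumulative coverage of the kind reported above can be read off a Kaplan-Meier-type curve for time to vaccination, censoring people still unvaccinated at the end of follow-up. The sketch below uses the lifelines library with simulated times; the day counts and event distribution are invented, and the study's factor analysis used Cox models of the same form as in the previous example rather than this descriptive curve.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
n = 5000  # hypothetical cohort members

# Days from 1 December 2020 to third dose; people not vaccinated by the end
# of follow-up (31 August 2022, roughly day 639) are censored at day 639.
t = rng.gamma(shape=4.0, scale=90.0, size=n)
received = t <= 639
t = np.minimum(t, 639)

kmf = KaplanMeierFitter()
kmf.fit(durations=t, event_observed=received)

# Coverage at a given day = 1 - KM "survival" of remaining unvaccinated.
for day in (180, 365, 639):
    print(day, f"coverage = {1 - kmf.predict(day):.1%}")
```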


Subjects
COVID-19, Kidney Diseases, Chronic Kidney Failure, Adult, Humans, COVID-19 Vaccines, Cohort Studies, Retrospective Studies, Renal Dialysis, COVID-19/prevention & control, Chronic Kidney Failure/therapy
4.
PLoS Med; 19(2): e1003911, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35192610

ABSTRACT

BACKGROUND: There is limited evidence on the use of high-sensitivity C-reactive protein (hsCRP) as a biomarker for selecting patients for advanced cardiovascular (CV) therapies in the modern era. The prognostic value of mildly elevated hsCRP beyond troponin in a large real-world cohort of unselected patients presenting with suspected acute coronary syndrome (ACS) is unknown. We evaluated whether a mildly elevated hsCRP (up to 15 mg/L) was associated with mortality risk, beyond troponin level, in patients with suspected ACS. METHODS AND FINDINGS: We conducted a retrospective cohort study based on the National Institute for Health Research Health Informatics Collaborative data of 257,948 patients with suspected ACS who had a troponin measured at 5 cardiac centres in the United Kingdom between 2010 and 2017. Patients were divided into 4 hsCRP groups (<2, 2 to 4.9, 5 to 9.9, and 10 to 15 mg/L). The main outcome measure was mortality within 3 years of index presentation. The association between hsCRP levels and all-cause mortality was assessed using multivariable Cox regression analysis adjusted for age, sex, haemoglobin, white cell count (WCC), platelet count, creatinine, and troponin. Following the exclusion criteria, there were 102,337 patients included in the analysis (hsCRP <2 mg/L (n = 38,390), 2 to 4.9 mg/L (n = 27,397), 5 to 9.9 mg/L (n = 26,957), and 10 to 15 mg/L (n = 9,593)). On multivariable Cox regression analysis, there was a positive and graded relationship between hsCRP level and mortality at baseline, which remained at 3 years (hazard ratio (HR) (95% CI) of 1.32 (1.18 to 1.48) for those with hsCRP 2.0 to 4.9 mg/L, and 1.40 (1.26 to 1.57) and 2.00 (1.75 to 2.28) for those with hsCRP 5 to 9.9 mg/L and 10 to 15 mg/L, respectively). This relationship was independent of troponin in all suspected ACS patients and was further verified in those who were confirmed to have an ACS diagnosis by clinical coding. The main limitation of our study is that we did not have data on the underlying cause of death; however, the exclusion of those with abnormal WCC or hsCRP levels >15 mg/L makes it unlikely that sepsis was a major contributor. CONCLUSIONS: These multicentre, real-world data from a large cohort of patients with suspected ACS suggest that mildly elevated hsCRP (up to 15 mg/L) may be a clinically meaningful prognostic marker beyond troponin and point to its potential utility in selecting patients for novel treatments targeting inflammation. TRIAL REGISTRATION: ClinicalTrials.gov - NCT03507309.
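
A rough sketch of the grouping-plus-adjusted-Cox approach described above: hsCRP is capped at 15 mg/L, binned into the study's four categories and entered as dummy variables alongside other covariates. The data are simulated and the adjustment set is deliberately shortened (age and log troponin stand in for the full covariate list); this is not the NIHR HIC analysis itself.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 3000  # hypothetical patients with suspected ACS

df = pd.DataFrame({
    "hscrp": rng.gamma(2.0, 3.0, n),                     # mg/L (simulated)
    "years": rng.exponential(2.5, n).clip(max=3.0),      # follow-up capped at 3 years
    "died": rng.binomial(1, 0.12, n),
    "age": rng.normal(68, 12, n),
    "troponin_log": rng.normal(0, 1, n),
})

# Exclude hsCRP > 15 mg/L and form the four groups used in the abstract.
df = df[df["hscrp"] <= 15].copy()
df["hscrp_group"] = pd.cut(df["hscrp"], bins=[0, 2, 5, 10, 15],
                           labels=["<2", "2-4.9", "5-9.9", "10-15"], right=False)

model_df = pd.concat(
    [df[["years", "died", "age", "troponin_log"]],
     pd.get_dummies(df["hscrp_group"], prefix="crp", drop_first=True).astype(float)],
    axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="years", event_col="died")
print(cph.summary["exp(coef)"])  # HRs for each hsCRP group vs <2 mg/L
```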


Subjects
Acute Coronary Syndrome/blood, Acute Coronary Syndrome/mortality, C-Reactive Protein/metabolism, Acute Coronary Syndrome/diagnosis, Aged, Aged 80 and over, Biomarkers/blood, Cohort Studies, Female, Follow-Up Studies, Humans, Longitudinal Studies, Male, Middle Aged, Mortality/trends, Predictive Value of Tests, Retrospective Studies, Risk Factors, United Kingdom/epidemiology
5.
Res Synth Methods; 11(2): 260-274, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31851427

ABSTRACT

Randomized clinical trials underpin evidence-based clinical practice, but flaws in their conduct may lead to biased estimates of intervention effects and hence invalid treatment recommendations. The main approach to the empirical study of bias is to collate a number of meta-analyses and, within each, compare the results of trials with and without a methodological characteristic such as blinding of participants and health professionals. Estimated within-meta-analysis differences are combined across meta-analyses, leading to an estimate of mean bias. Such "meta-epidemiological" studies are published in increasing numbers and have the potential to inform trial design, assessment of risk of bias, and reporting guidelines. However, their interpretation is complicated by issues of confounding, imprecision, and applicability. We developed a guide for interpreting meta-epidemiological studies, illustrated using MetaBLIND, a large study on the impact of blinding. Applying generally accepted principles of research methodology to meta-epidemiology, we framed 10 questions covering the main issues to consider when interpreting results of such studies, including risk of systematic error, risk of random error, issues related to heterogeneity, and theoretical plausibility. We suggest that readers of a meta-epidemiological study reflect comprehensively on the research question posed in the study, whether an experimental intervention was unequivocally identified for all included trials, the risk of misclassification of the trial characteristic, and the risk of confounding, i.e., the adequacy of any adjustment for the likely confounders. We hope that our guide to interpretation of results of meta-epidemiological studies is helpful for readers of such studies.
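
The core meta-epidemiological computation described above can be sketched as follows: within each meta-analysis, trials with and without the characteristic (here, blinding) are compared on the log odds ratio scale, and the differences are combined across meta-analyses to give a mean bias. The per-meta-analysis summaries below are invented for illustration, and a simple fixed-effect combination is shown; studies such as MetaBLIND typically use hierarchical models that also allow for between-meta-analysis heterogeneity.

```python
import numpy as np

# Hypothetical per-meta-analysis summaries: pooled log odds ratios and
# variances for trials WITH and WITHOUT blinding of participants/personnel.
# (Numbers are invented for illustration, not MetaBLIND results.)
logor_blinded   = np.array([-0.40, -0.10, -0.55,  0.05])
var_blinded     = np.array([ 0.04,  0.02,  0.06,  0.03])
logor_unblinded = np.array([-0.55, -0.20, -0.70, -0.05])
var_unblinded   = np.array([ 0.05,  0.03,  0.08,  0.04])

# Within-meta-analysis difference = log "ratio of odds ratios" (ROR).
d = logor_unblinded - logor_blinded
v = var_unblinded + var_blinded

# Fixed-effect inverse-variance combination across meta-analyses.
w = 1.0 / v
mean_d = np.sum(w * d) / np.sum(w)
se_d = np.sqrt(1.0 / np.sum(w))

print(f"mean bias (ROR) = {np.exp(mean_d):.2f} "
      f"(95% CI {np.exp(mean_d - 1.96*se_d):.2f} to {np.exp(mean_d + 1.96*se_d):.2f})")
```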


Subjects
Bias, Epidemiologic Studies, Meta-Analysis as Topic, Empirical Research, Evidence-Based Practice, Humans, Randomized Controlled Trials as Topic, Research Design
6.
J Int AIDS Soc; 21(1), 2018 Jan.
Article in English | MEDLINE | ID: mdl-29334197

ABSTRACT

INTRODUCTION: HIV-1 infection leads to chronic inflammation and to an increased risk of non-AIDS mortality. Our objective was to determine whether AIDS-defining events (ADEs) were associated with increased overall and cause-specific non-AIDS-related mortality after antiretroviral therapy (ART) initiation. METHODS: We included HIV treatment-naïve adults from the Antiretroviral Therapy Cohort Collaboration (ART-CC) who initiated ART from 1996 to 2014. Causes of death were assigned using the Coding Causes of Death in HIV (CoDe) protocol. The adjusted hazard ratio (aHR) for overall and cause-specific non-AIDS mortality among those with an ADE (all ADEs, tuberculosis (TB), Pneumocystis jiroveci pneumonia (PJP), and non-Hodgkin's lymphoma (NHL)) compared to those without an ADE was estimated using a marginal structural model. RESULTS: The adjusted hazard of overall non-AIDS mortality was higher among those with any ADE compared to those without any ADE (aHR 2.21, 95% confidence interval (CI) 2.00 to 2.43). The adjusted hazards of the cause-specific non-AIDS-related deaths were also higher among those with any ADE compared to those without, except for metabolic deaths (malignancy aHR 2.59 (95% CI 2.13 to 3.14), accident/suicide/overdose aHR 1.37 (95% CI 1.05 to 1.79), cardiovascular aHR 1.95 (95% CI 1.54 to 2.48), infection aHR (95% CI 1.68 to 2.81), hepatic aHR 2.09 (95% CI 1.61 to 2.72), respiratory aHR 4.28 (95% CI 2.67 to 6.88), renal aHR 5.81 (95% CI 2.69 to 12.56) and central nervous system aHR 1.53 (95% CI 1.18 to 5.44)). The risk of overall and cause-specific non-AIDS mortality differed depending on the specific ADE of interest (TB, PJP, NHL). CONCLUSIONS: In this large multi-centre cohort collaboration with standardized assignment of causes of death, non-AIDS mortality was twice as high among patients with an ADE as among those without an ADE. However, non-AIDS-related mortality after an ADE depended on the ADE of interest. Although there may be unmeasured confounders, these findings suggest that a common pathway may be independently driving both ADEs and NADE mortality. While prevention of ADEs may reduce subsequent death due to NADEs following ART initiation, modification of risk factors for NADE mortality remains important after ADE survival.
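
A marginal structural model of the kind mentioned above is usually estimated with inverse-probability-of-treatment weights. The sketch below simulates a single baseline "ADE" exposure, builds stabilized weights from a logistic propensity model and fits a weighted Cox model; the real analysis used time-updated exposure and weights, so treat this purely as an illustration of the weighting idea with hypothetical variable names.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 4000  # hypothetical patients starting ART

# Simulated baseline confounders, ADE exposure, and non-AIDS mortality outcome.
cd4 = rng.normal(350, 150, n)
age = rng.normal(40, 10, n)
ade = rng.binomial(1, 1 / (1 + np.exp(0.004 * (cd4 - 350) - 0.01 * (age - 40) + 1.5)))
years = rng.exponential(8.0, n)
death = rng.binomial(1, np.where(ade == 1, 0.15, 0.08))

X = np.column_stack([cd4, age])

# Stabilized inverse-probability-of-treatment weights for the ADE exposure.
ps = LogisticRegression(max_iter=1000).fit(X, ade).predict_proba(X)[:, 1]
p_marginal = ade.mean()
w = np.where(ade == 1, p_marginal / ps, (1 - p_marginal) / (1 - ps))

msm = pd.DataFrame({"years": years, "death": death, "ade": ade, "w": w})
cph = CoxPHFitter()
cph.fit(msm, duration_col="years", event_col="death", weights_col="w", robust=True)
print(cph.summary.loc["ade", ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```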


Subjects
Acquired Immunodeficiency Syndrome/drug therapy, Anti-HIV Agents/therapeutic use, Acquired Immunodeficiency Syndrome/complications, Adult, Cohort Studies, Female, Humans, Non-Hodgkin Lymphoma/mortality, Male, Middle Aged, Pneumocystis Pneumonia/mortality, Tuberculosis/mortality
8.
Health Technol Assess; 21(29): 1-236, 2017 May.
Article in English | MEDLINE | ID: mdl-28629510

ABSTRACT

BACKGROUND: Atrial fibrillation (AF) is a common cardiac arrhythmia that increases the risk of thromboembolic events. Anticoagulation therapy to prevent AF-related stroke has been shown to be cost-effective. A national screening programme for AF may prevent AF-related events, but would involve a substantial investment of NHS resources. OBJECTIVES: To conduct a systematic review of the diagnostic test accuracy (DTA) of screening tests for AF, update a systematic review of comparative studies evaluating screening strategies for AF, develop an economic model to compare the cost-effectiveness of different screening strategies and review observational studies of AF screening to provide inputs to the model. DESIGN: Systematic review, meta-analysis and cost-effectiveness analysis. SETTING: Primary care. PARTICIPANTS: Adults. INTERVENTION: Screening strategies, defined by screening test, age at initial and final screens, screening interval and format of screening {systematic opportunistic screening [individuals offered screening if they consult with their general practitioner (GP)] or systematic population screening (when all eligible individuals are invited to screening)}. MAIN OUTCOME MEASURES: Sensitivity, specificity and diagnostic odds ratios; the odds ratio of detecting new AF cases compared with no screening; and the mean incremental net benefit compared with no screening. REVIEW METHODS: Two reviewers screened the search results, extracted data and assessed the risk of bias. A DTA meta-analysis was performed, and a decision tree and Markov model were used to evaluate the cost-effectiveness of the screening strategies. RESULTS: Diagnostic test accuracy depended on the screening test and how it was interpreted. In general, the screening tests identified in our review had high sensitivity (> 0.9). Systematic population and systematic opportunistic screening strategies were found to be similarly effective, with an estimated 170 individuals needed to be screened to detect one additional AF case compared with no screening. Systematic opportunistic screening was more likely to be cost-effective than systematic population screening, as long as the uptake of opportunistic screening observed in randomised controlled trials translates to practice. Modified blood pressure monitors, photoplethysmography or nurse pulse palpation were more likely to be cost-effective than other screening tests. A screening strategy with an initial screening age of 65 years and repeated screens every 5 years until age 80 years was likely to be cost-effective, provided that compliance with treatment does not decline with increasing age. CONCLUSIONS: A national screening programme for AF is likely to represent a cost-effective use of resources. Systematic opportunistic screening is more likely to be cost-effective than systematic population screening. Nurse pulse palpation or modified blood pressure monitors would be appropriate screening tests, with confirmation by diagnostic 12-lead electrocardiography interpreted by a trained GP, with referral to a specialist in the case of an unclear diagnosis. Implementation strategies to operationalise uptake of systematic opportunistic screening in primary care should accompany any screening recommendations. LIMITATIONS: Many inputs for the economic model relied on a single trial [the Screening for Atrial Fibrillation in the Elderly (SAFE) study] and DTA results were based on a few studies at high risk of bias/of low applicability. FUTURE WORK: Comparative studies measuring long-term outcomes of screening strategies, DTA studies of new and emerging technologies, and studies to replicate the results for photoplethysmography and GP interpretation of 12-lead electrocardiography in a screening population. STUDY REGISTRATION: This study is registered as PROSPERO CRD42014013739. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
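
Two of the headline quantities above reduce to simple arithmetic: the number needed to screen to detect one additional AF case, and the incremental net benefit of screening at a willingness-to-pay threshold. The detection probabilities and cost-effectiveness inputs below are invented; they are chosen only so that the first calculation reproduces a figure of roughly 170, matching the abstract.

```python
# Illustrative arithmetic only; inputs other than the structure of the
# calculations are hypothetical, not the HTA estimates.

p_detected_screening = 0.0150   # hypothetical: new AF detected per person screened
p_detected_no_screen = 0.0091   # hypothetical: detected anyway without screening
nns = 1 / (p_detected_screening - p_detected_no_screen)
print(f"number needed to screen per additional AF case = {nns:.0f}")

# Incremental net (monetary) benefit of a screening strategy vs no screening
# at a willingness-to-pay threshold per QALY.
wtp = 20_000          # GBP per QALY (hypothetical threshold)
delta_qalys = 0.004   # mean QALY gain per person invited (hypothetical)
delta_costs = 30.0    # mean extra cost per person invited (hypothetical)
print(f"incremental net benefit = GBP {wtp * delta_qalys - delta_costs:.2f} per person")
```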


Subjects
Atrial Fibrillation/diagnosis, Mass Screening/economics, Mass Screening/methods, Primary Health Care/economics, Primary Health Care/methods, Aged, Aged 80 and over, Blood Pressure, Cost-Benefit Analysis, Electrocardiography, Female, Humans, Male, Mass Screening/standards, Econometric Models, Patient Acceptance of Health Care, Arterial Pulse, Quality-Adjusted Life Years, Randomized Controlled Trials as Topic, Sensitivity and Specificity
9.
Health Technol Assess; 21(9): 1-386, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28279251

ABSTRACT

BACKGROUND: Warfarin is effective for stroke prevention in atrial fibrillation (AF), but anticoagulation is underused in clinical care. The risk of venous thromboembolic disease during hospitalisation can be reduced by low-molecular-weight heparin (LMWH): warfarin is the most frequently prescribed anticoagulant for treatment and secondary prevention of venous thromboembolism (VTE). Warfarin-related bleeding is a major reason for hospitalisation for adverse drug effects. Warfarin is cheap but therapeutic monitoring increases treatment costs. Novel oral anticoagulants (NOACs) have more rapid onset and offset of action than warfarin, and more predictable dosing requirements. OBJECTIVE: To determine the best oral anticoagulant/s for prevention of stroke in AF and for primary prevention, treatment and secondary prevention of VTE. DESIGN: Four systematic reviews, network meta-analyses (NMAs) and cost-effectiveness analyses (CEAs) of randomised controlled trials. SETTING: Hospital (VTE primary prevention and acute treatment) and primary care/anticoagulation clinics (AF and VTE secondary prevention). PARTICIPANTS: Patients eligible for anticoagulation with warfarin (stroke prevention in AF, acute treatment or secondary prevention of VTE) or LMWH (primary prevention of VTE). INTERVENTIONS: NOACs, warfarin and LMWH, together with other interventions (antiplatelet therapy, placebo) evaluated in the evidence network. MAIN OUTCOME MEASURES: Efficacy: stroke, symptomatic VTE, symptomatic deep-vein thrombosis and symptomatic pulmonary embolism. Safety: major bleeding, clinically relevant bleeding and intracranial haemorrhage. We also considered myocardial infarction and all-cause mortality and evaluated cost-effectiveness. DATA SOURCES: MEDLINE and PREMEDLINE In-Process & Other Non-Indexed Citations, EMBASE and The Cochrane Library, reference lists of published NMAs and trial registries. The stroke prevention in AF review search was run on 12 March 2014 and updated on 15 September 2014, and covered the period 2010 to September 2014. The search for the three reviews in VTE was run on 19 March 2014, updated on 15 September 2014, and covered the period 2008 to September 2014. REVIEW METHODS: Two reviewers screened search results, extracted and checked data, and assessed risk of bias. For each outcome we conducted standard meta-analysis and NMA. We evaluated cost-effectiveness using discrete-time Markov models. RESULTS: Apixaban (Eliquis®, Bristol-Myers Squibb, USA; Pfizer, USA) [5 mg bd (twice daily)] was ranked among the best interventions for stroke prevention in AF, and had the highest expected net benefit. Edoxaban (Lixiana®, Daiichi Sankyo, Japan) [60 mg od (once daily)] was ranked second for major bleeding and all-cause mortality. Neither the clinical effectiveness analysis nor the CEA provided strong evidence that NOACs should replace postoperative LMWH in primary prevention of VTE. For acute treatment and secondary prevention of VTE, we found little evidence that NOACs offer an efficacy advantage over warfarin, but the risk of bleeding complications was lower for some NOACs than for warfarin. For a willingness-to-pay threshold of > £5000, apixaban (5 mg bd) had the highest expected net benefit for acute treatment of VTE.
Aspirin or no pharmacotherapy were likely to be the most cost-effective interventions for secondary prevention of VTE: our results suggest that it is not cost-effective to prescribe NOACs or warfarin for this indication. CONCLUSIONS: NOACs have advantages over warfarin in patients with AF, but we found no strong evidence that they should replace warfarin or LMWH in primary prevention, treatment or secondary prevention of VTE. LIMITATIONS: These relate mainly to shortfalls in the primary data: in particular, there were no head-to-head comparisons between different NOAC drugs. FUTURE WORK: Calculating the expected value of sample information to clarify whether or not it would be justifiable to fund one or more head-to-head trials. STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005324, CRD42013005331 and CRD42013005330. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
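
The cost-effectiveness machinery referred to above (discrete-time Markov models compared by expected net benefit at a willingness-to-pay threshold) can be illustrated with a deliberately tiny cohort model. Every transition probability, cost and utility below is invented, as are the two strategies' parameter values; this is a toy version of the method, not a reproduction of the HTA models.

```python
import numpy as np

def run_markov(p_stroke, p_death, annual_cost, cycles=30, disc=0.035):
    # States: 0 = well on anticoagulant, 1 = post-stroke, 2 = dead.
    P = np.array([
        [1 - p_stroke - p_death, p_stroke, p_death],
        [0.0,                    0.95,     0.05],
        [0.0,                    0.0,      1.0],
    ])
    utils = np.array([0.80, 0.60, 0.0])           # QALYs per cycle in each state
    costs = np.array([annual_cost, 3000.0, 0.0])  # cost per cycle in each state
    state = np.array([1.0, 0.0, 0.0])             # whole cohort starts well
    qalys = cost = 0.0
    for t in range(cycles):
        df_t = 1.0 / (1.0 + disc) ** t            # discount factor for cycle t
        qalys += df_t * state @ utils
        cost += df_t * state @ costs
        state = state @ P                         # advance the cohort one cycle
    return qalys, cost

wtp = 20_000  # hypothetical willingness-to-pay per QALY
for name, args in {"warfarin": (0.020, 0.010, 400.0),
                   "NOAC":     (0.015, 0.010, 800.0)}.items():
    q, c = run_markov(*args)
    print(f"{name}: QALYs={q:.2f}, cost={c:.0f}, net benefit={wtp * q - c:,.0f}")
```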


Subjects
Anticoagulants/administration & dosage, Atrial Fibrillation/diagnosis, Mass Screening/economics, Mass Screening/methods, Stroke/prevention & control, Venous Thromboembolism/prevention & control, Age Distribution, Aged, Aged 80 and over, Blood Pressure, Cost-Benefit Analysis, Electrocardiography, Female, Humans, Male, Markov Chains, Mass Screening/standards, Econometric Models, Network Meta-Analysis, Observational Studies as Topic, Primary Health Care, Arterial Pulse, Secondary Prevention, Sensitivity and Specificity, State Medicine/economics, United Kingdom
10.
Arch Dis Child; 102(6): 522-528, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28104625

ABSTRACT

OBJECTIVE: Little is known about persistence of or recovery from chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) in adolescents. Previous studies have had small sample sizes or short follow-up, or have focused on fatigue rather than CFS/ME or, equivalently, disabling chronic fatigue. This work aimed to describe the epidemiology and natural course of CFS/ME in adolescents aged 13-18 years. DESIGN: Longitudinal follow-up of adolescents enrolled in the Avon Longitudinal Study of Parents and Children. SETTING: Avon, UK. PARTICIPANTS: We identified adolescents who had disabling fatigue of >6 months' duration without a known cause at ages 13, 16 and 18 years. We use the term 'chronic disabling fatigue' (CDF) because CFS/ME was not verified by clinical diagnosis. We used multiple imputation to obtain unbiased estimates of prevalence and persistence. RESULTS: The estimated prevalence of CDF was 1.47% (95% CI 1.05% to 1.89%) at age 13, 2.22% (1.67% to 2.78%) at age 16 and 2.99% (2.24% to 3.75%) at age 18. Among adolescents with CDF of 6 months' duration at age 13, 75.3% (64.0% to 86.6%) were not classified as such at age 16. Similar change was observed between 16 and 18 years (75.0% (62.8% to 87.2%)). Of those with CDF at age 13, 8.02% (0.61% to 15.4%) presented with CDF throughout the duration of adolescence. CONCLUSIONS: The prevalence of CDF lasting 6 months or longer (a proxy for clinically diagnosed CFS/ME) increases from 13 to 18 years. However, persistent CDF is rare in adolescents, with approximately 75% recovering after 2-3 years.
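
Multiple-imputation estimates such as the prevalences above are pooled across imputed datasets with Rubin's rules. The sketch below shows that pooling step only, with invented per-imputation estimates and a normal-approximation confidence interval (a full analysis would use a t reference distribution and, of course, the imputation step itself).

```python
import numpy as np

# Pooling a prevalence estimate across multiply imputed datasets with
# Rubin's rules. The per-imputation values are invented for illustration;
# in the study they would come from m imputed ALSPAC datasets.
p_hat = np.array([0.0141, 0.0152, 0.0149, 0.0138, 0.0156])    # prevalence per imputation
var_hat = np.array([4.1e-6, 4.3e-6, 4.2e-6, 4.0e-6, 4.4e-6])  # within-imputation variances
m = len(p_hat)

q_bar = p_hat.mean()                   # pooled estimate
u_bar = var_hat.mean()                 # average within-imputation variance
b = p_hat.var(ddof=1)                  # between-imputation variance
total_var = u_bar + (1 + 1 / m) * b    # Rubin's total variance
se = np.sqrt(total_var)

print(f"prevalence = {q_bar:.2%} (95% CI {q_bar - 1.96*se:.2%} to {q_bar + 1.96*se:.2%})")
```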


Subjects
Chronic Fatigue Syndrome/diagnosis, Adolescent, Educational Status, England/epidemiology, Chronic Fatigue Syndrome/epidemiology, Female, Follow-Up Studies, Humans, Male, Prevalence, Prognosis, Sex Distribution, Time Factors
12.
Health Technol Assess; 20(51): 1-294, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27401902

ABSTRACT

BACKGROUND: It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. OBJECTIVES: To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. DESIGN: Multicentre, prospective diagnostic cohort study. SETTING AND PARTICIPANTS: Children < 5 years old presenting to primary care with an acute illness and/or new urinary symptoms. METHODS: One hundred and seven clinical characteristics (index tests) were recorded from the child's past medical history, symptoms, physical examination signs and urine dipstick test. Prior to dipstick results, the clinician's opinion of UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10⁵ colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using the area under the receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinical diagnosis' AUROC. Decision-analytic models were used to identify the optimal urine sampling strategy compared with 'clinical judgement'. RESULTS: A total of 7163 children were recruited, of whom 50% were female and 49% were < 2 years old. Culture results were available for 5017 (70%); 2740 children provided clean-catch samples, 94% of whom were ≥ 2 years old, with 2.2% meeting the UTI definition. Among these, 'clinical diagnosis' correctly identified 46.6% of positive cultures, with 94.7% specificity and an AUROC of 0.77 (95% CI 0.71 to 0.83). Four symptoms, three signs and three dipstick results were independently associated with UTI, with an AUROC (95% CI; bootstrap-validated AUROC) of 0.89 (0.85 to 0.95; validated 0.88) for symptoms and signs, increasing to 0.93 (0.90 to 0.97; validated 0.90) with dipstick results. Nappy pad samples were provided by the other 2277 children, of whom 82% were < 2 years old and 1.3% met the UTI definition. 'Clinical diagnosis' correctly identified 13.3% of positive cultures, with 98.5% specificity and an AUROC of 0.63 (95% CI 0.53 to 0.72). Four symptoms and two dipstick results were independently associated with UTI, with an AUROC of 0.81 (0.72 to 0.90; validated 0.78) for symptoms, increasing to 0.87 (0.80 to 0.94; validated 0.82) with the dipstick findings. A high specificity threshold for the clean-catch model was more accurate and less costly than, and as effective as, clinical judgement. The additional diagnostic utility of dipstick testing was offset by its costs. The cost-effectiveness of the nappy pad model was not clear-cut. CONCLUSIONS: Clinicians should prioritise the use of clean-catch sampling, as symptoms and signs can cost-effectively improve the identification of UTI in young children where clean catch is possible. Dipstick testing can improve targeting of antibiotic treatment, but at a higher cost than waiting for a laboratory result. Future research is needed to distinguish pathogens from contaminants, assess the impact of the clean-catch algorithm on patient outcomes, and assess the cost-effectiveness of presumptive versus dipstick versus laboratory-guided antibiotic treatment. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
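
The modelling pattern described above (multivariable logistic regression summarized by an AUROC, with bootstrap validation to correct optimism) can be sketched as below on simulated data. Predictor names, effect sizes and the outcome prevalence are invented; the optimism-correction loop is a generic stand-in for whatever internal-validation scheme the study used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(12)
n = 2500  # hypothetical clean-catch samples

# Simulated symptom/sign/dipstick predictors and a rare UTI outcome.
X = rng.normal(size=(n, 6))
logit = -3.8 + X @ np.array([0.8, 0.6, 0.5, 0.4, 0.9, 0.7])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
auc_apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Simple bootstrap optimism correction, a rough stand-in for the
# bootstrap-validated AUROC reported in the abstract.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print(f"apparent AUROC = {auc_apparent:.3f}, "
      f"validated AUROC = {auc_apparent - np.mean(optimism):.3f}")
```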


Subjects
Algorithms, Primary Health Care/methods, Urinary Tract Infections/diagnosis, Urine Specimen Collection/economics, Urine Specimen Collection/methods, Preschool Child, Cost-Benefit Analysis, Female, Humans, Infant, Male, Prospective Studies, ROC Curve, Sensitivity and Specificity, Single-Blind Method, Urine Specimen Collection/standards
13.
Br J Gen Pract; 66(648): e516-24, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27364678

ABSTRACT

BACKGROUND: The added diagnostic utility of nappy pad urine samples and the proportion that are contaminated are unknown. AIM: To develop a clinical prediction rule for the diagnosis of urinary tract infection (UTI) based on sampling using the nappy pad method. DESIGN AND SETTING: Acutely unwell children <5 years presenting to 233 UK primary care sites. METHOD: Logistic regression to identify independent associations of symptoms, signs, and urine dipstick test results with UTI; diagnostic utility quantified as the area under the receiver operating characteristic curve (AUROC). Nappy pad rule characteristics, AUROC, and contamination were compared with findings from clean-catch samples. RESULTS: Nappy pad samples were obtained from 3205 children (82% aged <2 years; 48% female); culture results were available for 2277 (71.0%), and 30 (1.3%) had a UTI on culture. Female sex, smelly urine, darker urine, and the absence of nappy rash were independently associated with a UTI, with an internally validated, coefficient-based model AUROC of 0.81 (0.87 for clean catch), which increased to 0.87 (0.90 for clean catch) with the addition of dipstick results. GPs' 'working diagnosis' had an AUROC of 0.63 (95% confidence interval [CI] = 0.53 to 0.72). A total of 12.2% of nappy pad and 1.8% of clean-catch samples were 'frankly contaminated' (risk ratio 6.66; 95% CI = 4.95 to 8.96; P<0.001). CONCLUSION: Nappy pad urine culture results, together with features that can be reported by parents and dipstick tests, can be clinically useful, but are less accurate and more often contaminated than clean-catch urine culture.
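
The contamination comparison above is a simple risk ratio with a log-scale Wald confidence interval. The counts below are back-calculated approximately from the percentages and denominators quoted in the abstract, so the result is close to, but not exactly, the reported 6.66.

```python
import numpy as np

# Risk ratio of 'frank contamination' for nappy pad vs clean-catch samples.
a, n1 = 391, 3205   # contaminated / total nappy pad samples (~12.2%, approximate)
b, n2 = 49, 2740    # contaminated / total clean-catch samples (~1.8%, approximate)

rr = (a / n1) / (b / n2)
se_log_rr = np.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```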


Subjects
Infant Diapers/statistics & numerical data, Primary Health Care, Specimen Handling/methods, Urinary Tract Infections/diagnosis, Preschool Child, Female, Humans, Infant, Male, Prospective Studies, United Kingdom, Urinalysis, Urinary Tract Infections/urine
14.
J Int AIDS Soc; 19(1): 20044, 2016.
Article in English | MEDLINE | ID: mdl-26861115

ABSTRACT

INTRODUCTION: Response to antiretroviral therapy (ART) among individuals infected with HIV-2 is poorly described. We compared the immunological response among patients treated with three nucleoside reverse-transcriptase inhibitors (NRTIs) to boosted protease inhibitor (PI) and unboosted PI-based regimens in West Africa. METHODS: This prospective cohort study enrolled treatment-naïve HIV-2-infected patients within the International Epidemiological Databases to Evaluate AIDS collaboration in West Africa. We used mixed models to compare the CD4 count response to treatment over 12 months between regimens. RESULTS: Of 422 HIV-2-infected patients, 285 (67.5%) were treated with a boosted PI-based regimen, 104 (24.6%) with an unboosted PI-based regimen and 33 (7.8%) with three NRTIs. Treatment groups were comparable with regard to gender (54.5% female) and median age at ART initiation (45.3 years; interquartile range 38.3 to 51.8). Treatment groups differed by clinical stage (21.2%, 16.8% and 17.3% at CDC Stage C or World Health Organization Stage IV for the triple NRTI, boosted PI and unboosted PI groups, respectively, p=0.02), median length of follow-up (12.9, 17.7 and 44.0 months for the triple NRTI, the boosted PI and the unboosted PI groups, respectively, p<0.001) and baseline median CD4 count (192, 173 and 129 cells/µl in the triple NRTI, the boosted PI and the unboosted PI-based regimen groups, respectively, p=0.003). CD4 count recovery at 12 months was higher for patients treated with boosted PI-based regimens than those treated with three NRTIs or with unboosted PI-based regimens (191 cells/µl, 95% CI 142 to 241; 110 cells/µl, 95% CI 29 to 192; 133 cells/µl, 95% CI 80 to 186, respectively, p=0.004). CONCLUSIONS: In this observational study using African data, boosted PI-containing regimens had better immunological response compared to triple NRTI combinations and unboosted PI-based regimens at 12 months. A randomized clinical trial is still required to determine the best initial regimen for treating HIV-2 infected patients.
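
The mixed-model comparison described above can be sketched with a random-intercept linear mixed model for repeated CD4 counts, with a months-by-regimen interaction giving regimen-specific trajectories. Patients, visit schedule and effect sizes below are simulated and the variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(14)
n_pat, visits = 400, [0, 3, 6, 12]  # hypothetical patients and months on ART

rows = []
for i in range(n_pat):
    regimen = rng.choice(["boosted_PI", "unboosted_PI", "triple_NRTI"],
                         p=[0.67, 0.25, 0.08])
    base = rng.normal(170, 80)                 # baseline CD4 (cells/µl, simulated)
    slope = {"boosted_PI": 16, "unboosted_PI": 11, "triple_NRTI": 9}[regimen]
    for m in visits:
        rows.append({"id": i, "months": m, "regimen": regimen,
                     "cd4": base + slope * m + rng.normal(0, 40)})
df = pd.DataFrame(rows)

# Random-intercept linear mixed model: regimen-specific CD4 trajectories.
model = smf.mixedlm("cd4 ~ months * regimen", df, groups=df["id"]).fit()
print(model.summary())
```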


Subjects
HIV Infections/drug therapy, HIV-2, Adult, CD4 Lymphocyte Count, Cohort Studies, Female, HIV Infections/immunology, Humans, Male, Middle Aged, Prospective Studies
16.
BMC Public Health; 12: 260, 2012 Apr 02.
Article in English | MEDLINE | ID: mdl-22471759

ABSTRACT

BACKGROUND: Pesticide self-poisoning is the most commonly used suicide method worldwide, but few studies have investigated the national epidemiology of pesticide suicide in countries where it is a major public health problem. This study aims to investigate geographic variations in pesticide suicide and their impact on the spatial distribution of suicide in Taiwan. METHODS: Smoothed standardized mortality ratios for pesticide suicide (2002-2009) were mapped across Taiwan's 358 districts (median population aged 15 or above = 27,000), and their associations with the size of the agricultural workforce were investigated using Bayesian hierarchical models. RESULTS: In 2002-2009 pesticide poisoning was the third most common suicide method in Taiwan, accounting for 13.6% (4913/36,110) of all suicides. Rates were higher in agricultural East and Central Taiwan and lower in major cities. Almost half (47%) of all pesticide suicides occurred in areas where only 13% of Taiwan's population lived. The geographic distribution of overall suicides was more similar to that of pesticide suicides than non-pesticide suicides. Rural-urban differences in suicide were mostly due to pesticide suicide. Areas where a higher proportion of people worked in agriculture showed higher pesticide suicide rates (adjusted rate ratio [ARR] per standard deviation increase in the proportion of agricultural workers = 1.58, 95% Credible Interval [CrI] 1.44-1.74) and overall suicide rates (ARR = 1.06, 95% CrI 1.03-1.10) but lower non-pesticide suicide rates (ARR = 0.91, 95% CrI 0.87-0.95). CONCLUSION: Easy access to pesticides appears to influence the geographic distribution of suicide in Taiwan, highlighting the potential benefits of targeted prevention strategies such as restricting access to highly toxic pesticides.
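
The association reported above (a rate ratio per standard-deviation increase in the agricultural workforce) can be approximated with a Poisson regression of observed suicide counts with the log of the expected counts as an offset. The district-level data below are simulated, and the frequentist GLM is a simplified stand-in for the Bayesian hierarchical, spatially smoothed models actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(16)
n = 358  # hypothetical districts

# Simulated district-level data: expected suicides (from national rates),
# standardised proportion of agricultural workers, observed pesticide suicides.
expected = rng.gamma(5.0, 3.0, n)
agri_z = rng.normal(0, 1, n)                  # per-SD agricultural workforce
observed = rng.poisson(expected * np.exp(0.45 * agri_z))

X = sm.add_constant(pd.DataFrame({"agri_z": agri_z}))
fit = sm.GLM(observed, X, family=sm.families.Poisson(),
             offset=np.log(expected)).fit()

rr = np.exp(fit.params["agri_z"])
lo, hi = np.exp(fit.conf_int().loc["agri_z"])
print(f"rate ratio per SD of agricultural workforce = {rr:.2f} ({lo:.2f} to {hi:.2f})")
```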


Subjects
Occupational Exposure/adverse effects, Suicide/statistics & numerical data, Agriculture, Female, Geographic Information Systems, Humans, Male, Pesticide Residues, Population Density, Risk Factors, Taiwan/epidemiology
17.
Cochrane Database Syst Rev; (1): CD003648, 2011 Jan 19.
Article in English | MEDLINE | ID: mdl-21249656

ABSTRACT

BACKGROUND: Observational studies of pregnant women in sub-Saharan Africa have shown that low serum vitamin A levels are associated with an increased risk of mother-to-child transmission (MTCT) of HIV. Vitamin A is cheap and easily provided through existing health services in low-income settings. It is thus important to determine the effect of routine supplementation of HIV positive pregnant or breastfeeding women with this vitamin on the risk of MTCT of HIV, which currently results in more than 1000 new HIV infections each day world-wide. OBJECTIVES: We aimed to assess the effect of antenatal and/or postpartum vitamin A supplementation on the risk of MTCT of HIV as well as infant and maternal mortality and morbidity. SEARCH STRATEGY: In June 2010 we searched the Cochrane Central Register of Controlled Trials, PubMed, EMBASE, AIDS Education Global Information System, and WHO International Clinical Trials Registry Platform; and checked reference lists of identified articles for any studies published after the earlier version of this review was updated in 2008. SELECTION CRITERIA: We selected randomised controlled trials conducted in any setting that compared vitamin A supplementation with placebo in known HIV-infected pregnant or breastfeeding women. DATA COLLECTION AND ANALYSIS: At least two authors independently assessed trial eligibility and quality and extracted data. We calculated relative risks (RR) or mean differences (MD), with their 95% confidence intervals (CI), for each study. We conducted meta-analysis using a fixed-effects method (when there was no significant heterogeneity between study results, i.e. P>0.1) or the random-effects method (when there was significant heterogeneity), and report the Higgins I² statistic for all pooled effect measures. MAIN RESULTS: Five randomised controlled trials which enrolled 7,528 HIV-infected women (either during pregnancy or the immediate postpartum period) met our inclusion criteria. These trials were conducted in Malawi, South Africa, Tanzania, and Zimbabwe between 1995 and 2005. We combined the results of these trials and found no evidence that vitamin A supplementation has an effect on the risk of MTCT of HIV (4 trials, 6517 women: RR 1.04, 95% CI 0.87 to 1.24; I² = 68%). However, antenatal vitamin A supplementation significantly improved birth weight (3 trials, 1809 women: MD 89.78 g, 95% CI 84.73 to 94.83; I² = 33.0%), but there was no evidence of an effect on preterm births (3 trials, 2110 women: RR 0.88, 95% CI 0.65 to 1.19; I² = 58.1%), stillbirths (4 trials, 2855 women: RR 0.99, 95% CI 0.68 to 1.43; I² = 0%), deaths by 24 months (2 trials, 1635 women: RR 1.03, 95% CI 0.88 to 1.20; I² = 0%), postpartum CD4 levels (1 trial, 727 women: MD -4.00, 95% CI -51.06 to 43.06), or maternal death (1 trial, 728 women: RR 0.49, 95% CI 0.04 to 5.37). AUTHORS' CONCLUSIONS: Current best evidence shows that antenatal or postpartum vitamin A supplementation probably has little or no effect on mother-to-child transmission of HIV. According to the GRADE classification, the quality of this evidence is moderate, implying that the true effect of vitamin A supplementation on the risk of mother-to-child transmission of HIV is likely to be close to the findings of this review, but that there is also a possibility that it is substantially different.
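
The pooling rules described above (a fixed-effects method when heterogeneity is low, a random-effects method otherwise, with I² reported throughout) correspond to the standard inverse-variance and DerSimonian-Laird formulas. The trial-level relative risks below are invented purely to show the computation.

```python
import numpy as np

# Fixed-effect and DerSimonian-Laird random-effects pooling of relative
# risks, with the Higgins I² statistic. Trial-level values are invented.
rr = np.array([1.10, 0.95, 1.20, 0.90, 1.05])
se = np.array([0.12, 0.10, 0.20, 0.15, 0.09])   # standard errors of log RR

y, v = np.log(rr), se ** 2
w = 1 / v

# Fixed-effect estimate and heterogeneity statistics.
theta_fe = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - theta_fe) ** 2)
k = len(y)
i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird between-trial variance and random-effects pooling.
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (v + tau2)
theta_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"I² = {i2:.0f}%")
print(f"random-effects RR = {np.exp(theta_re):.2f} "
      f"(95% CI {np.exp(theta_re - 1.96*se_re):.2f} to {np.exp(theta_re + 1.96*se_re):.2f})")
```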


Subjects
HIV Infections/transmission, Vertical Transmission of Infectious Disease/prevention & control, Infectious Pregnancy Complications, Vitamin A Deficiency/complications, Vitamin A/administration & dosage, Vitamins/administration & dosage, Female, HIV Infections/prevention & control, Humans, Newborn Infant, Pregnancy, Randomized Controlled Trials as Topic, Vitamin A Deficiency/drug therapy
18.
J Clin Epidemiol; 64(6): 602-7, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21075596

ABSTRACT

OBJECTIVE: To compare the performance of MEDLINE searches using index test(s) and target condition (subject searches) with the same searches combined with methodological filters for test accuracy studies. STUDY DESIGN AND SETTING: We derived a reference set of 506 test accuracy studies indexed on MEDLINE from seven systematic reviews that conducted extensive searches. We compared the performance of "subject" with "filtered" searches (the same searches combined with each of 22 filters). Outcome measures were the number of reference set records missed, sensitivity, number needed to read (NNR), and precision (number of reference set studies identified per 100 records screened). RESULTS: Subject searches missed 47 of the 506 reference studies; filtered searches missed an additional 21 to 241 studies. Sensitivity was 91% for subject searches and ranged from 43% to 87% for filtered searches. The NNR was 56 (precision 2%) for subject searches and ranged from 7 to 51 (precision 2-15%) for filtered searches. CONCLUSIONS: Filtered searches miss additional studies compared with searches based on index test and target condition. None of the existing filters provided reductions in the NNR at acceptable sensitivity; currently available methodological filters should not be used to identify studies for inclusion in test accuracy reviews.
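
The outcome measures above are related by simple arithmetic, shown below using the subject-search figures quoted in the abstract (506 reference studies, 47 missed, NNR of 56); the "implied records screened" line is an approximation that simply inverts the NNR definition.

```python
# Search-performance arithmetic using the abstract's own definitions and figures.
reference_total = 506
missed = 47
identified = reference_total - missed
sensitivity = identified / reference_total      # ~0.91

# NNR = records screened per reference study identified;
# precision = reference studies identified per 100 records screened.
nnr = 56                                        # reported for subject searches
precision = 100 / nnr                           # ~2 per 100 records (i.e. 2%)
records_screened = nnr * identified             # implied screening workload (approximate)

print(f"sensitivity = {sensitivity:.0%}, NNR = {nnr}, precision = {precision:.1f}%")
print(f"implied records screened = {records_screened}")
```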


Subjects
Routine Diagnostic Tests/standards, Information Storage and Retrieval/standards, MEDLINE/standards, Randomized Controlled Trials as Topic/standards, Evidence-Based Medicine, Humans, Research Design, United States
19.
Int J Epidemiol; 34(5): 1089-99, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16087687

ABSTRACT

Twin studies have long been recognized for their value in learning about the aetiology of disease and specifically for their potential for separating genetic effects from environmental effects. The recent upsurge of interest in life-course epidemiology and the study of developmental influences on later health has provided a new impetus to study twins as a source of unique insights. Twins are of special interest because they provide naturally matched pairs where the confounding effects of a large number of potentially causal factors (such as maternal nutrition or gestation length) may be removed by comparisons between twins who share them. The traditional tool of epidemiological 'risk factor analysis' is the regression model, but it is not straightforward to transfer standard regression methods to twin data, because the analysis needs to reflect the paired structure of the data, which induces correlation between twins. This paper reviews the use of more specialized regression methods for twin data, based on generalized least squares or linear mixed models, and explains the relationship between these methods and the commonly used approach of analysing within-twin-pair difference values. Methods and issues of interpretation are illustrated using an example from a recent study of the association between birth weight and cord blood erythropoietin. We focus on the analysis of continuous outcome measures but review additional complexities that arise with binary outcomes. We recommend the use of a general model that includes separate regression coefficients for within-twin-pair and between-pair effects, and provide guidelines for the interpretation of estimates obtained under this model.
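
The recommended model above, with separate within-pair and between-pair regression coefficients, can be sketched by splitting the exposure into a pair mean and a within-pair deviation and fitting a linear mixed model with a random intercept per pair. The twin data below are simulated and the birth-weight/erythropoietin effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(19)
n_pairs = 250  # hypothetical twin pairs

# Simulated twin data: birth weight (kg) and cord blood erythropoietin
# (arbitrary units), with shared pair-level and twin-specific components.
pair_effect = rng.normal(0, 0.4, n_pairs)
rows = []
for p in range(n_pairs):
    for twin in (1, 2):
        bw = 2.5 + pair_effect[p] + rng.normal(0, 0.3)
        epo = 30 - 4.0 * bw + 5 * pair_effect[p] + rng.normal(0, 3)
        rows.append({"pair": p, "bw": bw, "epo": epo})
df = pd.DataFrame(rows)

# Separate between-pair (pair mean) and within-pair (deviation from the
# pair mean) regressors, then fit a mixed model with a random intercept per pair.
df["bw_between"] = df.groupby("pair")["bw"].transform("mean")
df["bw_within"] = df["bw"] - df["bw_between"]

fit = smf.mixedlm("epo ~ bw_between + bw_within", df, groups=df["pair"]).fit()
print(fit.summary())
```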


Subjects
Regression Analysis, Twin Studies as Topic, Birth Weight/physiology, Statistical Data Interpretation, Erythropoietin/blood, Guidelines as Topic, Humans, Least-Squares Analysis, Likelihood Functions, Linear Models