1.
Health Technol Assess; 27(8): 1-257, 2023 May.
Article in English | MEDLINE | ID: mdl-37435838

ABSTRACT

Background: Bleeding among populations undergoing percutaneous coronary intervention or coronary artery bypass grafting and among conservatively managed patients with acute coronary syndrome exposed to different dual antiplatelet therapy and triple therapy (i.e. dual antiplatelet therapy plus an anticoagulant) has not been previously quantified. Objectives: The objectives were to estimate hazard ratios for bleeding for different antiplatelet and triple therapy regimens, estimate resources and the associated costs of treating bleeding events, and to extend existing economic models of the cost-effectiveness of dual antiplatelet therapy. Design: The study was designed as three retrospective population-based cohort studies emulating target randomised controlled trials. Setting: The study was set in primary and secondary care in England from 2010 to 2017. Participants: Participants were patients aged ≥ 18 years undergoing coronary artery bypass grafting or emergency percutaneous coronary intervention (for acute coronary syndrome), or conservatively managed patients with acute coronary syndrome. Data sources: Data were sourced from linked Clinical Practice Research Datalink and Hospital Episode Statistics. Interventions: Coronary artery bypass grafting and conservatively managed acute coronary syndrome: aspirin (reference) compared with aspirin and clopidogrel. Percutaneous coronary intervention: aspirin and clopidogrel (reference) compared with aspirin and prasugrel (ST elevation myocardial infarction only) or aspirin and ticagrelor. Main outcome measures: Primary outcome: any bleeding events up to 12 months after the index event. Secondary outcomes: major or minor bleeding, all-cause and cardiovascular mortality, mortality from bleeding, myocardial infarction, stroke, additional coronary intervention and major adverse cardiovascular events. 
Results: The incidence of any bleeding was 5% among coronary artery bypass graft patients, 10% among conservatively managed acute coronary syndrome patients and 9% among emergency percutaneous coronary intervention patients, compared with 18% among patients prescribed triple therapy. Among coronary artery bypass grafting and conservatively managed acute coronary syndrome patients, dual antiplatelet therapy, compared with aspirin, increased the hazards of any bleeding (coronary artery bypass grafting: hazard ratio 1.43, 95% confidence interval 1.21 to 1.69; conservatively-managed acute coronary syndrome: hazard ratio 1.72, 95% confidence interval 1.15 to 2.57) and major adverse cardiovascular events (coronary artery bypass grafting: hazard ratio 2.06, 95% confidence interval 1.23 to 3.46; conservatively-managed acute coronary syndrome: hazard ratio 1.57, 95% confidence interval 1.38 to 1.78). Among emergency percutaneous coronary intervention patients, dual antiplatelet therapy with ticagrelor, compared with dual antiplatelet therapy with clopidogrel, increased the hazard of any bleeding (hazard ratio 1.47, 95% confidence interval 1.19 to 1.82), but did not reduce the incidence of major adverse cardiovascular events (hazard ratio 1.06, 95% confidence interval 0.89 to 1.27). Among ST elevation myocardial infarction percutaneous coronary intervention patients, dual antiplatelet therapy with prasugrel, compared with dual antiplatelet therapy with clopidogrel, increased the hazard of any bleeding (hazard ratio 1.48, 95% confidence interval 1.02 to 2.12), but did not reduce the incidence of major adverse cardiovascular events (hazard ratio 1.10, 95% confidence interval 0.80 to 1.51). 
Health-care costs in the first year did not differ between dual antiplatelet therapy with clopidogrel and aspirin monotherapy among either coronary artery bypass grafting patients (mean difference £94, 95% confidence interval -£155 to £763) or conservatively managed acute coronary syndrome patients (mean difference £610, 95% confidence interval -£626 to £1516), but among emergency percutaneous coronary intervention patients they were higher for those receiving dual antiplatelet therapy with ticagrelor than for those receiving dual antiplatelet therapy with clopidogrel, although only for patients on concurrent proton pump inhibitors (mean difference £1145, 95% confidence interval £269 to £2195). Conclusions: This study suggests that more potent dual antiplatelet therapy may increase the risk of bleeding without reducing the incidence of major adverse cardiovascular events. These results should be carefully considered by clinicians and decision-makers alongside randomised controlled trial evidence when making recommendations about dual antiplatelet therapy. Limitations: The estimates for bleeding and major adverse cardiovascular events may be biased from unmeasured confounding and the exclusion of an eligible subgroup of patients who could not be assigned an intervention. Because of these limitations, a formal cost-effectiveness analysis could not be conducted. Future work: Future work should explore the feasibility of using other UK data sets of routinely collected data, less susceptible to bias, to estimate the benefit and harm of antiplatelet interventions. Trial registration: This trial is registered as ISRCTN76607611. Funding: This project was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 27, No. 8. See the NIHR Journals Library website for further project information.


People who have a heart attack are treated with a stent to open up the blocked artery that caused the heart attack, with surgery to bypass the blocked artery or with medication only. Whatever the treatment, they are prescribed one or more antiplatelet drugs, either aspirin only or aspirin and an additional antiplatelet (clopidogrel, prasugrel or ticagrelor), for 12 months after the heart attack. Antiplatelets are given to prevent another heart attack, but increase the risk of bleeding. We used a large general practice database and a database describing patients' attendances and admissions to hospital to determine how many people bleed with different antiplatelet combinations. We found that, overall, up to 1 in 10 people taking antiplatelets (rising to 2 in 10 if also taking an anticoagulant such as warfarin or dabigatran) reported a bleed. Among patients treated with surgery or medication only, we compared aspirin only (which is a less potent therapy) with aspirin and clopidogrel (a more potent therapy). Among patients treated with stents, we compared aspirin and clopidogrel (less potent therapy) with aspirin and prasugrel or ticagrelor (more potent therapy). In all three populations, the more potent therapy increased the risk of bleeding by about one and a half times, but this was not offset by a reduced risk of having a subsequent heart attack. This may be explained by low adherence to the medication: between one-third and almost half of all patients did not adhere to their regimen, and non-adherence was generally higher among patients taking a more potent therapy. It may also be explained by bias inherent in the study, for example if the groups prescribed different antiplatelet regimens had different risks of having another heart attack. Nevertheless, the results show that doctors should be cautious about prescribing more potent antiplatelet therapy because it may increase serious bleeds without necessarily reducing the number of heart attacks.


Subjects
Acute Coronary Syndrome, ST Elevation Myocardial Infarction, Humans, Acute Coronary Syndrome/drug therapy, Acute Coronary Syndrome/surgery, Aspirin/adverse effects, Clopidogrel/therapeutic use, Platelet Aggregation Inhibitors/adverse effects, Prasugrel Hydrochloride, Retrospective Studies, Ticagrelor, Cohort Studies
2.
Nat Med; 29(1): 219-225, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36658423

ABSTRACT

How the Coronavirus Disease 2019 (COVID-19) pandemic has affected prevention and management of cardiovascular disease (CVD) is not fully understood. In this study, we used medication data as a proxy for CVD management using routinely collected, de-identified, individual-level data comprising 1.32 billion records of community-dispensed CVD medications from England, Scotland and Wales between April 2018 and July 2021. Here we describe monthly counts of prevalent and incident medications dispensed, as well as percentage changes compared to the previous year, for several CVD-related indications, focusing on hypertension, hypercholesterolemia and diabetes. We observed a decline in the dispensing of antihypertensive medications between March 2020 and July 2021, with 491,306 fewer individuals initiating treatment than expected. This decline was predicted to result in 13,662 additional CVD events, including 2,281 cases of myocardial infarction and 3,474 cases of stroke, should individuals remain untreated over their lifecourse. Incident use of lipid-lowering medications decreased by 16,744 patients per month during the first half of 2021 as compared to 2019. By contrast, incident use of medications to treat type 2 diabetes mellitus, other than insulin, increased by approximately 623 patients per month for the same time period. In light of these results, methods to identify and treat individuals who have missed treatment for CVD risk factors and remain undiagnosed are urgently required to avoid large numbers of excess future CVD events, an indirect impact of the COVID-19 pandemic.


Subjects
COVID-19, Cardiovascular Diseases, Diabetes Mellitus, Type 2, Hypertension, Humans, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/prevention & control, Cardiovascular Diseases/diagnosis, Diabetes Mellitus, Type 2/complications, Diabetes Mellitus, Type 2/drug therapy, Diabetes Mellitus, Type 2/epidemiology, Pandemics/prevention & control, COVID-19/epidemiology, Hypertension/complications, Hypertension/drug therapy, Hypertension/epidemiology, Risk Factors
3.
Heart; 106(24): 1890-1897, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33020224

ABSTRACT

OBJECTIVE: To monitor hospital activity for presentation, diagnosis and treatment of cardiovascular diseases during the COVID-19 pandemic to inform on indirect effects. METHODS: Retrospective serial cross-sectional study in nine UK hospitals using hospital activity data from 28 October 2019 (pre-COVID-19) to 10 May 2020 (pre-easing of lockdown) and for the same weeks during 2018-2019. We analysed aggregate data for selected cardiovascular diseases before and during the epidemic. We produced an online visualisation tool to enable near real-time monitoring of trends. RESULTS: Across nine hospitals, total admissions and emergency department (ED) attendances decreased after lockdown (23 March 2020) by 57.9% (57.1%-58.6%) and 52.9% (52.2%-53.5%), respectively, compared with the previous year. Activity for cardiac, cerebrovascular and other vascular conditions started to decline 1-2 weeks before lockdown and fell by 31%-88% after lockdown, with the greatest reductions observed for coronary artery bypass grafts, carotid endarterectomy, aortic aneurysm repair and peripheral arterial disease procedures. Compared with the period before the first UK COVID-19 case (31 January 2020), activity declined across diseases and specialties between the first case and lockdown (total ED attendances relative reduction (RR) 0.94, 0.93-0.95; total hospital admissions RR 0.96, 0.95-0.97) and after lockdown (attendances RR 0.63, 0.62-0.64; admissions RR 0.59, 0.57-0.60). There was limited recovery towards usual levels of some activities from mid-April 2020. CONCLUSIONS: Substantial reductions in total and cardiovascular activities are likely to contribute to a major burden of indirect effects of the pandemic, suggesting they should be monitored and mitigated urgently.


Subjects
COVID-19, Cardiology Service, Hospital/trends, Cardiovascular Diseases/therapy, Delivery of Health Care, Integrated/trends, Health Services Needs and Demand/trends, Needs Assessment/trends, Cardiovascular Diseases/diagnosis, Cross-Sectional Studies, Emergency Service, Hospital/trends, Humans, Patient Admission/trends, Retrospective Studies, Time Factors, United Kingdom
4.
BMJ; 368: l6802, 2020 Jan 21.
Article in English | MEDLINE | ID: mdl-31964641

ABSTRACT

OBJECTIVES: To study the impact of blinding on estimated treatment effects, and their variation between trials; differentiating between blinding of patients, healthcare providers, and observers; detection bias and performance bias; and types of outcome (the MetaBLIND study). DESIGN: Meta-epidemiological study. DATA SOURCE: Cochrane Database of Systematic Reviews (2013-14). ELIGIBILITY CRITERIA FOR SELECTING STUDIES: Meta-analyses with both blinded and non-blinded trials on any topic. REVIEW METHODS: Blinding status was retrieved from trial publications and authors, and results retrieved automatically from the Cochrane Database of Systematic Reviews. Bayesian hierarchical models estimated the average ratio of odds ratios (ROR), and estimated the increases in heterogeneity between trials, for non-blinded trials (or of unclear status) versus blinded trials. Secondary analyses adjusted for adequacy of concealment of allocation, attrition, and trial size, and explored the association between outcome subjectivity (high, moderate, low) and average bias. An ROR lower than 1 indicated exaggerated effect estimates in trials without blinding. RESULTS: The study included 142 meta-analyses (1153 trials). The ROR for lack of blinding of patients was 0.91 (95% credible interval 0.61 to 1.34) in 18 meta-analyses with patient reported outcomes, and 0.98 (0.69 to 1.39) in 14 meta-analyses with outcomes reported by blinded observers. The ROR for lack of blinding of healthcare providers was 1.01 (0.84 to 1.19) in 29 meta-analyses with healthcare provider decision outcomes (eg, readmissions), and 0.97 (0.64 to 1.45) in 13 meta-analyses with outcomes reported by blinded patients or observers. The ROR for lack of blinding of observers was 1.01 (0.86 to 1.18) in 46 meta-analyses with subjective observer reported outcomes, with no clear impact of degree of subjectivity. 
Information was insufficient to determine whether lack of blinding was associated with increased heterogeneity between trials. The ROR for trials not reported as double blind versus those that were double blind was 1.02 (0.90 to 1.13) in 74 meta-analyses. CONCLUSION: No evidence was found for an average difference in estimated treatment effect between trials with and without blinded patients, healthcare providers, or outcome assessors. These results could reflect that blinding is less important than often believed or meta-epidemiological study limitations, such as residual confounding or imprecision. At this stage, replication of this study is suggested and blinding should remain a methodological safeguard in trials.
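The bias measure used here, the ratio of odds ratios (ROR = OR in non-blinded trials / OR in blinded trials, pooled on the log scale), can be illustrated with a minimal sketch; the numbers below are hypothetical, not taken from the study:

```python
import math

def ratio_of_odds_ratios(or_nonblinded, or_blinded):
    """ROR = OR(non-blinded) / OR(blinded); an ROR below 1 means trials
    without blinding reported more favourable (smaller) odds ratios."""
    return or_nonblinded / or_blinded

# Hypothetical meta-analysis: non-blinded trials pool to OR 0.60,
# blinded trials to OR 0.75 for the same outcome.
ror = ratio_of_odds_ratios(0.60, 0.75)
log_ror = math.log(ror)  # averaging across meta-analyses happens on this scale
print(round(ror, 2))  # 0.8 -> effect exaggerated by roughly 20% without blinding
```

In MetaBLIND the pooled RORs were all close to 1, which is why no average bias was detected.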


Subjects
Clinical Trials as Topic, Epidemiologic Research Design, Outcome Assessment, Health Care, Clinical Trials as Topic/methods, Clinical Trials as Topic/organization & administration, Clinical Trials as Topic/standards, Humans, Observer Variation, Outcome Assessment, Health Care/methods, Outcome Assessment, Health Care/statistics & numerical data, Research Design/standards
5.
MDM Policy Pract; 4(2): 2381468319866828, 2019.
Article in English | MEDLINE | ID: mdl-31453363

ABSTRACT

Objectives. Determine the optimal, licensed, first-line anticoagulant for prevention of ischemic stroke in patients with non-valvular atrial fibrillation (AF) in England and Wales from the UK National Health Service (NHS) perspective and estimate the value to decision making of further research. Methods. We developed a cost-effectiveness model to compare warfarin (international normalized ratio target range 2-3) with directly acting (or non-vitamin K antagonist) oral anticoagulants (DOACs) apixaban 5 mg, dabigatran 150 mg, edoxaban 60 mg, and rivaroxaban 20 mg, over 30 years post treatment initiation. In addition to death, the 17-state Markov model included the events stroke, bleed, myocardial infarction, and intracranial hemorrhage. Input parameters were informed by systematic literature reviews and network meta-analysis. Expected value of perfect information (EVPI) and expected value of partial perfect information (EVPPI) were estimated to provide an upper bound on the value of further research. Results. At a willingness-to-pay threshold of £20,000, all DOACs have positive expected incremental net benefit compared to warfarin, suggesting they are likely cost-effective. Apixaban has the highest expected incremental net benefit (£7533), followed by dabigatran (£6365), rivaroxaban (£5279), and edoxaban (£5212). There was considerable uncertainty as to the optimal DOAC, with the probability that apixaban has the highest net benefit only 60%. Total estimated population EVPI was £17.94 million (17.85 million, 18.03 million), with the relative effect between apixaban and dabigatran making the largest contribution, with an EVPPI of £7.95 million (7.66 million, 8.24 million). Conclusions. At a willingness-to-pay threshold of £20,000, all DOACs have higher expected net benefit than warfarin. Apixaban had the highest expected net benefit and the greatest probability of having the highest net benefit, but there is considerable uncertainty between the DOACs.
A head-to-head apixaban versus dabigatran trial may be of value.
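The quantities in this abstract follow standard decision-analytic definitions: incremental net monetary benefit is INB = λ·ΔQALYs − ΔCosts at willingness-to-pay λ, and the probability a drug is optimal is the share of probabilistic-sensitivity-analysis (PSA) draws in which it has the highest net benefit. A minimal sketch with hypothetical PSA draws (the means loosely echo the reported net benefits, but the distributions are invented for illustration):

```python
import random

def incremental_net_benefit(delta_qalys, delta_costs, wtp=20_000):
    """INB = wtp * delta_QALYs - delta_costs (GBP); positive favours the new option."""
    return wtp * delta_qalys - delta_costs

# E.g. a gain of 0.5 QALYs at an extra cost of GBP 2000:
inb = incremental_net_benefit(0.5, 2_000)  # 8000.0 -> cost-effective at the threshold

# Probability of being optimal, from hypothetical PSA draws of net benefit:
random.seed(1)
draws_apixaban = [random.gauss(7_500, 3_000) for _ in range(10_000)]
draws_dabigatran = [random.gauss(6_400, 3_000) for _ in range(10_000)]
p_best = sum(a > d for a, d in zip(draws_apixaban, draws_dabigatran)) / 10_000
```

With overlapping distributions like these, p_best lands near 0.6 even though apixaban's mean is clearly higher, which is the sense in which "considerable uncertainty" coexists with a clear ranking of expected net benefit.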

6.
J Clin Epidemiol; 111: 105-114, 2019 Jul.
Article in English | MEDLINE | ID: mdl-29432858

ABSTRACT

OBJECTIVE: To provide guidance on how systematic review authors, guideline developers, and health technology assessment practitioners should approach the use of the risk of bias in nonrandomized studies of interventions (ROBINS-I) tool as a part of GRADE's certainty rating process. STUDY DESIGN AND SETTING: The study design and setting comprised iterative discussions, testing in systematic reviews, and presentation at GRADE working group meetings with feedback from the GRADE working group. RESULTS: We describe where to start the initial assessment of a body of evidence with the use of ROBINS-I and where one would anticipate the final rating would end up. GRADE accounts for issues that mitigate concerns about confounding and selection bias through its upgrading domains: large effects, dose-effect relations, and situations in which plausible residual confounding or other biases would increase certainty. These domains will need to be considered in an assessment of a body of evidence when using ROBINS-I. CONCLUSIONS: The use of ROBINS-I in GRADE assessments may allow for a better comparison of evidence from randomized controlled trials (RCTs) and nonrandomized studies (NRSs) because they are placed on a common metric for risk of bias. Challenges remain, including appropriate presentation of evidence from RCTs and NRSs for decision-making and how to optimally integrate RCTs and NRSs in an evidence assessment.


Subjects
Bias, Evidence-Based Medicine/standards, Practice Guidelines as Topic, Clinical Trials as Topic/standards, Evidence-Based Medicine/methods, Humans, Observational Studies as Topic/standards, Practice Guidelines as Topic/standards, Risk Assessment, Risk Factors, Systematic Reviews as Topic, Uncertainty
7.
Arch Dis Child; 103(2): 155-164, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28931531

ABSTRACT

OBJECTIVE: Investigate the effectiveness and cost-effectiveness of the Lightning Process (LP) in addition to specialist medical care (SMC) compared with SMC alone, for children with chronic fatigue syndrome (CFS)/myalgic encephalitis (ME). DESIGN: Pragmatic randomised controlled open trial. Participants were randomly assigned to SMC or SMC+LP. Randomisation was minimised by age and gender. SETTING: Specialist paediatric CFS/ME service. PATIENTS: 12-18 year olds with mild/moderate CFS/ME. MAIN OUTCOME MEASURES: The primary outcome was the 36-Item Short-Form Health Survey Physical Function Subscale (SF-36-PFS) at 6 months. Secondary outcomes included pain, anxiety, depression, school attendance and cost-effectiveness from a health service perspective at 3, 6 and 12 months. RESULTS: We recruited 100 participants, of whom 51 were randomised to SMC+LP. Data from 81 participants were analysed at 6 months. Physical function (SF-36-PFS) was better in those allocated SMC+LP (adjusted difference in means 12.5 (95% CI 4.5 to 20.5), p=0.003) and this improved further at 12 months (15.1 (5.8 to 24.4), p=0.002). At 6 months, fatigue and anxiety were reduced, and at 12 months, fatigue, anxiety, depression and school attendance had improved in the SMC+LP arm. Results were similar following multiple imputation. SMC+LP was probably more cost-effective in the multiple imputation dataset (difference in means in net monetary benefit at 12 months £1474 (95% CI £111 to £2836), p=0.034) but not for complete cases. CONCLUSION: The LP is effective and is probably cost-effective when provided in addition to SMC for mild/moderately affected adolescents with CFS/ME. TRIAL REGISTRATION NUMBER: ISRCTN81456207.


Subjects
Fatigue Syndrome, Chronic/therapy, Psychotherapy, Group, Adolescent, Clinical Protocols, Combined Modality Therapy, Cost-Benefit Analysis, Fatigue Syndrome, Chronic/economics, Fatigue Syndrome, Chronic/psychology, Fatigue Syndrome, Chronic/rehabilitation, Female, Humans, Male, Psychotherapy, Group/economics, Treatment Outcome
8.
Int J Epidemiol; 47(1): 193-201, 2018 Feb 1.
Article in English | MEDLINE | ID: mdl-29025083

ABSTRACT

Background: Evidence of protection from childhood Bacillus Calmette-Guerin (BCG) against tuberculosis (TB) in adulthood, when most transmission occurs, is important for TB control and resource allocation. Methods: We conducted a population-based case-control study of protection by BCG given to children aged 12-13 years against tuberculosis occurring 10-29 years later. We recruited UK-born White subjects with tuberculosis and randomly sampled White community controls. Hazard ratios and 95% confidence intervals (CIs) were estimated using case-cohort Cox regression, adjusting for potential confounding factors, including socio-economic status, smoking, drug use, prison and homelessness. Vaccine effectiveness (VE = 1 - hazard ratio) was assessed at successive intervals more than 10 years following vaccination. Results: We obtained 677 cases and 1170 controls after a 65% response rate in both groups. Confounding by deprivation, education and lifestyle factors was slight 10-20 years after vaccination, and more evident after 20 years. VE was 51% (95% CI 21%, 69%) 10-15 years after vaccination and 57% (95% CI 33%, 72%) at 15-20 years. Subsequently, BCG protection appeared to wane; 20-25 years VE = 25% (CI -14%, 51%) and 25-29 years VE = 1% (CI -84%, 47%). Based on multiple imputation of missing data (in 17% of subjects), VE estimates for the same intervals after vaccination were similar [56% (CI 33, 72%), 57% (CI 36, 71%), 25% (-10, 48%), 21% (-39, 55%)]. Conclusions: School-aged BCG vaccination offered moderate protection against tuberculosis for at least 20 years, which is longer than previously thought. This has implications for assessing the cost-effectiveness of BCG vaccination and when evaluating new TB vaccines.
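The abstract defines vaccine effectiveness as VE = 1 − hazard ratio; as a percentage, the confidence limits of the hazard ratio transform and swap. A short sketch (the hazard ratio below is back-calculated from the reported VE purely for illustration):

```python
def vaccine_effectiveness(hr, hr_lo, hr_hi):
    """VE = 1 - HR, expressed as a percentage; the CI limits swap
    under the transform (upper HR limit -> lower VE limit)."""
    return (1 - hr) * 100, (1 - hr_hi) * 100, (1 - hr_lo) * 100

# HR 0.49 (95% CI 0.31 to 0.79), back-calculated here for illustration,
# reproduces the reported 10-15-year estimate of VE 51% (21%, 69%).
ve, lo, hi = vaccine_effectiveness(0.49, 0.31, 0.79)
print(f"VE {ve:.0f}% (95% CI {lo:.0f}%, {hi:.0f}%)")  # VE 51% (95% CI 21%, 69%)
```

A negative VE limit (as in the 20-25-year estimate) simply corresponds to a hazard ratio limit above 1.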


Subjects
BCG Vaccine/therapeutic use, Tuberculosis/prevention & control, Adolescent, Case-Control Studies, Child, Cohort Studies, Cost-Benefit Analysis, England/epidemiology, Female, Humans, Incidence, Male, Program Evaluation, Proportional Hazards Models, School Health Services, Time Factors, Tuberculosis/epidemiology
9.
BMJ; 359: j5058, 2017 Nov 28.
Article in English | MEDLINE | ID: mdl-29183961

ABSTRACT

Objective To compare the efficacy, safety, and cost effectiveness of direct acting oral anticoagulants (DOACs) for patients with atrial fibrillation. Design Systematic review, network meta-analysis, and cost effectiveness analysis. Data sources Medline, PreMedline, Embase, and The Cochrane Library. Eligibility criteria for selecting studies Published randomised trials evaluating the use of a DOAC, vitamin K antagonist, or antiplatelet drug for prevention of stroke in patients with atrial fibrillation. Results 23 randomised trials involving 94 656 patients were analysed: 13 compared a DOAC with warfarin dosed to achieve a target INR of 2.0-3.0. Apixaban 5 mg twice daily (odds ratio 0.79, 95% confidence interval 0.66 to 0.94), dabigatran 150 mg twice daily (0.65, 0.52 to 0.81), edoxaban 60 mg once daily (0.86, 0.74 to 1.01), and rivaroxaban 20 mg once daily (0.88, 0.74 to 1.03) reduced the risk of stroke or systemic embolism compared with warfarin. The risk of stroke or systemic embolism was higher with edoxaban 60 mg once daily (1.33, 1.02 to 1.75) and rivaroxaban 20 mg once daily (1.35, 1.03 to 1.78) than with dabigatran 150 mg twice daily. The risk of all-cause mortality was lower with all DOACs than with warfarin. Apixaban 5 mg twice daily (0.71, 0.61 to 0.81), dabigatran 110 mg twice daily (0.80, 0.69 to 0.93), edoxaban 30 mg once daily (0.46, 0.40 to 0.54), and edoxaban 60 mg once daily (0.78, 0.69 to 0.90) reduced the risk of major bleeding compared with warfarin. The risk of major bleeding was higher with dabigatran 150 mg twice daily than apixaban 5 mg twice daily (1.33, 1.09 to 1.62), rivaroxaban 20 mg twice daily than apixaban 5 mg twice daily (1.45, 1.19 to 1.78), and rivaroxaban 20 mg twice daily than edoxaban 60 mg once daily (1.31, 1.07 to 1.59). The risk of intracranial bleeding was substantially lower for most DOACs compared with warfarin, whereas the risk of gastrointestinal bleeding was higher with some DOACs than warfarin. 
Apixaban 5 mg twice daily was ranked the highest for most outcomes, and was cost effective compared with warfarin. Conclusions The network meta-analysis informs the choice of DOACs for prevention of stroke in patients with atrial fibrillation. Several DOACs are of net benefit compared with warfarin. A trial directly comparing DOACs would overcome the need for indirect comparisons to be made through network meta-analysis. Systematic review registration PROSPERO CRD42013005324.
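The "indirect comparisons" this network meta-analysis relies on combine two trial-based estimates through a shared comparator (warfarin) on the log-odds scale: log OR(A vs B) = log OR(A vs warfarin) − log OR(B vs warfarin). A minimal Bucher-style sketch using the point estimates quoted above (the full analysis also pools direct evidence and propagates uncertainty, which this ignores):

```python
import math

def indirect_or(or_a_vs_ref, or_b_vs_ref):
    """Bucher-style indirect comparison of A vs B through a shared
    reference arm (here warfarin), computed on the log-odds scale."""
    return math.exp(math.log(or_a_vs_ref) - math.log(or_b_vs_ref))

# Stroke/systemic embolism vs warfarin: edoxaban 60 mg OR 0.86,
# dabigatran 150 mg OR 0.65 (point estimates from the abstract).
print(round(indirect_or(0.86, 0.65), 2))  # 1.32, matching the reported 1.33 up to rounding
```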


Subjects
Anticoagulants/therapeutic use, Atrial Fibrillation/complications, Atrial Fibrillation/drug therapy, Stroke/prevention & control, Administration, Oral, Anticoagulants/administration & dosage, Anticoagulants/adverse effects, Anticoagulants/economics, Atrial Fibrillation/economics, Cost-Benefit Analysis, Drug Costs/statistics & numerical data, Hemorrhage/chemically induced, Humans, Quality-Adjusted Life Years, Randomized Controlled Trials as Topic, Stroke/economics, Stroke/etiology
10.
Health Technol Assess; 21(39): 1-54, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28738015

ABSTRACT

BACKGROUND: Until recently, evidence that protection from the bacillus Calmette-Guérin (BCG) vaccination lasted beyond 10 years was limited. In the past few years, studies in Brazil and the USA (in Native Americans) have suggested that protection from BCG vaccination against tuberculosis (TB) in childhood can last for several decades. The UK's universal school-age BCG vaccination programme was stopped in 2005 and the programme of selective vaccination of high-risk (usually ethnic minority) infants was enhanced. OBJECTIVES: To assess the duration of protection of infant and school-age BCG vaccination against TB in the UK. METHODS: Two case-control studies of the duration of protection of BCG vaccination were conducted, the first on minority ethnic groups who were eligible for infant BCG vaccination 0-19 years earlier and the second on white subjects eligible for school-age BCG vaccination 10-29 years earlier. TB cases were selected from notifications to the UK national Enhanced Tuberculosis Surveillance system from 2003 to 2012. Population-based control subjects, frequency matched for age, were recruited. BCG vaccination status was established from BCG records, scar reading and BCG history. Information on potential confounders was collected using computer-assisted interviews. Vaccine effectiveness was estimated as a function of time since vaccination, using a case-cohort analysis based on Cox regression. RESULTS: In the infant BCG study, vaccination status was determined using vaccination records as recall was poor and concordance between records and scar reading was limited. A protective effect was seen up to 10 years following infant vaccination [< 5 years since vaccination: vaccine effectiveness (VE) 66%, 95% confidence interval (CI) 17% to 86%; 5-10 years since vaccination: VE 75%, 95% CI 43% to 89%], but there was weak evidence of an effect 10-15 years after vaccination (VE 36%, 95% CI negative to 77%; p = 0.396). 
The analyses of the protective effect of infant BCG vaccination were adjusted for confounders, including birth cohort and ethnicity. For school-aged BCG vaccination, VE was 51% (95% CI 21% to 69%) 10-15 years after vaccination and 57% (95% CI 33% to 72%) 15-20 years after vaccination, beyond which time protection appeared to wane. Ascertainment of vaccination status was based on self-reported history and scar reading. LIMITATIONS: The difficulty in examining vaccination sites in older women in the high-risk minority ethnic study population and the sparsity of vaccine record data in the later time periods precluded robust assessment of protection from infant BCG vaccination > 10 years after vaccination. CONCLUSIONS: Infant BCG vaccination in a population at high risk for TB was shown to provide protection for at least 10 years, whereas in the white population school-age vaccination was shown to provide protection for at least 20 years. This evidence may inform TB vaccination programmes (e.g. the timing of administration of improved TB vaccines, if they become available) and cost-effectiveness studies. Methods to deal with missing record data in the infant study could be explored, including the use of scar reading. FUNDING: The National Institute for Health Research Health Technology Assessment programme. During the conduct of the study, Jonathan Sterne, Ibrahim Abubakar and Laura C Rodrigues received other funding from NIHR; Ibrahim Abubakar and Laura C Rodrigues have also received funding from the Medical Research Council. Punam Mangtani received funding from the Biotechnology and Biological Sciences Research Council.


Subjects
BCG Vaccine/administration & dosage, Treatment Outcome, Tuberculosis/prevention & control, Vaccination/statistics & numerical data, Adolescent, Adult, BCG Vaccine/economics, Child, Child, Preschool, Cohort Studies, Ethnicity/statistics & numerical data, Female, Humans, Infant, Male, Minority Groups/statistics & numerical data, Risk Factors, Self Report, Time Factors, United Kingdom, White People/statistics & numerical data, Young Adult
11.
Health Technol Assess; 21(29): 1-236, 2017 May.
Article in English | MEDLINE | ID: mdl-28629510

ABSTRACT

BACKGROUND: Atrial fibrillation (AF) is a common cardiac arrhythmia that increases the risk of thromboembolic events. Anticoagulation therapy to prevent AF-related stroke has been shown to be cost-effective. A national screening programme for AF may prevent AF-related events, but would involve a substantial investment of NHS resources. OBJECTIVES: To conduct a systematic review of the diagnostic test accuracy (DTA) of screening tests for AF, update a systematic review of comparative studies evaluating screening strategies for AF, develop an economic model to compare the cost-effectiveness of different screening strategies and review observational studies of AF screening to provide inputs to the model. DESIGN: Systematic review, meta-analysis and cost-effectiveness analysis. SETTING: Primary care. PARTICIPANTS: Adults. INTERVENTION: Screening strategies, defined by screening test, age at initial and final screens, screening interval and format of screening {systematic opportunistic screening [individuals offered screening if they consult with their general practitioner (GP)] or systematic population screening (when all eligible individuals are invited to screening)}. MAIN OUTCOME MEASURES: Sensitivity, specificity and diagnostic odds ratios; the odds ratio of detecting new AF cases compared with no screening; and the mean incremental net benefit compared with no screening. REVIEW METHODS: Two reviewers screened the search results, extracted data and assessed the risk of bias. A DTA meta-analysis was performed, and a decision tree and Markov model was used to evaluate the cost-effectiveness of the screening strategies. RESULTS: Diagnostic test accuracy depended on the screening test and how it was interpreted. In general, the screening tests identified in our review had high sensitivity (> 0.9). 
Systematic population and systematic opportunistic screening strategies were found to be similarly effective, with an estimated 170 individuals needed to be screened to detect one additional AF case compared with no screening. Systematic opportunistic screening was more likely to be cost-effective than systematic population screening, as long as the uptake of opportunistic screening observed in randomised controlled trials translates to practice. Modified blood pressure monitors, photoplethysmography or nurse pulse palpation were more likely to be cost-effective than other screening tests. A screening strategy with an initial screening age of 65 years and repeated screens every 5 years until age 80 years was likely to be cost-effective, provided that compliance with treatment does not decline with increasing age. CONCLUSIONS: A national screening programme for AF is likely to represent a cost-effective use of resources. Systematic opportunistic screening is more likely to be cost-effective than systematic population screening. Nurse pulse palpation or modified blood pressure monitors would be appropriate screening tests, with confirmation by diagnostic 12-lead electrocardiography interpreted by a trained GP, with referral to a specialist in the case of an unclear diagnosis. Implementation strategies to operationalise uptake of systematic opportunistic screening in primary care should accompany any screening recommendations. LIMITATIONS: Many inputs for the economic model relied on a single trial [the Screening for Atrial Fibrillation in the Elderly (SAFE) study] and DTA results were based on a few studies at high risk of bias/of low applicability. FUTURE WORK: Comparative studies measuring long-term outcomes of screening strategies and DTA studies for new, emerging technologies and to replicate the results for photoplethysmography and GP interpretation of 12-lead electrocardiography in a screening population. 
STUDY REGISTRATION: This study is registered as PROSPERO CRD42014013739. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
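The accuracy and yield figures quoted in this abstract combine by standard formulas; the sketch below shows the arithmetic. The sensitivity/specificity pair is invented for illustration; only the 170 people-per-detected-case figure comes from the abstract.

```python
# Diagnostic odds ratio: the single summary a DTA meta-analysis pools
# from each study's sensitivity/specificity pair.
def diagnostic_odds_ratio(sensitivity, specificity):
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

# A hypothetical test consistent with the review's "sensitivity > 0.9":
dor = diagnostic_odds_ratio(0.9, 0.9)  # 81.0

# Number needed to screen: 170 screened per additional AF case detected
# implies this absolute difference in detection per person screened.
extra_cases_per_person = 1 / 170
```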


Subjects
Atrial Fibrillation/diagnosis , Mass Screening/economics , Mass Screening/methods , Primary Health Care/economics , Primary Health Care/methods , Aged , Aged, 80 and over , Blood Pressure , Cost-Benefit Analysis , Electrocardiography , Female , Humans , Male , Mass Screening/standards , Models, Econometric , Patient Acceptance of Health Care , Pulse , Quality-Adjusted Life Years , Randomized Controlled Trials as Topic , Sensitivity and Specificity
12.
Lancet HIV ; 4(6): e251-e259, 2017 06.
Article in English | MEDLINE | ID: mdl-28411091

ABSTRACT

BACKGROUND: Clinical guidelines vary with respect to the optimal monitoring frequency of HIV-positive individuals. We compared dynamic monitoring strategies based on time-varying CD4 cell counts in virologically suppressed HIV-positive individuals. METHODS: In this observational study, we used data from prospective studies of HIV-positive individuals in Europe (France, Greece, the Netherlands, Spain, Switzerland, and the UK) and North and South America (Brazil, Canada, and the USA) in The HIV-CAUSAL Collaboration and The Centers for AIDS Research Network of Integrated Clinical Systems. We compared three monitoring strategies that differ in the threshold used to measure CD4 cell count and HIV RNA viral load every 3-6 months (when below the threshold) or every 9-12 months (when above the threshold). The strategies were defined by the threshold CD4 counts of 200 cells per µL, 350 cells per µL, and 500 cells per µL. Using inverse probability weighting to adjust for baseline and time-varying confounders, we estimated hazard ratios (HRs) of death and of AIDS-defining illness or death, risk ratios of virological failure, and mean differences in CD4 cell count. FINDINGS: 47 635 individuals initiated an antiretroviral therapy regimen between Jan 1, 2000, and Jan 9, 2015, and met the eligibility criteria for inclusion in our study. During follow-up, CD4 cell count was measured on average every 4·0 months and viral load every 3·8 months. 464 individuals died (107 in threshold 200 strategy, 157 in threshold 350, and 200 in threshold 500) and 1091 had AIDS-defining illnesses or died (267 in threshold 200 strategy, 365 in threshold 350, and 459 in threshold 500). Compared with threshold 500, the mortality HR was 1·05 (95% CI 0·86-1·29) for threshold 200 and 1·02 (0·91-1·14) for threshold 350. Corresponding estimates for death or AIDS-defining illness were 1·08 (0·95-1·22) for threshold 200 and 1·03 (0·96-1·12) for threshold 350. 
Compared with threshold 500, the 24 month risk ratios of virological failure (viral load more than 200 copies per mL) were 2·01 (1·17-3·43) for threshold 200 and 1·24 (0·89-1·73) for threshold 350, and 24 month mean CD4 cell count differences were 0·4 (-25·5 to 26·3) cells per µL for threshold 200 and -3·5 (-16·0 to 8·9) cells per µL for threshold 350. INTERPRETATION: Decreasing monitoring to annually when CD4 count is higher than 200 cells per µL compared with higher than 500 cells per µL does not worsen the short-term clinical and immunological outcomes of virally suppressed HIV-positive individuals. However, more frequent virological monitoring might be necessary to reduce the risk of virological failure. Further follow-up studies are needed to establish the long-term safety of these strategies. FUNDING: National Institutes of Health.
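Inverse probability weighting, the adjustment method this study used, reweights each person by the inverse probability of the monitoring strategy they actually followed, so the weighted sample mimics a randomised comparison. Below is a deliberately static toy sketch: the real analysis used time-varying confounders and weights, and every number here is invented.

```python
# Toy records: (less_frequent_monitoring, high_cd4, died). high_cd4 is a
# single invented confounder standing in for the real covariate history.
records = [
    (1, 1, 0), (1, 1, 0), (1, 0, 1),
    (0, 1, 0), (0, 0, 1), (0, 0, 0),
]

def propensity(high_cd4):
    # P(less_frequent_monitoring = 1 | confounder), estimated by stratum.
    stratum = [r for r in records if r[1] == high_cd4]
    return sum(r[0] for r in stratum) / len(stratum)

p_marginal = sum(r[0] for r in records) / len(records)

# Stabilised weight: marginal probability of the strategy received over
# its confounder-conditional probability.
weights = []
for strategy, high_cd4, _ in records:
    p_cond = propensity(high_cd4) if strategy else 1 - propensity(high_cd4)
    p_marg = p_marginal if strategy else 1 - p_marginal
    weights.append(p_marg / p_cond)
```

Stabilised weights average to 1, which is a quick sanity check on any IPW implementation.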


Subjects
Drug Monitoring/methods , HIV Infections/drug therapy , Adolescent , Adult , Anti-HIV Agents/economics , Anti-HIV Agents/therapeutic use , CD4 Lymphocyte Count , Developed Countries , Drug Monitoring/economics , Europe , Female , HIV Infections/blood , HIV Infections/economics , HIV Infections/virology , HIV-1/drug effects , HIV-1/genetics , HIV-1/isolation & purification , HIV-1/physiology , Humans , Male , Middle Aged , Prospective Studies , Viral Load , Young Adult
13.
Value Health ; 20(4): 556-566, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28407997

ABSTRACT

OBJECTIVE: To estimate the cost-effectiveness of a two-step clinical rule using symptoms, signs and dipstick testing to guide the diagnosis and antibiotic treatment of urinary tract infection (UTI) in acutely unwell young children presenting to primary care. METHODS: Decision analytic model synthesising data from a multicentre, prospective cohort study (DUTY) and the wider literature to estimate the short-term and lifetime costs and healthcare outcomes (symptomatic days, recurrent UTI, quality-adjusted life-years) of eight diagnostic strategies. We compared GP clinical judgement with three strategies based on a 'coefficient score' combining seven symptoms and signs independently associated with UTI, and four strategies based on weighted scores according to the presence/absence of five symptoms and signs. We compared dipstick testing versus laboratory culture in children at intermediate risk of UTI. RESULTS: Sampling, culture and antibiotic costs were lowest in high-specificity DUTY strategies (£1.22 and £1.08) compared with clinical judgement (£1.99). These strategies also approximately halved urine sampling (4.8% versus 9.1% for clinical judgement) without reducing sensitivity (58.2% versus 56.4%). Outcomes were very similar across all diagnostic strategies. High-specificity DUTY strategies were more cost-effective than clinical judgement in the short term (iNMB = £0.78 and £0.84) and long term (iNMB = £2.31 and £2.50). Dipstick tests had poorer cost-effectiveness than laboratory culture in children at intermediate risk of UTI (iNMB = -£1.41). CONCLUSIONS: Compared with GPs' clinical judgement, high-specificity clinical rules from the DUTY study could substantially reduce urine sampling, achieving lower costs and equivalent patient outcomes. Dipstick testing children for UTI is not cost-effective.
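The iNMB values quoted are incremental net monetary benefits: the QALY difference valued at a willingness-to-pay threshold, minus the incremental cost. A minimal sketch follows; the £20,000 threshold and the inputs are assumptions for illustration, not the DUTY estimates.

```python
def incremental_net_monetary_benefit(delta_qalys, delta_cost_gbp,
                                     wtp_per_qaly_gbp=20_000):
    # iNMB = lambda * dQALYs - dCost; positive values favour the
    # candidate strategy over the comparator.
    return wtp_per_qaly_gbp * delta_qalys - delta_cost_gbp

# A rule that is 77p cheaper per child with equivalent outcomes:
inmb = incremental_net_monetary_benefit(0.0, -0.77)  # 0.77
```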


Subjects
Bacteriological Techniques/economics , Decision Support Techniques , Health Care Costs , Reagent Strips/economics , Urinalysis/economics , Urinary Tract Infections/diagnosis , Age Factors , Anti-Bacterial Agents/economics , Anti-Bacterial Agents/therapeutic use , Child, Preschool , Cost-Benefit Analysis , Decision Trees , Drug Costs , Humans , Judgment , Predictive Value of Tests , Prevalence , Primary Health Care/economics , Prospective Studies , Quality-Adjusted Life Years , Recurrence , Remission Induction , Risk Factors , Time Factors , Treatment Outcome , United Kingdom/epidemiology , Unnecessary Procedures/economics , Urinalysis/instrumentation , Urinary Tract Infections/drug therapy , Urinary Tract Infections/economics , Urinary Tract Infections/epidemiology , Urine/microbiology
14.
Health Technol Assess ; 21(9): 1-386, 2017 03.
Article in English | MEDLINE | ID: mdl-28279251

ABSTRACT

BACKGROUND: Warfarin is effective for stroke prevention in atrial fibrillation (AF), but anticoagulation is underused in clinical care. The risk of venous thromboembolic disease during hospitalisation can be reduced by low-molecular-weight heparin (LMWH): warfarin is the most frequently prescribed anticoagulant for treatment and secondary prevention of venous thromboembolism (VTE). Warfarin-related bleeding is a major reason for hospitalisation for adverse drug effects. Warfarin is cheap but therapeutic monitoring increases treatment costs. Novel oral anticoagulants (NOACs) have more rapid onset and offset of action than warfarin, and more predictable dosing requirements. OBJECTIVE: To determine the best oral anticoagulant/s for prevention of stroke in AF and for primary prevention, treatment and secondary prevention of VTE. DESIGN: Four systematic reviews, network meta-analyses (NMAs) and cost-effectiveness analyses (CEAs) of randomised controlled trials. SETTING: Hospital (VTE primary prevention and acute treatment) and primary care/anticoagulation clinics (AF and VTE secondary prevention). PARTICIPANTS: Patients eligible for anticoagulation with warfarin (stroke prevention in AF, acute treatment or secondary prevention of VTE) or LMWH (primary prevention of VTE). INTERVENTIONS: NOACs, warfarin and LMWH, together with other interventions (antiplatelet therapy, placebo) evaluated in the evidence network. MAIN OUTCOME MEASURES: Efficacy Stroke, symptomatic VTE, symptomatic deep-vein thrombosis and symptomatic pulmonary embolism. Safety Major bleeding, clinically relevant bleeding and intracranial haemorrhage. We also considered myocardial infarction and all-cause mortality and evaluated cost-effectiveness. DATA SOURCES: MEDLINE and PREMEDLINE In-Process & Other Non-Indexed Citations, EMBASE and The Cochrane Library, reference lists of published NMAs and trial registries. 
We searched MEDLINE and PREMEDLINE In-Process & Other Non-Indexed Citations, EMBASE and The Cochrane Library. The stroke prevention in AF review search was run on 12 March 2014 and updated on 15 September 2014, and covered the period 2010 to September 2014. The search for the three reviews in VTE was run on 19 March 2014, updated on 15 September 2014, and covered the period 2008 to September 2014. REVIEW METHODS: Two reviewers screened search results, extracted and checked data, and assessed risk of bias. For each outcome we conducted standard meta-analysis and NMA. We evaluated cost-effectiveness using discrete-time Markov models. RESULTS: Apixaban (Eliquis®, Bristol-Myers Squibb, USA; Pfizer, USA) [5 mg bd (twice daily)] was ranked among the best interventions for stroke prevention in AF, and had the highest expected net benefit. Edoxaban (Lixiana®, Daiichi Sankyo, Japan) [60 mg od (once daily)] was ranked second for major bleeding and all-cause mortality. Neither the clinical effectiveness analysis nor the CEA provided strong evidence that NOACs should replace postoperative LMWH in primary prevention of VTE. For acute treatment and secondary prevention of VTE, we found little evidence that NOACs offer an efficacy advantage over warfarin, but the risk of bleeding complications was lower for some NOACs than for warfarin. For a willingness-to-pay threshold of > £5000, apixaban (5 mg bd) had the highest expected net benefit for acute treatment of VTE. Aspirin or no pharmacotherapy was likely to be the most cost-effective intervention for secondary prevention of VTE: our results suggest that it is not cost-effective to prescribe NOACs or warfarin for this indication. CONCLUSIONS: NOACs have advantages over warfarin in patients with AF, but we found no strong evidence that they should replace warfarin or LMWH in primary prevention, treatment or secondary prevention of VTE. 
LIMITATIONS: These relate mainly to shortfalls in the primary data: in particular, there were no head-to-head comparisons between different NOAC drugs. FUTURE WORK: Calculating the expected value of sample information to clarify whether or not it would be justifiable to fund one or more head-to-head trials. STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005324, CRD42013005331 and CRD42013005330. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
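The review's cost-effectiveness results came from discrete-time Markov models. Below is a minimal cohort-model sketch of that structure; the states, transition probabilities, costs and utilities are invented placeholders, not the review's inputs.

```python
# Three-state cohort model: "well" -> "stroke" -> "dead" (absorbing).
P = [
    [0.97, 0.02, 0.01],  # from well (rows sum to 1)
    [0.00, 0.90, 0.10],  # from stroke
    [0.00, 0.00, 1.00],  # from dead
]
cost = [100.0, 5000.0, 0.0]  # cost per annual cycle in each state (GBP)
qaly = [0.85, 0.60, 0.0]     # utility per annual cycle in each state

dist = [1.0, 0.0, 0.0]       # the whole cohort starts in "well"
total_cost = 0.0
total_qalys = 0.0
for _ in range(20):          # 20 one-year cycles
    total_cost += sum(d * c for d, c in zip(dist, cost))
    total_qalys += sum(d * q for d, q in zip(dist, qaly))
    # Advance the cohort one cycle: new distribution = dist x P.
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
```

Running one such model per treatment and differencing the (discounted) totals yields the incremental costs and QALYs that feed the expected-net-benefit ranking.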


Subjects
Anticoagulants/administration & dosage , Atrial Fibrillation/diagnosis , Mass Screening/economics , Mass Screening/methods , Stroke/prevention & control , Venous Thromboembolism/prevention & control , Age Distribution , Aged , Aged, 80 and over , Blood Pressure , Cost-Benefit Analysis , Electrocardiography , Female , Humans , Male , Markov Chains , Mass Screening/standards , Models, Econometric , Network Meta-Analysis , Observational Studies as Topic , Primary Health Care , Pulse , Secondary Prevention , Sensitivity and Specificity , State Medicine/economics , United Kingdom
15.
Health Technol Assess ; 20(51): 1-294, 2016 07.
Article in English | MEDLINE | ID: mdl-27401902

ABSTRACT

BACKGROUND: It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. OBJECTIVES: To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. DESIGN: Multicentre, prospective diagnostic cohort study. SETTING AND PARTICIPANTS: Children < 5 years old presenting to primary care with an acute illness and/or new urinary symptoms. METHODS: One hundred and seven clinical characteristics (index tests) were recorded from the child's past medical history, symptoms, physical examination signs and urine dipstick test. Prior to dipstick results clinician opinion of UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10(5) colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using area under receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinician diagnosis' AUROC. Decision-analytic models were used to identify optimal urine sampling strategy compared with 'clinical judgement'. RESULTS: A total of 7163 children were recruited, of whom 50% were female and 49% were < 2 years old. 
Culture results were available for 5017 (70%); 2740 children provided clean-catch samples, 94% of whom were ≥ 2 years old, with 2.2% meeting the UTI definition. Among these, 'clinical diagnosis' correctly identified 46.6% of positive cultures, with 94.7% specificity and an AUROC of 0.77 (95% CI 0.71 to 0.83). Four symptoms, three signs and three dipstick results were independently associated with UTI, with an AUROC (95% CI; bootstrap-validated AUROC) of 0.89 (0.85 to 0.95; validated 0.88) for symptoms and signs, increasing to 0.93 (0.90 to 0.97; validated 0.90) with dipstick results. Nappy pad samples were provided by the other 2277 children, of whom 82% were < 2 years old and 1.3% met the UTI definition. 'Clinical diagnosis' correctly identified 13.3% of positive cultures, with 98.5% specificity and an AUROC of 0.63 (95% CI 0.53 to 0.72). Four symptoms and two dipstick results were independently associated with UTI, with an AUROC of 0.81 (0.72 to 0.90; validated 0.78) for symptoms, increasing to 0.87 (0.80 to 0.94; validated 0.82) with the dipstick findings. A high-specificity threshold for the clean-catch model was more accurate and less costly than, and as effective as, clinical judgement. The additional diagnostic utility of dipstick testing was offset by its costs. The cost-effectiveness of the nappy pad model was not clear-cut. CONCLUSIONS: Clinicians should prioritise the use of clean-catch sampling, as symptoms and signs can cost-effectively improve the identification of UTI in young children where clean catch is possible. Dipstick testing can improve targeting of antibiotic treatment, but at a higher cost than waiting for a laboratory result. Future research is needed to distinguish pathogens from contaminants, assess the impact of the clean-catch algorithm on patient outcomes, and evaluate the cost-effectiveness of presumptive versus dipstick versus laboratory-guided antibiotic treatment. 
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
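The bootstrap-validated AUROCs this abstract reports can be computed mechanically: AUROC is the probability that a randomly chosen positive scores above a randomly chosen negative, and resampling the data gives the validation distribution. A self-contained sketch with invented scores:

```python
import random

def auroc(scores, labels):
    # Mann-Whitney form: fraction of positive/negative pairs ranked
    # correctly, counting ties as half.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # model risk scores (invented)
labels = [1, 1, 0, 1, 0, 0]              # 1 = culture-positive UTI
point_estimate = auroc(scores, labels)

random.seed(0)
bootstrap_aurocs = []
for _ in range(200):
    idx = [random.randrange(len(scores)) for _ in range(len(scores))]
    ys = [labels[i] for i in idx]
    if 0 < sum(ys) < len(ys):  # resample must contain both classes
        bootstrap_aurocs.append(auroc([scores[i] for i in idx], ys))
```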


Subjects
Algorithms , Primary Health Care/methods , Urinary Tract Infections/diagnosis , Urine Specimen Collection/economics , Urine Specimen Collection/methods , Child, Preschool , Cost-Benefit Analysis , Female , Humans , Infant , Male , Prospective Studies , ROC Curve , Sensitivity and Specificity , Single-Blind Method , Urine Specimen Collection/standards
16.
Am J Epidemiol ; 182(9): 763-74, 2015 Nov 01.
Article in English | MEDLINE | ID: mdl-26443417

ABSTRACT

Identifying preventable exposures that lead to asthma and associated allergies has proved challenging, partly because of the difficulty in differentiating phenotypes that define homogeneous disease groups. Understanding the socioeconomic patterns of disease phenotypes can help distinguish which exposures are preventable. In the present study, we identified disease phenotypes that are susceptible to socioeconomic variation, and we determined which life-course exposures were associated with these inequalities in a contemporary birth cohort. Participants included children from the Avon Longitudinal Study of Parents and Children, a population-based birth cohort in England, who were born in 1991 and 1992 and attended the clinic at 7-8 years of age (n = 6,378). Disease phenotypes included asthma, atopy, wheezing, altered lung function, and bronchial reactivity phenotypes. Combining atopy with a diagnosis of asthma from a doctor captured the greatest socioeconomic variation, including opposing patterns between phenotype groups: Children with a low socioeconomic position (SEP) had more asthma alone (adjusted multinomial odds ratio = 1.50, 95% confidence interval: 1.21, 1.87) but less atopy alone (adjusted multinomial odds ratio = 0.80, 95% confidence interval: 0.66, 0.98) than did children with high SEP. Adjustment for maternal exposure to tobacco smoke during pregnancy and childhood exposure to tobacco smoke reduced the odds of asthma alone in children with a low SEP. Current inequalities among children who have asthma but not atopy can be prevented by eliminating exposure to tobacco smoke. Other disease phenotypes were not socially patterned or had SEP patterns that were not related to smoke exposure.


Subjects
Asthma/epidemiology , Hypersensitivity/epidemiology , Respiratory Sounds , Social Class , Asthma/physiopathology , Child , England/epidemiology , Female , Humans , Hypersensitivity/physiopathology , Longitudinal Studies , Male , Phenotype , Respiratory Function Tests , Respiratory Sounds/physiopathology , Risk Factors
17.
Trials ; 14: 444, 2013 Dec 26.
Article in English | MEDLINE | ID: mdl-24370208

ABSTRACT

BACKGROUND: Chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME) is a relatively common and potentially serious condition with a limited evidence base for treatment. Specialist treatment for paediatric CFS/ME uses interventions recommended by the National Institute for Health and Clinical Excellence (NICE), including cognitive behavioural therapy, graded exercise therapy and activity management. The Lightning Process (LP) is a trademarked intervention derived from osteopathy, life coaching and neuro-linguistic programming, delivered over three consecutive days as group sessions. Although over 250 children with CFS/ME attend LP courses each year, there are no reported studies of its effectiveness or cost-effectiveness. METHODS: This pragmatic randomised controlled trial is set within a specialist paediatric CFS/ME service in the south west of England. Children and young people with CFS/ME (n = 80 to 112), aged 12 to 18 years, will be randomised to specialist medical care (SMC) or SMC plus the LP. The primary outcomes will be physical function (SF-36 physical function short form) and fatigue (Chalder Fatigue Scale). DISCUSSION: This study will tell us whether adding the LP to SMC is effective and cost-effective compared with SMC alone. It will also provide detailed information on the implementation of the LP and SMC. TRIAL REGISTRATION: Current Controlled Trials ISRCTN81456207 (31 July 2012).


Subjects
Fatigue Syndrome, Chronic/therapy , Psychotherapy, Group , Research Design , Adolescent , Child , Clinical Protocols , Combined Modality Therapy , Cost-Benefit Analysis , England , Fatigue Syndrome, Chronic/diagnosis , Fatigue Syndrome, Chronic/economics , Fatigue Syndrome, Chronic/physiopathology , Fatigue Syndrome, Chronic/psychology , Female , Health Care Costs , Humans , Male , Psychotherapy, Group/economics , Surveys and Questionnaires , Time Factors , Treatment Outcome
18.
Br J Gen Pract ; 63(609): e256-66, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23540482

ABSTRACT

BACKGROUND: Laboratory tests are extensively used for diagnosis and monitoring in UK primary care. Test usage by GPs, and associated costs, have grown substantially in recent years. AIM: This study aimed to quantify temporal growth and geographic variation in utilisation of laboratory tests. DESIGN AND SETTING: Retrospective cohort study using data from general practices in the UK. METHOD: Data from the General Practice Research Database, including patient demographics, clinical details, and laboratory test results, were used to estimate rates of change in utilisation between 2005 and 2009, and identify tests with greatest inter-regional variation, by fitting random-effects Poisson regression models. The study also investigated indications for test requests, using diagnoses and symptoms recorded in the 2 weeks before each test. RESULTS: Around 660 000 tests were recorded in 230 000 person-years of follow-up. Test use increased by 24.2%, from 23 872 to 29 644 tests per 10 000 person-years, between 2005 and 2009. Tests with the largest increases were faecal occult blood (121%) and C-reactive protein (86%). There was substantial geographic variation in test utilisation; GPs in some regions requested tests such as plasma viscosity and cardiac enzymes at a rate more than three times the national average. CONCLUSION: Increases in the use of laboratory tests have substantial resource implications. Rapid increases in particular tests may be supported by evidence-based guidelines, but these are often vague about who should be tested, how often, and for how long. Substantial regional variation in test use may reflect uncertainty about diagnostic accuracy and appropriate indications for the laboratory test. There is a need for further research on the diagnostic accuracy, therapeutic impact, and effect on patient health outcomes of the most rapidly increasing and geographically variable tests.
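The headline growth figure follows directly from the two quoted rates, and maps onto the log-rate scale of the Poisson regression the study fitted; a quick check:

```python
import math

rate_2005 = 23_872  # tests per 10,000 person-years (from the abstract)
rate_2009 = 29_644

overall_growth = rate_2009 / rate_2005 - 1              # ~0.242, i.e. 24.2%
annual_growth = (rate_2009 / rate_2005) ** (1 / 4) - 1  # ~5.6% per year

# Equivalent constant trend on the log-rate (Poisson regression) scale:
beta_per_year = math.log(rate_2009 / rate_2005) / 4     # ~0.054 per year
```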


Subjects
Clinical Laboratory Techniques/economics , General Practice/economics , Hematologic Tests/economics , Mass Screening/economics , Regional Health Planning/economics , State Medicine/economics , Analysis of Variance , C-Reactive Protein/metabolism , Clinical Laboratory Techniques/statistics & numerical data , Clinical Laboratory Techniques/trends , Cost-Benefit Analysis , England/epidemiology , Female , General Practice/trends , Hematologic Tests/statistics & numerical data , Hematologic Tests/trends , Humans , Male , Occult Blood , Primary Health Care , Regional Health Planning/statistics & numerical data , Regional Health Planning/trends , Research/economics , Retrospective Studies
19.
AIDS ; 27(10): 1641-55, 2013 Jun 19.
Article in English | MEDLINE | ID: mdl-23449349

ABSTRACT

OBJECTIVE: To increase equitable access to life insurance for HIV-positive individuals by identifying subgroups with lower relative mortality. DESIGN: Collaborative analysis of cohort studies. METHODS: We estimated relative mortality from 6 months after starting antiretroviral therapy (ART), compared with the insured population in each country, among adult patients from European cohorts participating in the ART Cohort Collaboration (ART-CC) who were not infected via injection drug use, had not tested positive for hepatitis C, and started triple ART between 1996 and 2008. We used Poisson models for mortality, with the expected number of deaths according to age, sex and country specified as an offset. RESULTS: There were 1236 deaths recorded among 34,680 patients followed for 174,906 person-years. Relative mortality was lower in patients with higher CD4 cell count and lower HIV-1 RNA 6 months after starting ART, without prior AIDS, who were older, and who started ART after 2000. Compared with insured HIV-negative lives, the estimated relative mortality of patients aged 20-39 from France, Italy, the United Kingdom, Spain and Switzerland who started ART after 2000, had a 6-month CD4 cell count of at least 350 cells/µl and HIV-1 RNA less than 10(4) copies/ml, and had no prior AIDS was 459%. The proportion of exposure time with relative mortality below 300, 400, 500 and 600% was 28, 43, 61 and 64%, respectively, suggesting that more than 50% of patients (those with lower relative mortality) could be insurable. CONCLUSION: The continuing long-term effectiveness of ART implies that life insurance with sufficiently long duration to cover a mortgage is feasible for many HIV-positive people successfully treated with ART for more than 6 months.
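Relative mortality here is observed deaths divided by the deaths expected in the insured reference population, which is exactly what a Poisson model with the log of expected deaths as an offset estimates. A sketch follows; the observed/expected counts are invented to reproduce the quoted 459%.

```python
def relative_mortality(observed_deaths, expected_deaths):
    # Ratio of observed to expected deaths, expressed as a multiple of
    # insured-population mortality (1.0 = insured-life mortality).
    return observed_deaths / expected_deaths

# Hypothetical subgroup: 18 deaths observed where matched insured lives
# (same age, sex and country) would be expected to suffer 3.92, giving a
# relative mortality of roughly 459%.
rm = relative_mortality(18, 3.92)
```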


Subjects
HIV Infections/mortality , Adolescent , Adult , Anti-HIV Agents/therapeutic use , Cohort Studies , Drug Therapy, Combination , Female , France/epidemiology , HIV Infections/drug therapy , Humans , Insurance, Health/statistics & numerical data , Italy/epidemiology , Life Tables , Male , Middle Aged , Risk Factors , Spain/epidemiology , Switzerland/epidemiology , United Kingdom/epidemiology , Young Adult