Results 1 - 20 of 22
1.
Pharmacoecon Open; 8(3): 359-371, 2024 May.
Article in English | MEDLINE | ID: mdl-38393659

ABSTRACT

BACKGROUND: Long-term conditions (LTCs) are major public health problems with a considerable health-related and economic burden. Modelling is key to assessing the costs and benefits of different disease management strategies, including routine monitoring, for hypertension, type 2 diabetes mellitus (T2DM) and chronic kidney disease (CKD) in primary care. OBJECTIVE: This review aimed to identify published model-based cost-effectiveness studies of routine laboratory testing strategies in these LTCs to inform a model evaluating the cost effectiveness of testing strategies in the UK. METHODS: We searched the Medline and Embase databases from inception to July 2023; the National Institute for Health and Care Excellence (NICE) website was also searched. Studies were included if they were model-based economic evaluations, evaluated testing strategies, assessed regular testing, and considered adults aged >16 years. Identified studies were summarised by testing strategy, model type, structure, inputs, assessment of uncertainty, and conclusions drawn. RESULTS: Five studies were included in the review, comprising Markov (n = 3) and microsimulation (n = 2) models. Models were applied within T2DM (n = 2), hypertension (n = 1), T2DM/hypertension (n = 1) and CKD (n = 1). Comorbidity between all three LTCs was modelled to varying extents. All studies used a lifetime horizon, except for a 10-year horizon T2DM model, and all used quality-adjusted life-years as the effectiveness outcome, except a T2DM model that used glycaemic control. No studies explicitly provided a rationale for their selected modelling approach. UK models were available for diabetes and CKD, but these compared only a limited set of routine monitoring tests and frequencies. CONCLUSIONS: There were few studies comparing routine testing strategies in the UK, indicating a need to develop a novel model covering all three LTCs. Justification for the modelling technique of the identified studies was lacking.
Markov and microsimulation models, with and without comorbidities, were used; however, the findings of this review can provide data sources and inform modelling approaches for evaluating the cost effectiveness of testing strategies in all three LTCs.
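As an illustrative aside (not part of the abstract above), the cohort Markov approach that dominates this review can be sketched in a few lines. Every state, transition probability, cost and utility below is an invented assumption for illustration, not a value from any of the reviewed studies.

```python
# Minimal sketch of a three-state cohort Markov model (well/complication/dead).
# All inputs are hypothetical, chosen only to show the mechanics.

STATES = ["well", "complication", "dead"]

# Annual transition probabilities (each row sums to 1) -- invented.
P = {
    "well":         {"well": 0.90, "complication": 0.08, "dead": 0.02},
    "complication": {"well": 0.00, "complication": 0.90, "dead": 0.10},
    "dead":         {"well": 0.00, "complication": 0.00, "dead": 1.00},
}
UTILITY = {"well": 0.85, "complication": 0.60, "dead": 0.0}  # QALY weights
COST = {"well": 200.0, "complication": 1500.0, "dead": 0.0}  # annual cost

def run_cohort(years: int, discount: float = 0.035):
    """Propagate a cohort of size 1.0; return discounted (QALYs, costs)."""
    dist = {"well": 1.0, "complication": 0.0, "dead": 0.0}
    qalys = costs = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        qalys += d * sum(dist[s] * UTILITY[s] for s in STATES)
        costs += d * sum(dist[s] * COST[s] for s in STATES)
        # one Markov cycle: redistribute the cohort across states
        dist = {s: sum(dist[r] * P[r][s] for r in STATES) for s in STATES}
    return qalys, costs

qalys, costs = run_cohort(60)  # roughly a lifetime horizon
```

Comparing two such models with different monitoring strategies (i.e. different transition probabilities and testing costs) gives the incremental cost per QALY that these studies report.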

2.
Value Health; 27(3): 301-312, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38154593

ABSTRACT

OBJECTIVES: Celiac disease (CD) is thought to affect around 1% of people in the United Kingdom, but only approximately 30% are diagnosed. The aim of this work was to assess the cost-effectiveness of strategies for identifying adults and children with CD in terms of who to test and which tests to use. METHODS: A decision tree and Markov model were used to describe testing strategies and model long-term consequences of CD. The analysis compared a selection of pre-test probabilities of CD above which patients should be screened, as well as the use of different serological tests, with or without genetic testing. Value of information analysis was used to prioritize parameters for future research. RESULTS: Using serological testing alone in adults, immunoglobulin A (IgA) tissue transglutaminase (tTG) at a 1% pre-test probability (equivalent to population screening) was most cost-effective. If combining serological testing with genetic testing, human leukocyte antigen combined with IgA tTG at a 5% pre-test probability was most cost-effective. In children, the most cost-effective strategy was a 10% pre-test probability with human leukocyte antigen plus IgA tTG. Value of information analysis highlighted the probability of late diagnosis of CD and the accuracy of serological tests as important parameters. The analysis also suggested prioritizing research in adult women over adult men or children. CONCLUSIONS: For adults, these cost-effectiveness results suggest UK National Screening Committee Criteria for population-based screening for CD should be explored. Substantial uncertainty in the results indicates a high value in conducting further research.
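The pre-test probabilities in this abstract feed a standard Bayesian update. A sketch, with sensitivity and specificity values that are assumptions for illustration (not the figures used in the paper's model):

```python
# Post-test probability of disease given a pre-test probability and a
# test result, via Bayes' theorem. Sens/spec values below are invented.

def post_test_probability(pre: float, sens: float, spec: float,
                          positive: bool) -> float:
    """Update a pre-test probability of disease given a test result."""
    if positive:
        p_result = sens * pre + (1 - spec) * (1 - pre)
        return sens * pre / p_result
    p_result = (1 - sens) * pre + spec * (1 - pre)
    return (1 - sens) * pre / p_result

# e.g. an IgA tTG-like test at a 1% pre-test probability (population screening)
p_pos = post_test_probability(0.01, sens=0.90, spec=0.95, positive=True)
p_neg = post_test_probability(0.01, sens=0.90, spec=0.95, positive=False)
```

At a 1% pre-test probability even a good test leaves a positive result well short of certainty, which is why the survey-derived certainty thresholds and confirmatory testing matter in these analyses.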


Subjects
Celiac Disease, Child, Male, Adult, Humans, Female, Celiac Disease/diagnosis, Cost-Benefit Analysis, Transglutaminases, Immunoglobulin A, HLA Antigens
3.
BJGP Open; 7(1), 2023 Mar.
Article in English | MEDLINE | ID: mdl-36693759

ABSTRACT

BACKGROUND: Use of laboratory testing has increased in the UK over the past few decades, with considerable geographical variation. AIM: To evaluate what laboratory tests are used to monitor people with hypertension, type 2 (T2) diabetes, or chronic kidney disease (CKD) and assess variation in test use in UK primary care. DESIGN & SETTING: Longitudinal cohort study of people registered with UK general practices between June 2013 and May 2018 and previously diagnosed with hypertension, T2 diabetes, or CKD. METHOD: Clinical Practice Research Datalink (CPRD) primary care data linked to ethnic group and deprivation were used to examine testing rates over time, by GP practice, age, sex, ethnic group, and socioeconomic deprivation, with age-sex standardisation. RESULTS: Nearly 1 million patients were included, and more than 27 million tests. The most ordered tests were for renal function (1463 per 1000 person-years), liver function (1063 per 1000 person-years), and full blood count (FBC; 996 per 1000 person-years). There was evidence of undertesting (compared with current guidelines) for HbA1c and albumin:creatinine ratio (ACR) or microalbumin, and potential overtesting of lipids, FBC, liver function, and thyroid function. Some GP practices had up to 27 times higher testing rates than others (HbA1c testing among patients with CKD). CONCLUSION: Testing rates are no longer increasing, but they are not always within the guidelines for monitoring long-term conditions (LTCs). There was considerable variation by GP practice, indicating uncertainty over the most appropriate testing frequencies for different conditions. Standardising the monitoring of LTCs based on the latest evidence would provide greater consistency of access to monitoring tests.
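The per-1000-person-year rates quoted above are simple to compute once follow-up time is known. A sketch on an invented toy cohort (the patient records below are made up):

```python
# Testing rate per 1000 person-years: total tests ordered divided by
# total follow-up time, scaled to a 1000-person-year denominator.

def rate_per_1000_py(n_tests: int, person_years: float) -> float:
    return 1000.0 * n_tests / person_years

# toy cohort: (tests ordered, years of follow-up) per patient -- invented
cohort = [(12, 5.0), (3, 2.5), (0, 1.0), (8, 4.0)]
total_tests = sum(t for t, _ in cohort)
total_py = sum(y for _, y in cohort)
rate = rate_per_1000_py(total_tests, total_py)  # tests per 1000 person-years
```

Using person-years rather than patient counts is what makes rates comparable across practices whose patients are observed for different lengths of time.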

4.
Health Technol Assess; 26(44): 1-310, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36321689

ABSTRACT

BACKGROUND: Coeliac disease is an autoimmune disorder triggered by ingesting gluten. It affects approximately 1% of the UK population, but only one in three people is thought to have a diagnosis. Untreated coeliac disease may lead to malnutrition, anaemia, osteoporosis and lymphoma. OBJECTIVES: The objectives were to define at-risk groups and determine the cost-effectiveness of active case-finding strategies in primary care. DESIGN: (1) Systematic review of the accuracy of potential diagnostic indicators for coeliac disease. (2) Routine data analysis to develop prediction models for identification of people who may benefit from testing for coeliac disease. (3) Systematic review of the accuracy of diagnostic tests for coeliac disease. (4) Systematic review of the accuracy of genetic tests for coeliac disease (literature search conducted in April 2021). (5) Online survey to identify diagnostic thresholds for testing, starting treatment and referral for biopsy. (6) Economic modelling to identify the cost-effectiveness of different active case-finding strategies, informed by the findings from previous objectives. DATA SOURCES: For the first systematic review, the following databases were searched from 1997 to April 2021: MEDLINE® (National Library of Medicine, Bethesda, MD, USA), Embase® (Elsevier, Amsterdam, the Netherlands), Cochrane Library, Web of Science™ (Clarivate™, Philadelphia, PA, USA), the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) and the National Institutes of Health Clinical Trials database. For the second systematic review, the following databases were searched from January 1990 to August 2020: MEDLINE, Embase, Cochrane Library, Web of Science, Kleijnen Systematic Reviews (KSR) Evidence, WHO ICTRP and the National Institutes of Health Clinical Trials database.
For prediction model development, Clinical Practice Research Datalink GOLD, Clinical Practice Research Datalink Aurum and a subcohort of the Avon Longitudinal Study of Parents and Children were used; for estimates for the economic models, Clinical Practice Research Datalink Aurum was used. REVIEW METHODS: For review 1, cohort and case-control studies reporting on a diagnostic indicator in a population with and a population without coeliac disease were eligible. For review 2, diagnostic cohort studies including patients presenting with coeliac disease symptoms who were tested with serological tests for coeliac disease and underwent a duodenal biopsy as reference standard were eligible. In both reviews, risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Bivariate random-effects meta-analyses were fitted, in which binomial likelihoods for the numbers of true positives and true negatives were assumed. RESULTS: People with dermatitis herpetiformis, a family history of coeliac disease, migraine, anaemia, type 1 diabetes, osteoporosis or chronic liver disease are 1.5-2 times more likely than the general population to have coeliac disease; individual gastrointestinal symptoms were not useful for identifying coeliac disease. For children, women and men, prediction models included 24, 24 and 21 indicators of coeliac disease, respectively. The models showed good discrimination between patients with and patients without coeliac disease, but performed less well when externally validated. Serological tests were found to have good diagnostic accuracy for coeliac disease. Immunoglobulin A tissue transglutaminase had the highest sensitivity and endomysial antibody the highest specificity. There was little improvement when tests were used in combination. Survey respondents (n = 472) wanted to be 66% certain of the diagnosis from a blood test before starting a gluten-free diet if symptomatic, and 90% certain if asymptomatic.
Cost-effectiveness analyses found that, among adults, and using serological testing alone, immunoglobulin A tissue transglutaminase was most cost-effective at a 1% pre-test probability (equivalent to population screening). Strategies using immunoglobulin A endomysial antibody plus human leucocyte antigen or human leucocyte antigen plus immunoglobulin A tissue transglutaminase with any pre-test probability had similar cost-effectiveness results, which were also similar to the cost-effectiveness results of immunoglobulin A tissue transglutaminase at a 1% pre-test probability. The most practical alternative for implementation within the NHS is likely to be a combination of human leucocyte antigen and immunoglobulin A tissue transglutaminase testing among those with a pre-test probability above 1.5%. Among children, the most cost-effective strategy was a 10% pre-test probability with human leucocyte antigen plus immunoglobulin A tissue transglutaminase, but there was uncertainty around the most cost-effective pre-test probability. There was substantial uncertainty in economic model results, which means that there would be great value in conducting further research. LIMITATIONS: The interpretation of meta-analyses was limited by the substantial heterogeneity between the included studies, and most included studies were judged to be at high risk of bias. The main limitations of the prediction models were that we were restricted to diagnostic indicators that were recorded by general practitioners and that, because coeliac disease is underdiagnosed, it is also under-reported in health-care data. The cost-effectiveness model is a simplification of coeliac disease and modelled an average cohort rather than individuals. Evidence was weak on the probability of routine coeliac disease diagnosis, the accuracy of serological and genetic tests and the utility of a gluten-free diet. 
CONCLUSIONS: Population screening with immunoglobulin A tissue transglutaminase (1% pre-test probability) and of immunoglobulin A endomysial antibody followed by human leucocyte antigen testing or human leucocyte antigen testing followed by immunoglobulin A tissue transglutaminase with any pre-test probability appear to have similar cost-effectiveness results. As decisions to implement population screening cannot be made based on our economic analysis alone, and given the practical challenges of identifying patients with higher pre-test probabilities, we recommend that human leucocyte antigen combined with immunoglobulin A tissue transglutaminase testing should be considered for adults with at least a 1.5% pre-test probability of coeliac disease, equivalent to having at least one predictor. A more targeted strategy of 10% pre-test probability is recommended for children (e.g. children with anaemia). FUTURE WORK: Future work should consider whether or not population-based screening for coeliac disease could meet the UK National Screening Committee criteria and whether or not it necessitates a long-term randomised controlled trial of screening strategies. Large prospective cohort studies in which all participants receive accurate tests for coeliac disease are needed. STUDY REGISTRATION: This study is registered as PROSPERO CRD42019115506 and CRD42020170766. FUNDING: This project was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 26, No. 44. See the NIHR Journals Library website for further project information.


WHAT IS THE PROBLEM?: Around 1 in 100 people in the UK has coeliac disease. It develops when the immune system attacks the lining of the gut after eating gluten. It is thought that only one in three people with coeliac disease is currently diagnosed. Without treatment, people with coeliac disease are at an increased risk of anaemia, osteoporosis and cancer. Treatment is a lifelong gluten-free diet. Diagnosing coeliac disease is difficult. Some people have minimal or non-specific symptoms, such as pain, indigestion or bloating, so knowing who to test is tricky. WHAT DID WE DO?: We wanted to establish who should be tested for coeliac disease, what tests should be used and whether or not invasive testing (a gut biopsy) is necessary for everyone. We looked at existing studies and data from general practices, and conducted an online survey, and brought everything together in an economic (cost) analysis. WHAT DID WE FIND?: Using individual symptoms is not helpful to identify people who may have coeliac disease. People with coeliac disease are more likely to have a combination of symptoms. People with anaemia, type 1 diabetes, osteoporosis, thyroid disorders, immunoglobulin A deficiency, Down syndrome, Turner syndrome or a family history of coeliac disease are more likely to have coeliac disease and should be offered tests. Common blood tests for coeliac disease are very accurate, particularly when used in combination with genetic testing. Blood tests alone can be used for diagnosis for some people. Others will need a biopsy to confirm the diagnosis. Whether or not this is needed depends on their risk of coeliac disease: whether or not they have symptoms and whether or not they have a condition that puts them at higher risk. Shared decision-making is important for individuals considering an invasive test, depending on how certain they want to be about their diagnosis before starting a gluten-free diet.
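Why the pre-test probability thresholds above matter can be illustrated by projecting expected test outcomes per 100,000 people tested. The sensitivity and specificity below are assumptions chosen for illustration, not the report's estimates:

```python
# Expected screening outcomes at a given pre-test probability (prevalence).
# Test characteristics here are invented round numbers.

def screening_counts(n: int, prevalence: float, sens: float, spec: float):
    """Expected TP/FP/FN/TN counts and positive predictive value."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = sens * diseased          # cases correctly flagged
    fn = diseased - tp            # cases missed
    tn = spec * healthy           # healthy correctly cleared
    fp = healthy - tn             # healthy incorrectly flagged
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn, "PPV": tp / (tp + fp)}

low = screening_counts(100_000, 0.01, sens=0.90, spec=0.95)   # screening-like
high = screening_counts(100_000, 0.10, sens=0.90, spec=0.95)  # targeted
```

With the same test, raising the pre-test probability from 1% to 10% sharply increases the positive predictive value, which is the intuition behind targeting testing at people with at least one predictor.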


Subjects
Celiac Disease, Osteoporosis, Skin Neoplasms, United States, Adult, Child, Male, Humans, Female, Longitudinal Studies, Prospective Studies, Immunoglobulin A, Randomized Controlled Trials as Topic
5.
EClinicalMedicine; 46: 101376, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35434586

ABSTRACT

Background: Coeliac disease (CD) affects approximately 1% of the population, although only a fraction of patients are diagnosed. Our objective was to develop diagnostic prediction models to help decide who should be offered testing for CD in primary care. Methods: Logistic regression models were developed in Clinical Practice Research Datalink (CPRD) GOLD (between Sep 9, 1987 and Apr 4, 2021, n=107,075) and externally validated in CPRD Aurum (between Jan 1, 1995 and Jan 15, 2021, n=227,915), two UK primary care databases, using (and controlling for) 1:4 nested case-control designs. Candidate predictors included symptoms and chronic conditions identified in current guidelines and using a systematic review of the literature. We used elastic-net regression to further refine the models. Findings: The prediction models included 24, 24, and 21 predictors for children, women, and men, respectively. For children, the strongest predictors were type 1 diabetes, Turner syndrome, IgA deficiency, or first-degree relatives with CD. For women and men, these were anaemia and first-degree relatives. In the development dataset, the models showed good discrimination with a c-statistic of 0·84 (95% CI 0·83-0·84) in children, 0·77 (0·77-0·78) in women, and 0·81 (0·81-0·82) in men. External validation discrimination was lower, potentially because 'first-degree relative' was not recorded in the dataset used for validation. Model calibration was poor, tending to overestimate CD risk in all three groups in both datasets. Interpretation: These prediction models could help identify individuals with an increased risk of CD in relatively low prevalence populations such as primary care. Offering a serological test to these patients could increase case finding for CD. However, this involves offering tests to more people than is currently done. Further work is needed in prospective cohorts to refine and confirm the models and assess clinical and cost effectiveness.
Funding: National Institute for Health Research Health Technology Assessment Programme (grant number NIHR129020).
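The c-statistic reported for these models is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A direct pairwise sketch on invented scores (real implementations rank-sort for efficiency, but the pairwise form makes the definition explicit):

```python
# Concordance (c-statistic / AUROC) by exhaustive pairwise comparison.
# The scores and labels below are invented toy data.

def c_statistic(scores, labels) -> float:
    """P(score of a case > score of a non-case), ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = len(pos) * len(neg)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / pairs

# toy predicted risks and true disease status (invented)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]
auc = c_statistic(scores, labels)
```

A value of 0.5 is chance-level ranking and 1.0 is perfect separation; the 0·77-0·84 range in the abstract sits in the "good discrimination" band. Note that discrimination says nothing about calibration, which is why the models could rank patients well yet still overestimate absolute risk.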

7.
Fam Pract; 37(6): 845-853, 2020 Nov 28.
Article in English | MEDLINE | ID: mdl-32820328

ABSTRACT

BACKGROUND: Studies have shown unwarranted variation in test ordering among GP practices and regions, which may lead to patient harm and increased health care costs. There is currently no robust evidence base to inform guidelines on monitoring long-term conditions. OBJECTIVES: To map the extent and nature of research that provides evidence on the use of laboratory tests to monitor long-term conditions in primary care, and to identify gaps in existing research. METHODS: We performed a scoping review (a relatively new approach for mapping research evidence across broad topics), using data abstraction forms and charting data according to a scoping framework. We searched CINAHL, EMBASE and MEDLINE to April 2019. We included studies that aimed to optimize the use of laboratory tests and determine costs, patient harm or variation related to testing in a primary care population with long-term conditions. RESULTS: Ninety-four studies were included. Forty percent aimed to describe variation in test ordering and 36% to investigate test performance. Renal function tests (35%), HbA1c (23%) and lipids (17%) were the most studied laboratory tests. Most studies applied a cohort design using routinely collected health care data (49%). We found gaps in research on strategies to optimize test use to improve patient outcomes, optimal testing intervals and patient harms caused by over-testing. CONCLUSIONS: Future research needs to address these gaps in evidence. High-level evidence is missing, i.e. randomized controlled trials comparing one monitoring strategy to another or quasi-experimental designs such as interrupted time series analysis if trials are not feasible.


Subjects
Clinical Laboratory Techniques/standards, Health Care Costs, Primary Health Care, Humans, Interrupted Time Series Analysis
8.
J Clin Epidemiol; 127: 167-174, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32798714

ABSTRACT

OBJECTIVES: Comparative diagnostic test accuracy systematic reviews (DTA reviews) assess the accuracy of two or more tests and compare their diagnostic performance. We investigated how comparative DTA reviews assessed the risk of bias (RoB) in primary studies that compared multiple index tests. STUDY DESIGN AND SETTING: This is an overview of comparative DTA reviews indexed in MEDLINE from January 1st to December 31st, 2017. Two assessors independently identified DTA reviews including at least two index tests and containing at least one statement in which the accuracy of the index tests was compared. Two assessors independently extracted data on the methods used to assess RoB in studies that directly compared the accuracy of multiple index tests. RESULTS: We included 238 comparative DTA reviews. Only two reviews (0.8%, 95% confidence interval 0.1 to 3.0%) conducted RoB assessment of test comparisons undertaken in primary studies; neither used an RoB tool specifically designed to assess bias in test comparisons. CONCLUSION: Assessment of RoB in test comparisons undertaken in primary studies was uncommon in comparative DTA reviews, possibly due to lack of existing guidance on and awareness of potential sources of bias. Based on our findings, guidance on how to assess and incorporate RoB in comparative DTA reviews is needed.
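The headline figure (2/238 = 0.8%, 95% CI 0.1 to 3.0%) is a binomial confidence interval. A stdlib sketch using the Wilson score interval; the paper most likely used an exact (Clopper-Pearson) method, so the lower bound differs slightly:

```python
# Wilson score interval for a binomial proportion k/n, as one standard
# way to put a CI on a small observed proportion like 2/238.
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for k successes out of n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(2, 238)  # roughly 0.2% to 3.0%
```

The asymmetry of the interval around 0.8% reflects how little information two events in 238 reviews carry about the true rate.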


Subjects
Bias, Diagnostic Tests, Routine/standards, Systematic Reviews as Topic, Confidence Intervals, Data Accuracy, Humans
9.
Br J Cancer; 120(11): 1045-1051, 2019 May.
Article in English | MEDLINE | ID: mdl-31015558

ABSTRACT

BACKGROUND: Early identification of cancer in primary care is important and challenging. This study examined the diagnostic utility of inflammatory markers (C-reactive protein, erythrocyte sedimentation rate and plasma viscosity) for cancer diagnosis in primary care. METHODS: Cohort study of 160,000 patients with inflammatory marker testing in 2014, plus 40,000 untested matched controls, using Clinical Practice Research Datalink (CPRD), with Cancer Registry linkage. The primary outcome was one-year cancer incidence. RESULTS: Primary care patients with a raised inflammatory marker have a one-year cancer incidence of 3.53% (95% CI 3.37-3.70), compared with 1.50% (1.43-1.58) in those with normal inflammatory markers, and 0.97% (0.87-1.07) in untested controls. Cancer risk is greater with higher inflammatory marker levels, with older age and in men; risk rises further when a repeat test is abnormal but falls if it normalises. Men over 50 and women over 60 with raised inflammatory markers have a cancer risk which exceeds the 3% NICE threshold for urgent investigation. Sensitivities for cancer were 46.1% for CRP, 43.6% for ESR and 49.7% for PV. CONCLUSION: Cancer should be considered in patients with raised inflammatory markers. However, inflammatory markers have poor sensitivity for cancer and are therefore not useful as a 'rule-out' test.
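Sensitivity as used here is simply the fraction of cancers that were preceded by an abnormal result. The counts below are invented to mirror the headline CRP figure; the study's actual 2x2 tables are not given in the abstract:

```python
# Sensitivity and specificity from 2x2 counts. The counts are hypothetical,
# scaled so that sensitivity matches the 46.1% reported for CRP.

def sensitivity(tp: int, fn: int) -> float:
    """Share of true disease cases flagged by the test."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Share of non-cases correctly left unflagged."""
    return tn / (tn + fp)

# hypothetical CRP-like marker: 461 of 1000 cancers had a raised result
sens = sensitivity(tp=461, fn=539)
```

A sensitivity below 50% means a normal result misses the majority of cancers, which is exactly why the authors conclude these markers cannot serve as a 'rule-out' test.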


Subjects
Blood Sedimentation, Blood Viscosity, C-Reactive Protein/analysis, Electronic Health Records, Neoplasms/diagnosis, Primary Health Care, Adult, Age Factors, Aged, Biomarkers, Female, Humans, Male, Middle Aged, Neoplasms/epidemiology, Prospective Studies
10.
Syst Rev; 6(1): 204, 2017 Oct 17.
Article in English | MEDLINE | ID: mdl-29041953

ABSTRACT

BACKGROUND: Assessment of the quality of included studies is an essential component of any systematic review. A formal quality assessment is facilitated by using a structured tool. There are currently no guidelines available for researchers wanting to develop a new quality assessment tool. METHODS: This paper provides a framework for developing quality assessment tools based on our experiences of developing a variety of quality assessment tools for studies of differing designs over the last 14 years. We have also drawn on experience from the work of the EQUATOR Network in producing guidance for developing reporting guidelines. RESULTS: We do not recommend a single 'best' approach. Instead, we provide a general framework with suggestions as to how the different stages can be approached. Our proposed framework is based around three key stages: initial steps, tool development and dissemination. CONCLUSIONS: We recommend that anyone who would like to develop a new quality assessment tool follow the stages outlined in this paper. We hope that our proposed framework will increase the number of tools developed using robust methods.


Subjects
Quality Control, Research Design/standards, Review Literature as Topic, Surveys and Questionnaires, Bias, Biomedical Research/standards, Humans
11.
Health Technol Assess; 20(51): 1-294, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27401902

ABSTRACT

BACKGROUND: It is not clear which young children presenting acutely unwell to primary care should be investigated for urinary tract infection (UTI) and whether or not dipstick testing should be used to inform antibiotic treatment. OBJECTIVES: To develop algorithms to accurately identify pre-school children in whom urine should be obtained; assess whether or not dipstick urinalysis provides additional diagnostic information; and model algorithm cost-effectiveness. DESIGN: Multicentre, prospective diagnostic cohort study. SETTING AND PARTICIPANTS: Children < 5 years old presenting to primary care with an acute illness and/or new urinary symptoms. METHODS: One hundred and seven clinical characteristics (index tests) were recorded from the child's past medical history, symptoms, physical examination signs and urine dipstick test. Prior to dipstick results, clinician opinion of UTI likelihood ('clinical diagnosis') and urine sampling and treatment intentions ('clinical judgement') were recorded. All index tests were measured blind to the reference standard, defined as a pure or predominant uropathogen cultured at ≥ 10⁵ colony-forming units (CFU)/ml in a single research laboratory. Urine was collected by clean catch (preferred) or nappy pad. Index tests were sequentially evaluated in two groups, stratified by urine collection method: parent-reported symptoms with clinician-reported signs, and urine dipstick results. Diagnostic accuracy was quantified using area under receiver operating characteristic curve (AUROC) with 95% confidence interval (CI) and bootstrap-validated AUROC, and compared with the 'clinical diagnosis' AUROC. Decision-analytic models were used to identify the optimal urine sampling strategy compared with 'clinical judgement'. RESULTS: A total of 7163 children were recruited, of whom 50% were female and 49% were < 2 years old.
Culture results were available for 5017 (70%); 2740 children provided clean-catch samples, 94% of whom were ≥ 2 years old, with 2.2% meeting the UTI definition. Among these, 'clinical diagnosis' correctly identified 46.6% of positive cultures, with 94.7% specificity and an AUROC of 0.77 (95% CI 0.71 to 0.83). Four symptoms, three signs and three dipstick results were independently associated with UTI, with an AUROC (95% CI; bootstrap-validated AUROC) of 0.89 (0.85 to 0.95; validated 0.88) for symptoms and signs, increasing to 0.93 (0.90 to 0.97; validated 0.90) with dipstick results. Nappy pad samples were provided by the other 2277 children, of whom 82% were < 2 years old and 1.3% met the UTI definition. 'Clinical diagnosis' correctly identified 13.3% of positive cultures, with 98.5% specificity and an AUROC of 0.63 (95% CI 0.53 to 0.72). Four symptoms and two dipstick results were independently associated with UTI, with an AUROC of 0.81 (0.72 to 0.90; validated 0.78) for symptoms, increasing to 0.87 (0.80 to 0.94; validated 0.82) with the dipstick findings. A high specificity threshold for the clean-catch model was more accurate and less costly than, and as effective as, clinical judgement. The additional diagnostic utility of dipstick testing was offset by its costs. The cost-effectiveness of the nappy pad model was not clear-cut. CONCLUSIONS: Clinicians should prioritise the use of clean-catch sampling, as symptoms and signs can cost-effectively improve the identification of UTI in young children where clean catch is possible. Dipstick testing can improve targeting of antibiotic treatment, but at a higher cost than waiting for a laboratory result. Future research is needed to distinguish pathogens from contaminants, assess the impact of the clean-catch algorithm on patient outcomes, and assess the cost-effectiveness of presumptive versus dipstick versus laboratory-guided antibiotic treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
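The decision-analytic comparison this study describes boils down to weighing test cost, treatment cost and the cost of missed cases under each strategy. A one-level sketch in which every probability and cost is invented (this is not the study's model, only the machinery):

```python
# Expected cost of a test-and-treat strategy: test everyone, treat
# positives, and attach a penalty cost to each missed UTI.
# All numeric inputs below are hypothetical.

def expected_cost(p_uti: float, c_test: float, sens: float, spec: float,
                  c_treat: float, c_missed: float) -> float:
    tp = p_uti * sens                 # true cases treated
    fn = p_uti * (1 - sens)           # cases missed (penalised)
    fp = (1 - p_uti) * (1 - spec)     # well children treated unnecessarily
    return c_test + (tp + fp) * c_treat + fn * c_missed

# 2.2% prevalence as in the clean-catch group; everything else invented
dipstick = expected_cost(0.022, c_test=1.0, sens=0.85, spec=0.90,
                         c_treat=20.0, c_missed=150.0)
lab = expected_cost(0.022, c_test=3.0, sens=0.98, spec=0.99,
                    c_treat=20.0, c_missed=150.0)
```

With these toy inputs the more accurate lab-like strategy comes out cheaper overall despite a dearer test, because false positives and missed cases dominate the total; the real model's conclusion depends on the actual NHS costs and accuracies, which the sketch does not attempt to reproduce.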


Subjects
Algorithms, Primary Health Care/methods, Urinary Tract Infections/diagnosis, Urine Specimen Collection/economics, Urine Specimen Collection/methods, Child, Preschool, Cost-Benefit Analysis, Female, Humans, Infant, Male, Prospective Studies, ROC Curve, Sensitivity and Specificity, Single-Blind Method, Urine Specimen Collection/standards
12.
Health Technol Assess; 19(96): v-xxv, 1-236, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26569153

ABSTRACT

BACKGROUND: Determination of the presence or absence of bacterial infection is important to guide appropriate therapy and reduce antibiotic exposure. Procalcitonin (PCT) is an inflammatory marker that has been suggested as a marker for bacterial infection. OBJECTIVES: To assess the clinical effectiveness and cost-effectiveness of adding PCT testing to the information used to guide antibiotic therapy in adults and children (1) with confirmed or highly suspected sepsis in intensive care and (2) presenting to the emergency department (ED) with suspected bacterial infection. METHODS: Twelve databases were searched to June 2014. Randomised controlled trials were assessed for quality using the Cochrane Risk of Bias tool. Summary relative risks (RRs) and weighted mean differences (WMDs) were estimated using random-effects models. Heterogeneity was assessed visually using forest plots and statistically using the I² and Q statistics, and investigated through subgroup analysis. The cost-effectiveness of PCT testing in addition to current clinical practice was compared with current clinical practice using a decision tree with a 6-month time horizon. RESULTS: Eighteen studies (36 reports) were included in the systematic review. PCT algorithms were associated with reduced antibiotic duration [WMD -3.19 days, 95% confidence interval (CI) -5.44 to -0.95 days, I² = 95.2%; four studies], hospital stay (WMD -3.85 days, 95% CI -6.78 to -0.92 days, I² = 75.2%; four studies) and a trend towards reduced intensive care unit (ICU) stay (WMD -2.03 days, 95% CI -4.19 to 0.13 days, I² = 81.0%; four studies). There were no differences for adverse clinical outcomes. In the ED, PCT algorithms were associated with a reduction in the proportion of adults (RR 0.77, 95% CI 0.68 to 0.87; seven studies) and children (RR 0.86, 95% CI 0.80 to 0.93) receiving antibiotics, and with reduced antibiotic duration (two studies). Again, there were no differences for adverse clinical outcomes.
All but one of the studies in the ED were conducted in people presenting with respiratory symptoms. Cost-effectiveness: the base-case analyses indicated that PCT testing was cost-saving for (1) adults with confirmed or highly suspected sepsis in an ICU setting; (2) adults with suspected bacterial infection presenting to the ED; and (3) children with suspected bacterial infection presenting to the ED. Cost-savings ranged from £368 to £3268. Moreover, PCT-guided treatment resulted in a small quality-adjusted life-year (QALY) gain (ranging between < 0.001 and 0.005). Cost-effectiveness acceptability curves showed that PCT-guided treatment has a probability of ≥ 84% of being cost-effective for all settings and populations considered (at willingness-to-pay thresholds of £20,000 and £30,000 per QALY). CONCLUSIONS: The limited available data suggest that PCT testing may be effective and cost-effective when used to guide discontinuation of antibiotics in adults being treated for suspected or confirmed sepsis in ICU settings and initiation of antibiotics in adults presenting to the ED with respiratory symptoms and suspected bacterial infection. However, it is not clear that observed costs and effects are directly attributable to PCT testing, are generalisable outside people presenting with respiratory symptoms (for the ED setting) and would be reproducible in the UK NHS. Further studies are needed to assess the effectiveness of adding PCT algorithms to the information used to guide antibiotic treatment in children with suspected or confirmed sepsis in ICU settings. Additional research is needed to examine whether the outcomes presented in this report are fully generalisable to the UK. STUDY REGISTRATION: This study is registered as PROSPERO CRD42014010822. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
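The pooled WMDs with I² values above come from a random-effects synthesis; DerSimonian-Laird is the most common estimator for this, though the review does not name one. A sketch on invented (effect, standard error) pairs, not the review's data:

```python
# DerSimonian-Laird random-effects pooling of study-level mean differences.
# The four (effect, SE) pairs below are invented for illustration.
import math

def dersimonian_laird(effects, ses):
    """Pool effects with inverse-variance random-effects weights."""
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

# toy antibiotic-duration differences in days (invented)
wmd, ci, i2 = dersimonian_laird([-1.5, -4.2, -2.8, -5.0],
                                [0.5, 0.9, 0.6, 1.1])
```

High I² values like the 95.2% in the abstract mean most of the observed spread is between-study heterogeneity rather than sampling error, which widens the pooled CI via the tau² term.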


Subjects
Anti-Bacterial Agents/therapeutic use; Bacterial Infections/drug therapy; Calcitonin; Protein Precursors; Sepsis/drug therapy; Adult; Biomarkers; Calcitonin Gene-Related Peptide; Child; Cost-Benefit Analysis; Emergency Service, Hospital; Humans; Intensive Care Units; Length of Stay; Models, Economic; Sepsis/diagnosis; Sepsis/economics; Technology Assessment, Biomedical; Treatment Outcome
13.
Health Technol Assess ; 19(58): 1-228, v-vi, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26215747

ABSTRACT

BACKGROUND: Patients with substantive bleeding usually require transfusion and/or (re-)operation. Red blood cell (RBC) transfusion is independently associated with a greater risk of infection, morbidity, increased hospital stay and mortality. ROTEM (ROTEM® Delta, TEM International GmbH, Munich, Germany; www.rotem.de), TEG (TEG® 5000 analyser, Haemonetics Corporation, Niles, IL, USA; www.haemonetics.com) and Sonoclot (Sonoclot® coagulation and platelet function analyser, Sienco Inc., Arvada, CO) are point-of-care viscoelastic (VE) devices that use thromboelastometry to test for haemostasis in whole blood. They have a number of proposed advantages over standard laboratory tests (SLTs): they provide a result much quicker, are able to identify what part of the clotting process is disrupted, and provide information on clot formation over time and fibrinolysis. OBJECTIVES: This assessment aimed to assess the clinical effectiveness and cost-effectiveness of VE devices to assist with the diagnosis, management and monitoring of haemostasis disorders during and after cardiac surgery, trauma-induced coagulopathy and post-partum haemorrhage (PPH). 
METHODS: Sixteen databases were searched to December 2013: MEDLINE (OvidSP), MEDLINE In-Process and Other Non-Indexed Citations and Daily Update (OvidSP), EMBASE (OvidSP), BIOSIS Previews (Web of Knowledge), Science Citation Index (SCI) (Web of Science), Conference Proceedings Citation Index (CPCI-S) (Web of Science), Cochrane Database of Systematic Reviews (CDSR), Cochrane Central Register of Controlled Trials (CENTRAL), Database of Abstracts of Reviews of Effects (DARE), Health Technology Assessment (HTA) database, Latin American and Caribbean Health Sciences Literature (LILACS), International Network of Agencies for Health Technology Assessment (INAHTA), National Institute for Health Research (NIHR) HTA programme, Aggressive Research Intelligence Facility (ARIF), Medion, and the International Prospective Register of Systematic Reviews (PROSPERO). Randomised controlled trials (RCTs) were assessed for quality using the Cochrane Risk of Bias tool. Prediction studies were assessed using QUADAS-2. For RCTs, summary relative risks (RRs) were estimated using random-effects models. Continuous data were summarised narratively. For prediction studies, the odds ratio (OR) was selected as the primary effect estimate. The health-economic analysis considered the costs and quality-adjusted life-years of ROTEM, TEG and Sonoclot compared with SLTs in cardiac surgery and trauma patients. A decision tree was used to take into account short-term complications and longer-term side effects from transfusion. The model assumed a 1-year time horizon. RESULTS: Thirty-one studies (39 publications) were included in the clinical effectiveness review. Eleven RCTs (n=1089) assessed VE devices in patients undergoing cardiac surgery; six assessed thromboelastography (TEG) and five assessed ROTEM. 
There was a significant reduction in RBC transfusion [RR 0.88, 95% confidence interval (CI) 0.80 to 0.96; six studies], platelet transfusion (RR 0.72, 95% CI 0.58 to 0.89; six studies) and fresh-frozen plasma transfusion (RR 0.47, 95% CI 0.35 to 0.65; five studies) in VE testing groups compared with control. There were no significant differences between groups in terms of other blood products transfused. Continuous data on blood product use supported these findings. Clinical outcomes did not differ significantly between groups. There were no apparent differences between ROTEM and TEG; none of the RCTs evaluated Sonoclot. There were no data on the clinical effectiveness of VE devices in trauma patients or women with PPH. VE testing was cost-saving and more effective than SLTs. For the cardiac surgery model, the cost-saving was £43 for ROTEM, £79 for TEG and £132 for Sonoclot. For the trauma population, the cost-savings owing to VE testing were more substantial, amounting to per-patient savings of £688 for ROTEM compared with SLTs, £721 for TEG, and £818 for Sonoclot. This finding was entirely dependent on material costs, which are slightly higher for ROTEM. VE testing remained cost-saving following various scenario analyses. CONCLUSIONS: VE testing is cost-saving and more effective than SLTs, in both patients undergoing cardiac surgery and trauma patients. However, there were no data on the clinical effectiveness of Sonoclot or of VE devices in trauma patients. STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005623. FUNDING: The NIHR Health Technology Assessment programme.
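The decision-tree economics behind cost-saving comparisons like these come down to expected-value arithmetic over branch probabilities and costs. A minimal sketch with purely hypothetical branch probabilities and costs (none taken from the assessment):

```python
def expected_cost(branches):
    """Expected cost of a strategy: sum of probability * branch cost.
    Each branch is (probability, cost); probabilities must sum to 1."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * c for p, c in branches)

# Hypothetical branches: transfused with complications / transfused without
# complications / not transfused. All numbers are illustrative only.
standard_tests = [(0.10, 12000.0), (0.50, 6000.0), (0.40, 2000.0)]
viscoelastic = [(0.08, 12000.0), (0.44, 6000.0), (0.48, 2000.0)]  # fewer transfusions

saving = expected_cost(standard_tests) - expected_cost(viscoelastic)  # ~ £440/patient
```

Real models of this kind layer longer-term transfusion side effects onto each branch, but the per-strategy comparison remains a difference of expected costs.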


Subjects
Blood Coagulation Disorders/diagnosis; Hemostasis/physiology; Point-of-Care Testing/economics; Thrombelastography/economics; Thrombelastography/methods; Blood Coagulation Disorders/physiopathology; Cost-Benefit Analysis; Humans
14.
Health Technol Assess ; 19(44): 1-234, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26118801

ABSTRACT

BACKGROUND: Early diagnosis of acute myocardial infarction (AMI) can ensure quick and effective treatment but only 20% of adults with emergency admissions for chest pain have an AMI. High-sensitivity cardiac troponin (hs-cTn) assays may allow rapid rule-out of AMI and avoidance of unnecessary hospital admissions and anxiety. OBJECTIVE: To assess the clinical effectiveness and cost-effectiveness of hs-cTn assays for the early (within 4 hours of presentation) rule-out of AMI in adults with acute chest pain. METHODS: Sixteen databases, including MEDLINE and EMBASE, research registers and conference proceedings, were searched to October 2013. Study quality was assessed using QUADAS-2. The bivariate model was used to estimate summary sensitivity and specificity for meta-analyses involving four or more studies, otherwise random-effects logistic regression was used. The health-economic analysis considered the long-term costs and quality-adjusted life-years (QALYs) associated with different troponin (Tn) testing methods. The de novo model consisted of a decision tree and Markov model. A lifetime time horizon (60 years) was used. RESULTS: Eighteen studies were included in the clinical effectiveness review. The optimum strategy, based on the Roche assay, used a limit of blank (LoB) threshold in a presentation sample to rule out AMI [negative likelihood ratio (LR-) 0.10, 95% confidence interval (CI) 0.05 to 0.18]. Patients testing positive could then have a further test at 2 hours; a result above the 99th centile on either sample and a delta (Δ) of ≥ 20% has some potential for ruling in an AMI [positive likelihood ratio (LR+) 8.42, 95% CI 6.11 to 11.60], whereas a result below the 99th centile on both samples and a Δ of < 20% can be used to rule out an AMI (LR- 0.04, 95% CI 0.02 to 0.10). The optimum strategy, based on the Abbott assay, used a limit of detection (LoD) threshold in a presentation sample to rule out AMI (LR- 0.01, 95% CI 0.00 to 0.08). 
Patients testing positive could then have a further test at 3 hours; a result above the 99th centile on this sample has some potential for ruling in an AMI (LR+ 10.16, 95% CI 8.38 to 12.31), whereas a result below the 99th centile can be used to rule out an AMI (LR- 0.02, 95% CI 0.01 to 0.05). In the base-case analysis, standard Tn testing was both most effective and most costly. Strategies considered cost-effective depending upon incremental cost-effectiveness ratio thresholds were Abbott 99th centile (thresholds of < £6597), Beckman 99th centile (thresholds between £6597 and £30,042), Abbott optimal strategy (LoD threshold at presentation, followed by 99th centile threshold at 3 hours) (thresholds between £30,042 and £103,194) and the standard Tn test (thresholds over £103,194). The Roche 99th centile and the Roche optimal strategy [LoB threshold at presentation followed by 99th centile threshold and/or Δ20% (compared with presentation test) at 1-3 hours] were extendedly dominated in this analysis. CONCLUSIONS: There is some evidence to suggest that hs-cTn testing may provide an effective and cost-effective approach to early rule-out of AMI. Further research is needed to clarify optimal diagnostic thresholds and testing strategies. STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005939. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
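Likelihood ratios translate into patient-level risk through Bayes' theorem on the odds scale. Using the roughly 20% AMI prevalence among chest-pain admissions quoted above and the Roche presentation-sample LR- of 0.10, a back-of-envelope sketch:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Update a pre-test probability with a likelihood ratio via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# 20% pre-test probability, negative test with LR- 0.10:
residual_risk = post_test_probability(0.20, 0.10)   # ~ 0.024, i.e. ~2.4%
```

The same function applied with the LR+ of 8.42 for the 2-hour rule-in criterion takes the same 20% pre-test probability to roughly 68%, which is why the report describes it as having only "some potential" for ruling in AMI.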


Subjects
Chest Pain/diagnosis; Myocardial Infarction/diagnosis; Troponin C/blood; Adult; Chest Pain/economics; Chest Pain/etiology; Cost-Benefit Analysis; Hospital Costs/statistics & numerical data; Humans; Myocardial Infarction/blood; Myocardial Infarction/economics; Troponin C/economics
15.
Health Technol Assess ; 18(62): 1-132, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25314637

ABSTRACT

BACKGROUND: Bowel cancer is the third most common cancer in the UK. Most bowel cancers are initially treated with surgery, but around 17% spread to the liver. When this happens, the liver tumour can sometimes be treated surgically, or chemotherapy may be used to shrink the tumour to make surgery possible. Kirsten rat sarcoma viral oncogene (KRAS) mutations make some tumours less responsive to treatment with biological therapies such as cetuximab. There are a variety of tests available to detect these mutations. The tests vary in the specific mutations they detect, the minimum level of mutation they can detect, the number of tumour cells needed, the time taken to give a result, the error rate and the cost. OBJECTIVES: To compare the performance and cost-effectiveness of KRAS mutation tests in differentiating adults with metastatic colorectal cancer whose metastases are confined to the liver and are unresectable and who may benefit from first-line treatment with cetuximab in combination with standard chemotherapy from those who should receive standard chemotherapy alone. DATA SOURCES: Thirteen databases, including MEDLINE and EMBASE, research registers and conference proceedings were searched to January 2013. Additional data were obtained from an online survey of laboratories participating in the UK National External Quality Assurance Scheme pilot for KRAS mutation testing. METHODS: A systematic review of the evidence was carried out using standard methods. Randomised controlled trials were assessed for quality using the Cochrane risk of bias tool. Diagnostic accuracy studies were assessed using the QUADAS-2 tool. There were insufficient data for meta-analysis. For accuracy studies we calculated sensitivity and specificity together with 95% confidence intervals (CIs). Survival data were summarised as hazard ratios and tumour response data were summarised as relative risks, with 95% CIs.
The health economic analysis considered the long-term costs and quality-adjusted life-years associated with different tests followed by treatment with standard chemotherapy or cetuximab plus standard chemotherapy. The analysis took a 'no comparator' approach, which implies that the cost-effectiveness of each strategy will be presented only compared with the next most cost-effective strategy. The de novo model consisted of a decision tree and Markov model. RESULTS: The online survey indicated no differences between tests in batch size, turnaround time, number of failed samples or cost. The literature searches identified 7903 references, of which seven publications of five studies were included in the review. Two studies provided data on the accuracy of KRAS mutation testing for predicting response to treatment in patients treated with cetuximab plus standard chemotherapy. Four RCTs provided data on the clinical effectiveness of cetuximab plus standard chemotherapy compared with that of standard chemotherapy in patients with KRAS wild-type tumours. There were no clear differences in the treatment effects reported by different studies, regardless of which KRAS mutation test was used to select patients. In the 'linked evidence' analysis the Therascreen KRAS RGQ PCR Kit (QIAGEN) was more expensive but also more effective than pyrosequencing or direct sequencing, with an incremental cost-effectiveness ratio of £17,019 per quality-adjusted life-year gained. In the 'assumption of equal prognostic value' analysis the total costs associated with the various testing strategies were similar. LIMITATIONS: The results assume that the differences in outcomes between the trials were solely the result of the different mutation tests used to distinguish between patients; this assumption ignores other factors that might explain this variation. CONCLUSIONS: There was no strong evidence that any one KRAS mutation test was more effective or cost-effective than any other test. 
STUDY REGISTRATION: PROSPERO CRD42013003663. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
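An incremental cost-effectiveness ratio (ICER) such as the £17,019 per QALY reported above is simply the extra cost of one strategy over the next, divided by the extra QALYs it delivers. A toy calculation with made-up figures (not the report's underlying numbers):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: a test strategy costing £500 more per patient and
# yielding 0.029 additional QALYs relative to the comparator:
ratio = icer(10500.0, 1.529, 10000.0, 1.500)   # ~ £17,241 per QALY gained
```

A strategy is conventionally considered cost-effective when this ratio falls below the decision-maker's willingness-to-pay threshold (often £20,000 to £30,000 per QALY in UK appraisals).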


Subjects
Antineoplastic Combined Chemotherapy Protocols/therapeutic use; Colorectal Neoplasms/drug therapy; Colorectal Neoplasms/genetics; Genetic Techniques/economics; Proto-Oncogene Proteins/genetics; ras Proteins/genetics; Antibodies, Monoclonal, Humanized/administration & dosage; Antineoplastic Agents; Cetuximab; Colorectal Neoplasms/pathology; Cost-Benefit Analysis; Humans; Liver Neoplasms/secondary; Markov Chains; Mutation; Proto-Oncogene Proteins p21(ras); Quality-Adjusted Life Years; Randomized Controlled Trials as Topic
16.
Health Technol Assess ; 18(32): 1-166, 2014 May.
Article in English | MEDLINE | ID: mdl-24827857

ABSTRACT

BACKGROUND: Non-small cell lung cancer (NSCLC) is the most common form of lung cancer. Some epidermal growth factor receptor tyrosine kinase (EGFR-TK) mutations make tumours responsive to treatment with EGFR-TK inhibitors (EGFR-TKIs) but less responsive to treatment with standard chemotherapy. Patients with NSCLC are therefore tested for EGFR-TK tumour gene mutations to inform treatment decisions. There are a variety of tests available to detect these mutations. The tests vary in the specific mutations they attempt to detect, the number of tumour cells needed for the test to work, the turnaround time, the error rate and the cost. OBJECTIVE: To compare the performance and cost-effectiveness of EGFR-TK mutation tests used to identify previously untreated adults with locally advanced or metastatic NSCLC, who may benefit from first-line treatment with TKIs. DATA SOURCES: Twelve databases were searched to August 2012 [including MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations and Daily Update (OvidSP), EMBASE, Cochrane Database of Systematic Reviews (CDSR), Cochrane Central Register of Controlled Trials (CENTRAL), Database of Abstracts of Reviews of Effects (DARE), Health Technology Assessment database (HTA), Science Citation Index (SCI), Latin American and Caribbean Health Sciences Literature (LILACS), BIOSIS Previews, NIHR Health Technology Assessment programme, PROSPERO (International Prospective Register of Systematic Reviews)], together with research registers and conference proceedings. A web-based survey gathered data on the technical performance of EGFR-TK mutation tests. METHODS: Randomised controlled trials were assessed for methodological quality using the Cochrane risk of bias tool. Diagnostic accuracy studies were assessed using QUADAS-2. There were insufficient data for meta-analysis. For accuracy studies, we calculated sensitivity and specificity together with 95% confidence intervals (CIs).
Survival data were summarised as hazard ratios and tumour response data as relative risks, with 95% CIs. The health-economic analysis considered the long-term costs and quality-adjusted life-years (QALYs) associated with different tests followed by treatment with either standard chemotherapy or a TKI. Direct sequencing was taken as the comparator. The de novo model consisted of a decision tree and a Markov model. RESULTS: The survey indicated no differences between tests in batch size, turnaround time, number of failed samples or cost. Six studies provided data on the accuracy of EGFR-TK mutation testing for predicting response to treatment with TKIs. Estimates of accuracy were similar across studies. Six analyses provided data on the clinical effectiveness of TKIs compared with standard chemotherapy. There were no clear differences in the treatment effects reported by different studies, regardless of which EGFR mutation test was used to select patients. Cost-effectiveness analysis using 'Evidence on comparative effectiveness available' and 'Linked evidence' approaches: the Therascreen® EGFR polymerase chain reaction (PCR) Kit (Qiagen, Venlo, the Netherlands) was both less effective and less costly than direct sequencing of all exon 19-21 mutations, at an incremental cost-effectiveness ratio of £32,167 (comparative) and £32,190 (linked) per QALY lost. 'Assumption of equal prognostic value' approach: the lowest total strategy cost was [commercial-in-confidence (CiC) information has been removed] [Sanger sequencing or the Roche cobas® EGFR Mutation Testing Kit (Roche Molecular Systems, Inc., Branchburg, NJ, USA)] compared with (CiC information has been removed) for the most expensive strategy (fragment length analysis combined with pyrosequencing).
LIMITATIONS: The cost-effectiveness analysis assumed that the differences in outcomes between the results of the trials were solely attributable to the different mutation tests used to distinguish between patients; this assumption ignores other factors that might explain this variation. CONCLUSION: There was no strong evidence that any one EGFR mutation test had greater accuracy than any other test. Re-testing of stored samples from previous studies, where patient outcomes are already known, could be used to provide information on the relative effectiveness of TKIs and standard chemotherapy in patients with EGFR mutation-positive and mutation-negative tumours, where mutation status is determined using tests for which adequate data are currently unavailable. STUDY REGISTRATION: PROSPERO CRD42012002828. FUNDING: The National Institute for Health Research Health Technology Assessment programme.
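Both of these reviews report per-study sensitivity and specificity with 95% CIs. One common way to compute a CI for such a proportion is the Wilson score interval (the reports do not state which interval they used); a sketch with hypothetical 2×2 counts:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a proportion (e.g. sensitivity or specificity)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical counts: 45 of 50 mutation-positive tumours correctly detected.
lo, hi = wilson_ci(45, 50)   # sensitivity 0.90, 95% CI ~ (0.79, 0.96)
```

Unlike the simple Wald interval, the Wilson interval behaves sensibly for the small denominators and near-boundary proportions typical of diagnostic accuracy studies.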


Subjects
Carcinoma, Non-Small-Cell Lung/drug therapy; Carcinoma, Non-Small-Cell Lung/genetics; ErbB Receptors/genetics; Lung Neoplasms/drug therapy; Lung Neoplasms/genetics; Protein Kinase Inhibitors/therapeutic use; Carcinoma, Non-Small-Cell Lung/mortality; Cost-Benefit Analysis; ErbB Receptors/antagonists & inhibitors; Humans; Lung Neoplasms/mortality; Mutation; Neoplasm Metastasis; Neoplasm Staging; Polymerase Chain Reaction; Protein Kinase Inhibitors/economics; Quality-Adjusted Life Years; Randomized Controlled Trials as Topic; Reproducibility of Results; Technology Assessment, Biomedical
17.
Health Technol Assess ; 18(18): 1-106, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24656117

ABSTRACT

BACKGROUND: Ivacaftor (Kalydeco®, Vertex Pharmaceuticals) is the first of a new class of drugs that target the underlying protein defect in cystic fibrosis (CF). It is aimed at patients with the G551D mutation (a glycine-to-aspartate substitution at amino acid 551, arising from a mutation at nucleotide 1784 in exon 11); 5.7% of patients with CF in the UK have this mutation. OBJECTIVES: To review the clinical effectiveness and cost-effectiveness of ivacaftor for the treatment of CF in patients aged ≥ 6 years who have the G551D mutation. METHODS: Ten databases, including MEDLINE and EMBASE, were searched from inception to July 2012. Studies that evaluated ivacaftor for the treatment of adults and children (≥ 6 years) with at least one G551D mutation were eligible. There were insufficient data to conduct a formal meta-analysis. The manufacturer of ivacaftor, Vertex Pharmaceuticals, submitted a deterministic patient-level simulation model for the assessment of the lifetime cost-effectiveness of ivacaftor. We modified the model where values were not UK-specific or not recent, or where better estimates could be found. The only change to the model structure was the addition of lung transplantations. We changed utility values, the annual decline in percentage predicted forced expiratory volume in 1 second (FEV1), and the baseline exacerbation rate, and used data from the CF Registry to estimate the relation between costs, age and percentage predicted FEV1. Estimates of the treatment effect of ivacaftor came from the clinical effectiveness review. We modelled three scenarios for the longer-term effects of ivacaftor. We also modelled an 'optimistic' scenario for patients aged < 12 years with little lung damage. We conducted a budget impact analysis to estimate the total cost to the NHS of introducing ivacaftor in England. RESULTS: Three studies were included: a randomised controlled trial (RCT) in adults (n = 167; aged ≥ 12 years), an RCT in children (n = 26; aged 6-11 years), and an open-label extension study of the two RCTs.
Both RCTs reported significantly greater changes from baseline in all measures of lung function in patients receiving ivacaftor than in those receiving placebo. The mean difference in change in percentage predicted FEV1 was 10.5 [95% confidence interval (CI) 8.5 to 12.5] percentage points in the adults' study and 10.0 (95% CI 4.5 to 15.5) percentage points in the children's study at 48 weeks. Improvements in lung function were seen across all subgroups investigated (age, sex, study region and lung function). There were significantly greater improvements in the ivacaftor group than in the placebo group for all outcomes assessed (exacerbations, quality of life, sweat chloride and weight) with the exception of quality of life in children. Improvements were maintained in the open-label trial. Adverse events were mainly minor and comparable across treatment groups. Both RCTs reported more withdrawals in the placebo group than in the ivacaftor group. The incremental cost-effectiveness ratio varied between £335,000 and £1,274,000 per quality-adjusted life-year gained. The total additional lifetime costs for all eligible CF patients in England ranged from £438M to £479M; the lifetime cost for standard care only was £72M. CONCLUSIONS: The available evidence suggests that ivacaftor is a clinically effective treatment for patients with CF and the G551D mutation; the high cost of ivacaftor may prove an obstacle in the uptake of this treatment. The main priority for further research is the long-term effectiveness of ivacaftor. STUDY REGISTRATION: This study is registered as PROSPERO CRD42012002516. SOURCE OF FUNDING: The National Institute for Health Research Health Technology Assessment programme.


Subjects
Aminophenols/economics; Aminophenols/therapeutic use; Cystic Fibrosis/drug therapy; Cystic Fibrosis/genetics; Quinolones/economics; Quinolones/therapeutic use; Adolescent; Adult; Age Factors; Child; Cost-Benefit Analysis; England; Female; Humans; Lung Transplantation/economics; Male; Models, Economic; Mutation; Quality-Adjusted Life Years; Randomized Controlled Trials as Topic; Respiratory Function Tests; Sex Factors; State Medicine
19.
Z Evid Fortbild Qual Gesundhwes ; 105(7): 498-503, 2011.
Article in English | MEDLINE | ID: mdl-21958607

ABSTRACT

Reviewing and using diagnostic research for decision making involves complex issues: what the exact diagnostic questions are, how they should be studied, and to whom the results of such studies apply in real life. In this paper we aim to address some of the main issues concerning the applicability of diagnostic research by looking at different diagnostic questions, the different study designs that can be used, and how bias and variability may affect applicability. Users of diagnostic research should be aware of these issues in order to avoid confusion and misunderstanding about why modern diagnostic research addresses particular patient groups and uses certain study designs, whilst choosing to ignore others that at first glance seem relevant. We conclude that there are four main points to be addressed in doing and using diagnostic research: 'Get the question right'; 'Get the study design right'; 'Include patients for whom the test will also be used in practice'; and 'Educate users of research'. Simple as they may seem, these points cover extremely complex issues in practice, and they need to be addressed through more communication between methodologists, practitioners and decision makers.


Subjects
Diagnostic Tests, Routine/statistics & numerical data; National Health Programs; Bias; Biomedical Research/economics; Biomedical Research/statistics & numerical data; Cost-Benefit Analysis; Diagnostic Tests, Routine/economics; Germany; Humans; Mass Screening/statistics & numerical data; National Health Programs/economics; Predictive Value of Tests
20.
Ann Intern Med ; 155(8): 529-36, 2011 Oct 18.
Article in English | MEDLINE | ID: mdl-22007046

ABSTRACT

In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.
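The domain structure described above maps naturally onto a small data structure. A hypothetical sketch of how a review team might record QUADAS-2 judgements for a primary study; the class and method names are invented for illustration and are not part of any published tool:

```python
from dataclasses import dataclass, field

DOMAINS = ("patient selection", "index test", "reference standard", "flow and timing")
JUDGEMENTS = ("low", "high", "unclear")

@dataclass
class Quadas2Assessment:
    """One primary study's QUADAS-2 judgements: risk of bias for all four
    domains; applicability concerns for the first three domains only."""
    risk_of_bias: dict = field(default_factory=dict)
    applicability: dict = field(default_factory=dict)

    def judge(self, domain, bias, applicability=None):
        assert domain in DOMAINS and bias in JUDGEMENTS
        self.risk_of_bias[domain] = bias
        if applicability is not None:
            # Flow and timing carries no applicability rating in QUADAS-2.
            assert domain != "flow and timing"
            self.applicability[domain] = applicability

# Example: rate one domain of a study.
study = Quadas2Assessment()
study.judge("index test", "low", "low")
```

Encoding the tool this way makes it easy to tabulate risk-of-bias summaries across all included studies of a review.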


Subjects
Diagnosis; Review Literature as Topic; Surveys and Questionnaires; Bias; Evidence-Based Medicine; Humans; Patient Selection; Quality Control; Reference Standards; Time Factors