Results 1 - 14 of 14

1.
PLoS Med; 17(12): e1003478, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33326459

ABSTRACT

BACKGROUND: People with reduced kidney function have increased cardiovascular disease (CVD) risk. We present a policy model that simulates individuals' long-term health outcomes and costs to inform strategies to reduce risks of kidney and CVDs in this population.

METHODS AND FINDINGS: We used a United Kingdom primary healthcare database, the Clinical Practice Research Datalink (CPRD), linked with secondary healthcare and mortality data, to derive an open 2005-2013 cohort of adults (≥18 years of age) with reduced kidney function (≥2 measures of estimated glomerular filtration rate [eGFR] <90 mL/min/1.73 m2 taken ≥90 days apart). Data on individuals' sociodemographic and clinical characteristics at entry and on outcomes during follow-up (first occurrences of stroke, myocardial infarction [MI], and hospitalisation for heart failure; annual kidney disease stages; and cardiovascular and nonvascular deaths) were extracted. The cohort was used to estimate risk equations for outcomes and to develop the chronic kidney disease-cardiovascular disease (CKD-CVD) health outcomes model, a Markov state transition model simulating individuals' long-term outcomes, healthcare costs, and quality of life based on their characteristics at entry. Model-simulated cumulative risks of outcomes were compared with the respective observed risks using a split-sample approach. To illustrate the model's value, we assessed the benefits of partial (i.e., at 2013 levels) and optimal (i.e., fully compliant with 2019 clinical guidelines) use of cardioprotective medications. The cohort included 1.1 million individuals with reduced kidney function (median follow-up 4.9 years; 45% men; 19% with CVD; 74% with only mildly decreased eGFR of 60-89 mL/min/1.73 m2 at entry). Age, kidney function status, and CVD events were the key determinants of subsequent morbidity and mortality. The model-simulated cumulative disease risks corresponded well to observed risks in participant categories by eGFR level. Without the use of cardioprotective medications, for 60- to 69-year-old individuals with mildly decreased eGFR (60-89 mL/min/1.73 m2), the model projected a further 22.1 (95% confidence interval [CI] 21.8-22.3) years of life for those without previous CVD and 18.6 (18.2-18.9) years for those with CVD. Cardioprotective medication use at 2013 levels (29%-44% of indicated individuals without CVD; 64%-76% of those with CVD) was projected to increase their life expectancy by 0.19 (0.14-0.23) and 0.90 (0.50-1.21) years, respectively. At optimal cardioprotective medication use, the projected health gains in these individuals increased by a further 0.33 (0.25-0.40) and 0.37 (0.20-0.50) years, respectively. Limitations include reliance on risk factor measurements from a routine UK primary care database and limited albuminuria measurements.

CONCLUSIONS: The CKD-CVD policy model is a novel resource for projecting long-term health outcomes and assessing treatment strategies in people with reduced kidney function. The model indicates clear survival benefits from cardioprotective treatments in this population and scope for further benefits if use of these treatments is optimised.


Subject(s)
Cardiovascular Diseases/prevention & control; Glomerular Filtration Rate; Kidney/physiopathology; Models, Theoretical; Preventive Health Services; Renal Insufficiency, Chronic/therapy; Aged; Aged, 80 and over; Cardiovascular Diseases/economics; Cardiovascular Diseases/mortality; Databases, Factual; England/epidemiology; Female; Health Care Costs; Health Status; Humans; Male; Markov Chains; Middle Aged; Preventive Health Services/economics; Prognosis; Quality of Life; Renal Insufficiency, Chronic/economics; Renal Insufficiency, Chronic/mortality; Renal Insufficiency, Chronic/physiopathology; Risk Assessment; Risk Factors; Time Factors
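
The CKD-CVD policy model above is a Markov state transition model evaluated over annual cycles. The sketch below illustrates the general technique only; the three states, transition probabilities, utilities and costs are hypothetical placeholders, not parameters from the published model.

```python
import numpy as np

# Minimal discrete-time Markov cohort model (annual cycles).
# States and all numbers are hypothetical placeholders.
states = ["CKD, no CVD", "CKD + CVD", "Dead"]
P = np.array([
    [0.93, 0.05, 0.02],   # from "CKD, no CVD"
    [0.00, 0.90, 0.10],   # from "CKD + CVD"
    [0.00, 0.00, 1.00],   # "Dead" is absorbing
])
assert np.allclose(P.sum(axis=1), 1.0)

utility = np.array([0.80, 0.70, 0.0])          # hypothetical QoL weights
annual_cost = np.array([1500.0, 4000.0, 0.0])  # hypothetical annual costs

occupancy = np.array([1.0, 0.0, 0.0])          # cohort starts event-free
life_years = qalys = costs = 0.0
for year in range(40):                         # 40 annual cycles
    occupancy = occupancy @ P                  # advance the cohort one year
    life_years += occupancy[:2].sum()
    qalys += float(occupancy @ utility)
    costs += float(occupancy @ annual_cost)

print(f"Projected life-years: {life_years:.2f}, QALYs: {qalys:.2f}, cost: {costs:.0f}")
```

A patient-level microsimulation would instead sample one state path per simulated individual, with transition probabilities depending on that individual's characteristics, which is closer to how the published model projects outcomes from characteristics at entry.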
2.
Clin Chem Lab Med; 55(2): 167-180, 2017 Feb 01.
Article in English | MEDLINE | ID: mdl-27658148

ABSTRACT

BACKGROUND: Point-of-care (POC) devices could be used to measure hemoglobin A1c (HbA1c) in doctors' offices, allowing immediate feedback of results to patients. Reports have raised concerns about the analytical performance of some of these devices. We carried out a systematic review and meta-analysis using a novel approach to compare the accuracy and precision of POC HbA1c devices.

METHODS: The Medline, Embase and Web of Science databases were searched in June 2015 for published reports comparing POC HbA1c devices with laboratory methods. Two reviewers screened articles and extracted data on bias, precision and diagnostic accuracy. Mean bias and variability between the POC device and the laboratory test were combined in a meta-analysis. Study quality was assessed using the QUADAS-2 tool.

RESULTS: Two researchers independently reviewed 1739 records for eligibility. Sixty-one studies were included in the meta-analysis of mean bias. Devices evaluated were A1cgear, A1cNow, Afinion, B-analyst, Clover, Cobas b101, DCA 2000/Vantage, HemoCue, Innovastar, Nycocard, Quo-Lab, Quo-Test and SDA1cCare. Nine devices had a negative mean bias, which was significant for three devices. There was substantial variability in bias within devices. For two devices, there was no difference in bias between clinical and laboratory operators.

CONCLUSIONS: This is the first meta-analysis to directly compare the performance of POC HbA1c devices. Use of a device with a negative mean bias relative to a laboratory method may lead to higher levels of glycaemia and a lower risk of hypoglycaemia. The implications of this for clinical decision-making and patient outcomes now need to be tested in a randomized trial.


Subject(s)
Glycated Hemoglobin/analysis; Point-of-Care Systems; Humans; Practice Guidelines as Topic
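
The review pools the mean bias between each point-of-care device and the laboratory method. As a rough illustration of the per-study building block, the sketch below computes the mean bias, its standard error and 95% limits of agreement from hypothetical paired HbA1c readings; it is not the review's actual pooling model.

```python
import numpy as np

# Hypothetical paired HbA1c measurements (%, NGSP units) for one POC device
# versus the laboratory reference in a single evaluation study.
poc = np.array([6.1, 7.0, 8.2, 5.8, 9.1, 7.4, 6.6, 10.0])
lab = np.array([6.3, 7.2, 8.5, 5.9, 9.4, 7.6, 6.9, 10.3])

diff = poc - lab
mean_bias = diff.mean()                 # negative => device reads low
sd_diff = diff.std(ddof=1)
loa = (mean_bias - 1.96 * sd_diff, mean_bias + 1.96 * sd_diff)
se_bias = sd_diff / np.sqrt(len(diff))  # per-study precision; the basis for
                                        # weighting when pooling across studies
print(f"mean bias = {mean_bias:.2f}%, 95% LoA = ({loa[0]:.2f}, {loa[1]:.2f}), "
      f"SE = {se_bias:.2f}%")
```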
3.
Br J Clin Pharmacol; 79(5): 733-43, 2015 May.
Article in English | MEDLINE | ID: mdl-25377481

ABSTRACT

AIMS: Although there are reports that β-adrenoceptor antagonists (beta-blockers) and diuretics can affect glycaemic control in people with diabetes mellitus, there is no clear information on how blood glucose concentrations may change and by how much. We report results from a systematic review to quantify the effects of these antihypertensive drugs on glycaemic control in adults with established diabetes.

METHODS: We systematically reviewed the literature to identify randomized controlled trials in which glycaemic control was studied in adults with diabetes taking either beta-blockers or diuretics. We combined data on HbA1c and fasting blood glucose using fixed effects meta-analysis.

RESULTS: From 3864 papers retrieved, we found 10 studies of beta-blockers and 12 studies of diuretics to include in the meta-analysis. One study included both comparisons, giving 21 included reports in total. Beta-blockers increased fasting blood glucose concentrations by 0.64 mmol/L (95% CI 0.24, 1.03) and diuretics by 0.77 mmol/L (95% CI 0.14, 1.39) compared with placebo. Effect sizes were largest in trials of non-selective beta-blockers (1.33, 95% CI 0.72, 1.95) and thiazide diuretics (1.69, 95% CI 0.60, 2.69). Beta-blockers increased HbA1c concentrations by 0.75% (95% CI 0.30, 1.20) and diuretics by 0.24% (95% CI -0.17, 0.65) compared with placebo. In three trials, there was no significant difference in the number of hypoglycaemic events between beta-blockers and placebo.

CONCLUSIONS: Randomized trials suggest that thiazide diuretics and non-selective beta-blockers increase fasting blood glucose and HbA1c concentrations in patients with diabetes by moderate amounts. These data will inform prescribing and monitoring of beta-blockers and diuretics in patients with diabetes.


Subject(s)
Adrenergic beta-Antagonists/adverse effects; Blood Glucose/analysis; Diabetes Mellitus/blood; Diabetes Mellitus/drug therapy; Diuretics/adverse effects; Glycated Hemoglobin/analysis; Adrenergic beta-Antagonists/administration & dosage; Adrenergic beta-Antagonists/therapeutic use; Blood Pressure/drug effects; Diuretics/administration & dosage; Diuretics/therapeutic use; Humans; Hypertension/blood; Hypertension/complications; Hypertension/drug therapy; Randomized Controlled Trials as Topic
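
The abstract reports fixed effects meta-analysis of mean differences in fasting blood glucose. A minimal inverse-variance sketch with made-up trial summaries (not the included trials) is shown below.

```python
import numpy as np

# Hypothetical per-trial mean differences in fasting glucose (mmol/L,
# drug minus placebo) with standard errors; not the trials in the review.
md = np.array([0.55, 0.80, 1.30, 0.40, 0.70])
se = np.array([0.30, 0.25, 0.45, 0.35, 0.28])

w = 1.0 / se**2                       # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)   # fixed-effect pooled mean difference
se_pooled = np.sqrt(1.0 / np.sum(w))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(f"pooled MD = {pooled:.2f} mmol/L (95% CI {ci[0]:.2f}, {ci[1]:.2f})")
```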
4.
JMIR Form Res; 8: e39211, 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38175696

ABSTRACT

BACKGROUND: There is substantial evidence on the reliability of self-reported running distance and of GPS wearable technology separately, but no studies have directly compared the reliability of participant self-report against GPS wearable technology. There is also a gap in the sports science and medical research literature owing to a paucity of reliability studies assessing self-reported running pace.

OBJECTIVE: The purpose of this study was to assess the reliability of weekly self-reported running distance and pace compared to a commercial GPS fitness watch, stratified by sex and age. These data will give clinicians and sports researchers insight into the reliability of runners' self-reported pace, which may improve training designs and rehabilitation prescriptions.

METHODS: A prospective study of recreational runners was performed. Weekly running distance and average running pace were captured through self-report and a fitness watch. Baseline characteristics collected included age and sex. Intraclass correlation coefficients were calculated for weekly running distance and running pace for self-report and watch data. Bland-Altman plots were used to assess any systematic measurement error. Analyses were then stratified by sex and age.

RESULTS: Younger runners showed better weekly distance reliability (median 0.93, IQR 0.92-0.94). All ages demonstrated similar running pace reliability. Results exhibited no discernible systematic bias.

CONCLUSIONS: Weekly self-report demonstrated good reliability for running distance and moderate reliability for running pace in comparison with the watch data. Similar reliability was observed for male and female participants. Younger runners demonstrated better running distance reliability, but all age groups exhibited similar pace reliability. Running pace should potentially be monitored through technological means to increase precision.
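
Reliability here is summarised with intraclass correlation coefficients comparing self-report against the watch. The sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measurement) from hypothetical weekly distances; the study does not state which ICC form was used, so this choice is an assumption for illustration.

```python
import numpy as np

# Hypothetical weekly running distance (km) per runner (rows) measured by
# self-report and by the GPS watch (columns).
x = np.array([
    [32.0, 30.5],
    [18.0, 19.2],
    [45.0, 44.1],
    [25.0, 27.3],
    [10.0,  9.4],
    [60.0, 57.8],
])
n, k = x.shape
grand = x.mean()
row_means = x.mean(axis=1)
col_means = x.mean(axis=0)

msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects MS
msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-methods MS
sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
mse = sse / ((n - 1) * (k - 1))                         # residual MS

# ICC(2,1): two-way random effects, absolute agreement, single measurement
icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc:.3f}")
```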

5.
Med Sci Sports Exerc; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38768076

ABSTRACT

PURPOSE: Step count is an intuitive measure of physical activity frequently quantified in health-related studies; however, accurate step counting is difficult in the free-living environment, with error routinely above 20% in wrist-worn devices against camera-annotated ground truth. This study aims to describe the development and validation of a step count derived from a wrist-worn accelerometer and to assess its association with cardiovascular and all-cause mortality in a large prospective cohort.

METHODS: We developed and externally validated a self-supervised machine learning step detection model, trained on an open-source, step-annotated free-living dataset. Thirty-nine individuals with free-living, ground-truth-annotated step counts were used for model development. An open-source dataset with 30 individuals was used for external validation. Epidemiological analysis was performed using 75,263 UK Biobank participants without prevalent cardiovascular disease (CVD) or cancer. Cox regression was used to test the association of daily step count with fatal CVD and all-cause mortality after adjustment for potential confounders.

RESULTS: The algorithm substantially outperformed reference models (free-living mean absolute percent error of 12.5%, versus 65-231%). Our data indicate an inverse dose-response association, in which taking 6,430-8,277 daily steps was associated with 37% [25-48%] and 28% [20-35%] lower risk of fatal CVD and all-cause mortality, respectively, up to seven years later, compared with those taking fewer steps each day.

CONCLUSIONS: We have developed an open and transparent method that markedly improves the measurement of steps in large-scale wrist-worn accelerometer datasets. The application of this method demonstrated the expected associations with CVD and all-cause mortality, indicating excellent face validity. This reinforces public health messaging on increasing physical activity and can help lay the groundwork for the inclusion of target step counts in future public health guidelines.
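
The epidemiological analysis is a Cox regression of daily step count against mortality. The sketch below fits that kind of model to simulated data using the lifelines package (an assumed tool, not necessarily what the authors used); step counts, confounders and follow-up are all synthetic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumes the lifelines package is installed

rng = np.random.default_rng(0)
n = 2000

# Synthetic cohort: daily step count plus two stand-in confounders.
df = pd.DataFrame({
    "steps_1000s": rng.normal(7.5, 2.5, n).clip(1, 20),  # daily steps (thousands)
    "age": rng.normal(62, 8, n),
    "male": rng.integers(0, 2, n),
})
# Simulate follow-up with a higher hazard at lower step counts (illustrative only).
hazard = 0.002 * np.exp(0.04 * (df["age"] - 62) - 0.10 * (df["steps_1000s"] - 7.5))
time_to_event = rng.exponential(1.0 / hazard)
df["time"] = np.minimum(time_to_event, 7.0)          # administrative censoring at 7 y
df["event"] = (time_to_event <= 7.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")  # adjusts for all other columns
cph.print_summary()  # hazard ratio per 1,000 extra daily steps, adjusted for age/sex
```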

6.
medRxiv; 2023 Feb 22.
Article in English | MEDLINE | ID: mdl-37205346

ABSTRACT

Background: Step count is an intuitive measure of physical activity frequently quantified in a range of health-related studies; however, accurate quantification of step count can be difficult in the free-living environment, with step counting error routinely above 20% in both consumer and research-grade wrist-worn devices. This study aims to describe the development and validation of step count derived from a wrist-worn accelerometer and to assess its association with cardiovascular and all-cause mortality in a large prospective cohort study.

Methods: We developed and externally validated a hybrid step detection model that involves self-supervised machine learning, trained on a new ground-truth-annotated, free-living step count dataset (OxWalk, n=39, aged 19-81) and tested against other open-source step counting algorithms. This model was applied to ascertain daily step counts from the raw wrist-worn accelerometer data of 75,493 UK Biobank participants without a prior history of cardiovascular disease (CVD) or cancer. Cox regression was used to obtain hazard ratios and 95% confidence intervals for the association of daily step count with fatal CVD and all-cause mortality after adjustment for potential confounders.

Findings: The novel step algorithm demonstrated a mean absolute percent error of 12.5% in free-living validation, detecting 98.7% of true steps and substantially outperforming other recent wrist-worn, open-source algorithms. Our data are indicative of an inverse dose-response association, where, for example, taking 6,596 to 8,474 steps per day was associated with a 39% [24-52%] and 27% [16-36%] lower risk of fatal CVD and all-cause mortality, respectively, compared to those taking fewer steps each day.

Interpretation: An accurate measure of step count was ascertained using a machine learning pipeline that demonstrates state-of-the-art accuracy in internal and external validation. The expected associations with CVD and all-cause mortality indicate excellent face validity. This algorithm can be used widely for other studies that have utilised wrist-worn accelerometers, and an open-source pipeline is provided to facilitate implementation.
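
The paper's detector is a self-supervised machine learning model; reproducing it is out of scope here. The sketch below is only a crude peak-counting baseline of the kind such models are benchmarked against, applied to a simulated wrist acceleration signal.

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(acc_xyz: np.ndarray, fs: float = 100.0) -> int:
    """Crude peak-counting baseline on the acceleration vector magnitude.

    acc_xyz: (n_samples, 3) wrist accelerometer signal in g. This is a
    simplified stand-in for the paper's self-supervised detector.
    """
    vm = np.linalg.norm(acc_xyz, axis=1) - 1.0       # remove ~1 g gravity offset
    # Require a minimum prominence and >= 0.3 s between candidate steps.
    peaks, _ = find_peaks(vm, prominence=0.1, distance=int(0.3 * fs))
    return len(peaks)

# Toy signal: 10 s of simulated walking at ~2 steps per second.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
vm_walk = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t)    # magnitude along one axis
acc = np.column_stack([vm_walk, np.zeros_like(t), np.zeros_like(t)])
print(count_steps(acc, fs))                          # expect roughly 20 steps
```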

7.
Pain; 163(11): 2103-2111, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-35297813

ABSTRACT

Placebos and their beneficial clinical and psychological effects are well researched, but nocebo effects receive far less attention, despite being highly undesirable. The aim of this restricted scoping review was to examine how nocebo effects are represented in the biomedical literature and to identify the trends and gaps in existing knowledge. After searching 5 biomedical databases and 2 clinical trials registries (from their inception to December 23, 2020) for articles on nocebo effects or negative placebo effects, 1161 eligible publications were identified. The 2 main publication types were nonsystematic reviews (37.7%) and primary research studies (35.6%); only 85 publications (7.3%) were systematic reviews and meta-analyses. The nonsystematic reviews, many of them heavily opinion-based, may contribute to the amplification of narratives, attitudes, and beliefs about nocebo effects that do not objectively reflect the primary research. The primary research articles often used nocebo effects to explain results, rather than as the primary phenomenon under investigation. Most publications were concerned with both positive and negative placebo effects, rather than just nocebo effects. Over half of the abstracts (52.8%) were in the field of neurology, psychiatry, psychology, or neuroscience. The nocebo effect was most frequently investigated in the context of pain. Studies were almost exclusively in adults and more often in healthy participants than in patients. In conclusion, in the biomedical literature there is an overabundance of nonsystematic reviews and expert opinions and a lack of primary research and high-quality systematic reviews and meta-analyses specifically dealing with nocebo effects.


Subject(s)
Nocebo Effect; Adult; Humans; Healthy Volunteers; Pain; Placebo Effect
8.
BJGP Open; 4(1), 2020.
Article in English | MEDLINE | ID: mdl-32127362

ABSTRACT

BACKGROUND: GPs prescribe multiple long-term treatments to their patients. Shared clinical decision-making requires an understanding of the absolute benefits and harms of individual treatments. International evidence shows that doctors' knowledge of treatment effects is poor but, to the authors' knowledge, this has not been researched among GPs in the UK.

AIM: To measure the level and range of quantitative understanding of the benefits and harms of treatments for common long-term conditions (LTCs) among GPs.

DESIGN & SETTING: An online cross-sectional survey was distributed to GPs in the UK.

METHOD: Participants were asked to estimate the percentage absolute risk reduction or increase conferred by 13 interventions across 10 LTCs on 17 important outcomes. Responses were collated and presented in a novel graphic format to allow detailed visualisation of the findings. Descriptive statistical analysis was performed.

RESULTS: A total of 443 responders were included in the analysis. Most demonstrated poor (and in some cases very poor) knowledge of the absolute benefits and harms of treatments. Overall, an average of 10.9% of responses were correct allowing a ±1% margin in absolute risk estimates, and 23.3% allowing a ±3% margin. Of the responses, 87.7% overestimated and 8.9% underestimated treatment effects. There was no tendency to differentially overestimate benefits and underestimate harms. Overall, 64.8% of GPs self-reported 'low' to 'very low' confidence in their knowledge.

CONCLUSION: GPs' knowledge of the absolute benefits and harms of treatments is poor, with inaccuracies of a magnitude likely to meaningfully affect clinical decision-making and impede conversations with patients regarding treatment choices.
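
Responses were scored as correct if they fell within ±1% (or ±3%) of the reference absolute risk value. Below is a toy scoring sketch with hypothetical estimates and an assumed "true" value; classifying over- and underestimates relative to the ±1% margin is an assumption about the scoring rule, not the study's documented procedure.

```python
import numpy as np

# Hypothetical GP estimates (%) of absolute risk reduction for one
# treatment/outcome pair, scored against an assumed reference value.
true_arr = 3.0                      # assumed true absolute risk reduction (%)
estimates = np.array([10.0, 5.0, 3.5, 2.0, 25.0, 4.0, 1.0, 3.0])

within_1 = np.mean(np.abs(estimates - true_arr) <= 1.0) * 100
within_3 = np.mean(np.abs(estimates - true_arr) <= 3.0) * 100
overest = np.mean(estimates > true_arr + 1.0) * 100    # beyond the +1% margin
underest = np.mean(estimates < true_arr - 1.0) * 100   # beyond the -1% margin

print(f"correct within ±1%: {within_1:.0f}%, within ±3%: {within_3:.0f}%, "
      f"overestimates: {overest:.0f}%, underestimates: {underest:.0f}%")
```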

9.
F1000Res; 8: 1618, 2019.
Article in English | MEDLINE | ID: mdl-36225973

ABSTRACT

Background: Evidence for kidney function monitoring intervals in primary care is weak and based mainly on expert opinion. In the absence of trials of monitoring strategies, an approach that combines a model of the natural history of kidney function over time with a cost-effectiveness analysis offers the most feasible way to compare the effects of monitoring under a variety of policies. This study aimed to create a model of kidney disease progression using routinely collected measures of kidney function.

Methods: This is an open cohort study of patients aged ≥18 years, registered at 643 UK general practices contributing to the Clinical Practice Research Datalink between 1 April 2005 and 31 March 2014. At study entry, no patients were kidney transplant donors or recipients, pregnant, or on dialysis. Hidden Markov models for estimated glomerular filtration rate (eGFR) stage progression were fitted to four patient cohorts defined by baseline albuminuria stage, adjusted for sex and history of heart failure, cancer, hypertension and diabetes, and annually updated for age.

Results: Of 1,973,068 patients, 1,921,949 had no recorded urine albumin at baseline, 37,947 had normoalbuminuria (<3 mg/mmol), 10,248 had microalbuminuria (3-30 mg/mmol), and 2,924 had macroalbuminuria (>30 mg/mmol). Estimated annual transition probabilities were 0.75-1.3%, 1.5-2.5%, 3.4-5.4% and 3.1-11.9% for each cohort, respectively. Misclassification of eGFR stage was estimated to occur in 12.1% (95% CI: 11.9-12.2%) to 14.7% (95% CI: 14.1-15.3%) of tests. Male gender, cancer, heart failure and age were independently associated with declining renal function, whereas the impact of raised blood pressure and glucose on renal function was entirely predicted by albuminuria.

Conclusions: True kidney function deteriorates slowly over time, declining more sharply with elevated urine albumin, increasing age, heart failure, cancer and male gender. Consecutive eGFR measurements should be interpreted with caution, as observed improvement or deterioration may be due to misclassification.
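
The progression model is a hidden Markov model in which the true eGFR stage evolves annually and the observed stage is subject to test misclassification. The sketch below shows that structure with a scaled forward pass for the likelihood of an observed stage sequence; all transition, misclassification and initial-stage probabilities are invented placeholders, not the fitted estimates.

```python
import numpy as np

# Illustrative hidden Markov model for annual eGFR stage: hidden true stage,
# observed stage subject to misclassification. All numbers are placeholders.
stages = ["G1-2", "G3a", "G3b", "G4-5"]
A = np.array([          # annual transition probabilities between true stages
    [0.97, 0.02, 0.007, 0.003],
    [0.03, 0.93, 0.03,  0.01],
    [0.00, 0.04, 0.92,  0.04],
    [0.00, 0.00, 0.02,  0.98],
])
B = np.array([          # P(observed stage | true stage): ~12-15% misclassified
    [0.88, 0.10, 0.02, 0.00],
    [0.07, 0.86, 0.06, 0.01],
    [0.01, 0.07, 0.86, 0.06],
    [0.00, 0.01, 0.08, 0.91],
])
pi = np.array([0.75, 0.15, 0.07, 0.03])   # initial true-stage distribution

def forward_loglik(obs):
    """Log-likelihood of an observed stage sequence under the HMM (scaled forward pass)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# A patient observed in G3a, apparently back to G1-2, then G3a again — the kind of
# sequence that misclassification, rather than true improvement, can explain.
print(forward_loglik([1, 0, 1]))
```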

10.
BMJ Open; 9(6): e028062, 2019 Jun 12.
Article in English | MEDLINE | ID: mdl-31196901

ABSTRACT

OBJECTIVES: To characterise serum creatinine and urinary protein testing in UK general practices from 2005 to 2013 and to examine how the frequency of testing varies across demographic factors, with the presence of chronic conditions and with the prescribing of drugs for which kidney function monitoring is recommended.

DESIGN: Retrospective open cohort study.

SETTING: Routinely collected data from 630 UK general practices contributing to the Clinical Practice Research Datalink.

PARTICIPANTS: 4 573 275 patients aged over 18 years registered at up-to-standard practices between 1 April 2005 and 31 March 2013. At study entry, no patients were kidney transplant donors or recipients, pregnant or on dialysis.

PRIMARY OUTCOME MEASURES: The rate of serum creatinine and urinary protein testing per year and the percentage of patients with isolated and repeated testing per year.

RESULTS: The rate of serum creatinine testing increased linearly across all age groups. The rate of proteinuria testing increased sharply in the 2009-2010 financial year, but only for patients aged 60 years or over. For patients with established chronic kidney disease (CKD), creatinine testing increased rapidly in 2006-2007 and 2007-2008, and proteinuria testing in 2009-2010, reflecting the introduction of Quality and Outcomes Framework indicators. In adjusted analyses, CKD Read codes were associated with up to a twofold increase in the rate of serum creatinine testing, while other chronic conditions and potentially nephrotoxic drugs were associated with up to a sixfold increase. Regional variation in serum creatinine testing reflected country boundaries.

CONCLUSIONS: Over a nine-year period, there have been increases in the numbers of patients having kidney function tests annually and in the frequency of testing. Changes in the recommended management of CKD in primary care were the primary determinant, and the increases persist even after controlling for demographic and patient-level factors. Future studies should address whether increased testing has led to better outcomes.


Subject(s)
Creatinine/metabolism; General Practice/statistics & numerical data; Kidney Function Tests/statistics & numerical data; Proteinuria/diagnosis; Renal Insufficiency, Chronic/diagnosis; Adolescent; Adult; Aged; Aged, 80 and over; Biomarkers/metabolism; Facilities and Services Utilization; Female; General Practice/standards; Humans; Male; Middle Aged; Outcome Assessment, Health Care; Quality Assurance, Health Care; Retrospective Studies; United Kingdom; Young Adult
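
The primary outcome here is a testing rate per year. The sketch below shows one way to compute tests per person-year by calendar year and CKD-code status, and a crude rate ratio, from invented patient-level rows; the published adjusted analyses would instead use regression with many covariates.

```python
import pandas as pd

# Hypothetical patient-level rows: person-years contributed and number of
# serum creatinine tests in a given financial year.
df = pd.DataFrame({
    "year":         [2006, 2006, 2006, 2010, 2010, 2010],
    "ckd_code":     [0,    1,    0,    0,    1,    1],
    "person_years": [1.0,  1.0,  0.5,  1.0,  0.8,  1.0],
    "n_tests":      [1,    3,    0,    2,    4,    3],
})

agg = df.groupby(["year", "ckd_code"])[["n_tests", "person_years"]].sum()
agg["rate_per_py"] = agg["n_tests"] / agg["person_years"]   # tests per person-year
print(agg["rate_per_py"])

r2010 = agg.loc[2010, "rate_per_py"]                        # rates in 2010 by CKD code
print("crude rate ratio (CKD code vs none, 2010):", r2010.loc[1] / r2010.loc[0])
```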
11.
Diagn Progn Res; 2: 13, 2018.
Article in English | MEDLINE | ID: mdl-31093562

ABSTRACT

BACKGROUND: Competing risks occur when populations may experience outcomes that either preclude or alter the probability of experiencing the main study outcome(s). Many standard survival analysis methods do not account for competing risks. We used mortality risk in people with diabetes, with and without albuminuria, as a case study to investigate the impact of competing risks on measures of absolute and relative risk.

METHODS: A population with type 2 diabetes was identified in the Clinical Practice Research Datalink as part of a historical cohort study. Patients were followed for up to 9 years. To quantify differences in absolute risk estimates of cardiovascular and cancer mortality, standard (Kaplan-Meier) estimates were compared to competing-risks-adjusted (cumulative incidence competing risk, CICR) estimates. To quantify differences in measures of association, regression coefficients for the effect of albuminuria on the relative hazard of each outcome were compared between standard cause-specific hazard (CSH) models (Cox proportional hazards regression) and two competing risk models: the unstratified Lunn-McNeil model, which estimates CSH, and the Fine-Gray model, which estimates the subdistribution hazard (SDH).

RESULTS: In patients with normoalbuminuria, standard and competing-risks-adjusted estimates for cardiovascular mortality were 11.1% (95% confidence interval (CI) 10.8-11.5%) and 10.2% (95% CI 9.9-10.5%), respectively. For cancer mortality, these figures were 8.0% (95% CI 7.7-8.3%) and 7.2% (95% CI 6.9-7.5%). In patients with albuminuria, standard and competing-risks-adjusted estimates for cardiovascular mortality were 21.8% (95% CI 20.9-22.7%) and 18.5% (95% CI 17.8-19.3%), respectively. For cancer mortality, these figures were 10.7% (95% CI 10.0-11.5%) and 8.6% (8.1-9.2%). For the effect of albuminuria on cardiovascular mortality, regression coefficient values from multivariable standard CSH, competing risks CSH, and competing risks SDH models were 0.557 (95% CI 0.491-0.623), 0.561 (95% CI 0.494-0.628), and 0.456 (95% CI 0.389-0.523), respectively. For the effect of albuminuria on cancer mortality, these values were 0.237 (95% CI 0.148-0.326), 0.244 (95% CI 0.154-0.333), and 0.102 (95% CI 0.012-0.192), respectively.

CONCLUSIONS: Studies of absolute risk should use methods that adjust for competing risks, such as the CICR estimator, to avoid overstating risk. Studies of relative risk should consider carefully which measure of association is most appropriate for the research question.
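
The contrast in this paper is between 1 - Kaplan-Meier, which overstates absolute risk when competing deaths are treated as censoring, and the cumulative incidence competing risk (Aalen-Johansen) estimator. The sketch below reproduces that contrast on simulated data with the lifelines package; the API usage and all numbers are assumptions for illustration, not the study's data or code.

```python
import numpy as np
from lifelines import AalenJohansenFitter, KaplanMeierFitter  # assumes lifelines installed

rng = np.random.default_rng(1)
n = 5000
# Simulate two competing causes of death (1 = cardiovascular, 2 = cancer).
t_cvd = rng.exponential(30, n)
t_cancer = rng.exponential(40, n)
t_cens = rng.uniform(5, 9, n)                     # administrative censoring at 5-9 y
time = np.minimum.reduce([t_cvd, t_cancer, t_cens])
event = np.select([t_cvd == time, t_cancer == time], [1, 2], default=0)

# Naive approach: 1 - KM, treating cancer deaths as censored (overstates risk).
km = KaplanMeierFitter().fit(time, event_observed=(event == 1))
naive_risk = 1 - km.survival_function_.iloc[-1, 0]

# Competing-risks approach: Aalen-Johansen cumulative incidence for CVD death.
aj = AalenJohansenFitter().fit(time, event, event_of_interest=1)
cif_risk = aj.cumulative_density_.iloc[-1, 0]

print(f"1 - KM estimate of CVD mortality:          {naive_risk:.3f}")
print(f"Aalen-Johansen (CICR) estimate, same data: {cif_risk:.3f}")
```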

12.
BMJ; 361: k1450, 2018 May 21.
Article in English | MEDLINE | ID: mdl-29785952

ABSTRACT

OBJECTIVE: To assess the diagnostic accuracy of point-of-care natriuretic peptide tests in patients with chronic heart failure, with a focus on the ambulatory care setting.

DESIGN: Systematic review and meta-analysis.

DATA SOURCES: Ovid Medline, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Embase, Health Technology Assessment Database, Science Citation Index, and Conference Proceedings Citation Index, until 31 March 2017.

STUDY SELECTION: Eligible studies evaluated point-of-care natriuretic peptide testing (B-type natriuretic peptide (BNP) or N-terminal fragment pro-B-type natriuretic peptide (NTproBNP)) against any relevant reference standard, including echocardiography, clinical examination, or combinations of these, in humans. Studies were excluded if the reported data were insufficient to construct 2×2 tables. No language restrictions were applied.

RESULTS: 42 publications of 39 individual studies met the inclusion criteria, and 40 publications of 37 studies were included in the analysis. Of the 37 studies, 30 evaluated BNP point-of-care testing and seven evaluated NTproBNP testing. 15 studies were done in ambulatory care settings in populations with a low prevalence of chronic heart failure. Five studies were done in primary care. At thresholds of 100 pg/mL or above, the sensitivity of BNP, measured with the point-of-care index device Triage, was generally high and was 0.95 (95% confidence interval 0.90 to 0.98) at 100 pg/mL. At thresholds below 100 pg/mL, sensitivity ranged from 0.46 to 0.97 and specificity from 0.31 to 0.98. Primary care studies that used NTproBNP testing reported a sensitivity of 0.99 (0.57 to 1.00) and a specificity of 0.60 (0.44 to 0.74) at 135 pg/mL. No statistically significant difference in diagnostic accuracy was found between point-of-care BNP and NTproBNP tests.

CONCLUSIONS: Given the lack of studies in primary care, the paucity of NTproBNP data, and the potential methodological limitations of these studies, large-scale trials in primary care are needed to assess the role of point-of-care natriuretic peptide testing and to clarify appropriate thresholds to improve care of patients with suspected or chronic heart failure.


Subject(s)
Ambulatory Care; Atrial Natriuretic Factor/blood; Heart Failure/blood; Peptide Fragments/blood; Point-of-Care Testing/standards; Biomarkers/blood; Chronic Disease; Heart Failure/physiopathology; Humans; Reproducibility of Results; Sensitivity and Specificity; Technology Assessment, Biomedical
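
Diagnostic accuracy studies like those included here reduce to 2×2 tables at a given threshold. The sketch below computes sensitivity and specificity with simple Wald confidence intervals from hypothetical counts; the review's pooled estimates come from meta-analytic models rather than a single 2×2 table.

```python
import math

def sens_spec(tp, fp, fn, tn, z=1.96):
    """Sensitivity and specificity with Wald 95% confidence intervals
    from a 2x2 table of index test vs reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    se_sens = math.sqrt(sens * (1 - sens) / (tp + fn))
    se_spec = math.sqrt(spec * (1 - spec) / (tn + fp))
    return (sens, sens - z * se_sens, sens + z * se_sens), \
           (spec, spec - z * se_spec, spec + z * se_spec)

# Hypothetical counts for a BNP point-of-care test at a 100 pg/mL threshold
# against an echocardiography-based reference standard.
(sens, *sens_ci), (spec, *spec_ci) = sens_spec(tp=95, fp=60, fn=5, tn=140)
print(f"sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f}), "
      f"specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```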
13.
Trials; 18(1): 323, 2017 Jul 12.
Article in English | MEDLINE | ID: mdl-28701195

ABSTRACT

BACKGROUND: Temporal changes in the placebo arm of randomized controlled trials (RCTs) have not been thoroughly investigated, despite the fact that the results of RCTs depend on the comparison between arms.

METHODS: In this update of our earlier systematic review and meta-analysis, we set out to investigate the effect of assessment time and number of visits on the magnitude of change from baseline in the placebo arm of these trials. We used linear mixed-effects models to account for within-trial correlations.

RESULTS: Across all 47 trials, the magnitude of response in the placebo arm did not change with time (β = -0.0070, 95% CI -0.024, 0.010) or visit (β = -0.033, 95% CI -0.082, 0.017) and remained significantly different from baseline for at least 12 months or seven follow-up visits. Change in the placebo arm in trials with subjective outcomes was large (β0 = 0.68, 95% CI 0.53, 0.82) and relatively constant across time (β = -0.0042, 95% CI -0.024, 0.016) and visit (β = -0.029, 95% CI -0.089, 0.031), whereas in trials with objective outcomes the response was smaller (β0 = 0.28, 95% CI 0.11, 0.46) and diminished with time (β = -0.030, 95% CI -0.050, -0.010), but not with visit (β = -0.099, 95% CI -0.30, 0.11). For trials with assessed outcomes, there was no significant effect of time (β = -0.0071, 95% CI -0.026, 0.011) or visit (β = -0.032, 95% CI -0.33, 0.26); however, these results should be interpreted with caution due to the small number of studies and the high clinical heterogeneity between studies. In trials with pain as an outcome, the improvement was significant (β0 = 0.91, 95% CI 0.75, 1.07), but there was no effect of time (β = -0.013, 95% CI -0.06, 0.03) or visit (β = -0.045, 95% CI -0.16, 0.069), and pain ratings remained significantly different from baseline for 12 months or seven visits.

CONCLUSIONS: These results are consistent with our previous findings. In trials with subjective outcomes, the response in the placebo arm remains large and relatively constant for at least a year, which is notable given that this is the effect of a single application of an invasive procedure. The lack of effect of time and visit number on subjective outcomes raises further questions regarding whether the observed response is the result of a placebo effect or the result of bias.


Subject(s)
Placebo Effect; Randomized Controlled Trials as Topic/standards; Research Design/standards; Humans; Time Factors; Treatment Outcome
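
The analysis relates change from baseline in the placebo arm to assessment time with linear mixed-effects models, handling within-trial correlation through trial-level random effects. The sketch below fits that kind of model to simulated trial-level data using statsmodels; the data, variable names and random-intercept-only structure are assumptions for illustration, not the review's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf  # assumes statsmodels is installed

rng = np.random.default_rng(2)
rows = []
for trial in range(30):                       # 30 hypothetical placebo arms
    trial_effect = rng.normal(0.6, 0.2)       # trial-level random intercept
    for visit, months in enumerate(rng.choice([1, 3, 6, 12], 4, replace=False), 1):
        change = trial_effect - 0.005 * months + rng.normal(0, 0.1)
        rows.append({"trial": trial, "months": months, "visit": visit,
                     "smd_change": change})   # standardised change from baseline
df = pd.DataFrame(rows)

# Fixed effect of assessment time, random intercept per trial to account for
# within-trial correlation of repeated assessments.
model = smf.mixedlm("smd_change ~ months", df, groups=df["trial"])
result = model.fit()
print(result.summary())   # the 'months' slope plays the role of the reported beta
```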
14.
Trials; 17(1): 589, 2016 Dec 12.
Article in English | MEDLINE | ID: mdl-27955685

ABSTRACT

BACKGROUND: Understanding changes in the placebo arm is essential for the correct design and interpretation of randomized controlled trials (RCTs). It is assumed that the placebo response, defined as the total improvement in the placebo arm of surgical trials, is large; however, its precise magnitude and properties remain to be characterized. To the best of our knowledge, the temporal changes in the placebo arm have not been investigated. The aim of this paper was to determine, in surgical RCTs, the magnitude of the placebo response and how it is affected by duration of follow-up.

METHODS: The MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials and ClinicalTrials.gov databases were searched from their inception to 20 October 2015 for studies comparing the efficacy of a surgical intervention with placebo. Inclusion was not limited to any particular condition, intervention, outcome or patient population. The magnitude of placebo response was estimated using standardized mean differences (SMDs). Study estimates were pooled using random effects meta-analysis. Potential sources of heterogeneity were evaluated using stratification and meta-regression.

RESULTS: Database searches returned 88 studies, but for 41 studies SMDs could not be calculated, leaving 47 trials (involving 1744 participants) eligible for inclusion. There were no temporal changes in placebo response within the analysed trials. Meta-regression analysis showed that the duration of follow-up did not have a significant effect on the magnitude of the placebo response and that the strongest predictor of placebo response was the subjectivity of the outcome. The pooled effect in the placebo arm of studies with subjective outcomes was large (0.64, 95% CI 0.5 to 0.8) and remained significantly different from zero regardless of the duration of follow-up, whereas for objective outcomes the effect was small (0.11, 95% CI 0.04 to 0.26) or non-significant across all time points.

CONCLUSIONS: This is the first study to investigate the temporal changes of the placebo response in surgical trials and the first to investigate the sources of heterogeneity of the placebo response. Placebo response in surgical trials was large for subjective outcomes, persisting as a time-invariant effect throughout blinded follow-up. Therefore, placebo response cannot be minimized for these types of outcomes through their appraisal at alternative time points. The analyses suggest that objective outcomes may be preferable as trial end-points. Where subjective outcomes are of primary interest, a placebo arm is necessary to control for the placebo response.


Subject(s)
Placebo Effect; Randomized Controlled Trials as Topic/methods; Research Design; Surgical Procedures, Operative; Endpoint Determination; Humans; Time Factors; Treatment Outcome
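
The placebo response is summarised as a standardised mean difference per trial and pooled with random effects meta-analysis. The sketch below uses a simplified standardised mean change per placebo arm and DerSimonian-Laird pooling on invented summaries; the exact SMD formula and weighting used in the review may differ.

```python
import numpy as np

def smd_change(mean_change, sd_change, n):
    """Standardised mean change within a single (placebo) arm, with an
    approximate variance. A simplified formula for illustration only."""
    d = mean_change / sd_change
    j = 1 - 3 / (4 * n - 5)            # small-sample (Hedges) correction, df = n-1
    g = j * d
    var = 1 / n + g**2 / (2 * n)
    return g, var

# Hypothetical placebo-arm summaries (mean change, SD of change, n); not the review's data.
studies = [(1.3, 2.0, 40), (0.8, 1.5, 25), (1.6, 2.4, 60)]
g, v = zip(*[smd_change(*s) for s in studies])
g, v = np.array(g), np.array(v)

# DerSimonian-Laird random-effects pooling.
w_fixed = 1 / v
fixed_mean = np.sum(w_fixed * g) / np.sum(w_fixed)
q = np.sum(w_fixed * (g - fixed_mean) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(g) - 1)) / c)        # between-study variance estimate
w = 1 / (v + tau2)
pooled = np.sum(w * g) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"pooled SMD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```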