1 - 20 of 184
1.
J Clin Epidemiol ; 133: 111-120, 2021 05.
Article En | MEDLINE | ID: mdl-33515655

OBJECTIVES: To evaluate the design, methods, and reporting of impact studies of cardiovascular clinical prediction rules (CPRs). STUDY DESIGN AND SETTING: We conducted a systematic review. Impact studies of cardiovascular CPRs were identified by forward citation and electronic database searches. We categorized the design of impact studies as appropriate for randomized and nonrandomized experiments, excluding uncontrolled before-after studies. For impact studies with an appropriate study design, we assessed the quality of methods and reporting. We compared the quality of methods and reporting between impact and matched control studies. RESULTS: We found 110 impact studies of cardiovascular CPRs. Of these, 65 (59.1%) used inappropriate designs. Of the 45 impact studies with an appropriate design, 31 (68.9%) had a substantial risk of bias. The mean number of reporting domains that impact studies with an appropriate design adhered to was 10.2 of 21 (95% confidence interval, 9.3 to 11.1). The quality of methods and reporting was not clearly different between impact and matched control studies. CONCLUSION: We found that most impact studies either used an inappropriate study design, had a substantial risk of bias, or complied poorly with reporting guidelines. This appears to be a common feature of complex interventions. Users of CPRs should critically evaluate the evidence showing the effectiveness of CPRs.


Cardiovascular Diseases/therapy , Clinical Decision Rules , Comparative Effectiveness Research/statistics & numerical data , Comparative Effectiveness Research/standards , Decision Support Techniques , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/standards , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged
4.
J Comp Eff Res ; 8(9): 709-719, 2019 07.
Article En | MEDLINE | ID: mdl-31290682

Aim: For comparative effectiveness research to achieve its purpose, providers and patients must use research evidence to make medical decisions. Therefore, this study examined factors associated with evidence-based decision-making by patients and providers. Methods: Data were collected via cross-sectional online surveys of patients (n = 603) and providers (n = 628) between November 2011 and January 2012. Results: For both patients and providers, evidence-based medical decision-making is associated with perceptions, that is, some combination of self-efficacy, attitudes, and opinions. However, whereas knowledge is the most consistent factor associated with decision-making for providers, it is not associated with decision-making for patients at all. Conclusion: Efforts to promote evidence-based medical decision-making among patients and providers should focus on skills training to improve self-efficacy and on messages that highlight the benefits of patient engagement in medical decisions.


Comparative Effectiveness Research/organization & administration , Decision Making , Evidence-Based Practice/organization & administration , Patient Participation/methods , Adult , Age Factors , Aged , Attitude of Health Personnel , Comparative Effectiveness Research/standards , Cross-Sectional Studies , Evidence-Based Practice/standards , Female , Health Behavior , Health Knowledge, Attitudes, Practice , Humans , Male , Middle Aged , Self Efficacy , Sex Factors , Socioeconomic Factors
6.
Clin Pharmacol Ther ; 106(1): 103-115, 2019 07.
Article En | MEDLINE | ID: mdl-31025311

Real-world evidence provides important information about the effects of medicines in routine clinical practice. To engender trust that evidence generated for regulatory purposes is sufficiently valid, transparency in the reasoning that underlies study design decisions is critical. Building on existing guidance and frameworks, we developed the Structured Preapproval and Postapproval Comparative study design framework to generate valid and transparent real-world Evidence (SPACE) as a process for identifying design elements and minimal criteria for feasibility and validity concerns, and for documenting decisions. Starting with an articulated research question, we identify the key components of a randomized controlled trial needed to maximize validity and consider pragmatic choices when required. A causal diagram is used to justify the variables identified for confounding control, and key decisions, assumptions, and evidence are captured in a structured way. In this way, SPACE may improve dialogue and build trust among healthcare providers, patients, regulators, and researchers.
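
The causal-diagram step lends itself to a small worked illustration. The sketch below is not part of the SPACE framework itself; it simply encodes a hypothetical diagram with networkx and flags shared ancestors of treatment and outcome as candidate confounders. All variable names (statin, mi, and so on) are invented, and a full backdoor analysis would use a dedicated causal-inference library.

# Minimal sketch of encoding a causal diagram to justify a confounder set.
# The graph, variable names, and the shared-ancestor rule are illustrative only.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("age", "statin"), ("age", "mi"),          # age influences both treatment and outcome
    ("frailty", "statin"), ("frailty", "mi"),  # frailty is another common cause
    ("statin", "ldl"), ("ldl", "mi"),          # ldl is a mediator, not a confounder
    ("statin", "mi"),                          # direct treatment effect of interest
])

treatment, outcome = "statin", "mi"

# Shared ancestors of treatment and outcome are candidate confounders;
# mediators such as ldl are excluded because they do not precede treatment.
candidates = nx.ancestors(dag, treatment) & (nx.ancestors(dag, outcome) - {treatment})
print(sorted(candidates))  # ['age', 'frailty']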


Comparative Effectiveness Research/methods , Product Surveillance, Postmarketing/methods , Research Design , Causality , Comparative Effectiveness Research/standards , Confounding Factors, Epidemiologic , Decision Making , Humans , Product Surveillance, Postmarketing/standards , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/standards , Reproducibility of Results
7.
Ann Rheum Dis ; 78(4): 562-569, 2019 04.
Article En | MEDLINE | ID: mdl-30755417

OBJECTIVE: To assess to what extent time-dependent biases (ie, immortal time bias (ITB) and time-lag bias (TLB)) occur in the latest rheumatology observational studies, describe their main mechanisms and increase awareness of this topic. METHODS: We searched PubMed for observational studies on rheumatic diseases published in leading medical journals in the last 5 years. Only studies with a time-to-event analysis exploring the association of one or more interventional strategies with an outcome were included. Each study was labelled as free from bias, at risk of TLB, at risk of misclassified ITB if the period of immortal time was incorrectly attributed to an intervention group, or at risk of excluded ITB if the immortal time was discarded from the analysis. RESULTS: We included 78 papers. Most studies were performed in Europe or North America (46% each), were not industry funded (62%) and had a safety primary outcome (59%). In total, 13 (17%) studies were considered at risk of time-dependent biases. Among the studies at risk of ITB (n=8; 10%), 5 (6%) wrongly attributed the waiting time to receive treatment to the treatment exposure group, indicating misclassified ITB. Five (6%) studies were at risk of TLB: patients on conventional synthetic disease-modifying antirheumatic drugs (DMARDs; first-line drugs) were compared with patients on biologic DMARDs (second- or third-line drugs) without accounting for disease duration or prior medication use. CONCLUSIONS: One in six comparative effectiveness observational studies published in leading rheumatology journals is potentially flawed by time-dependent biases.
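
The misclassified-ITB mechanism described above can be made concrete with a small simulation. In the hedged sketch below there is no true treatment effect, yet attributing the pre-treatment waiting period to the treated group makes treatment look protective; splitting person-time at treatment start removes the artefact. All parameters and variable names are invented for illustration and are not taken from the reviewed studies.

# Illustrative simulation of immortal time bias.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
event_time = rng.exponential(scale=10.0, size=n)  # outcome hazard unaffected by treatment
wait_time = rng.exponential(scale=5.0, size=n)    # time until treatment would start
follow_up = 20.0

t_end = np.minimum(event_time, follow_up)         # follow-up ends at event or censoring
event = event_time <= follow_up
treated = wait_time < t_end                       # only patients who survive the wait get treated

def rate(events, person_time):
    return events / person_time

# Misclassified ITB: the whole follow-up of ever-treated patients, including the
# immortal waiting period, is counted as treated person-time.
rr_naive = rate(event[treated].sum(), t_end[treated].sum()) / \
           rate(event[~treated].sum(), t_end[~treated].sum())

# Correct handling: person-time before treatment start is counted as untreated.
pt_treated = (t_end[treated] - wait_time[treated]).sum()
pt_untreated = wait_time[treated].sum() + t_end[~treated].sum()
rr_correct = rate(event[treated].sum(), pt_treated) / \
             rate(event[~treated].sum(), pt_untreated)

# The naive rate ratio falls well below 1 despite no true effect;
# the correctly classified rate ratio is close to 1.
print(round(rr_naive, 2), round(rr_correct, 2))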


Comparative Effectiveness Research/methods , Observational Studies as Topic/methods , Rheumatic Diseases/therapy , Antirheumatic Agents/therapeutic use , Bias , Biological Products/therapeutic use , Comparative Effectiveness Research/standards , Humans , Observational Studies as Topic/standards , Research Design , Time Factors
8.
Nat Rev Clin Oncol ; 16(5): 312-325, 2019 05.
Article En | MEDLINE | ID: mdl-30700859

The use of data from the real world to address clinical and policy-relevant questions that cannot be answered using data from clinical trials is garnering increased interest. Indeed, data from cancer registries and linked treatment records can provide unique insights into patients, treatments and outcomes in routine oncology practice. In this Review, we explore the quality of real-world data (RWD), provide a framework for the use of RWD and draw attention to the methodological pitfalls inherent to using RWD in studies of comparative effectiveness. Randomized controlled trials and RWD remain complementary forms of medical evidence; studies using RWD should not be used as substitutes for clinical trials. The comparison of outcomes between nonrandomized groups of patients who have received different treatments in routine practice remains problematic. Accordingly, comparative effectiveness studies need to be designed and interpreted very carefully. With due diligence, RWD can be used to identify and close gaps in health care, offering the potential for short-term improvement in health-care systems by enabling them to achieve the achievable.


Comparative Effectiveness Research/standards , Neoplasms/therapy , Electronic Health Records , Humans , Randomized Controlled Trials as Topic , Registries
9.
Am J Phys Med Rehabil ; 98(3): 226-230, 2019 03.
Article En | MEDLINE | ID: mdl-30138127

In medical research, it is important to be able to examine whether there is a significant difference between two samples. Establishing an appropriate hypothesis is therefore a critical, basic step for the correct interpretation of results in inferential statistical analysis. It is important to note that the aim of hypothesis testing is not simply to "accept" or "reject" the null hypothesis but to gauge how likely the observed difference would be if the null hypothesis were true. Traditionally, the null hypothesis assumes that there is no true difference between the two groups. At the same time, it has become more difficult to develop new treatments that are better than the standard of care. This review article summarizes and explains the methodology of the different types of clinical trials with respect to the relevant basic statistical concepts and hypothesis testing.
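
As a concrete, hedged illustration of the basic two-sample comparison discussed above, the sketch below runs Welch's t-test on simulated data with scipy; the group labels, effect size, and sample sizes are invented for the example.

# Minimal two-sample hypothesis test on simulated data (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=50.0, scale=10.0, size=80)    # e.g. standard of care
treatment = rng.normal(loc=54.0, scale=10.0, size=80)  # e.g. new intervention

# Null hypothesis: no true difference in means between the two groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test

# The p-value is the probability of observing a difference at least this large
# if the null hypothesis were true; it does not measure the probability that
# either hypothesis is correct.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")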


Clinical Trials as Topic/standards , Comparative Effectiveness Research/standards , Physical and Rehabilitation Medicine/standards , Data Interpretation, Statistical , Humans , Research Design
10.
PLoS One ; 13(12): e0209869, 2018.
Article En | MEDLINE | ID: mdl-30592741

BACKGROUND: The Core Outcome Measures in Effectiveness Trials (COMET) database is a publicly available, searchable repository of published and ongoing core outcome set (COS) studies. An annual systematic review update is carried out to maintain the currency of database content. METHODS: The methods used in the fourth update of the systematic review followed the same approach used in the original review and previous updates. Studies were eligible for inclusion if they reported the development of a COS, regardless of any restrictions by age, health condition or setting. Searches were carried out in March 2018 to identify studies that had been published or indexed between January 2017 and the end of December 2017. RESULTS: Forty-eight new studies, describing the development of 56 COS, were included. There has been an increase in the number of studies clearly specifying the scope of the COS in terms of the population (n = 43, 90%) and intervention (n = 48, 100%) characteristics. Public participation has continued to rise, with over half (n = 27, 56%) of studies in the current review including input from members of the public. The rate of inclusion of all stakeholder groups has increased; in particular, participation from non-clinical research experts has risen from 32% (mean average in previous reviews) to 62% (n = 29). Input from participants located in Australasia (n = 17; 41%), Asia (n = 18; 44%), South America (n = 13; 32%) and Africa (n = 7; 17%) has increased since the previous reviews. CONCLUSION: This update saw a pronounced increase in the number of new COS identified compared with the previous three updates. There was an improvement in the reporting of the scope, stakeholder participants and methods used. Furthermore, there has been an increase in participation from Australasia, Asia, South America and Africa. These advancements reflect the efforts made in recent years to raise awareness about the need for COS development and uptake, as well as developments in COS methodology.


Comparative Effectiveness Research , Databases, Bibliographic , Animals , Comparative Effectiveness Research/methods , Comparative Effectiveness Research/standards , Comparative Effectiveness Research/trends , Humans
11.
Ethn Dis ; 28(Suppl 2): 357-364, 2018.
Article En | MEDLINE | ID: mdl-30202188

Objective: With internal validity being a central goal of designed experiments, we seek to elucidate how community partnered participatory research (CPPR) impacts the internal validity of public health comparative-effectiveness research. Methods: Community Partners in Care (CPIC), a study comparing a community-coalition intervention to direct technical assistance for disseminating depression care to vulnerable populations, is used to illustrate design choices developed with attention to core CPPR principles. The study-design process is reviewed retrospectively and evaluated based on the resulting covariate balance across intervention arms and on broader peer-review assessments. Contributions of the CPIC Council and the study's design committee are highlighted. Results: CPPR principles contributed to building consensus around the use of randomization, creating a sampling frame, specifying geographic boundaries delimiting the scope of the investigation, grouping similar programs into pairs or other small blocks of units, collaboratively choosing random-number-generator seeds to determine randomized intervention assignments, and addressing logistical constraints in field operations. Study protocols yielded samples that were well-balanced on background characteristics across intervention arms. CPIC has been recognized for scientific merit, has drawn attention from policymakers, and has fueled ongoing research collaborations. Conclusions: Creative and collaborative fulfillment of CPPR principles reinforced the internal validity of CPIC, strengthening the study's scientific rigor by engaging complementary areas of knowledge and expertise among members of the investigative team.
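
One of the design choices described above, randomizing matched pairs of programs with a collaboratively chosen seed and then checking covariate balance, can be sketched as follows. The site names, caseload covariate, and seed value are hypothetical and do not reproduce the CPIC protocol.

# Hedged sketch of pair-matched (blocked) randomization with a fixed seed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(20110514)  # seed agreed on collaboratively; value is illustrative

# Hypothetical sampling frame: programs grouped into matched pairs (blocks).
frame = pd.DataFrame({
    "program": [f"site_{i:02d}" for i in range(1, 11)],
    "pair": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "baseline_caseload": [120, 115, 300, 310, 80, 95, 210, 200, 150, 160],
}).sort_values("pair").reset_index(drop=True)

# Within each pair, randomly assign one program to each intervention arm.
arms = []
for _, block in frame.groupby("pair", sort=True):
    arms.extend(rng.permutation(["community_engagement", "technical_assistance"])[: len(block)])
frame["arm"] = arms

# Simple covariate balance check across arms.
print(frame.groupby("arm")["baseline_caseload"].mean())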


Community-Based Participatory Research , Comparative Effectiveness Research , Depression/therapy , Adult , Community-Based Participatory Research/methods , Community-Based Participatory Research/standards , Comparative Effectiveness Research/methods , Comparative Effectiveness Research/standards , Female , Health Services Research/organization & administration , Humans , Intersectoral Collaboration , Male , Medically Underserved Area , Public Health/methods , Reproducibility of Results , Research Design
12.
J Clin Hypertens (Greenwich) ; 20(7): 1096-1099, 2018 07.
Article En | MEDLINE | ID: mdl-30003697

Blood pressure (BP) is a vital sign and the essential measurement for the diagnosis of hypertension. Therefore, its accurate measurement is a key element for the evaluation of many medical conditions and for the reliable diagnosis and efficient treatment of hypertension. In the last three decades, prestigious organizations, such as the US Association for the Advancement of Medical Instrumentation (AAMI), the British Hypertension Society, the European Society of Hypertension (ESH) Working Group on BP Monitoring, and the International Organization for Standardization (ISO), have developed protocols for clinical validation of BP measuring devices. All these initiatives aim to standardize validation procedures and establish minimum accuracy standards for BP monitors. Unfortunately, only a few of the BP measuring devices available on the market have been subjected to independent validation using one of these protocols. Recently, the AAMI, ESH, and ISO experts agreed to develop a single universally acceptable standard (AAMI/ESH/ISO), which will replace all previous protocols. This major international initiative has been undertaken to best serve the needs of patients with hypertension, the public interested in cardiovascular health, practicing physicians, scientific researchers, regulatory bodies, and manufacturers. There is an urgent need to influence regulatory authorities throughout the world to make it mandatory for all BP measuring devices to have undergone independent validation before approval for marketing. Efforts need to be intensified to improve the accuracy of BP measuring devices, further optimize the validation procedure, and ensure that objective and unbiased validation data become available.


Blood Pressure Determination/instrumentation , Blood Pressure Monitors/standards , Blood Pressure/physiology , Hypertension/physiopathology , Comparative Effectiveness Research/standards , Humans , Hypertension/diagnosis , Hypertension/drug therapy , Marketing/legislation & jurisprudence , Organizations , Reproducibility of Results , Research Design , Societies, Medical/organization & administration
13.
Fertil Steril ; 109(6): 993-999, 2018 06.
Article En | MEDLINE | ID: mdl-29935660

Mild-stimulation protocols with in vitro fertilization (IVF) generally aim to use less medication than conventional IVF. This guideline evaluates pregnancy and live-birth rates with mild ovarian stimulation and natural-cycle protocols versus conventional IVF in patients expected to be poor responders.


Comparative Effectiveness Research/standards , Fertilization in Vitro/methods , Ovulation Induction/methods , Pregnancy Rate , Adult , Comparative Effectiveness Research/methods , Drug Resistance , Female , Fertility Agents, Female/therapeutic use , Fertilization in Vitro/standards , Humans , Infant, Newborn , Live Birth , Male , Ovulation Induction/standards , Pregnancy , Research Design , Treatment Outcome
14.
J Comp Eff Res ; 7(5): 503-515, 2018 05.
Article En | MEDLINE | ID: mdl-29463115

Comparative effectiveness research (CER) guidelines have been developed to direct the field toward the most rigorous study methodologies. A challenge, however, is how to ensure that the best evidence is generated, and how to translate methodologically complex or nuanced CER findings into usable medical evidence. To reach that goal, it is important that both researchers and end users of CER output become knowledgeable about the elements that affect the quality and interpretability of CER. This paper distills guidance on CER into a practical tool to assist both researchers and nonexperts with the critical review and interpretation of CER, with a focus on issues particularly relevant to CER in oncology.


Comparative Effectiveness Research/methods , Comparative Effectiveness Research/standards , Guidelines as Topic , Evidence-Based Practice/methods , Evidence-Based Practice/standards , Humans , Medical Oncology/methods
16.
Urol Oncol ; 36(4): 174-182, 2018 04.
Article En | MEDLINE | ID: mdl-29146037

BACKGROUND: The use of secondary data, such as claims or administrative data, in comparative effectiveness research has grown tremendously in recent years. PURPOSE: We believe that the current review can help investigators relying on secondary data to (1) gain insight into the relevant methodologies and statistical methods, (2) better understand the necessity of rigorous planning before initiating a comparative effectiveness investigation, and (3) optimize the quality of their investigations. MAIN FINDINGS: Specifically, we review concepts of adjusted analyses and confounders, methods of propensity score analyses and instrumental variable analyses, risk prediction models (logistic and time-to-event), decision-curve analysis, as well as the interpretation of the P value and hypothesis testing. CONCLUSIONS: Overall, we hope that the current review can help investigators who rely on secondary data for comparative effectiveness research to better understand the necessity of rigorous planning before study start and to gain better insight into the choice of statistical methods, so as to optimize the quality of their studies.
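
To make the propensity score concept concrete, the hedged sketch below simulates confounded treatment assignment, fits a logistic propensity model with scikit-learn, and compares a crude difference in means with an inverse-probability-weighted estimate. All variables, names, and effect sizes are invented; this is not the review's own analysis.

# Illustrative propensity score (IPTW) workflow on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
age = rng.normal(65, 10, n)
comorbidity = rng.poisson(2, n).astype(float)

# Treatment assignment depends on the covariates (confounding by indication).
p_treat = 1 / (1 + np.exp(-(-8.0 + 0.1 * age + 0.3 * comorbidity)))
treated = rng.binomial(1, p_treat)

# Outcome depends on covariates plus a true treatment effect of -1.0.
outcome = 0.05 * age + 0.5 * comorbidity - 1.0 * treated + rng.normal(0, 1, n)

# Propensity score: modeled probability of treatment given covariates.
X = np.column_stack([age, comorbidity])
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Inverse probability of treatment weights.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

crude = outcome[treated == 1].mean() - outcome[treated == 0].mean()
iptw = (np.average(outcome[treated == 1], weights=w[treated == 1])
        - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(round(crude, 2), round(iptw, 2))  # crude estimate is biased; IPTW is close to -1.0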


Comparative Effectiveness Research/standards , Medical Oncology/methods , Research Design/standards , Urology/methods , Comparative Effectiveness Research/methods , Guidelines as Topic , Logistic Models , Medical Oncology/standards , Propensity Score , Risk Assessment/methods , Urology/standards
17.
Evid Based Med ; 22(3): 81-84, 2017 Jun.
Article En | MEDLINE | ID: mdl-28600330

Guideline panels need to process a sizeable amount of information to issue a decision on whether or not to recommend a health technology. Grading of Recommendations Assessment, Development, and Evaluation (GRADE) is frequently applied in guideline development to facilitate this task, typically for the synthesis of effectiveness research. Questions regarding the accuracy of medical tests are ubiquitous, and they temporally precede questions about therapy. However, the literature summarising the experience of applying the GRADE approach to accuracy evaluations is not as rich as that for effectiveness evidence. The type of study design (cross-sectional), the two-dimensional nature of the performance measures (sensitivity and specificity), a propensity towards a higher level of between-study heterogeneity, poor reporting of quality features, and uncertainty about how best to assess for publication bias, among other features, make this task challenging. This article presents solutions adopted to address the above challenges for judicious estimation of the strength of test accuracy evidence used to inform evidence syntheses for guideline development.
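
As a hedged, worked illustration of the two-dimensional accuracy measures mentioned above, the sketch below computes sensitivity and specificity with Wilson 95% confidence intervals from a hypothetical 2x2 table using statsmodels; the counts are invented.

# Sensitivity and specificity with Wilson 95% CIs from a hypothetical 2x2 table.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 90, 10   # diseased by reference standard: index test positive / negative
tn, fp = 160, 40  # non-diseased by reference standard: index test negative / positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
sens_lo, sens_hi = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_lo, spec_hi = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"Sensitivity {sensitivity:.2f} (95% CI {sens_lo:.2f}-{sens_hi:.2f})")
print(f"Specificity {specificity:.2f} (95% CI {spec_lo:.2f}-{spec_hi:.2f})")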


Comparative Effectiveness Research/standards , Diagnostic Techniques and Procedures/standards , Evidence-Based Medicine , Guidelines as Topic , Humans , Publication Bias , Research Design/standards , Sensitivity and Specificity , Uncertainty
18.
Curr Opin Urol ; 27(4): 354-359, 2017 Jul.
Article En | MEDLINE | ID: mdl-28570290

PURPOSE OF REVIEW: Secondary data analysis has become increasingly common in health services research, specifically comparative effectiveness research. While a comprehensive study of the techniques and methods for secondary data analysis is a wide-ranging topic, we sought to perform a descriptive study of some key methodological issues related to secondary data analyses and to provide a basic summary of techniques to address them. RECENT FINDINGS: In this study, we first address common issues seen in the analysis of secondary datasets and the limitations of such datasets with respect to bias. We then cover strategies for handling missing or incomplete data and provide a basic summary of three statistical approaches that can be used to address the problem of bias. SUMMARY: While it is unrealistic for surgeon scientists to aspire to the depth of knowledge of professional statisticians or data scientists, it is important for researchers and clinicians reading such studies to understand some of the common pitfalls and issues when using secondary data to investigate clinical questions. Ultimately, the choice of analytical technique and the particular data sets used should be dictated by the research question and hypothesis being tested. Transparency about data handling and statistical techniques is a vital element of secondary data analysis.
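
As a hedged sketch of one of the strategies mentioned above, the example below contrasts complete-case analysis with a simple single imputation using scikit-learn on simulated data. The variable names, missingness rate, and choice of median imputation are illustrative; multiple imputation or model-based methods are often preferable in practice.

# Complete-case analysis versus simple imputation on simulated data (illustrative).
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(1)
n = 1_000
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "psa": rng.lognormal(1.5, 0.6, n),
})
df.loc[rng.random(n) < 0.2, "psa"] = np.nan  # roughly 20% of PSA values missing

# Complete-case analysis: rows with any missing value are discarded.
complete_case = df.dropna()

# Single imputation: missing PSA replaced with the median of observed values.
imputed = pd.DataFrame(SimpleImputer(strategy="median").fit_transform(df),
                       columns=df.columns)

print(len(df), len(complete_case), len(imputed))   # sample sizes before and after
print(df["psa"].mean(), imputed["psa"].mean())     # observed vs imputed-data mean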


Comparative Effectiveness Research/standards , Data Collection/methods , Data Interpretation, Statistical , Comparative Effectiveness Research/methods , Data Collection/statistics & numerical data , Humans , Research Design
19.
Wound Repair Regen ; 25(2): 192-209, 2017 04.
Article En | MEDLINE | ID: mdl-28370796

The United States Food and Drug Administration will consider the expansion of coverage indications for some drugs and devices based on real-world data. Real-world data accrual in patient registries has historically been via manual data entry from the medical chart at a time distant from patient care, which is fraught with systematic error. The efficient automated transmission of data directly from electronic health records is replacing this labor-intensive paradigm. However, this mode of real-world data collection is unfamiliar to many researchers. The potential sources of bias arising from the source of data and from data accrual, documentation, and aggregation have not been well defined. Furthermore, the technological aspects of data acquisition and transmission are less transparent. We explore opportunities for harnessing direct-from-electronic health record registry reporting and propose the ABCs of Registries (Analysis of Bias Criteria of Registries), an evaluation framework for publications to minimize the potential bias of real-world data obtained directly from electronic health records. These standards are based on a point-of-care data documentation process using a common definitional framework and data dictionaries. By way of example, we describe a wound registry obtained directly from electronic health records. This qualified clinical data registry minimizes bias by ensuring complete and accurate point-of-care data capture, standardizes usual care linked to quality reporting, and prevents post-hoc vetting of outcomes. The resulting data are of high quality and integrity and can be used for comparative effectiveness research in wound care. In this way, the effort needed to succeed with the Quality Payment Program is leveraged to obtain the real-world data needed for comparative effectiveness research.


Comparative Effectiveness Research/methods , Electronic Health Records/statistics & numerical data , Registries , Research Design/standards , Wound Healing , Wounds and Injuries/therapy , Comparative Effectiveness Research/standards , Humans , Medicare , Prospective Payment System/standards , Quality of Health Care , United States , United States Food and Drug Administration
20.
Value Health ; 20(4): 520-532, 2017 Apr.
Article En | MEDLINE | ID: mdl-28407993

BACKGROUND: Randomized controlled trials provide robust data on the efficacy of interventions rather than on effectiveness. Health technology assessment (HTA) agencies worldwide are thus exploring whether real-world data (RWD) may provide alternative sources of data on the effectiveness of interventions. Presently, an overview of HTA agencies' policies for RWD use in relative effectiveness assessments (REA) is lacking. OBJECTIVES: To review the policies of six European HTA agencies on RWD use in REA of drugs. METHODS: A literature review and stakeholder interviews were conducted to collect information on RWD policies for six agencies: the Dental and Pharmaceutical Benefits Agency (Sweden), the National Institute for Health and Care Excellence (United Kingdom), the Institute for Quality and Efficiency in Healthcare (Germany), the High Authority for Health (France), the Italian Medicines Agency (Italy), and the National Healthcare Institute (The Netherlands). The following contexts for RWD use in REA of drugs were reviewed: initial reimbursement discussions, pharmacoeconomic analyses, and conditional reimbursement schemes. We identified 13 policy documents and 9 academic publications, and conducted 6 interviews. RESULTS: Policies for RWD use in REA of drugs notably differed across contexts. Moreover, policies differed between HTA agencies. Such variations might discourage the use of RWD for HTA. CONCLUSIONS: To facilitate the use of RWD for HTA across Europe, more alignment of policies seems necessary. Recent articles and project proposals of the European network for HTA may provide a starting point to achieve this.


Comparative Effectiveness Research/legislation & jurisprudence , Evidence-Based Medicine/legislation & jurisprudence , Government Regulation , Health Policy/legislation & jurisprudence , Policy Making , Technology Assessment, Biomedical/legislation & jurisprudence , Comparative Effectiveness Research/economics , Comparative Effectiveness Research/standards , Consensus , Cost-Benefit Analysis , Europe , Evidence-Based Medicine/economics , Evidence-Based Medicine/standards , Guidelines as Topic , Health Care Costs , Health Policy/economics , Humans , Insurance, Health, Reimbursement , Interviews as Topic , Prohibitins , Technology Assessment, Biomedical/economics , Technology Assessment, Biomedical/standards
...