Results 1 - 20 of 12,929
1.
Ugeskr Laeger ; 186(27)2024 Jul 01.
Article in Danish | MEDLINE | ID: mdl-38953676

ABSTRACT

Healthcare research emphasises the involvement of patients in the research process, recognising that this can enhance the relevance, quality, and implementation of research. This article highlights the need for more systematic planning to successfully involve patients in research projects and provides guidance on key aspects that researchers should consider when planning patient involvement in research. The article accentuates the importance of establishing clear frameworks and guidelines to promote transparency and facilitate implementation.


Subject(s)
Patient Participation , Humans , Biomedical Research , Health Services Research , Research Design/standards
4.
Pediatrics ; 154(1)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38832441

ABSTRACT

To identify priority areas to improve the design, conduct, and reporting of pediatric clinical trials, the international expert network Standards for Research (StaR) in Child Health was assembled and published the first 6 Standards in Pediatrics in 2012. Building on a recent review summarizing the 247 publications by StaR Child Health authors that highlight research practices that add value and reduce research "waste," the current review assesses progress in key child health trial methods areas: consent and recruitment, containing risk of bias, roles of data monitoring committees, appropriate sample size calculations, outcome selection and measurement, and age groups for pediatric trials. Although meaningful change has occurred within the child health research ecosystem, measurable progress is still disappointingly slow. In this context, we identify and review emerging trends that will advance the agenda of increased clinical usefulness of pediatric trials, including patient and public engagement, Bayesian statistical approaches, adaptive designs, and platform trials. We explore how implementation science approaches could be applied to effect measurable improvements in the design, conduct, and reporting of child health research.


Subject(s)
Child Health , Clinical Trials as Topic , Research Design , Humans , Child , Research Design/standards , Clinical Trials as Topic/standards , Pediatrics/standards , Bayes Theorem
5.
Am J Speech Lang Pathol ; 33(4): 1608-1618, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38889209

ABSTRACT

PURPOSE: The speech-language-hearing sciences (SLHS) field relies on rigorous research to inform clinical practice and improve outcomes for individuals with communication, swallowing, and hearing needs. However, a significant challenge in our field is the lack of accessibility, transparency, and reproducibility of this research. Such insufficiencies limit the generalizability and impact of study findings, particularly intervention research, as it becomes difficult to replicate and use the interventions in both clinical practice and research. In this tutorial, we highlight one particularly useful tool, the Template for Intervention Description and Replication (TIDieR; Hoffmann et al., 2014) checklist, which researchers can follow to improve reproducibility practices in SLHS. CONCLUSIONS: We provide an overview and guide on using the TIDieR checklist with a practical example of its implementation. Additionally, we discuss the potential benefits of increased transparency and reproducibility for SLHS, including improved clinical outcomes and increased confidence in the effectiveness of interventions. We also provide specific recommendations for scientists, journal reviewers, editors, and editorial boards as they seek to adopt, implement, and encourage using the TIDieR checklist.


Subject(s)
Checklist , Speech-Language Pathology , Humans , Reproducibility of Results , Speech-Language Pathology/methods , Research Design/standards , Biomedical Research/standards
6.
Ophthalmologie ; 121(7): 595-604, 2024 Jul.
Article in German | MEDLINE | ID: mdl-38926192

ABSTRACT

Criteria for assessing the significance of scientific articles are presented. The focus is on research design and methodology, illustrated by the classical study on prehospital volume treatment of severely injured individuals with penetrating torso injuries by Bickell et al. (1994). A well-thought-out research design is crucial for the success of a scientific study and is documented in a study protocol beforehand. A hypothesis is a provisional explanation or prediction and must be testable, falsifiable, precise, and relevant. There are various types of randomization methods, with the randomized controlled trial being the gold standard for clinical interventional studies. When reading a scientific article, it is important to verify whether the research design and setting align with the research question and whether potential sources of error have been considered and controlled. Critical scrutiny should also be applied to the references and to the funding and expertise of the researchers.


Subject(s)
Research Design , Research Design/standards , Humans , Biomedical Research/methods , Reading , Periodicals as Topic/standards , Randomized Controlled Trials as Topic/methods , Comprehension
7.
RMD Open ; 10(2)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886002

ABSTRACT

OBJECTIVE: To understand (1) what guidance exists to assess the methodological quality of qualitative research and (2) what methods exist to grade levels of evidence from qualitative research to inform recommendations within the European Alliance of Associations for Rheumatology (EULAR). METHODS: A systematic literature review was performed in multiple databases, including PubMed/Medline, EMBASE, Web of Science, COCHRANE and PsycINFO, from inception to 23 October 2020. Eligible studies included primary articles and guideline documents available in English describing: (1) the development or (2) the application of validated tools (eg, checklists); (3) guidance on assessing the methodological quality of qualitative research; and (4) guidance on grading levels of qualitative evidence. A narrative synthesis was conducted to identify key similarities between included studies. RESULTS: Of 9073 records retrieved, 51 went through to full-manuscript review, with 15 selected for inclusion. Six articles described methodological tools to assess the quality of qualitative research. The tools evaluated research design, recruitment, ethical rigour, data collection and analysis. Seven articles described one approach focusing on four key components to determine how much confidence to place in findings from systematic reviews of qualitative research. Two articles focused on grading levels of clinical recommendations based on qualitative evidence; one described a qualitative evidence hierarchy, and another a research pyramid. CONCLUSION: There is a lack of consensus on the use of tools, checklists and approaches suitable for appraising the methodological quality of qualitative research and for grading qualitative evidence to inform clinical practice. This work is expected to facilitate the inclusion of qualitative evidence in the process of developing recommendations at the EULAR level.


Subject(s)
Qualitative Research , Research Design , Humans , Research Design/standards , Evidence-Based Medicine/standards , Evidence-Based Medicine/methods , Practice Guidelines as Topic
8.
Ugeskr Laeger ; 186(21)2024 May 20.
Article in Danish | MEDLINE | ID: mdl-38847313

ABSTRACT

There is an increasing number of PhD students in health sciences, but no formal reporting guideline for writing a thesis exists. This review provides a practical guide with an overview of the article-based/synopsis PhD thesis that consists of eight parts: 1) initial formalities, 2) introduction, 3) methodological considerations, 4) study presentations, 5) discussion, 6) conclusion, 7) perspectives, and 8) concluding formalities. It is elaborated with detailed information, practical advice, and a template, so the thesis complies with the demands of the Danish Graduate Schools of Health Sciences.


Subject(s)
Academic Dissertations as Topic , Writing , Writing/standards , Humans , Education, Graduate/standards , Guidelines as Topic , Research Design/standards , Denmark
9.
BMC Med Res Methodol ; 24(1): 130, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840047

ABSTRACT

BACKGROUND: Faced with the high cost and limited efficiency of classical randomized controlled trials, researchers are increasingly applying adaptive designs to speed up the development of new drugs. However, how adaptive designs are applied in drug randomized controlled trials (RCTs), and whether they are adequately reported, remains unclear. Thus, this study aimed to summarize the epidemiological characteristics of the relevant trials and to assess their reporting quality with the Adaptive designs CONSORT Extension (ACE) checklist. METHODS: We searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL) and ClinicalTrials.gov from inception to January 2020. We included drug RCTs that explicitly claimed to be adaptive trials or used any type of adaptive design. We extracted the epidemiological characteristics of the included studies to summarize how adaptive designs were applied. We assessed the reporting quality of the trials with the ACE checklist. Univariable and multivariable linear regression models were used to assess the association of four prespecified factors with reporting quality. RESULTS: Our survey included 108 adaptive trials. We found that adaptive designs have been increasingly applied over the years and were most commonly used in phase II trials (n = 45, 41.7%). The primary reasons for using an adaptive design were to speed up the trial and facilitate decision-making (n = 24, 22.2%), maximize the benefit to participants (n = 21, 19.4%), and reduce the total sample size (n = 15, 13.9%). Group sequential design (n = 63, 58.3%) was the most frequently applied method, followed by adaptive randomization design (n = 26, 24.1%) and adaptive dose-finding design (n = 24, 22.2%). Adherence to the 26 topics of the ACE checklist ranged from 7.4% to 99.1%, with eight topics being adequately reported (i.e., level of adherence ≥ 80%) and eight being poorly reported (i.e., level of adherence ≤ 30%). In addition, among the seven items specific to adaptive trials, three were poorly reported: accessibility of the statistical analysis plan (n = 8, 7.4%), measures for confidentiality (n = 14, 13.0%), and assessment of similarity between interim stages (n = 25, 23.1%). The mean score on the ACE checklist was 13.9 (standard deviation [SD], 3.5) out of 26. According to our multivariable regression analysis, more recently published trials (estimated β = 0.14, p < 0.01) and multicenter trials (estimated β = 2.22, p < 0.01) were associated with better reporting. CONCLUSION: Adaptive designs have seen increasing use over the years and were primarily applied to early-phase drug trials. However, the reporting quality of adaptive trials is suboptimal, and substantial efforts are needed to improve it.
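
The regression step described above can be illustrated with a brief, hedged sketch. This is not the study's code or data; the variable names (ace_score, pub_year, multicenter), the toy data frame, and the use of statsmodels' formula interface are assumptions for demonstration only.

import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one row per adaptive trial; ace_score is out of 26, matching the scale in the abstract
trials = pd.DataFrame({
    "ace_score":   [11, 14, 16, 12, 18, 15, 13, 17, 19, 10],
    "pub_year":    [2012, 2014, 2016, 2013, 2019, 2017, 2015, 2018, 2020, 2011],
    "multicenter": [0, 1, 1, 0, 1, 1, 0, 1, 1, 0],   # 1 = multicentre trial
})

# Multivariable linear model: reporting score as a function of publication year and multicentre status
model = smf.ols("ace_score ~ pub_year + multicenter", data=trials).fit()
print(model.summary())   # the fitted coefficients play the role of the estimated betas in the abstract

Univariable models of the kind reported in the paper could be obtained the same way by fitting one predictor at a time.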


Subject(s)
Randomized Controlled Trials as Topic , Research Design , Humans , Research Design/standards , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/standards , Checklist/methods , Checklist/standards , Clinical Trials, Phase II as Topic/methods , Clinical Trials, Phase II as Topic/statistics & numerical data , Clinical Trials, Phase II as Topic/standards
10.
J Am Acad Psychiatry Law ; 52(2): 153-160, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38834368

ABSTRACT

A systematic review of the literature on restoration of competence to stand trial identified a predominance of retrospective case studies using descriptive and correlational statistics. Guided by National Institutes of Health (NIH) quality metrics and emphasizing study design, sample size, and statistical methods, the authors categorized a large majority of studies as fair in quality, underscoring the need for controlled designs, larger representative samples, and more sophisticated statistical analyses. Implications for the state of forensic research include the need to use large databases within jurisdictions and the importance of reliable methods that can be applied across jurisdictions and aggregated for meta-analysis. More sophisticated research methods can be advanced in forensic fellowship training where coordinated projects and curricula can encourage systematic approaches to forensic research.


Subject(s)
Mental Competency , Humans , Mental Competency/legislation & jurisprudence , Forensic Psychiatry/standards , Forensic Psychiatry/education , Research Design/standards , United States
12.
BMJ Open ; 14(6): e071136, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38889936

ABSTRACT

INTRODUCTION: Observational studies are fraught with several biases, including reverse causation and residual confounding. Overviews of reviews of observational studies (ie, umbrella reviews) synthesise systematic reviews, with or without meta-analyses, of cross-sectional, case-control and cohort studies, and may also aid in grading the credibility of reported associations. The number of published umbrella reviews has been increasing. Recently, a reporting guideline for overviews of reviews of healthcare interventions (Preferred Reporting Items for Overviews of Reviews (PRIOR)) was published, but the field lacks reporting guidelines for umbrella reviews of observational studies. Our aim is to develop a reporting guideline for umbrella reviews of cross-sectional, case-control and cohort studies assessing epidemiological associations. METHODS AND ANALYSIS: We will adhere to established guidance and prepare a PRIOR extension for systematic reviews of cross-sectional, case-control and cohort studies testing epidemiological associations between an exposure and an outcome, namely the Preferred Reporting Items for Umbrella Reviews of Cross-sectional, Case-control and Cohort studies (PRIUR-CCC). Step 1 will be the project launch to identify stakeholders. Step 2 will be a literature review of available guidance on conducting umbrella reviews. Step 3 will be an online Delphi study sampling 100 participants among authors and editors of umbrella reviews. Step 4 will encompass the finalisation of the PRIUR-CCC statement, including a checklist, a flow diagram, and an explanation and elaboration document. Deliverables will be: (i) identifying stakeholders to involve according to relevant expertise and end-user groups, with an equity, diversity and inclusion lens; (ii) completing a narrative review of methodological guidance on how to conduct umbrella reviews and of the methodology and reporting of published umbrella reviews, and preparing an initial PRIUR-CCC checklist for Delphi study round 1; (iii) preparing a PRIUR-CCC checklist with guidance after the Delphi study; and (iv) publishing and disseminating the PRIUR-CCC statement. ETHICS AND DISSEMINATION: PRIUR-CCC has been approved by The Ottawa Health Science Network Research Ethics Board and has obtained consent (20220639-01H). Participants in step 3 will give informed consent. PRIUR-CCC steps will be published in a peer-reviewed journal and will guide the reporting of umbrella reviews on epidemiological associations.


Subject(s)
Guidelines as Topic , Humans , Cross-Sectional Studies , Cohort Studies , Case-Control Studies , Research Design/standards , Systematic Reviews as Topic , Checklist , Observational Studies as Topic
13.
JMIR Res Protoc ; 13: e56271, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842925

ABSTRACT

BACKGROUND: Globally, there are marked inconsistencies in how immunosuppression is characterized and subdivided into clinical risk groups. This is detrimental to the precision and comparability of disease surveillance efforts-which has negative implications for the care of those who are immunosuppressed and their health outcomes. This was particularly apparent during the COVID-19 pandemic; despite collective motivation to protect these patients, conflicting clinical definitions created international rifts in how those who were immunosuppressed were monitored and managed during this period. We propose that international clinical consensus be built around the conditions that lead to immunosuppression and their gradations of severity concerning COVID-19. Such information can then be formalized into a digital phenotype to enhance disease surveillance and provide much-needed intelligence on risk-prioritizing these patients. OBJECTIVE: We aim to demonstrate how electronic Delphi objectives, methodology, and statistical approaches will help address this lack of consensus internationally and deliver a COVID-19 risk-stratified phenotype for "adult immunosuppression." METHODS: Leveraging existing evidence for heterogeneous COVID-19 outcomes in adults who are immunosuppressed, this work will recruit over 50 world-leading clinical, research, or policy experts in the area of immunology or clinical risk prioritization. After 2 rounds of clinical consensus building and 1 round of concluding debate, these panelists will confirm the medical conditions that should be classed as immunosuppressed and their differential vulnerability to COVID-19. Consensus statements on the time and dose dependencies of these risks will also be presented. This work will be conducted iteratively, with opportunities for panelists to ask clarifying questions between rounds and provide ongoing feedback to improve questionnaire items. Statistical analysis will focus on levels of agreement between responses. RESULTS: This protocol outlines a robust method for improving consensus on the definition and meaningful subdivision of adult immunosuppression concerning COVID-19. Panelist recruitment took place between April and May of 2024; the target set for over 50 panelists was achieved. The study launched at the end of May and data collection is projected to end in July 2024. CONCLUSIONS: This protocol, if fully implemented, will deliver a universally acceptable, clinically relevant, and electronic health record-compatible phenotype for adult immunosuppression. As well as having immediate value for COVID-19 resource prioritization, this exercise and its output hold prospective value for clinical decision-making across all diseases that disproportionately affect those who are immunosuppressed. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/56271.


Subject(s)
COVID-19 , Delphi Technique , Immunosuppression Therapy , Humans , COVID-19/immunology , COVID-19/epidemiology , COVID-19/prevention & control , Immunosuppression Therapy/methods , Immunocompromised Host/immunology , Consensus , Risk Assessment/methods , SARS-CoV-2/immunology , Adult , Research Design/standards
14.
Trials ; 25(1): 373, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858749

ABSTRACT

BACKGROUND: Surgical handover is associated with a significant risk of care failures. Existing research displays methodological deficiencies and little consensus on the outcomes that should be used to evaluate interventions in this area. This paper reports a protocol to develop a core outcome set (COS) to support standardisation, comparability, and evidence synthesis in future studies of surgical handover between doctors. METHODS: This study adheres to the Core Outcome Measures in Effectiveness Trials (COMET) initiative guidance for COS development, including the COS-Standards for Development (COS-STAD) and Reporting (COS-STAR) recommendations. It has been registered prospectively on the COMET database and will be led by an international steering group that includes surgical healthcare professionals, researchers, and patient and public partners. An initial list of reported outcomes was generated through a systematic review of interventions to improve surgical handover (PROSPERO: CRD42022363198). Findings of a qualitative evidence synthesis of patient and public perspectives on handover will augment this list, followed by a real-time Delphi survey involving all stakeholder groups. Each Delphi participant will then be invited to take part in at least one online consensus meeting to finalise the COS. ETHICS AND DISSEMINATION: This study was approved by the Royal College of Surgeons in Ireland (RCSI) Research Ethics Committee (202309015, 7th November 2023). Results will be presented at surgical scientific meetings and submitted to a peer-reviewed journal. A plain English summary will be disseminated through national websites and social media. The authors aim to integrate the COS into the handover curriculum of the Irish national surgical training body and ensure it is shared internationally with other postgraduate surgical training programmes. Collaborators will be encouraged to share the findings with relevant national health service functions and national bodies. DISCUSSION: This study will represent the first published COS for interventions to improve surgical handover, the first use of a real-time Delphi survey in a surgical context, and will support the generation of better-quality evidence to inform best practice. TRIAL REGISTRATION: Core Outcome Measures in Effectiveness Trials (COMET) initiative 2675.  http://www.comet-initiative.org/Studies/Details/2675 .


Subject(s)
Consensus , Delphi Technique , Patient Handoff , Humans , Patient Handoff/standards , Research Design/standards , Surgical Procedures, Operative/standards , Stakeholder Participation , Endpoint Determination/standards
15.
Trials ; 25(1): 353, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822392

ABSTRACT

BACKGROUND: The SAVVY project aims to improve the analysis of adverse events (AEs) in clinical trials through the use of survival techniques that appropriately deal with varying follow-up times and competing events (CEs). This paper summarizes key features and conclusions from the various SAVVY papers. METHODS: Summarizing several papers that report theoretical investigations using simulations and an empirical study including randomized clinical trials from several sponsor organizations, we investigate the biases that arise from ignoring varying follow-up times or CEs. The bias of commonly used estimators of the absolute (incidence proportion and one minus Kaplan-Meier) and relative (risk and hazard ratio) AE risk is quantified. Furthermore, we provide a cursory assessment of how pertinent guidelines for the analysis of safety data deal with varying follow-up times and CEs. RESULTS: SAVVY finds that, both for avoiding bias and for categorizing the evidence on the treatment effect on AE risk, the choice of estimator is key and more important than features of the underlying data such as the percentage of censoring or CEs, the amount of follow-up, or the value of the gold standard. CONCLUSIONS: The choice of the estimator of the cumulative AE probability and the definition of CEs are crucial. Whenever varying follow-up times and/or CEs are present in the assessment of AEs, SAVVY recommends using the Aalen-Johansen estimator (AJE) with an appropriate definition of CEs to quantify AE risk. There is an urgent need to improve pertinent clinical trial reporting guidelines for AEs so that incidence proportions and one-minus-Kaplan-Meier estimators are finally replaced by the AJE with an appropriate definition of CEs.
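
To make the abstract's point concrete, the following is a minimal, self-contained sketch (not SAVVY code; the toy data are invented) contrasting the Aalen-Johansen estimator with the naive one-minus-Kaplan-Meier estimator, which treats competing events as censoring and therefore overstates the cumulative AE probability.

import numpy as np

# Toy data (invented for illustration): follow-up time and event type per participant,
# where event = 0 (censored), 1 (adverse event of interest), 2 (competing event, e.g. death)
time  = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 12], dtype=float)
event = np.array([1, 0, 2, 1, 2, 1, 0, 1,  2,  0])

def aalen_johansen(time, event, horizon):
    """Cumulative incidence of event type 1 by `horizon` via the Aalen-Johansen estimator
    (assumes no tied event times, which holds for the toy data above)."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    surv, cif = 1.0, 0.0                      # all-cause survival just before current time; cumulative incidence
    for i, (ti, ei) in enumerate(zip(t, e)):
        if ti > horizon:
            break
        n_at_risk = len(t) - i
        if ei == 1:
            cif += surv / n_at_risk           # S(t-) times the AE hazard at ti
        if ei in (1, 2):
            surv *= 1.0 - 1.0 / n_at_risk     # all-cause survival drops for any first event
    return cif

def one_minus_km(time, event, horizon):
    """Naive 1 - Kaplan-Meier for event type 1, treating competing events as censoring."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    surv = 1.0
    for i, (ti, ei) in enumerate(zip(t, e)):
        if ti > horizon:
            break
        n_at_risk = len(t) - i
        if ei == 1:
            surv *= 1.0 - 1.0 / n_at_risk
    return 1.0 - surv

print("AJE:    ", round(aalen_johansen(time, event, horizon=12), 3))   # 0.475 on this toy data
print("1 - KM: ", round(one_minus_km(time, event, horizon=12), 3))     # 0.589, biased upward

On these ten toy observations the naive estimator exceeds the AJE, which is the direction of bias the SAVVY papers describe when competing events are ignored.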


Subject(s)
Randomized Controlled Trials as Topic , Humans , Time Factors , Randomized Controlled Trials as Topic/standards , Practice Guidelines as Topic , Data Interpretation, Statistical , Risk Assessment , Research Design/standards , Risk Factors , Drug-Related Side Effects and Adverse Reactions , Bias , Survival Analysis , Follow-Up Studies , Treatment Outcome , Computer Simulation , Kaplan-Meier Estimate
16.
Zhen Ci Yan Jiu ; 49(6): 661-666, 2024 Jun 25.
Article in English, Chinese | MEDLINE | ID: mdl-38897811

ABSTRACT

The STRICTA checklist is the guideline for reporting clinical trials of acupuncture interventions. As an extension of the CONSORT checklist, STRICTA improves the reporting quality of acupuncture clinical trials. The clinical research paradigm changes along with the development of science and technology, so it is crucial to assess whether the existing STRICTA checklist can guide the reporting of acupuncture clinical trials both now and in the future. This paper introduces the development and updating procedure of the STRICTA checklist, analyzes its utility and limitations, and offers suggestions for addressing the difficulties and challenges encountered in implementing the current version, so as to advance its further updating and improvement.


Subject(s)
Acupuncture Therapy , Checklist , Humans , Acupuncture Therapy/standards , Clinical Trials as Topic/standards , Research Design/standards
17.
Behav Ther ; 55(4): 856-871, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38937055

ABSTRACT

Recent publications within Contextual Behavioral Science have provided a rationale for expanding intervention efficacy research using methods that capture idiographic factors and processes. We conducted a systematic review of the use and quality of single-case experimental designs (SCEDs) within the Acceptance and Commitment Therapy (ACT) literature in adult clinical populations. The systematic review was conducted according to PRISMA guidelines, and the databases CINAHL, MEDLINE, PsycINFO, PsycArticles and OpenGrey were searched for peer-reviewed articles. Further studies were sought by reviewing the reference lists of all full-text studies. Studies were assessed against the What Works Clearinghouse (WWC) single-case design standards. Twenty-six studies met eligibility criteria, all conducted by research teams implementing multiple-baseline designs. Twenty-four studies did not meet WWC standards, with most failing to ensure a degree of concurrence across participants. The extent to which randomisation methods were used was also captured. The review highlights the scarcity of SCEDs within the ACT literature in clinical populations and describes current methodological practices. Limitations of the review and implications for future research are discussed.


Subject(s)
Acceptance and Commitment Therapy , Research Design , Adult , Humans , Acceptance and Commitment Therapy/methods , Research Design/standards , Single-Case Studies as Topic
19.
Br J Hosp Med (Lond) ; 85(6): 1-13, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38941976

ABSTRACT

Aims/Background Blended learning has become a commonly adopted teaching mode in the medical education community in recent years. Many studies have shown that blended learning is superior to traditional teaching. Nonetheless, pinpointing the specific advantages provided by blended teaching methods is challenging, since multiple elements influence their effectiveness. This study aimed to investigate the reliability of the conclusions of published randomised controlled trials (RCTs) on blended learning in medical education by assessing their quality, and to provide suggestions for future related studies. Methods Two investigators searched PubMed and EMBASE and assessed RCTs related to medical blended learning published from January 1, 2010 to December 31, 2021. The overall quality of each report was analysed against the 2010 Consolidated Standards of Reporting Trials (CONSORT) Statement using a 28-point overall quality score. We also conducted a multivariate assessment including year of publication, region of the trial, journal, impact factor, sample size, and the primary outcome. Results A total of 22 RCTs closely relevant to medical blended learning were eventually selected. The results demonstrated that half of the studies failed to explicitly describe at least 34% of the items in the 2010 CONSORT Statement. Medical blended learning is an emerging teaching mode, with 95.45% of the RCTs published since 2010. However, many issues that we consider crucial were not satisfactorily addressed in the selected RCTs. Conclusion Although the 2010 CONSORT Statement was published more than a decade ago, the quality of RCTs remains unsatisfactory. Some important items, such as sample size, blinding, and concealment, were inadequately reported in many RCTs. We encourage researchers who study the effects of blended learning in medical education to incorporate the guidelines of the 2010 CONSORT Statement when designing and conducting relevant research. Researchers, reviewers, and editors also need to work together to improve the quality of relevant RCTs in accordance with the requirements of the 2010 CONSORT Statement.
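
As a rough illustration of how such a checklist-based quality score can be tabulated, the sketch below is hypothetical rather than the review's actual scoring sheet: the item labels, the random fill, and the five toy RCTs are assumptions.

import numpy as np
import pandas as pd

# 28 CONSORT 2010 checklist items, each scored 1 if explicitly reported and 0 otherwise
items = [f"item_{i:02d}" for i in range(1, 29)]
rng = np.random.default_rng(0)
reported = pd.DataFrame(rng.integers(0, 2, size=(5, 28)),      # five toy RCTs with a random fill
                        columns=items,
                        index=[f"RCT_{k}" for k in range(1, 6)])

scores = reported.sum(axis=1)                            # 28-point overall quality score per RCT
missing_pct = 100 * (1 - reported.mean(axis=1))          # % of items not explicitly described
print(pd.DataFrame({"consort_score": scores,
                    "percent_items_missing": missing_pct.round(1)}))

In practice each cell would be filled by the two investigators from the trial reports, and the resulting scores would feed the multivariate assessment mentioned in the abstract.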


Subject(s)
Education, Medical , Randomized Controlled Trials as Topic , Randomized Controlled Trials as Topic/standards , Humans , Education, Medical/methods , Education, Medical/standards , Learning , Research Design/standards , Reproducibility of Results
20.
BMC Med Res Methodol ; 24(1): 110, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714936

ABSTRACT

Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when incorporating prior information into a new trial from quality external data, such as historical data or another source of co-data. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials in settings where frequentist designs are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of the Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and the power for all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of the type I error rate, multiplicity adjustments, external data borrowing, and related topics in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided to serve as a valuable resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.
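
A minimal sketch of the kind of operating-characteristics check described above, assuming a single-arm trial with a binary endpoint, a Beta prior that borrows roughly 20 historical patients, and an illustrative posterior-probability decision rule; none of these numbers or names come from the paper.

import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2024)

n = 40                  # planned sample size (assumed)
p0 = 0.20               # null response rate
a0, b0 = 4, 16          # Beta prior roughly equivalent to 20 historical patients at a 20% response rate
decision_cut = 0.975    # declare success if P(p > p0 | data) exceeds this threshold

def prob_success(true_p, n_sim=100_000):
    """Frequentist probability that the Bayesian rule declares success when the true rate is true_p."""
    x = rng.binomial(n, true_p, size=n_sim)               # simulated trial outcomes
    post_prob = 1.0 - beta.cdf(p0, a0 + x, b0 + n - x)    # posterior P(p > p0) under the Beta(a0, b0) prior
    return np.mean(post_prob > decision_cut)

print("Type I error at p = p0 = 0.20:", prob_success(0.20))   # frequentist type I error of the Bayesian rule
print("Power at p = 0.40:            ", prob_success(0.40))   # frequentist power under one alternative

Sweeping true_p over a grid of realistic alternatives, and varying n or the prior, is how sample size and borrowing choices are typically calibrated against the frequentist requirements the abstract mentions.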


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Humans , Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Research Design/standards , Sample Size , Data Interpretation, Statistical , Models, Statistical