Results 1 - 20 of 1,445

1.
Nature ; 627(8002): 49-58, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38448693

ABSTRACT

Scientists are enthusiastically imagining ways in which artificial intelligence (AI) tools might improve research. Why are AI tools so attractive and what are the risks of implementing them across the research pipeline? Here we develop a taxonomy of scientists' visions for AI, observing that their appeal comes from promises to improve productivity and objectivity by overcoming human shortcomings. But proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do. Such illusions obscure the scientific community's ability to see the formation of scientific monocultures, in which some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors. The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less. By analysing the appeal of these tools, we provide a framework for advancing discussions of responsible knowledge production in the age of AI.


Subject(s)
Artificial Intelligence , Illusions , Knowledge , Research Design , Research Personnel , Humans , Artificial Intelligence/supply & distribution , Artificial Intelligence/trends , Cognition , Diffusion of Innovation , Efficiency , Reproducibility of Results , Research Design/standards , Research Design/trends , Risk , Research Personnel/psychology , Research Personnel/standards
2.
Mol Cell ; 82(2): 241-247, 2022 01 20.
Article in English | MEDLINE | ID: mdl-35063094

ABSTRACT

Quantitative optical microscopy-an emerging, transformative approach to single-cell biology-has seen dramatic methodological advancements over the past few years. However, its impact has been hampered by challenges in the areas of data generation, management, and analysis. Here we outline these technical and cultural challenges and provide our perspective on the trajectory of this field, ushering in a new era of quantitative, data-driven microscopy. We also contrast it to the three decades of enormous advances in the field of genomics that have significantly enhanced the reproducibility and wider adoption of a plethora of genomic approaches.


Subject(s)
Genomics/trends , Microscopy/trends , Optical Imaging/trends , Single-Cell Analysis/trends , Animals , Diffusion of Innovation , Genomics/history , High-Throughput Screening Assays/trends , History, 20th Century , History, 21st Century , Humans , Microscopy/history , Optical Imaging/history , Reproducibility of Results , Research Design/trends , Single-Cell Analysis/history
3.
Nature ; 620(7972): 47-60, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37532811

ABSTRACT

Artificial intelligence (AI) is being increasingly integrated into scientific discovery to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain insights that might not have been possible using traditional scientific methods alone. Here we examine breakthroughs over the past decade that include self-supervised learning, which allows models to be trained on vast amounts of unlabelled data, and geometric deep learning, which leverages knowledge about the structure of scientific data to enhance model accuracy and efficiency. Generative AI methods can create designs, such as small-molecule drugs and proteins, by analysing diverse data modalities, including images and sequences. We discuss how these methods can help scientists throughout the scientific process and the central issues that remain despite such advances. Both developers and users of AI tools need a better understanding of when such approaches need improvement, and challenges posed by poor data quality and stewardship remain. These issues cut across scientific disciplines and require developing foundational algorithmic approaches that can contribute to scientific understanding or acquire it autonomously, making them critical areas of focus for AI innovation.


Subject(s)
Artificial Intelligence , Research Design , Artificial Intelligence/standards , Artificial Intelligence/trends , Datasets as Topic , Deep Learning , Research Design/standards , Research Design/trends , Unsupervised Machine Learning
4.
PLoS Biol ; 20(1): e3001553, 2022 01.
Article in English | MEDLINE | ID: mdl-35100252

ABSTRACT

Meta-research involves the interrogation of every stage of the research lifecycle, from conception to publication and dissemination. Looking back over the first six years of PLOS Biology Meta-Research Articles highlights the important insights that can be obtained from such "research on research".


Subject(s)
Biomedical Research/methods , Research Design/trends , Bibliometrics , Biomedical Research/trends , Humans
5.
Nature ; 575(7781): 137-146, 2019 11.
Article in English | MEDLINE | ID: mdl-31695204

ABSTRACT

The goal of sex and gender analysis is to promote rigorous, reproducible and responsible science. Incorporating sex and gender analysis into experimental design has enabled advancements across many disciplines, such as improved treatment of heart disease and insights into the societal impact of algorithmic bias. Here we discuss the potential for sex and gender analysis to foster scientific discovery, improve experimental efficiency and enable social equality. We provide a roadmap for sex and gender analysis across scientific disciplines and call on researchers, funding agencies, peer-reviewed journals and universities to coordinate efforts to implement robust methods of sex and gender analysis.


Subject(s)
Engineering/methods , Engineering/standards , Research Design/standards , Research Design/trends , Science/methods , Science/standards , Sex Characteristics , Sex Factors , Animals , Artificial Intelligence , Female , Humans , Male , Molecular Targeted Therapy , Reproducibility of Results , Sample Size
6.
PLoS Biol ; 19(5): e3001009, 2021 05.
Article in English | MEDLINE | ID: mdl-34010281

ABSTRACT

The replicability of research results has been a cause of increasing concern to the scientific community. The long-held belief that experimental standardization begets replicability has also been recently challenged, with the observation that the reduction of variability within studies can lead to idiosyncratic, lab-specific results that cannot be replicated. An alternative approach is instead to deliberately introduce heterogeneity, known as "heterogenization" of experimental design. Here, we explore a novel perspective in the heterogenization program in a meta-analysis of variability in observed phenotypic outcomes in both control and experimental animal models of ischemic stroke. First, by quantifying interindividual variability across control groups, we illustrate that the amount of heterogeneity in disease state (infarct volume) differs according to methodological approach, for example, in disease induction methods and disease models. We argue that such methods may improve replicability by creating a diverse and representative distribution of baseline disease state in the reference group, against which treatment efficacy is assessed. Second, we illustrate how meta-analysis can be used to simultaneously assess efficacy and stability (i.e., mean effect and among-individual variability). We identify treatments that have efficacy and are generalizable to the population level (i.e., low interindividual variability), as well as those where there is high interindividual variability in response; for these latter treatments, translation to a clinical setting may require nuance. We argue that by embracing rather than seeking to minimize variability in phenotypic outcomes, we can motivate the shift toward heterogenization and improve both the replicability and generalizability of preclinical research.
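
Comparisons of relative variability between treatment and control arms, as described above, are often expressed with the log coefficient-of-variation ratio (lnCVR). As a minimal sketch only (the paper's actual meta-analytic machinery, with small-sample corrections and sampling variances, is more involved, and the numbers below are hypothetical):

```python
from math import log

def ln_cvr(mean_t, sd_t, mean_c, sd_c):
    """Log coefficient-of-variation ratio between treatment and control.

    CV = sd / mean. Positive values mean the treatment group is relatively
    more variable than control; zero means equal relative variability.
    """
    cv_t = sd_t / mean_t
    cv_c = sd_c / mean_c
    return log(cv_t / cv_c)

# Hypothetical infarct-volume summaries: same mean, treatment twice as variable
print(ln_cvr(10.0, 4.0, 10.0, 2.0))  # log(2) ≈ 0.693
```

A value near zero would flag a treatment whose response is stable across individuals; large positive values flag the high interindividual variability the abstract says warrants nuance in translation.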


Subject(s)
Animal Experimentation/standards , Research Design/standards , Animals , Behavior, Animal/physiology , Brain Ischemia/metabolism , Humans , Meta-Analysis as Topic , Models, Animal , Phenotype , Reference Standards , Reproducibility of Results , Research Design/trends , Stroke/physiopathology
7.
PLoS Biol ; 19(5): e3001177, 2021 05.
Article in English | MEDLINE | ID: mdl-33951050

ABSTRACT

In an effort to better utilize published evidence obtained from animal experiments, systematic reviews of preclinical studies are increasingly common, along with the methods and tools to appraise them (e.g., the SYstematic Review Center for Laboratory animal Experimentation [SYRCLE] risk of bias tool). We performed a cross-sectional study of a sample of recent preclinical systematic reviews (2015-2018), examined a range of epidemiological characteristics, and used a 46-item checklist to assess reporting details. We identified 442 reviews published across 43 countries in 23 different disease domains that used 26 animal species. Reporting of key details to ensure transparency and reproducibility was inconsistent across reviews and within article sections. Items were most completely reported in the title, introduction, and results sections of the reviews, and least reported in the methods and discussion sections. Less than half of reviews reported that a risk of bias assessment for internal and external validity was undertaken, and none reported methods for evaluating construct validity. Our results demonstrate that a considerable number of preclinical systematic reviews investigating diverse topics have been conducted; however, their quality of reporting is inconsistent. Our study provides the justification and evidence to inform the development of guidelines for conducting and reporting preclinical systematic reviews.


Subject(s)
Peer Review, Research/methods , Peer Review, Research/standards , Research Design/standards , Animal Experimentation/standards , Animals , Bias , Checklist/standards , Drug Evaluation, Preclinical/methods , Drug Evaluation, Preclinical/standards , Empirical Research , Epidemiologic Methods , Epidemiology/trends , Humans , Peer Review, Research/trends , Publications , Reproducibility of Results , Research Design/trends
10.
PLoS Biol ; 18(12): e3000937, 2020 12.
Article in English | MEDLINE | ID: mdl-33296358

ABSTRACT

Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of "researcher degrees of freedom" aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called "OSF Preregistration," http://osf.io/prereg/). The Prereg Challenge format was a "structured" workflow with detailed instructions and an independent review to confirm completeness; the "Standard" format was "unstructured" with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the "structured" format restricted the opportunistic use of researcher degrees of freedom better (Cliff's Delta = 0.49) than the "unstructured" format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
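
Cliff's Delta, the effect size reported above, is the probability that a value drawn from one group exceeds one drawn from the other, minus the reverse probability. A minimal sketch of the computation (the scores below are hypothetical illustrations, not the study's data):

```python
def cliffs_delta(xs, ys):
    """Cliff's Delta: P(x > y) - P(x < y) over all cross-group pairs.

    Ranges from -1 to 1; 0 indicates complete overlap between groups.
    """
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Hypothetical per-preregistration "restriction of researcher degrees of
# freedom" scores for the two formats, for illustration only
structured = [4, 5, 5, 3, 4]
unstructured = [2, 3, 4, 2, 3]
print(cliffs_delta(structured, unstructured))  # → 0.76
```

Because it depends only on pairwise orderings, the statistic suits the ordinal coder ratings this kind of study produces.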


Subject(s)
Data Collection/methods , Research Design/statistics & numerical data , Data Collection/standards , Data Collection/trends , Humans , Quality Control , Registries/statistics & numerical data , Research Design/trends
11.
Infancy ; 28(3): 507-531, 2023 05.
Article in English | MEDLINE | ID: mdl-36748788

ABSTRACT

Understanding the trends and predictors of attrition rate, or the proportion of collected data that is excluded from the final analyses, is important for accurate research planning, assessing data integrity, and ensuring generalizability. In this pre-registered meta-analysis, we reviewed 182 publications in infant (0-24 months) functional near-infrared spectroscopy (fNIRS) research published from 1998 to April 9, 2020, and investigated the trends and predictors of attrition. The average attrition rate was 34.23% among 272 experiments across all 182 publications. Among a subset of 136 experiments that reported the specific reasons for subject exclusion, 21.50% of the attrition was infant-driven, while 14.21% was signal-driven. Subject characteristics (e.g., age) and study design (e.g., fNIRS cap configuration, block/trial design, and stimulus type) predicted the total and subject-driven attrition rates, suggesting that modifying the recruitment pool or the study design can meaningfully reduce the attrition rate in infant fNIRS research. Based on the findings, we established guidelines for reporting the attrition rate for scientific transparency and made recommendations to minimize the attrition rates. This research can facilitate developmental cognitive neuroscientists in their quest toward increasingly rigorous and representative research.


Subject(s)
Research Design , Spectroscopy, Near-Infrared , Humans , Infant , Research Design/trends
12.
Circulation ; 144(23): e461-e471, 2021 12 07.
Article in English | MEDLINE | ID: mdl-34719260

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has had worldwide repercussions for health care and research. In spring 2020, most non-COVID-19 research was halted, hindering research across the spectrum from laboratory-based experimental science to clinical research. Through the second half of 2020 and the first half of 2021, biomedical research, including cardiovascular science, only gradually restarted, with many restrictions on onsite activities, limited clinical research participation, and the challenges associated with working from home and caregiver responsibilities. Compounding these impediments, much of the global biomedical research infrastructure was redirected toward vaccine testing and deployment. This redirection of supply chains, personnel, and equipment has additionally hampered restoration of normal research activity. Transition to virtual interactions offset some of these limitations but did not adequately replace the need for scientific exchange and collaboration. Here, we outline key steps to reinvigorate biomedical research, including a call for increased support from the National Institutes of Health. We also call on academic institutions, publishers, reviewers, and supervisors to consider the impact of COVID-19 when assessing productivity, recognizing that the pandemic did not affect all equally. We identify trainees and junior investigators, especially those with caregiving roles, as most at risk of being lost from the biomedical workforce and identify steps to reduce the loss of these key investigators. Although the global pandemic highlighted the power of biomedical science to define, treat, and protect against threats to human health, significant investment in the biomedical workforce is required to maintain and promote well-being.


Subject(s)
Biomedical Research/trends , COVID-19 , Cardiology/trends , Research Design/trends , Research Personnel/trends , Advisory Committees , American Heart Association , Biomedical Research/education , Cardiology/education , Diffusion of Innovation , Education, Professional/trends , Forecasting , Humans , Public Opinion , Research Personnel/education , Time Factors , United States
13.
J Hepatol ; 76(1): 186-194, 2022 01.
Article in English | MEDLINE | ID: mdl-34592365

ABSTRACT

Despite several recent meta-analyses on the topic, the comparative risk of hepatocellular carcinoma in patients with chronic hepatitis B (CHB) receiving entecavir (ETV) or tenofovir disoproxil fumarate (TDF) remains controversial. The controversy partly results from the arbitrary nature of significance levels leading to contradictory conclusions from very similar datasets. However, the use of observational data, which is prone to both within- and between-study heterogeneity of patient characteristics, also lends additional uncertainty. The asynchronous introduction of ETV and TDF in East Asia, where the majority of these studies have been conducted, further complicates analyses, as does the ensuing difference in follow-up time between ETV and TDF cohorts. Researchers conducting meta-analyses in this area must make many methodological decisions to mitigate bias but are ultimately limited to the methodologies of the included studies. It is therefore important for researchers, as well as the audience of published meta-analyses, to be aware of the quality of observational studies and meta-analyses in terms of patient characteristics, study design and statistical methodologies. In this review, we aim to help clinicians navigate the published meta-analyses on this topic and to provide researchers with recommendations for future work.


Subject(s)
Carcinoma, Hepatocellular/diagnosis , Hepatitis B, Chronic/complications , Hepatitis B, Chronic/drug therapy , Research Design/trends , Antiviral Agents/therapeutic use , Carcinoma, Hepatocellular/etiology , Humans , Incidence , Liver Neoplasms/diagnosis , Liver Neoplasms/etiology , Meta-Analysis as Topic , Proportional Hazards Models , Risk Assessment/methods , Tenofovir/therapeutic use , Treatment Outcome
14.
Methods ; 195: 113-119, 2021 11.
Article in English | MEDLINE | ID: mdl-34492300

ABSTRACT

The protracted COVID-19 pandemic may indicate failures of scientific methodologies. Hoping to facilitate the evaluation and/or update of methods relevant in Biomedicine, several aspects of scientific processes are here explored. First, the background is reviewed. In particular, eight topics are analyzed: (i) the history of Higher Education models in reference to the pursuit of science and the type of student cognition pursued, (ii) whether explanatory or actionable knowledge is emphasized depending on the well- or ill-defined nature of problems, (iii) the role of complexity and dynamics, (iv) how differences between Biology and other fields influence methodologies, (v) whether theory, hypotheses or data drive scientific research, (vi) whether Biology is reducible to one or a few factors, (vii) the fact that data, to become actionable knowledge, require structuring, and (viii) the need of inter-/trans-disciplinary knowledge integration. To illustrate how these topics interact, a second section describes four temporal stages of scientific methods: conceptualization, operationalization, validation and evaluation. They refer to the transition from abstract (non-measurable) concepts (such as 'health') to the selection of concrete (measurable) operations (such as 'quantification of anti-virus-specific antibody titers'). Conceptualization is the process that selects concepts worth investigating, which continues as operationalization when data-producing variables viewed to reflect critical features of the concepts are chosen. Because the operations selected are not necessarily valid or informative, and may fail to solve problems, validation and evaluation are critical stages, which require inter-/trans-disciplinary knowledge integration. It is suggested that data structuring can substantially improve scientific methodologies applicable in Biology, provided that the other aspects mentioned here are also considered. The creation of independent bodies meant to evaluate biologically oriented scientific methods is recommended.


Subject(s)
Biology/methods , COVID-19/epidemiology , COVID-19/prevention & control , Research Design , Biology/trends , Humans , Research Design/trends
15.
Methods ; 195: 120-127, 2021 11.
Article in English | MEDLINE | ID: mdl-34352372

ABSTRACT

This review discusses the philosophical foundations of what used to be called "the scientific method" and is nowadays often known as the scientific attitude. It used to be believed that scientific theories and methods aimed at the truth, especially in the case of physics, chemistry and astronomy, because these sciences were able to develop numerous scientific laws that made it possible to understand and predict many physical phenomena. The situation is different in the case of the biological sciences, which deal with highly complex living organisms made up of huge numbers of constituents that undergo continuous dynamic processes; this leads to novel emergent properties in organisms that cannot be predicted because they are not present in the constituents before they have interacted with each other. This is one of the reasons why there are no universal scientific laws in biology. Furthermore, all scientific theories can only achieve a restricted level of predictive success because they remain valid only under the limited range of conditions that were used for establishing the theory in the first place. Many theories that used to be accepted were subsequently shown to be false, demonstrating that scientific theories always remain tentative and can never be proven beyond all doubt. It is ironic that, just as scientists have finally accepted that approximate truths are perfectly adequate and that absolute truth is an illusion, a new irrational sociological phenomenon called Post-Truth, conveyed by social media, the Internet and fake news, has developed in the Western world that is convincing millions of people that truth simply does not exist. Misleading information is circulated with the intention to deceive, and science denialism is promoted by denying the remarkable achievements of science and technology during the last centuries. Although the concept of intentional design is widely used to describe the methods that biologists use to make discoveries and inventions, it will be argued that the term is appropriate neither for explaining the appearance of life on our planet nor for describing the creativity of scientific investigators. The term "rational" for describing the development of new vaccines is also unjustified. Because the analysis of the COVID-19 pandemic requires contributions from the biomedical and psycho-socioeconomic sciences, one scientific method alone would be insufficient for combatting the pandemic.


Subject(s)
Biological Science Disciplines/methods , COVID-19/prevention & control , Concept Formation , Research Design , Vaccinology/methods , Biological Science Disciplines/trends , COVID-19/epidemiology , COVID-19/genetics , Humans , Research Design/trends , Vaccinology/trends
16.
Methods ; 195: 72-76, 2021 11.
Article in English | MEDLINE | ID: mdl-33744396

ABSTRACT

The test positivity (TP) rate has emerged as an important metric for gauging the illness burden due to COVID-19. Given the importance of COVID-19 TP rates for understanding COVID-related morbidity, researchers and clinicians have become increasingly interested in comparing TP rates across countries. The statistical methods for performing such comparisons fall into two general categories: frequentist tests and Bayesian methods. Using data from Our World in Data (ourworldindata.org), we performed comparisons for two prototypical yet disparate pairs of countries: Bolivia versus the United States (large vs. small-to-moderate TP rates), and South Korea vs. Uruguay (two very small TP rates of similar magnitude). Three different statistical procedures were used: two frequentist tests (an asymptotic z-test and the 'N-1' chi-square test), and a Bayesian method for comparing two proportions (TP rates are proportions). Results indicated that for the case of large vs. small-to-moderate TP rates (Bolivia versus the United States), the frequentist and Bayesian approaches both indicated that the two rates were substantially different. When the TP rates were very small and of similar magnitude (values of 0.009 and 0.007 for South Korea and Uruguay, respectively), the frequentist tests indicated a highly significant contrast, despite the apparent trivial amount by which the two rates differ. The Bayesian method, in comparison, suggested that the TP rates were practically equivalent-a finding that seems more consistent with the observed data. When TP rates are highly similar in magnitude, frequentist tests can lead to erroneous interpretations. A Bayesian approach, on the other hand, can help ensure more accurate inferences and thereby avoid potential decision errors that could lead to costly public health and policy-related consequences.
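
A minimal sketch of the Bayesian side of such a comparison, using uniform Beta priors and Monte Carlo draws from the posteriors (this is a generic practical-equivalence check, not the authors' exact procedure; the counts below are hypothetical):

```python
import random

def prob_rates_differ(pos1, n1, pos2, n2, rope=0.005, draws=50_000, seed=0):
    """Posterior probability that two test-positivity rates differ by more
    than a practical-equivalence margin (rope).

    Each rate gets a Beta(1 + positives, 1 + negatives) posterior, i.e. a
    uniform prior updated with the observed counts.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        p1 = rng.betavariate(1 + pos1, 1 + n1 - pos1)
        p2 = rng.betavariate(1 + pos2, 1 + n2 - pos2)
        if abs(p1 - p2) > rope:
            hits += 1
    return hits / draws

# Hypothetical counts giving small rates near the 0.009 vs. 0.007 example
print(prob_rates_differ(9, 1000, 7, 1000))
```

When both rates are small and similar, much of the posterior mass for the difference falls inside the margin, echoing the practical-equivalence reading in the abstract; a frequentist p-value alone carries no such notion of a negligible difference.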


Subject(s)
COVID-19 Testing/statistics & numerical data , COVID-19 Testing/trends , COVID-19/epidemiology , Data Interpretation, Statistical , Research Design/statistics & numerical data , Research Design/trends , Bayes Theorem , Bolivia/epidemiology , COVID-19/diagnosis , Humans , Republic of Korea/epidemiology , United States/epidemiology , Uruguay/epidemiology
18.
Am J Respir Crit Care Med ; 203(6): e11-e24, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33719931

ABSTRACT

Background: Central sleep apnea (CSA) is common among patients with heart failure and has been strongly linked to adverse outcomes. However, progress toward improving outcomes for such patients has been limited. The purpose of this official statement from the American Thoracic Society is to identify key areas to prioritize for future research regarding CSA in heart failure. Methods: An international multidisciplinary group with expertise in sleep medicine, pulmonary medicine, heart failure, clinical research, and health outcomes was convened. The group met at the American Thoracic Society 2019 International Conference to determine research priority areas. A statement summarizing the findings of the group was subsequently authored using input from all members. Results: The workgroup identified 11 specific research priorities in several key areas: 1) control of breathing and pathophysiology leading to CSA, 2) variability across individuals and over time, 3) techniques to examine CSA pathogenesis and outcomes, 4) impact of device and pharmacological treatment, and 5) implementing CSA treatment for all individuals. Conclusions: Advancing care for patients with CSA in the context of heart failure will require progress in the arenas of translational (basic through clinical), epidemiological, and patient-centered outcome research. Given the increasing prevalence of heart failure and its associated substantial burden to individuals, society, and the healthcare system, targeted research to improve knowledge of CSA pathogenesis and treatment is a priority.


Subject(s)
Biomedical Research/statistics & numerical data , Biomedical Research/trends , Heart Failure , Research Design/trends , Sleep Apnea, Central , Societies, Medical/statistics & numerical data , Societies, Medical/trends , Adult , Aged , Aged, 80 and over , Female , Forecasting , Humans , Male , Middle Aged , Research Design/statistics & numerical data , United States
19.
J Med Internet Res ; 24(8): e33898, 2022 08 26.
Article in English | MEDLINE | ID: mdl-36018626

ABSTRACT

BACKGROUND: The RAND/UCLA Appropriateness Method (RAM), a variant of the Delphi Method, was developed to synthesize existing evidence and elicit the clinical judgement of medical experts on the appropriate treatment of specific clinical presentations. Technological advances now allow researchers to conduct expert panels on the internet, offering a cost-effective and convenient alternative to the traditional RAM. For example, the Department of Veterans Affairs recently used a web-based RAM to validate clinical recommendations for de-intensifying routine primary care services. A substantial literature describes and tests various aspects of the traditional RAM in health research; yet we know comparatively less about how researchers implement web-based expert panels. OBJECTIVE: The objectives of this study are twofold: (1) to understand how the web-based RAM process is currently used and reported in health research and (2) to provide preliminary reporting guidance for researchers to improve the transparency and reproducibility of reporting practices. METHODS: The PubMed database was searched to identify studies published between 2009 and 2019 that used a web-based RAM to measure the appropriateness of medical care. Methodological data from each article were abstracted. The following categories were assessed: composition and characteristics of the web-based expert panels, characteristics of panel procedures, results, and panel satisfaction and engagement. RESULTS: Of the 12 studies meeting the eligibility criteria, only 42% (5/12) implemented the full RAM process, with the remaining studies opting for a partial approach. Among those studies reporting, the median number of participants at first rating was 42. While 92% (11/12) of studies involved clinicians, 50% (6/12) involved multiple stakeholder types. Our review revealed that the studies failed to report on critical aspects of the RAM process. For example, no studies reported response rates with the denominator of previous rounds, 42% (5/12) did not provide panelists with feedback between rating periods, 50% (6/12) either did not have or did not report on the panel discussion period, and 25% (3/12) did not report on quality measures to assess aspects of the panel process (eg, satisfaction with the process). CONCLUSIONS: Conducting web-based RAM panels will continue to be an appealing option for researchers seeking a safe, efficient, and democratic process of expert agreement. Our literature review uncovered inconsistent reporting frameworks and insufficient detail to evaluate study outcomes. We provide preliminary recommendations for reporting that are both timely and important for producing replicable, high-quality findings. The need for reporting standards is especially critical given that more people may prefer to participate in web-based rather than in-person panels due to the ongoing COVID-19 pandemic.


Subject(s)
COVID-19 , Expert Testimony/methods , Internet/trends , Pandemics , Research Design/standards , Delphi Technique , Humans , Internet/standards , Patient Care , Reproducibility of Results , Research Design/trends
20.
Stroke ; 52(11): e702-e705, 2021 11.
Article in English | MEDLINE | ID: mdl-34525839

ABSTRACT

Background and Purpose: When reporting primary results from randomized controlled trials, recommendations include reporting results by sex. We reviewed the reporting of results by sex in contemporary acute stroke randomized controlled trials. Methods: We searched MEDLINE for articles reporting the primary results of phase 2 or 3 stroke randomized controlled trials published between 2010 and June 2020 in one of nine major clinical journals. Eligible trials were restricted to those with a therapeutic intervention initiated within one month of stroke onset. Of primary interest was the reporting of results by sex for the primary outcome. We performed bivariate analyses using Fisher exact tests to identify study-level factors associated with reporting by sex and investigated temporal trends using an exact test for trend. Results: Of the 115 studies identified, primary results were reported by sex in 37% (n=42). Reporting varied significantly by journal, with the New England Journal of Medicine (61%) and Lancet journals (40%) having the highest rates (P=0.03). Reporting also differed significantly by geographic region (21% Europe versus 48% Americas, P=0.03), trial phase (13% phase 2 versus 40% phase 3, P=0.05), and sample size (24% <250 participants versus 61% >750 participants, P<0.01). Although not statistically significant (P=0.11), there was a temporal trend in favor of greater reporting among later publications (25% 2010-2012 versus 48% 2019-2020). Conclusions: Although reporting of primary trial results by sex improved from 2010 to 2020, the prevalence of reporting in major journals is still low. Further efforts are required to encourage journals and authors to comply with current reporting recommendations.
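
The study-level comparisons above rest on Fisher's exact test for 2x2 tables. A stdlib-only sketch of the one-sided version via the hypergeometric distribution (the counts below are hypothetical, not the review's data):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the probability, under independence with fixed margins, of a table at
    least as extreme (a or more in the top-left cell).
    """
    row1 = a + b          # total in row 1
    col1 = a + c          # total in column 1
    n = a + b + c + d     # grand total
    denom = comb(n, col1)
    p = 0
    for x in range(a, min(row1, col1) + 1):
        p += comb(row1, x) * comb(n - row1, col1 - x)
    return p / denom

# Hypothetical: reported-by-sex (yes/no) in two groups of trials
print(fisher_exact_one_sided(3, 1, 1, 3))
```

The exact test is preferred over a chi-square approximation here because many of the compared subgroups (e.g., journals or trial phases) contain few trials.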


Subject(s)
Randomized Controlled Trials as Topic/standards , Research Design/statistics & numerical data , Stroke/therapy , Female , Humans , Male , Research Design/trends , Sex Factors