Results 1 - 20 of 170
1.
R Soc Open Sci ; 11(10): 240850, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39359470

ABSTRACT

Independent replications are very rare in the behavioural and social sciences. This is problematic because they can help to detect 'false positives' in published research and, in turn, contribute to scientific self-correction. The lack of replication studies is due, among other factors, to a rather passive editorial approach concerning replications at many journals, which does not encourage and may sometimes even actively discourage the submission of replications. In this Perspective article, we advocate for a more proactive editorial approach concerning replications and suggest introducing journal-based replication marketplaces as a new publication track. We argue that such replication marketplaces could solve the long-standing problem of scarce independent replications. To establish these marketplaces, a designated part of a journal's editorial board identifies the most relevant new findings reported within the journal's pages and publicly offers them for replication. This public offering could be combined with small grants for authors to support these replications. Authors then compete for the first accepted registered report to conduct the related replications and can thus be sure that their replication will be published regardless of its eventual findings. Replication marketplaces would not only increase the prevalence of independent replications but also help science to become more self-correcting.

2.
Heliyon ; 10(17): e36066, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39296115

ABSTRACT

Science and knowledge are studied by researchers across many disciplines, examining how they are developed, what their current boundaries are, and how we can advance them. By integrating evidence across disparate disciplines, the holistic field of the science of science can address these foundational questions. This field illustrates how science is shaped by many interconnected factors: the cognitive processes of scientists, the historical evolution of science, economic incentives, institutional influences, computational approaches, the statistical, mathematical, and instrumental foundations of scientific inference, scientometric measures, and the philosophical and ethical dimensions of scientific concepts, among other influences. Achieving a comprehensive overview of a multifaceted field like the science of science requires pulling together evidence from the many sub-fields studying science across the natural and social sciences and humanities. This enables an interdisciplinary perspective on scientific practice, a more holistic understanding of scientific processes and outcomes, and more nuanced perspectives on how scientific research is conducted, influenced, and evolves, and it leverages the strengths of various disciplines to create a holistic view of the foundations of science. Different researchers study science from their own disciplinary perspective and with their own methods, and there is a large divide between quantitative and qualitative researchers, who commonly do not read or cite research using other methodological approaches. A broader synthesizing paper employing a qualitative approach can, however, help bridge disciplines by pulling together aspects of science (economic, scientometric, psychological, philosophical, etc.). Such an approach enables identifying, across the range of fields, the powerful role of our scientific methods and instruments in shaping most aspects of our knowledge and science, whereas economic, social, and historical influences help shape what knowledge we pursue. A unifying theory for the science of science is then outlined: the new-methods-drive-science theory.

3.
Psychon Bull Rev ; 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39289241

ABSTRACT

Research on best practices in theory assessment highlights that testing theories is challenging because they inherit a new set of assumptions as soon as they are linked to a specific methodology. In this article, we integrate and build on this work by demonstrating the breadth of these challenges. We show that tracking auxiliary assumptions is difficult because they are made at different stages of theory testing and at multiple levels of a theory. We focus on these issues in a reanalysis of a seminal study and its replications, both of which use a simple working-memory paradigm and a mainstream computational modeling approach. These studies provide the main evidence for "all-or-none" recognition models of visual working memory and are still used as the basis for how to measure performance in popular visual working-memory tasks. In our reanalysis, we find that core practical auxiliary assumptions were unchecked and violated; the original model comparison metrics and data were not diagnostic in several experiments. Furthermore, we find that models were not matched on "theory general" auxiliary assumptions, meaning that the set of tested models was restricted, and not matched in theoretical scope. After testing these auxiliary assumptions and identifying diagnostic testing conditions, we find evidence for the opposite conclusion. That is, continuous resource models outperform all-or-none models. Together, our work demonstrates why tracking and testing auxiliary assumptions remains a fundamental challenge, even in prominent studies led by careful, computationally minded researchers. Our work also serves as a conceptual guide on how to identify and test the gamut of auxiliary assumptions in theory assessment, and we discuss these ideas in the context of contemporary approaches to scientific discovery.

4.
Integr Med Res ; 13(3): 101068, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39253695

ABSTRACT

The field of traditional, complementary, and integrative medicine (TCIM) has garnered increasing attention due to its holistic approach to health and well-being. While the quantity of published research about TCIM has increased exponentially, critics have argued that the field faces challenges related to methodological rigour, reproducibility, and overall quality. This article proposes meta-research as one approach to evaluating and improving the quality of TCIM research. Meta-research, also known as research about research, can be defined as "the study of research itself: its methods, reporting, reproducibility, evaluation, and incentives". By systematically evaluating methodological rigour, identifying biases, and promoting transparency, meta-research can enhance the reliability and credibility of TCIM research. Specific topics of interest that are discussed in this article include the following: 1) study design and research methodology, 2) reporting of research, 3) research ethics, integrity, and misconduct, 4) replicability and reproducibility, 5) peer review and journal editorial practices, 6) research funding: grants and awards, and 7) hiring, promotion, and tenure. For each topic, we provide case examples to illustrate meta-research applications in TCIM. We argue that meta-research initiatives can contribute to maintaining public trust, safeguarding research integrity, and advancing evidence-based TCIM practice, while challenges include navigating methodological complexities, biases, and disparities in funding and academic recognition. Future directions involve tailored research methodologies, interdisciplinary collaboration, policy implications, and capacity building in meta-research.

5.
Proc Natl Acad Sci U S A ; 121(38): e2404035121, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39236231

ABSTRACT

We discuss a relatively new meta-scientific research design: many-analyst studies that attempt to assess the replicability and credibility of research based on large-scale observational data. In these studies, a large number of analysts try to answer the same research question using the same data. The key idea is that the greater the variation in results, the greater the uncertainty in answering the research question and, accordingly, the lower the credibility of any individual research finding. Compared to individual replications, the large crowd of analysts allows for a more systematic investigation of uncertainty and its sources. However, many-analyst studies are also resource-intensive, and there are some doubts about their potential to provide credible assessments. We identify three issues that any many-analyst study must address: 1) identifying the source of variation in the results; 2) providing an incentive structure similar to that of standard research; and 3) conducting a proper meta-analysis of the results. We argue that some recent many-analyst studies have failed to address these issues satisfactorily and have therefore provided an overly pessimistic assessment of the credibility of science. We also provide some concrete guidance on how future many-analyst studies could provide a more constructive assessment.
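The abstract's key idea — that greater variation across analysis teams means greater uncertainty about the answer — can be sketched numerically. The analyst-level estimates below are hypothetical, purely for illustration:

```python
import statistics

# Hypothetical effect-size estimates from ten independent analysis teams
# answering the same question on the same dataset (illustrative numbers only).
estimates = [0.12, 0.05, 0.21, -0.03, 0.18, 0.09, 0.14, 0.02, 0.25, 0.07]

mean_effect = statistics.mean(estimates)
between_analyst_sd = statistics.stdev(estimates)

# The spread across analysts is an extra layer of uncertainty on top of each
# team's own sampling error: the wider it is, the less credible any single
# team's conclusion taken in isolation.
print(f"mean effect: {mean_effect:.3f}, between-analyst SD: {between_analyst_sd:.3f}")
```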

6.
Dysphagia ; 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39153045

ABSTRACT

Multiple bolus trials are administered during clinical and research swallowing assessments to comprehensively capture an individual's swallowing function. Despite the valuable information obtained from these boluses, it remains common practice to use a single bolus (e.g., the worst score) to describe the degree of dysfunction. Researchers also often collapse continuous or ordinal swallowing measures into categories, potentially exacerbating information loss. These practices may adversely affect statistical power to detect and estimate smaller, yet potentially meaningful, treatment effects. This study sought to examine the impact of aggregating and categorizing penetration-aspiration scale (PAS) scores on statistical power and effect size estimates. We used a Monte Carlo approach to simulate three hypothetical within-subject treatment studies in Parkinson's disease and head and neck cancer across a range of data characteristics (e.g., sample size, number of bolus trials, variability). Different statistical models (aggregated or multilevel) and various PAS reduction approaches (i.e., types of categorization) were applied to examine their impact on power and the accuracy of effect size estimates. Across all scenarios, multilevel models demonstrated higher statistical power to detect group-level longitudinal change and more accurate estimates than aggregated (worst-score) models. Categorizing PAS scores also reduced power and biased effect size estimates compared with an ordinal approach, though this depended on the type of categorization and the baseline PAS distribution. Multilevel models should be considered a more robust approach for the statistical analysis of multiple boluses administered in standardized swallowing protocols because of their higher sensitivity and accuracy in comparing group-level changes in swallowing function. Importantly, this finding appears consistent across patient populations with distinct pathophysiology (i.e., PD and HNC) and patterns of airway invasion. The decision to categorize a continuous or ordinal outcome should be grounded in the clinical or research question, with the recognition that scale reduction may negatively affect the quality of statistical inferences in certain scenarios.
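The information loss from worst-score aggregation can be illustrated with a rough Monte Carlo sketch. This is a simplified continuous stand-in for PAS-like scores, not the paper's actual multilevel model: comparing a worst-score estimator against one that uses every bolus trial shows the former's larger sampling variability, which is what erodes statistical power.

```python
import random
import statistics

random.seed(1)

def simulate_change(n_subjects=30, n_trials=6, true_change=-0.5, estimator="mean"):
    """One simulated within-subject study: returns the estimated pre-to-post
    change in a continuous severity score (lower = better). Hypothetical
    stand-in for PAS-like data, for illustration only."""
    changes = []
    for _ in range(n_subjects):
        base = random.gauss(3.0, 0.8)
        pre = [base + random.gauss(0, 1.0) for _ in range(n_trials)]
        post = [base + true_change + random.gauss(0, 1.0) for _ in range(n_trials)]
        if estimator == "worst":   # collapse each assessment to its worst (max) score
            changes.append(max(post) - max(pre))
        else:                      # use information from every bolus trial
            changes.append(statistics.mean(post) - statistics.mean(pre))
    return statistics.mean(changes)

# Monte Carlo: how variable is each estimator around the true change of -0.5?
sims_mean = [simulate_change(estimator="mean") for _ in range(300)]
sims_worst = [simulate_change(estimator="worst") for _ in range(300)]
print("SD of mean-based estimates: ", round(statistics.stdev(sims_mean), 3))
print("SD of worst-based estimates:", round(statistics.stdev(sims_worst), 3))
```

The noisier worst-score estimator needs larger samples to detect the same true effect, which mirrors the power loss the abstract describes.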

7.
Proc Natl Acad Sci U S A ; 121(32): e2403490121, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39078672

ABSTRACT

A typical empirical study involves choosing a sample, a research design, and an analysis path. Variation in such choices across studies leads to heterogeneity in results that introduces an additional layer of uncertainty, limiting the generalizability of published scientific findings. We provide a framework for studying heterogeneity in the social sciences and divide heterogeneity into population, design, and analytical heterogeneity. Our framework suggests that, after accounting for heterogeneity, the probability that the tested hypothesis is true for the average population, design, and analysis path can be much lower than implied by the nominal error rates of statistically significant individual studies. We estimate each type of heterogeneity from 70 multilab replication studies, 11 prospective meta-analyses of studies employing different experimental designs, and 5 multianalyst studies. In our data, population heterogeneity tends to be relatively small, whereas design and analytical heterogeneity are large. Our results should, however, be interpreted cautiously due to the limited number of studies and the large uncertainty in the heterogeneity estimates. We discuss several ways to parse and account for heterogeneity in the context of different methodologies.
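The paper's central point — that heterogeneity adds a layer of uncertainty beyond nominal standard errors — can be illustrated with made-up numbers (not the paper's estimates):

```python
import math

# Hypothetical single-study result and heterogeneity components.
effect = 0.20        # observed standardized effect from one study
se = 0.08            # nominal sampling standard error
tau_pop = 0.05       # population heterogeneity (relatively small, per the abstract)
tau_design = 0.20    # design heterogeneity (large)
tau_analysis = 0.15  # analytical heterogeneity (large)

# Naive z-statistic vs. one that folds heterogeneity into the uncertainty.
z_nominal = effect / se
z_total = effect / math.sqrt(se**2 + tau_pop**2 + tau_design**2 + tau_analysis**2)
print(f"z ignoring heterogeneity: {z_nominal:.2f}")  # comfortably past 1.96
print(f"z accounting for it:      {z_total:.2f}")    # no longer significant
```

With these illustrative values, a "significant" finding for one specific design and analysis path provides much weaker evidence about the average design and analysis path.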

8.
Patterns (N Y) ; 5(6): 100968, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-39005482

ABSTRACT

The number of publications in biomedicine and life sciences has grown so much that it is difficult to keep track of new scientific works and to have an overview of the evolution of the field as a whole. Here, we present a two-dimensional (2D) map of the entire corpus of biomedical literature, based on the abstract texts of 21 million English articles from the PubMed database. To embed the abstracts into 2D, we used the large language model PubMedBERT, combined with t-SNE tailored to handle samples of this size. We used our map to study the emergence of the COVID-19 literature, the evolution of the neuroscience discipline, the uptake of machine learning, the distribution of gender imbalance in academic authorship, and the distribution of retracted paper mill articles. Furthermore, we present an interactive website that allows easy exploration and will enable further insights and facilitate future research.

9.
Neurotrauma Rep ; 5(1): 686-698, 2024.
Article in English | MEDLINE | ID: mdl-39071986

ABSTRACT

Translation of spinal cord injury (SCI) therapeutics from pre-clinical animal studies into human studies is challenged by effect size variability, irreproducibility, and misalignment of evidence used by pre-clinical versus clinical literature. Clinical literature values reproducibility, with the highest grade evidence (class 1) consisting of meta-analysis demonstrating large therapeutic efficacy replicating across multiple studies. Conversely, pre-clinical literature values novelty over replication and lacks rigorous meta-analyses to assess reproducibility of effect sizes across multiple articles. Here, we applied modified clinical meta-analysis methods to pre-clinical studies, comparing effect sizes extracted from published literature to raw data on individual animals from these same studies. Literature-extracted data (LED) from numerical and graphical outcomes reported in publications were compared with individual animal data (IAD) deposited in a federally supported repository of SCI data. The animal groups from the IAD were matched with the same cohorts in the LED for a direct comparison. We applied random-effects meta-analysis to evaluate predictors of neuroconversion in LED versus IAD. We included publications with common injury models (contusive injuries) and standardized end-points (open field assessments). The extraction of data from 25 published articles yielded n = 1841 subjects, whereas IAD from these same articles included n = 2441 subjects. We observed differences in the number of experimental groups and animals per group, insufficient reporting of dropout animals, and missing information on experimental details. Meta-analysis revealed differences in effect sizes across LED versus IAD stratifications, for instance, severe injuries had the largest effect size in LED (standardized mean difference [SMD = 4.92]), but mild injuries had the largest effect size in IAD (SMD = 6.06). 
Publications with smaller sample sizes yielded larger effect sizes, while studies with larger sample sizes had smaller effects. The results demonstrate the feasibility of combining IAD analysis with traditional LED meta-analysis to assess effect size reproducibility in SCI.
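Random-effects meta-analysis of the kind applied above is commonly done with the DerSimonian-Laird estimator. A minimal sketch with hypothetical study-level SMDs (not the article's data):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.
    `effects` are per-study SMDs, `variances` their sampling variances."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical SMDs from five pre-clinical studies (illustrative only).
smds = [4.9, 6.1, 3.2, 5.5, 2.8]
variances = [0.9, 1.4, 0.6, 1.1, 0.5]
pooled, se, tau2 = dersimonian_laird(smds, variances)
print(f"pooled SMD = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.2f}")
```

A nonzero tau^2 signals the kind of between-study effect-size variability that the LED-versus-IAD comparison is designed to probe.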

10.
R Soc Open Sci ; 11(7): 240125, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39050728

ABSTRACT

Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g. effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.

11.
Adv Exp Med Biol ; 1455: 171-195, 2024.
Article in English | MEDLINE | ID: mdl-38918352

ABSTRACT

A common research protocol in cognitive neuroscience is to train subjects to perform deliberately designed experiments while recording brain activity, with the aim of understanding the brain mechanisms underlying cognition. However, how the results of this protocol of research can be applied in technology is seldom discussed. Here, I review the studies on time processing of the brain as examples of this research protocol, as well as two main application areas of neuroscience (neuroengineering and brain-inspired artificial intelligence). Time processing is a fundamental dimension of cognition, and time is also an indispensable dimension of any real-world signal to be processed in technology. Therefore, one may expect that the studies of time processing in cognition profoundly influence brain-related technology. Surprisingly, I found that the results from cognitive studies on time processing are hardly helpful in solving practical problems. This awkward situation may be due to the lack of generalizability of the results of cognitive studies, which are conducted under well-controlled laboratory conditions, to real-life situations. This lack of generalizability may be rooted in the fundamental unknowability of the world (including cognition). Overall, this paper questions and criticizes the usefulness and prospect of the abovementioned research protocol of cognitive neuroscience. I then give three suggestions for future research. First, to improve the generalizability of research, it is better to study brain activity under real-life conditions instead of in well-controlled laboratory experiments. Second, to overcome the unknowability of the world, we can engineer an easily accessible surrogate of the object under investigation, so that we can predict the behavior of the object under investigation by experimenting on the surrogate. Third, the paper calls for technology-oriented research, with the aim of technology creation instead of knowledge discovery.


Subjects
Brain, Cognition, Thinking, Humans, Cognition/physiology, Brain/physiology, Thinking/physiology, Cognitive Neuroscience/methods, Artificial Intelligence, Time Perception/physiology
12.
Proc Natl Acad Sci U S A ; 121(26): e2311009121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38885376

ABSTRACT

Public and academic discourse on ageism focuses primarily on prejudices targeting older adults, implicitly assuming that this age group experiences the most age bias. We test this assumption in a large, preregistered study surveying Americans' explicit sentiments toward young, middle-aged, and older adults. Contrary to certain expectations about the scope and nature of ageism, responses from two crowdsourced online samples matched to the US adult population (N = 1,820) revealed that older adults garner the most favorable sentiments and young adults the least favorable ones. This pattern held across a wide range of participant demographics and outcome variables, in both samples. Signaling derogation of young adults more than benign liking of older adults, participants high on social dominance orientation (SDO), a key antecedent of group prejudice, expressed even less favorable sentiments toward young adults and even more favorable ones toward older adults. In two follow-up, preregistered forecasting surveys, lay participants (N = 500) were generally quite accurate at predicting these results; in contrast, social scientists (N = 241) underestimated how unfavorably respondents viewed young adults and how favorably they viewed older adults. In fact, the more expertise in ageism scientists had, the more biased their forecasts. In a rapidly aging world with exacerbated concerns over older adults' welfare, young adults also face increasing economic, social, political, and ecological hardship. Our findings highlight the need for policymakers and social scientists to broaden their understanding of age biases and to develop theory and policies that address discrimination targeting all age groups.


Subjects
Ageism, Humans, Ageism/psychology, Aged, Adult, United States, Middle Aged, Male, Female, Young Adult, Age Factors
13.
Perspect ASHA Spec Interest Groups ; 9(3): 836-852, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38912383

ABSTRACT

Purpose: One manifestation of systemic inequities in communication sciences and disorders (CSD) is the chronic underreporting and underrepresentation of sex, gender, race, and ethnicity in research. The present study characterized recent demographic reporting practices and representation of participants across CSD research. Methods: We systematically reviewed and extracted key reporting and participant data from empirical studies conducted in the United States (US) with human participants published in the year 2020 in journals by the American Speech-Language-Hearing Association (ASHA; k = 407 articles comprising a total n = 80,058 research participants, search completed November 2021). Sex, gender, race, and ethnicity were operationalized per National Institutes of Health guidelines (National Institutes of Health, 2015a, 2015b). Results: Sex or gender was reported in 85.5% of included studies; race was reported in 33.7%; and ethnicity was reported in 13.8%. Sex and gender were clearly differentiated in 3.4% of relevant studies. Where reported, median proportions for race and ethnicity were significantly different from the US population, with underrepresentation noted for all non-White racial groups and Hispanic participants. Moreover, 64.7% of studies that reported sex or gender and 67.2% of studies that reported race or ethnicity did not consider these respective variables in analyses or discussion. Conclusion: At present, research published in ASHA journals frequently fails to report key demographic data summarizing the characteristics of participants. Moreover, apparent gaps in representation of minoritized racial and ethnic groups threaten the external validity of CSD research and broader health care equity endeavors in the US. Although our study is limited to a single year and publisher, our results point to several steps for readers that may bring greater accountability, consistency, and diversity to the discipline.
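The comparison of sample proportions to the US population described above can be sketched with a one-sample z-test against a census benchmark. The numbers below are made up for illustration — they are not the study's method or data:

```python
import math

def proportion_z(observed, n, benchmark):
    """One-sample z-test of an observed participant proportion against a
    population benchmark (normal approximation)."""
    se = math.sqrt(benchmark * (1 - benchmark) / n)
    return (observed - benchmark) / se

# Hypothetical numbers: 6% Hispanic participants in a pooled sample of 2,000,
# against a roughly 19% Hispanic share of the US population.
z = proportion_z(observed=0.06, n=2000, benchmark=0.19)
print(f"z = {z:.1f}")  # strongly negative => underrepresentation
```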

14.
J Neurosurg ; 141(4): 887-894, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38728757

ABSTRACT

OBJECTIVE: Spin is characterized as a misinterpretation of results that, whether deliberate or unintentional, culminates in misleading conclusions and steers readers toward an excessively optimistic perspective of the data. The primary objective of this systematic review was to estimate the prevalence and nature of spin within the traumatic brain injury (TBI) literature. Additionally, the identification of associated factors is intended to provide guidance for future research practices. METHODS: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations were followed. A search of the MEDLINE/PubMed database was conducted to identify English-language articles published between January 1960 and July 2020. Inclusion criteria encompassed randomized controlled trials (RCTs) that exclusively enrolled TBI patients, investigated various interventions, whether surgical or nonsurgical, and were published in high-impact journals. Spin was defined as 1) a focus on statistically significant results not based on the primary outcome; 2) interpreting statistically nonsignificant results of a superiority analysis of the primary outcome as favorable; 3) claiming or emphasizing the beneficial effect of the treatment despite statistically nonsignificant results; 4) conclusions focused on the per-protocol or as-treated analysis instead of the intention-to-treat (ITT) results; 5) incorrect statistical analysis; or 6) republication of a significant secondary analysis without proper acknowledgment of the primary outcome analysis result. Primary outcomes were those explicitly reported as such in the published article. Studies without a clear primary outcome were excluded. The study characteristics were described using traditional descriptive statistics, and an exploratory inferential analysis was performed to identify characteristics associated with spin. The studies' risk of bias was evaluated with the Cochrane Risk of Bias Tool. RESULTS: A total of 150 RCTs were included, and 22% (n = 33) had spin, most commonly spin types 1 and 3. The overall risk of bias (p < 0.001), a neurosurgery department member as first author (p = 0.009), the absence of a statistician among the authors (p = 0.042), and smaller sample sizes (p = 0.033) were associated with spin. CONCLUSIONS: The prevalence of spin in the TBI literature is high, even in leading medical journals. Studies with a higher risk of bias are more frequently associated with spin. Critical interpretation of results and authors' conclusions is advisable regardless of the study design and publishing journal.


Subjects
Brain Injuries, Traumatic; Brain Injuries, Traumatic/epidemiology; Brain Injuries, Traumatic/therapy; Humans; Prevalence; Randomized Controlled Trials as Topic
15.
Perspect Psychol Sci ; : 17456916241252085, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38752984

ABSTRACT

We identify points of conflict and consensus regarding (a) controversial empirical claims and (b) normative preferences for how controversial scholarship (and scholars) should be treated. In 2021, we conducted qualitative interviews (n = 41) to generate a quantitative survey (N = 470) of U.S. psychology professors' beliefs and values. Professors strongly disagreed on the truth status of 10 candidate taboo conclusions: for each conclusion, some professors reported 100% certainty in its veracity and others 100% certainty in its falsehood. Professors more confident in the truth of the taboo conclusions reported more self-censorship, a pattern that could bias perceived scientific consensus regarding the inaccuracy of controversial conclusions. Almost all professors worried about social sanctions if they were to express their own empirical beliefs. Tenured professors reported as much self-censorship and as much fear of consequences as untenured professors, including fear of getting fired. Most professors opposed suppressing scholarship and punishing peers on the basis of moral concerns about research conclusions, and reported contempt for peers who petition to retract papers on moral grounds. Younger, more left-leaning, and female faculty were generally more opposed to controversial scholarship. These results do not resolve empirical or normative disagreements among psychology professors, but they may provide an empirical context for their discussion.

16.
Front Psychol ; 15: 1374330, 2024.
Article in English | MEDLINE | ID: mdl-38699572

ABSTRACT

Metascience scholars have long been concerned with tracking the use of rhetorical language in scientific discourse, oftentimes to analyze the legitimacy and validity of scientific claim-making. Psychology, however, has only recently become the explicit target of such metascientific scholarship, much of which has been in response to the recent crises surrounding replicability of quantitative research findings and questionable research practices. The focus of this paper is on the rhetoric of psychological measurement and validity scholarship, in both the theoretical/methodological and the empirical literatures. We examine various discourse practices in the published psychological measurement and validity literature, including: (a) clear instances of rhetoric (i.e., persuasion or performance); (b) common or rote expressions and tropes (e.g., perfunctory claims or declarations); (c) metaphors and other "literary" styles; and (d) ambiguous, confusing, or unjustifiable claims. Our methodological approach is informed by a combination of conceptual analysis and exploratory grounded theory, the latter of which we used to identify relevant themes within the published psychological discourse. Examples are given of both constructive and useful discourse practices and of misleading and potentially harmful ones. Our objectives are both to contribute to the critical methodological literature on psychological measurement and to connect metascience in psychology to broader interdisciplinary examinations of science discourse.

17.
Cogn Res Princ Implic ; 9(1): 27, 2024 05 03.
Article in English | MEDLINE | ID: mdl-38700660

ABSTRACT

The .05 boundary within Null Hypothesis Statistical Testing (NHST) "has made a lot of people very angry and been widely regarded as a bad move" (to quote Douglas Adams). Here, we move past meta-scientific arguments and ask an empirical question: What is the psychological standing of the .05 boundary for statistical significance? We find that graduate students in the psychological sciences show a boundary effect when relating p-values across .05. We propose this psychological boundary is learned through statistical training in NHST and reading a scientific literature replete with "statistical significance". Consistent with this proposal, undergraduates do not show the same sensitivity to the .05 boundary. Additionally, the size of a graduate student's boundary effect is not associated with their explicit endorsement of questionable research practices. These findings suggest that training creates distortions in initial processing of p-values, but these might be dampened through scientific processes operating over longer timescales.


Subjects
Statistics as Topic; Humans; Adult; Young Adult; Data Interpretation, Statistical; Male; Psychology; Female
18.
J Sports Sci ; 42(7): 566-573, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38767324

ABSTRACT

Sport and sports research are inherently complex systems. This appears to be somewhat at odds with the current research paradigm in sport, in which interventions are aimed at fixing or solving singular broken components within the system. In any complex system, such as sport, there are places where we can intervene to change behaviour and, ideally, system outcomes. Meadows' influential work describes 12 different points at which to intervene in complex systems (termed "Leverage Points"), ordered from shallow to deep based on their potential effectiveness in influencing transformational change. Whether research in sport is aimed at shallow or deeper Leverage Points is unknown. This study aimed to assess highly impactful research in sports science, sports nutrition/metabolism, sports medicine, sport and exercise psychology, sports management, motor control, sports biomechanics and sports policy/law through a Leverage Points lens. The 10 most highly cited original-research manuscripts from each journal representing these fields were analysed for the Leverage Point on which the intervention described in the manuscript was focused. The results indicate that highly impactful research in sports science, sports nutrition/metabolism, sports biomechanics and sports medicine is predominantly focused at the shallow end of the Leverage Points hierarchy. Conversely, the interventions drawn from journals representing sports management and sports policy/law were focused on the deeper end. Other journals analysed had a mixed profile. Explanations for these findings include the dual practitioner/academic needing to "think fast" to solve immediate questions in sports science/medicine/nutrition, limited engagement with "working slow" systems and method experts, and differences in incremental vs. non-incremental research strategies.


Subjects
Sports Medicine , Sports , Humans , Sports/physiology , Biomechanical Phenomena , Journal Impact Factor , Periodicals as Topic , Bibliometrics
19.
Int J Exerc Sci ; 17(7): 25-37, 2024.
Article in English | MEDLINE | ID: mdl-38666001

ABSTRACT

To demonstrate how post-publication peer reviews, using journal article reporting standards, could improve the design and write-up of kinesiology research, the authors performed a post-publication peer review on one systematic literature review published in 2020. Two raters (the first and second authors) critically appraised the case article between April and May 2021. The latest Journal Article Reporting Standards by the American Psychological Association relevant to the review were used: i.e., Table 1 (quantitative research standards) and Table 9 (research synthesis standards). A standard fully met was deemed satisfactory. Per Krippendorff's alpha coefficient, inter-rater agreement was moderate for Table 1 (k-alpha = .57, raw agreement = 72.2%) and poor for Table 9 (k-alpha = .09, raw agreement = 53.6%). A 100% consensus was reached on all discrepancies. Results suggest the case article's Abstract, Methods, and Discussion sections required clarification or more detail. Per Table 9 standards, four sections were largely incomplete: i.e., Abstract (100% incomplete), Introduction (66% incomplete), Methods (75% incomplete), and Discussion (66% incomplete). Case article strengths included the tabular summary of studies analysed in the systematic review and a cautionary comment about the review's generalizability. The article's write-up gave detail to help the reader understand the scope of the study and the decisions made by the authors. However, adequate detail was not provided to assess the credibility of all claims made in the article. This could affect readers' ability to obtain a critical and nuanced understanding of the article's topics. The results of this critique should encourage (continuing) education on journal article reporting standards for diverse stakeholders (e.g., authors, reviewers).
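The abstract above reports inter-rater agreement using Krippendorff's alpha. As an illustrative sketch only (not the authors' actual procedure or code), alpha for the simplest case they describe, two raters, nominal ratings, no missing values, can be computed from a coincidence matrix as follows; the function name is hypothetical:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(rater1, rater2):
    """Krippendorff's alpha for two raters, nominal data, no missing values."""
    # Coincidence matrix: each rated unit contributes both ordered pairs.
    pairs = Counter()
    for a, b in zip(rater1, rater2):
        pairs[(a, b)] += 1
        pairs[(b, a)] += 1
    n = sum(pairs.values())  # total pairable values = 2 * number of units
    # Marginal totals per category.
    totals = Counter()
    for (a, _b), count in pairs.items():
        totals[a] += count
    # Observed disagreement: proportion of mismatched pairs.
    d_obs = sum(count for (a, b), count in pairs.items() if a != b) / n
    # Expected disagreement under chance pairing of the marginals.
    d_exp = sum(totals[a] * totals[b]
                for a, b in permutations(totals, 2)) / (n * (n - 1))
    return 1 - d_obs / d_exp

# Perfect agreement between two raters yields alpha = 1.0.
print(krippendorff_alpha_nominal([1, 1, 0, 0], [1, 1, 0, 0]))
```

Real analyses (as in the article, which also handled a second standards table) would typically use a vetted statistical package rather than a hand-rolled function, since the full method generalises to many raters, other metrics, and missing data.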

20.
Perspect Psychol Sci ; 19(3): 590-601, 2024 May.
Article in English | MEDLINE | ID: mdl-38652780

ABSTRACT

In the spirit of America's Shakespeare, August Wilson (1997), I have written this article as a testimony to the conditions under which I, and too many others, engage in scholarly discourse. I hope to make clear from the beginning that although the ideas presented here are not entirely my own, as they have been inherited from the minority of scholars who dared and managed to bring the most necessary, unpalatable, and unsettling truths about our discipline to the broader scientific community, I do not write for anyone but myself and those scholars who have felt similarly marginalized, oppressed, and silenced. And I write as a race scholar, meaning simply that I believe that race, and racism, affects the sociopolitical conditions in which humans, and scholars, develop their thoughts, feelings, and actions. I believe that it is important for all scholars to have a basic understanding of these conditions, as well as the landmines and pitfalls that define them, as they shape how research is conducted, reviewed, and disseminated. I also believe that to evolve one's discipline into one that is truly robust and objective, it must first become diverse and self-aware. Any effort to suggest otherwise, no matter how scholarly it might present itself, is intellectually unsound.


Subjects
Cultural Diversity , Psychology , Humans , Racism , Politics