Results 1 - 5 of 5
1.
Eur Spine J ; 32(9): 3009-3014, 2023 09.
Article in English | MEDLINE | ID: mdl-37306800

ABSTRACT

BACKGROUND: Recent signs of fraudulent behaviour in spine RCTs have raised questions about the integrity of trials in the field. RCTs are particularly important because of the weight they are accorded in guiding treatment decisions, so ensuring their reliability is crucial. This study investigates the presence of non-random baseline frequency data in purported RCTs published in spine journals. METHODS: A PubMed search was performed to obtain all RCTs published in four spine journals (Spine, The Spine Journal, the Journal of Neurosurgery Spine, and European Spine Journal) between Jan-2016 and Dec-2020. Baseline frequency data were extracted, and variable-wise p values were calculated using the Pearson Chi-squared test. These p values were combined for each study into study-wise p values using the Stouffer method. Studies with p values below 0.01 and 0.05 and those above 0.95 and 0.99 were reviewed. Results were compared to Carlisle's 2017 survey of anaesthesia and critical care medicine RCTs. RESULTS: One hundred sixty-seven of the 228 studies identified were included. Study-wise p values were largely consistent with genuine randomized experiments. Slightly more study-wise p values above 0.99 were observed than expected, but a number of these had good explanations accounting for the excess. The distribution of observed study-wise p values matched the expected distribution more closely than in a similar survey of the anaesthesia and critical care medicine literature. CONCLUSION: The data surveyed show no evidence of systemic fraudulent behaviour. Spine RCTs in major spine journals were found to be consistent with genuine random allocation and experimentally derived data.


Subject(s)
Anesthesia , Neurosurgical Procedures , Humans , Reproducibility of Results
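The survey's core computation (variable-wise Pearson chi-squared p values pooled into a study-wise p value with the Stouffer method) can be sketched as follows. This is my reconstruction, not the authors' code, and the baseline tables are hypothetical:

```python
# Sketch of the combination step: per-variable chi-squared p-values
# pooled into one study-wise p-value via Stouffer's method.
import numpy as np
from scipy.stats import chi2_contingency, norm

def baseline_pvalue(table):
    """Pearson chi-squared p-value for one baseline frequency table
    (rows = categories, columns = trial arms)."""
    _, p, _, _ = chi2_contingency(np.asarray(table), correction=False)
    return p

def stouffer(pvalues):
    """Combine independent p-values into a single study-wise p-value."""
    z = norm.isf(np.asarray(pvalues))          # z_i = Phi^-1(1 - p_i)
    return norm.sf(z.sum() / np.sqrt(len(z)))  # 1 - Phi(sum z / sqrt(k))

# Hypothetical two-arm trial with two baseline variables.
sex   = [[48, 52], [52, 48]]   # male/female counts per arm
smoke = [[30, 28], [70, 72]]   # smoker/non-smoker counts per arm
p_study = stouffer([baseline_pvalue(sex), baseline_pvalue(smoke)])
```

Under genuine randomization, study-wise p values computed this way should be roughly uniform on (0, 1); the survey flags the extreme tails.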
2.
Annu Rev Psychol ; 67: 1-21, 2016.
Article in English | MEDLINE | ID: mdl-26361053

ABSTRACT

Throughout my career, I have pursued three theories related to intergroup prejudice--each with a different mentor. Each theory and its supporting research help us to understand prejudice and ways to ameliorate the problem. This autobiographical review article summarizes some of the advances in these three areas during the past six decades. For authoritarianism, the article advocates removing political content from its measurement, linking it with threat and dismissive-avoidant attachment, and studying how authoritarians avoid intergroup contact. Increased work on relative deprivation made possible an extensive meta-analysis that shows the theory, when appropriately measured, has far broader effects than previously thought. Increased research attention to intergroup contact similarly made possible a meta-analysis that established the pervasive effectiveness of intergroup contact to reduce prejudice under a wide range of conditions. The article closes by demonstrating how the three theories relate to each other and contribute to our understanding of prejudice and its reduction.


Subject(s)
Authoritarianism , Group Processes , Interpersonal Relations , Prejudice/psychology , Social Identification , History, 20th Century , History, 21st Century , Humans , Psychology, Social , Social Perception
3.
Front Hum Neurosci ; 12: 103, 2018.
Article in English | MEDLINE | ID: mdl-29615885

ABSTRACT

Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This "naive" approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment.
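One simple way to exploit within-subject variance, in the spirit of the sufficient-summary-statistic approach the review describes, is inverse-variance weighting of subject-level means. The sketch below is my own, on simulated data, and is not the authors' exact procedure; it contrasts the weighted estimate with the naive group-level t-test:

```python
# Naive group-level t-test vs. a precision-weighted alternative on
# simulated nested data (unequal trial counts and noise per subject).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = rng.integers(20, 200, size=12)   # 12 subjects

subj_means, subj_vars = [], []
for n in n_trials:
    trials = rng.normal(loc=0.3, scale=2.0, size=n)  # true effect 0.3
    subj_means.append(trials.mean())
    subj_vars.append(trials.var(ddof=1) / n)  # variance of the subject mean

# "Naive" approach: treat subject means as i.i.d. samples.
t_naive, p_naive = stats.ttest_1samp(subj_means, 0.0)

# Weighted approach: weight each subject by the precision of its mean,
# so subjects with few/noisy trials contribute less.
w = 1.0 / np.asarray(subj_vars)
pooled = np.sum(w * subj_means) / np.sum(w)
z = pooled / np.sqrt(1.0 / np.sum(w))
p_weighted = 2 * stats.norm.sf(abs(z))
```

The weighted z-test treats the within-subject variances as known, which is the key simplification that makes the summary statistics "sufficient"; the review discusses when that assumption is reasonable.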

4.
Infect Genet Evol ; 22: 91-3, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24444592

ABSTRACT

In population genetics data analysis, researchers often face the problem of making a decision from a series of tests of the same null hypothesis. This is the case, for example, when one wants to test differentiation between pathogens found on different host species sampled from different locations (as many tests as there are locations). Many procedures are available to date, but not all apply to all situations: identifying which individual tests are significant, deciding whether the whole series is significant, and handling independent versus non-independent tests each call for different procedures. In this note I describe several of the simplest and easiest procedures to undertake, which should allow decision making in most (if not all) situations population geneticists (or biologists) are likely to meet, in particular in host-parasite systems.


Subject(s)
Genetics, Population/methods , Models, Genetic , Models, Statistical
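Two of the simplest procedures of the kind discussed in this note can be sketched for a series of independent tests: Fisher's method answers whether the series as a whole is significant, while the Benjamini-Hochberg step-up procedure decides which individual tests are significant while controlling the false discovery rate. The p values below are hypothetical (one differentiation test per sampling location):

```python
# Global vs. per-test decisions for a series of independent tests.
import numpy as np
from scipy.stats import chi2

p = np.array([0.001, 0.008, 0.03, 0.2, 0.6, 0.04])
k = len(p)

# Fisher's method: -2 * sum(ln p_i) ~ chi-squared with 2k df under H0.
p_global = chi2.sf(-2 * np.log(p).sum(), df=2 * k)

# Benjamini-Hochberg: largest i with p_(i) <= q*i/k; reject the i smallest.
q = 0.05
order = np.argsort(p)
thresh = q * np.arange(1, k + 1) / k
passed = p[order] <= thresh
n_sig = passed.nonzero()[0].max() + 1 if passed.any() else 0
significant = np.zeros(k, dtype=bool)
significant[order[:n_sig]] = True
```

Both procedures assume the tests are independent; as the note emphasizes, dependent series require different corrections.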
5.
Ann Appl Stat ; 8(4): 2150-2174, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25541588

ABSTRACT

Microarray analysis to monitor expression activities in thousands of genes simultaneously has become routine in biomedical research during the past decade. A tremendous number of expression profiles have been generated and stored in the public domain, and information integration by meta-analysis to detect differentially expressed (DE) genes has become popular as a way to obtain increased statistical power and validated findings. Methods that aggregate transformed p-value evidence have been widely used in genomic settings, among which Fisher's and Stouffer's methods are the most popular. In practice, raw data and p-values of DE evidence are often not available in the genomic studies to be combined. Instead, only the detected DE gene lists under a certain p-value threshold (e.g., DE genes with p-value < 0.001) are reported in journal publications. The truncated p-value information makes the aforementioned meta-analysis methods inapplicable, and researchers are forced to apply a less efficient vote-counting method or naïvely drop the studies with incomplete information. The purpose of this paper is to develop effective meta-analysis methods for such situations with partially censored p-values. We developed and compared three imputation methods (mean imputation, single random imputation and multiple imputation) for a general class of evidence aggregation methods of which Fisher's and Stouffer's methods are special examples. The null distribution of each method was analytically derived, and subsequent inference and genomic analysis frameworks were established. Simulations were performed to investigate the type I error, power and the control of false discovery rate (FDR) for (correlated) gene expression data. The proposed methods were applied to several genomic applications in colorectal cancer, pain and liquid association analysis of major depressive disorder (MDD). The results showed that imputation methods outperformed existing naïve approaches. Mean imputation and multiple imputation performed the best and are recommended for future applications.
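The mean-imputation idea for Stouffer's method can be sketched as follows. This is a loose reconstruction under the null hypothesis, not the paper's exact derivation: a p-value censored at threshold alpha is known only to satisfy p >= alpha, so its z-score z = Phi^-1(1-p) lies below c = Phi^-1(1-alpha), and mean imputation replaces it with the conditional expectation E[Z | Z <= c] = -phi(c)/Phi(c). Function names and the simple sqrt(k) scaling are illustrative only:

```python
# Mean imputation of censored z-scores before a Stouffer combination.
import numpy as np
from scipy.stats import norm

def stouffer_mean_imputed(observed_p, n_censored, alpha):
    z = norm.isf(np.asarray(observed_p))   # observed z-scores
    c = norm.isf(alpha)                    # censoring point in z-space
    z_imp = -norm.pdf(c) / norm.cdf(c)     # E[Z | Z <= c] under H0
    k = len(z) + n_censored
    z_sum = z.sum() + n_censored * z_imp
    # Caveat: after imputation the summands are no longer standard normal,
    # so the reference distribution must be adjusted (the paper derives it
    # analytically); the naive sqrt(k) scaling here is illustrative only.
    return norm.sf(z_sum / np.sqrt(k))

# A gene observed at p = 0.0005 in two studies and censored
# (reported only as p >= 0.001) in a third:
p_combined = stouffer_mean_imputed([0.0005, 0.0005], n_censored=1, alpha=0.001)
```

With no censored studies the function reduces to the ordinary Stouffer combination, which is a useful sanity check.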
