Results 1 - 20 of 28
1.
Behav Res Methods ; 46(1): 1-14, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23661222

ABSTRACT

We develop a general measure of estimation accuracy for fundamental research designs, called v. The v measure compares the estimation accuracy of the ubiquitous ordinary least squares (OLS) estimator, which includes sample means as a special case, with a benchmark estimator that randomizes the direction of treatment effects. For sample and effect sizes common to experimental psychology, v suggests that OLS produces estimates that are insufficiently accurate for the type of hypotheses being tested. We demonstrate how v can be used to determine sample sizes to obtain minimum acceptable estimation accuracy. Software for calculating v is included as online supplemental material (R Core Team, 2012).


Subject(s)
Data Interpretation, Statistical; Least-Squares Analysis; Psychology, Experimental/methods; Psychology, Experimental/standards; Software; Analysis of Variance; Benchmarking/methods; Humans; Meta-Analysis as Topic; Regression Analysis; Reproducibility of Results; Research Design; Sample Size; Software Design
2.
Psychon Bull Rev ; 31(4): 1461-1470, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38148472

ABSTRACT

Psychometrics is historically grounded in the study of individual differences. Consequently, common metrics such as quantitative validity and reliability require between-person variance in a psychological variable to be meaningful. Experimental psychology, in contrast, deals with variance between treatments, and experiments often strive to minimise within-group person variance. In this article, I ask whether and how psychometric evaluation can be performed in experimental psychology. A commonly used strategy is to harness between-person variance in the treatment effect. Using simulated data, I show that this approach can be misleading when between-person variance is low, and in the face of methods variance. I argue that this situation is common in experimental psychology, because low between-person variance is desirable, and because methods variance is no more problematic in experimental settings than any other source of between-person variance. By relating validity and reliability to the corresponding concepts in measurement science outside psychology, I show how experiment-based calibration can serve to compare the psychometric quality of different measurement methods in experimental psychology.


Subject(s)
Psychology, Experimental; Psychometrics; Psychometrics/standards; Psychometrics/instrumentation; Humans; Reproducibility of Results; Psychology, Experimental/standards; Calibration
3.
Behav Res Methods ; 45(4): 1144-58, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23344740

ABSTRACT

The aim of the present study was to provide normative data for the Croatian language using 346 visually presented objects (Cycowicz, Friedman, Rothstein, & Snodgrass Journal of Experimental Child Psychology 65:171-237, 1997; Roach, Schwartz, Martin, Grewal, & Brecher Clinical Aphasiology 24:121-133, 1996; Snodgrass & Vanderwart Journal of Experimental Psychology: Human Learning and Memory 6:174-215, 1980). Picture naming was standardized according to seven variables: naming latency, name agreement, familiarity, visual complexity, word length, number of syllables, and word frequency. The descriptive statistics and correlation pattern of the variables collected in the present study were consistent with normative studies in other languages. These normative data for pictorial stimuli named by young healthy Croatian native speakers will be useful in studies of perception, language, and memory, as well as for preoperative and intraoperative mapping of speech and language brain areas.


Subject(s)
Language Tests/standards; Language; Names; Task Performance and Analysis; Visual Perception; Adult; Croatia; Female; Humans; Male; Memory; Psychology, Experimental/instrumentation; Psychology, Experimental/standards; Reaction Time; Recognition, Psychology; Reference Values; Young Adult
4.
Psychol Rep ; 112(1): 184-201, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23654036

ABSTRACT

Experimental research in psychology is based on a causal model--the General Linear Model (GLM)--that assumes behavior has causes but not purposes. Powers (1978) used a control theory analysis to show that the results of psychological experiments based on such a model can be misleading if the organisms being studied are purposeful (control) systems. In the same paper, Powers presented evidence that organisms are such systems. Nevertheless, psychologists continue to use methods that ignore purpose because the behavior in most experiments appears to be non-purposeful (a caused result of variations in the independent variable). The experiments described in this paper show how purposeful behavior can appear to be caused by the independent variable when an organism's purposes are ignored. The results show how taking purpose into account using the control theory-based "Test for the Controlled Variable" can provide a productive new methodological direction for experimental research in psychology.


Subject(s)
Goals; Models, Psychological; Psychological Theory; Psychology, Experimental/standards; Research Design/standards; Animals; Humans; Psychology, Experimental/methods
5.
Psychol Methods ; 14(1): 1-5, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19271844

ABSTRACT

L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect size. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of sample size requirements for a standardized difference between 2 means in between-subjects designs. Sample size formulas are derived here for general standardized linear contrasts of k ≥ 2 means for both between-subjects designs and within-subjects designs. Special sample size formulas also are derived for the standardizer proposed by G. V. Glass (1976).
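The sample-size logic this abstract describes can be sketched numerically. The snippet below is not the paper's derived formula; it is a generic illustration, using the standard large-sample variance approximation for a standardized difference between two independent means, of how a per-group sample size can be chosen so that the confidence interval is no wider than desired. The function names and the search-by-increment approach are this sketch's own.

```python
import math

def ci_halfwidth(d, n_per_group, z=1.96):
    """Approximate CI half-width for a standardized mean difference d,
    two independent groups of equal size n.

    Uses the large-sample variance approximation
    var(d) ~ 2/n + d^2/(4n).
    """
    var_d = 2.0 / n_per_group + d**2 / (4.0 * n_per_group)
    return z * math.sqrt(var_d)

def n_for_halfwidth(d, target, z=1.96):
    """Smallest per-group n whose approximate CI half-width <= target."""
    n = 2
    while ci_halfwidth(d, n, z) > target:
        n += 1
    return n

# For d = 0.5 and a desired half-width of 0.2, roughly 199 per group:
n_needed = n_for_halfwidth(0.5, 0.2)
```

An iterative search like Kelley and Rausch's table-generation approach is used here for transparency; a closed-form formula of the kind the paper derives would replace the loop.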


Subject(s)
Confidence Intervals; Linear Models; Psychology, Experimental/statistics & numerical data; Humans; Models, Psychological; Psychology, Experimental/standards; Research Design/standards; Research Design/statistics & numerical data; Sample Size
6.
Psychol Methods ; 14(3): 225-38, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19719359

ABSTRACT

The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected studies represent a random sample from a large superpopulation of studies. The RE approach cannot be justified in typical meta-analysis applications in which studies are nonrandomly selected. New FE meta-analytic confidence intervals for unstandardized and standardized mean differences are proposed that are easy to compute and perform properly under effect-size heterogeneity and nonrandomly selected studies. The proposed meta-analytic confidence intervals may be used to combine unstandardized or standardized mean differences from studies having either independent samples or dependent samples and may also be used to integrate results from previous studies into a new study. An alternative approach to assessing effect-size heterogeneity is presented.


Subject(s)
Confidence Intervals; Data Interpretation, Statistical; Meta-Analysis as Topic; Models, Statistical; Psychology, Experimental/standards; Antidepressive Agents/therapeutic use; Clinical Trials as Topic/statistics & numerical data; Crime/psychology; Crime/statistics & numerical data; Depressive Disorder/drug therapy; Humans; Migraine Disorders/therapy; Recognition, Psychology; Relaxation Therapy/statistics & numerical data
7.
J Neurosci Methods ; 173(2): 235-40, 2008 Aug 30.
Article in English | MEDLINE | ID: mdl-18606188

ABSTRACT

The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
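As a rough illustration of the measurement problem this abstract raises, the sketch below times a nominal delay on a non-real-time operating system and reports the overshoot. The function name and parameters are hypothetical; a real psychophysics toolkit would rely on hardware timestamps and display-refresh synchronization rather than `time.sleep`.

```python
import time

def timing_error_ms(target_ms=10.0, reps=200):
    """Measure how far a nominal delay drifts past its target -- the kind
    of scheduler jitter that must be handled before a high-level
    environment can reach 1 ms accuracy. Returns (max, mean) overshoot
    in milliseconds. Illustrative measurement only."""
    errors = []
    for _ in range(reps):
        t0 = time.perf_counter()
        time.sleep(target_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        errors.append(elapsed_ms - target_ms)
    return max(errors), sum(errors) / len(errors)
```

Because `time.sleep` never returns early, the overshoot is nonnegative; its magnitude and variability are what a behavioral timing system must measure and compensate for.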


Subject(s)
Behavior, Animal/physiology; Behavior/physiology; Behavioral Sciences/methods; Psychophysics/methods; Software Design; Software/trends; Animals; Behavioral Sciences/instrumentation; Brain/physiology; Humans; Microcomputers; Neurophysiology/instrumentation; Neurophysiology/methods; Programming Languages; Psychology, Experimental/standards; Psychology, Experimental/trends; Psychophysics/instrumentation; Reaction Time/physiology; Signal Processing, Computer-Assisted/instrumentation; Software/standards; Software Validation; Time Factors; User-Computer Interface
8.
J Gen Psychol ; 135(1): 105-12, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18318411

ABSTRACT

In a recent article in The Journal of General Psychology, J. B. Hittner, K. May, and N. C. Silver (2003) described their investigation of several methods for comparing dependent correlations and found that all can be unsatisfactory, in terms of Type I errors, even with a sample size of 300. More precisely, when researchers test at the .05 level, the actual Type I error probability can exceed .10. The authors of this article extended J. B. Hittner et al.'s research by considering a variety of alternative methods. They found 3 that avoid inflating the Type I error rate above the nominal level. However, a Monte Carlo simulation demonstrated that when the underlying distribution of scores violated the assumption of normality, 2 of these methods had relatively low power and had actual Type I error rates well below the nominal level. The authors report comparisons with E. J. Williams' (1959) method.


Subject(s)
Data Interpretation, Statistical; Psychology, Experimental/standards; Bias; Humans; Monte Carlo Method; Reproducibility of Results; Research Design
9.
Article in English | MEDLINE | ID: mdl-29309191

ABSTRACT

In this inaugural editorial, Isabel Gauthier, the incoming editor of the Journal of Experimental Psychology: Human Perception and Performance, introduces the new team of associate editors and discusses increasing women's representation on the editorial board as one of her first priorities. Gauthier also discusses her concerns with the publication process and her beliefs about the changes that may be needed.


Subject(s)
Data Interpretation, Statistical; Editorial Policies; Periodicals as Topic; Psychology, Experimental/standards; Sample Size; Humans
10.
Psychon Bull Rev ; 24(2): 607-616, 2017 04.
Article in English | MEDLINE | ID: mdl-27503194

ABSTRACT

In psychology, the reporting of variance-accounted-for effect size indices has been recommended and widely accepted through the movement away from null hypothesis significance testing. However, most researchers have paid insufficient attention to the fact that effect sizes depend on the choice of the number of levels and their ranges in experiments. Moreover, the functional form describing how, and how much, this choice affects the resultant effect size has not thus far been studied. We show that the relationship between the population effect size and the number and range of levels is given as an explicit function under reasonable assumptions. Counterintuitively, researchers may double or halve the resultant effect size simply by a suitable choice of the number of levels and their ranges. Through a simulation study, we confirm that this relation also applies to sample effect size indices in much the same way. The variance-accounted-for effect size is therefore substantially affected by basic features of the research design such as the number of levels. Simple cross-study comparisons and meta-analyses of variance-accounted-for effect sizes would generally be irrational unless differences in research designs are explicitly considered.
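The dependence this abstract describes can be made concrete with a minimal sketch, under the simplifying assumption of a linear true effect with unit error variance. The levels, slope, and function name below are this sketch's own illustration, not the paper's model.

```python
from statistics import pvariance

def population_eta_squared(levels, sigma=1.0, slope=1.0):
    """Variance-accounted-for effect size (eta squared) when the true
    effect is linear, mu(x) = slope * x, evaluated at the chosen factor
    levels. The levels are a design choice, yet they change the
    population effect size."""
    mus = [slope * x for x in levels]
    between = pvariance(mus)          # variance of the level means
    return between / (between + sigma**2)

# Same true linear effect, two different designs:
narrow = population_eta_squared([-1, 0, 1])   # levels close together, ~0.40
wide   = population_eta_squared([-2, 0, 2])   # levels spread out, ~0.73
```

Nothing about the underlying effect differs between the two calls; only the spacing of the levels does, yet the "effect size" nearly doubles, which is the comparability problem the paper raises for cross-study meta-analysis.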


Subject(s)
Data Interpretation, Statistical; Psychology, Experimental/methods; Research Design/standards; Humans; Psychology, Experimental/standards
11.
Integr Psychol Behav Sci ; 49(3): 360-70, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25939530

ABSTRACT

In this paper I discuss the relevance of the single-case approach in psychological research. Based upon work by Hurtado-Parrado and López-López (Integrative Psychological and Behavioral Science, 2015), who outlined the possibility that Single-Case Methods (SCMs) could be a valid alternative to Null Hypothesis Significance Testing (NHST), I introduce the idiographic approach (Salvatore and Valsiner, Theory & Psychology, 20(6), 817-833, 2010; Valsiner, Culture & Psychology, 20(2), 147-159, 2014; Salvatore, Culture & Psychology, 20(4), 477-500, 2014), which is based on the logic of abductive rather than inductive generalization. I present the theoretical, epistemological, and methodological assumptions of this approach; in particular, I discuss the re-conceptualization of some now-obsolete rigid oppositions, the inconsistency of sample use in psychological research, the relationship between uniqueness and generality, the relationship between theory and phenomena, and finally the validation process.


Subject(s)
Behavioral Research/methods; Psychology, Experimental/methods; Research Design/standards; Behavioral Research/standards; Humans; Psychology, Experimental/standards
12.
Exp Psychol ; 49(4): 243-56, 2002.
Article in English | MEDLINE | ID: mdl-12455331

ABSTRACT

This article summarizes expertise gleaned from the first years of Internet-based experimental research and presents recommendations on: (1) ideal circumstances for conducting a study on the Internet; (2) what precautions have to be undertaken in Web experimental design; (3) which techniques have proven useful in Web experimenting; (4) which frequent errors and misconceptions need to be avoided; and (5) what should be reported. Procedures and solutions for typical challenges in Web experimenting are discussed. Topics covered include randomization, recruitment of samples, generalizability, dropout, experimental control, identity checks, multiple submissions, configuration errors, control of motivational confounding, and pre-testing. Several techniques are explained, including "warm-up," "high hurdle," password methods, "multiple site entry," randomization, and the use of incentives. The article concludes by proposing sixteen standards for Internet-based experimenting.


Subject(s)
Internet; Psychology, Experimental/standards; Humans; Quality Control; Research Design/standards
13.
Am J Psychol ; 114(2): 283-302, 2001.
Article in English | MEDLINE | ID: mdl-11430152

ABSTRACT

This article traces the historical origin of social experimentation. It highlights the central role of psychology in establishing the randomized controlled design and its quasi-experimental derivatives. The author investigates the differences in the 19th- and 20th-century meaning of the expression "social experiment." She rejects the image of neutrality of social experimentation, arguing that its 20th-century advocates promoted specific representations of cognitive competence and moral trustworthiness. More specifically, she demonstrates that the randomized controlled experiment and its quasi-experimental derivatives epitomize the values of efficiency and impersonality characteristic of the liberal variation of the 20th-century welfare state.


Subject(s)
Psychology, Experimental/history; Randomized Controlled Trials as Topic/history; Ethics, Medical/history; Europe; History, 19th Century; History, 20th Century; Humans; Program Evaluation; Psychology, Experimental/standards; Quality Assurance, Health Care/history; Randomized Controlled Trials as Topic/standards; Social Sciences/history; United States
14.
Psychon Bull Rev ; 21(5): 1180-7, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24638826

ABSTRACT

Recent controversies have questioned the quality of scientific practice in the field of psychology, but these concerns are often based on anecdotes and seemingly isolated cases. To gain a broader perspective, this article applies an objective test for excess success to a large set of articles published in the journal Psychological Science between 2009 and 2012. When empirical studies succeed at a rate much higher than is appropriate for the estimated effects and sample sizes, readers should suspect that unsuccessful findings have been suppressed, the experiments or analyses were improper, or the theory does not properly account for the data. In total, problems appeared for 82% (36 of 44) of the articles in Psychological Science that had four or more experiments and could be analyzed.
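The core logic of an excess-success test can be illustrated with a toy calculation. The power values and the 0.1 flagging criterion below are hypothetical stand-ins; the actual test estimates each experiment's power from its reported effect size and sample size.

```python
from math import prod

def joint_success_probability(powers):
    """Probability that every experiment in a multi-experiment article
    rejects the null, assuming independent experiments with the given
    estimated powers."""
    return prod(powers)

def excess_success(powers, criterion=0.1):
    """Flag an article whose uniformly successful outcome is improbably
    lucky given the experiments' estimated powers. The criterion value
    here is illustrative."""
    return joint_success_probability(powers) < criterion

# Four experiments, each with roughly 60% estimated power, all reported
# as successful -- jointly only about a 13% chance:
p_all = joint_success_probability([0.6, 0.6, 0.6, 0.6])
```

The point is that a string of modestly powered experiments that all succeed is itself an unlikely event, so an article reporting one deserves scrutiny even if each individual result looks routine.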


Subject(s)
Periodicals as Topic/statistics & numerical data; Psychology, Experimental; Statistics as Topic/standards; Bias; Data Interpretation, Statistical; Humans; Models, Statistical; Psychology, Experimental/methods; Psychology, Experimental/standards; Psychology, Experimental/statistics & numerical data; Sample Size
15.
Psychon Bull Rev ; 19(2): 151-6, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22351589

ABSTRACT

Empirical replication has long been considered the final arbiter of phenomena in science, but replication is undermined when there is evidence for publication bias. Evidence for publication bias in a set of experiments can be found when the observed number of rejections of the null hypothesis exceeds the expected number of rejections. Application of this test reveals evidence of publication bias in two prominent investigations from experimental psychology that have purported to reveal evidence of extrasensory perception and to indicate severe limitations of the scientific method. The presence of publication bias suggests that those investigations cannot be taken as proper scientific studies of such phenomena, because critical data are not available to the field. Publication bias could partly be avoided if experimental psychologists started using Bayesian data analysis techniques.


Subject(s)
Psychology, Experimental/methods; Publication Bias; Bayes Theorem; Humans; Memory; Parapsychology; Psychology, Experimental/standards; Sample Size
16.
Psychon Bull Rev ; 19(3): 395-404, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22441956

ABSTRACT

Repeated measures designs are common in experimental psychology. Because of the correlational structure in these designs, the calculation and interpretation of confidence intervals is nontrivial. One solution was provided by Loftus and Masson (Psychonomic Bulletin & Review 1:476-490, 1994). This solution, although widely adopted, has the limitation of implying same-size confidence intervals for all factor levels, and therefore does not allow for the assessment of variance homogeneity assumptions (i.e., the circularity assumption, which is crucial for the repeated measures ANOVA). This limitation and the method's perceived complexity have sometimes led scientists to use a simplified variant, based on a per-subject normalization of the data (Bakeman & McArthur, Behavior Research Methods, Instruments, & Computers 28:584-589, 1996; Cousineau, Tutorials in Quantitative Methods for Psychology 1:42-45, 2005; Morey, Tutorials in Quantitative Methods for Psychology 4:61-64, 2008; Morrison & Weaver, Behavior Research Methods, Instruments, & Computers 27:52-56, 1995). We show that this normalization method leads to biased results and is uninformative with regard to circularity. Instead, we provide a simple, intuitive generalization of the Loftus and Masson method that allows for assessment of the circularity assumption.
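For reference, the per-subject normalization this abstract critiques can be written in a few lines. This is a sketch of the commonly used simplified variant (Cousineau, 2005), not the generalization the authors propose; the function name is this sketch's own.

```python
def cousineau_normalize(data):
    """Per-subject normalization for repeated-measures data.

    data: list of per-subject rows, one score per condition. Each score
    is shifted so that every subject's row mean equals the grand mean,
    removing between-subject variance before condition-wise confidence
    intervals are computed. This is the variant the article shows to be
    biased and uninformative about circularity.
    """
    grand = sum(sum(row) for row in data) / sum(len(row) for row in data)
    out = []
    for row in data:
        subject_mean = sum(row) / len(row)
        out.append([x - subject_mean + grand for x in row])
    return out
```

The transformation leaves every condition mean untouched while equalizing subject means, which is why intervals computed from the normalized scores reflect only within-subject variability, and why, per the article, they cannot speak to the circularity assumption.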


Subject(s)
Bias; Confidence Intervals; Psychology, Experimental/methods; Research Design; Humans; Psychology, Experimental/standards; Psychology, Experimental/statistics & numerical data
17.
J Exp Anal Behav ; 97(3): 347-55, 2012 May.
Article in English | MEDLINE | ID: mdl-22693363

ABSTRACT

Verbal behavior, as in the use of terms, is an important part of scientific activity in general and behavior analysis in particular. Many glossaries and dictionaries of behavior analysis have been published in English, but few in any other language. Here we review the area of behavior analytic terminology, its translations, and development in languages other than English. As an example, we use our own mother tongue, Finnish, which provides a suitable example of the process of translation and development of behavior analytic terminology, because it differs from Indo-European languages and entails specific advantages and challenges in the translation process. We have published three editions of a general dictionary of behavior analysis including 801 terms relevant to the experimental analysis of behavior and applied behavior analysis and one edition of a dictionary of applied and clinical behavior analysis containing 280 terms. Because this work has been important to us, we hope this review will encourage similar work by behavior analysts in other countries whose native language is not English. Behavior analysis as an advanced science deserves widespread international dissemination and proper translations are essential to that goal.


Subject(s)
Behavioral Sciences; Terminology as Topic; Translating; Animals; Behavioral Sciences/organization & administration; Behavioral Sciences/standards; Dictionaries as Topic; Environment; Humans; Psychology, Experimental/organization & administration; Psychology, Experimental/standards; Vocabulary
18.
J Gen Psychol ; 139(4): 260-72, 2012.
Article in English | MEDLINE | ID: mdl-24837177

ABSTRACT

A Monte Carlo simulation was conducted to assess the extent to which a correlation estimate can be inflated when an average-based measure is used in a commonly employed correlational design. The results from the simulation reveal that the inflation of the correlation estimate can be substantial, up to 76%. Additionally, data were re-analyzed from two previously published studies to determine the extent to which the correlation estimate was inflated due to the use of an average-based measure. The re-analyses reveal that correlation estimates had been inflated by just over 50% in both studies. Although these findings are disconcerting, we are somewhat comforted by the fact that there is a simple analysis that can be employed to prevent the inflation of the correlation estimate that we have simulated and observed.
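One way such inflation can arise is sketched below: averaging many noisy trials per subject strips out trial-level noise, so the correlation computed from the averages far exceeds the correlation present in the underlying trial-level data. All parameters, names, and the generative model here are hypothetical illustrations, not the paper's simulation design.

```python
import random
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

random.seed(1)
n_subjects, n_trials = 200, 25

# Each subject has a true score; the criterion and every trial-level
# observation are noisy reflections of it.
true_scores = [random.gauss(0, 1) for _ in range(n_subjects)]
criterion = [t + random.gauss(0, 1) for t in true_scores]
trials = [[t + random.gauss(0, 3) for _ in range(n_trials)]
          for t in true_scores]

r_single = pearson(criterion, [row[0] for row in trials])          # one trial per subject
r_average = pearson(criterion, [sum(row) / len(row) for row in trials])  # average-based measure
```

With these (arbitrary) noise levels the average-based correlation comes out several times larger than the single-trial one, illustrating how an average-based measure can overstate the association present in the raw observations.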


Subject(s)
Data Interpretation, Statistical; Statistics as Topic; Humans; Monte Carlo Method; Psychology, Experimental/methods; Psychology, Experimental/standards; Statistics as Topic/methods; Statistics as Topic/standards