Results 1 - 20 of 48
1.
PLoS One ; 15(11): e0242271, 2020.
Article in English | MEDLINE | ID: mdl-33186405

ABSTRACT

Prior research has shown a serious lack of research transparency resulting from the failure to publish study results in a timely manner. The National Institutes of Health (NIH) has increased its use of publication rate and time to publication as metrics for grant productivity. In this study, we analyze the publications associated with all R01 and U01 grants funded from 2008 through 2014, providing sufficient time for these grants to publish their findings, and identify predictors of time to publication based on a number of variables, including whether a grant was coded as a behavioral and social sciences research (BSSR) grant or not. Overall, 2.4% of the 27,016 R01 and U01 grants did not have a publication associated with the grant within 60 months of the project start date, and this rate of zero publications was higher for BSSR grants (4.6%) than for non-BSSR grants (1.9%). Mean time to first publication was 15.2 months, and was longer for BSSR grants (22.4 months) than for non-BSSR grants (13.6 months). Survival curves showed that non-BSSR grants reached first publication more rapidly than BSSR grants. Cox regression models showed that human research (vs. animal, neither, or both) and clinical trials research (vs. not) are the strongest predictors of time to publication and failure to publish, but even after accounting for these and other predictors, BSSR grants continued to show longer times to first publication and greater risk of no publications than non-BSSR grants. These findings indicate that even with liberal criteria for publication (any publication associated with a grant), a small percentage of R01 and U01 grantees fail to publish in a timely manner, and that a number of factors, including human research, clinical trial research, child research, not being an early stage investigator, and conducting behavioral and social sciences research, lengthen the time to first publication.
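
A minimal sketch of the kind of survival analysis this abstract describes, assuming an invented dataset and column names (none of them come from the study); it uses the lifelines package to compare publication curves by grant type and fit a Cox model of time to first publication.

```python
# Illustrative only: the dataset, column names, and effect sizes are invented.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 500
bssr = rng.integers(0, 2, n)                      # 1 = BSSR grant, 0 = non-BSSR
months = rng.exponential(15 + 7 * bssr)           # simulated months to first publication
event = (months <= 60).astype(int)                # 0 = still unpublished at 60 months (censored)
months = np.minimum(months, 60)
df = pd.DataFrame({"months_to_pub": months, "published": event, "bssr": bssr})

# Kaplan-Meier "survival" curves (time without a publication) by grant type
for label, sub in df.groupby("bssr"):
    kmf = KaplanMeierFitter()
    kmf.fit(sub["months_to_pub"], sub["published"], label=f"BSSR={label}")
    print(label, kmf.median_survival_time_)

# Cox proportional hazards regression of time to first publication
cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_pub", event_col="published")
cph.print_summary()
```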


Subject(s)
Behavioral Sciences/economics; Biomedical Research/economics; Financing, Organized; National Institutes of Health (U.S.)/economics; Publications/economics; Publications/statistics & numerical data; Social Sciences/economics; Behavioral Sciences/statistics & numerical data; Biomedical Research/statistics & numerical data; Social Sciences/statistics & numerical data; United States
2.
Psychometrika ; 85(1): 232-246, 2020 03.
Article in English | MEDLINE | ID: mdl-32232646

ABSTRACT

Effect size indices are useful tools in study design and reporting because they are unitless measures of association strength that do not depend on sample size. Existing effect size indices are developed for particular parametric models or population parameters. Here, we propose a robust effect size index based on M-estimators. This approach yields an index that is very generalizable because it is unitless across a wide range of models. We demonstrate that the new index is a function of Cohen's d, [Formula: see text], and standardized log odds ratio when each of the parametric models is correctly specified. We show that existing effect size estimators are biased when the parametric models are incorrect (e.g., under unknown heteroskedasticity). We provide simple formulas to compute power and sample size and use simulations to assess the bias and standard error of the effect size estimator in finite samples. Because the new index is invariant across models, it has the potential to make communication and comprehension of effect size uniform across the behavioral sciences.
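
For reference, a minimal sketch of one of the classical indices the proposed index is said to generalize, Cohen's d from two samples (the robust M-estimator-based index itself is not reproduced here).

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
print(cohens_d(rng.normal(0.5, 1, 200), rng.normal(0.0, 1, 200)))
```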


Subject(s)
Behavioral Sciences/statistics & numerical data; Psychometrics/statistics & numerical data; Size Perception/physiology; Algorithms; Communication; Comprehension/physiology; Computer Simulation; Data Interpretation, Statistical; Humans; Least-Squares Analysis; Likelihood Functions; Models, Statistical; Odds Ratio; Research Design; Sample Size; Software
3.
Psicothema ; 32(1): 115-121, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31954424

ABSTRACT

BACKGROUND: Analysis of interaction or moderation effects between latent variables is a common requirement in the social sciences. However, when predictors are correlated, interaction and quadratic effects become more alike, making them difficult to distinguish. As a result, when data are drawn from a quadratic population model and the analysis model specifies interactions only, misleading results may be obtained. METHOD: This article addresses the consequences of different types of specification error in nonlinear structural equation models using a Monte Carlo study. RESULTS: Results show that fitting a model with interactions when quadratic effects are present in the population will almost certainly lead to erroneous detection of moderation effects, and that the same is true in the opposite scenario. Simultaneous estimation of interactions and quadratic effects yields correct results. CONCLUSIONS: Simultaneous estimation of interaction and quadratic effects prevents detection of spurious or misleading nonlinear effects. Results are discussed and recommendations are offered to applied researchers.
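
A much-simplified, observed-variable analogue of the misspecification the study examines (the article itself concerns latent variables in nonlinear SEM): data are generated from a quadratic population model with correlated predictors and no interaction, and an interaction-only model tends to report a spurious moderation effect, while the model including both terms does not.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
# Correlated predictors
x1, x2 = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
# Quadratic population model: no interaction effect in the truth
y = 0.3 * x1 + 0.3 * x2 + 0.4 * x1**2 + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

# Misspecified analysis model: interaction only -> spurious x1:x2 estimate
print(smf.ols("y ~ x1 + x2 + x1:x2", df).fit().params)
# Simultaneous estimation of quadratic and interaction terms
print(smf.ols("y ~ x1 + x2 + I(x1**2) + I(x2**2) + x1:x2", df).fit().params)
```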


Asunto(s)
Método de Montecarlo , Dinámicas no Lineales , Ciencias de la Conducta/estadística & datos numéricos , Interpretación Estadística de Datos , Ciencias Sociales/estadística & datos numéricos
4.
Ann Behav Med ; 54(12): 942-947, 2020 12 01.
Article in English | MEDLINE | ID: mdl-33416835

ABSTRACT

BACKGROUND: Artificial Intelligence (AI) is transforming the process of scientific research. AI, coupled with the availability of large datasets and increasing computational power, is accelerating progress in areas such as genetics, climate change and astronomy [NeurIPS 2019 Workshop Tackling Climate Change with Machine Learning, Vancouver, Canada; Hausen R, Robertson BE. Morpheus: A deep learning framework for the pixel-level analysis of astronomical image data. Astrophys J Suppl Ser. 2020;248:20; Dias R, Torkamani A. AI in clinical and genomic diagnostics. Genome Med. 2019;11:70.]. The application of AI in behavioral science is still in its infancy, and realizing the promise of AI requires adapting current practices. PURPOSE: By using AI to synthesize and interpret behavior change intervention evaluation report findings at a scale beyond human capability, the Human Behaviour-Change Project (HBCP) seeks to improve the efficiency and effectiveness of research activities. We explore challenges facing AI adoption in behavioral science through the lens of lessons learned during the HBCP. METHODS: The project used an iterative cycle of development and testing of AI algorithms. Using a corpus of published research reports of randomized controlled trials of behavioral interventions, behavioral science experts annotated occurrences of interventions and outcomes. AI algorithms were trained to recognize natural language patterns associated with interventions and outcomes from the expert human annotations. Once trained, the AI algorithms were used to predict outcomes for interventions, and the predictions were checked by behavioral scientists. RESULTS: Intervention reports contain many items of information that need to be extracted, and the hugely variable and idiosyncratic language used in research reports to convey this information makes it impractical to develop algorithms that extract all of the information with near-perfect accuracy. However, statistical matching algorithms combined with advanced machine learning approaches created reasonably accurate outcome predictions from incomplete data. CONCLUSIONS: AI holds promise for achieving the goal of predicting outcomes of behavior change interventions based on information that is automatically extracted from intervention evaluation reports. This information can be used to train knowledge systems using machine learning and reasoning algorithms.
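
A toy sketch of the general idea of training a classifier on expert-annotated text, loosely in the spirit of the pipeline described above; the sentences, labels, and model choice are invented, and the HBCP's actual algorithms are not reproduced.

```python
# Toy illustration only; the example sentences and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "Participants received weekly goal-setting sessions",     # annotated as intervention
    "Counselling included feedback on smoking behaviour",     # annotated as intervention
    "At 6 months, 23% of participants reported abstinence",   # annotated as outcome
    "Mean daily step count increased by 1200 at follow-up",   # annotated as outcome
]
labels = ["intervention", "intervention", "outcome", "outcome"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(sentences, labels)
print(clf.predict(["Quit rates at 12 months were 18%"]))
```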


Subject(s)
Artificial Intelligence; Behavior Therapy; Behavioral Sciences; Health Behavior; Outcome and Process Assessment, Health Care; Behavior Therapy/methods; Behavior Therapy/statistics & numerical data; Behavioral Sciences/methods; Behavioral Sciences/statistics & numerical data; Humans; Outcome and Process Assessment, Health Care/methods; Outcome and Process Assessment, Health Care/statistics & numerical data
5.
Multivariate Behav Res ; 55(4): 600-624, 2020.
Article in English | MEDLINE | ID: mdl-31505988

ABSTRACT

Multilevel SEM is an increasingly popular technique to analyze data that are both hierarchical and contain latent variables. The parameters are usually jointly estimated using a maximum likelihood estimator (MLE). This has the disadvantage that a large sample size is needed and misspecifications in one part of the model may influence the whole model. We propose an alternative stepwise estimation method, which is an extension of the Croon method for factor score regression. In this article, we extend this method to the multilevel setting. A simulation study was used to compare this new estimation method to the standard MLE. The Croon method outperformed MLE with regard to convergence rate, bias, MSE, and coverage, in particular when models contained a structural misspecification. In conclusion, the Croon method seems to be a promising alternative to MLE.
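
A deliberately simplified, single-level, single-predictor analogue of the bias-correction idea behind stepwise factor score regression: the naive regression on error-contaminated scores is attenuated, and dividing by the score reliability recovers the structural slope. Croon's actual correction, and the multilevel extension proposed in the article, are more general than this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_slope, reliability = 5000, 0.5, 0.7
xi = rng.normal(0, 1, n)                          # latent predictor
eta = true_slope * xi + rng.normal(0, 1, n)       # latent outcome
# Fallible factor scores for the predictor, constructed to have the chosen reliability
fx = xi + rng.normal(0, np.sqrt((1 - reliability) / reliability), n)

naive = np.cov(fx, eta)[0, 1] / np.var(fx, ddof=1)   # attenuated slope estimate
corrected = naive / reliability                       # disattenuated (corrected) estimate
print(naive, corrected, true_slope)
```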


Subject(s)
Behavioral Sciences/statistics & numerical data; Multilevel Analysis/methods; Statistics as Topic/methods; Analysis of Variance; Bias; Computer Simulation; Data Interpretation, Statistical; Humans; Likelihood Functions; Models, Statistical; Research Design; Sample Size; Statistics as Topic/trends
6.
Multivariate Behav Res ; 55(4): 625-646, 2020.
Article in English | MEDLINE | ID: mdl-31530179

ABSTRACT

Propensity score (PS) methods are implemented by researchers to balance the differences between participants in control and treatment groups that exist in observational studies using a set of baseline covariates. Propensity scores are most commonly calculated using baseline covariates in a logistic regression model to predict the binary grouping variable (control versus treatment). Low reliability associated with the covariates can adversely impact the calculation of treatment effects in propensity score models. The incorporation of latent variables when calculating propensity scores has been suggested to offset the negative impact of covariate unreliability. Simulation studies were conducted to compare the performance of latent variable methods with traditional propensity score methods when estimating the treatment effect under conditions of covariate unreliability. The results indicated that using factor scores or composite variables to compute propensity scores resulted in biased estimates and inflated Type I error rates as compared to using latent factors to compute propensity scores in certain conditions. This was largely dependent upon the number of infallible covariates also included in the PS model and the outcome analysis model analyzed. Implications of the findings are discussed.
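
A minimal sketch of the baseline approach the abstract describes, logistic-regression propensity scores computed from (possibly unreliable) observed covariates followed by inverse-probability weighting; the latent-variable alternatives studied in the article are not implemented here, and the data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 3000
true_cov = rng.normal(0, 1, n)                        # error-free confounder
observed_cov = true_cov + rng.normal(0, 0.8, n)       # unreliable measurement of it
treat = rng.binomial(1, 1 / (1 + np.exp(-true_cov)))  # selection depends on the true covariate
y = 0.5 * treat + true_cov + rng.normal(0, 1, n)

# Propensity scores from a logistic regression on the observed (fallible) covariate
X = sm.add_constant(observed_cov)
ps = sm.Logit(treat, X).fit(disp=0).predict(X)

# Inverse-probability-weighted estimate of the treatment effect
w = treat / ps + (1 - treat) / (1 - ps)
ate = (np.average(y[treat == 1], weights=w[treat == 1])
       - np.average(y[treat == 0], weights=w[treat == 0]))
print(ate)   # typically remains biased here because the covariate is measured with error
```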


Subject(s)
Behavioral Sciences/statistics & numerical data; Computer Simulation/statistics & numerical data; Latent Class Analysis; Propensity Score; Behavioral Sciences/trends; Bias; Factor Analysis, Statistical; Female; Humans; Logistic Models; Male; Models, Statistical; Models, Theoretical; Monte Carlo Method; Observational Studies as Topic; Randomized Controlled Trials as Topic; Reproducibility of Results
7.
Stat Methods Med Res ; 27(11): 3460-3477, 2018 11.
Article in English | MEDLINE | ID: mdl-28480829

ABSTRACT

Agreement is an important concept in medical and behavioral sciences, in particular in clinical decision making, where disagreements may imply different patient management. The concordance correlation coefficient is an appropriate measure to quantify agreement between two scorers on a quantitative scale. However, this measure is based on the first two moments, which may poorly summarize the shape of the score distribution on bounded scales. Bounded outcome scores are common in medical and behavioral sciences. Typical examples are scores obtained on visual analog scales and scores derived as the number of positive items on a questionnaire. These kinds of scores often show a non-standard distribution, like a J- or U-shape, questioning the usefulness of the concordance correlation coefficient as an agreement measure. The logit-normal distribution has been shown to be successful in modeling bounded outcome scores of two types: (1) when the bounded score is a coarsened version of a latent score with a logit-normal distribution on the [0,1] interval, and (2) when the bounded score is a proportion with the true probability having a logit-normal distribution. In the present work, a model-based approach, based on a bivariate generalization of the logit-normal distribution, is developed in a Bayesian framework to assess agreement on bounded scales. This method makes it possible to directly study the impact of predictors on the concordance correlation coefficient and can be easily implemented in standard Bayesian software, like JAGS and WinBUGS. The performance of the new method is compared to the classical approach using simulations. Finally, the methodology is applied in two different medical domains: cardiology and rheumatology.
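
For reference, a minimal sketch of the classical moment-based concordance correlation coefficient that the article's model-based Bayesian approach is designed to improve upon for bounded scores; the rater data are simulated.

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient (moment-based version)."""
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x), np.var(y)
    sxy = np.mean((x - mx) * (y - my))
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(5)
rater1 = rng.beta(0.4, 0.4, 100)                         # U-shaped bounded scores on [0, 1]
rater2 = np.clip(rater1 + rng.normal(0, 0.1, 100), 0, 1)  # second scorer, similar but noisy
print(concordance_cc(rater1, rater2))
```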


Subject(s)
Models, Statistical; Reproducibility of Results; Bayes Theorem; Behavioral Sciences/statistics & numerical data; Biomedical Research/statistics & numerical data
8.
PLoS One ; 11(5): e0155732, 2016.
Article in English | MEDLINE | ID: mdl-27195701

ABSTRACT

Assessing the research of individual scholars is currently a matter of serious concern and worldwide debate. In order to gauge the long-term efficacy and efficiency of this practice, we carried out a limited survey of the operation and outcome of Mexico's 30-year-old National System of Investigators (SNI), the country's main instrument for stimulating competitive research in science and technology. A statistical random sample of researchers listed in the area of Humanities and Behavioral Sciences (one of SNI's first and better consolidated academic divisions, comprising a wide range of research disciplines, from philosophy to pedagogy to archaeology to experimental brain research) was screened by comparing individual ranks or "Levels of distinction" to actual compliance with the SNI's own evaluation criteria, as reflected in major public databases of scholarly production. The same analysis was applied to members of a recent Review Committee, composed of top-level researchers belonging to that general area of knowledge, who have been in charge of assessing and ranking their colleagues. Our results for both sets of scholars show wide disparity of individual productivity within the same SNI Level, according to all key indicators officially required (books issued by prestigious publishers, research articles published in indexed journals, and formation of new scientists), as well as in impact estimated by numbers of citations. Statistical calculation from the data indicates that 36% of members of the Review Committee and 53% of researchers in the random sample do not satisfy the official criteria for their appointed SNI Levels. The findings are discussed in terms of possible methodological errors in our study, of relevance for the SNI at large in relation to independent appraisals, of the cost-benefit balance of the organization as a research policy tool, and of possible alternatives for its thorough restructuring. As it currently stands, SNI is not a model for efficient and effectual national systems of research assessment.


Subject(s)
Behavioral Sciences/statistics & numerical data; Humanities/statistics & numerical data; Publishing/statistics & numerical data; Research; Advisory Committees; Bibliometrics; Databases, Factual; Humans; Knowledge; Mexico; Monte Carlo Method; Reproducibility of Results; Research Personnel/statistics & numerical data
9.
Br J Math Stat Psychol ; 68(2): 342-62, 2015 May.
Article in English | MEDLINE | ID: mdl-25773173

ABSTRACT

This article proposes an approach to modelling partially cross-classified multilevel data, where some of the level-1 observations are nested in one random factor and some are cross-classified by two random factors. The proposed approach is compared in a simulation study to two other commonly used approaches, which treat the partially cross-classified data as either fully nested or fully cross-classified. Results show that the proposed approach demonstrates desirable performance in terms of parameter estimates and statistical inferences. Both the fully nested model and the fully cross-classified model suffer from biased estimates of some variance components and statistical inferences of some fixed effects. Results also indicate that the proposed model is robust against cluster size imbalance.
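
A sketch of what "partially cross-classified" data look like, using hypothetical school/neighborhood factors: some level-1 observations are nested in a school only, while others are cross-classified by school and neighborhood. The article's proposed estimator is not reproduced; this only generates the data structure being modelled.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n_school, n_nbhd = 20, 15
school_eff = rng.normal(0, 0.5, n_school)
nbhd_eff = rng.normal(0, 0.3, n_nbhd)

rows = []
for i in range(400):
    s = rng.integers(n_school)
    if i % 2 == 0:
        # Nested part: these observations belong to a school only (no neighborhood factor)
        rows.append({"school": s, "nbhd": -1, "y": school_eff[s] + rng.normal()})
    else:
        # Cross-classified part: these observations belong to a school and a neighborhood
        nb = rng.integers(n_nbhd)
        rows.append({"school": s, "nbhd": nb, "y": school_eff[s] + nbhd_eff[nb] + rng.normal()})

df = pd.DataFrame(rows)
print(df.head())
# Treating all rows as fully nested or as fully cross-classified (the two common
# approximations compared in the article) ignores this mixed structure.
```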


Subject(s)
Behavioral Sciences/statistics & numerical data; Computer Simulation; Data Interpretation, Statistical; Models, Statistical; Multilevel Analysis; Social Sciences/statistics & numerical data; Achievement; Child; Curriculum/statistics & numerical data; Educational Measurement/statistics & numerical data; Humans; Monte Carlo Method; Recreation; Students/statistics & numerical data
10.
J Dent Educ ; 78(4): 638-47, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24843898

ABSTRACT

The annual turnover of dental school faculty creates a varying number of vacant budgeted positions from year to year. The American Dental Education Association (ADEA) conducts an annual survey to determine the status and characteristics of these vacant faculty positions. The number of vacant budgeted faculty positions in U.S. dental schools increased throughout the 1990s, with a peak of 417 positions in 2005-06. Since that time, there has been a decrease in the number of estimated vacancies, falling to 227 in 2010-11. The 2008-09 to 2010-11 faculty vacancy surveys explored these decreases, along with information relevant to the number and characteristics of dental faculty vacancies, including data on the distribution of full-time, part-time, and volunteer faculty, reasons for faculty separations, and sources of new faculty.


Subject(s)
Budgets; Faculty, Dental/statistics & numerical data; Schools, Dental/economics; Administrative Personnel/statistics & numerical data; Behavioral Sciences/statistics & numerical data; Dental Research/statistics & numerical data; Dentistry, Operative/statistics & numerical data; Employment/statistics & numerical data; General Practice, Dental/statistics & numerical data; Humans; Orthodontics/statistics & numerical data; Personnel Turnover/statistics & numerical data; Private Practice/statistics & numerical data; Prosthodontics/statistics & numerical data; Retirement/statistics & numerical data; Salaries and Fringe Benefits/statistics & numerical data; Schools, Dental/organization & administration; Science/statistics & numerical data; Time Factors; United States; Volunteers/statistics & numerical data
11.
Addiction ; 108(9): 1532-3, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23718564

ABSTRACT

Advancement in science requires clarity of constructs. Like other fields in behavioral science, addiction research is being held back by researchers' use of different terms to mean similar things (synonymy) and the same term to mean different things (polysemy). Journals can help researchers to stay focused on novel and significant research questions by challenging new terms introduced without adequate justification and requiring authors to be parsimonious in their use of terms. To support construct lucidity, new modes of thinking about research integration are needed to keep up with the aggregate of relevant research.


Subject(s)
Behavioral Sciences/statistics & numerical data; Biomedical Research/statistics & numerical data; Substance-Related Disorders; Humans
12.
Psychol Methods ; 18(2): 220-36, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23458720

ABSTRACT

When two people interact in a relationship, the outcome of each person can be affected by both his or her own inputs and his or her partner's inputs. For Gaussian dyadic outcomes, linear mixed models taking into account the correlation within dyads are frequently used to estimate actor and partner effects based on the actor-partner interdependence model. In this article, we explore the potential of generalized linear mixed models (GLMMs) for the analysis of non-Gaussian dyadic outcomes. Several approximation techniques that are available in standard software packages for these GLMMs are investigated. Despite the different modeling options related to these techniques, none of them has overall satisfactory performance in estimating actor and partner effects and the within-dyad correlation, especially when the latter is negative and/or the number of dyads is small. An approach based on generalized estimating equations for the analysis of non-Gaussian dyadic data turns out to be an interesting alternative.
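
A minimal sketch of the generalized estimating equations alternative the abstract points to, assuming an invented dyadic dataset with a binary outcome, actor and partner predictors, and a dyad identifier (all variable names and effect sizes are illustrative).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_dyads = 300
x = rng.normal(0, 1, (n_dyads, 2))          # one predictor value per dyad member
dyad_eff = rng.normal(0, 0.5, n_dyads)      # shared dyad effect inducing within-dyad correlation

rows = []
for d in range(n_dyads):
    for member in (0, 1):
        actor, partner = x[d, member], x[d, 1 - member]
        eta = 0.8 * actor - 0.4 * partner + dyad_eff[d]
        rows.append({"dyad": d, "y": rng.binomial(1, 1 / (1 + np.exp(-eta))),
                     "actor_x": actor, "partner_x": partner})
df = pd.DataFrame(rows)

# Actor-partner model for a binary outcome via GEE with exchangeable within-dyad correlation
model = smf.gee("y ~ actor_x + partner_x", groups="dyad", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```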


Subject(s)
Behavioral Sciences/statistics & numerical data; Interpersonal Relations; Models, Statistical; Spouses/statistics & numerical data; Anxiety/psychology; Female; Humans; Linear Models; Male; Multilevel Analysis/methods; Negotiating/psychology; Spouses/psychology; Statistical Distributions
13.
Psychol Methods ; 18(2): 186-219, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23527607

ABSTRACT

Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation-maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories.
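
A crude, frequentist two-stage illustration of why latent class separation matters for growth mixtures (the article's Bayesian GMM estimation conditions are not reproduced): per-person least-squares growth estimates from two simulated classes are fed to a Gaussian mixture, and class recovery degrades as the trajectories become more similar.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
times = np.arange(5)

def class_recovery(slope_gap):
    n_per_class = 150
    true_class = np.repeat([0, 1], n_per_class)
    est = []
    for c in true_class:
        intercept = rng.normal(2.0 + c * 1.0, 0.5)
        slope = rng.normal(0.5 + c * slope_gap, 0.2)
        y = intercept + slope * times + rng.normal(0, 0.5, times.size)
        est.append(np.polyfit(times, y, 1))          # per-person (slope, intercept) estimates
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(np.array(est))
    # Agreement with the true classes, allowing for arbitrary label switching
    return max(np.mean(labels == true_class), np.mean(labels != true_class))

print("well separated:", class_recovery(1.5))
print("poorly separated:", class_recovery(0.2))
```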


Subject(s)
Bayes Theorem; Behavioral Sciences/statistics & numerical data; Models, Statistical; Bias; Data Interpretation, Statistical; Humans; Markov Chains; Monte Carlo Method; Sample Size; Statistical Distributions; Time Factors
14.
Behav Res Methods ; 44(2): 532-45, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22083659

ABSTRACT

In many areas of the behavioral sciences, different groups of objects are measured on the same set of binary variables, resulting in coupled binary object × variable data blocks. Take, as an example, success/failure scores for different samples of testees, with each sample belonging to a different country, regarding a set of test items. When dealing with such data, a key challenge consists of uncovering the differences and similarities between the structural mechanisms that underlie the different blocks. To tackle this challenge for the case of a single data block, one may rely on HICLAS, in which the variables are reduced to a limited set of binary bundles that represent the underlying structural mechanisms, and the objects are given scores for these bundles. In the case of multiple binary data blocks, one may perform HICLAS on each data block separately. However, such an analysis strategy obscures the similarities and, in the case of many data blocks, also the differences between the blocks. To resolve this problem, we proposed the new Clusterwise HICLAS generic modeling strategy. In this strategy, the different data blocks are assumed to form a set of mutually exclusive clusters. For each cluster, different bundles are derived. As such, blocks belonging to the same cluster have the same bundles, whereas blocks of different clusters are modeled with different bundles. Furthermore, we evaluated the performance of Clusterwise HICLAS by means of an extensive simulation study and by applying the strategy to coupled binary data regarding emotion differentiation and regulation.
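
The reconstruction rule in hierarchical classes (HICLAS) style models can be written as a Boolean product of object-by-bundle and bundle-by-variable binary memberships; a minimal sketch with invented values (the clusterwise extension proposed in the article is not implemented here).

```python
import numpy as np

# Object x bundle and bundle x variable binary memberships (illustrative values)
objects_by_bundles = np.array([[1, 0],
                               [0, 1],
                               [1, 1]])
bundles_by_vars = np.array([[1, 1, 0, 0],
                            [0, 0, 1, 1]])

# Boolean product: an object scores 1 on a variable if any of its bundles carries that variable
reconstructed = (objects_by_bundles @ bundles_by_vars > 0).astype(int)
print(reconstructed)
```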


Subject(s)
Behavioral Sciences/methods; Cluster Analysis; Data Interpretation, Statistical; Algorithms; Behavioral Sciences/statistics & numerical data; Computer Simulation; Emotions/physiology; Factor Analysis, Statistical; Humans; Models, Psychological; Models, Statistical; Research Design
15.
Br J Math Stat Psychol ; 64(Pt 2): 193-207, 2011 May.
Article in English | MEDLINE | ID: mdl-21492128

ABSTRACT

A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples.
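
A point estimate of the quantity such an interval is built for, the proportion of total scale-score variance accounted for by a general factor, can be sketched from a bifactor-style loading structure; the loadings below are hypothetical and the article's interval construction procedure is not reproduced.

```python
import numpy as np

# Hypothetical standardized loadings: 6 items, one general factor,
# items 1-3 on group factor 1 and items 4-6 on group factor 2 (orthogonal factors)
general = np.array([0.6, 0.6, 0.5, 0.5, 0.4, 0.4])
group = np.array([0.3, 0.3, 0.4, 0.4, 0.5, 0.5])
uniqueness = 1 - general**2 - group**2

# Variance of the unit-weighted total score and the share due to the general factor
total_var = general.sum()**2 + group[:3].sum()**2 + group[3:].sum()**2 + uniqueness.sum()
print(general.sum()**2 / total_var)
```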


Subject(s)
Behavioral Sciences/statistics & numerical data; Confidence Intervals; Models, Statistical; Principal Component Analysis; Psychological Tests/statistics & numerical data; Psychology/statistics & numerical data; Adolescent; Analysis of Variance; Anxiety Disorders/diagnosis; Anxiety Disorders/psychology; Depressive Disorder/diagnosis; Depressive Disorder/psychology; Factor Analysis, Statistical; Female; Humans; Longitudinal Studies; Male; Neurotic Disorders/diagnosis; Neurotic Disorders/psychology; Psychometrics/statistics & numerical data; Reproducibility of Results; Risk Factors
16.
Behav Res Methods ; 43(1): 8-17, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21298573

ABSTRACT

Many statistics packages print skewness and kurtosis statistics with estimates of their standard errors. The function most often used for the standard errors (e.g., in SPSS) assumes that the data are drawn from a normal distribution, an unlikely situation. Some textbooks suggest that if the statistic is more than about 2 standard errors from the hypothesized value (i.e., an approximate value for the critical value from the t distribution for moderate or large sample sizes when α = 5%), the hypothesized value can be rejected. This is an inappropriate practice unless the standard error estimate is accurate and the sampling distribution is approximately normal. We show distributions where the traditional standard errors provided by the function underestimate the actual values, often being 5 times too small, and distributions where the function overestimates the true values. Bootstrap standard errors and confidence intervals are more accurate than the traditional approach, although still imperfect. The reasons for this are discussed. We recommend that if you are using skewness and kurtosis statistics based on the 3rd and 4th moments, bootstrapping should be used to calculate standard errors and confidence intervals, rather than the traditional standard errors. Software written in the freeware R for this article provides these estimates.
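
A minimal sketch of the recommended bootstrap approach (shown here in Python rather than the R code the article supplies): resample the data with replacement, recompute skewness and kurtosis, and use the spread of the bootstrap replicates as the standard error and percentile interval.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(9)
x = rng.exponential(size=200)                 # clearly non-normal data
n_boot = 2000

boot_skew = np.array([skew(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)])
print("bootstrap SE of skewness:", boot_skew.std(ddof=1))
print("95% percentile CI:", np.percentile(boot_skew, [2.5, 97.5]))

boot_kurt = np.array([kurtosis(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)])
print("bootstrap SE of (excess) kurtosis:", boot_kurt.std(ddof=1))
```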


Subject(s)
Confidence Intervals; Data Interpretation, Statistical; Algorithms; Behavioral Sciences/statistics & numerical data; Humans; Reference Standards; Reproducibility of Results
17.
Behav Res Methods ; 43(1): 1-7, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21287104

ABSTRACT

We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
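
A rough Python analogue of the three steps listed above, using the drop in held-out R² when each predictor is removed as a simple importance measure; this is a sketch under assumed data and thresholds, not the authors' SPSS program, which is available in their supplemental materials.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
n, p = 300, 5
X = rng.normal(size=(n, p))
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # only the first two predictors matter

# 1) Split the data for cross-validation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# 2) Relative importance of each predictor: drop in validation R^2 when it is left out
full_r2 = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
importance = []
for j in range(p):
    fit = LinearRegression().fit(np.delete(X_tr, j, axis=1), y_tr)
    importance.append(full_r2 - fit.score(np.delete(X_te, j, axis=1), y_te))
keep = [j for j in range(p) if importance[j] > 0.01]

# 3) Assess the chosen model on the held-out half
print(keep, LinearRegression().fit(X_tr[:, keep], y_tr).score(X_te[:, keep], y_te))
```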


Subject(s)
Behavioral Sciences/statistics & numerical data; Forecasting/methods; Linear Models; Software; Behavior; Humans; Personality Tests; Reference Standards; Young Adult
18.
Behav Res Methods ; 43(1): 18-36, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21287107

ABSTRACT

This study examined the performance of selection criteria available in the major statistical packages for both the mean model and the covariance structure. Unbalanced designs due to missing data, involving both a moderate and a large number of repeated measurements and varying total sample sizes, were investigated. The study also investigated the impact of using different estimation strategies for information criteria, the impact of different adjustments for calculating the criteria, and the impact of different distribution shapes. Overall, we found that the ability of the consistent criteria, in any of their examined forms, to select the correct model was superior under simple covariance patterns than under complex covariance patterns, and vice versa for the efficient criteria. The simulation studies covered in this paper also revealed that, regardless of the method of estimation used, the consistent criteria based on the number of subjects were more effective than the consistent criteria based on the total number of observations, and vice versa for the efficient criteria. Furthermore, results indicated that, given a dataset with missing values, the efficient criteria were more affected than the consistent criteria by the lack of normality.
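
The distinction drawn above, efficient criteria (AIC-type) versus consistent criteria (BIC-type) computed with either the number of subjects or the total number of observations, comes down to a small change in the penalty term; a minimal sketch under assumed values for a longitudinal fit.

```python
import numpy as np

def information_criteria(loglik, k, n_subjects, n_obs):
    """AIC plus two BIC variants differing in the sample size used in the penalty."""
    return {
        "AIC": -2 * loglik + 2 * k,                              # efficient criterion
        "BIC_subjects": -2 * loglik + k * np.log(n_subjects),    # consistent, N = subjects
        "BIC_observations": -2 * loglik + k * np.log(n_obs),     # consistent, N = observations
    }

# Hypothetical fit: log-likelihood, number of parameters, 100 subjects x 8 repeated measures
print(information_criteria(loglik=-1234.5, k=12, n_subjects=100, n_obs=800))
```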


Subject(s)
Behavioral Sciences/statistics & numerical data; Models, Statistical; Algorithms; Analysis of Variance; Bayes Theorem; Computer Simulation; Data Interpretation, Statistical; Humans; Likelihood Functions; Longitudinal Studies/statistics & numerical data; Reproducibility of Results; Research Design; Sample Size
19.
Behav Res Methods ; 43(1): 56-65, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21287114

ABSTRACT

In many areas of psychology, one is interested in disclosing the underlying structural mechanisms that generated an object by variable data set. Often, based on theoretical or empirical arguments, it may be expected that these underlying mechanisms imply that the objects are grouped into clusters that are allowed to overlap (i.e., an object may belong to more than one cluster). In such cases, analyzing the data with Mirkin's additive profile clustering model may be appropriate. In this model: (1) each object may belong to no, one or several clusters, (2) there is a specific variable profile associated with each cluster, and (3) the scores of the objects on the variables can be reconstructed by adding the cluster-specific variable profiles of the clusters the object in question belongs to. Until now, however, no software program has been publicly available to perform an additive profile clustering analysis. For this purpose, in this article, the ADPROCLUS program, steered by a graphical user interface, is presented. We further illustrate its use by means of the analysis of a patient by symptom data matrix.
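
The additive reconstruction rule of Mirkin's model that ADPROCLUS fits can be written as an ordinary matrix product of an overlapping binary membership matrix and a matrix of cluster-specific variable profiles; a minimal sketch with invented values (the program's estimation algorithm is not reproduced).

```python
import numpy as np

# Overlapping binary memberships: object 3 belongs to both clusters, object 4 to none
memberships = np.array([[1, 0],
                        [0, 1],
                        [1, 1],
                        [0, 0]])
# One variable profile per cluster
profiles = np.array([[2.0, 0.0, 1.0],
                     [0.0, 3.0, 1.0]])

# Scores are reconstructed by adding the profiles of the clusters an object belongs to
reconstructed = memberships @ profiles
print(reconstructed)
```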


Subject(s)
Behavioral Sciences/statistics & numerical data; Cluster Analysis; Models, Statistical; Software; User-Computer Interface; Algorithms; Data Interpretation, Statistical; Electronic Data Processing; Humans; Internet
20.
Behav Res Methods ; 43(1): 37-55, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21287127

ABSTRACT

A new method, with an application program in Matlab code, is proposed for testing item performance models on empirical databases. This method uses data intraclass correlation statistics as expected correlations to which one compares simple functions of correlations between model predictions and observed item performance. The method rests on a data population model whose validity for the considered data is suitably tested and has been verified for three behavioural measure databases. Contrary to usual model selection criteria, this method provides an effective way of testing under-fitting and over-fitting, answering the usually neglected question "does this model suitably account for these data?"


Subject(s)
Models, Statistical; Neuropsychological Tests/statistics & numerical data; Neuropsychological Tests/standards; Algorithms; Analysis of Variance; Behavioral Sciences/statistics & numerical data; Cognitive Science/statistics & numerical data; Computer Simulation; Data Interpretation, Statistical; Female; Humans; Male; Population; Psychomotor Performance; Reaction Time/physiology; Regression Analysis; Reproducibility of Results; Sampling Studies; Young Adult