Results 1 - 10 of 10
1.
Behav Res Methods ; 56(3): 1852-1862, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37326772

ABSTRACT

A popular approach to the simulation of multivariate, non-normal data in the social sciences is to define a multivariate normal distribution first and then alter its lower-dimensional marginals to achieve the distributional shape intended by the researchers. A consequence of this process is that the correlation structure is altered, so further methods are needed to specify an intermediate correlation matrix at the multivariate normal step. Most of the techniques available in the literature estimate this intermediate correlation matrix bivariately (i.e., correlation by correlation), risking the generation of a non-positive definite matrix. The present article addresses this issue by offering an algorithm that estimates all elements of the intermediate correlation matrix simultaneously, through stochastic approximation. A small simulation study demonstrates that the present method can induce the intended correlation structure in both simulated and empirical data.
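The idea of estimating the whole intermediate matrix at once can be pictured with a short stochastic-approximation loop. The sketch below illustrates only the general idea, not the article's algorithm; the exponential (lognormal) transform, the 1/t step size, the projection step, and all settings are assumptions made here for illustration.

```python
import numpy as np

def nearest_pd_corr(R, eps=1e-8):
    """Project a symmetric matrix onto the positive-definite correlation matrices."""
    vals, vecs = np.linalg.eigh((R + R.T) / 2)
    R_pd = vecs @ np.diag(np.clip(vals, eps, None)) @ vecs.T
    d = np.sqrt(np.diag(R_pd))
    R_pd = R_pd / np.outer(d, d)          # rescale to unit diagonal
    np.fill_diagonal(R_pd, 1.0)
    return R_pd

def intermediate_corr(R_target, transform, n=5000, iters=200, seed=1):
    """Stochastic-approximation search for an intermediate normal correlation matrix."""
    rng = np.random.default_rng(seed)
    p = R_target.shape[0]
    R = R_target.copy()                                   # start from the target itself
    for t in range(1, iters + 1):
        Z = rng.multivariate_normal(np.zeros(p), R, size=n)
        X = transform(Z)                                  # impose the intended non-normal marginals
        R_achieved = np.corrcoef(X, rowvar=False)
        R = nearest_pd_corr(R + (R_target - R_achieved) / t)   # Robbins-Monro style update
    return R

R_target = np.array([[1.0, 0.5, 0.3],
                     [0.5, 1.0, 0.4],
                     [0.3, 0.4, 1.0]])
R_intermediate = intermediate_corr(R_target, transform=np.exp)  # lognormal marginals
print(np.round(R_intermediate, 3))
```

Because the whole matrix is updated and then projected back to a positive-definite correlation matrix at every step, the non-positive definiteness risk of correlation-by-correlation fitting does not arise in this sketch.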


Subjects
Algorithms, Humans, Monte Carlo Method, Computer Simulation
2.
Planta ; 245(2): 297-311, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27730411

ABSTRACT

MAIN CONCLUSION: Wax coverage on developing Arabidopsis leaf epidermis cells is constant and thus synchronized with cell expansion. Wax composition shifts from fatty acid to alkane dominance, mediated by CER6 expression. Epidermal cells bear a wax-sealed cuticle to hinder transpirational water loss. The amount and composition of the cuticular wax mixture may change as organs develop, to optimize the cuticle for specific functions during growth. Here, morphometrics, wax chemical profiling, and gene expression measurements were integrated to study developing Arabidopsis thaliana leaves and, thus, further our understanding of cuticular wax ontogeny. Before 5 days of age, cells at the leaf tip ceased dividing and began to expand, while cells at the leaf base switched from cycling to expansion at day 13, generating a cell age gradient along the leaf. We used this spatial age distribution together with leaves of different ages to determine that, as leaves developed, their wax compositions shifted from C24/C26 to C30/C32 and from fatty acid to alkane constituents. These compositional changes paralleled an increase in the expression of the elongase enzyme CER6 but not of alkane pathway enzymes, suggesting that CER6 transcriptional regulation is responsible for both chemical shifts. Leaves bore constant numbers of trichomes between 5 and 21 days of age and, thus, trichome density was higher on young leaves. During this time span, leaves of the trichome-less gl1 mutant had constant wax coverage, while wild-type leaf coverage was initially high and then decreased, suggesting that high trichome density leads to greater apparent coverage on young leaves. Conversely, wax coverage on pavement cells remained constant over time, indicating that wax accumulation is synchronized with cell expansion throughout leaf development.


Subjects
Arabidopsis/genetics, Gene Expression Regulation, Plant, Plant Leaves/growth & development, Trichomes/physiology, Waxes/chemistry, Arabidopsis/chemistry, Arabidopsis Proteins/genetics, Gas Chromatography-Mass Spectrometry/methods, Mutation, Plant Epidermis/genetics, Plant Leaves/chemistry, Plant Leaves/metabolism, Trichomes/metabolism
3.
PLoS One ; 18(6): e0286680, 2023.
Article in English | MEDLINE | ID: mdl-37384620

ABSTRACT

In this paper, we generalize the notion of measurement error on deterministic sample datasets to accommodate sample data that are random-variable-valued. This leads to the formulation of two distinct kinds of measurement error: intrinsic measurement error, and incidental measurement error. Incidental measurement error will be recognized as the traditional kind that arises from a set of deterministic sample measurements, and upon which the traditional measurement error modelling literature is based, while intrinsic measurement error reflects some subjective quality of either the measurement tool or the measurand itself. We define calibrating conditions that generalize common and classical types of measurement error models to this broader measurement domain, and explain how the notion of generalized Berkson error in particular mathematicizes what it means to be an expert assessor or rater for a measurement process. We then explore how classical point estimation, inference, and likelihood theory can be generalized to accommodate sample data composed of generic random-variable-valued measurements.
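For orientation, the two classical deterministic error models that the calibrating conditions generalize can be stated as follows (a standard formulation, with notation assumed here: X the observed measurement, T the true value, U the error term):

```latex
\begin{align*}
  \text{Classical error:} \quad & X = T + U, \quad U \perp T
    \;\Rightarrow\; \operatorname{Var}(X) \ge \operatorname{Var}(T),\\[4pt]
  \text{Berkson error:}   \quad & T = X + U, \quad U \perp X
    \;\Rightarrow\; \operatorname{Var}(T) \ge \operatorname{Var}(X).
\end{align*}
```

The calibrating conditions described in the abstract extend relations of this kind to sample data that are random-variable-valued rather than deterministic.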


Subjects
Generic Drugs, Probability Theory, Humans, Researchers
4.
Psychol Methods ; 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38095991

ABSTRACT

Polynomial regression is an old and commonly discussed modeling technique, though recommendations for its usage vary widely. Here, we make the case that polynomial regression with second- and third-order terms should be part of every applied practitioner's standard model-building toolbox and should be taught to new students of the subject as the default technique for modeling nonlinearity. We argue that polynomial regression is superior to nonparametric alternatives for nonstatisticians because of its ease of interpretation, its flexibility, and its nonreliance on sophisticated mathematics such as knots and kernel smoothing. This makes it the ideal default for nonstatisticians interested in building realistic models that can capture global as well as local effects of predictors on a response variable. Low-order polynomial regression can effectively model compact floor and ceiling effects and local linearity, and it can prevent the inference of spurious interaction effects between distinct predictors when none are present. We also argue that the case against polynomial regression is largely specious, relying on misconceptions about the method, strawman arguments, or historical artifacts. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
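A minimal sketch of the recommended default: a third-order polynomial fit by ordinary least squares. The simulated data and coefficient values are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 1.0 + 0.8 * x - 0.5 * x**2 + 0.3 * x**3 + rng.normal(scale=0.5, size=200)

# Design matrix with intercept, linear, quadratic, and cubic terms.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("fitted coefficients (intercept, x, x^2, x^3):", np.round(beta, 2))
```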

5.
Educ Psychol Meas ; 82(3): 517-538, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35444337

ABSTRACT

Setting cutoff scores is one of the most common practices when scales are used to aid classification. This process is usually carried out univariately, with each optimal cutoff value decided sequentially, subscale by subscale. While it is widely known that this process necessarily reduces the probability of "passing" such a test, what is not properly recognized is that the test loses power to meaningfully discriminate between target groups with each new subscale that is introduced. We quantify and describe this property through an analytical exposition highlighting the counterintuitive geometry implied by marginal threshold-setting in multiple dimensions. Recommendations are presented that encourage applied researchers to think jointly, rather than marginally, when setting cutoff scores, to ensure an informative test.
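A toy simulation of the geometry described above (the correlation, cutoff percentile, and subscale counts are assumptions): when each subscale is screened at the same marginal cutoff, the joint pass rate keeps shrinking as subscales are added.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, pct = 100_000, 0.5, 25     # examinees; inter-subscale correlation; cutoff at the 25th percentile

for k in (1, 2, 4, 8):             # number of subscales
    cov = np.full((k, k), rho)
    np.fill_diagonal(cov, 1.0)
    scores = rng.multivariate_normal(np.zeros(k), cov, size=n)
    cutoffs = np.percentile(scores, pct, axis=0)          # marginal (univariate) cutoffs
    pass_rate = np.mean((scores > cutoffs).all(axis=1))   # must clear every cutoff to pass
    print(f"{k} subscales: pass rate ≈ {pass_rate:.3f}")
```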

6.
Psychol Methods ; 2022 Mar 14.
Article in English | MEDLINE | ID: mdl-35286105

ABSTRACT

The central limit theorem (CLT) is one of the most important theorems in statistics, and it is often introduced to social sciences researchers in an introductory statistics course. However, the recent replication crisis in the social sciences prompts us to investigate just how common certain misconceptions of statistical concepts are. The main purposes of this article are to investigate the misconceptions of the CLT among social sciences researchers and to address these misconceptions by clarifying the definition and properties of the CLT in a manner that is approachable to social science researchers. As part of our article, we conducted a survey to examine the misconceptions of the CLT among graduate students and researchers in the social sciences. We found that the most common misconception of the CLT is that researchers think the CLT is about the convergence of sample data to the normal distribution. We also found that most researchers did not realize that the CLT applies to both sample means and sample sums, and that the CLT has implications for many common statistical concepts and techniques. Our article addresses these misconceptions of the CLT by explaining the preliminaries needed to understand the CLT, introducing the formal definition of the CLT, and elaborating on the implications of the CLT. We hope that through this article, researchers can obtain a more accurate and nuanced understanding of how the CLT operates as well as its role in a variety of statistical concepts and techniques. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
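To make the most common misconception concrete, the classical (Lindeberg-Lévy) statement of the theorem is reproduced below: it concerns the sampling distribution of the standardized mean (equivalently, the sum), not the shape of the sample data themselves.

```latex
% X_1, X_2, \dots are i.i.d. with mean \mu and finite variance \sigma^2 > 0.
\[
  \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}}
  \;=\;
  \frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma \sqrt{n}}
  \;\xrightarrow{\,d\,}\;
  \mathcal{N}(0, 1)
  \qquad \text{as } n \to \infty.
\]
```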

7.
PLoS One ; 15(10): e0239821, 2020.
Article in English | MEDLINE | ID: mdl-33002051

ABSTRACT

Random-variable-valued measurements (RVVMs) are proposed as a new framework for treating measurement processes that generate non-deterministic sample data. They operate by assigning a probability measure to each observed sample instantiation of a global measurement process for some particular random quantity of interest, thus allowing for the explicit quantification of response process error. Common methodologies to date treat only measurement processes that generate fixed values for each sample unit, thus generating full (though possibly inaccurate) information on the random quantity of interest. However, many applied research situations in the non-experimental sciences naturally contain response process error, e.g. when psychologists assess patient agreement with various diagnostic survey items or when conservation biologists perform formal assessments to classify species-at-risk. Ignoring the sample-unit-level uncertainty of response process error in such measurement processes can greatly compromise the quality of resulting inferences. In this paper, a general theory of RVVMs is proposed to handle response process error, and several applications are considered.
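One way to picture an RVVM in a survey setting (a hypothetical sketch under assumptions made here, not the article's estimators): each respondent contributes a probability of endorsement rather than a fixed 0/1 answer, and Monte Carlo instantiation of those per-unit measures propagates the response process error into the prevalence estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed per-unit endorsement probabilities: each of 300 respondents supplies a
# Bernoulli measure (a probability of agreeing with the item), not a fixed answer.
p_i = rng.beta(2, 5, size=300)

# Instantiate every respondent's measure many times and estimate prevalence each time.
draws = rng.binomial(1, p_i, size=(2000, p_i.size))
prevalence = draws.mean(axis=1)

print("point estimate of prevalence:", round(prevalence.mean(), 3))
print("spread due to response process error (SD):", round(prevalence.std(), 4))
```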


Subjects
Algorithms, Biostatistics/methods, Analysis of Variance, Animals, Biological Variation, Population, Birds/physiology, Data Interpretation, Statistical, Depression/diagnosis, Humans, Nesting Behavior, Neuropsychological Tests/statistics & numerical data, Surveys and Questionnaires/statistics & numerical data
8.
Educ Psychol Meas ; 80(5): 825-846, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32855561

ABSTRACT

Simulations concerning the distributional assumptions of coefficient alpha are contradictory. To provide a more principled theoretical framework, this article relies on the Fréchet-Hoeffding bounds to show that the distributions of the items play a role in the estimation of correlations and covariances. More specifically, these bounds restrict the theoretical correlation range [-1, 1], such that certain correlation structures may be unfeasible. The direct implication of this result is that coefficient alpha is bounded above, depending on the shape of the distributions. A general form of the Fréchet-Hoeffding bounds is derived for discrete random variables. R code and a user-friendly shiny web application are also provided so that researchers can calculate the bounds for their own data.
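The restriction can be seen empirically with two skewed Likert-style items (the marginals below are assumptions; the article's closed-form bounds, R code, and shiny app are not reproduced here). The largest and smallest attainable Pearson correlations come from the comonotonic and countermonotonic pairings of the fixed marginals, and they fall well inside [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two 5-point items with different, heavily skewed marginal distributions.
x = rng.choice([1, 2, 3, 4, 5], size=n, p=[0.50, 0.25, 0.15, 0.07, 0.03])
y = rng.choice([1, 2, 3, 4, 5], size=n, p=[0.05, 0.10, 0.15, 0.30, 0.40])

upper = np.corrcoef(np.sort(x), np.sort(y))[0, 1]          # comonotonic (same-order) pairing
lower = np.corrcoef(np.sort(x), np.sort(y)[::-1])[0, 1]    # countermonotonic (opposite-order) pairing
print(f"attainable correlation range ≈ [{lower:.2f}, {upper:.2f}]")
```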

9.
Educ Psychol Meas ; 79(6): 1184-1197, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31619844

ABSTRACT

Chalmers recently published a critique of the use of ordinal α, proposed in Zumbo et al., as a measure of test reliability in certain research settings. In this response, we take up the task of refuting Chalmers' critique. We identify three broad misconceptions that characterize Chalmers' criticisms: (1) confusing assumptions with consequences of mathematical models, and confusing both with definitions; (2) confusion about the definitions and relevance of Stevens' scales of measurement; and (3) a failure to recognize that a measurement of a true quantity is a choice, not an absolute. On dissecting these misconceptions, we argue that Chalmers' critique of ordinal α is unfounded.

10.
Educ Psychol Meas ; 79(5): 813-826, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31488914

ABSTRACT

Within the context of moderated multiple regression, mean centering is recommended both to simplify the interpretation of the coefficients and to reduce the problem of multicollinearity. For almost 30 years, theoreticians and applied researchers have advocated centering as an effective way to reduce the correlation between variables and thus produce more stable estimates of regression coefficients. By reviewing the theory on which this recommendation is based, this article presents three new findings. First, the original assumption of expectation-independence among predictors, on which this recommendation is based, can be expanded to encompass many other joint distributions. Second, for many jointly distributed random variables, even some that enjoy considerable symmetry, the correlation between the centered main effects and their respective interaction can increase compared with the correlation of the uncentered effects. Third, the higher-order moments of the joint distribution play as much of a role as the lower-order moments, such that symmetry of the lower-dimensional marginals is a necessary but not sufficient condition for a decrease in the correlation between centered main effects and their interaction. Theoretical and simulation results are presented to help conceptualize the issues.
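A short sketch of the quantity at issue (the bivariate lognormal predictors are an illustrative assumption): the correlation between a main effect and its interaction term, computed before and after mean centering. Whether centering lowers or raises this correlation depends on the joint distribution of the predictors, which is the article's point.

```python
import numpy as np

rng = np.random.default_rng(0)
cov = [[1.0, 0.3], [0.3, 1.0]]

# Skewed (lognormal) predictors X and Z.
x, z = np.exp(rng.multivariate_normal([0.0, 0.0], cov, size=100_000)).T

def r_main_interaction(a, b):
    """Correlation between a main effect and the product (interaction) term."""
    return np.corrcoef(a, a * b)[0, 1]

xc, zc = x - x.mean(), z - z.mean()
print("uncentered  r(X,  X*Z):  ", round(r_main_interaction(x, z), 3))
print("centered    r(Xc, Xc*Zc):", round(r_main_interaction(xc, zc), 3))
```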
