Results 1 - 5 of 5
1.
Behav Res Methods; 52(6): 2604-2622, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32519291

ABSTRACT

A problem in the study of face perception is that results can be confounded by poor stimulus control. Ideally, experiments should precisely manipulate facial features under study and tightly control irrelevant features. Software for 3D face modeling provides such control, but there is a lack of free and open source alternatives specifically created for face perception research. Here, we provide such tools by expanding the open-source software MakeHuman. We present a database of 27 identity models and six expression pose models (sadness, anger, happiness, disgust, fear, and surprise), together with software to manipulate the models in ways that are common in the face perception literature, allowing researchers to: (1) create a sequence of renders from interpolations between two or more 3D models (differing in identity, expression, and/or pose), resulting in a "morphing" sequence; (2) create renders by extrapolation in a direction of face space, obtaining 3D "anti-faces" and caricatures; (3) obtain videos of dynamic faces from rendered images; (4) obtain average face models; (5) standardize a set of models so that they differ only in selected facial shape features; and (6) communicate with experiment software (e.g., PsychoPy) to render faces dynamically online. These tools vastly improve both the speed at which face stimuli can be produced and the level of control that researchers have over face stimuli. We validate the face model database and software tools through a small study on human perceptual judgments of stimuli produced with the toolkit.
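The morphing and caricature operations described in items (1) and (2) amount to interpolating, or extrapolating, between corresponding vertex sets of two aligned 3D models. A minimal sketch with NumPy (array shapes and function names are illustrative assumptions, not the toolkit's actual API):

```python
import numpy as np

def morph_sequence(verts_a, verts_b, n_steps):
    """Linearly interpolate between two aligned vertex arrays.

    verts_a, verts_b: (n_vertices, 3) arrays from two face models
    with identical topology; returns a list of n_steps frames
    running from verts_a (w = 0) to verts_b (w = 1).
    """
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - w) * verts_a + w * verts_b for w in weights]

def extrapolate(verts_a, verts_b, w):
    """Weights outside [0, 1] extrapolate along the same direction
    in face space, yielding caricatures (w > 1) or "anti-faces"
    (w < 0)."""
    return (1.0 - w) * verts_a + w * verts_b
```

Rendering each interpolated frame and concatenating the renders gives the dynamic "morphing" videos mentioned in item (3).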


Subjects
Facial Recognition , Anger , Emotions , Facial Expression , Humans , Software
2.
Psychon Bull Rev; 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38381300

ABSTRACT

A recent model of face processing proposes that face shape and motion are processed in parallel brain pathways. Although tested in neuroimaging, the assumptions of this theory have remained relatively untested in controlled psychophysical studies. Recruiting undergraduate students over the age of 18, we test this hypothesis using tight control of stimulus factors, through computerized three-dimensional face models and calibration of dimensional discriminability, and of decisional factors, through a model-based analysis using general recognition theory (GRT). Theoretical links between neural and perceptual forms of independence within GRT allowed us to derive the a priori hypotheses that perceptual separability of shape and motion should hold, while other forms of independence defined within GRT might fail. We found evidence supporting both of those predictions.
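Within GRT, perceptual separability of shape from motion means the marginal perceptual distribution along the shape dimension is unchanged across motion levels. A simplified sketch of that check on the mean vectors of the fitted Gaussian perceptual distributions (all mean values and names below are hypothetical, not the study's estimates):

```python
import numpy as np

# Hypothetical means of the 2D Gaussian perceptual distributions
# for the four stimuli in a 2x2 shape-by-motion GRT design.
# Axis 0 is the shape coordinate, axis 1 the motion coordinate.
means = {
    ("shape1", "motion1"): np.array([0.0, 0.0]),
    ("shape1", "motion2"): np.array([0.0, 1.5]),
    ("shape2", "motion1"): np.array([2.0, 0.0]),
    ("shape2", "motion2"): np.array([2.0, 1.5]),
}

def separable(means, axis):
    """True if the marginal mean on `axis` (0 = shape, 1 = motion)
    is constant across levels of the other dimension, i.e. that
    dimension is perceptually separable in this simplified sense."""
    levels = {}
    for key, mu in means.items():
        levels.setdefault(key[axis], []).append(mu[axis])
    return all(np.allclose(vals, vals[0]) for vals in levels.values())
```

A full GRT analysis additionally examines covariance structure (perceptual independence) and decision bounds (decisional separability); this sketch covers only the marginal-mean aspect of separability.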

3.
Psychon Bull Rev; 30(2): 553-563, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36163609

ABSTRACT

In this study, we present a novel model-based analysis of the association between awareness and perceptual processing based on a multidimensional version of signal detection theory (general recognition theory, or GRT). The analysis fits a GRT model to behavioral data and uses the estimated model to construct a sensitivity versus awareness (SvA) curve, representing sensitivity in the discrimination task at each value of relative likelihood of awareness. This approach treats awareness as a continuum rather than a dichotomy, but also provides an objective benchmark for low likelihood of awareness. In two experiments, we assessed nonconscious facial expression recognition using SvA curves in a condition in which faces (fearful vs. neutral) were rendered invisible using continuous flash suppression (CFS) for 500 and 700 milliseconds. We predicted and found nonconscious processing of face emotion, in the form of higher than chance-level sensitivity in the area of low likelihood of awareness.
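The sensitivity estimates underlying an SvA curve come from signal detection theory. A minimal illustration of computing sensitivity d' from hit and false-alarm rates (the paper's actual analysis fits a full multidimensional GRT model rather than this summary statistic):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA); 0 means chance performance.

    An SvA analysis computes a sensitivity of this kind within bins
    of increasing estimated likelihood of awareness, so above-zero
    values at low awareness likelihood indicate nonconscious
    processing.
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

For example, a hit rate of 0.84 against a false-alarm rate of 0.16 gives a d' of roughly 2, while equal hit and false-alarm rates give d' = 0.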


Subjects
Awareness , Facial Recognition , Humans , Emotions , Fear , Facial Expression
4.
J Abnorm Psychol; 130(5): 443-454, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34472882

ABSTRACT

Here, we take a computational approach to understand the mechanisms underlying face perception biases in depression. Thirty participants diagnosed with major depressive disorder and 30 healthy control participants took part in three studies involving recognition of identity and emotion in faces. We used signal detection theory to determine whether any perceptual biases exist in depression aside from decisional biases. We found lower sensitivity to happiness in general, and lower sensitivity to both happiness and sadness with ambiguous stimuli. Our use of highly-controlled face stimuli ensures that such asymmetry is truly perceptual in nature, rather than the result of studying expressions with inherently different discriminability. We found no systematic effect of depression on the perceptual interactions between face expression and identity. We also found that decisional strategies used in our task were different for people with depression and controls, but in a way that was highly specific to the stimulus set presented. We show through simulation that the observed perceptual effects, as well as other biases found in the literature, can be explained by a computational model in which channels encoding positive expressions are selectively suppressed. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
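The suppressed-channel account in the last sentence can be illustrated with a toy equal-variance signal detection simulation; the gain parameter and all numeric values below are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
z = NormalDist().inv_cdf

def simulate_d_prime(signal_mean, gain, n=100_000):
    """Simulated sensitivity for detecting an expression whose
    encoding channel's output is scaled by `gain`; gain < 1 models
    selective suppression of that channel."""
    noise = rng.normal(0.0, 1.0, n)                  # no-expression trials
    signal = rng.normal(gain * signal_mean, 1.0, n)  # expression trials
    criterion = gain * signal_mean / 2.0             # unbiased criterion
    hit = (signal > criterion).mean()
    fa = (noise > criterion).mean()
    return z(hit) - z(fa)

# Suppressing the channel encoding happiness (gain < 1) lowers
# sensitivity to happy expressions relative to an intact channel,
# mirroring the perceptual bias described above.
d_intact = simulate_d_prime(2.0, 1.0)
d_suppressed = simulate_d_prime(2.0, 0.5)
```

Halving the channel gain roughly halves the simulated d', which is the qualitative signature the model predicts for happiness in depression.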


Subjects
Major Depressive Disorder , Facial Recognition , Bias , Depression , Emotions , Facial Expression , Humans
5.
Front Neurosci; 13: 494, 2019.
Article in English | MEDLINE | ID: mdl-31156374

ABSTRACT

Neuroimaging research is growing rapidly, providing expansive resources for synthesizing data. However, navigating these dense resources is complicated by the volume of research articles and variety of experimental designs implemented across studies. The advent of machine learning algorithms and text-mining techniques has advanced automated labeling of published articles in biomedical research to alleviate such obstacles. To date, however, a comprehensive examination of document features and classifier techniques for annotating neuroimaging articles has yet to be undertaken. Here, we evaluated which combination of corpus (abstract-only or full-article text), features (bag-of-words or Cognitive Atlas terms), and classifier (Bernoulli naïve Bayes, k-nearest neighbors, logistic regression, or support vector classifier) resulted in the highest predictive performance in annotating a selection of 2,633 manually annotated neuroimaging articles. We found that, when utilizing full article text, data-driven features derived from the text performed the best, whereas if article abstracts were used for annotation, features derived from the Cognitive Atlas performed better. Additionally, we observed that when features were derived from article text, anatomical terms appeared to be the most frequently utilized for classification purposes and that cognitive concepts can be identified based on similar representations of these anatomical terms. Optimizing parameters for the automated classification of neuroimaging articles may result in a larger proportion of the neuroimaging literature being annotated with labels supporting the meta-analysis of psychological constructs.
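One of the compared configurations, bag-of-words features with a Bernoulli naïve Bayes classifier, models each article as a set of present/absent vocabulary terms. A self-contained illustrative re-implementation (not the authors' code; documents and labels below are made up):

```python
import math
from collections import Counter

def tokenize(text):
    """Bernoulli bag-of-words: only word presence matters."""
    return set(text.lower().split())

class BernoulliNB:
    """Minimal Bernoulli naive Bayes over bag-of-words features."""

    def fit(self, docs, labels):
        self.vocab = set().union(*(tokenize(d) for d in docs))
        self.classes = sorted(set(labels))
        self.prior, self.cond = {}, {}
        for c in self.classes:
            class_docs = [tokenize(d) for d, y in zip(docs, labels) if y == c]
            self.prior[c] = len(class_docs) / len(docs)
            counts = Counter(w for d in class_docs for w in d)
            # Laplace smoothing of per-class word-presence probabilities
            self.cond[c] = {w: (counts[w] + 1) / (len(class_docs) + 2)
                            for w in self.vocab}
        return self

    def predict(self, doc):
        words = tokenize(doc)

        def log_posterior(c):
            lp = math.log(self.prior[c])
            for w in self.vocab:  # absent words contribute too
                p = self.cond[c][w]
                lp += math.log(p if w in words else 1.0 - p)
            return lp

        return max(self.classes, key=log_posterior)
```

Swapping in logistic regression or a support vector classifier over the same binary feature vectors reproduces the other classifier conditions of the comparison.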
