Results 1 - 20 of 48
1.
Caries Res ; 56(2): 129-137, 2022.
Article in English | MEDLINE | ID: mdl-35398845

ABSTRACT

Visual attention is a significant gateway to a child's mind, and looking is one of the first behaviors young children develop. Untreated caries and the resulting poor dental aesthetics can have adverse emotional and social impacts on children's oral health-related quality of life through detrimental effects on self-esteem and self-concept. We therefore explored preschool children's eye movement patterns and visual attention to images with and without dental caries via eye movement analysis using hidden Markov models (EMHMM). A convenience sample of 157 preschool children was calibrated to the eye-tracker (Tobii Nano Pro) to ensure standardization. Each participant then viewed the same standardized pictures with and without dental caries while the eye-tracking device recorded their eye movements. Based on the sequence of viewed regions of interest (ROIs), a transition matrix was developed in which a participant's previously viewed ROI informed the ROI they considered next. Each individual's HMM was estimated from their eye movement data using a variational Bayesian approach that determined the optimal number of ROIs automatically. This data-driven approach generated the most representative eye movement patterns across participants. Preschool children exhibited two different eye movement patterns, distributed (78%) and selective (21%); the difference was statistically significant. In the distributed pattern, children switched between images with roughly equal probabilities, whereas in the selective pattern they were more likely to keep looking at the same ROI than to switch to the other one. Nevertheless, all children were equally likely to fixate first on the right or the left image, and all noticed teeth. The study findings reveal that most preschool children did not have an attentional bias toward images with or without dental caries. Furthermore, only a few children selectively fixated on images with dental caries.
Therefore, selective eye-movement patterns may strongly predict preschool children's sustained visual attention to dental caries. Nevertheless, future studies are essential to fully understand the developmental origins of differences in visual attention to common oral health presentations in children. Finally, EMHMM is appropriate for assessing inter-individual differences in children's visual attention.
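The transition-matrix idea at the core of this analysis can be illustrated with a minimal sketch. This is not the authors' implementation (which estimates full HMMs with a variational Bayesian approach); it only shows how a row-normalized transition matrix is built from a sequence of fixated ROIs. The function name and toy data are hypothetical:

```python
import numpy as np

def transition_matrix(roi_sequence, n_rois):
    """Row-normalized matrix of transitions between consecutive ROIs."""
    counts = np.zeros((n_rois, n_rois))
    for prev, nxt in zip(roi_sequence, roi_sequence[1:]):
        counts[prev, nxt] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero for unused ROIs
    return counts / row_sums

# Toy fixation sequence over two ROIs (0 = left image, 1 = right image):
# a "selective" viewer mostly stays within the same ROI.
seq = [0, 0, 0, 1, 1, 1, 1, 0, 0]
P = transition_matrix(seq, n_rois=2)
# Here P[i, i] > P[i, j]: staying is more likely than switching,
# the signature of the selective pattern described above.
```

A "distributed" viewer would instead produce rows with more nearly equal probabilities.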


Subject(s)
Dental Caries, Bayes Theorem, Preschool Child, Dental Caries/diagnostic imaging, Eye-Tracking Technology, Humans, Oral Health, Quality of Life
2.
Dent Traumatol ; 38(5): 410-416, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35460595

ABSTRACT

BACKGROUND/AIM: Traumatic dental injuries (TDIs) in the primary dentition may result in tooth discolouration and fractures. The aim of this child-centred study was to explore the differences in preschool children's eye movement patterns and visual attention to typical outcomes following TDIs to primary teeth. MATERIALS AND METHODS: An eye-tracker recorded 155 healthy preschool children's eye movements while they viewed clinical images of healthy teeth, tooth fractures and discolourations. The visual search pattern was analysed using the eye movement analysis with hidden Markov models (EMHMM) approach and preference for the various regions of interest (ROIs). RESULTS: Two different eye movement patterns (distributed and selective) were identified (p < .05). Children with the distributed pattern shifted their fixations between the presented images, while those with the selective pattern remained focused on the first image they saw. CONCLUSIONS: Preschool children noticed teeth. However, most of them did not have an attentional bias, implying that they did not interpret these TDI outcomes negatively. Only a few children avoided looking at images with TDIs, indicating a potential negative impact. The EMHMM approach is appropriate for assessing inter-individual differences in children's visual attention to TDI outcomes.


Subject(s)
Tooth Fractures, Tooth Injuries, Preschool Child, Eye-Tracking Technology, Humans, Deciduous Tooth
3.
Behav Res Methods ; 53(6): 2473-2486, 2021 12.
Article in English | MEDLINE | ID: mdl-33929699

ABSTRACT

The eye movement analysis with hidden Markov models (EMHMM) method provides quantitative measures of individual differences in eye-movement patterns. However, it is limited to tasks where stimuli share the same feature layout (e.g., faces). Here we proposed combining EMHMM with the data-mining technique of co-clustering to discover participant groups with consistent eye-movement patterns across stimuli in tasks whose stimuli have different feature layouts. Applying this method to eye movements in scene perception, we discovered explorative (switching between foreground and background information, or between different regions of interest) and focused (mainly looking at the foreground, with less switching) eye-movement patterns among Asian participants. Higher similarity to the explorative pattern predicted better foreground-object recognition performance, whereas higher similarity to the focused pattern was associated with better feature integration in the flanker task. These results have important implications for using eye tracking as a window into individual differences in cognitive abilities and styles. Thus, EMHMM with co-clustering provides quantitative assessments of eye-movement patterns across stimuli and tasks. It can be applied to many other real-life visual tasks, making a significant impact on the use of eye tracking to study cognitive behavior across disciplines.


Subject(s)
Eye Movements, Individuality, Asian People, Cluster Analysis, Humans, Visual Perception
4.
Cogn Emot ; 34(8): 1704-1710, 2020 12.
Article in English | MEDLINE | ID: mdl-32552552

ABSTRACT

Theoretical models propose that attentional biases might account for the maintenance of social anxiety symptoms. However, previous eye-tracking studies have yielded mixed results. One explanation is that existing studies quantify eye movements using arbitrary, experimenter-defined criteria such as time segments and regions of interest that do not capture the dynamic nature of overt visual attention. The current study adopted the Eye Movement analysis with Hidden Markov Models (EMHMM) approach for eye-movement analysis, a machine-learning, data-driven approach that can cluster people's eye movements into different strategy groups. Sixty participants high and low in self-reported social anxiety symptoms viewed angry and neutral faces in a free-viewing task while their eye movements were recorded. EMHMM analyses revealed novel associations between eye-movement patterns and social anxiety symptoms that were not evident with standard analytical approaches. Participants who adopted the same face-viewing strategy when viewing both angry and neutral faces showed higher social anxiety symptoms than those who transitioned between strategies when viewing angry versus neutral faces. EMHMM can offer novel insights into psychopathology-related attention processes.


Subject(s)
Anxiety/psychology, Attentional Bias/physiology, Emotions/physiology, Eye Movements/physiology, Facial Expression, Adult, Anxiety/physiopathology, Female, Hong Kong, Humans, Male, Markov Chains, Students/psychology, Students/statistics & numerical data, Young Adult
5.
Behav Res Methods ; 52(3): 1026-1043, 2020 06.
Article in English | MEDLINE | ID: mdl-31712999

ABSTRACT

Here we propose the eye movement analysis with switching hidden Markov model (EMSHMM) approach to analyzing eye movement data in cognitive tasks involving cognitive state changes. We used a switching hidden Markov model (SHMM) to capture a participant's cognitive state transitions during the task, with eye movement patterns during each cognitive state being summarized using a regular HMM. We applied EMSHMM to a face preference decision-making task with two pre-assumed cognitive states, exploration and preference-biased periods, and we discovered two common eye movement patterns through clustering the cognitive state transitions. One pattern showed both a later transition from the exploration to the preference-biased cognitive state and a stronger tendency to look at the preferred stimulus at the end, and was associated with higher decision inference accuracy at the end; the other pattern entered the preference-biased cognitive state earlier, leading to earlier above-chance inference accuracy in a trial but lower inference accuracy at the end. This finding was not revealed by any other method. As compared with our previous HMM method, which assumes no cognitive state change (i.e., EMHMM), EMSHMM captured eye movement behavior in the task better, resulting in higher decision inference accuracy. Thus, EMSHMM reveals and provides quantitative measures of individual differences in cognitive behavior/style, making a significant impact on the use of eye tracking to study cognitive behavior across disciplines.
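The state-inference step behind a switching model can be sketched in a highly simplified form: given transition and emission probabilities for an "exploration" and a "preference-biased" state, Viterbi decoding recovers the most likely point at which a viewer switched states. All probabilities, labels, and data below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """Most likely hidden-state sequence for a discrete-emission HMM."""
    n_states = log_start.shape[0]
    T = len(obs)
    delta = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    delta[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # (from, to)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = back[t + 1, states[t + 1]]
    return states

# States: 0 = exploration (fixates either face about equally),
#         1 = preference-biased (mostly fixates the preferred face).
# Observations: 0 = fixation on the non-preferred face, 1 = on the preferred face.
start = np.log(np.array([0.99, 0.01]))
trans = np.log(np.array([[0.90, 0.10],     # exploration is sticky but can switch
                         [0.01, 0.99]]))   # preference-biased rarely reverts
emit = np.log(np.array([[0.5, 0.5],
                        [0.1, 0.9]]))
obs = [0, 1, 0, 1, 1, 1, 1, 1, 1, 1]
path = viterbi(obs, start, trans, emit)
# The decoded path marks where the viewer most plausibly entered
# the preference-biased state.
```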


Subject(s)
Eye Movements, Face, Humans, Individuality, Markov Chains, Probability
6.
J Sleep Res ; 28(3): e12671, 2019 06.
Article in English | MEDLINE | ID: mdl-29493041

ABSTRACT

Resting-state spontaneous neural activities consume far more biological energy than stimulus-induced activities, suggesting their significance. However, existing studies of sleep loss and emotional functioning have focused on how sleep deprivation modulates stimulus-induced emotional neural activities. The current study aimed to investigate the impact of sleep deprivation on the brain network of emotional functioning using resting-state electroencephalography. Two established resting-state electroencephalogram indexes (i.e. frontal alpha asymmetry and frontal theta/beta ratio) were used to reflect the functioning of the emotion regulatory neural network. Participants completed an 8-min resting-state electroencephalogram recording after a well-rested night or 24 hr of sleep deprivation. The Sleep Deprivation group had a higher ratio of power density in the theta band to that in the beta band (theta/beta ratio) in the frontal area than the Sleep Control group, suggesting reduced frontal cortical regulation of subcortical affective drive after sleep deprivation. There was also marginally more left-lateralized frontal alpha power (left frontal alpha asymmetry) in the Sleep Deprivation group than in the Sleep Control group. In addition, a higher theta/beta ratio and more left alpha lateralization were correlated with higher sleepiness and lower vigilance. The results converged in suggesting compromised emotional regulatory processes during the resting state after sleep deprivation. Our work provides the first resting-state neural evidence for compromised emotional functioning after sleep loss, highlighting the significance of examining resting-state neural activities within the affective brain network as a default functional mode when investigating the sleep-emotion relationship.
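The frontal theta/beta ratio described above is, at its simplest, a ratio of band powers. A minimal sketch on synthetic data follows; the sampling rate, band edges, and signal composition are assumptions for illustration, not the study's recording parameters:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of a signal within the band [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

fs = 250.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 8.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Synthetic "frontal EEG": strong 6 Hz theta, weaker 20 Hz beta, plus noise.
eeg = (3.0 * np.sin(2 * np.pi * 6 * t)
       + 1.0 * np.sin(2 * np.pi * 20 * t)
       + 0.5 * rng.standard_normal(len(t)))
theta = band_power(eeg, fs, 4, 8)    # theta band: 4-8 Hz
beta = band_power(eeg, fs, 13, 30)   # beta band: 13-30 Hz
tbr = theta / beta                   # theta/beta ratio
```

A higher `tbr` reflects relatively more slow-wave (theta) than fast-wave (beta) activity, the pattern the study reports after sleep deprivation.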


Subject(s)
Electroencephalography/methods, Emotions/physiology, Sleep Deprivation/physiopathology, Adult, Female, Humans, Male, Young Adult
7.
J Vis ; 19(4): 10, 2019 04 01.
Article in English | MEDLINE | ID: mdl-30952161

ABSTRACT

Recent research has suggested that the visual span in stimulus identification can be enlarged through perceptual learning. Since both English and music reading involve left-to-right sequential symbol processing, music-reading experience may enhance symbol identification through perceptual learning particularly in the right visual field (RVF). In contrast, as Chinese can be read in all directions, and components of Chinese characters do not consistently form a left-right structure, this hypothesized RVF enhancement effect may be limited in Chinese character identification. To test these hypotheses, here we recruited musicians and nonmusicians who read Chinese as their first language (L1) and English as their second language (L2) to identify music notes, English letters, Chinese characters, and novel symbols (Tibetan letters) presented at different eccentricities and visual field locations on the screen while maintaining central fixation. We found that in English letter identification, significantly more musicians achieved above-chance performance in the center-RVF locations than nonmusicians. This effect was not observed in Chinese character or novel symbol identification. We also found that in music note identification, musicians outperformed nonmusicians in accuracy in the center-RVF condition, consistent with the RVF enhancement effect in the visual span observed in English-letter identification. These results suggest that the modulation of music-reading experience on the visual span for stimulus identification depends on the similarities in the perceptual processes involved.


Subject(s)
Asian People, Comprehension/physiology, Language, Music, Visual Pattern Recognition/physiology, Reading, Adult, Female, Humans, Learning, Male, Visual Fields/physiology, Young Adult
8.
Behav Res Methods ; 50(1): 362-379, 2018 02.
Article in English | MEDLINE | ID: mdl-28409487

ABSTRACT

How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimulus-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. First, we use fixations recorded while viewing 800 static natural-scene images and infer an observer-related characteristic: the task at hand. We achieve an average 55.9% correct classification rate (chance = 33%). We show that correct classification rates correlate positively with the number of salient regions present in the stimuli. Second, we use eye positions recorded while viewing 15 conversational videos and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average 81.2% correct classification rate (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
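The core classification idea (score a scanpath under per-class HMMs, then pick the best-scoring class) can be sketched with a toy discrete HMM. The published method uses variational HMMs plus discriminant analysis, so everything below (the two "task" models, their probabilities, and the data) is an illustrative assumption:

```python
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Log P(obs) under a discrete-emission HMM via the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    log_prob = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()            # rescale each step to avoid underflow
        log_prob += np.log(s)
        alpha /= s
    return log_prob

# Two toy "observer class" HMMs over 2 ROIs; they differ only in dynamics.
start = np.array([0.5, 0.5])
emit = np.array([[0.9, 0.1],
                 [0.1, 0.9]])
sticky = np.array([[0.9, 0.1], [0.1, 0.9]])    # long dwells in one ROI
switchy = np.array([[0.1, 0.9], [0.9, 0.1]])   # alternating scanpath

scanpath = [0, 1, 0, 1, 0, 1, 0, 1]            # alternates between ROIs
ll_sticky = log_likelihood(scanpath, start, sticky, emit)
ll_switchy = log_likelihood(scanpath, start, switchy, emit)
predicted = "switchy" if ll_switchy > ll_sticky else "sticky"
```

In the full method, these per-class likelihoods (or HMM parameters) feed a discriminant-analysis classifier rather than a simple argmax.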


Subject(s)
Eye Movements, Machine Learning, Markov Chains, Photic Stimulation/methods, Ocular Fixation, Humans, Individuality, Probability, Task Performance and Analysis
9.
J Vis ; 14(11)2014 Sep 16.
Article in English | MEDLINE | ID: mdl-25228627

ABSTRACT

We use a hidden Markov model (HMM)-based approach to analyze eye movement data in face recognition. HMMs are statistical models specialized in handling time-series data. We conducted a face recognition task with Asian participants and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than the locations of the fixations alone.


Subject(s)
Eye Movements/physiology, Face/physiology, Visual Pattern Recognition/physiology, Recognition (Psychology)/physiology, Adolescent, Female, Humans, Male, Markov Chains, Statistical Models, Probability, Young Adult
10.
IEEE Trans Pattern Anal Mach Intell ; 46(9): 5967-5985, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38517727

ABSTRACT

We propose the gradient-weighted Object Detector Activation Maps (ODAM), a visual explanation technique for interpreting the predictions of object detectors. Utilizing the gradients of detector targets flowing into the intermediate feature maps, ODAM produces heat maps that show the influence of regions on the detector's decision for each predicted attribute. Compared to previous work on classification activation maps (CAM), ODAM generates instance-specific explanations rather than class-specific ones. We show that ODAM is applicable to one-stage, two-stage, and transformer-based detectors with different types of detector backbones and heads, and produces higher-quality visual explanations than the state of the art in terms of both effectiveness and efficiency. We discuss two explanation tasks for object detection: 1) object specification: what is the important region for the prediction? 2) object discrimination: which object is detected? Aiming at these two aspects, we present a detailed analysis of the visual explanations of detectors and carry out extensive experiments to validate the effectiveness of the proposed ODAM. Furthermore, we investigate user trust in the explanation maps, how well the visual explanations of object detectors agree with human explanations, as measured through human eye gaze, and whether this agreement is related to user trust. Finally, we also propose two applications, ODAM-KD and ODAM-NMS, based on these two abilities of ODAM. ODAM-KD utilizes the object specification of ODAM to generate top-down attention for key predictions and instruct the knowledge distillation of object detection. ODAM-NMS considers the location of the model's explanation for each prediction to distinguish duplicate detected objects. A training scheme, ODAM-Train, is proposed to improve the quality of object discrimination and help with ODAM-NMS.
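ODAM's gradient-weighting idea follows the CAM family: pool the gradients flowing into a feature map to weight each channel, then sum the weighted maps and rectify. The sketch below shows only this generic mechanism on toy numpy arrays, not ODAM's instance-specific formulation for detector heads:

```python
import numpy as np

def gradient_weighted_map(activations, gradients):
    """CAM-style heat map: ReLU of the gradient-weighted sum of feature maps.

    activations, gradients: arrays of shape (channels, H, W).
    """
    weights = gradients.mean(axis=(1, 2))              # global-average-pooled grads
    heat = np.tensordot(weights, activations, axes=1)  # weighted sum -> (H, W)
    heat = np.maximum(heat, 0)                         # keep positive evidence only
    if heat.max() > 0:
        heat /= heat.max()                             # normalize to [0, 1]
    return heat

# Toy example: channel 0 activates top-left, channel 1 bottom-right;
# the "detector score" gradient flows mainly through channel 0.
acts = np.zeros((2, 4, 4))
acts[0, :2, :2] = 1.0
acts[1, 2:, 2:] = 1.0
grads = np.stack([np.full((4, 4), 0.8), np.full((4, 4), 0.1)])
heat = gradient_weighted_map(acts, grads)
# The top-left region dominates the heat map, mirroring where the
# score gradient concentrated.
```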

11.
Br J Psychol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858823

ABSTRACT

Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we examined human participants' attention strategies when classifying images and when explaining how they classified the images through eye-tracking, and compared their attention strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than for the classification task itself. Two representative explanation strategies were identified through clustering: one involved focused visual scanning on foreground objects with more conceptual explanations, which contained more specific information for inferring class labels, whereas the other involved explorative scanning with more visual explanations, which were rated higher in effectiveness for early category learning. Interestingly, XAI saliency-map explanations had the highest similarity to the explorative attention strategy in humans, and explanations that highlight discriminative features by invoking observable causality through perturbation had higher similarity to human strategies than those highlighting internal features associated with higher class scores. Thus, humans use both visual and conceptual information during explanation, which serve different purposes, and XAI methods that highlight features informing observable causality match better with human explanations and are potentially more accessible to users.

12.
Neural Netw ; 177: 106392, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38788290

ABSTRACT

Explainable artificial intelligence (XAI) has been increasingly investigated to enhance the transparency of black-box artificial intelligence models, promoting better user understanding and trust. Developing an XAI that is faithful to models and plausible to users is both a necessity and a challenge. This work examines whether embedding human attention knowledge into saliency-based XAI methods for computer vision models could enhance their plausibility and faithfulness. Two novel XAI methods for object detection models, namely FullGrad-CAM and FullGrad-CAM++, were first developed to generate object-specific explanations by extending the current gradient-based XAI methods for image classification models. Using human attention as the objective plausibility measure, these methods achieve higher explanation plausibility. Interestingly, all current XAI methods when applied to object detection models generally produce saliency maps that are less faithful to the model than human attention maps from the same object detection task. Accordingly, human attention-guided XAI (HAG-XAI) was proposed to learn from human attention how to best combine explanatory information from the models to enhance explanation plausibility by using trainable activation functions and smoothing kernels to maximize the similarity between XAI saliency map and human attention map. The proposed XAI methods were evaluated on widely used BDD-100K, MS-COCO, and ImageNet datasets and compared with typical gradient-based and perturbation-based XAI methods. Results suggest that HAG-XAI enhanced explanation plausibility and user trust at the expense of faithfulness for image classification models, and it enhanced plausibility, faithfulness, and user trust simultaneously and outperformed existing state-of-the-art XAI methods for object detection models.


Subject(s)
Artificial Intelligence, Attention, Humans, Attention/physiology, Neural Networks (Computer)
13.
J Cogn Neurosci ; 25(7): 998-1007, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23448523

ABSTRACT

Hemispheric asymmetry in the processing of local and global features has been argued to originate from differences in frequency filtering in the two hemispheres, with little neurophysiological support. Here we test the hypothesis that this asymmetry takes place at an encoding stage beyond the sensory level, due to asymmetries in anatomical connections within each hemisphere. We use two simple encoding networks with differential connection structures as models of differential encoding in the two hemispheres based on a hypothesized generalization of neuroanatomical evidence from the auditory modality to the visual modality: The connection structure between columns is more distal in the language areas of the left hemisphere and more local in the homotopic regions in the right hemisphere. We show that both processing differences and differential frequency filtering can arise naturally in this neurocomputational model with neuroanatomically inspired differences in connection structures within the two model hemispheres, suggesting that hemispheric asymmetry in the processing of local and global features may be due to hemispheric asymmetry in connection structure rather than in frequency tuning.


Subject(s)
Functional Laterality/physiology, Neurological Models, Visual Perception/physiology, Analysis of Variance, Computer Simulation, Humans, Photic Stimulation
14.
Br J Psychol ; 114 Suppl 1: 17-20, 2023 May.
Article in English | MEDLINE | ID: mdl-36951761

ABSTRACT

Multiple factors have been proposed to contribute to the other-race effect in face recognition, including perceptual expertise and social-cognitive accounts. Here, we propose to understand the effect and its contributing factors from the perspectives of learning mechanisms that involve joint learning of visual attention strategies and internal representations for faces, which can be modulated by quality of contact with other-race individuals including emotional and motivational factors. Computational simulations of this process will enhance our understanding of interactions among factors and help resolve inconsistent results in the literature. In particular, since learning is driven by task demands, visual attention effects observed in different face-processing tasks, such as passive viewing or recognition, are likely to be task specific (although may be associated) and should be examined and compared separately. When examining visual attention strategies, the use of more data-driven and comprehensive eye movement measures, taking both spatial-temporal pattern and consistency of eye movements into account, can lead to novel discoveries in other-race face processing. The proposed framework and analysis methods may be applied to other tasks of real-life significance such as face emotion recognition, further enhancing our understanding of the relationship between learning and visual cognition.


Subject(s)
Visual Pattern Recognition, Racial Groups, Humans, Racial Groups/psychology, Learning, Recognition (Psychology), Eye Movements
15.
Emotion ; 23(4): 1028-1039, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35980687

ABSTRACT

Recent research has suggested that dynamic emotion recognition involves strong audiovisual association; that is, facial or vocal information alone automatically induces perceptual processes in the other modality. We hypothesized that different emotions may differ in the automaticity of audiovisual association, resulting in differential audiovisual information processing. Participants judged the emotion of a talking-head video under audiovisual, video-only (with no sound), and audio-only (with a static neutral face) conditions. Among the six basic emotions, disgust had the largest audiovisual advantage over the unimodal conditions in recognition accuracy. In addition, in the recognition of all the emotions except for disgust, participants' eye-movement patterns did not change significantly across the three conditions, suggesting mandatory audiovisual information processing. In contrast, in disgust recognition, participants' eye movements in the audiovisual condition were less eyes-focused than the video-only condition and more eyes-focused than the audio-only condition, suggesting that audio information in the audiovisual condition interfered with eye-movement planning for important features (eyes) for disgust. In addition, those whose eye-movement pattern was affected less by concurrent disgusted voice information benefited more in recognition accuracy. Disgust recognition is learned later in life and thus may involve a reduced amount of audiovisual associative learning. Consequently, audiovisual association in disgust recognition is less automatic and demands more attentional resources than other emotions. Thus, audiovisual information processing in emotion recognition depends on the automaticity of audiovisual association of the emotion resulting from associative learning. This finding has important implications for real-life emotion recognition and multimodal learning. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Disgust, Facial Recognition, Humans, Eye-Tracking Technology, Emotions, Cognition, Learning, Facial Expression
16.
Sci Rep ; 13(1): 1704, 2023 01 30.
Article in English | MEDLINE | ID: mdl-36717669

ABSTRACT

Using background music (BGM) during learning is a common behavior, yet whether BGM facilitates or hinders learning remains inconclusive, and the underlying mechanism is largely an open question. This study aims to elucidate the effect of self-selected BGM on a reading task for learners with different characteristics. In particular, learners' reading task performance, metacognition, and eye movements were examined in relation to their personal traits, including language proficiency, working memory capacity, music experience and personality. Data were collected from a between-subjects experiment with 100 non-native English speakers who were randomly assigned to two groups. Those in the experimental group read English passages with music of their own choice played in the background, while those in the control group performed the same task in silence. Results showed no salient differences in passage comprehension accuracy or metacognition between the two groups. Comparisons of fine-grained eye movement measures revealed that BGM imposed heavier cognitive load on post-lexical processes but not on lexical processes. It was also revealed that students with higher English proficiency or more frequent BGM usage in daily self-learning/reading experienced less cognitive load when reading with their BGM, whereas students with higher working memory capacity (WMC) invested more mental effort than those with lower WMC in the BGM condition. These findings further scientific understanding of how BGM interacts with cognitive tasks in the foreground, and provide practical guidance for learners and learning environment designers on making the most of BGM for instruction and learning.


Subject(s)
Eye Movements, Music, Humans, Comprehension, Language, Reading
17.
IEEE Trans Neural Netw Learn Syst ; 34(3): 1537-1551, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34464269

ABSTRACT

The hidden Markov model (HMM) is a broadly applied generative model for representing time-series data, and clustering HMMs has attracted increasing interest from machine-learning researchers. However, the number of clusters (K) and the number of hidden states (S) for cluster centers are still difficult to determine. In this article, we propose a novel HMM-based clustering algorithm, the variational Bayesian hierarchical EM algorithm, which clusters HMMs through their densities and priors and simultaneously learns posteriors for the novel HMM cluster centers that compactly represent the structure of each cluster. The numbers K and S are determined automatically in two ways. First, we place a prior on the pair (K, S) and approximate their posterior probabilities, from which the values with the maximum posterior are selected. Second, some clusters and states are pruned out implicitly when no data samples are assigned to them, leading to automatic selection of the model complexity. Experiments on synthetic and real data demonstrate that our algorithm performs better than model selection techniques with maximum likelihood estimation.
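The "implicit pruning" behavior described (clusters vanish when no samples are assigned to them) can be illustrated with a plain hard-assignment step over toy points. The actual algorithm does this within a variational Bayesian EM over HMMs, so this sketch is only an analogy with hypothetical names and data:

```python
import numpy as np

def assign_and_prune(X, centers):
    """One hard-assignment step; candidate clusters with no members are pruned."""
    # Squared Euclidean distance from every point to every candidate center.
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    kept = np.unique(labels)        # clusters that received at least one point
    return centers[kept], labels

# Start with K = 4 candidate centers, but the data only support 2 clusters;
# the two far-away candidates never win an assignment and are dropped.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)),   # cluster near (0, 0)
               rng.normal(5, 0.1, (20, 2))])  # cluster near (5, 5)
centers0 = np.array([[0.0, 0.0], [5.0, 5.0],
                     [100.0, 100.0], [-100.0, -100.0]])
centers, labels = assign_and_prune(X, centers0)
```

In the variational Bayesian setting the same effect falls out of the posterior: components with no responsibility mass collapse toward the prior and are effectively removed.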

18.
Dev Psychol ; 59(2): 353-363, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36342437

ABSTRACT

Early attention bias to threat-related negative emotions may lead children to overestimate dangers in social situations. This study examined its emergence and how it might develop in tandem with a known predictor of toddlers' fear of strangers, namely temperamental shyness, in 168 Chinese toddlers. Measurable individual differences in such attention bias to fearful faces were found and remained stable from age 12 to 18 months. When shown photos of paired happy versus fearful or happy versus angry faces, toddlers consistently gazed more initially, had longer initial fixations, and had longer total fixations at fearful faces compared with happy faces. However, they consistently gazed more initially at happy faces compared with angry faces, and had longer total fixations at angry faces only at 18 months. Stranger anxiety at 12 months predicted attention bias to fearful faces at 18 months. Temperamentally shyer 12-month-olds went on to show stronger attention bias to fearful faces at 18 months, and their fear of strangers also increased more from 12 to 18 months. Together with prior research suggesting that attention bias to angry or fearful faces foretells social anxiety, the present findings point to likely positive feedback loops among attention bias to fearful faces, temperamental shyness, and stranger anxiety in early childhood. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Facial Expression; Fear; Humans; Child, Preschool; Infant; Fear/psychology; Anxiety; Anger; Happiness; Emotions
19.
J Vis; 12(2), 2012 Feb 28.
Article in English | MEDLINE | ID: mdl-22375068

ABSTRACT

In English word recognition, performance is usually best when the initial fixation is directed to the left of the word's center (the optimal viewing position, OVP). This effect has been argued to involve an interplay between left-hemisphere lateralization for language processing and the perceptual experience of most often fixating at word beginnings. While both factors predict a left-biased OVP in visual word recognition, in face recognition they predict contrasting biases: people prefer to fixate the left half-face, suggesting that the OVP should be to the left of the center, whereas right-hemisphere lateralization in face processing suggests that the OVP should be to the right of the center, so that most of the face projects to the right hemisphere. Here, we show that the OVP in face recognition was to the left of the center, suggesting a greater influence of perceptual experience than of hemispheric asymmetry in central vision. In contrast, hemispheric lateralization effects emerged when faces were presented away from the center; there was an interaction between visual field of presentation and location (center vs. periphery), suggesting differential influences of perceptual experience and hemispheric asymmetry in central and peripheral vision.


Subject(s)
Pattern Recognition, Visual/physiology; Reaction Time/physiology; Visual Fields/physiology; Adolescent; Face; Female; Fixation, Ocular; Humans; Male; Photic Stimulation/methods; Young Adult
20.
Cogn Res Princ Implic; 7(1): 64, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35867196

ABSTRACT

Use of face masks is one of the measures adopted by the general community to curb disease transmission during the ongoing COVID-19 pandemic. This widespread use of face masks has indeed been shown to disrupt day-to-day face recognition. People with autism spectrum disorder (ASD) often have pre-existing impairments in face recognition and are expected to be more vulnerable to this disruption. Here, we recruited typically developing adults and adults with ASD, and measured their non-verbal intelligence, autism spectrum quotient, empathy quotient, and recognition performance for faces with and without a face mask covering the lower half of the face. When faces were initially learned unobstructed, participants showed a general reduction in recognition performance for masked faces. In contrast, when masked faces were learned first, typically developing adults benefited from an overall advantage in recognizing both masked and unmasked faces, whereas adults with ASD recognized unmasked faces at a significantly lower level of performance than masked faces; this face recognition discrepancy was predicted by a higher level of autistic traits. This paper also discusses how autistic traits influence the processing of faces with and without face masks.


Subject(s)
Autism Spectrum Disorder; COVID-19; Adult; Humans; Masks; Pandemics; Recognition, Psychology