Results 1 - 20 of 21
1.
Sci Rep ; 13(1): 174, 2023 01 04.
Article in English | MEDLINE | ID: mdl-36599964

ABSTRACT

Studies of the impact of face masks on the recognition of emotional facial expressions are sparse in children. Moreover, to our knowledge, no study has so far considered mask color (in either adults or children), even though this esthetic property is thought to influence information processing. To explore these issues, the present study examined whether first-graders, fifth-graders, and young adults were influenced by the absence or presence (and color: pink, green, red, black, or white) of a face mask when asked to judge emotional facial expressions of fear, anger, sadness, or neutrality. The analyses suggested that the presence of a mask affected the recognition of sad and fearful faces but did not significantly influence the perception of angry and neutral faces. Mask color slightly modulated the recognition of facial emotional expressions, without a systematic pattern that would allow a clear conclusion to be drawn. None of these findings varied according to age group. The contribution of different facial areas to efficient emotion recognition is discussed with reference to methodological and theoretical considerations and in the light of recent studies.


Subject(s)
Facial Recognition , Young Adult , Child , Humans , Cross-Sectional Studies , Emotions , Anger , Fear , Facial Expression
2.
Q J Exp Psychol (Hove) ; 75(7): 1330-1342, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34623189

ABSTRACT

People's memory of what was said and who said what during dialogue plays a central role in mutual comprehension and subsequent adaptation. This article shows that well-established conversational memory effects, such as the self-production effect and the emotion effect, actually depend on the nature of the interaction. We focus specifically on the impact of the collaborative nature of the interaction, comparing participants' conversational memory in non-collaborative and collaborative settings involving interactions between two people (i.e., dialogue). The findings reveal that the amplitude of these conversational memory effects depends on whether the interaction is collaborative or non-collaborative: the effects are attenuated when people have the opportunity to collaborate, because information that remained non-salient in the non-collaborative condition (neutral and partner-produced words) became as salient in the collaborative condition as otherwise salient information (emotional and self-produced words). We highlight the importance of these findings for the study of dialogue and conversational memory.


Subject(s)
Communication , Emotions , Humans
3.
Psychol Res ; 84(2): 514-527, 2020 Mar.
Article in English | MEDLINE | ID: mdl-30047022

ABSTRACT

When two dialogue partners need to refer to something, they jointly negotiate which referring expression should be used. If needed, the chosen referring expression is then reused throughout the interaction, which potentially has a direct, positive impact on subsequent communication. The purpose of this study was to determine whether the way in which the partners view, or conceptualise, the referent under discussion affects referring expression negotiation and subsequent communication. A matching task was preceded by an individual task during which participants were required to describe their conceptualisations of abstract tangram pictures. The results revealed that participants found it more difficult to converge on a single referring expression during the matching task when they initially held different conceptualisations of the pictures, and this had a negative impact on the remainder of the task. These findings are discussed in light of the distinction between shared and mutual knowledge, highlighting how the former directly contributes to the formation of the latter.


Subject(s)
Communication , Concept Formation , Cooperative Behavior , Female , Humans , Male , Psychomotor Performance , Young Adult
4.
J Exp Psychol Appl ; 24(4): 476-489, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30346193

ABSTRACT

This study examined the effect of the number of citations attributed to documents on third-year psychology students' selection of bibliographical references. Our main assumption was that students would take high numbers of citations as accessible relevance cues and use them heuristically to facilitate decision making, potentially bypassing deeper relevance assessment based on semantic processing. Experiment 1 presented students with a reference selection task while manipulating the number of citations attributed to references, and found that the number of citations had a strong impact on reference selection. Moreover, the effect was independent of topic familiarity and even of students' prior knowledge of what the number of citations meant. Experiment 2 used eye-tracking data to show that this "big number" effect was contingent upon participants fixating the numbers of citations attributed to documents. Experiment 3 manipulated the semantic relevance of references to the search topic and demonstrated that less relevant references were three times more likely to be selected when they came with a high number of citations. Overall, the study shows that the number of citations significantly influences students' selections, competing with the semantic relevance of references. Implications for the teaching of online search skills are discussed.


Subject(s)
Bibliographies as Topic , Decision Making , Heuristics , Students , Adolescent , Adult , Female , Humans , Knowledge , Male , Middle Aged , Young Adult
5.
Cognition ; 180: 52-58, 2018 11.
Article in English | MEDLINE | ID: mdl-29981968

ABSTRACT

The joint impact of emotion and production on conversational memory was examined in two experiments where pairs of participants took turns producing verbal information. They were instructed to produce out loud sentences based on either neutral or emotional (Experiment 1: negative; Experiment 2: positive) words. Each participant was then asked to recall as many words as possible (content memory) and to indicate who had produced each word (reality monitoring). The analyses showed that both self-production and emotion boost content memory, although emotion also impairs reality monitoring. This study sheds light on how both factors (emotion and production) may constrain language interaction memory through information saliency.


Subject(s)
Emotions/physiology , Mental Recall/physiology , Photic Stimulation/methods , Recognition, Psychology/physiology , Adolescent , Adult , Female , Humans , Male , Memory/physiology , Young Adult
6.
Mem Cognit ; 45(1): 151-167, 2017 01.
Article in English | MEDLINE | ID: mdl-27531139

ABSTRACT

According to the documents model framework (Britt, Perfetti, Sandak, & Rouet, 1999), readers' detection of contradictions within texts increases their integration of source-content links (i.e., who says what). This study examined whether conflict may also strengthen the links between the sources themselves. In two experiments, participants read brief news reports containing two critical statements attributed to different sources. In half of the reports, the statements were consistent with each other, whereas in the other half they were discrepant. Participants were tested for source memory and source integration in an immediate item-recognition task (Experiment 1) and a cued recall task (Experiments 1 and 2). In both experiments, discrepancies increased readers' memory for sources. On a delayed recall measure, a discrepant source enhanced retrieval of the other source relative to consistent sources (Experiments 1 and 2); however, a discrepant source failed to prime the other source on an online recognition measure (Experiment 1). We argue that discrepancies promoted the construction of links between sources, but that this integration did not take place during reading.


Subject(s)
Conflict, Psychological , Cues , Memory, Short-Term/physiology , Mental Recall/physiology , Recognition, Psychology/physiology , Adult , Female , Humans , Male , Young Adult
7.
J Exp Psychol Learn Mem Cogn ; 43(3): 350-368, 2017 03.
Article in English | MEDLINE | ID: mdl-27775404

ABSTRACT

As speakers interact, they add references to their common ground, which they can then reuse to facilitate listener comprehension. However, not all references are equally likely to be reused. The purpose of this study was to shed light on how speakers' conceptualizations of the referents under discussion affect reuse (along with the generation effect in memory documented in previous studies on dialogic reuse). Two experiments were conducted in which participants interactively added references to their common ground. From each participant's point of view, these references either did or did not match their own conceptualization of the referents discussed, and were either self- or partner-generated. Although self-generated references were more readily accessible in memory than partner-generated ones (Experiment 1), reference reuse was mainly guided by conceptualization (Experiment 2). These results are in line with the idea that several different cues (conceptual match, memory accessibility) constrain reference reuse in dialogue.


Subject(s)
Comprehension , Concept Formation/physiology , Memory/physiology , Pattern Recognition, Visual/physiology , Adolescent , Female , Humans , Male , Photic Stimulation , Young Adult
8.
Top Cogn Sci ; 8(4): 796-818, 2016 10.
Article in English | MEDLINE | ID: mdl-27541074

ABSTRACT

During dialog, references are presented, accepted, and potentially reused (depending on their accessibility in memory). Two experiments were conducted to examine reuse in a naturalistic setting (a walk in a familiar environment). In Experiment 1, where participants interacted face to face, self-presented references and references accepted through verbatim repetition were reused more; these biases persisted after the end of the interaction. In Experiment 2, where participants interacted over the phone, reference reuse mainly depended on whether the participant could see the landmarks being referred to, although this bias seemed to be only transient. Consistent with the memory-based approach to dialog, these results shed light on how differences in memory accessibility (due to how references were initially added to the common ground, or to the communication medium used) affect the unfolding of the interaction.


Subject(s)
Language , Adolescent , Adult , Association Learning , Female , Humans , Male , Telephone , Young Adult
9.
Dev Sci ; 19(6): 1087-1094, 2016 11.
Article in English | MEDLINE | ID: mdl-26690306

ABSTRACT

The association of colour with emotion constitutes a growing field of research, as it can affect how humans process their environment. Although there has been increasing interest in the association of red with negative valence in adults, little is known about how it develops. We therefore tested the red-negative association in children for the first time. Children aged 5-10 years performed a face categorization task in the form of a card-sorting task. They had to judge whether ambiguous faces shown against three different colour backgrounds (red, grey, green) seemed to 'feel good' or 'feel bad'. Results of logistic mixed models showed that - as previously demonstrated in adults - children across the age range provided significantly more 'feel bad' responses when the faces were given a red background. This finding is discussed in relation to colour-emotion association theories.


Subject(s)
Color , Emotions/physiology , Association , Child , Child, Preschool , Face , Female , Humans , Male
10.
Front Psychol ; 6: 322, 2015.
Article in English | MEDLINE | ID: mdl-25852625

ABSTRACT

In recent years, researchers have become interested in the way that the affective quality of contextual information transfers to a perceived target. We therefore examined the effect of a red (vs. green, mixed red/green, and achromatic) background, known to be valenced, on the processing of stimuli that play a key role in human interactions, namely facial expressions. We also examined whether this valenced-color effect can be modulated by gender, which is also known to be valenced. Female and male adult participants performed a categorization task on facial expressions of emotion, in which the faces of female and male posers expressing two ambiguous emotions (i.e., neutral and surprise) were presented against the four different colored backgrounds. This task was complemented by subjective ratings of each colored background on five semantic differential scales corresponding to both discrete and dimensional perspectives on emotion. We found that the red background resulted in more negative face perception than the green background, whether the poser was female or male. However, whereas this valenced-color effect was the only effect for female posers, for male posers the effect was modulated by both the nature of the ambiguous emotion and the decoder's gender. Overall, our findings offer evidence that color and gender share a common valence-based dimension.

11.
Exp Psychol ; 62(2): 98-109, 2015.
Article in English | MEDLINE | ID: mdl-25384639

ABSTRACT

In many visual displays such as virtual environments, human tasks involve objects superimposed on both complex and moving backgrounds. However, most studies investigated the influence of background complexity or background motion in isolation. Two experiments were designed to investigate the joint influences of background complexity and lateral motion on a simple shooting task typical of video games. Participants had to perform the task on the moving and static versions of backgrounds of three levels of complexity, while their eye movements were recorded. The backgrounds displayed either an abstract (Experiment 1) or a naturalistic (Experiment 2) virtual environment. The results showed that performance was impaired by background motion in both experiments. The effects of motion and complexity were additive for the abstract background and multiplicative for the naturalistic background. Eye movement recordings showed that performance impairments reflected at least in part the impact of the background visual features on gaze control.


Subject(s)
Eye Movements/physiology , Motion Perception/physiology , Pattern Recognition, Visual/physiology , Perceptual Masking , Video Games , Adult , Attention/physiology , Discrimination, Psychological , Female , Humans , Male , Photic Stimulation/methods , Task Performance and Analysis
12.
J Exp Psychol Learn Mem Cogn ; 41(2): 574-85, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24999705

ABSTRACT

Not all pieces of information mentioned during an interaction are equally accessible in speakers' conversational memory. The current study sought to test whether 2 basic features of dialogue management (reference acceptance and reuse) affect reference recognition. Dyads of speakers were asked to discuss a route for an imaginary person, thus referring to the landmarks to be encountered. The results revealed that the participants' conversational memory for the references produced during the interaction depended on whether these had been reused during the interaction and by whom, along with landmark visibility during the interaction. These findings have implications for partner adaptation in dialogue, which depends in part on what speakers remember of past interactions.


Subject(s)
Adaptation, Psychological , Communication , Interpersonal Relations , Memory , Visual Perception , Adult , Female , Humans , Male , Neuropsychological Tests , Speech , Young Adult
13.
PLoS One ; 9(8): e104291, 2014.
Article in English | MEDLINE | ID: mdl-25098167

ABSTRACT

There is a growing body of literature showing that color can convey information, owing to its emotionally meaningful associations. Most research so far has focused on negative hue-meaning associations (e.g., red), with the exception of the positive aspects associated with green. We therefore set out to investigate the positive associations of two colors (i.e., green and pink), using an emotional facial expression recognition task in which color provided the emotional contextual information for face processing. In two experiments, green and pink backgrounds enhanced happy face recognition and impaired sad face recognition, compared with a control color (gray). Our findings therefore suggest that because green and pink both convey positive information, they facilitate the processing of emotionally congruent facial expressions (i.e., faces expressing happiness) and interfere with the processing of incongruent ones (i.e., faces expressing sadness). The data also revealed a positive association for white. Results are discussed within the theoretical framework of emotional cue processing and color meaning.


Subject(s)
Color , Emotions , Eyeglasses , Facial Expression , Models, Biological , Visual Perception , Adolescent , Adult , Female , Humans
14.
Psychon Bull Rev ; 21(6): 1590-9, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24671777

ABSTRACT

Words that are produced aloud, and especially self-produced ones, are remembered better than words that are not, a phenomenon labeled the production effect in the field of memory research. Two experiments were conducted to determine whether this effect can be generalized to dialogue, and how it might affect dialogue management. Triads (Exp. 1) or dyads (Exp. 2) of participants interacted to perform a collaborative task. Analyzing reference reuse during the interaction revealed that the participants were more likely to reuse the references that they had presented themselves, on the one hand, and those that had been accepted through verbatim repetition, on the other. Analyzing reference recall suggested that the greater accessibility of self-presented references was only transient. Moreover, among partner-presented references, those discussed while the participant had actively taken part in the conversation were more likely to be recalled than those discussed while the participant had been inactive. These results contribute to a better understanding of how individual memory processes might contribute to collaborative dialogue.


Subject(s)
Cooperative Behavior , Mental Recall/physiology , Verbal Behavior/physiology , Adult , Humans , Speech/physiology , Speech Production Measurement , Young Adult
15.
PLoS One ; 8(12): e83657, 2013.
Article in English | MEDLINE | ID: mdl-24349539

ABSTRACT

Previous research has suggested that children do not rely on prosody to infer a speaker's emotional state because of biases toward lexical content or situational context. We hypothesized that there are actually no such biases and that young children simply have trouble using emotional prosody. Sixty children from 5 to 13 years of age had to judge the emotional state of a happy or sad speaker and then verbally explain their judgment. Lexical content and situational context were devoid of emotional valence. Results showed that prosody alone did not enable children to infer emotions at age 5 and was still not fully mastered at age 13. Instead, children relied on contextual information, despite the fact that this cue had no emotional valence. These results support the hypothesis that prosody is difficult for young children to interpret and that, up until adolescence, this cue plays only a subordinate role in inferring others' emotions.


Subject(s)
Child Development/physiology , Emotions/physiology , Speech/physiology , Adolescent , Adult , Child , Child, Preschool , Female , Humans , Male
16.
Ergonomics ; 56(12): 1863-76, 2013.
Article in English | MEDLINE | ID: mdl-24168472

ABSTRACT

The visual interfaces of virtual environments such as video games often show scenes where objects are superimposed on a moving background. Three experiments were designed to better understand how the complexity and/or overall motion of two types of visual backgrounds often used in video games affect the detection and use of superimposed, stationary items. The impact of background complexity and motion was assessed during two typical video game tasks: a relatively complex visual search task and a classic, less demanding shooting task. Background motion impaired participants' performance only when they performed the shooting task, and only when the simpler of the two backgrounds was used. In contrast, and independently of background motion, performance on both tasks was impaired when the complexity of the background increased. Eye movement recordings demonstrated that most of the findings reflected the impact of low-level features of the two backgrounds on gaze control.


Subject(s)
Motion , Nystagmus, Optokinetic/physiology , Task Performance and Analysis , Video Games , Adolescent , Female , Humans , Male , Pattern Recognition, Visual , Perceptual Masking , Photic Stimulation , Young Adult
17.
Hum Factors ; 53(2): 103-17, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21702329

ABSTRACT

OBJECTIVE: Two experiments were conducted to investigate elements of the spatial design of video game interfaces. BACKGROUND: In most video games, both the objects and the background scene are moving. Players must pay attention to what appears in the background to anticipate events while also looking at head-up displays. According to the proximity-compatibility principle, game-related information should be placed as close as possible to the anticipation zone. METHOD: Participants played a video game in which they had to anticipate the upward movement of obstacles. The location of the score display was manipulated. Average vertical gaze position and gaze dispersion were used to assess anticipation and the extent of visual scanning, respectively. RESULTS: Putting the score at the bottom rather than the top of the game window, within the anticipation zone, was expected to minimize attentional moves. Experiment 1 revealed lower average gaze positions and a reduced extent of visual scanning in that condition, but players' scores did not improve significantly. Experiment 2 demonstrated that players' performance increased relative to the bottom condition when the score was displayed just below but outside the game window, despite an increased extent of visual scanning. CONCLUSION: Positioning the score just outside the anticipation zone facilitated anticipation of the movement of obstacles and led to better performance than when the score overlapped with the anticipation zone. APPLICATION: For games requiring visual anticipation, contextual information should be located in the direction of anticipation but not within the anticipation zone. This recommendation complements the proximity-compatibility principle for simple dynamic displays.


Subject(s)
Anticipation, Psychological , Data Display , Video Games , Adult , Eye Movement Measurements , Female , Humans , Male , Play and Playthings , Software Design , Young Adult
18.
J Speech Lang Hear Res ; 53(6): 1629-41, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20705750

ABSTRACT

PURPOSE: This study aimed to determine the role of prosody and situational context in children's understanding of expressive utterances. Which of these 2 cues helps children grasp the speaker's intention? Do children exhibit a "contextual bias" whereby they ignore prosody, analogous to the "lexical bias" found in other studies (M. Friend & J. Bryant, 2000)? METHOD: In the first experiment, a group of 5- to 9-year-old children and a group of adults performed a computerized judgment task. They had to determine the speaker's intention on the basis of an utterance produced with a particular prosody (positive or negative) in a particular situational context (positive or negative). In the second experiment, the same prosodic utterances were presented to 5- to 9-year-old children without a situational context. RESULTS: The 5- and 7-year-old children relied primarily on situational context, in contrast to adults, who relied on prosody. The 9-year-olds relied on both cues (Experiment 1). When prosody was the sole cue (Experiment 2), all children relied on it to infer the speaker's intention. CONCLUSIONS: The results are discussed and integrated into a larger conceptual framework that includes research on lexical bias and sarcasm.


Subject(s)
Emotions/physiology , Speech Acoustics , Speech Perception/physiology , Verbal Behavior/physiology , Adult , Child , Child Language , Child, Preschool , Cues , Female , Humans , Language , Male , Young Adult
19.
Ergonomics ; 53(1): 43-55, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20069480

ABSTRACT

The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants familiar with the system had to find restaurants based on simple or difficult scenarios, using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal), or text-only (web) modality. The linguistic content of the dialogues differed as a function of modality, but was similar whether or not textual feedback was included in the spoken condition. These results add to burgeoning research efforts on multimodal feedback, suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback benefits human-computer interaction in tasks that depend on the visual sense; here, the addition of textual output was investigated in a situation where the spoken modality is heavily taxed by the task.


Subject(s)
Natural Language Processing , User-Computer Interface , Verbal Behavior , Adult , Communication , Communication Aids for Disabled , Feasibility Studies , Female , Humans , Internet , Male , Reading , Young Adult
20.
Hum Factors ; 49(6): 1045-53, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18074703

ABSTRACT

OBJECTIVE: This study examined the effects of user production mode (speaking vs. typing) and user reception mode (listening vs. reading) on natural language human-computer dialogue. BACKGROUND: Text-based dialogue is often more efficient than speech-based dialogue, but the latter is more dynamic and more suitable for mobile environments and hands-busy situations. The respective contributions of user production and reception modes have not previously been assessed. METHOD: Eighteen participants performed several information search tasks using a natural language information system in four experimental conditions: phone (speaking and listening), Web (typing and reading), and two mixed conditions (speaking and reading, or typing and listening). RESULTS: Mental workload was greater and participants repeated commands more frequently when speech (speaking and listening) was used for both the production and reception modes than when text (typing and reading) was used. Completion times were longer for listening than for reading. Satisfaction was lower, utterances were longer, and the interaction error rate was higher for speaking than for typing. CONCLUSION: The production and reception modes both contribute to dialogue and mental workload, and they make distinct contributions to performance, satisfaction, and the form of the discourse. APPLICATION: The most efficient configuration for interacting in natural language would appear to be speech for user production and text for system prompts, as this combination decreases time on task while improving dialogue involvement.


Subject(s)
Natural Language Processing , User-Computer Interface , Writing , Adult , Efficiency , Female , France , Humans , Male , Middle Aged , Surveys and Questionnaires , Task Performance and Analysis