Results 1 - 20 of 21
1.
Acta Psychol (Amst); 229: 103713, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35988301

ABSTRACT

It is generally assumed that someone's affective state can be correctly detected and interpreted by other people, and nowadays even by computer algorithms, in their writing. However, it is unclear whether these perceptions match the actual experience and communicative intention of the author. Therefore, we investigated the relation between affect expression and perception in text in a two-part study. In Part 1, participants (authors) wrote about emotional experiences according to four combinations of two appraisals (High/Low Pleasantness, High/Low Control), rated the valence of each text, and annotated words using 22 emotions. In Part 2, another group of participants (readers) rated and annotated the same texts. We also compared the human evaluations to those provided by computerized text analysis. Results show that valence differed across conditions and that authors rated and annotated their texts differently than readers did. Although the automatic analysis detected levels of positivity and negativity across conditions similar to human valence ratings, it relied on fewer and different words to do so. We discuss implications for affective science and automatic sentiment analysis.


Subject(s)
Emotions, Language, Humans
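
For context on this entry, the "computerized text analysis" compared against human ratings is typically lexicon-based. Below is a minimal Python sketch of that idea, scoring a text by averaging word valences; the mini-lexicon, example sentence, and function name are illustrative assumptions, not the study's actual tool or materials.

```python
# Minimal lexicon-based valence scoring (illustrative; not the study's tool).
VALENCE_LEXICON = {
    "happy": 2.0, "pleasant": 1.5, "calm": 1.0,
    "sad": -2.0, "angry": -2.5, "helpless": -1.5,
}

def valence_score(text: str) -> float:
    """Average valence over the words the lexicon recognizes (0.0 if none)."""
    hits = [VALENCE_LEXICON[w] for w in text.lower().split() if w in VALENCE_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(valence_score("I felt happy and calm during the pleasant afternoon"))  # 1.5
```

Note how such a scorer relies on only the few words its lexicon recognizes, consistent with the abstract's observation that the automatic analysis used fewer and different words than human raters.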
2.
Eur J Soc Psychol; 51(7): 1198-1212, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35910663

ABSTRACT

To date, there has been no systematic examination of cross-cultural differences in group-based shame, guilt, and regret following wrongdoing. Using a community sample (N = 1358), we examined people's reported experiences of shame, guilt, and regret following transgressions by themselves and by different identity groups (i.e., family, community, country) in Burkina Faso, Costa Rica, Indonesia, Japan, Jordan, the Netherlands, Poland, and the United States. We assessed whether any variation in this regard can be explained by the relative endorsement of individualistic or collectivistic values at the individual level and at the country level. Our findings suggest that people's reported experience of these emotions mostly depends on the transgression level. We also observe some variation across individuals and countries, which can be partially explained by the endorsement of collectivistic and individualistic values. The results highlight the importance of taking into account individual and cultural values when studying group-based emotions, as well as the identity groups involved in the transgression.

3.
J Exp Psychol Hum Percept Perform; 46(10): 1164-1182, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32658540

ABSTRACT

What are the mechanisms responsible for spontaneous cospeech gesture production? Motivated by the close connection between cospeech gestures and object-related actions, recent research suggests that cospeech gestures originate in perceptual and motoric simulations that occur while speakers process information for speaking (Hostetter & Alibali, 2008). Here, we test this claim by highlighting object affordances during a communication task, inspired by the classic stimulus-response compatibility paradigm of Tucker and Ellis (1998). We compared cospeech gestures in situations where target objects were oriented toward the speakers' dominant hand (grasping potential enhanced) with situations where they were oriented toward the nondominant hand. Before the main experiment, we attempted to replicate Tucker and Ellis's (1998) Experiment 1 to (re)establish the effect of stimulus compatibility, using contemporary items. Contrary to expectations, we could not replicate the original findings. Furthermore, consistent with our replication results, the gesture data showed that enhancing grasping potential did not increase the number of cospeech gestures produced. Vertical orientation nevertheless did, with upright objects eliciting more cospeech gestures than inverted ones, which does suggest a relation between affordance and gesture production. Our results challenge the automaticity of affordance effects, both in a classic stimulus-response compatibility experiment and in a more interactive dialogue setting, and suggest that previous findings on cospeech gestures emerge from thinking and communicating about action-evoking content rather than from the affordance-compatibility of the presented objects.


Subject(s)
Gestures, Psychomotor Performance/physiology, Space Perception/physiology, Adolescent, Adult, Female, Humans, Male, Young Adult
4.
PLoS One; 15(5): e0233592, 2020.
Article in English | MEDLINE | ID: mdl-32469910

ABSTRACT

In this paper, we study the effect of verbalizing affective pictures on affective state and language production. Individuals described (Study I: Spoken Descriptions of Pictures) or passively viewed (Study II: Passively Viewing Pictures) 40 pictures from the International Affective Picture System (IAPS) that gradually increased from neutral to either positive or negative content. We expected that both methods would result in successful affect induction, and that the effect would be stronger for verbally describing pictures than for passively viewing them. Results indicate that speakers indeed felt more negative after describing negative pictures, but that describing positive (compared to neutral) pictures did not result in a more positive state. Contrary to our hypothesis, no differences were found between describing and passively viewing the pictures. Furthermore, we analysed the verbal picture descriptions produced by participants on various dimensions. Results indicate that positive and negative pictures were indeed described with increasingly more affective language in the expected directions. In addition to informing our understanding of the relationship between (spoken) language production and affect, these results also potentially pave the way for a new method of affect induction that uses free expression.


Subject(s)
Affect, Language, Photic Stimulation, Adult, Humans, Male, Pattern Recognition, Visual, Young Adult
6.
Lang Speech; 63(4): 856-876, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31888403

ABSTRACT

Speech perception is a multisensory process: what we hear can be affected by what we see. For instance, the McGurk effect occurs when auditory speech is presented in synchrony with discrepant visual information. A large number of studies have targeted the McGurk effect at the segmental level of speech (mainly consonant perception), which tends to be visually salient (lip-reading based), while the present study aims to extend the existing body of literature to the suprasegmental level, that is, to investigate a McGurk effect for the identification of tones in Mandarin Chinese. Previous studies have shown that visual information does play a role in Chinese tone perception, and that the different tones correlate with variable movements of the head and neck. We constructed various tone combinations of congruent and incongruent auditory-visual materials (10 syllables with 16 tone combinations each) and presented them to native speakers of Mandarin Chinese and speakers of tone-naïve languages. In line with our previous work, we found that tone identification varied with individual tones, with tone 3 (the low-dipping tone) being the easiest to identify and tone 4 (the high-falling tone) the most difficult. We found that both groups of participants mainly relied on auditory input (rather than visual input), and that this auditory reliance was even stronger for the Chinese subjects. The results showed no evidence for auditory-visual integration among native participants, whereas visual information was helpful for tone-naïve participants. However, even for this group, visual information only marginally increased accuracy in the tone identification task, and this increase depended on the tone in question.


Subject(s)
Acoustic Stimulation, Language, Photic Stimulation, Speech Perception, Timbre Perception, Adult, Asian People/psychology, Female, Humans, Male, Phonetics
7.
PLoS One; 14(5): e0217419, 2019.
Article in English | MEDLINE | ID: mdl-31125388

ABSTRACT

Our affective state is influenced by daily events and our interactions with other people, which, in turn, can affect the way we communicate. In two studies, we investigated the influence of experiencing success or failure in a foosball (table soccer) game on participants' affective state and how this in turn influenced the way they reported on the game itself. Winning or losing a match can further influence how participants view their own team (compared to the opponent), which may also impact how they report on the match. In Study 1, we explored this by having participants play foosball matches in two dyads. They subsequently reported their affective state and team cohesiveness, and wrote two match reports, one from their own and one from their opponent's perspective. Indeed, while the game generally improved participants' moods, winning in particular made them happier and more excited, and losing made them more dejected, both in questionnaires and in the reports, which were analyzed with a word count tool. Study 2 experimentally investigated the effect of affective state on focus and distancing behavior. After the match, participants chose between preselected sentences (from Study 1) that differed in focus (mentioning their own vs. the other team) or distancing (using we vs. the team name). Results show an effect for focus: winning participants preferred sentences that described their own performance positively, while losing participants chose sentences that praised their opponent over negative sentences about themselves. No effect of distancing in pronoun use was found: winning and losing participants did not differ in their preference for we versus their own team name. We discuss the implications of our findings with regard to models of language production, the self-serving bias, and the use of games to induce emotions in a natural way.


Subject(s)
Competitive Behavior, Emotions, Language, Achievement, Adult, Affect, Female, Games, Recreational, Humans, Male, Soccer, Young Adult
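
For context on the word count analysis mentioned in this entry: closed-vocabulary tools count how many words of a text fall into predefined emotion categories. A small Python sketch of that procedure follows; the category dictionaries and the example sentence are invented for illustration and are not the actual tool's dictionaries.

```python
# Category word counting in the style of closed-vocabulary text analysis tools.
from collections import Counter
import re

CATEGORIES = {
    "positive_emotion": {"happy", "excited", "great", "win", "won"},
    "negative_emotion": {"dejected", "sad", "lost", "bad"},
    "we_words": {"we", "us", "our"},
}

def category_rates(text: str) -> dict:
    """Percentage of words in each category, per 100 words of the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for w in words:
        for cat, vocab in CATEGORIES.items():
            if w in vocab:
                counts[cat] += 1
    total = len(words) or 1
    return {cat: 100 * counts[cat] / total for cat in CATEGORIES}

print(category_rates("We won and we are happy, our opponents looked dejected"))
```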
8.
Phonetica; 76(4): 263-286, 2019.
Article in English | MEDLINE | ID: mdl-30086551

ABSTRACT

Although the way tones are acquired by second or foreign language learners has attracted some scholarly attention, detailed knowledge of the factors that promote efficient learning is lacking. In this article, we examine the effect of visual cues (comparing audio-only with audio-visual presentations) and speaking style (comparing a natural speaking style with a teaching speaking style) on the perception of Mandarin tones by non-native listeners, considering both the relative strength of these two factors and their possible interactions. Both the accuracy and reaction time of the listeners were measured in a tone identification task. Results showed that participants in the audio-visual condition distinguished tones more accurately than participants in the audio-only condition. Interestingly, this varied as a function of speaking style, but only for stimuli from specific speakers. Additionally, some tones (notably tone 3) were recognized more quickly and accurately than others.

9.
Front Psychol; 9: 2077, 2018.
Article in English | MEDLINE | ID: mdl-30455653

ABSTRACT

We investigate whether smile mimicry and emotional contagion are evident in non-text-based computer-mediated communication (CMC). Via an ostensibly real-time audio-visual CMC platform, participants interacted with a confederate who either smiled radiantly or displayed a neutral expression throughout the interaction. Automatic analyses of the expressions displayed by participants indicated that smile mimicry was at play: a higher level of activation of the facial muscle that characterizes genuine smiles was observed among participants who interacted with the smiling confederate than among those who interacted with the unexpressive confederate. However, there was no difference in the self-reported level of joviality between participants in the two conditions. Our findings demonstrate that people mimic smiles in audio-visual CMC; however, even though the diffusion of emotions has been documented in text-based CMC in previous studies, we found no convincing support for the phenomenon of emotional contagion in non-text-based CMC.

10.
J Nonverbal Behav; 41(4): 367-394, 2017.
Article in English | MEDLINE | ID: mdl-29104335

ABSTRACT

In face-to-face communication, speakers typically integrate information acquired through different sources, including what they see and what they know, into their communicative messages. In this study, we asked how these different input sources influence the frequency and type of iconic gestures produced by speakers during a communication task, under two degrees of task complexity. Specifically, we investigated whether speakers gestured differently when they had to describe an object presented to them as an image or as a written word (input modality) and, additionally, when they were allowed to explicitly name the object or not (task complexity). Our results show that speakers produced more gestures when they attended to a picture. Further, speakers more often depicted shape information gesturally when attending to an image, and more often demonstrated the function of an object when attending to a word. However, when we increased the complexity of the task by forbidding speakers to name the target objects, these patterns disappeared, suggesting that speakers may have strategically adapted their use of iconic strategies to better meet the task's goals. Our study also revealed (independent) effects of object manipulability on the type of gestures produced by speakers and, in general, it highlighted a predominance of molding and handling gestures. These gestures may reflect stronger motoric and haptic simulations, lending support to activation-based accounts of gesture production.

11.
J Nonverbal Behav; 41(1): 67-82, 2017.
Article in English | MEDLINE | ID: mdl-28203037

ABSTRACT

We examined the effects of social and cultural contexts on smiles displayed by children during gameplay. Eight-year-old Dutch and Chinese children either played a game alone or teamed up to play in pairs. Activation and intensity of the facial muscles corresponding to Action Unit (AU) 6 and AU 12 were coded according to the Facial Action Coding System. Co-occurrence of activation of AU 6 and AU 12, suggesting the presence of a Duchenne smile, was more frequent among children who teamed up than among children who played alone. Analyses of the intensity of smiles revealed an interaction between social and cultural contexts. Whereas smiles, both Duchenne and non-Duchenne, displayed by Chinese children who teamed up were more intense than those displayed by Chinese children who played alone, the effect of sociality on smile intensity was not observed for Dutch children. These findings suggest that the production of smiles by children in a competitive context is susceptible to both social and cultural factors.

12.
Cogn Sci; 41 Suppl 6: 1493-1514, 2017 May.
Article in English | MEDLINE | ID: mdl-27322921

ABSTRACT

It has often been observed that color is a highly preferred attribute for use in distinguishing descriptions, that is, referring expressions produced with the purpose of identifying an object within a visual scene. However, most of these observations were based on visual displays containing only colors that were maximally different in hue and for which the language of experimentation possessed basic color terms. The experiments described in this paper investigate whether speakers' preference for color is reduced if the color of the target referent is similar to that of the distractors. Because colors that look similar are often also harder to distinguish linguistically, we also examine the impact of the codability of color values. As a third factor, we investigate the salience of available alternative attributes and its impact on the use of color. The results of our experiments show that, while speakers are indeed less likely to use color when the colors in a display are similar, this effect is mostly due to the difficulty of naming similar colors. Color use for colors with a basic color term is affected only when the colors of target and distractors are very similar (yet still distinguishable). The salience of our alternative attribute, size, manipulated by varying the difference in size between target and distractors, had no impact on the use of color.


Subject(s)
Color Perception/physiology, Color, Language, Speech/physiology, Adult, Female, Humans, Male, Psycholinguistics, Young Adult
13.
Top Cogn Sci; 8(4): 819-836, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27529672

ABSTRACT

Past research has sought to elucidate how speakers and addressees establish common ground in conversation, yet few studies have focused on how visual cues such as co-speech gestures contribute to this process. Likewise, the effect of cognitive constraints on multimodal grounding remains to be established. This study addresses the relationship between the verbal and gestural modalities during grounding in referential communication. We report data from a collaborative task where repeated references were elicited, and a time constraint was imposed to increase cognitive load. Our results reveal no differential effects of repetition or cognitive load on the semantic-based gesture rate, suggesting that representational gestures and speech are closely coordinated during grounding. However, gestures and speech differed in their execution, especially under time pressure. We argue that speech and gesture are two complementary streams that might be planned in conjunction but that unfold independently in later stages of language production, with speakers emphasizing the form of their gestures, but not of their words, to better meet the goals of the collaborative task.


Subject(s)
Gestures, Speech, Cognition, Female, Humans, Male, Young Adult
14.
Lang Cogn Neurosci; 31(3): 430-440, 2016 Mar 15.
Article in English | MEDLINE | ID: mdl-27226970

ABSTRACT

Hand gestures are tightly coupled with speech and with action. Hence, recent accounts have emphasised the idea that simulations of spatio-motoric imagery underlie the production of co-speech gestures. In this study, we suggest that action simulations directly influence the iconic strategies used by speakers to translate aspects of their mental representations into gesture. Using a classic referential paradigm, we investigate how speakers respond gesturally to the affordances of objects by comparing the effect on gesture production of describing objects that afford action performance (such as tools) with that of describing objects that do not. Our results suggest that affordances play a key role in determining the number of representational (but not non-representational) gestures produced by speakers, and the techniques chosen to depict such objects. To our knowledge, this is the first study to systematically show a connection between object characteristics and representation techniques in spontaneous gesture production during the depiction of static referents.

15.
Iperception; 6(5): 0301006615599139, 2015 Oct.
Article in English | MEDLINE | ID: mdl-27648210

ABSTRACT

In cochlear implants (CIs), acoustic speech cues, especially for pitch, are delivered in a degraded form. This study's aim was to assess whether, owing to these degraded pitch cues, normal-hearing listeners and CI users employ different perceptual strategies to recognize vocal emotions and, if so, how these strategies differ. Voice actors were recorded pronouncing a nonce word in four different emotions: anger, sadness, joy, and relief. The pitch cues in these recordings were phonetically analyzed, and the recordings were used to test emotion recognition in 20 normal-hearing listeners and 20 CI users. Consistent with previous studies, high-arousal emotions had a higher mean pitch, a wider pitch range, and more dominant pitches than low-arousal emotions. With regard to pitch, speakers differentiated emotions based on arousal rather than valence. Normal-hearing listeners outperformed CI users in emotion recognition, even when presented with CI-simulated stimuli. However, only normal-hearing listeners recognized one particular actor's emotions worse than the other actors'. The two groups thus behaved differently when presented with similar input, indicating that they employed different strategies. Given this actor's deviating pronunciation, it appears that mean pitch is a more salient cue for normal-hearing listeners, whereas CI users are biased toward pitch range cues.
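
To make the pitch cues in this entry concrete (mean pitch, pitch range), the following Python sketch summarizes an F0 contour; the contours shown are hypothetical stand-ins for pitch-tracker output, not data from the study.

```python
# Summarizing pitch cues from an F0 contour (hypothetical data, not the study's).
import numpy as np

def pitch_cues(f0_hz: np.ndarray) -> dict:
    """Mean pitch and pitch range over voiced frames (F0 > 0), in Hz."""
    voiced = f0_hz[f0_hz > 0]
    return {"mean_pitch": float(voiced.mean()),
            "pitch_range": float(voiced.max() - voiced.min())}

# High-arousal emotions (e.g., anger) should show a higher mean and wider range
# than low-arousal emotions (e.g., sadness), as the abstract reports.
anger_like = np.array([0.0, 220.0, 260.0, 310.0, 290.0, 0.0, 250.0])
sadness_like = np.array([0.0, 180.0, 190.0, 185.0, 175.0, 0.0])
print(pitch_cues(anger_like), pitch_cues(sadness_like))
```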

16.
Cogn Emot; 28(5): 936-946, 2014.
Article in English | MEDLINE | ID: mdl-24350613

ABSTRACT

In two studies, the robustness of anger recognition from bodily expressions was tested. In the first study, video recordings of an actor expressing four distinct emotions (anger, despair, fear, and joy) were structurally manipulated with respect to image impairment and body segmentation. The results show that anger recognition is more robust to image impairment and body segmentation than recognition of the other emotions. Moreover, the study showed that arms expressing anger were more robustly recognised than arms expressing other emotions. Study 2 added face blurring as a variable to the bodily expressions and showed that it decreased emotion recognition accuracy, though more so for recognition of joy and despair than for anger and fear. In sum, the paper demonstrates the robustness of anger recognition across multiple levels of degraded bodily expressions.


Subject(s)
Anger/physiology, Expressed Emotion/physiology, Nonverbal Communication/psychology, Recognition, Psychology/physiology, Adolescent, Adult, Facial Expression, Female, Humans, Male, Photic Stimulation/methods, Young Adult
17.
Cogn Sci; 37(2): 395-411, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23294102

ABSTRACT

This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly include a color attribute when the scene variation is high than when it is low (even if this leads to overspecified descriptions). We argue that these findings are problematic for existing algorithms that aim to automatically generate psychologically realistic target descriptions, such as the Incremental Algorithm, as these algorithms make use of a fixed preference order per domain and do not take visual scene variation into account.


Subject(s)
Color Perception, Color, Language, Cues, Female, Humans, Male, Photic Stimulation/methods, Psycholinguistics, Young Adult
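
The Incremental Algorithm criticized in this entry is easy to state in code, and the sketch below shows why it cannot produce the redundant color use reported above: an attribute is added only when it rules out at least one remaining distractor, following a fixed preference order. The toy domain and the simplified treatment of the type attribute are assumptions for illustration.

```python
# Sketch of the Incremental Algorithm's attribute-selection loop (simplified).
PREFERENCE_ORDER = ["type", "color", "size"]  # fixed, domain-dependent order

def incremental_algorithm(target: dict, distractors: list) -> dict:
    description = {}
    remaining = list(distractors)
    for attr in PREFERENCE_ORDER:
        value = target[attr]
        if any(d[attr] != value for d in remaining):  # attribute discriminates
            description[attr] = value
            remaining = [d for d in remaining if d[attr] == value]
        if not remaining:  # target uniquely identified; stop adding attributes
            break
    return description

target = {"type": "chair", "color": "red", "size": "large"}
distractors = [{"type": "chair", "color": "blue", "size": "large"},
               {"type": "fan", "color": "red", "size": "small"}]
print(incremental_algorithm(target, distractors))  # {'type': 'chair', 'color': 'red'}
```

Because the loop stops as soon as the target is distinguished, color is never included redundantly, regardless of how much variation the scene contains; this is exactly the insensitivity the experiments above demonstrate in human speakers.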
18.
Perception; 42(6): 642-657, 2013.
Article in English | MEDLINE | ID: mdl-24422246

ABSTRACT

Recent judgment studies have shown that people are able to attribute emotional states to others' bodily expressions fairly accurately. It is, however, not clear which movement qualities are salient, and how this applies to emotional gesture during speech-based interaction. In this study we investigated how the expression of emotions that vary on three major emotion dimensions (arousal, valence, and potency) affects the perception of dynamic arm gestures. Ten professional actors enacted 12 emotions in a scenario-based social interaction setting. Participants (N = 43) rated all emotional expressions, with muted sound and blurred faces, on six spatiotemporal characteristics of gestural arm movement that previous research has related to emotion (amount of movement, movement speed, force, fluency, size, and height/vertical position). Arousal and potency were found to be strong determinants of the perception of gestural dynamics, whereas the differences between positive and negative emotions were less pronounced. These results confirm the importance of arm movement in communicating major emotion dimensions and show that gesture forms an integral part of multimodal nonverbal emotion communication.


Subject(s)
Emotions, Gestures, Interpersonal Relations, Nonverbal Communication, Adolescent, Adult, Arousal, Attention, Female, Humans, Judgment, Male, Motion Perception, Power, Psychological, Young Adult
19.
Top Cogn Sci; 4(2): 269-289, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22389150

ABSTRACT

Psycholinguistic studies often look at the production of referring expressions in interactive settings, but so far few referring expression generation algorithms have been developed that are sensitive to earlier references in an interaction. Rather, such algorithms tend to rely on domain-dependent preferences for both content selection and linguistic realization. We present three experiments showing that humans may opt for dispreferred attributes and dispreferred modifier orderings when these have been primed in a preceding interaction (without speakers being consciously aware of this). In addition, we show that speakers are more likely to produce overspecified references, including dispreferred attributes (although minimal descriptions with preferred attributes would suffice), when these have been similarly primed.


Subject(s)
Concept Formation, Speech, Vocabulary, Adult, Algorithms, Female, Humans, Male, Natural Language Processing, Personal Satisfaction
20.
J Acoust Soc Am; 128(3): 1322-1336, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20815467

ABSTRACT

The important role of arousal in determining vocal parameters in the expression of emotion is well established. There is less evidence for the contribution of emotion dimensions such as valence and potency/control to vocal emotion expression. Here, an acoustic analysis of the newly developed Geneva Multimodal Emotional Portrayals corpus is presented to examine the role of dimensions other than arousal. This corpus contains twelve emotions that systematically vary with respect to valence, arousal, and potency/control. The emotions were portrayed by professional actors coached by a stage director. The extracted acoustic parameters were first compared with those obtained from a similar corpus [Banse and Scherer (1996). J. Pers. Soc. Psychol. 70, 614-636] and shown to largely replicate the earlier findings. Based on a principal component analysis, seven composite scores were calculated and used to determine the relative contribution of the respective vocal parameters to the emotional dimensions arousal, valence, and potency/control. The results show that although arousal dominates for many vocal parameters, it is possible to identify parameters, in particular spectral balance and spectral noise, that are specifically related to valence and potency/control.


Subject(s)
Arousal, Cues, Emotions, Speech Acoustics, Speech Perception, Voice, Female, Humans, Male, Principal Component Analysis, Speech Production Measurement, Time Factors, Verbal Behavior
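
As an illustration of the composite-score step in this entry, the sketch below reduces a matrix of acoustic parameters to a handful of principal-component scores; the random feature matrix and its dimensions are placeholders, not the corpus's actual parameters.

```python
# PCA-based composite scores from acoustic parameters (placeholder data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))   # 120 portrayals x 20 acoustic parameters (fake)

pca = PCA(n_components=7)        # seven composite scores, as in the abstract
scores = pca.fit_transform(X)    # shape (120, 7): one composite profile per portrayal

# The loadings indicate which acoustic parameters (e.g., spectral balance,
# spectral noise) drive each composite dimension; the composites can then be
# related to arousal, valence, and potency/control ratings.
print(scores.shape, pca.explained_variance_ratio_.round(2))
```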