1.
Risk Anal; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38742599

ABSTRACT

People typically use verbal probability phrases when discussing risks ("It is likely that this treatment will work"), in both written and spoken communication. When speakers are uncertain about risks, they can signal this uncertainty nonverbally by using prosodic cues, such as a rising, question-like intonation or a filled pause ("uh"). We experimentally studied the effects of these two prosodic cues on listeners' perceived speaker certainty and numerical interpretation of spoken verbal probability phrases. Participants (N = 115) listened to various verbal probability phrases that were uttered with a rising or falling global intonation and with or without a filled pause before the probability phrase. For each phrase, they gave a point estimate of their numerical interpretation in percentages and indicated how certain they thought the speaker was about the correctness of the probability phrase. Speakers were perceived as least certain when the verbal probability phrases were spoken with both prosodic uncertainty cues. Interpretation of verbal probability phrases varied widely across participants, especially when the speaker produced a rising intonation. Overall, high probability phrases (e.g., "very likely") were estimated as lower (and low probability phrases, such as "unlikely," as higher) when they were uttered with a rising intonation. The effects of filled pauses were less pronounced, as were the uncertainty effects for medium probability phrases (e.g., "probable"). These results stress the importance of nonverbal communication when verbally communicating risks and probabilities, for example, in the context of doctor-patient communication.

2.
Phonetica; 76(4): 263-286, 2019.
Article in English | MEDLINE | ID: mdl-30086551

ABSTRACT

Although the way tones are acquired by second or foreign language learners has attracted some scholarly attention, detailed knowledge of the factors that promote efficient learning is lacking. In this article, we examine the effects of visual cues (comparing audio-only with audio-visual presentations) and speaking style (comparing a natural speaking style with a teaching speaking style) on the perception of Mandarin tones by non-native listeners, assessing both the relative strength of these two factors and their possible interactions. Both the accuracy and reaction time of the listeners were measured in a tone identification task. Results showed that participants in the audio-visual condition distinguished tones more accurately than participants in the audio-only condition. Interestingly, this effect varied as a function of speaking style, but only for stimuli from specific speakers. Additionally, some tones (notably tone 3) were recognized more quickly and accurately than others.

3.
J Acoust Soc Am; 141(6): 4727, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28679274

ABSTRACT

This study examines the influence of the position of prosodic heads (accented syllables) and prosodic edges (prosodic word and intonational phrase boundaries) on the timing of head movements. Gesture movements and prosodic events tend to be temporally aligned in the discourse, the most prominent part of gestures typically being aligned with prosodically prominent syllables in speech. However, little is known about the impact of the position of intonational phrase boundaries on gesture-speech alignment patterns. Twenty-four Catalan speakers produced spontaneous (experiment 1) and semi-spontaneous head gestures with a confirmatory function (experiment 2), along with phrase-final focused words in different prosodic conditions (stress-initial, stress-medial, and stress-final). Results showed (a) that the scope of head movements is the associated focused prosodic word, (b) that the left edge of the focused prosodic word determines where the interval of gesture prominence starts, and (c) that the speech-anchoring site for the gesture peak (or apex) depends both on the location of the accented syllable and the distance to the upcoming intonational phrase boundary. These results demonstrate that prosodic heads and edges have an impact on the timing of head movements, and therefore that prosodic structure plays a central role in the timing of co-speech gestures.


Subjects
Cues (Psychology), Gestures, Head Movements, Language, Speech Acoustics, Speech Perception, Voice Quality, Adult, Female, Humans, Male, Time Factors, Young Adult
5.
Lang Speech; 57(Pt 4): 470-86, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25536844

ABSTRACT

A central problem in recent research on speech production concerns the extent to which speakers adapt their linguistic expressions to the needs of their addressees. It has been claimed that speakers sometimes leak information about objects that are visible only to them and not to their listeners. Previous research has taken only the occurrence of adjectives as evidence for the leakage of privileged information. The present study hypothesizes that leaked information is also encoded in the prosody of those adjectives. A production experiment elicited adjectives that leak information and adjectives that do not. An acoustic analysis and a prominence rating task showed that adjectives that leak information were uttered with a higher pitch and perceived as more prominent than adjectives that do not. Furthermore, a guessing task suggested that the adjectives' prosody relates to how listeners infer possible privileged information.


Subjects
Intention, Interpersonal Relations, Semantics, Speech Acoustics, Speech Perception, Speech Production Measurement, Verbal Behavior, Adolescent, Adult, Communication, Female, Humans, Male, Netherlands, Visual Pattern Recognition, Psycholinguistics, Sound Spectrography, Young Adult
6.
Lang Speech; 57(Pt 1): 86-107, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24754222

ABSTRACT

We studied the effect of two social settings (collaborative versus competitive) on the visual and auditory expressions of uncertainty by children in two age groups (8 and 11 years old). We conducted an experiment in which children played a quiz game in pairs, either collaborating or competing with each other. We found that the Feeling-of-Knowing of the 8-year-old children did not seem to be affected by the social setting, in contrast to that of the 11-year-old children. In addition, we labelled children's expressions in clips taken from the experiment for various visual and auditory features. We found that children used some of these features to signal uncertainty and that older children exhibited clearer cues than younger children. In a subsequent perception test, adults rated children's certainty in the clips used for labelling. It appeared that older children and children in competition expressed their confidence level more clearly than younger children and children in collaboration.


Subjects
Competitive Behavior, Cooperative Behavior, Child Psychology, Uncertainty, Verbal Behavior, Age Factors, Child, Child Language, Cues (Psychology), Female, Humans, Judgment, Male, Psychoacoustics, Social Behavior, Speech Perception
7.
J Acoust Soc Am; 134(3): 2182-96, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23967948

ABSTRACT

The present research investigates what drives the prosodic marking of contrastive information. For example, a typically developing speaker of a Germanic language like Dutch generally refers to a pink car as a "PINK car" (accented words in capitals) when a previously mentioned car was red. The main question addressed in this paper is whether contrastive intonation is produced with respect to the speaker's or (also) the listener's perspective on the preceding discourse. Furthermore, this research investigates the production of contrastive intonation by typically developing speakers and speakers with autism. The latter group is investigated because people with autism are argued to have difficulty accounting for another person's mental state, as well as problems in the production and perception of accentuation and pitch range. To this end, utterances with contrastive intonation are elicited from both groups and analyzed in terms of the function and form of prosody, using production and perception measures. Contrary to expectations, typically developing speakers and speakers with autism produce functionally similar contrastive intonation, as both groups account for both their own and their listener's perspective. However, typically developing speakers use a larger pitch range and are perceived as speaking more dynamically than speakers with autism, suggesting differences in their use of prosodic form.


Subjects
Pervasive Child Development Disorders/physiopathology, Phonetics, Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustics, Adolescent, Adolescent Development, Adult, Case-Control Studies, Pervasive Child Development Disorders/diagnosis, Pervasive Child Development Disorders/psychology, Female, Humans, Language Development, Male, Middle Aged, Sound Spectrography, Speech Production Measurement, Time Factors, Young Adult
8.
Lang Speech; 238309231217689, 2023 Dec 29.
Article in English | MEDLINE | ID: mdl-38156473

ABSTRACT

The current study investigates the average effect: the tendency for humans to find an averaged instance (of a face, bird, wristwatch, car, and so on) more attractive than an individual instance. The effect holds across cultures, despite varying conceptualizations of attractiveness. While much research has been conducted on the average effect in visual perception, much less is known about the extent to which this effect applies to language and speech. This study investigates the attractiveness of average speech rhythms in Dutch and Mandarin Chinese, two typologically different languages. This was tested in a series of perception experiments in each language, in which native listeners chose the more attractive of a pair of acoustically manipulated rhythms. For each language, two experiments were carried out to control for the potential influence of the acoustic manipulation on the average effect. The results confirm the average effect in both languages, and they do not exclude individual variation in the listeners' perception of attractiveness. The outcomes provide a new crosslinguistic perspective and give rise to alternative explanations of the average effect.

9.
J Acoust Soc Am; 132(4): 2616-24, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23039454

ABSTRACT

The current article describes research on whether the goodness of a particular speaking style correlates with the way speakers distribute pitch accents in their speech. Study 1 analyzed two Flemish newsreaders who, according to poll ratings, had previously been judged to be a good and a bad speaker, respectively. A perception study in which participants had to assess the quality of spoken paragraphs produced by either of the two speakers confirmed that one speaker was rated as significantly and consistently better than the other. An exploration of the accent distributions in those paragraphs showed that the accent distributions of the better speaker were more similar to those of a gold standard, i.e., the accent distributions predicted by two independent intonation experts. Study 2 compared synthetic versions of a selection of the paragraphs of study 1, generated by a Dutch text-to-speech system. It compared three otherwise identical versions of the texts whose accent distributions followed either the gold standard or the distributions observed in the productions of the two newsreaders. A perception study revealed that the versions based on the bad speaker were rated as significantly worse than the other versions. The two studies thus show that variation in accent distribution can indeed affect the way spoken texts are assessed in terms of their perceived quality.


Subjects
Language, Phonetics, Radio, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Analysis of Variance, Female, Humans, Male, Pitch Perception, Sound Spectrography, Speech Production Measurement, Time Factors
10.
Phonetica; 69(4): 216-30, 2012.
Article in English | MEDLINE | ID: mdl-24060967

ABSTRACT

Some dialogues are perceived as running more smoothly than others. To some extent that impression could be related to how well speakers adapt their prosody to each other. Adaptation in prosody can be signaled by the use of pitch accents that indicate how utterances are structurally related to those of the interlocutor (prosodic function) or by copying the interlocutor's prosodic features (prosodic form). The same acoustic features, such as pitch, are involved in both ways of adaptation. Further, function and form may require a different prosody for successful adaptation in certain discourse contexts. In this study we investigate to what extent interlocutors are perceived as good adapters, depending on whether the prosody of both speakers is functionally coherent or similar in form. This is done in two perception tests using prosodically manipulated dialogues. Results show that coherent functional prosody can be a cue for speaker adaptation and that this cue is more powerful than similarity in prosodic form.


Subjects
Physiological Adaptation/physiology, Communication, Phonetics, Speech Perception/physiology, Adult, Female, Humans, Male, Young Adult
11.
Front Artif Intell; 5: 835298, 2022.
Article in English | MEDLINE | ID: mdl-35434608

ABSTRACT

Different applications or contexts may require different settings for a conversational AI system; for example, a child-oriented system needs a different interaction style than a warning system used in emergency situations. The current article focuses on the extent to which a system's usability may benefit from variation in the personality it displays. To this end, we investigate whether variation in personality is signaled by differences in specific audiovisual feedback behavior, with a specific focus on embodied conversational agents. This article reports on two rating experiments in which participants judged the personalities (i) of human beings and (ii) of embodied conversational agents, where we were specifically interested in the role of variability in audiovisual cues. Our results show that personality perceptions of both humans and artificial communication partners are indeed influenced by the type of feedback behavior used. This knowledge could inform developers of conversational AI on how to include personality in their feedback behavior generation algorithms, which could enhance the perceived personality and in turn generate a stronger sense of presence for the human interlocutor.

12.
Lang Speech; 64(1): 3-23, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31957542

ABSTRACT

This paper presents the results of three perceptual experiments investigating the role of the auditory and visual channels in the identification of statements and echo questions in Brazilian Portuguese. Ten Brazilian speakers (five male) were video-recorded (frontal view of the face) while they produced a sentence ("Como você sabe"), either as a statement (meaning "As you know.") or as an echo question (meaning "As you know?"). Stimuli covering the two intonation contours were presented in conditions with clear and degraded audio, as well as with congruent and incongruent information from the two channels. Results show that Brazilian listeners were able to distinguish statements from questions both prosodically and visually, with auditory cues dominating visual ones. In noisy conditions, the visual channel robustly improved the interpretation of prosodic cues, but degraded it when the visual information was incongruent with the auditory information. This study shows that auditory and visual information are integrated during speech perception, even for prosodic patterns.


Subjects
Acoustic Stimulation/methods, Facial Expression, Phonetics, Photic Stimulation/methods, Speech Perception/physiology, Adult, Brazil, Cues (Psychology), Female, Humans, Language, Male
13.
J Acoust Soc Am; 128(3): 1337-45, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20815468

ABSTRACT

This article examines potential prosodic predictors of emotional speech in utterances perceived as conveying that good or bad news is about to be delivered. Speakers were asked to call an experimental confederate to inform her about whether or not she had been given a job she had applied for. A perception study was then performed in which initial fragments of the recorded utterances, not containing any explicit lexical cues to emotional content, were presented to listeners, who had to judge whether good or bad news would follow the utterance. The utterances were then examined to discover acoustic and prosodic features that distinguished between good and bad news. It was found that speakers in the production study were not simply reflecting their own positive or negative mood during the experiment, but rather appeared to be influenced by the valence of the positive or negative message they were preparing to deliver. Positive and negative utterances also appeared to be judged differently with respect to a number of perceived attributes of the speakers' voices (such as sounding hesitant or nervous). These attributes correlated with a number of automatically obtained acoustic features.


Subjects
Cues (Psychology), Emotions, Phonetics, Speech Acoustics, Speech Perception, Adult, Female, Humans, Male, Psychoacoustics, Psychological Signal Detection, Verbal Behavior
14.
Phonetica; 67(3): 127-46, 2010.
Article in English | MEDLINE | ID: mdl-20926913

ABSTRACT

Previous studies have shown that characteristics of a person's first language (L1) may transfer to a second language (L2). The current study looks at the extent to which this holds for aspects of intonation as well. More specifically, we investigate to what extent traces of the L1 can be discerned in the way intonation is used in the L2 for two functions: (1) to highlight certain words by making them sound more prominent and (2) to signal continuation or finality in a list by manipulating the speech melody. To this end, the article presents an explorative study into the way focus and boundaries are marked prosodically in Zulu, and it also compares such prosodic functions in two variants of English in South Africa, i.e., English spoken as an L1 and English spoken as an L2/additional language by speakers who have Zulu as their L1. The latter variety is commonly referred to as Black South African English. This comparison is interesting from a typological perspective, as Zulu is intonationally different from English, especially in the way prosody is exploited for signalling informationally important stretches of speech. Using a specific elicitation procedure, we found in a first study that speakers of South African English (as L1) mark focused words and position within a list by intonational means, just as in other L1 varieties of English, whereas Zulu uses intonation only for marking continuity or finality. A second study focused on speakers of Black South African English, comparing the prosody of proficient versus less proficient speakers. We found that the proficient speakers were perceptually equivalent to L1 speakers of English in their use of intonation for marking focus and boundaries. The less proficient speakers marked boundaries in a similar way to L1 speakers of English, but did not use prosody for signalling focus, analogous to what is typical of their native language. Acoustic observations match these perceptual results.


Subjects
Black People, Cross-Cultural Comparison, Multilingualism, Speech Acoustics, White People, Adult, Female, Humans, Male, Visual Pattern Recognition, Phonetics, Semantics, South Africa, Speech Production Measurement, Young Adult
15.
Lang Speech; 53(Pt 1): 3-30, 2010.
Article in English | MEDLINE | ID: mdl-20415000

ABSTRACT

In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues to emotion from a speaker's face relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments are based on video clips of emotional utterances collected via a variant of the well-known Velten method. More specifically, we recorded speakers who displayed positive or negative emotions that were congruent or incongruent with the (emotional) lexical content of the uttered sentence. The first experiment is a perception experiment in which Czech participants, who do not speak Dutch, rated the perceived emotional state of Dutch speakers in a bimodal (audiovisual) or a unimodal (audio-only or vision-only) condition. It was found that incongruent emotional speech leads to significantly more extreme perceived emotion scores than congruent emotional speech, where the difference between congruent and incongruent emotional speech is larger for the negative than for the positive conditions. Interestingly, the largest overall differences between congruent and incongruent emotions were found for the audio-only condition, which suggests that posing an incongruent emotion has a particularly strong effect on the spoken realization of emotions. The second experiment uses a gating paradigm to test the recognition speed for various emotional expressions from a speaker's face. In this experiment participants were presented with the same clips as in experiment I, but this time in a vision-only condition. The clips were shown in successive segments (gates) of increasing duration. Results show that participants are surprisingly accurate in their recognition of the various emotions, already reaching high recognition scores in the first gate (after only 160 ms). Interestingly, the recognition scores rise faster for positive than for negative conditions. Finally, the gating results suggest that incongruent emotions are perceived as more intense than congruent emotions, as the former receive more extreme recognition scores than the latter, even after a short period of exposure.


Subjects
Auditory Perception, Cues (Psychology), Emotions, Facial Expression, Speech Perception, Speech, Visual Perception, Adolescent, Adult, Audiovisual Aids, Female, Humans, Language, Male, Psychological Signal Detection, Time Factors, Video Recording, Young Adult
16.
Lang Speech; 63(4): 856-876, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31888403

ABSTRACT

Speech perception is a multisensory process: what we hear can be affected by what we see. For instance, the McGurk effect occurs when auditory speech is presented in synchrony with discrepant visual information. A large number of studies have targeted the McGurk effect at the segmental level of speech (mainly consonant perception), which tends to be visually salient (lip-reading based). The present study extends the existing body of literature to the suprasegmental level by investigating a McGurk effect for the identification of tones in Mandarin Chinese. Previous studies have shown that visual information does play a role in Chinese tone perception, and that the different tones correlate with variable movements of the head and neck. We constructed various tone combinations of congruent and incongruent auditory-visual materials (10 syllables with 16 tone combinations each) and presented them to native speakers of Mandarin Chinese and speakers of tone-naïve languages. In line with our previous work, we found that tone identification varies with individual tones, with tone 3 (the low-dipping tone) being the easiest one to identify and tone 4 (the high-falling tone) the most difficult. We found that both groups of participants relied mainly on auditory input rather than visual input, and that this auditory reliance was even stronger for the Chinese participants. The results showed no evidence for auditory-visual integration among native participants, whereas visual information was helpful for tone-naïve participants. However, even for this group, visual information only marginally increased accuracy in the tone identification task, and this increase depended on the tone in question.


Subjects
Acoustic Stimulation, Language, Photic Stimulation, Speech Perception, Timbre Perception, Adult, Asian People/psychology, Female, Humans, Male, Phonetics
17.
Cogn Sci; 43(12): e12804, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31858627

ABSTRACT

The temporal-focus hypothesis claims that whether people conceptualize the past or the future as in front of them depends on their cultural attitudes toward time; such conceptualizations can be independent from the space-time metaphors expressed through language. In this paper, we study how Chinese people conceptualize time on the sagittal axis to find out the respective influences of language and culture on mental space-time mappings. An examination of Mandarin speakers' co-speech gestures shows that some Chinese spontaneously perform past-in-front/future-at-back (besides future-in-front/past-at-back) gestures, especially when gestures are accompanying past-in-front/future-at-back space-time metaphors (Exp. 1). Using a temporal performance task, the study confirms that Chinese can conceptualize the future as behind and the past as in front of them, and that such space-time mappings are affected by the different expressions of Mandarin space-time metaphors (Exp. 2). Additionally, a survey on cultural attitudes toward time shows that Chinese tend to focus slightly more on the future than on the past (Exp. 3). Within the Chinese sample, we did not find evidence for the effect of participants' cultural temporal attitudes on space-time mappings, but a cross-cultural comparison of space-time mappings between Chinese, Moroccans, and Spaniards provides strong support for the temporal-focus hypothesis. Furthermore, the results of Exp. 2 are replicated even after controlling for factors such as cultural temporal attitudes and age (Exp. 3), which implies that linguistic sagittal temporal metaphors can indeed influence Mandarin speakers' space-time mappings. The findings not only contribute to a better understanding of Chinese people's sagittal temporal orientation, but also have additional implications for theories on the mental space-time mappings and the relationship between language and thought.


Subjects
Cross-Cultural Comparison, Gestures, Language, Space Perception, Time Perception, Adult, China, Female, Humans, Male, Spain, Young Adult
18.
J Acoust Soc Am; 123(1): 354-65, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18177165

ABSTRACT

The existence of auditory cues such as intonation, rhythm, and pausing that facilitate end-of-utterance detection is by now well established. It has been argued repeatedly that speakers may also employ visual cues to indicate that they are at the end of their utterance. This raises at least two questions, which are addressed in the current paper: first, which modalities speakers use for signalling finality and nonfinality, and second, how sensitive observers are to these signals. Our goal is to investigate the relative contribution of three different conditions to end-of-utterance detection: the two unimodal ones, vision only and audio only, and their bimodal combination. Speaker utterances were collected via a novel semicontrolled production experiment, in which participants provided lists of words in an interview setting. The data thus collected were used in two perception experiments, which systematically compared responses to unimodal (audio-only and vision-only) and bimodal (audio-visual) stimuli. Experiment I is a reaction time experiment, which revealed that humans are significantly quicker at end-of-utterance detection when confronted with bimodal or audio-only stimuli than with vision-only stimuli. No significant differences in reaction times were found between the bimodal and audio-only conditions, and therefore a second experiment was conducted. Experiment II is a classification experiment, and showed that participants perform significantly better in the bimodal condition than in the two unimodal ones. Both experiments revealed interesting differences between speakers across the various conditions, which indicates that some speakers are more expressive in the visual and others in the auditory modality.


Subjects
Auditory Perception/physiology, Psychological Signal Detection/physiology, Speech Perception, Speech/physiology, Visual Perception/physiology, Adult, Cues (Psychology), Female, Humans, Judgment, Male, Middle Aged, Speech Production Measurement
19.
Front Psychol; 9: 2077, 2018.
Article in English | MEDLINE | ID: mdl-30455653

ABSTRACT

We investigate whether smile mimicry and emotional contagion are evident in non-text-based computer-mediated communication (CMC). Via an ostensibly real-time audio-visual CMC platform, participants interacted with a confederate who either smiled radiantly or displayed a neutral expression throughout the interaction. Automatic analyses of expressions displayed by participants indicated that smile mimicry was at play: A higher level of activation of the facial muscle that characterizes genuine smiles was observed among participants who interacted with the smiling confederate than among participants who interacted with the unexpressive confederate. However, there was no difference in the self-reported level of joviality between participants in the two conditions. Our findings demonstrate that people mimic smiles in audio-visual CMC, but that even though the diffusion of emotions has been documented in text-based CMC in previous studies, we find no convincing support for the phenomenon of emotional contagion in non-text-based CMC.

20.
J Nonverbal Behav; 41(1): 67-82, 2017.
Article in English | MEDLINE | ID: mdl-28203037

ABSTRACT

We examined the effects of social and cultural contexts on smiles displayed by children during gameplay. Eight-year-old Dutch and Chinese children either played a game alone or teamed up to play in pairs. Activation and intensity of facial muscles corresponding to Action Unit (AU) 6 and AU 12 were coded according to Facial Action Coding System. Co-occurrence of activation of AU 6 and AU 12, suggesting the presence of a Duchenne smile, was more frequent among children who teamed up than among children who played alone. Analyses of the intensity of smiles revealed an interaction between social and cultural contexts. Whereas smiles, both Duchenne and non-Duchenne, displayed by Chinese children who teamed up were more intense than those displayed by Chinese children who played alone, the effect of sociality on smile intensity was not observed for Dutch children. These findings suggest that the production of smiles by children in a competitive context is susceptible to both social and cultural factors.
