1.
Risk Anal; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38742599

ABSTRACT

People typically use verbal probability phrases when discussing risks ("It is likely that this treatment will work"), in both written and spoken communication. When speakers are uncertain about risks, they can nonverbally signal this uncertainty through prosodic cues, such as a rising, question-like intonation or a filled pause ("uh"). We experimentally studied the effects of these two prosodic cues on listeners' perceived speaker certainty and their numerical interpretation of spoken verbal probability phrases. Participants (N = 115) listened to various verbal probability phrases that were uttered with a rising or falling global intonation, and with or without a filled pause before the probability phrase. For each phrase, they gave a point estimate of their numerical interpretation in percentages and indicated how certain they thought the speaker was about the correctness of the probability phrase. Speakers were perceived as least certain when the verbal probability phrases were spoken with both prosodic uncertainty cues. Interpretation of verbal probability phrases varied widely across participants, especially when the speaker produced a rising intonation. Overall, high probability phrases (e.g., "very likely") were estimated as lower (and low probability phrases, such as "unlikely," as higher) when they were uttered with a rising intonation. The effects of filled pauses were less pronounced, as were the uncertainty effects for medium probability phrases (e.g., "probable"). These results stress the importance of nonverbal communication when verbally communicating risks and probabilities, for example, in the context of doctor-patient communication.
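
To picture how such data might be analyzed, here is a minimal sketch in Python: a mixed-effects model relating numerical point estimates to the two prosodic cues, with random intercepts per participant. All column names, effect sizes, and values are invented placeholders, not the study's data or method.

```python
# Minimal analysis sketch with simulated data; nothing here comes from
# the actual study. Requires numpy, pandas, and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
participant = np.repeat(np.arange(20), 4)                       # 20 listeners
intonation = np.tile(["rising", "rising", "falling", "falling"], 20)
pause = np.tile(["filled", "none"], 40)
# Assumed pattern: rising intonation pulls estimates of a high
# probability phrase ("very likely") down, a filled pause slightly so.
estimate = (75
            - 10 * (intonation == "rising")
            - 3 * (pause == "filled")
            + rng.normal(0, 5, 80))
df = pd.DataFrame(dict(participant=participant, intonation=intonation,
                       pause=pause, estimate=estimate))

# Fixed effects for the two prosodic cues and their interaction,
# random intercept per participant.
model = smf.mixedlm("estimate ~ intonation * pause", df,
                    groups=df["participant"]).fit()
print(model.summary())
```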

2.
Lang Speech; 238309231217689, 2023 Dec 29.
Article in English | MEDLINE | ID: mdl-38156473

ABSTRACT

The current study investigates the average effect: the tendency for humans to prefer an averaged exemplar (of faces, birds, wristwatches, cars, and so on) over individual instances. The effect holds across cultures, despite varying conceptualizations of attractiveness. While much research has been conducted on the average effect in visual perception, much less is known about the extent to which it applies to language and speech. This study investigates the attractiveness of average speech rhythms in Dutch and Mandarin Chinese, two typologically different languages. This was tested in a series of perception experiments in either language, in which native listeners chose the most attractive of a pair of acoustically manipulated rhythms. For each language, two experiments were carried out to control for the potential influence of the acoustic manipulation on the average effect. The results confirm the average effect in both languages, while leaving room for individual variation in listeners' perception of attractiveness. The outcomes provide a new crosslinguistic perspective and give rise to alternative explanations for the average effect.
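
For readers who want to picture the pairwise preference analysis, a minimal sketch: testing whether the averaged rhythm is chosen above chance in a two-alternative forced-choice task. The counts are invented placeholders, not the paper's results.

```python
# Hypothetical two-alternative forced-choice check; counts are invented.
from scipy.stats import binomtest

n_pairs = 240         # rhythm pairs judged (assumed)
chose_average = 156   # trials on which the averaged rhythm was preferred
result = binomtest(chose_average, n_pairs, p=0.5, alternative="greater")
print(f"P(choose average) = {chose_average / n_pairs:.2f}, "
      f"p = {result.pvalue:.4g}")
```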

3.
Front Artif Intell; 5: 835298, 2022.
Article in English | MEDLINE | ID: mdl-35434608

ABSTRACT

Different applications or contexts may require different settings for a conversational AI system: a child-oriented system, for example, needs a different interaction style than a warning system used in emergencies. The current article focuses on the extent to which a system's usability may benefit from variation in the personality it displays. To this end, we investigate whether variation in personality is signaled by differences in specific audiovisual feedback behavior, with a specific focus on embodied conversational agents. This article reports on two rating experiments in which participants judged the personalities (i) of human beings and (ii) of embodied conversational agents; we were specifically interested in the role of variability in audiovisual cues. Our results show that personality perceptions of both humans and artificial communication partners are indeed influenced by the type of feedback behavior used. This knowledge could inform developers of conversational AI on how to include personality in their feedback behavior generation algorithms, which could enhance the perceived personality and in turn generate a stronger sense of presence for the human interlocutor.
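
To make the idea of personality-dependent feedback generation concrete, here is a small illustrative sketch, not the authors' system: an embodied agent's feedback parameters driven by an extraversion score. All names, parameters, and the linear mapping are assumptions for illustration only.

```python
# Illustrative sketch of personality-parameterized feedback behavior;
# this is not the system described in the article.
from dataclasses import dataclass

@dataclass
class FeedbackStyle:
    backchannel_rate: float   # verbal "uh-huh" responses per minute
    nod_amplitude: float      # 0..1, size of head nods
    smile_probability: float  # chance of smiling when giving feedback

def style_for_extraversion(extraversion: float) -> FeedbackStyle:
    """Map a 0..1 extraversion score to audiovisual feedback settings.
    The linear mapping is an assumption, chosen only to show the idea."""
    return FeedbackStyle(
        backchannel_rate=2.0 + 6.0 * extraversion,
        nod_amplitude=0.3 + 0.6 * extraversion,
        smile_probability=0.2 + 0.7 * extraversion,
    )

print(style_for_extraversion(0.8))  # a fairly extraverted agent
```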

4.
Lang Speech; 64(1): 3-23, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31957542

ABSTRACT

This paper presents the results of three perceptual experiments investigating the role of the auditory and visual channels in the identification of statements and echo questions in Brazilian Portuguese. Ten Brazilian speakers (five male) were video-recorded (frontal view of the face) while they produced the sentence "Como você sabe," either as a statement (meaning "As you know.") or as an echo question (meaning "As you know?"). Stimuli containing the two intonation contours were presented in conditions with clear and degraded audio, as well as with congruent and incongruent information from the two channels. Results show that Brazilian listeners were able to distinguish statements from questions both prosodically and visually, with auditory cues dominating visual ones. In noisy conditions, the visual channel robustly improved the interpretation of prosodic cues, but degraded it when the visual information was incongruent with the auditory information. This study shows that auditory and visual information are integrated during speech perception, even when applied to prosodic patterns.
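
The design crosses sentence type, audio quality, and audiovisual congruence; a tiny sketch of that condition matrix (factor labels are assumed, not taken from the paper):

```python
# Condition-matrix sketch; the factor labels are assumptions.
from itertools import product

sentence_types = ["statement", "echo_question"]
audio_quality = ["clear", "degraded"]
congruence = ["congruent", "incongruent"]

# Enumerate the 8 audiovisual presentation conditions.
for i, cond in enumerate(product(sentence_types, audio_quality, congruence), 1):
    print(i, *cond)
```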


Subjects
Acoustic Stimulation/methods, Facial Expression, Phonetics, Photic Stimulation/methods, Speech Perception/physiology, Adult, Brazil, Cues, Female, Humans, Language, Male
6.
Lang Speech; 63(4): 856-876, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31888403

ABSTRACT

Speech perception is a multisensory process: what we hear can be affected by what we see. For instance, the McGurk effect occurs when auditory speech is presented in synchrony with discrepant visual information. A large number of studies have targeted the McGurk effect at the segmental level of speech (mainly consonant perception), which tends to be visually salient (lip-reading based). The present study extends the existing body of literature to the suprasegmental level by investigating a McGurk effect for the identification of tones in Mandarin Chinese. Previous studies have shown that visual information does play a role in Chinese tone perception, and that the different tones correlate with variable movements of the head and neck. We constructed various tone combinations of congruent and incongruent auditory-visual materials (10 syllables with 16 tone combinations each) and presented them to native speakers of Mandarin Chinese and to speakers of tone-naïve languages. In line with our previous work, we found that tone identification varies across individual tones, with tone 3 (the low-dipping tone) being the easiest to identify and tone 4 (the high-falling tone) the most difficult. Both groups of participants relied mainly on auditory input rather than visual input, and this auditory reliance was even stronger for the Chinese participants. The results showed no evidence for auditory-visual integration among native participants, whereas visual information was helpful for tone-naïve participants. Even for this group, however, visual information only marginally increased accuracy in the tone identification task, and this increase depended on the tone in question.
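
The stimulus set (10 syllables, 16 tone combinations each) can be pictured as crossing four auditory tones with four visual tones per syllable; a minimal sketch, with the syllable list invented for illustration:

```python
# Stimulus-matrix sketch; the syllable list is an invented placeholder.
from itertools import product

syllables = ["ma", "mi", "mu", "mo", "me", "mai", "mei", "mao", "man", "men"]
tones = [1, 2, 3, 4]

# Cross each syllable's auditory tone with its visual tone: 4 x 4 = 16
# combinations per syllable, of which 4 are congruent (same tone).
stimuli = [(syl, audio_tone, visual_tone)
           for syl in syllables
           for audio_tone, visual_tone in product(tones, tones)]
congruent = [s for s in stimuli if s[1] == s[2]]
print(len(stimuli), "stimuli,", len(congruent), "congruent")  # 160, 40
```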


Subjects
Acoustic Stimulation, Language, Photic Stimulation, Speech Perception, Timbre Perception, Adult, Asian People/psychology, Female, Humans, Male, Phonetics
7.
Cogn Sci; 43(12): e12804, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31858627

ABSTRACT

The temporal-focus hypothesis claims that whether people conceptualize the past or the future as in front of them depends on their cultural attitudes toward time; such conceptualizations can be independent of the space-time metaphors expressed through language. In this paper, we study how Chinese people conceptualize time on the sagittal axis to find out the respective influences of language and culture on mental space-time mappings. An examination of Mandarin speakers' co-speech gestures shows that some Chinese speakers spontaneously produce past-in-front/future-at-back (besides future-in-front/past-at-back) gestures, especially when the gestures accompany past-in-front/future-at-back space-time metaphors (Exp. 1). Using a temporal performance task, the study confirms that Chinese people can conceptualize the future as behind them and the past as in front of them, and that such space-time mappings are affected by the different expressions of Mandarin space-time metaphors (Exp. 2). Additionally, a survey on cultural attitudes toward time shows that Chinese participants tend to focus slightly more on the future than on the past (Exp. 3). Within the Chinese sample, we did not find evidence for an effect of participants' cultural temporal attitudes on space-time mappings, but a cross-cultural comparison of space-time mappings between Chinese, Moroccans, and Spaniards provides strong support for the temporal-focus hypothesis. Furthermore, the results of Exp. 2 are replicated even after controlling for factors such as cultural temporal attitudes and age (Exp. 3), which implies that linguistic sagittal temporal metaphors can indeed influence Mandarin speakers' space-time mappings. The findings not only contribute to a better understanding of Chinese people's sagittal temporal orientation, but also have implications for theories of mental space-time mappings and the relationship between language and thought.
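
A minimal sketch of the kind of cross-cultural comparison the abstract describes: a contingency test on the share of past-in-front mappings per sample. All counts are invented placeholders, not the paper's data.

```python
# Cross-cultural contingency sketch; all counts are invented.
from scipy.stats import chi2_contingency

# Rows: sample; columns: [past-in-front responses, future-in-front responses].
counts = [
    [30, 70],   # Chinese (placeholder)
    [60, 40],   # Moroccan (placeholder)
    [20, 80],   # Spanish (placeholder)
]
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```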


Subjects
Cross-Cultural Comparison, Gestures, Language, Space Perception, Time Perception, Adult, China, Female, Humans, Male, Spain, Young Adult
8.
Phonetica; 76(4): 263-286, 2019.
Article in English | MEDLINE | ID: mdl-30086551

ABSTRACT

Although the way tones are acquired by second or foreign language learners has attracted some scholarly attention, detailed knowledge of the factors that promote efficient learning is lacking. In this article, we look at the effects of visual cues (comparing audio-only with audio-visual presentations) and of speaking style (comparing a natural speaking style with a teaching speaking style) on the perception of Mandarin tones by non-native listeners, considering both the relative strength of these two factors and their possible interactions. Both the accuracy and the reaction time of the listeners were measured in a tone identification task. Results showed that participants in the audio-visual condition distinguished tones more accurately than participants in the audio-only condition. Interestingly, this effect varied as a function of speaking style, but only for stimuli from specific speakers. Additionally, some tones (notably tone 3) were recognized more quickly and accurately than others.
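
As a sketch of how accuracy and reaction time might be summarized across the two factors, here is a minimal example on simulated data; the column names, factor levels, and effect sizes are all assumptions, not the study's data.

```python
# Summary sketch with simulated data; nothing here is from the study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 400
condition = rng.choice(["audio_only", "audio_visual"], n)
style = rng.choice(["natural", "teaching"], n)
# Assumed pattern: audio-visual presentation boosts identification accuracy.
p_correct = 0.55 + 0.12 * (condition == "audio_visual")
correct = rng.random(n) < p_correct
rt = rng.normal(1.6, 0.3, n) - 0.1 * correct   # seconds; assumed speed-up

df = pd.DataFrame(dict(condition=condition, style=style,
                       correct=correct, rt=rt))
# Mean accuracy and reaction time per cell of the 2 x 2 design.
print(df.groupby(["condition", "style"])[["correct", "rt"]].mean())
```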

9.
Front Psychol; 9: 2077, 2018.
Article in English | MEDLINE | ID: mdl-30455653

ABSTRACT

We investigate whether smile mimicry and emotional contagion are evident in non-text-based computer-mediated communication (CMC). Via an ostensibly real-time audio-visual CMC platform, participants interacted with a confederate who either smiled radiantly or displayed a neutral expression throughout the interaction. Automatic analyses of the expressions displayed by participants indicated that smile mimicry was at play: a higher level of activation of the facial muscle that characterizes genuine smiles was observed among participants who interacted with the smiling confederate than among those who interacted with the unexpressive confederate. However, there was no difference in self-reported joviality between the two conditions. Our findings demonstrate that people mimic smiles in audio-visual CMC but, even though the diffusion of emotions has been documented for text-based CMC in previous studies, we find no convincing support for emotional contagion in non-text-based CMC.
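
A minimal sketch of the kind of comparison behind the mimicry finding: activation of the muscle marking genuine (Duchenne) smiles, compared between confederate conditions. The activation values are invented placeholders, not the study's measurements.

```python
# Invented activation scores of the Duchenne-smile muscle (AU6 in FACS
# terms); not the study's measurements or its exact analysis.
from scipy.stats import ttest_ind

au6_smiling_confederate = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52]
au6_neutral_confederate = [0.21, 0.33, 0.18, 0.29, 0.35, 0.24]
t, p = ttest_ind(au6_smiling_confederate, au6_neutral_confederate)
print(f"t = {t:.2f}, p = {p:.4g}")
```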

10.
J Acoust Soc Am; 141(6): 4727, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28679274

ABSTRACT

This study examines the influence of the position of prosodic heads (accented syllables) and prosodic edges (prosodic word and intonational phrase boundaries) on the timing of head movements. Gesture movements and prosodic events tend to be temporally aligned in discourse, with the most prominent part of a gesture typically aligned with a prosodically prominent syllable in speech. However, little is known about the impact of the position of intonational phrase boundaries on gesture-speech alignment patterns. Twenty-four Catalan speakers produced spontaneous (experiment 1) and semi-spontaneous (experiment 2) head gestures with a confirmatory function, along with phrase-final focused words in different prosodic conditions (stress-initial, stress-medial, and stress-final). Results showed (a) that the scope of head movements is the associated focused prosodic word, (b) that the left edge of the focused prosodic word determines where the interval of gesture prominence starts, and (c) that the speech-anchoring site for the gesture peak (or apex) depends both on the location of the accented syllable and on the distance to the upcoming intonational phrase boundary. These results demonstrate that prosodic heads and edges affect the timing of head movements, and therefore that prosodic structure plays a central role in the timing of co-speech gestures.
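
To illustrate the alignment measure conceptually, a tiny sketch computing the lag between a gesture apex and candidate prosodic anchors; all timestamps and anchor names are invented placeholders, not the paper's annotation scheme.

```python
# Apex-to-anchor lag sketch; timestamps are invented placeholders.
apex_time = 1.84  # s, time of the gesture apex
anchors = {
    "accented_syllable_onset": 1.72,
    "prosodic_word_left_edge": 1.55,
    "intonational_phrase_boundary": 2.10,
}
# Positive lag: apex occurs after the anchor; negative: before it.
lags = {name: apex_time - t for name, t in anchors.items()}
for name, lag in lags.items():
    print(f"{name}: {lag:+.2f} s")
print("closest anchor:", min(lags, key=lambda k: abs(lags[k])))
```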


Subjects
Cues, Gestures, Head Movements, Language, Speech Acoustics, Speech Perception, Voice Quality, Adult, Female, Humans, Male, Time Factors, Young Adult