Results 1 - 13 of 13
1.
Biol Rev Camb Philos Soc; 97(6): 2057-2075, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35818133

ABSTRACT

A critical feature of language is that the form of words need not bear any perceptual similarity to their function: these relationships can be 'arbitrary'. The capacity to process these arbitrary form-function associations facilitates the enormous expressive power of language. However, the evolutionary roots of our capacity for arbitrariness, i.e. the extent to which related abilities may be shared with animals, are largely unexamined. We argue this is due to the challenges of applying such an intrinsically linguistic concept to animal communication, and address this by proposing a novel conceptual framework highlighting a key underpinning of linguistic arbitrariness, which is nevertheless applicable to non-human species. Specifically, we focus on the capacity to associate alternative functions with a signal, or alternative signals with a function, a feature we refer to as optionality. We apply this framework to a broad survey of findings from animal communication studies and identify five key dimensions of communicative optionality: signal production, signal adjustment, signal usage, signal combinatoriality and signal perception. We find that optionality is widespread in non-human animals across each of these dimensions, although only humans demonstrate it in all five. Finally, we discuss the relevance of optionality to behavioural and cognitive domains outside of communication. This investigation provides a powerful new conceptual framework for the cross-species investigation of the origins of arbitrariness, and promises to generate original insights into animal communication and language evolution more generally.


Subjects
Animal Communication, Language, Animals
2.
Ann N Y Acad Sci; 1453(1): 99-113, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31482571

ABSTRACT

Speech is a distinctive feature of our species. It is the default channel for language and constitutes our primary mode of social communication. Determining the evolutionary origins of speech is a challenging prospect, in large part because it appears to be unique in the animal kingdom. However, direct comparisons between speech and other forms of acoustic communication, both in humans (music) and animals (vocalization), suggest that important components of speech are shared across domains and species. In this review, we focus on a single aspect of speech, temporal patterning, examining similarities and differences across speech, music, and animal vocalization. Additional structure is provided by focusing on three specific functions of temporal patterning across domains: (1) emotional expression, (2) social interaction, and (3) unit identification. We hypothesize an evolutionary trajectory wherein the ability to identify units within a continuous stream of vocal sounds derives from social vocal interaction, which, in turn, derives from vocal emotional communication. This hypothesis implies that unit identification has parallels in music and precursors in animal vocal communication. Accordingly, we demonstrate the potential of comparisons between fundamental domains of biological acoustic communication to provide insight into the evolution of language.


Subjects
Biological Evolution, Music, Speech/physiology, Animal Vocalization/physiology, Animals, Emotions/physiology, Humans, Interpersonal Relations, Language, Time Factors
3.
J Comp Psychol; 133(4): 520-541, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31259563

ABSTRACT

Recently, evidence for acoustic universals in vocal communication was found by demonstrating that humans can identify levels of arousal in vocalizations produced by species across three biological classes (Filippi et al., 2017). Here, we extend this work by testing whether two vocal learning species, humans and chickadees, can discriminate vocalizations of high and low arousal using operant discrimination go/no-go tasks. Stimuli included vocalizations from nine species: giant panda, American alligator, common raven, hourglass treefrog, African elephant, Barbary macaque, domestic pig, black-capped chickadee, and human. Subjects were trained to respond to high or low arousal vocalizations, then tested with additional high and low arousal vocalizations produced by each species. Chickadees (Experiment 1) and humans (Experiment 2) learned to discriminate between high and low arousal stimuli and significantly transferred the discrimination to additional panda, human, and chickadee vocalizations. Finally, we conducted discriminant function analyses using four acoustic measures, finding evidence suggesting that fundamental frequency played a role in responding during the task. However, these analyses also suggest roles for other acoustic factors as well as familiarity. In sum, the results from these studies provide evidence that chickadees and humans are capable of perceiving arousal in vocalizations produced by multiple species. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subjects
Arousal/physiology, Auditory Perception/physiology, Psychological Discrimination/physiology, Psychological Inhibition, Learning/physiology, Songbirds/physiology, Animal Vocalization/physiology, Animals, Concept Formation/physiology, Operant Conditioning/physiology, Discrimination Learning/physiology, Female, Humans, Male, Species Specificity, Transfer of Experience/physiology
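Sensitivity in go/no-go discrimination tasks like the one above is often summarized with the signal-detection measure d′ (z-transformed hit rate minus z-transformed false-alarm rate). The sketch below is purely illustrative and not the study's own analysis; the log-linear correction is one common way to avoid infinite z-scores at rates of 0 or 1.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Go/no-go sensitivity: z(hit rate) - z(false-alarm rate),
    with a log-linear correction for extreme response rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# A hypothetical subject responding to 45/50 high-arousal stimuli
# and (incorrectly) to 5/50 low-arousal stimuli:
score = d_prime(45, 5, 5, 45)
```

A d′ near 0 indicates chance-level discrimination; values above roughly 1 indicate reliable transfer of the trained discrimination.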
4.
Iperception; 10(2): 2041669519846135, 2019.
Article in English | MEDLINE | ID: mdl-31065333

ABSTRACT

Comparative research investigating how nonhuman animals generalize patterns of auditory stimuli often uses sequences of human speech syllables and reports limited generalization abilities in animals. Here, we reverse this logic, testing humans with stimulus sequences tailored to squirrel monkeys. When test stimuli are familiar (human voices), humans succeed in two types of generalization. However, when the same structural rule is instantiated over unfamiliar but perceivable sounds within squirrel monkeys' optimal hearing frequency range, human participants master only one type of generalization. These findings have methodological implications for the design of comparative experiments, which should be fair towards all tested species' proclivities and limitations.

5.
Front Neurosci; 12: 20, 2018.
Article in English | MEDLINE | ID: mdl-29467601

ABSTRACT

Language and music share many commonalities, both as natural phenomena and as subjects of intellectual inquiry. Rather than exhaustively reviewing these connections, we focus on potential cross-pollination of methodological inquiries and attitudes. We highlight areas in which scholarship on the evolution of language may inform the evolution of music. We focus on the value of coupled empirical and formal methodologies, and on the futility of mysterianism, the declining view that the nature, origins and evolution of language cannot be addressed empirically. We identify key areas in which the evolution of language as a discipline has flourished historically, and suggest ways in which these advances can be integrated into the study of the evolution of music.

6.
R Soc Open Sci; 4(8): 161035, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28878961

ABSTRACT

We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word-meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.

7.
Proc Biol Sci; 284(1859), 2017 Jul 26.
Article in English | MEDLINE | ID: mdl-28747478

ABSTRACT

Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes: Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.


Subjects
Arousal, Emotions, Animal Vocalization, Acoustics, Animals, Humans, Language, Vertebrates
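The spectral centre of gravity used in this line of work is the magnitude-weighted mean frequency of a sound's spectrum. The sketch below is a minimal pure-Python illustration using a naive whole-signal DFT; actual analyses would use windowed FFT frames in a tool such as Praat.

```python
import math

def spectral_centroid(samples, sample_rate):
    """Spectral centre of gravity: the magnitude-weighted mean
    frequency, computed here with a naive DFT (illustration only)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):  # frequency bins up to the Nyquist limit
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    freqs = [k * sample_rate / n for k in range(n // 2)]
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# A pure 100 Hz tone sampled at 1 kHz: the centroid sits at ~100 Hz.
sr = 1000
tone = [math.sin(2 * math.pi * 100 * t / sr) for t in range(200)]
```

Higher-arousal vocalizations typically shift acoustic energy toward higher frequencies, which raises this centroid; that is one reason it is a useful cross-species arousal cue.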
8.
Curr Zool; 63(4): 445-456, 2017 Aug.
Article in English | MEDLINE | ID: mdl-29492004

ABSTRACT

The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans' ability to identify emotional arousal in silver foxes. Here, we used low- and high-arousal calls emitted by three strains of silver fox (Tame, Aggressive, and Unselected) in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans' ability to discriminate high-arousal calls across all strains. Furthermore, spectral center of gravity and F0 were the best predictors of humans' absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.

9.
Cogn Emot; 31(5): 879-891, 2017 Aug.
Article in English | MEDLINE | ID: mdl-27140872

ABSTRACT

Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of "happy" and "sad" were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of "happy" and "sad" were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.


Subjects
Emotions, Linguistics, Speech Perception, Stroop Test, Visual Perception, Facial Expression, Female, Humans, Male, Photic Stimulation, Young Adult
10.
Front Hum Neurosci; 10: 586, 2016.
Article in English | MEDLINE | ID: mdl-27994544

ABSTRACT

Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find, across languages (using clearly defined acoustic rather than orthographic measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure (regularities arising in an ordered series of syllable timings), testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
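A classic example of the first-order, adjacent-unit timing measures this abstract contrasts with higher-order structure is the normalized Pairwise Variability Index (nPVI) over successive syllable or vowel durations. The sketch below is illustrative only and is not claimed to be a measure used in this study.

```python
def npvi(durations):
    """Normalized Pairwise Variability Index: the mean normalized
    difference between adjacent durations, scaled to the range 0-200."""
    diffs = [abs(a - b) / ((a + b) / 2)
             for a, b in zip(durations, durations[1:])]
    return 100 * sum(diffs) / len(diffs)

# Durations in milliseconds: perfectly isochronous timing scores 0,
# while strict long-short alternation scores 100.
print(npvi([200, 200, 200, 200]))  # 0.0
print(npvi([300, 100, 300, 100]))  # 100.0
```

Because nPVI only compares each unit with its immediate neighbour, it captures none of the non-adjacent regularities the study's higher-order models are designed to probe.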

11.
Front Psychol; 7: 1393, 2016.
Article in English | MEDLINE | ID: mdl-27733835

ABSTRACT

Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody, and perhaps also of music, and that it continues to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.

12.
Front Psychol; 5: 1468, 2014.
Article in English | MEDLINE | ID: mdl-25566144

ABSTRACT

This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.

13.
PLoS One; 7(4): e35626, 2012.
Article in English | MEDLINE | ID: mdl-22558181

ABSTRACT

The evolutionary origins of music are much debated. One theory holds that the ability to produce complex musical sounds might reflect qualities that are relevant in mate choice contexts and hence, that music is functionally analogous to the sexually-selected acoustic displays of some animals. If so, women may be expected to show heightened preferences for more complex music when they are most fertile. Here, we used computer-generated musical pieces and ovulation predictor kits to test this hypothesis. Our results indicate that women prefer more complex music in general; however, we found no evidence that their preference for more complex music increased around ovulation. Consequently, our findings are not consistent with the hypothesis that a heightened preference/bias in women for more complex music around ovulation could have played a role in the evolution of music. We go on to suggest future studies that could further investigate whether sexual selection played a role in the evolution of this universal aspect of human culture.


Subjects
Music/psychology, Ovulation/psychology, Sexual Behavior/psychology, Adolescent, Adult, Animals, Biological Evolution, Female, Humans, Middle Aged, Ovulation/physiology, Sexual Behavior/physiology