Results 1 - 20 of 37
1.
PLoS One ; 15(3): e0229109, 2020.
Article in English | MEDLINE | ID: mdl-32130244

ABSTRACT

Music and language have long been considered two distinct cognitive faculties governed by domain-specific cognitive and neural mechanisms. However, recent work on the domain-specificity of pitch processing suggests that pitch in both domains may be governed by shared neural mechanisms. The current study explored the domain-specificity of pitch processing by simultaneously presenting pitch contours in speech and music to speakers of a tonal language, and measuring behavioral responses and event-related potentials (ERPs). Native speakers of Mandarin were exposed to concurrent pitch contours in melody and speech. Contours in the melody emulated those in speech and were either congruent or incongruent with the pitch contour of the lexical tone (i.e., rising or falling). Component magnitudes of the N2b and N400 were used as indices of lexical processing. We found that the N2b was modulated by melodic pitch: incongruent items evoked significantly larger amplitudes. There was a trend for the N400 to be modulated in the same way. Interestingly, these effects were present only on rising tones. The amplitude and time course of the N2b and N400 may indicate that melodic pitch contours interfere with both early and late stages of phonological and semantic processing.


Subjects
Language, Music/psychology, Loudness Perception/physiology, Semantics, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adult, Asian Continental Ancestry Group/psychology, Auditory Perception/physiology, Electroencephalography, Evoked Potentials, Female, Humans, Male, Neural Pathways/physiology, Reaction Time, Young Adult
2.
J Cogn Neurosci ; 32(7): 1221-1241, 2020 Jul.
Article in English | MEDLINE | ID: mdl-31933432

ABSTRACT

Predicting the timing of incoming information allows the brain to optimize information processing in dynamic environments. Behaviorally, temporal expectations have been shown to facilitate processing of events at expected time points, such as sounds that coincide with the beat in musical rhythm. Yet, temporal expectations can develop based on different forms of structure in the environment, not just the regularity afforded by a musical beat. Little is still known about how different types of temporal expectations are neurally implemented and affect performance. Here, we orthogonally manipulated the periodicity and predictability of rhythmic sequences to examine the mechanisms underlying beat-based and memory-based temporal expectations, respectively. Behaviorally and using EEG, we looked at the effects of beat-based and memory-based expectations on auditory processing when rhythms were task-relevant or task-irrelevant. At expected time points, both beat-based and memory-based expectations facilitated target detection and led to attenuation of P1 and N1 responses, even when expectations were task-irrelevant (unattended). For beat-based expectations, we additionally found reduced target detection and enhanced N1 responses for events at unexpected time points (e.g., off-beat), regardless of the presence of memory-based expectations or task relevance. This latter finding supports the notion that periodicity selectively induces rhythmic fluctuations in neural excitability and furthermore indicates that, although beat-based and memory-based expectations may similarly affect auditory processing of expected events, their underlying neural mechanisms may be different.

3.
PLoS One ; 13(11): e0207265, 2018.
Article in English | MEDLINE | ID: mdl-30419066

ABSTRACT

BACKGROUND: Previous literature has shown a putative relationship between playing a musical instrument and benefits in various cognitive domains. However, it remains unknown whether mere exposure to a musically enriched environment, rather than playing an instrument oneself, might also enhance cognitive domains such as language and mathematics, or executive sub-functions such as planning and working memory, in primary school children. DESIGN: Cross-sectional. METHOD: Exposure to a musically enriched environment, such as listening to music at home, during play, or when attending concerts, was assessed using a comprehensive intake questionnaire administered to a sample of 176 primary school children. Furthermore, participants completed the verbal intelligence section of the Wechsler Intelligence Scale for Children (WISC-III), executive sub-function tasks assessing planning (Tower of London), working memory (Klingberg Matrix backward span), and inhibition (go/no-go task), and a short-term memory task (Klingberg Matrix forward span). RESULTS: Linear and multiple regression analyses showed no significant relationship between exposure to a musically enriched environment and executive sub-functions (planning, inhibition, and working memory) or short-term memory. The relationship between an enriched musical environment and verbal IQ showed only a trend. DISCUSSION: Experiencing a musically enriched environment does not predict higher performance on executive sub-functions; it may, however, influence verbal IQ.


Subjects
Executive Function, Intelligence, Language, Memory, Short-Term, Music/psychology, Child, Cross-Sectional Studies, Female, Humans, Male, Social Environment
4.
Front Neurosci ; 12: 475, 2018.
Article in English | MEDLINE | ID: mdl-30061809

ABSTRACT

Charles Darwin suggested that the perception of rhythm is common to all animals. While experimental research has only recently begun to find some support for this claim, there are also aspects of rhythm cognition that appear to be species-specific, such as the capability to perceive a regular pulse (or beat) in a varying rhythm. In the current study, using EEG, we adapted an auditory oddball paradigm that allows for disentangling the contributions of beat perception and isochrony to the temporal predictability of the stimulus. We presented two rhesus monkeys (Macaca mulatta) with a rhythmic sequence in two versions: an isochronous version that was acoustically accented such that it could induce a duple meter (like a march), and a jittered version using the same acoustically accented sequence but presented in a randomly timed fashion, thereby disabling beat induction. The results reveal that monkeys are sensitive to the isochrony of the stimulus, but not to its metrical structure. The MMN was influenced by the isochrony of the stimulus, resulting in a larger MMN in the isochronous than in the jittered condition. However, for both monkeys the MMN showed no interaction between metrical position and isochrony. So, while the monkey brain appears to be sensitive to the isochrony of the stimulus, we find no evidence in support of beat perception. We discuss these results in the context of the gradual audiomotor evolution (GAE) hypothesis (Merchant and Honing, 2014), which suggests that beat-based timing is omnipresent in humans but only weakly present or absent in non-human primates.

5.
Front Neurosci ; 12: 103, 2018.
Article in English | MEDLINE | ID: mdl-29541017

ABSTRACT

Background: Research on the effects of music education on cognitive abilities has generated increasing interest across the scientific community. Nonetheless, longitudinal studies investigating the effects of structured music education on cognitive sub-functions are still rare. Prime candidates for investigating a relationship between academic achievement and music education are executive functions such as planning, working memory, and inhibition. Methods: One hundred and forty-seven primary school children (mean age = 6.4 years, SD = 0.65) were followed for 2.5 years. Participants were randomized into four groups: two music intervention groups, one active visual arts group, and a no-arts control group. Neuropsychological tests assessed verbal intelligence and executive functions. Additionally, a national pupil monitor provided data on academic performance. Results: Children in the visual arts group performed better on visuospatial memory tasks than the three other groups. However, test scores on inhibition, planning, and verbal intelligence increased significantly more over time in the two music groups than in the visual arts and no-arts controls. Mediation analyses with executive functions and verbal IQ as mediators of academic performance indicated a possible far-transfer effect from executive sub-functions to academic performance scores. Discussion: The present results indicate a positive influence of long-term music education on cognitive abilities such as inhibition and planning. Of note, following a two-and-a-half-year visual arts program significantly improved scores on a visuospatial memory task. Taken together, this study supports a far-transfer effect from music education to academic achievement mediated by executive sub-functions.

6.
Ann N Y Acad Sci ; 2018 Mar 15.
Article in English | MEDLINE | ID: mdl-29542134

ABSTRACT

In recent years, music and musicality have been the focus of an increasing amount of research effort. This has led to a growing role and visibility of the contribution of (bio)musicology to the field of neuroscience and cognitive sciences at large. While it has been widely acknowledged that there are commonalities between speech, language, and musicality, several researchers explain this by considering musicality as an epiphenomenon of language. However, an alternative hypothesis is that musicality is an innate and widely shared capacity for music that can be seen as a natural, spontaneously developing set of traits based on and constrained by our cognitive abilities and their underlying biology. A comparative study of musicality in humans and well-known animal models (monkeys, birds, pinnipeds) will further our insights on which features of musicality are exclusive to humans and which are shared between humans and nonhuman animals, contribute to an understanding of the musical phenotype, and further constrain existing evolutionary theories of music and musicality.

7.
Front Psychol ; 9: 38, 2018.
Article in English | MEDLINE | ID: mdl-29441035

ABSTRACT

Despite differences in their function and domain-specific elements, syntactic processing in music and language is believed to share cognitive resources. This study investigated whether the simultaneous processing of language and music relies on a common syntactic processor or on more general attentional resources. To this end, we tested musicians and non-musicians using visually presented sentences and aurally presented melodies containing local and long-distance syntactic dependencies, and collected accuracy rates and reaction times. In both sentences and melodies, unexpected syntactic anomalies were introduced. This is the first study to address the processing of local and long-distance dependencies in language and music combined while reducing the effect of sensory memory. Participants were instructed to focus on language (language session), music (music session), or both (dual session). In the language session, musicians and non-musicians performed comparably in terms of accuracy rates and reaction times. As expected, group differences appeared in the music session, with musicians being more accurate than non-musicians and only the latter showing an interaction between the accuracy rates for music and language syntax. In the dual session, musicians were overall more accurate than non-musicians; however, both groups behaved comparably, displaying an interaction between the accuracy rates for language and music syntax responses. In our study, accuracy rates seem to better capture the interaction between language and music syntax, and this interaction seems to indicate the use of distinct yet interacting mechanisms as part of a decision-making strategy, subject to attentional load and domain proficiency. Our study contributes to the long-standing debate about the commonalities between language and music by providing evidence for their interaction at a more domain-general level.

8.
PLoS One ; 13(1): e0190322, 2018.
Article in English | MEDLINE | ID: mdl-29320533

ABSTRACT

Perception of a regular beat in music is inferred from different types of accents. For example, increases in loudness create intensity accents, and the grouping of time intervals in a rhythm creates temporal accents. Accents are expected to occur on the beat: when accents are "missing" on the beat, the beat is more difficult to find. However, it is unclear whether accents occurring off the beat alter beat perception in the same way as missing accents on the beat. Moreover, no one has examined whether intensity accents influence beat perception more or less strongly than temporal accents, nor how musical expertise affects sensitivity to each type of accent. In two experiments, we obtained ratings of the difficulty of finding the beat in rhythms with either temporal or intensity accents, which varied in the number of accents on the beat as well as the number of accents off the beat. In both experiments, the occurrence of accents on the beat facilitated beat detection more in musical experts than in musical novices. In addition, the number of accents on the beat affected beat finding more in rhythms with temporal accents than in rhythms with intensity accents. The effect of accents off the beat was much weaker than the effect of accents on the beat and appeared to depend on musical expertise as well as on the number of accents on the beat: when many accents on the beat are missing, beat perception is quite difficult, and adding accents off the beat may not reduce it further. Overall, the different types of accents were processed qualitatively differently, depending on musical expertise. These findings therefore underscore the importance of designing ecologically valid stimuli when testing beat perception in musical novices, who may need different types of accent information than musical experts to be able to find a beat. Furthermore, our findings stress the importance of carefully designing rhythms for social and clinical applications of beat perception, as not all listeners treat all rhythms alike.


Subjects
Auditory Perception, Music, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged
10.
Front Psychol ; 8: 824, 2017.
Article in English | MEDLINE | ID: mdl-28588533

ABSTRACT

Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.
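
To make the idea concrete, here is a minimal sketch of how meter could be inferred from a quantized rhythm with Bayes' rule; the candidate meters, priors, and per-position onset probabilities are hypothetical illustrations, not values from the published model:

# A minimal sketch of inferring meter from a quantized rhythm via Bayes' rule.
# The candidate meters, priors, and per-position onset probabilities below are
# hypothetical illustrations, not values from the published model.

def infer_meter(onsets, grid_len, meters):
    """Return P(meter | rhythm) for each candidate meter.

    onsets   -- set of quantized grid positions carrying an onset
    grid_len -- number of grid positions in the pattern
    meters   -- dict: name -> (prior, onset probability per metrical position)
    """
    posteriors = {}
    for name, (prior, pos_probs) in meters.items():
        likelihood = 1.0
        period = len(pos_probs)
        for pos in range(grid_len):
            p_onset = pos_probs[pos % period]      # strength of this metrical position
            likelihood *= p_onset if pos in onsets else (1.0 - p_onset)
        posteriors[name] = prior * likelihood
    total = sum(posteriors.values())
    return {name: p / total for name, p in posteriors.items()}

# Hypothetical exposure-derived statistics: strong positions are more likely to carry onsets.
meters = {
    "duple":  (0.5, [0.9, 0.2, 0.5, 0.2]),            # metrical cycle of 4 grid positions
    "triple": (0.5, [0.9, 0.2, 0.2, 0.5, 0.2, 0.2]),  # metrical cycle of 6 grid positions
}
rhythm = {0, 4, 6, 8, 12, 14}   # onset positions of a 16-position quantized pattern
print(infer_meter(rhythm, 16, meters))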

11.
Front Psychol ; 8: 621, 2017.
Article in English | MEDLINE | ID: mdl-28487668

ABSTRACT

We present a hypothesis-driven study of the variation of melody phrases in a collection of Dutch folk songs. We investigate the variation of phrases within the folk songs through a pattern-matching method that detects occurrences of these phrases within folk song variants, and ask: do the phrases that show less variation have different properties from those that show more? We hypothesize that theories of melody recall may predict variation, and therefore investigate phrase length, the position and number of repetitions of a given phrase in the melody in which it occurs, as well as expectancy and motif repetitivity. We show that all of these predictors account for the observed variation to a moderate degree and that, as hypothesized, phrases vary less when they are relatively short, contain highly expected melodic material, occur relatively early in the melody, and contain small pitch intervals. A large portion of the variance is left unexplained by the current model, however, which leads us to a discussion of future approaches to studying the memorability of melodies. An illustrative sketch of this kind of regression follows below.
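
As an illustration of this kind of analysis, the sketch below regresses a phrase-level variation measure on the predictors named above using ordinary least squares; the feature values and variation scores are invented placeholders, not corpus data:

# A sketch of regressing phrase variation on the kinds of predictors named above.
# Feature values and the variation measure are made-up placeholders, not corpus data.
import numpy as np

# Each row: one phrase. Columns: phrase length, position in melody,
# number of repetitions, melodic expectancy, motif repetitivity.
X = np.array([
    [ 8, 1, 3, 0.80, 0.6],
    [12, 4, 1, 0.55, 0.2],
    [ 6, 2, 2, 0.70, 0.5],
    [10, 3, 1, 0.60, 0.3],
    [ 7, 1, 4, 0.85, 0.7],
    [11, 5, 1, 0.50, 0.2],
    [ 9, 2, 2, 0.65, 0.4],
    [13, 6, 1, 0.45, 0.1],
])
y = np.array([0.15, 0.40, 0.20, 0.35, 0.10, 0.45, 0.25, 0.50])  # observed variation per phrase

# Ordinary least squares with an intercept term.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

y_hat = X1 @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", coef)
print("R^2:", round(r2, 3))   # proportion of the variation captured by the predictors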

12.
Front Psychol ; 7: 730, 2016.
Article in English | MEDLINE | ID: mdl-27242635

ABSTRACT

While humans can easily entrain their behavior with the beat in music, this ability is rare among animals. Yet comparative studies in non-human species are needed if we want to understand how and why this ability evolved. Entrainment requires two abilities: (1) recognizing the regularity in the auditory stimulus and (2) adjusting one's own motor output to the perceived pattern. It has been suggested that beat perception and entrainment are linked to the ability for vocal learning. The existence of some bird species showing beat induction, together with the existence of both vocal-learning and vocal non-learning bird taxa, makes birds relevant models for comparative research on rhythm perception and its link to vocal learning. Also, some bird vocalizations show strong regularity in rhythmic structure, suggesting that birds might perceive rhythmic structures. In this paper we review the available experimental evidence for the perception of regularity and rhythms by birds, such as the ability to distinguish regular from irregular stimuli over tempo transformations, and report data from new experiments. While some species show a limited ability to detect regularity, most evidence suggests that birds attend primarily to absolute rather than relative timing of patterns and to local features of stimuli. We conclude that, apart from some large parrot species, there is limited evidence for beat and regularity perception among birds and that the link to vocal learning is unclear. We next report new experiments in which zebra finches and budgerigars (both vocal learners) were first trained to distinguish a regular from an irregular pattern of beats and then tested on various tempo transformations of these stimuli. Both species showed reduced discrimination after tempo transformations. This suggests that, as found in earlier studies, they attended mainly to local temporal features of the stimuli and not to their overall regularity. However, some individuals of both species showed an additional sensitivity to the more global pattern if some local features were left unchanged. Altogether, our study indicates both between- and within-species variation, with birds attending to a mixture of local and global rhythmic features.

13.
Front Psychol ; 7: 817, 2016.
Article in English | MEDLINE | ID: mdl-27313552

ABSTRACT

Whether pitch in language and music is governed by domain-specific or domain-general cognitive mechanisms is contentiously debated. The aim of the present study was to investigate whether mechanisms governing pitch contour perception operate differently when pitch information is interpreted as either speech or music. By modulating listening mode, this study aspired to demonstrate that pitch contour perception relies on domain-specific cognitive mechanisms, which are regulated by top-down influences from language and music. Three groups of participants (Mandarin speakers, Dutch speaking non-musicians, and Dutch musicians) were exposed to identical pitch contours, and tested on their ability to identify these contours in a language and musical context. Stimuli consisted of disyllabic words spoken in Mandarin, and melodic tonal analogs, embedded in a linguistic and melodic carrier phrase, respectively. Participants classified identical pitch contours as significantly different depending on listening mode. Top-down influences from language appeared to alter the perception of pitch contour in speakers of Mandarin. This was not the case for non-musician speakers of Dutch. Moreover, this effect was lacking in Dutch speaking musicians. The classification patterns of pitch contours in language and music seem to suggest that domain-specific categorization is modulated by top-down influences from language and music.

14.
Neuropsychologia ; 85: 80-90, 2016 05.
Article in English | MEDLINE | ID: mdl-26972966

ABSTRACT

Beat perception is the ability to perceive temporal regularity in musical rhythm. When a beat is perceived, predictions about upcoming events can be generated. These predictions can influence processing of subsequent rhythmic events. However, statistical learning of the order of sounds in a sequence can also affect processing of rhythmic events and must be differentiated from beat perception. In the current study, using EEG, we examined the effects of attention and musical abilities on beat perception. To ensure we measured beat perception and not absolute perception of temporal intervals, we used alternating loud and soft tones to create a rhythm with two hierarchical metrical levels. To control for sequential learning of the order of the different sounds, we used temporally regular (isochronous) and jittered rhythmic sequences. The order of sounds was identical in both conditions, but only the regular condition allowed for the perception of a beat. Unexpected intensity decrements were introduced on the beat and offbeat. In the regular condition, both beat perception and sequential learning were expected to enhance detection of these deviants on the beat. In the jittered condition, only sequential learning was expected to affect processing of the deviants. ERP responses to deviants were larger on the beat than offbeat in both conditions. Importantly, this difference was larger in the regular condition than in the jittered condition, suggesting that beat perception influenced responses to rhythmic events in addition to sequential learning. The influence of beat perception was present both with and without attention directed at the rhythm. Moreover, beat perception as measured with ERPs correlated with musical abilities, but only when attention was directed at the stimuli. Our study shows that beat perception is possible when attention is not directed at a rhythm. In addition, our results suggest that attention may mediate the influence of musical abilities on beat perception.
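
For illustration, the sketch below generates sequences in the spirit of this design: alternating loud and soft tones presented either isochronously or with jittered timing, with occasional intensity decrements; the interval durations, jitter range, and deviant probability are assumed values, not the study's actual parameters:

# A sketch of building rhythmic sequences in the spirit of the design above:
# alternating loud/soft tones, presented either isochronously or with jittered
# timing, with occasional unexpected intensity decrements on beat or offbeat
# positions. Interval durations, jitter range, and deviant probability are assumed.
import random

def make_sequence(n_tones=32, ioi=0.5, jittered=False, p_deviant=0.1, seed=0):
    """Return a list of (onset_time_s, intensity) tuples.

    Even positions are 'beat' (loud) tones, odd positions 'offbeat' (soft) tones,
    so the order of sounds is identical in the regular and jittered conditions.
    """
    rng = random.Random(seed)
    sequence, t = [], 0.0
    for i in range(n_tones):
        loud = (i % 2 == 0)                      # beat vs. offbeat position
        intensity = 1.0 if loud else 0.5
        if rng.random() < p_deviant:
            intensity *= 0.7                     # unexpected intensity decrement
        sequence.append((round(t, 3), intensity))
        step = ioi if not jittered else ioi * rng.uniform(0.6, 1.4)  # jitter removes the beat
        t += step
    return sequence

regular  = make_sequence(jittered=False)
jittered = make_sequence(jittered=True)
print(regular[:4])
print(jittered[:4])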


Subjects
Attention/physiology, Evoked Potentials, Auditory/physiology, Music, Periodicity, Serial Learning/physiology, Time Perception/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Auditory Perception/physiology, Electroencephalography, Female, Humans, Male, Middle Aged, Psychoacoustics, Statistics as Topic, Young Adult
15.
Front Neurosci ; 10: 40, 2016.
Article in English | MEDLINE | ID: mdl-26941591

ABSTRACT

Beat deafness, a recently documented form of congenital amusia, provides a unique window into functional specialization of neural circuitry for the processing of musical stimuli: Beat-deaf individuals exhibit deficits that are specific to the detection of a regular beat in music and the ability to move along with a beat. Studies on the neural underpinnings of beat processing in the general population suggest that the auditory system is capable of pre-attentively generating a predictive model of upcoming sounds in a rhythmic pattern, subserved largely within auditory cortex and reflected in mismatch negativity (MMN) and P3 event-related potential (ERP) components. The current study examined these neural correlates of beat perception in two beat-deaf individuals, Mathieu and Marjorie, and a group of control participants under conditions in which auditory stimuli were either attended or ignored. Compared to control participants, Mathieu demonstrated reduced behavioral sensitivity to beat omissions in metrical patterns, and Marjorie showed a bias to identify irregular patterns as regular. ERP responses to beat omissions reveal an intact pre-attentive system for processing beat irregularities in cases of beat deafness, reflected in the MMN component, and provide partial support for abnormalities in later cognitive stages of beat processing, reflected in an unreliable P3b component exhibited by Mathieu, but not Marjorie, compared to control participants. P3 abnormalities observed in the current study resemble P3 abnormalities exhibited by individuals with pitch-based amusia, and are consistent with attention or auditory-motor coupling accounts of deficits in beat perception.

16.
Front Psychol ; 6: 1094, 2015.
Article in English | MEDLINE | ID: mdl-26284015

ABSTRACT

The processing of rhythmic events in music is influenced by the induced metrical structure. Two mechanisms underlying this may be temporal attending and temporal prediction. Temporal fluctuations in attentional resources may influence the processing of rhythmic events by heightening sensitivity at metrically strong positions. Temporal predictions may attenuate responses to events that are highly expected within a metrical structure. In the current study we aimed to disentangle these two mechanisms by examining responses to unexpected sounds, using intensity increments and decrements as deviants. Temporal attending was hypothesized to lead to better detection of deviants in metrically strong (on the beat) than weak (offbeat) positions due to heightened sensitivity on the beat. Temporal prediction was hypothesized to lead to best detection of increments in offbeat positions and decrements on the beat, as they would be most unexpected in these positions. We used a speeded detection task to measure detectability of the deviants under attended conditions (Experiment 1). Under unattended conditions (Experiment 2), we used EEG to measure the mismatch negativity (MMN), an ERP component known to index the detectability of unexpected auditory events. Furthermore, we examined the amplitude of the auditory evoked P1 and N1 responses, which are known to be sensitive to both attention and prediction. We found better detection of small increments in offbeat positions than on the beat, consistent with the influence of temporal prediction (Experiment 1). In addition, we found faster detection of large increments on the beat as opposed to offbeat (Experiment 1), and larger amplitude P1 responses on the beat as compared to offbeat, both in support of temporal attending (Experiment 2). As such, we showed that both temporal attending and temporal prediction shape our processing of metrical rhythm.

17.
Behav Processes ; 115: 37-45, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25725348

ABSTRACT

Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans, exposed to the same stimuli, readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity.
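
As a rough illustration of the stimulus manipulation described above, the sketch below builds an isochronous and an irregular sequence of inter-onset intervals and applies a tempo transformation; the interval values and scaling factor are assumptions, not the actual training stimuli:

# A sketch of the stimulus manipulation described above: an isochronous and an
# irregular sequence of inter-onset intervals (IOIs), plus a tempo transformation
# that rescales all intervals. IOI values and the scaling factor are assumed.
import random

def isochronous(n_intervals=12, ioi_ms=250):
    return [ioi_ms] * n_intervals

def irregular(n_intervals=12, mean_ioi_ms=250, seed=1):
    # Same mean tempo as the isochronous stimulus, but randomly varying intervals.
    rng = random.Random(seed)
    return [round(mean_ioi_ms * rng.uniform(0.5, 1.5)) for _ in range(n_intervals)]

def tempo_transform(iois, factor):
    # Global regularity is preserved; local (absolute) interval durations change.
    return [round(ioi * factor) for ioi in iois]

training_regular   = isochronous()
training_irregular = irregular()
test_regular_fast  = tempo_transform(training_regular, 0.8)   # 20% faster

print(training_regular[:5], training_irregular[:5], test_regular_fast[:5])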


Subjects
Auditory Perception/physiology, Behavior, Animal/physiology, Finches/physiology, Generalization, Psychological/physiology, Music, Time Perception/physiology, Adult, Animals, Conditioning, Operant/physiology, Female, Humans, Male
18.
Philos Trans R Soc Lond B Biol Sci ; 370(1664): 20140088, 2015 Mar 19.
Article in English | MEDLINE | ID: mdl-25646511

ABSTRACT

Musicality can be defined as a natural, spontaneously developing trait based on and constrained by biology and cognition. Music, by contrast, can be defined as a social and cultural construct based on that very musicality. One critical challenge is to delineate the constituent elements of musicality. What biological and cognitive mechanisms are essential for perceiving, appreciating and making music? Progress in understanding the evolution of music cognition depends upon adequate characterization of the constituent mechanisms of musicality and the extent to which they are present in non-human species. We argue for the importance of identifying these mechanisms and delineating their functions and developmental course, as well as suggesting effective means of studying them in human and non-human animals. It is virtually impossible to underpin the evolutionary role of musicality as a whole, but a multicomponent perspective on musicality that emphasizes its constituent capacities, development and neural cognitive specificity is an excellent starting point for a research programme aimed at illuminating the origins and evolution of musical behaviour as an autonomous trait.


Subjects
Biological Evolution, Cognition/physiology, Music, Adaptation, Physiological, Culture, Humans
19.
Philos Trans R Soc Lond B Biol Sci ; 370(1664): 20140092, 2015 Mar 19.
Article in English | MEDLINE | ID: mdl-25646515

ABSTRACT

Advances in molecular technologies make it possible to pinpoint genomic factors associated with complex human traits. For cognition and behaviour, identification of underlying genes provides new entry points for deciphering the key neurobiological pathways. In the past decade, the search for genetic correlates of musicality has gained traction. Reports have documented familial clustering for different extremes of ability, including amusia and absolute pitch (AP), with twin studies demonstrating high heritability for some music-related skills, such as pitch perception. Certain chromosomal regions have been linked to AP and musical aptitude, while individual candidate genes have been investigated in relation to aptitude and creativity. Most recently, researchers in this field started performing genome-wide association scans. Thus far, studies have been hampered by relatively small sample sizes and limitations in defining components of musicality, including an emphasis on skills that can only be assessed in trained musicians. With opportunities to administer standardized aptitude tests online, systematic large-scale assessment of musical abilities is now feasible, an important step towards high-powered genome-wide screens. Here, we offer a synthesis of existing literatures and outline concrete suggestions for the development of comprehensive operational tools for the analysis of musical phenotypes.


Subjects
Auditory Perceptual Disorders/genetics, Music, Genome-Wide Association Study, Humans, Research
20.
Int J Psychophysiol ; 96(1): 23-8, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25722025

ABSTRACT

Most high-level auditory functions require one to detect the onset and offset of sound sequences as well as to register the rate at which sounds are presented within the sound trains. By recording event-related brain potentials to the onsets and offsets of tone trains, as well as to changes in presentation rate, we tested whether these fundamental auditory capabilities are functional at birth. Each of these events elicited significant event-related potential components in sleeping healthy neonates. The data thus demonstrate that the newborn brain is sensitive to these acoustic features, suggesting that, already at birth, infants are attuned to the temporal aspects of segregating sound sources and of speech and music perception.


Subjects
Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Signal Detection, Psychological/physiology, Sound, Acoustic Stimulation, Analysis of Variance, Electroencephalography, Female, Humans, Infant, Male