Results 1 - 13 of 13
1.
PLoS One ; 15(3): e0229109, 2020.
Article in English | MEDLINE | ID: mdl-32130244

ABSTRACT

Music and language have long been considered two distinct cognitive faculties governed by domain-specific cognitive and neural mechanisms. Recent work on the domain-specificity of pitch processing, however, suggests that pitch in both domains may be governed by shared neural mechanisms. The current study explored the domain-specificity of pitch processing by simultaneously presenting pitch contours in speech and music to speakers of a tonal language and measuring behavioral responses and event-related potentials (ERPs). Native speakers of Mandarin were exposed to concurrent pitch contours in melody and speech. Contours in melody emulated those in speech and were either congruent or incongruent with the pitch contour of the lexical tone (i.e., rising or falling). Component magnitudes of the N2b and N400 were used as indices of lexical processing. We found that the N2b was modulated by melodic pitch: incongruent items evoked significantly stronger amplitudes. The N400 showed a trend toward the same modulation. Interestingly, these effects were present only on rising tones. The amplitude and time course of the N2b and N400 suggest that melodic pitch contours interfere with both early and late stages of phonological and semantic processing.


Subjects
Language, Music/psychology, Pitch Perception/physiology, Semantics, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adult, Asian Continental Ancestry Group/psychology, Auditory Perception/physiology, Electroencephalography, Evoked Potentials, Female, Humans, Male, Neural Pathways/physiology, Reaction Time, Young Adult
2.
Psychol Res ; 2019 Jan 09.
Article in English | MEDLINE | ID: mdl-30627768

ABSTRACT

The Speech-to-Song (STS) Illusion refers to a dramatic shift in the perception of short speech fragments which, when repeatedly presented, may start to sound like song. Anecdotally, once a speech fragment is perceived as song, its melody is difficult to unhear, and these temporal dynamics of the STS illusion have theoretical implications. The goal of the current study was to capture this temporal effect. In our experiment, speech fragments that initially did not elicit the STS illusion were manipulated to have increasingly stable F0 contours in order to strengthen the perceived 'song-likeness' of a fragment. Over the course of trials, the speech fragments with manipulated contours were repeatedly presented within blocks of decreasing, increasing, or random orders of F0 manipulation. Results showed that a presentation order in which participants first heard the sentence with the maximum amount of F0 manipulation (decreasing condition) led participants to give consistently higher song-like ratings than the other presentation orders (increasing or random conditions). Our results thus capture the commonly reported phenomenon that it is hard to 'unhear' the illusion once a speech segment has been perceived as song.

3.
J Acoust Soc Am ; 144(1): 92, 2018 07.
Article in English | MEDLINE | ID: mdl-30075662

ABSTRACT

Establishing non-native phoneme categories can be a notoriously difficult endeavour, in both speech perception and speech production. This study asks how these two domains interact in the course of this learning process. It investigates the effect of perceptual learning and related production practice of a challenging non-native category on the perception and/or production of that category. A four-day perceptual training protocol on the British English /æ/-/ɛ/ vowel contrast was combined with either related or unrelated production practice. After feedback on perceptual categorisation of the contrast, native Dutch participants in the related production group (N = 19) pronounced the trial's correct answer, while participants in the unrelated production group (N = 19) pronounced similar but phonologically unrelated words. Comparison of pre- and post-tests showed significant improvement over the course of training in both perception and production, but no differences between the groups were found. The lack of an effect of production practice is discussed in the light of previous, competing results and models of second-language speech perception and production. This study confirms that, even in the context of related production practice, perceptual training boosts production learning.


Subjects
Learning/physiology, Speech Acoustics, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Language, Male, Phonetics, Young Adult
4.
PLoS One ; 13(5): e0195831, 2018.
Article in English | MEDLINE | ID: mdl-29718946

ABSTRACT

This study investigated the effect of handedness on pianists' ability to adjust their keyboard performance skills to new spatial and motor mappings. Left- and right-handed pianists practiced simple melodies on a regular MIDI piano keyboard (practice phase) and were then asked to perform them with modified melodic contours (the same or a reversed melodic contour, causing a change of fingering) and on a reversed MIDI piano keyboard (test phase). The difference in performance duration between the practice and test phases, as well as the number of errors played, served as test measures. Overall, a stronger effect was observed for modified melodic contours than for the reversed keyboard. Furthermore, we observed a trend for left-handed pianists to be quicker and more accurate in playing melodies when reversing their fingering for reversed contours in their left-hand performances. This suggests that handedness may influence pianists' ability to adjust to new spatial and motor mappings.


Subjects
Functional Laterality/physiology, Motor Activity, Music, Pitch Perception, Spatial Behavior/physiology, Adult, Female, Humans, Learning, Male
5.
Front Hum Neurosci ; 10: 288, 2016.
Article in English | MEDLINE | ID: mdl-27375469

ABSTRACT

Previous research suggests that mastering languages with distinct rather than similar rhythmic properties enhances musical rhythmic perception. This study investigates whether learning a second language (L2) contributes to enhanced musical rhythmic perception in general, regardless of the rhythmic properties of the first and second languages. Additionally, we investigated whether this perceptual enhancement could instead be explained by exposure to musical rhythmic complexity, such as the use of compound meter in Turkish music. Finally, we examined whether an enhancement of musical rhythmic perception can be observed among L2 learners whose first language relies heavily on pitch information, as is the case with tonal languages. We therefore tested Turkish, Dutch, and Mandarin L2 learners of English, as well as Turkish monolinguals, on their musical rhythmic perception. Participants' phonological and working memory capacities, melodic aptitude, years of formal musical training, and daily exposure to music were assessed to account for cultural and individual differences that could affect rhythmic ability. Our results suggest that mastering an L2, rather than exposure to musical rhythmic complexity, explains individuals' enhanced musical rhythmic perception. An even stronger enhancement was observed for L2 learners whose first and second languages differ in their rhythmic properties, as the superior performance of Turkish compared with Dutch L2 learners of English suggests. This stronger enhancement appears even among L2 learners whose first language relies heavily on pitch information, as the performance of Mandarin L2 learners of English indicates. Our findings provide further support for cognitive transfer between the language and music domains.

6.
Psychon Bull Rev ; 23(2): 548-55, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26370217

ABSTRACT

Visualizing acoustic features of speech has proven helpful in speech therapy; however, it is as yet unclear how to create intuitive and fitting visualizations. To better understand the mappings from aspects of speech sound to visual space, a large web-based experiment (n = 249) was performed to evaluate spatial parameters that may optimally represent pitch and loudness of speech. To this end, five novel animated visualizations were developed and presented in pairwise comparisons, together with a static visualization. Pitch and loudness of speech were each mapped onto either the vertical (y-axis) or the size (z-axis) dimension, or combined (with size indicating loudness and vertical position indicating pitch height), and visualized as an animation along the horizontal dimension (x-axis) over time. The results indicated, firstly, a general preference for the y-axis for both pitch and loudness, with pitch ranking higher than loudness in terms of fit. Secondly, the data suggest that representing pitch and loudness combined in a single visualization is preferred over visualizing only one dimension. Finally, the z-axis, although not preferred, was evaluated as corresponding better to loudness than to pitch. This relation between sound and visual space has not been reported previously for speech sounds, and elaborates on earlier findings for musical material. In addition to elucidating more general mappings between auditory and visual modalities, the findings provide us with a method of visualizing speech that may be helpful in clinical applications such as computerized speech therapy, or other feedback-based learning paradigms.


Subjects
Loudness Perception/physiology, Pitch Perception/physiology, Speech Perception/physiology, Visual Perception/physiology, Adult, Aged, Female, Humans, Male, Middle Aged, Young Adult
7.
Front Psychol ; 5: 1318, 2014.
Article in English | MEDLINE | ID: mdl-25505434

ABSTRACT

Although the high-variability training method can enhance learning of non-native speech categories, this can depend on individuals' aptitude. The current study asked how general the effects of perceptual aptitude are by testing whether they occur with training materials spoken by native speakers and whether they depend on the nature of the to-be-learned material. Forty-five native Dutch listeners took part in a 5-day training procedure in which they identified bisyllabic Mandarin pseudowords (e.g., asa) pronounced with different lexical tone combinations. The training materials were presented to different groups of listeners at three levels of variability: low (many repetitions of a limited set of words recorded by a single speaker), medium (fewer repetitions of a more variable set of words recorded by three speakers), and high (similar to medium but with five speakers). Overall, variability did not influence learning performance, but this was due to an interaction with individuals' perceptual aptitude: increasing variability hindered improvements in performance for low-aptitude perceivers while it helped improvements in performance for high-aptitude perceivers. These results show that the previously observed interaction between individuals' aptitude and effects of degree of variability extends to natural tokens of Mandarin speech. This interaction was not found, however, in a closely matched study in which native Dutch listeners were trained on the Japanese geminate/singleton consonant contrast. This may indicate that the effectiveness of high-variability training depends not only on individuals' aptitude in speech perception but also on the nature of the categories being acquired.

8.
Front Psychol ; 5: 1422, 2014.
Article in English | MEDLINE | ID: mdl-25566113

ABSTRACT

Various aspects of linguistic experience influence the way we segment, represent, and process speech signals. The Japanese phonetic and orthographic systems represent geminate consonants (double consonants, e.g., /ss/, /kk/) in a unique way compared to other languages: one abstract representation is used to characterize the first part of geminate consonants despite the acoustic difference between two distinct realizations of geminate consonants (silence in the case of, e.g., stop consonants and elongation in the case of fricative consonants). The current study tests whether this discrepancy between abstract representations and acoustic realizations influences how native speakers of Japanese perceive geminate consonants. The experiments used pseudowords containing either the geminate consonant /ss/ or a manipulated version in which the first part was replaced by silence, /_s/. The sound /_s/ is acoustically similar to /ss/, yet does not occur in everyday speech. Japanese listeners showed a bias to group these two types into the same category, while Italian and Dutch listeners distinguished them. The results thus confirm that distinguishing fricative geminate consonants with silence from those with sustained frication is not crucial for Japanese native listening. Based on this observation, we propose that native speakers of Japanese tend to segment geminate consonants into two parts and that the first portion of fricative geminates is perceptually similar to a silent duration. This representation is compatible with both Japanese orthography and phonology. Unlike previous studies, which were inconclusive as to how native speakers segment geminate consonants, our study demonstrated a relatively strong effect of Japanese-specific listening. Thus, the current experimental methods may open up new lines of investigation into the relationship between the development of phonological representation, orthography, and speech perception.

9.
J Acoust Soc Am ; 134(2): 1324-35, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23927129

ABSTRACT

This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.


Subjects
Learning, Multilingualism, Phonetics, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Analysis of Variance, Speech Audiometry, Discrimination (Psychology), Female, Humans, Male, Recognition (Psychology), Time Factors, Young Adult
10.
Front Neurosci ; 7: 265, 2013.
Article in English | MEDLINE | ID: mdl-24415996

ABSTRACT

Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.
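The core idea of the first part of the paper (a binary classifier interpreting single-trial ERP measurements from a standard/deviant mismatch paradigm) can be sketched as follows. This is a generic illustration using synthetic data and scikit-learn, not the authors' actual pipeline; channel counts, effect sizes, and the classifier choice are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic single-trial "EEG" epochs: 200 trials x 32 channels x 50 samples.
# Deviant trials (label 1) carry a small added negativity, an MMN-like
# deflection in a mid-epoch time window.
n_trials, n_channels, n_samples = 200, 32, 50
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :, 20:30] -= 0.3  # MMN-like effect on deviant trials

# Flatten each epoch into a feature vector and train a regularized linear
# classifier, evaluated with cross-validation as is standard for
# single-trial ERP decoding. The continuous decision values of such a
# classifier are what could serve as an online neurofeedback signal.
clf = make_pipeline(StandardScaler(), LogisticRegression(C=0.01, max_iter=1000))
scores = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")  # well above chance (0.5)
```

In an online setting, `clf.decision_function` applied to each incoming epoch would provide the continuous single-trial index of perceptual discrimination the abstract describes.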

11.
Acta Psychol (Amst) ; 138(1): 1-10, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21726835

ABSTRACT

Two cross-linguistic experiments comparing musicians and non-musicians were performed in order to examine whether musicians have enhanced perception of specific acoustical features of speech in a second language (L2). These discrimination and identification experiments examined the perception of various speech features; namely, the timing and quality of Japanese consonants, and the quality of Dutch vowels. We found that musical experience was more strongly associated with discrimination performance rather than identification performance. The enhanced perception was observed not only with respect to L2, but also L1. It was most pronounced when tested with Japanese consonant timing. These findings suggest the following: 1) musicians exhibit enhanced early acoustical analysis of speech, 2) musical training does not equally enhance the perception of all acoustic features automatically, and 3) musicians may enjoy an advantage in the perception of acoustical features that are important in both language and music, such as pitch and timing.


Subjects
Language, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Choice Behavior/physiology, Female, Humans, Linguistics, Male, Music, Pitch Perception/physiology, Reaction Time/physiology, Young Adult
12.
Psychol Res ; 75(2): 107-21, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20574662

ABSTRACT

A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving either high-level feedback on the expressive style of their performances, low-level feedback on the timing and dynamics of the performed notes, or no feedback. The high-level feedback was based on a Bayesian analysis of the performances, while the low-level feedback was based on the raw participant timing and dynamics data. Results indicated that neither form of feedback led to significantly smaller timing and dynamics errors. However, high-level feedback did lead to a higher proficiency in imitating the expressive style of the target performances, as indicated by a probabilistic measure of expressive style. We conclude that, while potentially disruptive to timing processes involved in music performance due to extraneous cognitive load, high-level visual feedback can improve participant imitations of expressive performance features.


Subjects
Sensory Feedback/physiology, Learning/physiology, Music, Psychomotor Performance/physiology, Humans, Young Adult
13.
Neuroimage ; 56(2): 843-9, 2011 May 15.
Article in English | MEDLINE | ID: mdl-20541612

ABSTRACT

In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both within and across participants, using only time-domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem using single trials, and with multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment across participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed.


Subjects
Auditory Perception/physiology, Brain Mapping, Auditory Evoked Potentials/physiology, Music/psychology, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Male, Middle Aged, Computer-Assisted Signal Processing, Young Adult
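The pattern in this last study (modest single-trial accuracy that climbs toward ceiling as trials of the same fragment are averaged) can be illustrated with a toy nearest-template classifier on simulated time-domain responses. This is a simplified stand-in for the paper's actual classifier; the template and noise parameters are assumptions chosen only to mimic the reported trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 7 musical "fragments", each with a fixed time-domain ERP
# template; single trials are the template buried in heavy noise, as in EEG.
n_classes, n_samples, noise_sd = 7, 300, 8.0
templates = rng.normal(size=(n_classes, n_samples))

def classify(avg_response):
    # Nearest-template classifier: pick the fragment whose template
    # correlates best with the averaged time-domain response.
    corr = [np.corrcoef(avg_response, t)[0, 1] for t in templates]
    return int(np.argmax(corr))

# Averaging over repeated presentations of the same fragment suppresses
# noise and boosts accuracy, mirroring the single-trial vs. six-trial gain.
accs = {}
for n_reps in (1, 6):
    correct = 0
    for _ in range(200):
        label = int(rng.integers(n_classes))
        trials = templates[label] + rng.normal(scale=noise_sd,
                                               size=(n_reps, n_samples))
        correct += classify(trials.mean(axis=0)) == label
    accs[n_reps] = correct / 200
    print(f"{n_reps} presentation(s): accuracy {accs[n_reps]:.2f}")
```

Averaging n trials reduces the noise standard deviation by a factor of sqrt(n), which is why six presentations push the seven-class problem toward near-perfect accuracy in this sketch.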