Results 1 - 18 of 18
1.
Sci Rep ; 13(1): 4735, 2023 03 23.
Article in English | MEDLINE | ID: mdl-36959270

ABSTRACT

Music is an integral part of daily human life, and certain types of music are often associated with certain contexts, such as specific music for sleeping or for studying. The mood-arousal hypothesis suggests that music used for studying should be uplifting to boost arousal and increase cognitive performance, while previous studies suggest that music used as a sleep aid should be calm, gentle, and slow to decrease arousal. In this study, we created the Study music dataset by collecting tracks from Spotify playlists with the words 'study' or 'studying' in the title or description. In comparison with a pre-existing dataset, the Sleep music dataset, we show that the music's audio features, as defined by Spotify, are highly similar. Additionally, the two datasets share most of the same genres and have similar subgroups after a k-means clustering analysis. We suggest that both sleep music and study music aim to create a pleasant but not too disturbing auditory environment, which enables one to focus when studying and lowers arousal for sleeping. Using large Spotify-based datasets, we were able to uncover similarities between music used in two contexts that one would expect to differ.


Subject(s)
Music, Humans, Music/psychology, Emotions, Affect, Arousal, Auditory Perception
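The k-means subgrouping described in the abstract above can be sketched roughly as follows. This is a minimal illustration with made-up feature values; the feature names follow Spotify's audio-features API, and the actual analysis in the paper may differ in features, scaling, and cluster count:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: one track's [energy, valence, tempo, acousticness]
# (placeholder values, not real Spotify data)
tracks = np.array([
    [0.20, 0.30, 70.0, 0.90],   # calm, slow, acoustic
    [0.25, 0.35, 75.0, 0.85],
    [0.80, 0.70, 140.0, 0.10],  # energetic, fast
    [0.75, 0.65, 135.0, 0.15],
])

# Standardize so tempo (large numeric scale) does not dominate distances
X = StandardScaler().fit_transform(tracks)

# Partition tracks into subgroups by audio-feature similarity
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # cluster assignments; calm and energetic tracks separate
```

Comparing the cluster structure of two such datasets (study vs. sleep playlists) is one way to quantify how similar their audio-feature subgroups are.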
3.
Front Psychol ; 12: 689505, 2021.
Article in English | MEDLINE | ID: mdl-34707530

ABSTRACT

Music is not only the art of organized sound but also a compound of social interaction among people, built upon social and environmental foundations. Since the beginning of the COVID-19 outbreak, containment measures such as shelter-in-place, lockdown, social distancing, and self-quarantine have severely impacted the foundation of human society, resulting in a drastic change in our everyday experience. In this paper, the relationships between musical behavior, lifestyle, and psychological states during the shelter-in-place period of the COVID-19 pandemic are investigated. An online survey on musical experience, lifestyle changes, stress level, musical behaviors, media usage, and environmental sound perception was conducted in early June 2020. Responses from 620 people in 24 countries were collected, with the largest proportions coming from the U.S. (55.5%) and India (21.4%). Structural equation modeling (SEM) analysis revealed causal relationships between lifestyle, stress, and music behaviors. Elements such as stress-level change, work risk, and staying home contributed to changes in musical experiences, such as moderating emotion with music, feeling emotional with music, and being more attentive to music. Stress-level change was correlated with work risk and income change, and people who started living with others due to the outbreak, especially with their children, indicated less change in stress level. People with more stress-level change tended to use music more purposefully for their mental well-being, such as to moderate emotions, to influence mood, and to relax. In addition, people with more stress-level change tended to be more annoyed by neighbors' noise. Housing type was not directly associated with annoyance; however, attention to environmental sounds decreased when the housing was smaller. Attention to environmental and musical sounds and the emotional responses to them are highly inter-correlated.
Multi-group SEM based on musicians showed that the causal relationship structure for professional musicians differs from that of less-experienced musicians. For professional musicians, staying at home was the only component that caused all musical behavior changes; stress did not cause musical behavior changes. Regarding Internet use, listening to music via YouTube and streaming was preferred over TV and radio, especially among less-experienced musicians, while participation in the online music community was preferred by more advanced musicians. This work suggests that social, environmental, and personal factors and limitations influence the changes in our musical behavior, perception of sonic experience, and emotional recognition, and that people actively accommodated the unusual pandemic situations using music and Internet technologies.

4.
Acta Psychol (Amst) ; 220: 103417, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34555564

ABSTRACT

The effect of background music (BGM) on cognitive task performance is a popular topic. However, the evidence is not converging: experimental studies show mixed results depending on the task, the type of music used, and individual characteristics. Here, we explored how people use BGM to perform optimally in various everyday cognitive tasks, such as reading, writing, memorizing, and critical thinking. Specifically, we investigated the frequency of BGM usage, preferred music types, beliefs about the scientific evidence on BGM, and individual characteristics such as age, extraversion, and musical background. Although the results confirmed highly diverse strategies among individuals regarding when, how often, why, and what type of BGM is used, we found several general tendencies: people tend to use less BGM when engaged in more difficult tasks, they become less critical about the type of BGM when engaged in easier tasks, and there is a negative correlation between frequency of BGM use and age, indicating that younger generations tend to use more BGM than older adults. The current and previous evidence are discussed in light of existing theories. Altogether, this study identifies essential variables to consider in future research and further advances a theory-driven perspective in the field.


Subject(s)
Music, Aged, Auditory Perception, Cognition, Humans, Reading, Task Performance and Analysis
5.
Front Hum Neurosci ; 15: 784026, 2021.
Article in English | MEDLINE | ID: mdl-35069154

ABSTRACT

This study compared 30 older musicians and 30 age-matched non-musicians to investigate the association between lifelong musical instrument training and age-related cognitive decline and brain atrophy (musicians: mean age 70.8 years, musical experience 52.7 years; non-musicians: mean age 71.4 years, no or less than 3 years of musical experience). Although previous research has demonstrated that young musicians have larger gray matter volume (GMV) in the auditory-motor cortices and cerebellum than non-musicians, little is known about older musicians. Music imagery in young musicians is also known to share a neural underpinning [the supramarginal gyrus (SMG) and cerebellum] with music performance. Thus, we hypothesized that older musicians would show superiority to non-musicians in some of the abovementioned brain regions. Behavioral performance, GMV, and brain activity, including functional connectivity (FC) during melodic working memory (MWM) tasks, were evaluated in both groups. Behaviorally, musicians exhibited a much higher tapping speed than non-musicians, and tapping speed was correlated with executive function in musicians. Structural analyses revealed larger GMVs in both sides of the cerebellum of musicians, and importantly, this was maintained until very old age. Task-related FC analyses revealed that musicians possessed greater cerebellar-hippocampal FC, which was correlated with tapping speed. Furthermore, musicians showed higher activation in the SMG during MWM tasks; this was correlated with earlier commencement of instrumental training. These results indicate advantages or heightened coupling in brain regions associated with music performance and imagery in musicians. We suggest that lifelong instrumental training highly predicts the structural maintenance of the cerebellum and related cognitive maintenance in old age.

6.
PLoS One ; 15(3): e0229109, 2020.
Article in English | MEDLINE | ID: mdl-32130244

ABSTRACT

Music and language have long been considered two distinct cognitive faculties governed by domain-specific cognitive and neural mechanisms. However, recent work on the domain-specificity of pitch processing suggests that pitch is governed by shared neural mechanisms across the two domains. The current study explored the domain-specificity of pitch processing by simultaneously presenting pitch contours in speech and music to speakers of a tonal language, and measuring behavioral responses and event-related potentials (ERPs). Native speakers of Mandarin were exposed to concurrent pitch contours in melody and speech. Contours in melody emulated those in speech and were either congruent or incongruent with the pitch contour of the lexical tone (i.e., rising or falling). Component magnitudes of the N2b and N400 were used as indices of lexical processing. We found that the N2b was modulated by melodic pitch; incongruent items evoked significantly stronger amplitudes. There was a trend for the N400 to be modulated in the same way. Interestingly, these effects were present only on rising tones. The amplitude and time-course of the N2b and N400 may suggest an interference of melodic pitch contours with both early and late stages of phonological and semantic processing.


Subject(s)
Language, Music/psychology, Pitch Perception/physiology, Semantics, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adult, Asian People/psychology, Auditory Perception/physiology, Electroencephalography, Evoked Potentials, Female, Humans, Male, Neural Pathways/physiology, Reaction Time, Young Adult
7.
Psychol Res ; 84(5): 1451-1459, 2020 Jul.
Article in English | MEDLINE | ID: mdl-30627768

ABSTRACT

The Speech-to-Song Illusion (STS) refers to a dramatic shift in our perception of short speech fragments which, when repeatedly presented, may start to sound like song. Anecdotally, once a speech fragment is perceived as a song, its melody is difficult to unhear, and this temporal dynamic of the STS illusion has theoretical implications. The goal of the current study was to capture this temporal effect. In our experiment, speech fragments that initially did not elicit the STS illusion were manipulated to have increasingly stable F0 contours to strengthen the perceived 'song-likeness' of a fragment. Over the course of trials, the speech fragments with manipulated contours were repeatedly presented within blocks of decreasing, increasing, or random orders of F0 manipulation. Results showed that a presentation order in which participants first heard the sentence with the maximum amount of F0 manipulation (the decreasing condition) led participants to give consistently higher song-like ratings than the other presentation orders (the increasing and random conditions). Our results thus capture the commonly reported phenomenon that it is hard to 'unhear' the illusion once a speech segment has been perceived as song.


Subject(s)
Illusions/psychology, Language, Music, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Female, Humans, Male, Middle Aged, Young Adult
8.
J Acoust Soc Am ; 144(1): 92, 2018 07.
Article in English | MEDLINE | ID: mdl-30075662

ABSTRACT

Establishing non-native phoneme categories can be a notoriously difficult endeavour, in both speech perception and speech production. This study asks how these two domains interact in the course of this learning process. It investigates the effect of perceptual learning and related production practice of a challenging non-native category on the perception and/or production of that category. A four-day perceptual training protocol on the British English /æ/-/ɛ/ vowel contrast was combined with either related or unrelated production practice. After feedback on perceptual categorisation of the contrast, native Dutch participants in the related production group (N = 19) pronounced the trial's correct answer, while participants in the unrelated production group (N = 19) pronounced similar but phonologically unrelated words. Comparison of pre- and post-tests showed significant improvement over the course of training in both perception and production, but no differences between the groups were found. The lack of an effect of production practice is discussed in the light of previous, competing results and models of second-language speech perception and production. This study confirms that, even in the context of related production practice, perceptual training boosts production learning.


Subject(s)
Learning/physiology, Speech Acoustics, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Language, Male, Phonetics, Young Adult
9.
PLoS One ; 13(5): e0195831, 2018.
Article in English | MEDLINE | ID: mdl-29718946

ABSTRACT

This study investigated the effect of handedness on pianists' ability to adjust their keyboard performance skills to new spatial and motor mappings. Left- and right-handed pianists practiced simple melodies on a regular MIDI piano keyboard (practice phase) and were then asked to perform these with modified melodic contours (the same or reversed melodic contour, causing a change of fingering) and on a reversed MIDI piano keyboard (test phase). The difference in performance duration between the practice and the test phase, as well as the number of errors played, were used as test measures. Overall, a stronger effect was observed for modified melodic contours than for the reversed keyboard. Furthermore, we observed a trend for left-handed pianists to be quicker and more accurate in playing melodies when reversing their fingering with reversed contours in their left-hand performances. This suggests that handedness may influence pianists' skill in adjusting to new spatial and motor mappings.


Subject(s)
Functional Laterality/physiology, Motor Activity, Music, Pitch Perception, Spatial Behavior/physiology, Adult, Female, Humans, Learning, Male
10.
Front Hum Neurosci ; 10: 288, 2016.
Article in English | MEDLINE | ID: mdl-27375469

ABSTRACT

Previous research suggests that mastering languages with distinct rather than similar rhythmic properties enhances musical rhythmic perception. This study investigated whether learning a second language (L2) contributes to enhanced musical rhythmic perception in general, regardless of the rhythmic properties of the first and second languages. Additionally, we investigated whether this perceptual enhancement could instead be explained by exposure to musical rhythmic complexity, such as the use of compound meter in Turkish music. Finally, we investigated whether an enhancement of musical rhythmic perception could be observed among L2 learners whose first language relies heavily on pitch information, as is the case with tonal languages. We therefore tested Turkish, Dutch, and Mandarin L2 learners of English, as well as Turkish monolinguals, on their musical rhythmic perception. Participants' phonological and working memory capacities, melodic aptitude, years of formal musical training, and daily exposure to music were assessed to account for cultural and individual differences that could impact their rhythmic ability. Our results suggest that mastering an L2, rather than exposure to musical rhythmic complexity, could explain individuals' enhanced musical rhythmic perception. An even stronger enhancement was observed for L2 learners whose first and second languages differ in their rhythmic properties, as the enhanced performance of Turkish relative to Dutch L2 learners of English seems to suggest. This stronger enhancement appears to hold even among L2 learners whose first language relies heavily on pitch information, as the performance of Mandarin L2 learners of English indicates. Our findings provide further support for a cognitive transfer between the language and music domains.

11.
Psychon Bull Rev ; 23(2): 548-55, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26370217

ABSTRACT

Visualizing acoustic features of speech has proven helpful in speech therapy; however, it is as yet unclear how to create intuitive and fitting visualizations. To better understand the mappings from speech sound aspects to visual space, a large web-based experiment (n = 249) was performed to evaluate spatial parameters that may optimally represent pitch and loudness of speech. To this end, five novel animated visualizations were developed and presented in pairwise comparisons, together with a static visualization. Pitch and loudness of speech were each mapped onto either the vertical (y-axis) or the size (z-axis) dimension, or combined (with size indicating loudness and vertical position indicating pitch height) and visualized as an animation along the horizontal dimension (x-axis) over time. The results indicated that firstly, there is a general preference towards the use of the y-axis for both pitch and loudness, with pitch ranking higher than loudness in terms of fit. Secondly, the data suggest that representing both pitch and loudness combined in a single visualization is preferred over visualization in only one dimension. Finally, the z-axis, although not preferred, was evaluated as corresponding better to loudness than to pitch. This relation between sound and visual space has not been reported previously for speech sounds, and elaborates earlier findings on musical material. In addition to elucidating more general mappings between auditory and visual modalities, the findings provide us with a method of visualizing speech that may be helpful in clinical applications such as computerized speech therapy, or other feedback-based learning paradigms.


Subject(s)
Loudness Perception/physiology, Pitch Perception/physiology, Speech Perception/physiology, Visual Perception/physiology, Adult, Aged, Female, Humans, Male, Middle Aged, Young Adult
12.
Front Psychol ; 5: 1318, 2014.
Article in English | MEDLINE | ID: mdl-25505434

ABSTRACT

Although the high-variability training method can enhance learning of non-native speech categories, this can depend on individuals' aptitude. The current study asked how general the effects of perceptual aptitude are by testing whether they occur with training materials spoken by native speakers and whether they depend on the nature of the to-be-learned material. Forty-five native Dutch listeners took part in a 5-day training procedure in which they identified bisyllabic Mandarin pseudowords (e.g., asa) pronounced with different lexical tone combinations. The training materials were presented to different groups of listeners at three levels of variability: low (many repetitions of a limited set of words recorded by a single speaker), medium (fewer repetitions of a more variable set of words recorded by three speakers), and high (similar to medium but with five speakers). Overall, variability did not influence learning performance, but this was due to an interaction with individuals' perceptual aptitude: increasing variability hindered improvements in performance for low-aptitude perceivers while it helped improvements in performance for high-aptitude perceivers. These results show that the previously observed interaction between individuals' aptitude and effects of degree of variability extends to natural tokens of Mandarin speech. This interaction was not found, however, in a closely matched study in which native Dutch listeners were trained on the Japanese geminate/singleton consonant contrast. This may indicate that the effectiveness of high-variability training depends not only on individuals' aptitude in speech perception but also on the nature of the categories being acquired.

13.
Front Psychol ; 5: 1422, 2014.
Article in English | MEDLINE | ID: mdl-25566113

ABSTRACT

Various aspects of linguistic experience influence the way we segment, represent, and process speech signals. The Japanese phonetic and orthographic systems represent geminate consonants (double consonants, e.g., /ss/, /kk/) in a unique way compared to other languages: one abstract representation is used to characterize the first part of geminate consonants despite the acoustic difference between two distinct realizations of geminate consonants (silence in the case of e.g., stop consonants and elongation in the case of fricative consonants). The current study tests whether this discrepancy between abstract representations and acoustic realizations influences how native speakers of Japanese perceive geminate consonants. The experiments used pseudo words containing either the geminate consonant /ss/ or a manipulated version in which the first part was replaced by silence /_s/. The sound /_s/ is acoustically similar to /ss/, yet does not occur in everyday speech. Japanese listeners demonstrated a bias to group these two types into the same category while Italian and Dutch listeners distinguished them. The results thus confirmed that distinguishing fricative geminate consonants with silence from those with sustained frication is not crucial for Japanese native listening. Based on this observation, we propose that native speakers of Japanese tend to segment geminated consonants into two parts and that the first portion of fricative geminates is perceptually similar to a silent duration. This representation is compatible with both Japanese orthography and phonology. Unlike previous studies that were inconclusive in how native speakers segment geminate consonants, our study demonstrated a relatively strong effect of Japanese specific listening. Thus the current experimental methods may open up new lines of investigation into the relationship between development of phonological representation, orthography and speech perception.

14.
J Acoust Soc Am ; 134(2): 1324-35, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23927129

ABSTRACT

This study reports effects of a high-variability training procedure on nonnative learning of a Japanese geminate-singleton fricative contrast. Thirty native speakers of Dutch took part in a 5-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. Participants were trained with either many repetitions of a limited set of words recorded by a single speaker (low-variability training) or with fewer repetitions of a more variable set of words recorded by multiple speakers (high-variability training). Both types of training enhanced identification of speech but not of nonspeech materials, indicating that learning was domain specific. High-variability training led to superior performance in identification but not in discrimination tests, and supported better generalization of learning as shown by transfer from the trained fricatives to the identification of untrained stops and affricates. Variability thus helps nonnative listeners to form abstract categories rather than to enhance early acoustic analysis.


Subject(s)
Learning, Multilingualism, Phonetics, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Analysis of Variance, Speech Audiometry, Discrimination (Psychology), Female, Humans, Male, Recognition (Psychology), Time Factors, Young Adult
15.
Front Neurosci ; 7: 265, 2013.
Article in English | MEDLINE | ID: mdl-24415996

ABSTRACT

Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.
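A binary classifier over single-trial ERP epochs, in the spirit of the mismatch-decoding approach described above, can be sketched as follows. The data here are simulated (a noisy negative deflection stands in for the MMN); the paper's actual preprocessing, features, and classifier may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 100  # trials x time points (one channel)

# Simulate standard trials (noise only) vs. deviant trials carrying an
# MMN-like negative deflection over a window of time points
labels = rng.integers(0, 2, n_trials)
mmn = np.zeros(n_samples)
mmn[40:60] = -2.0  # deflection; ~100-250 ms post-stimulus in a real paradigm
epochs = rng.normal(0.0, 1.0, (n_trials, n_samples)) + np.outer(labels, mmn)

# Cross-validate a linear classifier on raw time-domain features;
# its per-trial output could serve as an online index of perception
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, epochs, labels, cv=5)
print(scores.mean())  # well above the 0.5 chance level for this simulated effect
```

In an online setting, the trained classifier's decision value for each incoming epoch would be the continuous neurofeedback signal.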

16.
Acta Psychol (Amst) ; 138(1): 1-10, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21726835

ABSTRACT

Two cross-linguistic experiments comparing musicians and non-musicians were performed in order to examine whether musicians have enhanced perception of specific acoustical features of speech in a second language (L2). These discrimination and identification experiments examined the perception of various speech features; namely, the timing and quality of Japanese consonants, and the quality of Dutch vowels. We found that musical experience was more strongly associated with discrimination performance rather than identification performance. The enhanced perception was observed not only with respect to L2, but also L1. It was most pronounced when tested with Japanese consonant timing. These findings suggest the following: 1) musicians exhibit enhanced early acoustical analysis of speech, 2) musical training does not equally enhance the perception of all acoustic features automatically, and 3) musicians may enjoy an advantage in the perception of acoustical features that are important in both language and music, such as pitch and timing.


Subject(s)
Language, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Choice Behavior/physiology, Female, Humans, Linguistics, Male, Music, Pitch Perception/physiology, Reaction Time/physiology, Young Adult
17.
Psychol Res ; 75(2): 107-21, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20574662

ABSTRACT

A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving either high-level feedback on the expressive style of their performances, low-level feedback on the timing and dynamics of the performed notes, or no feedback. The high-level feedback was based on a Bayesian analysis of the performances, while the low-level feedback was based on the raw participant timing and dynamics data. Results indicated that neither form of feedback led to significantly smaller timing and dynamics errors. However, high-level feedback did lead to a higher proficiency in imitating the expressive style of the target performances, as indicated by a probabilistic measure of expressive style. We conclude that, while potentially disruptive to timing processes involved in music performance due to extraneous cognitive load, high-level visual feedback can improve participant imitations of expressive performance features.


Subject(s)
Sensory Feedback/physiology, Learning/physiology, Music, Psychomotor Performance/physiology, Humans, Young Adult
18.
Neuroimage ; 56(2): 843-9, 2011 May 15.
Article in English | MEDLINE | ID: mdl-20541612

ABSTRACT

In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem while using single trials, and when using multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed.


Subject(s)
Auditory Perception/physiology, Brain Mapping, Auditory Evoked Potentials/physiology, Music/psychology, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Male, Middle Aged, Computer-Assisted Signal Processing, Young Adult