Results 1-20 of 48

1.
Annu Rev Psychol ; 75: 87-128, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-37738514

ABSTRACT

Music training is generally assumed to improve perceptual and cognitive abilities. Although correlational data highlight positive associations, experimental results are inconclusive, raising questions about causality. Does music training have far-transfer effects, or do preexisting factors determine who takes music lessons? All behavior reflects genetic and environmental influences, but differences in emphasis (nature versus nurture) have been a source of tension throughout the history of psychology. After reviewing the recent literature, we conclude that the evidence that music training causes nonmusical benefits is weak or nonexistent, and that researchers routinely overemphasize contributions from experience while neglecting those from nature. The literature is also largely exploratory rather than theory driven. It fails to explain mechanistically how music-training effects could occur and ignores evidence that far transfer is rare. Instead of focusing on elusive perceptual or cognitive benefits, we argue that it is more fruitful to examine the social-emotional effects of engaging with music, particularly in groups, and that music-based interventions may be effective mainly for clinical or atypical populations.


Subjects
Music , Humans , Cognition , Emotions
2.
Nat Rev Neurosci ; 20(7): 425-434, 2019 07.
Article in English | MEDLINE | ID: mdl-30918365

ABSTRACT

There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.


Subjects
Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Animals , Humans
3.
Cogn Affect Behav Neurosci ; 22(5): 1044-1062, 2022 10.
Article in English | MEDLINE | ID: mdl-35501427

ABSTRACT

Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds also was distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.


Subjects
Music , Singing , Voice , Acoustic Stimulation , Auditory Perception/physiology , Electroencephalography , Humans
4.
J Int Neuropsychol Soc ; 28(1): 48-61, 2022 01.
Article in English | MEDLINE | ID: mdl-33660594

ABSTRACT

OBJECTIVE: The ability to recognize others' emotions is a central aspect of socioemotional functioning. Emotion recognition impairments are well documented in Alzheimer's disease and other dementias, but it is less understood whether they are also present in mild cognitive impairment (MCI). Results on facial emotion recognition are mixed, and crucially, it remains unclear whether the potential impairments are specific to faces or extend across sensory modalities. METHOD: In the current study, 32 MCI patients and 33 cognitively intact controls completed a comprehensive neuropsychological assessment and two forced-choice emotion recognition tasks, including visual and auditory stimuli. The emotion recognition tasks required participants to categorize emotions in facial expressions and in nonverbal vocalizations (e.g., laughter, crying) expressing neutrality, anger, disgust, fear, happiness, pleasure, surprise, or sadness. RESULTS: MCI patients performed worse than controls for both facial expressions and vocalizations. The effect was large, similar across tasks and individual emotions, and it was not explained by sensory losses or affective symptomatology. Emotion recognition impairments were more pronounced among patients with lower global cognitive performance, but they did not correlate with the ability to perform activities of daily living. CONCLUSIONS: These findings indicate that MCI is associated with emotion recognition difficulties and that such difficulties extend beyond vision, plausibly reflecting a failure at supramodal levels of emotional processing. This highlights the importance of considering emotion recognition abilities as part of standard neuropsychological testing in MCI, and as a target of interventions aimed at improving social cognition in these patients.


Subjects
Cognitive Dysfunction , Facial Recognition , Activities of Daily Living , Emotions , Facial Expression , Humans , Neuropsychological Tests , Recognition (Psychology)
5.
Behav Res Methods ; 54(2): 955-969, 2022 04.
Article in English | MEDLINE | ID: mdl-34382202

ABSTRACT

We sought to determine whether an objective test of musical ability could be successfully administered online. A sample of 754 participants was tested with an online version of the Musical Ear Test (MET), which had Melody and Rhythm subtests. Both subtests had 52 trials, each of which required participants to determine whether standard and comparison auditory sequences were identical. The testing session also included the Goldsmiths Musical Sophistication Index (Gold-MSI), a test of general cognitive ability, and self-report questionnaires that measured basic demographics (age, education, gender), mind-wandering, and personality. Approximately 20% of the participants were excluded for incomplete responding or failing to finish the testing session. For the final sample (N = 608), findings were similar to those from in-person testing in many respects: (1) the internal reliability of the MET was maintained, (2) construct validity was confirmed by strong associations with Gold-MSI scores, (3) correlations with other measures (e.g., openness to experience, cognitive ability, mind-wandering) were as predicted, (4) mean levels of performance were similar for individuals with no music training, and (5) musical sophistication was a better predictor of performance on the Melody than on the Rhythm subtest. In sum, online administration of the MET proved to be a reliable and valid way to measure musical ability.
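
The internal-reliability check mentioned in this abstract can be approximated with a split-half estimate. The sketch below is illustrative only: it uses simulated 0/1 trial data (not actual MET responses) and applies the conventional Spearman-Brown correction.

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Spearman-Brown-corrected split-half reliability.

    items: (n_participants, n_trials) matrix of 0/1 accuracy scores,
    e.g., the 52 trials of one MET subtest.
    """
    odd = items[:, 0::2].sum(axis=1)   # score on odd-numbered trials
    even = items[:, 1::2].sum(axis=1)  # score on even-numbered trials
    r = np.corrcoef(odd, even)[0, 1]   # correlation between the two halves
    return 2 * r / (1 + r)             # Spearman-Brown correction

# Simulated data: 100 participants x 52 trials; a latent "musical ability"
# drives accuracy, so the two halves should correlate strongly.
rng = np.random.default_rng(0)
ability = rng.normal(size=(100, 1))
items = (ability + rng.normal(size=(100, 52)) > 0).astype(float)
reliability = split_half_reliability(items)
```

In practice one would run this separately for the Melody and Rhythm subtests, since they are scored as distinct scales.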


Subjects
Music , Cognition , Humans , Music/psychology , Personality , Reproducibility of Results
6.
Neuroimage ; 201: 116052, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31351162

ABSTRACT

Voices are a primary source of emotional information in everyday interactions. Being able to process non-verbal vocal emotional cues, namely those embedded in speech prosody, impacts on our behaviour and communication. Extant research has delineated the role of temporal and inferior frontal brain regions for vocal emotional processing. A growing number of studies also suggest the involvement of the motor system, but little is known about such potential involvement. Using resting-state fMRI, we ask if the patterns of motor system intrinsic connectivity play a role in emotional prosody recognition in children. Fifty-five 8-year-old children completed an emotional prosody recognition task and a resting-state scan. Better performance in emotion recognition was predicted by a stronger connectivity between the inferior frontal gyrus (IFG) and motor regions including primary motor, lateral premotor and supplementary motor sites. This is mostly driven by the IFG pars triangularis and cannot be explained by differences in domain-general cognitive abilities. These findings indicate that individual differences in the engagement of sensorimotor systems, and in its coupling with inferior frontal regions, underpin variation in children's emotional speech perception skills. They suggest that sensorimotor and higher-order evaluative processes interact to aid emotion recognition, and have implications for models of vocal emotional communication.


Subjects
Emotions/physiology , Frontal Lobe/diagnostic imaging , Frontal Lobe/physiology , Magnetic Resonance Imaging , Sensorimotor Cortex/diagnostic imaging , Sensorimotor Cortex/physiology , Voice/physiology , Child , Female , Humans , Male
7.
Cereb Cortex ; 28(11): 4063-4079, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30169831

ABSTRACT

Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instrument-specific experience, we studied nonclassical musicians: beatboxers, who predominantly use their vocal apparatus to produce sound, and guitarists, who use their hands. We contrast fMRI activity in 20 beatboxers, 20 guitarists, and 20 nonmusicians as they listen to novel beatboxing and guitar pieces. All musicians show enhanced activity in sensorimotor regions (IFG, IPC, and SMA), but only when listening to the musical instrument they can play. Using independent component analysis, we find expertise-selective enhancement in sensorimotor networks, which are distinct from changes in attentional networks. These findings suggest that long-term sensorimotor experience facilitates access to the posterodorsal "how" pathway during auditory processing.


Subjects
Auditory Perception/physiology , Music , Neuronal Plasticity , Sensorimotor Cortex/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Professional Competence
8.
Cogn Emot ; 33(8): 1577-1586, 2019 12.
Article in English | MEDLINE | ID: mdl-30870109

ABSTRACT

How do we perceive voices coming from different spatial locations, and how is this affected by emotion? The current study probed the interplay between space and emotion during voice perception. Thirty participants listened to nonverbal vocalizations coming from different locations around the head (left vs. right; front vs. back), and differing in valence (neutral, positive [amusement] or negative [anger]). They were instructed to identify the location of the vocalizations (Experiment 1) and to evaluate their emotional qualities (Experiment 2). Emotion-space interactions were observed, but only in Experiment 1: emotional vocalizations were better localised than neutral ones when they were presented from the back and the right side. In Experiment 2, emotion recognition accuracy was increased for positive vs. negative and neutral vocalizations, and perceived arousal was increased for emotional vs. neutral vocalizations, but this was independent of spatial location. These findings indicate that emotional salience affects how we perceive the spatial location of voices. They additionally suggest that the interaction between spatial ("where") and emotional ("what") properties of the voice differs as a function of task.


Subjects
Auditory Perception/physiology , Emotions/physiology , Voice/physiology , Adolescent , Adult , Cues , Female , Humans , Male , Young Adult
10.
Brain ; 140(9): 2475-2489, 2017 Sep 01.
Article in English | MEDLINE | ID: mdl-29050393

ABSTRACT

Auditory verbal hallucinations (hearing voices) are typically associated with psychosis, but a minority of the general population also experience them frequently and without distress. Such 'non-clinical' experiences offer a rare and unique opportunity to study hallucinations apart from confounding clinical factors, thus allowing for the identification of symptom-specific mechanisms. Recent theories propose that hallucinations result from an imbalance of prior expectation and sensory information, but whether such an imbalance also influences auditory-perceptual processes remains unknown. We examine for the first time the cortical processing of ambiguous speech in people without psychosis who regularly hear voices. Twelve non-clinical voice-hearers and 17 matched controls completed a functional magnetic resonance imaging scan while passively listening to degraded speech ('sine-wave' speech) that was either potentially intelligible or unintelligible. Voice-hearers reported recognizing the presence of speech in the stimuli before controls, and before being explicitly informed of its intelligibility. Across both groups, intelligible sine-wave speech engaged a typical left-lateralized speech processing network. Notably, however, voice-hearers showed stronger intelligibility responses than controls in the dorsal anterior cingulate cortex and in the superior frontal gyrus. This suggests an enhanced involvement of attention and sensorimotor processes, selectively when speech was potentially intelligible. Altogether, these behavioural and neural findings indicate that people with hallucinatory experiences show distinct responses to meaningful auditory stimuli. A greater weighting towards prior knowledge and expectation might cause non-veridical auditory sensations in these individuals, but it might also spontaneously facilitate perceptual processing where such knowledge is required. This has implications for the understanding of hallucinations in clinical and non-clinical populations, and is consistent with current 'predictive processing' theories of psychosis.


Subjects
Gyrus Cinguli/physiology , Hallucinations/physiopathology , Prefrontal Cortex/physiology , Acoustic Stimulation , Adult , Auditory Perception/physiology , Case-Control Studies , Female , Humans , Magnetic Resonance Imaging , Male , Uncertainty , Young Adult
11.
Cereb Cortex ; 25(11): 4638-50, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26092220

ABSTRACT

Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual-motor interactions for processing heard and internally generated auditory information.
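
The representational similarity analysis (RSA) mentioned above compares a neural dissimilarity matrix against a model-based one. Here is a minimal sketch of that logic; the "SMA" voxel patterns and the three stimulus features are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_sounds, n_voxels = 12, 60

# Hypothetical stimulus features (e.g., sound-type descriptors) and
# simulated voxel patterns generated from those features plus noise.
features = rng.normal(size=(n_sounds, 3))
patterns = (features @ rng.normal(size=(3, n_voxels))
            + 0.5 * rng.normal(size=(n_sounds, n_voxels)))

# Condition-by-condition dissimilarities: one vector per matrix's
# upper triangle, as returned by pdist.
neural_rdm = pdist(patterns, metric="correlation")
model_rdm = pdist(features, metric="euclidean")

# Rank correlation between the two RDMs = representational similarity.
rho, _ = spearmanr(neural_rdm, model_rdm)
```

A high rho indicates that the simulated region's activity geometry mirrors the model's geometry, which is the sense in which "representational specificity" is quantified.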


Subjects
Auditory Perception/physiology , Cerebral Cortex/anatomy & histology , Cerebral Cortex/physiology , Imagination/physiology , Individuality , Noise , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Middle Aged , Neural Pathways/blood supply , Neural Pathways/physiology , Oxygen/blood , Regression Analysis , Young Adult
12.
Cogn Emot ; 29(5): 935-44, 2015.
Article in English | MEDLINE | ID: mdl-25243615

ABSTRACT

It is well established that categorising the emotional content of facial expressions may differ depending on contextual information. Whether this malleability is observed in the auditory domain and in genuine emotion expressions is poorly explored. We examined the perception of authentic laughter and crying in the context of happy, neutral and sad facial expressions. Participants rated the vocalisations on separate unipolar scales of happiness and sadness and on arousal. Although they were instructed to focus exclusively on the vocalisations, consistent context effects were found: For both laughter and crying, emotion judgements were shifted towards the information expressed by the face. These modulations were independent of response latencies and were larger for more emotionally ambiguous vocalisations. No effects of context were found for arousal ratings. These findings suggest that the automatic encoding of contextual information during emotion perception generalises across modalities, to purely non-verbal vocalisations, and is not confined to acted expressions.


Subjects
Auditory Perception/physiology , Crying , Facial Expression , Laughter , Adolescent , Adult , Arousal , Female , Humans , Male , Middle Aged , Reaction Time , Young Adult
13.
J Acoust Soc Am ; 137(1): 378-87, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25618067

ABSTRACT

There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that non-verbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise.


Subjects
Music , Perceptual Masking/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Attention , Auditory Threshold/physiology , Female , Humans , Intelligence , Male , Noise , Occupations , Pitch Discrimination/physiology , Psychoacoustics , Psychomotor Performance , Signal-To-Noise Ratio , Stroop Test , Surveys and Questionnaires , Time Perception/physiology , Wechsler Scales , Young Adult
14.
Cortex ; 172: 254-270, 2024 03.
Article in English | MEDLINE | ID: mdl-38123404

ABSTRACT

The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.


Subjects
Laughter , Voice , Humans , Emotions/physiology , Blindness , Laughter/physiology , Social Perception , Electroencephalography , Evoked Potentials/physiology
15.
Emotion ; 24(6): 1376-1385, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38512197

ABSTRACT

Although emotional mimicry is ubiquitous in social interactions, its mechanisms and roles remain disputed. A prevalent view is that imitating others' expressions facilitates emotional understanding, but the evidence is mixed and almost entirely based on facial emotions. In a preregistered study, we asked whether inhibiting orofacial mimicry affects authenticity perception in vocal emotions. Participants listened to authentic and posed laughs and cries, while holding a pen between the teeth and lips to inhibit orofacial responses (n = 75), or while responding freely without a pen (n = 75). They made authenticity judgments and rated how much they felt the conveyed emotions (emotional contagion). Mimicry inhibition decreased the accuracy of authenticity perception in laughter and crying, and in posed and authentic vocalizations. It did not affect contagion ratings, however, nor performance in a cognitive control task, ruling out the effort of holding the pen as an explanation for the decrements in authenticity perception. Laughter was more contagious than crying, and authentic vocalizations were more contagious than posed ones, regardless of whether mimicry was inhibited or not. These findings confirm the role of mimicry in emotional understanding and extend it to auditory emotions. They also imply that perceived emotional contagion can be unrelated to mimicry. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subjects
Crying , Emotions , Facial Expression , Laughter , Social Perception , Humans , Female , Male , Adult , Young Adult , Laughter/physiology , Crying/physiology , Emotions/physiology , Imitative Behavior/physiology , Auditory Perception/physiology
16.
Neurosci Lett ; 825: 137690, 2024 Mar 10.
Article in English | MEDLINE | ID: mdl-38373631

ABSTRACT

We present a questionnaire exploring everyday laughter experience. We developed a 30-item questionnaire in English and collected data from an English-speaking sample (N = 823). Based on Principal Component Analysis (PCA), we identified four dimensions that accounted for variations in people's experiences of laughter: laughter frequency ('Frequency'), social usage of laughter ('Usage'), understanding of other people's laughter ('Understanding'), and feelings towards laughter ('Liking'). Reliability and validity of the LPPQ were assessed. To explore potential similarities and differences based on culture and language, we collected data from a Mandarin Chinese-speaking sample (N = 574). A PCA suggested the extraction of the same four dimensions, with some item differences between the English and Chinese versions. The Laughter Production and Perception Questionnaire (LPPQ) will advance research into the experience of human laughter, which has a potentially crucial role in everyday life.
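
The dimension-extraction step described above can be sketched with a standard PCA. The data below are simulated: four hypothetical latent dimensions (standing in for Frequency, Usage, Understanding, and Liking) generate responses to 30 items, and we check how much variance four components recover.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_respondents, n_items, n_factors = 800, 30, 4

# Simulated questionnaire: each item loads on the four latent dimensions,
# plus item-specific noise.
loadings = rng.normal(size=(n_factors, n_items))
latent = rng.normal(size=(n_respondents, n_factors))
responses = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, n_items))

pca = PCA(n_components=n_factors)
pca.fit(responses)
variance_captured = pca.explained_variance_ratio_.sum()
```

With real questionnaire data one would also inspect the scree plot and component loadings (often after rotation) before settling on four dimensions.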


Subjects
Laughter , Humans , Emotions , Reproducibility of Results , Surveys and Questionnaires
17.
Behav Res Methods ; 45(4): 1234-45, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23444120

ABSTRACT

Nonverbal vocal expressions, such as laughter, sobbing, and screams, are an important source of emotional information in social interactions. However, the investigation of how we process these vocal cues entered the research agenda only recently. Here, we introduce a new corpus of nonverbal vocalizations, which we recorded and submitted to perceptual and acoustic validation. It consists of 121 sounds expressing four positive emotions (achievement/triumph, amusement, sensual pleasure, and relief) and four negative ones (anger, disgust, fear, and sadness), produced by two female and two male speakers. For perceptual validation, a forced choice task was used (n = 20), and ratings were collected for the eight emotions, valence, arousal, and authenticity (n = 20). We provide these data, detailed for each vocalization, for use by the research community. High recognition accuracy was found for all emotions (86%, on average), and the sounds were reliably rated as communicating the intended expressions. The vocalizations were measured for acoustic cues related to temporal aspects, intensity, fundamental frequency (f0), and voice quality. These cues alone provide sufficient information to discriminate between emotion categories, as indicated by statistical classification procedures; they are also predictors of listeners' emotion ratings, as indicated by multiple regression analyses. This set of stimuli seems to be a valuable addition to currently available expression corpora for research on emotion processing. It is suitable for behavioral and neuroscience research and might also be used in clinical settings for the assessment of neurological and psychiatric patients. The corpus can be downloaded from Supplementary Materials.
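
The "statistical classification procedures" mentioned above can be illustrated with a linear discriminant analysis over acoustic cues. Everything below is a simulated stand-in, not the corpus data: eight emotion categories, each with its own profile on four hypothetical cues (e.g., duration, intensity, f0, voice quality).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_category, n_categories, n_cues = 40, 8, 4

# Each emotion category gets its own mean cue profile; individual
# vocalizations scatter around that profile.
centers = rng.normal(scale=2.0, size=(n_categories, n_cues))
X = np.vstack([c + rng.normal(size=(n_per_category, n_cues)) for c in centers])
y = np.repeat(np.arange(n_categories), n_per_category)

# Cross-validated classification accuracy; chance level is 1/8 = 0.125.
accuracy = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
```

Accuracy well above the 0.125 chance level is what licenses the claim that the cues alone discriminate between categories.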


Subjects
Auditory Perception , Cues , Emotions/physiology , Interpersonal Relations , Nonverbal Communication/psychology , Recognition (Psychology) , Voice Quality , Adult , Female , Humans , Judgment , Male , Regression Analysis , Young Adult
18.
J Exp Psychol Hum Percept Perform ; 49(7): 1083-1089, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37261743

ABSTRACT

Many claims have been made about links between musical expertise and language ability. Rhythm ability, in particular, has been shown to predict phonological, grammatical, and second-language (L2) abilities, whereas music training often predicts reading and speech-perception skills. Here, we asked whether musical expertise (musical ability and/or music training) relates to L2 (English) abilities of Portuguese native speakers. Participants (N = 154) rated their L2 ability on seven 7-point scales, one each for speaking, reading, writing, comprehension, vocabulary, fluency, and accent. They also completed a test of general cognitive ability, an objective test of musical ability with melody and rhythm subtests, and a questionnaire that measured music training and other aspects of musical behaviors. L2 ability correlated positively with education and cognitive ability but not with music training. It also had no association with musical ability or with self-reports of musical behaviors. Moreover, Bayesian analyses provided evidence for the null hypotheses (i.e., no link between L2 and rhythm ability, no link between L2 and years of music lessons). In short, our findings, based on participants' self-reports of L2 ability, raise doubts about proposed associations between musical and second-language abilities, which may be limited to specific populations or measures. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Music , Humans , Music/psychology , Self Report , Bayes Theorem , Language , Cognition
19.
Q J Exp Psychol (Hove) ; 76(7): 1585-1598, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36114609

ABSTRACT

Good musical abilities are typically considered to be a consequence of music training, such that they are studied in samples of formally trained individuals. Here, we asked what predicts musical abilities in the absence of music training. Participants with no formal music training (N = 190) completed the Goldsmiths Musical Sophistication Index, measures of personality and cognitive ability, and the Musical Ear Test (MET). The MET is an objective test of musical abilities that provides a Total score and separate scores for its two subtests (Melody and Rhythm), which require listeners to determine whether standard and comparison auditory sequences are identical. MET scores had no associations with personality traits. They correlated positively, however, with informal musical experience and cognitive abilities. Informal musical experience was a better predictor of Melody than of Rhythm scores. Some participants (12%) had Total scores higher than the mean from a sample of musically trained individuals (⩾6 years of formal training), tested previously by Correia et al. Untrained participants with particularly good musical abilities (top 25%, n = 51) scored higher than trained participants on the Rhythm subtest and similarly on the Melody subtest. High-ability untrained participants were also similar to trained ones in cognitive ability, but lower in the personality trait openness-to-experience. These results imply that formal music training is not required to achieve musician-like performance on tests of musical and cognitive abilities. They also suggest that informal music practice and music-related predispositions should be considered in studies of musical expertise.


Subjects
Music , Humans , Adult , Music/psychology , Individuality , Cognition , Personality , Aptitude , Auditory Perception
20.
Emotion ; 22(5): 894-906, 2022 Aug.
Article in English | MEDLINE | ID: mdl-32718172

ABSTRACT

Music training is widely assumed to enhance several nonmusical abilities, including speech perception, executive functions, reading, and emotion recognition. This assumption is based primarily on cross-sectional comparisons between musicians and nonmusicians. It remains unclear, however, whether training itself is necessary to explain the musician advantages, or whether factors such as innate predispositions and informal musical experience could produce similar effects. Here, we sought to clarify this issue by examining the association between music training, music perception abilities and vocal emotion recognition. The sample (N = 169) comprised musically trained and untrained listeners who varied widely in their musical skills, as assessed through self-report and performance-based measures. The emotion recognition tasks required listeners to categorize emotions in nonverbal vocalizations (e.g., laughter, crying) and in speech prosody. Music training was associated positively with emotion recognition across tasks, but the effect was small. We also found a positive association between music perception abilities and emotion recognition in the entire sample, even with music training held constant. In fact, untrained participants with good musical abilities were as good as highly trained musicians at recognizing vocal emotions. Moreover, the association between music training and emotion recognition was fully mediated by auditory and music perception skills. Thus, in the absence of formal music training, individuals who were "naturally" musical showed musician-like performance at recognizing vocal emotions. These findings highlight an important role for factors other than music training (e.g., predispositions and informal musical experience) in associations between musical and nonmusical domains. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
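
The mediation result above (the training-emotion link carried by perception skills) rests on the product-of-coefficients logic: the indirect effect is the total effect minus the direct effect once the mediator is held constant. The sketch below simulates data under an assumed full-mediation structure; the variables, coefficients, and sample size are illustrative, not the study's data.

```python
import numpy as np

def slope(x, y, control=None):
    """Least-squares slope of y on x, optionally adjusting for a control variable."""
    cols = [np.ones_like(x), x] + ([] if control is None else [control])
    beta = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]
    return beta[1]

rng = np.random.default_rng(4)
n = 169
training = rng.normal(size=n)                     # music training (standardized)
perception = 0.6 * training + rng.normal(size=n)  # music perception (mediator)
emotion = 0.5 * perception + rng.normal(size=n)   # vocal emotion recognition

total_effect = slope(training, emotion)               # c path
direct_effect = slope(training, emotion, perception)  # c' path, mediator held constant
indirect_effect = total_effect - direct_effect        # portion carried by the mediator
```

Under full mediation the direct effect shrinks toward zero while the indirect effect stays sizable, which is the pattern the abstract reports.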


Subjects
Music , Speech Perception , Auditory Perception , Cross-Sectional Studies , Emotions , Humans , Music/psychology , Recognition (Psychology)