Results 1 - 9 of 9
1.
J Neurosci ; 40(10): 2108-2118, 2020 03 04.
Article in English | MEDLINE | ID: mdl-32001611

ABSTRACT

In tonal music, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged, internal representations of pitch. To examine the neural dynamics underlying this transformation, we presented male and female human listeners with tones embedded within a Western tonal context while recording their cortical activity using magnetoencephalography. Machine learning classifiers were then trained to decode different tones from their underlying neural activation patterns at each peristimulus time sample, providing a dynamic measure of their dissimilarity in cortex. Comparing the time-varying dissimilarity between tones with the predictions of acoustic and perceptual models, we observed a temporal evolution in the brain's representational structure. Whereas initial dissimilarities mirrored their fundamental-frequency separation, dissimilarities beyond 200 ms reflected the perceptual status of each tone within the tonal hierarchy of Western music. These effects occurred regardless of stimulus regularities within the context or whether listeners were engaged in a task requiring explicit pitch analysis. Lastly, patterns of cortical activity that discriminated between tones became increasingly stable in time as the information coded by those patterns transitioned from low- to high-level properties. Current results reveal the dynamics with which the complex perceptual structure of Western tonal music emerges in cortex at the timescale of an individual tone.

SIGNIFICANCE STATEMENT: Little is understood about how the brain transforms an acoustic waveform into the complex perceptual structure of musical pitch. Applying neural decoding techniques to the cortical activity of human subjects engaged in music listening, we measured the dynamics of information processing in the brain on a moment-to-moment basis as subjects heard each tone. In the first 200 ms after onset, transient patterns of neural activity coded the fundamental frequency of tones.
Subsequently, a period emerged during which more temporally stable activation patterns coded the perceptual status of each tone within the "tonal hierarchy" of Western music. Our results provide a crucial link between the complex perceptual structure of tonal music and the underlying neural dynamics from which it emerges.
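The per-timepoint decoding approach described above can be sketched in a few lines. This is a toy illustration on simulated data: every dimension, the onset of discriminability, and the nearest-centroid classifier are hypothetical stand-ins, not the study's MEG recordings or trained classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 40, 32, 50  # hypothetical dimensions

# Simulate sensor-space responses to two tones that become
# discriminable only after time sample 20
pattern = rng.normal(size=n_sensors)

def simulate(sign):
    x = rng.normal(size=(n_trials, n_sensors, n_times))
    x[:, :, 20:] += sign * pattern[:, None]
    return x

tone_a, tone_b = simulate(+1), simulate(-1)

def decode_timecourse(a, b, n_train=30):
    """Per-timepoint classification accuracy: train a nearest-centroid
    classifier on the first n_train trials, test on the remainder."""
    acc = np.zeros(a.shape[2])
    for t in range(a.shape[2]):
        ca = a[:n_train, :, t].mean(axis=0)
        cb = b[:n_train, :, t].mean(axis=0)
        test = np.vstack([a[n_train:, :, t], b[n_train:, :, t]])
        labels = np.r_[np.zeros(a.shape[0] - n_train),
                       np.ones(b.shape[0] - n_train)]
        da = np.linalg.norm(test - ca, axis=1)
        db = np.linalg.norm(test - cb, axis=1)
        acc[t] = np.mean((da > db) == labels)
    return acc

acc = decode_timecourse(tone_a, tone_b)
```

The resulting accuracy time course is near chance before the simulated information onset and rises afterwards, giving the kind of dynamic dissimilarity measure the abstract describes.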


Subjects
Cerebral Cortex/physiology, Models, Neurological, Pitch Perception/physiology, Adult, Female, Humans, Machine Learning, Magnetoencephalography, Male
2.
J Acoust Soc Am ; 144(4): 2462, 2018 10.
Article in English | MEDLINE | ID: mdl-30404465

ABSTRACT

In order to perceive meaningful speech, the auditory system must recognize different phonemes amidst a noisy and variable acoustic signal. To better understand the processing mechanisms underlying this ability, evoked cortical responses to different spoken consonants were measured with electroencephalography (EEG). Using multivariate pattern analysis (MVPA), binary classifiers attempted to discriminate between the EEG activity evoked by two given consonants at each peri-stimulus time sample, providing a dynamic measure of their cortical dissimilarity. To examine the relationship between representations at the auditory periphery and cortex, MVPA was also applied to modelled auditory-nerve (AN) responses of consonants, and time-evolving AN-based and EEG-based dissimilarities were compared with one another. Cortical dissimilarities between consonants were commensurate with their articulatory distinctions, particularly their manner of articulation, and to a lesser extent, their voicing. Furthermore, cortical distinctions between consonants in two periods of activity, centered at 130 and 400 ms after onset, aligned with their peripheral dissimilarities in distinct onset and post-onset periods, respectively. Relating speech representations across articulatory, peripheral, and cortical domains advances our understanding of the crucial transformations in the auditory pathway that underlie the ability to perceive speech.
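Comparing time-evolving AN-based and EEG-based dissimilarities, as described above, amounts to correlating two sets of pairwise distances. A minimal sketch using a rank correlation is below; the dissimilarity values are invented for illustration, and the paper's actual MVPA pipeline and auditory-nerve model are not reproduced here.

```python
import numpy as np

def rankdata(x):
    """Rank values from 0..n-1 (assumes no ties, for simplicity)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x), dtype=float)
    return r

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return np.corrcoef(rankdata(a), rankdata(b))[0, 1]

# Hypothetical pairwise dissimilarities for six consonant pairs:
# one snapshot of the AN model and one EEG time window
an_dissim = np.array([0.9, 0.2, 0.5, 0.7, 0.1, 0.4])
eeg_dissim = np.array([0.8, 0.1, 0.6, 0.9, 0.2, 0.3])

rho = spearman(an_dissim, eeg_dissim)
```

Repeating this for every pair of AN and EEG time samples yields a time-by-time correlation map of the kind used to align peripheral and cortical processing periods.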


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Speech Perception, Adult, Female, Humans, Male, Phonetics
3.
Int J Audiol ; 56(2): 85-91, 2017 02.
Article in English | MEDLINE | ID: mdl-27758153

ABSTRACT

OBJECTIVE: To develop, in Australian English, the first mixed-gender, multi-talker matrix sentence test. DESIGN: Speech material consisted of a 50-word base matrix whose elements can be combined to form sentences of identical syntax but unpredictable content. Ten voices (five female and five male) were recorded for editing and preliminary level equalization. Elements were presented as single-talker sentences-in-noise during two perceptual tests: an optimization phase that provided the basis for further level correction, and an evaluation phase that perceptually validated those changes. STUDY SAMPLE: Ten listeners participated in the optimization phase; these and an additional 32 naïve listeners completed the evaluation test. All were fluent in English and all but one had lived in Australia for >2 years. RESULTS: Optimization reduced the standard deviation (SD) and speech reception threshold (SRT) range across all speech material (grand mean SRT = -10.6 dB signal-to-noise ratio, median = -10.8, SD = 1.4, range = 13.7, slope = 19.3%/dB), yielding data consistent with cross-validated matrix tests in other languages. Intelligibility differences between experienced and naïve listeners were minimal. CONCLUSIONS: The Australian matrix corpus provides a robust set of test materials suitable for both clinical assessment and research into the dynamics of active listening in multi-talker environments.
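The SRT and slope reported above can be related through a standard logistic model of the matrix-test psychometric function, in which the SRT is the SNR at 50% intelligibility and the slope is the gradient there in proportion per dB. The logistic form is a modeling assumption for illustration; only the SRT and slope values come from the abstract.

```python
import numpy as np

def logistic(snr, srt, slope):
    """Predicted word-recognition probability at a given SNR (dB).
    `srt` is the SNR at 50% intelligibility; `slope` is the gradient
    at the SRT in proportion/dB (slope at midpoint of this logistic
    is k/4, hence the factor of 4)."""
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - srt)))

# Values reported for the Australian corpus
srt, slope = -10.6, 0.193  # dB SNR, proportion/dB (= 19.3 %/dB)

# Recover the SRT as the SNR where predicted intelligibility crosses 50%
snrs = np.linspace(-20, 0, 201)
p = logistic(snrs, srt, slope)
srt_est = snrs[np.argmin(np.abs(p - 0.5))]
```

With a 19.3 %/dB slope, intelligibility climbs from roughly 20% to 80% over about 3 dB around the SRT, which is why matrix tests can measure thresholds so efficiently.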


Subjects
Acoustic Stimulation/methods, Language, Speech Intelligibility, Speech Perception, Speech Reception Threshold Test/methods, Acoustics, Adult, Female, Humans, Male, Perceptual Masking, Predictive Value of Tests, Reproducibility of Results, Sex Factors, Signal-To-Noise Ratio, Sound Spectrography, Young Adult
4.
Sci Adv ; 10(7): eadk0010, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363839

ABSTRACT

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.


Subjects
Auditory Cortex, Music, Humans, Pitch Perception/physiology, Auditory Cortex/physiology, Brain/physiology, Language
5.
Front Hum Neurosci ; 17: 1298129, 2023.
Article in English | MEDLINE | ID: mdl-37920562

ABSTRACT

Brain-computer interfaces (BCI) that directly decode speech from brain activity aim to restore communication in people with paralysis who cannot speak. Despite recent advances, neural inference of speech remains imperfect, limiting the ability of speech BCIs to enable experiences such as fluent conversation that promote agency: the ability of users to author and transmit messages enacting their intentions. Here, we make recommendations for promoting agency based on existing and emerging strategies in neural engineering. The focus is on achieving fast, accurate, and reliable performance while ensuring volitional control over when a decoder is engaged, what exactly is decoded, and how messages are expressed. Additionally, alongside neuroscientific progress within controlled experimental settings, we argue that a parallel line of research must consider how to translate experimental successes into real-world environments. While such research will ultimately require input from prospective users, here we identify and describe design choices inspired by human-factors work conducted in existing fields of assistive technology, which address practical issues likely to emerge in future real-world speech BCI applications.

6.
bioRxiv ; 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37905047

ABSTRACT

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception of melody varies along several pitch-based dimensions: (1) the absolute pitch of notes, (2) the difference in pitch between successive notes, and (3) the higher-order statistical expectation of each note conditioned on its prior context. While humans readily perceive melody, how these dimensions are collectively represented in the brain and whether their encoding is specialized for music remains unknown. Here, we recorded high-density neurophysiological activity directly from the surface of human auditory cortex while Western participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial code for representing distinct dimensions of melody. The same participants listened to spoken English, and we compared evoked responses to music and speech. Cortical sites selective for music were systematically driven by the encoding of expectation. In contrast, sites that encoded pitch and pitch-change used the same neural code to represent equivalent properties of speech. These findings reveal the multidimensional nature of melody encoding, consisting of both music-specific and domain-general sound representations in auditory cortex.

Teaser: The human brain contains both general-purpose and music-specific neural populations for processing distinct attributes of melody.

7.
Brain Sci ; 12(11)2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36358382

ABSTRACT

Chanting is practiced in many religious and secular traditions and involves rhythmic vocalization or mental repetition of a sound or phrase. This study examined how chanting relates to cognitive function, altered states, and quality of life across a wide range of traditions. A global survey was used to assess experiences during chanting, including flow states, mystical experiences, mindfulness, and mind wandering. Further, attributes of chanting were assessed to determine their association with altered states and cognitive benefits, and whether psychological correlates of chanting are associated with quality of life. Responses were analyzed from 456 English-speaking participants who regularly chant across 32 countries and various chanting traditions. Results revealed that different aspects of chanting were associated with distinctive experiential outcomes. Stronger intentionality (devotion, intention, sound) and higher chanting engagement (experience, practice duration, regularity) were associated with altered states and cognitive benefits. Participants whose main practice was call-and-response chanting reported higher scores of mystical experiences. Participants whose main practice was repetitive prayer reported lower mind wandering. Lastly, intentionality and engagement were associated with quality of life indirectly through altered states and cognitive benefits. This research sheds new light on the phenomenology and psychological consequences of chanting across a range of practices and traditions.

8.
Front Neurol ; 12: 669172, 2021.
Article in English | MEDLINE | ID: mdl-34017308

ABSTRACT

The anterior cingulate cortex (ACC) has been extensively implicated in the functional brain network underlying chronic pain. Electrical stimulation of the ACC has been proposed as a therapy for refractory chronic pain, although the mechanisms of therapeutic action remain unclear. As stimulation of the ACC has been reported to produce many different behavioral and perceptual responses, this region likely plays a varied role in sensory and emotional integration as well as modulating internally generated perceptual states. In this case series, we report the emergence of subjective musical hallucinations (MH) after electrical stimulation of the ACC in two patients with refractory chronic pain. In an N-of-1 analysis from one patient, we identified neural activity (local field potentials) that distinguishes MH from both the non-MH condition and a task involving music listening. Musical hallucinations were associated with reduced alpha-band activity and increased gamma-band activity in the ACC. Listening to similar music was associated with different changes in ACC alpha and gamma power, extending prior findings that internally generated perceptual phenomena are supported by circuits in the ACC. We discuss these findings in the context of phantom perceptual phenomena and posit a framework whereby chronic pain may be interpreted as a persistent internally generated percept.
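The alpha- and gamma-band comparison described above can be sketched as a simple band-power calculation on a local field potential. The trace below is simulated (an alpha-dominant signal with weak gamma); the sampling rate, epoch length, and periodogram approach are hypothetical choices for illustration, not the study's actual analysis pipeline.

```python
import numpy as np

fs = 500  # Hz, hypothetical LFP sampling rate
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)

# Hypothetical ACC trace: strong 10 Hz (alpha) component,
# weak 40 Hz (gamma) component, plus broadband noise
lfp = (2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 40 * t)
       + rng.normal(0, 0.5, t.size))

def band_power(x, fs, lo, hi):
    """Mean power in [lo, hi] Hz from the FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

alpha = band_power(lfp, fs, 8, 12)   # alpha band
gamma = band_power(lfp, fs, 30, 80)  # gamma band
```

Contrasting such band-power estimates between MH and non-MH epochs is the kind of comparison that yields the reported reduced-alpha / increased-gamma signature.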

9.
PLoS One ; 9(9): e108437, 2014.
Article in English | MEDLINE | ID: mdl-25269061

ABSTRACT

The present study examined the effects of spatial sound-source density and reverberation on the spatiotemporal window for audio-visual motion coherence. Three different acoustic stimuli were generated in Virtual Auditory Space: two acoustically "dry" stimuli via the measurement of anechoic head-related impulse responses recorded at either 1° or 5° spatial intervals (Experiment 1), and a reverberant stimulus rendered from binaural room impulse responses recorded at 5° intervals in situ in order to capture reverberant acoustics in addition to head-related cues (Experiment 2). A moving visual stimulus with invariant localization cues was generated by sequentially activating LEDs along the same radial path as the virtual auditory motion. Stimuli were presented at 25°/s, 50°/s and 100°/s with a random spatial offset between audition and vision. In a 2AFC task, subjects made a judgment of the leading modality (auditory or visual). No significant differences were observed in the spatial threshold based on the point of subjective equivalence (PSE) or the slope of psychometric functions (β) across all three acoustic conditions. Additionally, neither the PSE nor β differed significantly across velocity, suggesting a fixed spatial window of audio-visual separation. Findings suggest that there was no loss in spatial information accompanying the reduction in spatial cues and reverberation levels tested, and establish a perceptual measure for assessing the veracity of motion generated from discrete locations and in echoic environments.
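Estimating a PSE and slope β from 2AFC judgments, as described above, can be sketched as follows. The response proportions are invented, and the interpolation and logit-slope estimates are simple illustrative stand-ins for whatever psychometric-function fit the study actually used.

```python
import numpy as np

# Hypothetical 2AFC data: spatial offset (deg) between auditory and
# visual motion, and proportion of "visual leading" judgments at each
offsets = np.array([-8.0, -4.0, -2.0, 0.0, 2.0, 4.0, 8.0])
p_visual = np.array([0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95])

# PSE: the offset at which the two modalities are judged equally often
# as leading (p = 0.5), here by linear interpolation of the data
pse = np.interp(0.5, p_visual, offsets)

# Slope (beta): gradient of the psychometric function, estimated from a
# straight-line fit to the logit-transformed proportions
logits = np.log(p_visual / (1 - p_visual))
beta = np.polyfit(offsets, logits, 1)[0]
```

Comparing PSE and β across acoustic conditions and velocities, as the study did, then amounts to repeating this estimate per condition and testing for differences.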


Subjects
Auditory Threshold/physiology, Pattern Recognition, Physiological/physiology, Pattern Recognition, Visual/physiology, Sound Localization/physiology, Acoustic Stimulation, Adult, Cues, Female, Hearing/physiology, Humans, Male, Motion, Photic Stimulation, Sound, Space Perception/physiology, Spatio-Temporal Analysis, Vision, Ocular/physiology