Results 1 - 16 of 16
1.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-38044462

ABSTRACT

A growing literature has shown that binaural beat (BB), generated by dichotic presentation of slightly mismatched pure tones, improves cognition. We recently found that BB stimulation of either beta (18 Hz) or gamma (40 Hz) frequencies enhanced auditory sentence comprehension. Here, we used electroencephalography (EEG) to characterize neural oscillations pertaining to the enhanced linguistic operations following BB stimulation. Sixty healthy young adults were randomly assigned to one of three listening groups: 18-Hz BB, 40-Hz BB, or pure-tone baseline, all embedded in music. After listening to the sound for 10 min (stimulation phase), participants underwent an auditory sentence comprehension task involving spoken sentences that contained either an object or subject relative clause (task phase). During the stimulation phase, 18-Hz BB yielded increased EEG power in a beta frequency range, while 40-Hz BB did not. During the task phase, only the 18-Hz BB resulted in significantly higher accuracy and faster response times compared with the baseline, especially on syntactically more complex object-relative sentences. The behavioral improvement by 18-Hz BB was accompanied by attenuated beta power difference between object- and subject-relative sentences. Altogether, our findings demonstrate beta oscillations as a neural correlate of improved syntactic operation following BB stimulation.


Subject(s)
Comprehension , Electroencephalography , Young Adult , Humans , Electroencephalography/methods , Language , Cognition , Reaction Time , Acoustic Stimulation/methods
2.
Conscious Cogn ; 122: 103709, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781813

ABSTRACT

Conscious visual experiences are enriched by concurrent auditory information, implying audiovisual interactions. In the present study, we investigated how prior conscious experience of auditory and visual information influences the subsequent audiovisual temporal integration under the surface of awareness. We used continuous flash suppression (CFS) to render perceptually invisible a ball-shaped object constantly moving and bouncing inside a square frame window. To examine whether audiovisual temporal correspondence facilitates the ball stimulus to enter awareness, the visual motion was accompanied by click sounds temporally congruent or incongruent with the bounces of the ball. In Experiment 1, where no prior experience of the audiovisual events was given, we found no significant impact of audiovisual correspondence on visual detection time. However, when the temporally congruent or incongruent bounce-sound relations were consciously experienced prior to CFS in Experiment 2, congruent sounds yielded faster detection time compared to incongruent sounds during CFS. In addition, in Experiment 3, explicit processing of the incongruent bounce-sound relation prior to CFS slowed down detection time when the ball bounces became later congruent with sounds during CFS. These findings suggest that audiovisual temporal integration may take place outside of visual awareness though its potency is modulated by previous conscious experiences of the audiovisual events. The results are discussed in light of the framework of multisensory causal inference.


Subject(s)
Auditory Perception , Awareness , Consciousness , Visual Perception , Humans , Auditory Perception/physiology , Female , Male , Visual Perception/physiology , Adult , Young Adult , Awareness/physiology , Consciousness/physiology , Unconscious (Psychology) , Reaction Time/physiology , Motion Perception/physiology , Photic Stimulation , Acoustic Stimulation
3.
Dev Sci ; 26(1): e13261, 2023 01.
Article in English | MEDLINE | ID: mdl-35343637

ABSTRACT

We studied the role of sensorimotor and working memory systems in supporting development of perceptual rhythm processing with 119 participants aged 7-12 years. Children were assessed for their abilities in sensorimotor synchronization (SMS; beat tapping), auditory working memory (AWM; digit span), and rhythm discrimination (RD; same/different judgment on a pair of musical rhythm sequences). Multiple regression analysis revealed that children's RD performance was independently predicted by higher beat tapping consistency and greater digit span score, with all other demographic variables (age, sex, socioeconomic status, music training) controlled. The association between RD and SMS was more robust in the slower tempos (60 and 100 beats-per-minute (BPM)) than faster ones (120 and 180 BPM). Critically, the relation of SMS to RD was moderated by age in that RD performance was predicted by beat tapping consistency in younger children (age: 7-9 years), but not in older children (age: 10-12 years). AWM was the only predictor of RD in older children. Together, the current findings demonstrate that the sensorimotor and working memory systems jointly support RD processing during middle-to-late childhood and that the degree of association between the two systems and perceptual rhythm processing is shifted before entering into early adolescence.


Subject(s)
Memory, Short-Term , Music , Child , Adolescent , Humans , Auditory Perception , Judgment
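The multiple-regression logic of this study — estimating each predictor's unique contribution to rhythm discrimination with the others held constant — can be sketched as follows. The data here are simulated for illustration only (the actual child dataset and effect sizes are not available); scikit-learn's `LinearRegression` is used, with age standing in for the full set of control variables.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 119  # sample size reported in the abstract

# Hypothetical standardized scores; coefficients below are invented
tapping_consistency = rng.normal(size=n)   # SMS: beat-tapping consistency
digit_span = rng.normal(size=n)            # AWM: auditory working memory
age = rng.uniform(7, 12, size=n)           # example control variable
rhythm_discrimination = (0.4 * tapping_consistency
                         + 0.3 * digit_span
                         + rng.normal(scale=0.5, size=n))

# Fit all predictors jointly; each coefficient then reflects a
# predictor's unique contribution with the others held constant
X = np.column_stack([tapping_consistency, digit_span, age])
model = LinearRegression().fit(X, rhythm_discrimination)

print(dict(zip(["SMS", "AWM", "age"], model.coef_.round(2))))
```

Because SMS and AWM are entered together, the fitted coefficients recover their independent contributions, mirroring how the study separated sensorimotor and working-memory effects.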
4.
Psychol Res ; 87(7): 2218-2227, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36854935

ABSTRACT

Binaural beats, an auditory illusion produced when two pure tones of slightly different frequencies are dichotically presented, have been shown to modulate various cognitive and psychological states. Here, we investigated the effects of binaural beat stimulation on auditory sentence processing that required interpretation of syntactic relations (Experiment 1) or an evaluation of syntactic well-formedness (Experiment 2) with a large cohort of healthy young adults (N = 200). In both experiments, participants performed a language task after listening to one of four sounds (i.e., between-subject design): theta (7 Hz), beta (18 Hz), and gamma (40 Hz) binaural beats embedded in music, or the music only (baseline). In Experiment 1, 100 participants indicated the gender of a noun linked to a transitive action verb in spoken sentences containing either a subject- or object-relative center-embedded clause. We found that both beta and gamma binaural beats yielded better performance, compared to the baseline, especially for syntactically more complex object-relative sentences. To determine if the binaural beat effect can be generalized to another type of syntactic analysis, we conducted Experiment 2 in which another 100 participants indicated whether or not there was a grammatical error in spoken sentences. However, none of the binaural beats yielded better performance for this task, indicating that the benefit of beta and gamma binaural beats may be specific to the interpretation of syntactic relations. Together, we demonstrate, for the first time, the positive impact of binaural beats on auditory language comprehension. Both theoretical and practical implications are discussed.


Subject(s)
Auditory Perception , Comprehension , Young Adult , Humans , Acoustic Stimulation , Auditory Perception/physiology , Language
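The dichotic stimulus construction behind a binaural beat — one pure tone per ear, offset by the desired beat frequency — can be illustrated with a short signal-processing sketch. The 240-Hz carrier below is an arbitrary illustrative choice (the papers' actual carrier frequencies are not given here); only the channel layout and the frequency offset matter.

```python
import numpy as np

fs = 44100          # sample rate (Hz)
dur = 2.0           # duration (s)
carrier = 240.0     # left-ear tone (Hz); illustrative choice
beat = 18.0         # desired beat frequency (Hz), as in the beta condition

t = np.arange(int(fs * dur)) / fs
left = np.sin(2 * np.pi * carrier * t)
right = np.sin(2 * np.pi * (carrier + beat) * t)  # slightly mismatched tone

# Stereo signal: each ear receives a single pure tone (dichotic
# presentation); the 18-Hz "beat" is not physically present in either
# channel but arises from binaural interaction in the brain
stereo = np.stack([left, right], axis=1).astype(np.float32)
print(stereo.shape)  # (88200, 2)
```

Writing `stereo` to a WAV file and presenting it over headphones (never loudspeakers, which would mix the channels acoustically) reproduces the dichotic setup the abstracts describe.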
5.
J Neurophysiol ; 114(3): 1819-26, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26245316

ABSTRACT

Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.


Subject(s)
Auditory Perception , Temporal Lobe/physiology , Voice , Adult , Brain Mapping , Female , Humans , Male
6.
Neuroimage ; 89: 10-22, 2014 Apr 01.
Article in English | MEDLINE | ID: mdl-24269802

ABSTRACT

Individual participants vary greatly in their ability to estimate and discriminate intervals of time. This heterogeneity of performance may be caused by reliance on different time perception networks as well as individual differences in the activation of brain structures utilized for timing within those networks. To address these possibilities we utilized event-related functional magnetic resonance imaging (fMRI) while human participants (n=25) performed a temporal or color discrimination task. Additionally, based on our previous research, we genotyped participants for DRD2/ANKK1-Taq1a, a single-nucleotide polymorphism associated with a 30-40% reduction in striatal D2 density and associated with poorer timing performance. Similar to previous reports, a wide range of performance was found across our sample; crucially, better performance on the timing versus color task was associated with greater activation in prefrontal and sub-cortical regions previously associated with timing. Furthermore, better timing performance also correlated with increased volume of the right lateral cerebellum, as demonstrated by voxel-based morphometry. Our analysis also revealed that A1 carriers of the Taq1a polymorphism exhibited relatively worse performance on temporal, but not color discrimination, but greater activation in the striatum and right dorsolateral prefrontal cortex, as well as reduced volume in the cerebellar cluster. These results point to the neural bases for heterogeneous timing performance in humans, and suggest that differences in performance on a temporal discrimination task are, in part, attributable to the DRD2/ANKK1 genotype.


Subject(s)
Brain/physiology , Individuality , Nerve Net/physiology , Receptors, Dopamine D2/genetics , Time Perception/physiology , Adult , Brain Mapping , Color Perception/physiology , Discrimination, Psychological/physiology , Female , Genotype , Humans , Magnetic Resonance Imaging , Male , Polymorphism, Single Nucleotide , Young Adult
7.
Sci Rep ; 14(1): 3710, 2024 02 14.
Article in English | MEDLINE | ID: mdl-38355855

ABSTRACT

A growing body of literature has reported the relationship between music and language, particularly between individual differences in perceptual rhythm skill and grammar competency in children. Here, we investigated whether motoric aspects of rhythm processing, as measured by rhythmic finger-tapping tasks, also explain the rhythm-grammar connection in 150 healthy young adults. We found that all expressive rhythm skills (spontaneous, synchronized, and continued tapping) along with rhythm discrimination skill significantly predicted receptive grammar skills on either auditory sentence comprehension or grammaticality well-formedness judgment (e.g., singular/plural, past/present), even after controlling for verbal working memory and music experience. Among these, synchronized tapping and rhythm discrimination explained unique variance of sentence comprehension and grammaticality judgment, respectively, indicating differential associations between different rhythm and grammar skills. Together, we demonstrate that even simple and repetitive motor behavior can account for seemingly high-order grammar skills in the adult population, suggesting that the sensorimotor system continues to support syntactic operations.


Subject(s)
Individuality , Linguistics , Child , Young Adult , Humans , Language , Cognition , Memory, Short-Term
8.
J Neurosci ; 32(11): 3942-8, 2012 Mar 14.
Article in English | MEDLINE | ID: mdl-22423114

ABSTRACT

Although much effort has been directed toward understanding the neural basis of speech processing, the neural processes involved in the categorical perception of speech have been relatively less studied, and many questions remain open. In this functional magnetic resonance imaging (fMRI) study, we probed the cortical regions mediating categorical speech perception using an advanced brain-mapping technique, whole-brain multivariate pattern-based analysis (MVPA). Normal healthy human subjects (native English speakers) were scanned while they listened to 10 consonant-vowel syllables along the /ba/-/da/ continuum. Outside of the scanner, individuals' own category boundaries were measured to divide the fMRI data into /ba/ and /da/ conditions per subject. The whole-brain MVPA revealed that Broca's area and the left pre-supplementary motor area evoked distinct neural activity patterns between the two perceptual categories (/ba/ vs /da/). Broca's area was also found when the same analysis was applied to another dataset (Raizada and Poldrack, 2007), which previously yielded the supramarginal gyrus using a univariate adaptation-fMRI paradigm. The consistent MVPA findings from two independent datasets strongly indicate that Broca's area participates in categorical speech perception, with a possible role of translating speech signals into articulatory codes. The difference in results between univariate and multivariate pattern-based analyses of the same data suggests that processes in different cortical areas along the dorsal speech perception stream are distributed on different spatial scales.


Subject(s)
Acoustic Stimulation/methods , Brain Mapping/methods , Frontal Lobe/physiology , Magnetic Resonance Imaging/methods , Speech Perception/physiology , Speech/physiology , Adult , Female , Humans , Male , Multivariate Analysis , Young Adult
9.
Aging Brain ; 2: 100051, 2022.
Article in English | MEDLINE | ID: mdl-36908889

ABSTRACT

We investigated how the aging brain copes with acoustic and syntactic challenges during spoken language comprehension. Thirty-eight healthy adults aged 54 - 80 years (M = 66 years) participated in an fMRI experiment wherein listeners indicated the gender of an agent in short spoken sentences that varied in syntactic complexity (object-relative vs subject-relative center-embedded clause structures) and acoustic richness (high vs low spectral detail, but all intelligible). We found widespread activity throughout a bilateral frontotemporal network during successful sentence comprehension. Consistent with prior reports, bilateral inferior frontal gyrus and left posterior superior temporal gyrus were more active in response to object-relative sentences than to subject-relative sentences. Moreover, several regions were significantly correlated with individual differences in task performance: Activity in right frontoparietal cortex and left cerebellum (Crus I & II) showed a negative correlation with overall comprehension. By contrast, left frontotemporal areas and right cerebellum (Lobule VII) showed a negative correlation with accuracy specifically for syntactically complex sentences. In addition, laterality analyses confirmed a lack of hemispheric lateralization in activity evoked by sentence stimuli in older adults. Importantly, we found different hemispheric roles, with a left-lateralized core language network supporting syntactic operations, and right-hemisphere regions coming into play to aid in general cognitive demands during spoken sentence processing. Together our findings support the view that high levels of language comprehension in older adults are maintained by a close interplay between a core left hemisphere language network and additional neural resources in the contralateral hemisphere.

10.
Neuroimage ; 57(1): 293-300, 2011 Jul 01.
Article in English | MEDLINE | ID: mdl-21315158

ABSTRACT

Music perception generally involves processing the frequency relationships between successive pitches and extraction of the melodic contour. Previous evidence has suggested that the 'ups' and 'downs' of melodic contour are categorically and automatically processed, but knowledge of the brain regions that discriminate different types of contour is limited. Here, we examined melodic contour discrimination using multivariate pattern analysis (MVPA) of fMRI data. Twelve non-musicians were presented with various ascending and descending melodic sequences while being scanned. Whole-brain MVPA was used to identify regions in which the local pattern of activity accurately discriminated between contour categories. We identified three distinct cortical loci: the right superior temporal sulcus (rSTS), the left inferior parietal lobule (lIPL), and the anterior cingulate cortex (ACC). These results complement previous findings of melodic processing within the rSTS, and extend our understanding of the way in which abstract auditory sequences are categorized by the human brain.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Brain/physiology , Music , Acoustic Stimulation , Adult , Evoked Potentials, Auditory/physiology , Female , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Male
11.
Hear Res ; 229(1-2): 204-12, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17208397

ABSTRACT

Goal-directed behavior is the essence of adaptation because it allows humans and other animals to respond dynamically to different environmental scenarios. Goal-directed behavior can be characterized as the formation of dynamic links between stimuli and actions. One important attribute of goal-directed behavior is that linkages can be formed based on how a stimulus is categorized. That is, links are formed based on the membership of a stimulus in a particular functional category. In this review, we examine categorization with an emphasis on auditory categorization. We focus on the role of categorization in language and non-human vocalizations. We present behavioral data indicating that non-human primates categorize and respond to vocalizations based on differences in their putative meaning and not differences in their acoustics. Finally, we present evidence suggesting that the ventrolateral prefrontal cortex plays an important role in processing auditory objects and has a specific role in the representation of auditory categories.


Subject(s)
Auditory Perception/physiology , Animals , Behavior, Animal , Humans , Macaca mulatta , Prefrontal Cortex/physiology , Psychoacoustics , Vocalization, Animal
12.
Biol Psychol ; 129: 314-323, 2017 10.
Article in English | MEDLINE | ID: mdl-28964789

ABSTRACT

There has been recent debate over whether actions are processed primarily by means of motor simulation or cognitive semantics. The current study investigated how abstract action concepts are processed in the brain, independent of the format in which they are presented. Eighteen healthy adult participants viewed different actions (e.g., diving, boxing) in the form of verbs and schematic action pictograms while functional magnetic resonance imaging (fMRI) was collected. We predicted that sensorimotor and semantic brain regions would show similar patterns of neural activity for different instances of the same action (e.g., diving pictogram and the word 'diving'). A representational similarity analysis revealed posterior temporal and sensorimotor regions where specific action concepts were encoded, independent of the format of presentation. These results reveal the neural instantiations of abstract action concepts, and demonstrate that both sensorimotor and semantic systems are involved in processing actions.


Subject(s)
Brain/diagnostic imaging , Concept Formation/physiology , Motion , Adult , Brain/physiology , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
13.
Neuropsychologia ; 94: 52-60, 2017 Jan 08.
Article in English | MEDLINE | ID: mdl-27864027

ABSTRACT

Naming objects represents a substantial challenge for patients with chronic aphasia. This could be in part because the reorganized compensatory language networks of persons with aphasia may be less stable than the intact language systems of healthy individuals. Here, we hypothesized that the degree of stability would be instantiated by spatially differential neural patterns rather than either increased or diminished amplitudes of neural activity within a putative compensatory language system. We recruited a chronic aphasic patient (KL; 66 year-old male) who exhibited a semantic deficit (e.g., often said "milk" for "cow" and "pillow" for "blanket"). Over the course of four behavioral sessions involving a naming task performed in a mock scanner, we identified visual objects that yielded an approximately 50% success rate. We then conducted two fMRI sessions in which the patient performed a naming task for multiple exemplars of those objects. Multivoxel pattern analysis (MVPA) searchlight revealed differential activity patterns associated with correct and incorrect trials throughout intact brain regions. The most robust and largest cluster was found in the right occipito-temporal cortex encompassing fusiform cortex, lateral occipital cortex (LOC), and middle occipital cortex, which may account for the patient's propensity for semantic naming errors. None of these areas were found by a conventional univariate analysis. By using an alternative approach, we extend current evidence for compensatory naming processes that operate through spatially differential patterns within the reorganized language system.


Subject(s)
Aphasia/physiopathology , Aphasia/psychology , Brain/physiopathology , Pattern Recognition, Visual/physiology , Semantics , Speech/physiology , Aged , Aphasia/diagnostic imaging , Aphasia/etiology , Brain/diagnostic imaging , Brain Mapping , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Stroke/complications , Stroke/diagnostic imaging , Stroke/physiopathology , Stroke/psychology
14.
Hear Res ; 333: 108-117, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26723103

ABSTRACT

The information contained in a sensory signal plays a critical role in determining what neural processes are engaged. Here we used interleaved silent steady-state (ISSS) functional magnetic resonance imaging (fMRI) to explore how human listeners cope with different degrees of acoustic richness during auditory sentence comprehension. Twenty-six healthy young adults underwent scanning while hearing sentences that varied in acoustic richness (high vs. low spectral detail) and syntactic complexity (subject-relative vs. object-relative center-embedded clause structures). We manipulated acoustic richness by presenting the stimuli as unprocessed full-spectrum speech, or noise-vocoded with 24 channels. Importantly, although the vocoded sentences were spectrally impoverished, all sentences were highly intelligible. These manipulations allowed us to test how intelligible speech processing was affected by orthogonal linguistic and acoustic demands. Acoustically rich speech showed stronger activation than acoustically less-detailed speech in a bilateral temporoparietal network with more pronounced activity in the right hemisphere. By contrast, listening to sentences with greater syntactic complexity resulted in increased activation of a left-lateralized network including left posterior lateral temporal cortex, left inferior frontal gyrus, and left dorsolateral prefrontal cortex. Significant interactions between acoustic richness and syntactic complexity occurred in left supramarginal gyrus, right superior temporal gyrus, and right inferior frontal gyrus, indicating that the regions recruited for syntactic challenge differed as a function of acoustic properties of the speech. Our findings suggest that the neural systems involved in speech perception are finely tuned to the type of information available, and that reducing the richness of the acoustic signal dramatically alters the brain's response to spoken language, even when intelligibility is high.


Subject(s)
Auditory Pathways/physiology , Nerve Net/physiology , Speech Acoustics , Speech Intelligibility , Speech Perception , Voice Quality , Acoustic Stimulation/methods , Acoustics , Adult , Audiometry, Speech , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging , Male , Noise/adverse effects , Perceptual Masking , Sound Spectrography , Young Adult
15.
Psychon Bull Rev ; 22(1): 163-9, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24865280

ABSTRACT

Melody recognition entails the encoding of pitch intervals between successive notes. While it has been shown that a whole melodic sequence is better encoded than the sum of its constituent intervals, the underlying reasons have remained opaque. Here, we compared listeners' accuracy in encoding the relative pitch distance between two notes (for example, C, E) of an interval to listeners' accuracy under the following three modifications: (1) doubling the duration of each note (C - E -), (2) repetition of each note (C, C, E, E), and (3) adding a preceding note (G, C, E). Repeating (2) or adding an extra note (3) improved encoding of relative pitch distance when the melodic sequences were transposed to other keys, but lengthening the duration (1) did not improve encoding relative to the standard two-note interval sequences. Crucially, encoding accuracy was higher with the four-note sequences than with long two-note sequences despite the fact that sensory (pitch) information was held constant. We interpret the results to show that re-forming the Gestalts of two-note intervals into two-note "melodies" results in more accurate encoding of relational pitch information due to a richer structural context in which to embed the interval.


Subject(s)
Auditory Perception , Music , Pitch Perception , Recognition, Psychology , Adult , Female , Humans , Male , Perceptual Distortion , Time Perception , Young Adult
16.
PLoS One ; 8(7): e69566, 2013.
Article in English | MEDLINE | ID: mdl-23922740

ABSTRACT

Spatial smoothness is helpful when averaging fMRI signals across multiple subjects, as it allows different subjects' corresponding brain areas to be pooled together even if they are slightly misaligned. However, smoothing is usually not applied when performing multivoxel pattern-based analyses (MVPA), as it runs the risk of blurring away the information that fine-grained spatial patterns contain. It would therefore be desirable, if possible, to carry out pattern-based analyses which take unsmoothed data as their input but which produce smooth images as output. We show here that the Gaussian Naive Bayes (GNB) classifier does precisely this, when it is used in "searchlight" pattern-based analyses. We explain why this occurs, and illustrate the effect in real fMRI data. Moreover, we show that analyses using GNBs produce results at the multi-subject level which are statistically robust, neurally plausible, and which replicate across two independent data sets. By contrast, SVM classifiers applied to the same data do not generate a replication, even if the SVM-derived searchlight maps have smoothing applied to them. An additional advantage of GNB classifiers for searchlight analyses is that they are orders of magnitude faster to compute than more complex alternatives such as SVMs. Collectively, these results suggest that Gaussian Naive Bayes classifiers may be a highly non-naive choice for multi-subject pattern-based fMRI studies.


Subject(s)
Algorithms , Magnetic Resonance Imaging , Bayes Theorem , Humans , Normal Distribution , Reproducibility of Results , Support Vector Machine
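The searchlight use of a Gaussian Naive Bayes classifier can be sketched on toy data. This is a simplified 1-D analogue of the 3-D fMRI searchlight (simulated "voxels", an injected signal, and a sliding window instead of a sphere), using scikit-learn's `GaussianNB`; it illustrates the per-location classification step, not the smoothness result itself.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical data: 40 trials x 64 "voxels", two stimulus classes
n_trials, n_voxels, radius = 40, 64, 2
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1, 30:34] += 1.0  # inject class signal into a few voxels

# 1-D "searchlight": classify from a small window around each voxel,
# producing a map of cross-validated decoding accuracy
accuracy_map = np.zeros(n_voxels)
for v in range(n_voxels):
    lo, hi = max(0, v - radius), min(n_voxels, v + radius + 1)
    scores = cross_val_score(GaussianNB(), X[:, lo:hi], y, cv=5)
    accuracy_map[v] = scores.mean()

# Accuracy should be high around the informative voxels (30-33)
# and near chance (0.5) elsewhere
print(accuracy_map[30:34].mean(), accuracy_map[:20].mean())
```

Because GNB fits only a per-feature mean and variance, each window's fit is orders of magnitude cheaper than an SVM's, which is the speed advantage the abstract highlights for whole-brain searchlight mapping.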