Results 1 - 20 of 7,311

1.
Acta Psychol (Amst) ; 246: 104250, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38615596

ABSTRACT

Percepts, urges, and even high-level cognitions often enter the conscious field involuntarily. The Reflexive Imagery Task (RIT) was designed to investigate experimentally the nature of such entry into consciousness. In the most basic version of the task, participants are instructed not to subvocalize the names of visual objects; involuntary subvocalizations nevertheless arise on the majority of trials. Can these effects be influenced by priming? In our experiment, participants were exposed to an auditory prime 300 ms before being presented with the RIT stimuli. For example, participants heard the word "FOOD" before seeing two RIT stimuli (e.g., line drawings of BANANA and CAT, with the former being the target of the prime). The short span between prime and target allowed us to assess whether the RIT effect is strategic or automatic. Before each trial, participants were instructed to disregard what they heard and not to think of the name of any of the objects. On average, participants involuntarily thought of the name of the object associated with the prime on 83% of trials. This is the first study to use a priming technique within the context of the RIT. The theoretical implications of these involuntary effects are discussed.


Subjects
Imagination, Humans, Imagination/physiology, Male, Female, Adult, Young Adult, Reaction Time/physiology, Consciousness/physiology, Acoustic Stimulation, Photic Stimulation, Repetition Priming/physiology, Auditory Perception/physiology
2.
PLoS One ; 19(4): e0300219, 2024.
Article in English | MEDLINE | ID: mdl-38568916

ABSTRACT

Aphantasia is characterised by the inability to create mental images in one's mind. Studies investigating impairments in imagery typically focus on the visual domain. However, it is possible to generate many different forms of imagery, including imagined auditory, kinesthetic, tactile, motor, taste and other experiences. Recent studies show that individuals with aphantasia report a lack of imagery in modalities other than vision, including audition. However, to date, no research has examined whether these reductions in self-reported auditory imagery are associated with decrements in tasks that require auditory imagery. Understanding the extent to which visual and auditory imagery deficits co-occur can help to better characterise the core deficits of aphantasia and provide an alternative perspective on theoretical debates about the extent to which imagery draws on modality-specific or modality-general processes. In the current study, individuals who self-identified as aphantasic and matched control participants with typical imagery performed two tasks: a musical pitch-based imagery task and a voice-based categorisation task. The majority of participants with aphantasia self-reported significant deficits in both auditory and visual imagery. However, we did not find a concomitant decrease in performance on tasks requiring auditory imagery, either in the full sample or when considering only those participants who reported significant deficits in both domains. These findings are discussed in relation to the mechanisms that might obscure observation of imagery deficits in auditory imagery tasks in people who report reduced auditory imagery.


Subjects
Imagery, Psychotherapy, Imagination, Humans, Self Report, Imagery, Psychotherapy/methods, Auditory Perception
3.
Cogn Neuropsychiatry ; 29(2): 87-102, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38363282

ABSTRACT

INTRODUCTION: Vivid mental imagery has been proposed to increase the likelihood of experiencing hallucinations. Typically, studies have employed a modality-general approach to mental imagery, which compares imagery across multiple domains (e.g., visual, auditory, and tactile) to hallucinations in multiple senses. However, modality-specific imagery may be a better predictor of hallucinations in the same domain. This study examined the contribution of imagery to hallucinations in a non-clinical sample, and specifically whether imagery best predicted hallucinations at a modality-general or modality-specific level. METHODS: In study one, modality-general and modality-specific accounts of the imagery-hallucination relationship were contrasted through application of self-report measures in a sample of 434 students. Study two used a subsample (n = 103) to extend exploration of the imagery-hallucination relationship using a performance-based imagery task. RESULTS: A small-to-moderate modality-general relationship was observed between self-report imagery and hallucination proneness. There was evidence of a modality-specific relationship only in the tactile domain. Performance-based imagery measures were unrelated to hallucinations and self-report imagery. CONCLUSIONS: Mental imagery may act as a modality-general process increasing hallucination proneness. The observed distinction between self-report and performance-based imagery highlights the difficulty of accurately measuring internal processes.


Assuntos
Alucinações , Imaginação , Autorrelato , Humanos , Alucinações/psicologia , Feminino , Masculino , Adulto , Adulto Jovem , Adolescente , Percepção Visual , Percepção Auditiva
4.
Neuroreport ; 35(4): 269-276, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38305131

ABSTRACT

This study explored how the human brain perceives stickiness through tactile and auditory channels, especially when presented with congruent or incongruent intensity cues. In our behavioral and functional MRI (fMRI) experiments, we presented participants with adhesive tape stimuli at two different intensities. The congruent condition involved providing stickiness stimuli with matching intensity cues in both auditory and tactile channels, whereas the incongruent condition involved cues of different intensities. Behavioral results showed that participants were able to distinguish between the congruent and incongruent conditions with high accuracy. Through fMRI searchlight analysis, we tested which brain regions could distinguish between congruent and incongruent conditions, and as a result, we identified the superior temporal gyrus, known primarily for auditory processing. Interestingly, we did not observe any significant activation in regions associated with somatosensory or motor functions. This indicates that the brain dedicates more attention to auditory cues than to tactile cues, possibly due to the unfamiliarity of conveying the sensation of stickiness through sound. Our results could provide new perspectives on the complexities of multisensory integration, highlighting the subtle yet significant role of auditory processing in understanding tactile properties such as stickiness.
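For readers unfamiliar with searchlight decoding, the sketch below shows a minimal, hypothetical version of such an analysis in Python with nilearn: a cross-validated linear classifier is run in a small sphere around each voxel to map where congruent versus incongruent audio-tactile trials can be distinguished. The file names, labels, and parameters are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of a searchlight analysis distinguishing congruent from
# incongruent audio-tactile trials. File paths, labels, and parameters are
# illustrative assumptions.
import numpy as np
from nilearn.decoding import SearchLight
from nilearn.image import load_img
from sklearn.svm import LinearSVC
from sklearn.model_selection import KFold

beta_imgs = load_img("sub-01_trialwise_betas.nii.gz")   # 4D image: one beta map per trial (assumed)
labels = np.loadtxt("sub-01_trial_labels.txt")          # 1 = congruent, 0 = incongruent (assumed)
mask_img = load_img("sub-01_brain_mask.nii.gz")         # whole-brain analysis mask (assumed)

searchlight = SearchLight(
    mask_img,
    radius=6.0,                  # sphere radius in mm
    estimator=LinearSVC(),       # linear classifier fit within each sphere
    cv=KFold(n_splits=5),        # cross-validated decoding accuracy
    n_jobs=-1,
)
searchlight.fit(beta_imgs, labels)

# searchlight.scores_ is a voxel-wise map of decoding accuracy that can be
# thresholded to identify regions (e.g., superior temporal gyrus) that
# discriminate congruent from incongruent stimulation.
```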


Subjects
Auditory Perception, Magnetic Resonance Imaging, Humans, Acoustic Stimulation/methods, Auditory Perception/physiology, Brain/diagnostic imaging, Brain/physiology, Temporal Lobe, Visual Perception/physiology
5.
PLoS Biol ; 22(2): e3002494, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319934

ABSTRACT

Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the 2 age groups. This dissociation-between comparable information encoded in brain activation patterns across the 2 age groups, but age-related increases in regional blood-oxygen-level-dependent responses-contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.


Assuntos
Mapeamento Encefálico , Percepção Visual , Humanos , Idoso , Teorema de Bayes , Percepção Visual/fisiologia , Encéfalo/fisiologia , Atenção/fisiologia , Estimulação Acústica/métodos , Percepção Auditiva/fisiologia , Estimulação Luminosa/métodos , Imageamento por Ressonância Magnética
6.
Sci Rep ; 14(1): 3262, 2024 02 08.
Article in English | MEDLINE | ID: mdl-38332159

ABSTRACT

The McGurk effect refers to an audiovisual speech illusion in which discrepant auditory and visual syllables produce a percept that fuses the visual and auditory components. However, little is known about how individual differences contribute to the McGurk effect. Here, we examined whether music training experience, which involves audiovisual integration, can modulate the McGurk effect. Seventy-three participants completed the Goldsmiths Musical Sophistication Index (Gold-MSI) questionnaire to evaluate their music expertise on a continuous scale. The Gold-MSI considers participants' daily-life exposure to music learning experiences (formal and informal), instead of merely classifying people into groups according to how many years they have been trained in music. Participants were instructed to report, via a 3-alternative forced-choice task, "what a person said": /Ba/, /Ga/ or /Da/. The experiment consisted of 96 audiovisual congruent trials and 96 audiovisual incongruent (McGurk) trials. We observed no significant correlations between susceptibility to the McGurk effect and the different subscales of the Gold-MSI (active engagement, perceptual abilities, music training, singing abilities, emotion) or the general musical sophistication composite score. Together, these findings suggest that music training experience does not modulate audiovisual integration in speech as reflected by the McGurk effect.
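As an illustration of the kind of correlation analysis described (not the authors' code), the sketch below computes each participant's McGurk susceptibility as the proportion of fused responses on incongruent trials and correlates it with Gold-MSI subscale scores. The data file and column names are assumptions.

```python
# Illustrative sketch: correlating McGurk susceptibility with Gold-MSI scores.
# The CSV file and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("mcgurk_goldmsi.csv")  # one row per participant (assumed)

# Susceptibility = proportion of fused (/Da/) responses on incongruent (McGurk)
# trials, assumed to be precomputed in the column "mcgurk_fused_prop".
subscales = ["active_engagement", "perceptual_abilities", "music_training",
             "singing_abilities", "emotion", "general_sophistication"]

for sub in subscales:
    r, p = pearsonr(df["mcgurk_fused_prop"], df[sub])
    print(f"{sub}: r = {r:.3f}, p = {p:.3f}")
```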


Assuntos
Música , Percepção da Fala , Humanos , Percepção Visual , Fala , Ouro , Percepção Auditiva , Estimulação Acústica
7.
Elife ; 13, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38334469

ABSTRACT

Orbitofrontal cortex (OFC) is classically linked to inhibitory control, emotion regulation, and reward processing. Recent perspectives propose that the OFC also generates predictions about perceptual events, actions, and their outcomes. We tested the role of the OFC in detecting violations of prediction at two levels of abstraction (i.e., hierarchical predictive processing) by studying the event-related potentials (ERPs) of patients with focal OFC lesions (n = 12) and healthy controls (n = 14) while they detected deviant sequences of tones in a local-global paradigm. The structural regularities of the tones were controlled at two hierarchical levels by rules defined at a local (i.e., between tones within sequences) and at a global (i.e., between sequences) level. In OFC patients, ERPs elicited by standard tones were unaffected at both local and global levels compared to controls. However, patients showed an attenuated mismatch negativity (MMN) and P3a to local prediction violation, as well as a diminished MMN followed by a delayed P3a to the combined local and global level prediction violation. The subsequent P3b component to conditions involving violations of prediction at the level of global rules was preserved in the OFC group. Comparable effects were absent in patients with lesions restricted to the lateral PFC, which lends a degree of anatomical specificity to the altered predictive processing resulting from OFC lesion. Overall, the altered magnitudes and time courses of MMN/P3a responses after lesions to the OFC indicate that the neural correlates of detection of auditory regularity violation are impacted at two hierarchical levels of rule abstraction.


Assuntos
Córtex Auditivo , Potenciais Evocados Auditivos , Humanos , Potenciais Evocados Auditivos/fisiologia , Estimulação Acústica/métodos , Eletroencefalografia/métodos , Percepção Auditiva/fisiologia , Córtex Pré-Frontal , Córtex Auditivo/fisiologia
8.
Nat Commun ; 15(1): 1482, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38369535

ABSTRACT

The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even as far as to induce preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
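To make the shape of such a composite model concrete, the toy sketch below combines the three ingredients named in the abstract (liking of harmonicity, disliking of roughness from fast beats, liking of slow beats) as a simple weighted sum. The feature values and weights are placeholders, not the authors' fitted model.

```python
# Toy illustration of a composite consonance model: a weighted sum of
# harmonicity (liked), roughness from fast beating (disliked), and slow
# beating (liked). Feature functions and weights are placeholders.
from dataclasses import dataclass

@dataclass
class ConsonanceModel:
    w_harmonicity: float = 1.0   # positive weight: harmonicity increases pleasantness
    w_roughness: float = -1.0    # negative weight: fast beating decreases pleasantness
    w_slow_beats: float = 0.5    # positive weight: slow beating increases pleasantness

    def score(self, harmonicity: float, roughness: float, slow_beats: float) -> float:
        """Composite pleasantness score for a dyad, given precomputed features."""
        return (self.w_harmonicity * harmonicity
                + self.w_roughness * roughness
                + self.w_slow_beats * slow_beats)

model = ConsonanceModel()
# Hypothetical feature values for a fifth played with harmonic timbres:
print(model.score(harmonicity=0.9, roughness=0.1, slow_beats=0.2))
```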


Assuntos
Música , Humanos , Psicoacústica , Música/psicologia , Percepção Auditiva , Emoções , Julgamento , Estimulação Acústica
9.
Autism Res ; 17(2): 280-310, 2024 02.
Article in English | MEDLINE | ID: mdl-38334251

ABSTRACT

Autistic individuals show substantially reduced benefit from observing visual articulations during audiovisual speech perception, a multisensory integration deficit that is particularly relevant to social communication. This has mostly been studied using simple syllabic or word-level stimuli, and it remains unclear how altered lower-level multisensory integration translates to the processing of more complex natural multisensory stimulus environments in autism. Here, functional neuroimaging was used to compare neural correlates of audiovisual gain (AV-gain) in 41 autistic individuals with those of 41 age-matched non-autistic controls when presented with a complex audiovisual narrative. Participants were presented with continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions. We hypothesized that previously identified differences in audiovisual speech processing in autism would be characterized by activation differences in brain regions well known to be associated with audiovisual enhancement in neurotypicals. However, our results did not provide evidence for altered processing of auditory-alone, visual-alone, or audiovisual conditions, or of AV-gain, in regions associated with the respective task when comparing activation patterns between groups. Instead, we found that autistic individuals responded with higher activations in mostly frontal regions where the activation to the experimental conditions was below baseline (de-activations) in the control group. These frontal effects were observed in both unisensory and audiovisual conditions, suggesting that these altered activations were not specific to multisensory processing but reflective of more general mechanisms, such as an altered disengagement of Default Mode Network processes during the observation of the language stimulus across conditions.


Assuntos
Transtorno do Espectro Autista , Transtorno Autístico , Percepção da Fala , Adulto , Criança , Humanos , Percepção da Fala/fisiologia , Narração , Percepção Visual/fisiologia , Transtorno do Espectro Autista/diagnóstico por imagem , Imageamento por Ressonância Magnética , Percepção Auditiva/fisiologia , Estimulação Acústica/métodos , Estimulação Luminosa/métodos
10.
Sci Rep ; 14(1): 4586, 2024 02 26.
Article in English | MEDLINE | ID: mdl-38403782

ABSTRACT

Predictive processing in the brain, involving interaction between interoceptive (bodily signal) and exteroceptive (sensory) processing, is essential for understanding music as it encompasses musical temporality dynamics and affective responses. This study explores the relationship between neural correlates and subjective certainty of chord prediction, focusing on the alignment between predicted and actual chord progressions in both musically appropriate chord sequences and random chord sequences. Participants were asked to predict the final chord in sequences while their brain activity was measured using electroencephalography (EEG). We found that the stimulus-preceding negativity (SPN), an EEG component associated with predictive processing of sensory stimuli, was larger for non-harmonic chord sequences than for harmonic chord progressions. Additionally, the heartbeat-evoked potential (HEP), an EEG component related to interoceptive processing, was larger for random chord sequences and correlated with prediction certainty ratings. HEP also correlated with the N5 component, found while listening to the final chord. Our findings suggest that HEP more directly reflects the subjective prediction certainty than SPN. These findings offer new insights into the neural mechanisms underlying music perception and prediction, emphasizing the importance of considering auditory prediction certainty when examining the neural basis of music cognition.


Assuntos
Potenciais Evocados Auditivos , Música , Humanos , Estimulação Acústica , Potenciais Evocados Auditivos/fisiologia , Percepção Auditiva/fisiologia , Incerteza , Eletroencefalografia , Música/psicologia
11.
J Neural Eng ; 21(1), 2024 02 06.
Article in English | MEDLINE | ID: mdl-38266281

ABSTRACT

Objective: Spatial auditory attention decoding (Sp-AAD) refers to the task of identifying the direction of the speaker to which a person is attending in a multi-talker setting, based on the listener's neural recordings, e.g. electroencephalography (EEG). The goal of this study is to thoroughly investigate potential biases when training such Sp-AAD decoders on EEG data, particularly eye-gaze biases and latent trial-dependent confounds, which may result in Sp-AAD models that decode eye-gaze or trial-specific fingerprints rather than spatial auditory attention. Approach: We designed a two-speaker audiovisual Sp-AAD protocol in which the spatial auditory and visual attention were enforced to be either congruent or incongruent, and we recorded EEG data from sixteen participants undergoing several trials recorded at distinct timepoints. We trained a simple linear model for Sp-AAD based on common spatial patterns filters in combination with either linear discriminant analysis (LDA) or k-means clustering, and evaluated them both across- and within-trial. Main results: We found that even a simple linear Sp-AAD model is susceptible to overfitting to confounding signal patterns such as eye-gaze and trial fingerprints (e.g. due to feature shifts across trials), resulting in artificially high decoding accuracies. Furthermore, we found that changes in the EEG signal statistics across trials deteriorate the trial generalization of the classifier, even when the latter is retrained on the test trial with an unsupervised algorithm. Significance: Collectively, our findings confirm that there exist subtle biases and confounds that can strongly interfere with the decoding of spatial auditory attention from EEG. It is expected that more complicated non-linear models based on deep neural networks, which are often used for Sp-AAD, are even more vulnerable to such biases. Future work should perform experiments and model evaluations that avoid and/or control for such biases in Sp-AAD tasks.
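A minimal sketch of the kind of linear Sp-AAD decoder described above (common spatial patterns features followed by LDA, evaluated across trials) is given below using MNE-Python and scikit-learn. The data files, shapes, and labels are assumptions; leaving out whole trials illustrates one way to guard against the trial-fingerprint confound the study highlights.

```python
# Sketch of a linear Sp-AAD decoder: CSP spatial filters + LDA, with
# leave-trials-out cross-validation. Data files and labels are assumptions.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, GroupKFold

X = np.load("eeg_epochs.npy")      # shape (n_epochs, n_channels, n_samples), assumed
y = np.load("attended_side.npy")   # 0 = left speaker attended, 1 = right, assumed
trials = np.load("trial_ids.npy")  # trial membership of each epoch, assumed

clf = make_pipeline(
    CSP(n_components=4, log=True),     # spatial filters + log-variance features
    LinearDiscriminantAnalysis(),
)

# Holding out whole trials avoids within-trial leakage, which can otherwise
# inflate accuracy through trial-specific fingerprints.
cv = GroupKFold(n_splits=5)
scores = cross_val_score(clf, X, y, cv=cv, groups=trials)
print(f"across-trial decoding accuracy: {scores.mean():.2f}")
```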


Assuntos
Percepção Auditiva , Percepção da Fala , Humanos , Estimulação Acústica/métodos , Eletroencefalografia/métodos , Viés
12.
eNeuro ; 11(3), 2024 Mar.
Article in English | MEDLINE | ID: mdl-38253583

ABSTRACT

The neural mechanisms underlying the exogenous coding and neural entrainment to repetitive auditory stimuli have seen a recent surge of interest. However, few studies have characterized how parametric changes in stimulus presentation alter entrained responses. We examined the degree to which the brain entrains to repeated speech (i.e., /ba/) and nonspeech (i.e., click) sounds using phase-locking value (PLV) analysis applied to multichannel human electroencephalogram (EEG) data. Passive cortico-acoustic tracking was investigated in N = 24 normal young adults utilizing EEG source analyses that isolated neural activity stemming from both auditory temporal cortices. We parametrically manipulated the rate and periodicity of repetitive, continuous speech and click stimuli to investigate how speed and jitter in ongoing sound streams affect oscillatory entrainment. Neuronal synchronization to speech was enhanced at 4.5 Hz (the putative universal rate of speech) and showed a differential pattern to that of clicks, particularly at higher rates. PLV to speech decreased with increasing jitter but remained superior to clicks. Surprisingly, PLV entrainment to clicks was invariant to periodicity manipulations. Our findings provide evidence that the brain's neural entrainment to complex sounds is enhanced and more sensitized when processing speech-like stimuli, even at the syllable level, relative to nonspeech sounds. The fact that this specialization is apparent even under passive listening suggests a priority of the auditory system for synchronizing to behaviorally relevant signals.
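For orientation, the phase-locking value used in such cortico-acoustic tracking analyses is the magnitude of the time-averaged phase-difference vector, PLV = |(1/N) Σ_t exp(i(φ_x(t) − φ_y(t)))|, which ranges from 0 (no phase locking) to 1 (perfect locking). The sketch below computes it between a band-pass filtered EEG trace and a stimulus envelope; the signals, sampling rate, and filter band are illustrative assumptions, not the study's data.

```python
# Minimal sketch of a phase-locking value (PLV) computation between a band-pass
# filtered EEG channel and an acoustic envelope. Signals and parameters are
# illustrative only.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def plv(x, y):
    """PLV = |mean over time of exp(i * (phase_x - phase_y))|, in [0, 1]."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 500.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
envelope = np.sin(2 * np.pi * 4.5 * t)      # stand-in for a 4.5 Hz speech envelope
eeg = 0.5 * np.sin(2 * np.pi * 4.5 * t + 0.8) + np.random.randn(t.size)  # noisy, phase-lagged "EEG"

print(plv(bandpass(eeg, 3.5, 5.5, fs), envelope))  # high PLV indicates entrainment near 4.5 Hz
```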


Assuntos
Córtex Auditivo , Percepção da Fala , Adulto Jovem , Humanos , Estimulação Acústica , Percepção da Fala/fisiologia , Som , Eletroencefalografia , Periodicidade , Córtex Auditivo/fisiologia , Percepção Auditiva/fisiologia
13.
Nat Commun ; 15(1): 148, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38168097

ABSTRACT

Music exists in almost every society, has universal acoustic features, and is processed by distinct neural circuits in humans even with no experience of musical training. However, it remains unclear how these innate characteristics emerge and what functions they serve. Here, using an artificial deep neural network that models the auditory information processing of the brain, we show that units tuned to music can spontaneously emerge by learning natural sound detection, even without learning music. The music-selective units encoded the temporal structure of music in multiple timescales, following the population-level response characteristics observed in the brain. We found that the process of generalization is critical for the emergence of music-selectivity and that music-selectivity can work as a functional basis for the generalization of natural sound, thereby elucidating its origin. These findings suggest that evolutionary adaptation to process natural sounds can provide an initial blueprint for our sense of music.


Assuntos
Música , Humanos , Estimulação Acústica , Percepção Auditiva/fisiologia , Encéfalo/fisiologia , Audição
14.
Cereb Cortex ; 34(2), 2024 01 31.
Article in English | MEDLINE | ID: mdl-38183184

ABSTRACT

Auditory sensory processing is assumed to occur in a hierarchical structure including the primary auditory cortex (A1), superior temporal gyrus, and frontal areas. These areas are postulated to generate predictions for incoming stimuli, creating an internal model of the surrounding environment. Previous studies on mismatch negativity have indicated the involvement of the superior temporal gyrus in this processing, whereas reports have been mixed regarding the contribution of the frontal cortex. We designed a novel auditory paradigm, the "cascade roving" paradigm, which incorporated complex structures (cascade sequences) into a roving paradigm. We analyzed electrocorticography data from six patients with refractory epilepsy who passively listened to this novel auditory paradigm and detected responses to deviants mainly in the superior temporal gyrus and inferior frontal gyrus. Notably, the inferior frontal gyrus exhibited broader distribution and sustained duration of deviant-elicited responses, seemingly differing in spatio-temporal characteristics from the prediction error responses observed in the superior temporal gyrus, compared with conventional oddball paradigms performed on the same participants. Moreover, we observed that the deviant responses were enhanced through stimulus repetition in the high-gamma range mainly in the superior temporal gyrus. These features of the novel paradigm may aid in our understanding of auditory predictive coding.


Assuntos
Córtex Auditivo , Eletrocorticografia , Humanos , Eletroencefalografia , Potenciais Evocados Auditivos/fisiologia , Córtex Auditivo/fisiologia , Lobo Temporal/fisiologia , Estimulação Acústica , Percepção Auditiva/fisiologia
15.
Curr Biol ; 34(2): 444-450.e5, 2024 01 22.
Article in English | MEDLINE | ID: mdl-38176416

ABSTRACT

The appreciation of music is a universal trait of humankind [1,2,3]. Evidence supporting this notion includes the ubiquity of music across cultures [4,5,6,7] and the natural predisposition toward music that humans display early in development [8,9,10]. Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation [11]. Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features, pitch and timing [12], in generating expectations: while timing- and pitch-based expectations [13] are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.


Assuntos
Música , Animais , Percepção da Altura Sonora/fisiologia , Motivação , Eletroencefalografia/métodos , Primatas , Estimulação Acústica , Percepção Auditiva/fisiologia
16.
Hear Res ; 441: 108923, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38091866

ABSTRACT

According to the latest frameworks, auditory perception and memory involve the constant prediction of future sound events by the brain, based on the continuous extraction of feature regularities from the environment. The neural hierarchical mechanisms for predictive processes in perception and memory for sounds are typically studied in relation to simple acoustic features in isolated sounds or sound patterns inserted in highly certain contexts. Such studies have identified reliable prediction formation and error signals, e.g., the N100 or the mismatch negativity (MMN) evoked responses. In real life, though, individuals often face situations in which uncertainty prevails and where making sense of sounds becomes a hard challenge. In music, not only deviations from predictions are masterly set up by composers to induce emotions but sometimes the sheer uncertainty of sound scenes is exploited for aesthetic purposes, especially in compositional styles such as Western atonal classical music. In very recent magnetoencephalography (MEG) and electroencephalography (EEG) studies, experimental and technical advances in stimulation paradigms and analysis approaches have permitted the identification of prediction-error responses from highly uncertain, atonal contexts and the extraction of prediction-related responses from real, continuous music. Moreover, functional connectivity analyses revealed the emergence of cortico-hippocampal interactions during the formation of auditory memories for more predictable vs. less predictable patterns. These findings contribute to understanding the general brain mechanisms that enable us to predict even highly uncertain sound environments and to possibly make sense of and appreciate even atonal music.


Assuntos
Potenciais Evocados Auditivos , Música , Humanos , Estimulação Acústica , Potenciais Evocados Auditivos/fisiologia , Música/psicologia , Eletroencefalografia , Neurofisiologia , Percepção Auditiva/fisiologia
17.
Conscious Cogn ; 117: 103598, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38086154

ABSTRACT

Little is known about the perceptual characteristics of mental images or how they vary across sensory modalities. We conducted an exhaustive survey into how mental images are experienced across modalities, mainly targeting visual and auditory imagery of a single stimulus, the letter "O", to facilitate direct comparisons. We investigated temporal properties of mental images (e.g., onset latency, duration), spatial properties (e.g., apparent location), effort (e.g., ease, spontaneity, control), movement requirements (e.g., eye movements), real-imagined interactions (e.g., inner speech while reading), beliefs about imagery norms and terminologies, as well as respondent confidence. Participants also reported on the five traditional senses and their prominence during thinking, imagining, and dreaming. Overall, visual and auditory experiences dominated mental events, although auditory mental images were superior to visual mental images on almost every metric tested except spatial properties. Our findings suggest that modality-specific differences in mental imagery may parallel those of other sensory neural processes.


Assuntos
Imaginação , Sensação , Humanos , Percepção Visual , Imagens, Psicoterapia , Percepção Auditiva
18.
Cortex ; 171: 287-307, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38061210

ABSTRACT

The spectral formant structure and periodicity pitch are the major features that determine the identity of vowels and the characteristics of the speaker. However, very little is known about how the processing of these features in the auditory cortex changes during development. To address this question, we independently manipulated the periodicity and formant structure of vowels while measuring auditory cortex responses using magnetoencephalography (MEG) in children aged 7-12 years and adults. We analyzed the sustained negative shift of source current associated with these vowel properties, which was present in the auditory cortex in both age groups despite differences in the transient components of the auditory response. In adults, the sustained activation associated with formant structure was lateralized to the left hemisphere early in the auditory processing stream requiring neither attention nor semantic mapping. This lateralization was not yet established in children, in whom the right hemisphere contribution to formant processing was strong and decreased during or after puberty. In contrast to the formant structure, periodicity was associated with a greater response in the right hemisphere in both children and adults. These findings suggest that left-lateralization for the automatic processing of vowel formant structure emerges relatively late in ontogenesis and pose a serious challenge to current theories of hemispheric specialization for speech processing.


Assuntos
Córtex Auditivo , Percepção da Fala , Adulto , Humanos , Criança , Córtex Auditivo/fisiologia , Estimulação Acústica , Percepção Auditiva/fisiologia , Magnetoencefalografia , Fala/fisiologia , Percepção da Fala/fisiologia
19.
Psychophysiology ; 61(2): e14450, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37779371

ABSTRACT

There are sounds that most people perceive as highly unpleasant, for instance, the sound of rubbing pieces of polystyrene together. Previous research showed larger physiological and neural responses for such aversive compared to neutral sounds. Hitherto, it remains unclear whether habituation, i.e., diminished responses to repeated stimulus presentation, which is typically reported for neutral sounds, occurs to the same extent for aversive stimuli. We measured the mismatch negativity (MMN) in response to rare occurrences of aversive or neutral deviant sounds within an auditory oddball sequence in 24 healthy participants, while they performed a demanding visual distractor task. Deviants occurred as single events (i.e., between two standards) or as double deviants (i.e., repeating the identical deviant sound in two consecutive trials). All deviants elicited a clear MMN, and amplitudes were larger for aversive than for neutral deviants (irrespective of their position within a deviant pair). This supports the claim of preattentive emotion evaluation during early auditory processing. In contrast to our expectations, MMN amplitudes did not show habituation, but increased in response to deviant repetition-similarly for aversive and neutral deviants. A more fine-grained analysis of individual MMN amplitudes in relation to individual arousal and valence ratings of each sound item revealed that stimulus-specific MMN amplitudes were best predicted by the interaction of deviant position and perceived arousal, but not by valence. Deviants with perceived higher arousal elicited larger MMN amplitudes only at the first deviant position, indicating that the MMN reflects preattentive processing of the emotional content of sounds.
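The stimulus-specific analysis described above lends itself to a linear mixed model with a deviant position by arousal interaction and a random intercept per participant. The sketch below shows one hypothetical way to specify it with statsmodels; the data file and column names are assumptions, not the authors' exact model.

```python
# Illustrative mixed-effects analysis (not the authors' exact model): does the
# interaction of deviant position and rated arousal predict stimulus-specific
# MMN amplitude, with valence as a covariate and participant as a grouping factor?
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant x sound item x deviant position (hypothetical file).
df = pd.read_csv("mmn_item_amplitudes.csv")

model = smf.mixedlm(
    "mmn_amplitude ~ position * arousal + valence",
    data=df,
    groups=df["participant"],   # random intercept per participant
)
result = model.fit()
print(result.summary())  # the position:arousal term tests the reported interaction
```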


Assuntos
Eletroencefalografia , Potenciais Evocados Auditivos , Humanos , Potenciais Evocados Auditivos/fisiologia , Habituação Psicofisiológica , Percepção Auditiva/fisiologia , Som , Estimulação Acústica
20.
Perception ; 53(1): 31-43, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37872670

ABSTRACT

We present an experimental study aiming to explore how spatial attention may be biased through auditory stimuli. In particular, we investigate how synchronous sound and image may affect attention and increase the saliency of the audiovisual event. We designed and implemented an experiment in which subjects, wearing an eye-tracking system, were examined with respect to their gaze toward the audiovisual stimuli being displayed. The audiovisual stimuli were specifically tailored for this experiment, consisting of videos contrasting in terms of Synch Points (i.e., moments where a visual event is associated with a visible trigger movement, synchronous with its corresponding sound). While consistency across audiovisual sensory modalities proved to be an attention-drawing feature, when combined with synchrony it clearly strengthened this bias, triggering orienting (i.e., focal attention) toward the particular scene containing the Synch Point. Consequently, the results revealed synchrony to be a saliency factor, contributing to the strengthening of focal attention.


Assuntos
Percepção Auditiva , Percepção Visual , Humanos , Som , Movimento , Tecnologia de Rastreamento Ocular , Estimulação Acústica , Estimulação Luminosa