Results 1 - 20 of 351
2.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38897817

ABSTRACT

Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.


Subjects
Auditory Cortex , Blindness , Magnetic Resonance Imaging , Visual Cortex , Humans , Blindness/physiopathology , Blindness/diagnostic imaging , Male , Adult , Female , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Auditory Cortex/physiopathology , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Young Adult , Neuronal Plasticity/physiology , Acoustic Stimulation , Brain Mapping , Middle Aged , Auditory Perception/physiology , Echolocation/physiology
3.
Behav Res Methods ; 56(7): 7561-7573, 2024 10.
Article in English | MEDLINE | ID: mdl-38750387

ABSTRACT

While several methods have been proposed to assess the influence of continuous visual cues in parallel numerosity estimation, the impact of temporal magnitudes on sequential numerosity judgments has been largely ignored. To overcome this issue, we extend a recently proposed framework that makes it possible to separate the contribution of numerical and non-numerical information in numerosity comparison by introducing a novel stimulus space designed for sequential tasks. Our method systematically varies the temporal magnitudes embedded into event sequences through the orthogonal manipulation of numerosity and two latent factors, which we designate as "duration" and "temporal spacing". This allows us to measure the contribution of finer-grained temporal features to numerosity judgments in several sensory modalities. We validate the proposed method in two experiments in the visual and auditory modalities: results show that adult participants discriminated sequences primarily by relying on numerosity, with similar acuity in the visual and auditory modalities. However, participants were similarly influenced by non-numerical cues, such as the total duration of the stimuli, suggesting that temporal cues can significantly bias numerical processing. Our findings highlight the need to carefully consider the continuous properties of numerical stimuli in a sequential mode of presentation as well, with particular relevance in multimodal and cross-modal investigations. We provide the complete code for creating sequential stimuli and analyzing participants' responses.
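The orthogonal manipulation described above lends itself to a brief illustration. The following Python sketch (hypothetical names and parameter values, not the authors' released code) builds event sequences in which numerosity, item duration, and temporal spacing are varied independently, so that total sequence duration is decoupled from numerosity:

```python
# Hypothetical sketch of a sequential numerosity stimulus generator.
# Numerosity, item duration, and temporal spacing are varied orthogonally,
# so total sequence duration is not a trivial function of numerosity alone.
import itertools
import random

def make_sequence(numerosity, item_duration, spacing):
    """Return a list of (onset, offset) times in seconds for one sequence."""
    events = []
    onset = 0.0
    for _ in range(numerosity):
        events.append((onset, onset + item_duration))
        onset += item_duration + spacing
    return events

# Orthogonal factor levels (hypothetical values).
numerosities = [8, 10, 12, 14]
durations = [0.05, 0.10, 0.15]        # seconds per event
spacings = [0.10, 0.20, 0.30]         # gap between events, in seconds

conditions = list(itertools.product(numerosities, durations, spacings))
random.shuffle(conditions)

for n, d, s in conditions[:3]:
    seq = make_sequence(n, d, s)
    total = seq[-1][1]                # offset of the last event
    print(f"n={n:2d} dur={d:.2f}s gap={s:.2f}s -> total {total:.2f}s")
```

Crossing the three factors in this way is what allows numerical and temporal contributions to be separated at the analysis stage.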


Subjects
Judgment , Humans , Female , Male , Adult , Judgment/physiology , Young Adult , Cues , Visual Perception/physiology , Auditory Perception/physiology , Photic Stimulation , Time Factors
4.
Article in English | MEDLINE | ID: mdl-38724729

ABSTRACT

Auditory cues are integrated with vision and body-based self-motion cues for motion perception, balance, and gait, though limited research has evaluated their effectiveness for navigation. Here, we tested whether an auditory cue co-localized with a visual target could improve spatial updating in a virtual reality homing task. Participants navigated a triangular homing task with and without an easily localizable spatial audio signal co-located with the home location. The main outcome was unsigned angular error, defined as the absolute value of the difference between the participant's turning response and the correct response towards the home location. Angular error was significantly reduced in the presence of spatial sound compared to a head-fixed identical auditory signal. Participants' angular error was 22.79° in the presence of spatial audio and 30.09° in its absence. Those with the worst performance in the absence of spatial sound demonstrated the greatest improvement with the added sound cue. These results suggest that auditory cues may benefit navigation, particularly for those who demonstrated the highest level of spatial updating error in the absence of spatial sound.
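The outcome measure can be made concrete with a short example. The sketch below uses a hypothetical function and invented values; the wrap-around at 180° is an assumption, since the abstract only specifies an absolute difference between response and correct bearing:

```python
# Hypothetical sketch: unsigned angular error between a turning response
# and the correct bearing toward the home location, wrapped to [0, 180].
def unsigned_angular_error(response_deg, correct_deg):
    diff = abs(response_deg - correct_deg) % 360.0
    return min(diff, 360.0 - diff)

# Example: responses of 350 deg and 10 deg are 20 deg apart after wrapping.
print(unsigned_angular_error(350.0, 10.0))   # 20.0
print(unsigned_angular_error(200.0, 170.0))  # 30.0
```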

5.
Psychon Bull Rev ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38587756

ABSTRACT

One's experience of shifting attention from the color to the smell to the act of picking a flower seems like a unitary process applied, at will, to one modality after another. Yet, the unique and separable experiences of sight versus smell versus movement might suggest that the neural mechanisms of attention have been separately optimized to employ each modality to its greatest advantage. Moreover, addressing the issue of universality can be particularly difficult due to a paucity of existing cross-modal comparisons and a dearth of neurophysiological methods that can be applied equally well across disparate modalities. Here we outline some of the conceptual and methodological issues related to this problem and present an instructive example of an experimental approach that can be applied widely throughout the human brain to permit detailed, quantitative comparison of attentional mechanisms across modalities. The ultimate goal is to spur efforts across disciplines to provide a large and varied database of empirical observations that will either support the notion of a universal neural substrate for attention or more clearly identify the degree to which attentional mechanisms are specialized for each modality.

6.
Front Neuroanat ; 18: 1331230, 2024.
Article in English | MEDLINE | ID: mdl-38425805

ABSTRACT

Introduction: Auditory information is relayed from the cochlea via the eighth cranial nerve to the dorsal and ventral cochlear nuclei (DCN, VCN). The organization, neurochemistry and circuitry of the cochlear nuclei (CN) have been studied in many species. It is well-established that glycine is an inhibitory transmitter in the CN of rodents and cats, with glycinergic cells in the DCN and VCN. There are, however, major differences in the laminar and cellular organization of the DCN between humans (and other primates) and rodents and cats. We therefore asked whether there might also be differences in glycinergic neurotransmission in the CN. Methods: We studied brainstem sections from humans, chimpanzees, and cats. We used antibodies to glycine receptors (GLYR) to identify neurons receiving glycinergic input, and antibodies to the neuronal glycine transporter (GLYT2) to immunolabel glycinergic axons and terminals. We also examined archival sections immunostained for calretinin (CR) and nonphosphorylated neurofilament protein (NPNFP) to try to locate the octopus cell area (OCA), a region in the VCN that, in rodents, has minimal glycinergic input. Results: In humans and chimpanzees we found widespread immunolabel for glycine receptors in DCN and in the posterior (PVCN) and anterior (AVCN) divisions of the VCN. We found a parallel distribution of GLYT2-immunolabeled fibers and puncta. The data also suggest that, as in rodents, a region containing octopus cells in cats, humans and chimpanzees has little glycinergic input. Discussion: Our results show that glycine is a major transmitter in the human and chimpanzee CN, despite the species differences in DCN organization. The sources of the glycinergic input to the CN in humans and chimpanzees are not known.

7.
Perception ; 53(5-6): 317-334, 2024 May.
Article in English | MEDLINE | ID: mdl-38483923

ABSTRACT

Our percept of the world is not solely determined by what we perceive and process at a given moment in time, but also depends on what we processed recently. In the present study, we investigate whether the perceived emotion of a spoken sentence is contingent upon the emotion of an auditory stimulus on the preceding trial (i.e., serial dependence). To this end, participants were exposed to spoken sentences whose emotional affect was varied by changing the prosody along a range from 'happy' to 'fearful'. Participants were instructed to rate the emotion. We found a positive serial dependence for emotion processing whereby the perceived emotion was biased towards the emotion on the preceding trial. When we introduced 'no-go' trials (i.e., no rating was required), we found a negative serial dependence when participants knew in advance to withhold their response on a given trial (Experiment 2) and a positive serial dependence when participants were informed only after stimulus presentation that they should withhold their response (Experiment 3). We therefore established a robust serial dependence for emotion processing in speech and introduce a methodology to disentangle perceptual from post-perceptual processes. This approach can be applied to the vast majority of studies investigating sequential dependencies to separate positive from negative serial dependence.


Subjects
Emotions , Speech Perception , Humans , Female , Male , Adult , Young Adult , Speech Perception/physiology
8.
Patterns (N Y) ; 5(3): 100932, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38487806

ABSTRACT

Along with propagating the input toward making a prediction, Bayesian neural networks also propagate uncertainty. This has the potential to guide the training process by rejecting predictions of low confidence, and recent variational Bayesian methods can do so without Monte Carlo sampling of weights. Here, we apply sample-free methods to wildlife call detection on recordings made via passive acoustic monitoring equipment in the animals' natural habitats. We further propose uncertainty-aware label smoothing, where the smoothing probability is dependent on sample-free predictive uncertainty, in order to downweight data samples that should contribute less to the loss value. We introduce a bioacoustic dataset recorded in Malaysian Borneo, containing overlapping calls from 30 species. On that dataset, our proposed method achieves an absolute percentage improvement of around 1.5 points on area under the receiver operating characteristic (AU-ROC), 13 points in F1, and 19.5 points in expected calibration error (ECE) compared to the point-estimate network baseline averaged across all target classes.
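The idea of uncertainty-aware label smoothing can be sketched in a few lines. The following is a minimal illustration with a hypothetical scaling rule, not the authors' implementation: per-sample smoothing strength grows with sample-free predictive uncertainty, so uncertain samples contribute a softer target to the loss.

```python
# Hypothetical sketch of uncertainty-aware label smoothing:
# labels are smoothed more strongly for samples with higher predictive
# uncertainty, so uncertain samples contribute less sharply to the loss.
import torch
import torch.nn.functional as F

def uncertainty_aware_smoothing(targets, uncertainty, max_eps=0.2):
    """targets: 0/1 tensor; uncertainty: tensor in [0, 1] (e.g. normalized
    predictive variance from a sample-free variational Bayesian layer)."""
    eps = max_eps * uncertainty                   # per-sample smoothing amount
    return targets * (1.0 - eps) + 0.5 * eps      # pull targets toward 0.5

def smoothed_bce(logits, targets, uncertainty):
    soft_targets = uncertainty_aware_smoothing(targets, uncertainty)
    return F.binary_cross_entropy_with_logits(logits, soft_targets)

logits = torch.tensor([2.0, -1.0, 0.5])
targets = torch.tensor([1.0, 0.0, 1.0])
uncertainty = torch.tensor([0.05, 0.60, 0.30])    # hypothetical values
print(smoothed_bce(logits, targets, uncertainty))
```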

9.
Eur Thyroid J ; 13(2)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38417253

ABSTRACT

Thyroid hormones play an important role during the development and functioning of the different sensory systems. In order to exert their actions, thyroid hormones need to access their target cells through transmembrane transporter proteins, among which the monocarboxylate transporter 8 (MCT8) stands out for its pathophysiological relevance. Mutations in the gene encoding MCT8 lead to the Allan-Herndon-Dudley syndrome (AHDS), a rare disease characterised by severe neuromotor and cognitive impairments. The impact of MCT8 deficiency on the neurosensory capacity of AHDS patients is less clear, with only a few patients displaying visual and auditory impairments. In this review we aim to gather data from different animal models regarding thyroid hormone transport and action in the different neurosensory systems that could help identify potential neurosensory alterations in MCT8-deficient patients.


Subjects
Mental Retardation, X-Linked , Muscular Atrophy , Thyroid Hormones , Animals , Humans , Thyroid Hormones/metabolism , Mental Retardation, X-Linked/genetics , Biological Transport , Muscle Hypotonia/genetics , Monocarboxylic Acid Transporters/genetics
10.
Nutr Neurosci ; 27(11): 1226-1236, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38386286

ABSTRACT

Diet can influence cognitive functioning in older adults and is a modifiable risk factor for cognitive decline. However, it is unknown if an association exists between diet and lower-level processes in the brain underpinning cognition, such as multisensory integration. We investigated whether temporal multisensory integration is associated with daily intake of fruit and vegetables (FV) or products high in fat/sugar/salt (FSS) in a large sample (N = 2,693) of older adults (mean age = 64.06 years, SD = 7.60; 56% female) from The Irish Longitudinal Study on Ageing (TILDA). Older adults completed a Food Frequency Questionnaire from which the total number of daily servings of FV and FSS items respectively was calculated. Susceptibility to the Sound-Induced Flash Illusion (SIFI), measured at three audio-visual Stimulus Onset Asynchronies (SOAs: 70, 150 and 230 ms), indexed the temporal precision of older adults' audio-visual integration. Older adults who self-reported a higher daily consumption of FV were less susceptible to the SIFI at the longest versus shortest SOAs (i.e. increased temporal precision) compared to those reporting the lowest daily consumption (p = .013). In contrast, older adults reporting a higher daily consumption of FSS items were more susceptible to the SIFI at the longer versus shortest SOAs (i.e. reduced temporal precision) compared to those reporting the lowest daily consumption (p < .001). The temporal precision of multisensory integration is differentially associated with levels of daily consumption of FV versus products high in FSS, consistent with broader evidence that habitual diet is associated with brain health.


Subjects
Diet , Fruit , Humans , Female , Male , Aged , Middle Aged , Longitudinal Studies , Vegetables , Cognition , Ireland , Aging/physiology , Nutritional Status , Auditory Perception
11.
Neuropsychologia ; 196: 108822, 2024 04 15.
Article in English | MEDLINE | ID: mdl-38342179

ABSTRACT

Ambient sound can mask acoustic signals. The current study addressed how echolocation in people is affected by masking sound, and the role played by type of sound and spatial (i.e. binaural) similarity. We also investigated the role played by blindness and long-term experience with echolocation, by testing echolocation experts, as well as blind and sighted people new to echolocation. Results were obtained in two echolocation tasks where participants listened to binaural recordings of echolocation and masking sounds, and either localized echoes in azimuth or discriminated echo audibility. Echolocation and masking sounds could be either clicks or broadband noise. An adaptive staircase method was used to adjust signal-to-noise ratios (SNRs) based on participants' responses. When target and masker had the same binaural cues (i.e. both were monaural sounds), people performed better (i.e. had lower SNRs) when target and masker used different types of sound (e.g. clicks in noise-masker or noise in clicks-masker), as compared to when target and masker used the same type of sound (e.g. clicks in click-, or noise in noise-masker). A very different pattern of results was observed when masker and target differed in their binaural cues, in which case people always performed better when clicks were the masker, regardless of the type of emission used. Further, direct comparison between conditions with and without binaural difference revealed binaural release from masking only when clicks were used as emissions and masker, but not otherwise (i.e. when noise was used as masker or emission). This suggests that echolocation with clicks and echolocation with noise may differ in their sensitivity to binaural cues. We observed the same pattern of results for echolocation experts, and blind and sighted people new to echolocation, suggesting a limited role played by long-term experience or blindness. In addition to generating novel predictions for future work, the findings also inform instruction in echolocation for people who are blind or sighted.
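The abstract does not state which staircase rule was used, so the sketch below assumes a common 1-up/2-down procedure with hypothetical step sizes, purely to illustrate how SNR can be adjusted from trial-by-trial responses:

```python
# Hypothetical sketch of a 1-up/2-down adaptive staircase for SNR (dB).
# The specific rule and step sizes used in the study are not stated; this
# simply illustrates adjusting SNR from participants' responses.
class Staircase:
    def __init__(self, start_snr_db=10.0, step_db=2.0):
        self.snr = start_snr_db
        self.step = step_db
        self.correct_streak = 0

    def update(self, correct):
        if correct:
            self.correct_streak += 1
            if self.correct_streak == 2:     # two correct in a row -> harder
                self.snr -= self.step
                self.correct_streak = 0
        else:                                # one error -> easier
            self.snr += self.step
            self.correct_streak = 0
        return self.snr

stair = Staircase()
for resp in [True, True, True, False, True, True]:
    print(f"response={resp!s:5}  next SNR = {stair.update(resp):+.1f} dB")
```

A 1-up/2-down rule of this kind tracks the SNR yielding roughly 70.7% correct responses.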


Subjects
Sound Localization , Animals , Humans , Sound Localization/physiology , Blindness , Noise , Acoustics , Cues , Perceptual Masking , Acoustic Stimulation/methods
12.
Exp Brain Res ; 242(2): 451-462, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38165451

ABSTRACT

Bodily resizing illusions typically use visual and/or tactile inputs to produce a vivid experience of one's body changing size. Naturalistic auditory input (an input that reflects the natural sounds of a stimulus) has been used to increase illusory experience during the rubber hand illusion, whilst non-naturalistic auditory input can influence estimations of finger length. We aimed to use a non-naturalistic auditory input during a hand-based resizing illusion using augmented reality, to assess whether the addition of an auditory input would increase both subjective illusion strength and performance on task-based measures. Forty-four participants completed the following three conditions: no finger stretching, finger stretching without tactile feedback and finger stretching with tactile feedback. Half of the participants had an auditory input throughout all the conditions, whilst the other half did not. After each condition, the participants were given one of the following three performance tasks: stimulated (right) hand dot touch task, non-stimulated (left) hand dot touch task, and a ruler judgement task. Dot tasks involved participants reaching for the location of a virtual dot, whereas the ruler task concerned estimates of the length of the participant's own finger on a ruler whilst the hand was hidden from view. After all trials, the participants completed a questionnaire capturing subjective illusion strength. The addition of auditory input increased subjective illusion strength for manipulations without tactile feedback but not those with tactile feedback. No facilitatory effects of audio were found for any performance task. We conclude that adding auditory input to illusory finger stretching increased subjective illusory experience in the absence of tactile feedback but did not affect performance-based measures.


Subjects
Illusions , Touch Perception , Humans , Touch , Proprioception , Hand , Visual Perception , Body Image
13.
Atten Percept Psychophys ; 86(3): 750-767, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38212478

ABSTRACT

Switching auditory attention to one of two (or more) simultaneous voices incurs a substantial performance overhead. Whether and when this voice 'switch cost' is reduced when the listener has the opportunity to prepare in silence is not clear: findings on the effect of preparation on the switch cost range from (near) null to substantial. We sought to determine which factors are crucial for encouraging preparation and detecting its effect on the switch cost in a paradigm where participants categorized the number spoken by one of two simultaneous voices; the target voice, which changed unpredictably, was specified by a visual cue depicting the target's gender. First, we manipulated the probability of a voice switch. When 25% of trials were switches, increasing the preparation interval (50/800/1,400 ms) resulted in a substantial (~50%) reduction in switch cost. No reduction was observed when 75% of trials were switches. Second, we examined the relative prevalence of low-conflict, 'congruent' trials (where the numbers spoken by the two voices were mapped onto the same response) and high-conflict, 'incongruent' trials (where the voices afforded different responses). 'Conflict prevalence' had a strong effect on selectivity: the incongruent-congruent difference ('congruence effect') was reduced in the 66%-incongruent condition relative to the 66%-congruent condition. However, conflict prevalence did not discernibly interact with preparation and its effect on the switch cost. Thus, conditions where switches of target voice are relatively rare are especially conducive to preparation, possibly because attention is committed more strongly to (and/or disengaged less rapidly from) the perceptual features of the target voice.
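For readers unfamiliar with the two difference scores, a short worked example (with invented reaction times, not data from this study) shows how a switch cost and a congruence effect are typically computed:

```python
# Hypothetical illustration of the two difference scores discussed above.
# All reaction times (ms) are invented for illustration only.
mean_rt = {
    ("switch", "incongruent"): 980.0,
    ("switch", "congruent"):   900.0,
    ("repeat", "incongruent"): 860.0,
    ("repeat", "congruent"):   810.0,
}

switch_cost = (
    (mean_rt[("switch", "incongruent")] + mean_rt[("switch", "congruent")]) / 2
    - (mean_rt[("repeat", "incongruent")] + mean_rt[("repeat", "congruent")]) / 2
)
congruence_effect = (
    (mean_rt[("switch", "incongruent")] + mean_rt[("repeat", "incongruent")]) / 2
    - (mean_rt[("switch", "congruent")] + mean_rt[("repeat", "congruent")]) / 2
)
print(f"switch cost: {switch_cost:.0f} ms, congruence effect: {congruence_effect:.0f} ms")
```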


Subjects
Attention , Conflict, Psychological , Humans , Female , Male , Young Adult , Adult , Cues , Auditory Perception , Speech Perception , Reaction Time , Voice , Adolescent , Probability
14.
eNeuro ; 11(2)2024 Feb.
Article in English | MEDLINE | ID: mdl-38195533

ABSTRACT

Activity-dependent neuronal plasticity is crucial for animals to adapt to dynamic sensory environments. Traditionally, it has been investigated using deprivation approaches in animal models, primarily in sensory cortices. Nevertheless, emerging evidence emphasizes its significance in sensory organs and in subcortical regions where cranial nerves relay information to the brain. Additionally, critical questions have started to arise. Do different sensory modalities share common cellular mechanisms for deprivation-induced plasticity at these central entry points? Does the deprivation duration correlate with specific plasticity mechanisms? This study systematically reviews and meta-analyzes research papers that investigated visual, auditory, or olfactory deprivation in rodents of both sexes. It examines the consequences of sensory deprivation in homologous regions at the first central synapse following cranial nerve transmission (vision: lateral geniculate nucleus and superior colliculus; audition: ventral and dorsal cochlear nuclei; olfaction: olfactory bulb). The systematic search yielded 91 papers (39 vision, 22 audition, 30 olfaction), revealing substantial heterogeneity in publication trends, experimental methods, measures of plasticity, and reporting across the sensory modalities. Despite these differences, commonalities emerged when correlating plasticity mechanisms with the duration of sensory deprivation. Short-term deprivation (up to 1 d) reduced activity and increased disinhibition, medium-term deprivation (1 d to a week) involved glial changes and synaptic remodeling, and long-term deprivation (over a week) primarily led to structural alterations. These findings underscore the importance of standardizing methodologies and reporting practices. Additionally, they highlight the value of cross-modal synthesis for understanding how the nervous system, including peripheral, precortical, and cortical areas, responds to and compensates for the loss of sensory input.


Subjects
Neuronal Plasticity , Sensory Deprivation , Animals , Neuronal Plasticity/physiology , Sensory Deprivation/physiology , Rodents , Olfactory Pathways/physiology , Auditory Pathways/physiology , Visual Pathways/physiology
15.
Heliyon ; 10(1): e23142, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38163154

ABSTRACT

Among the 17 Sustainable Development Goals (SDGs) proposed within the 2030 Agenda and adopted by all the United Nations member states, the 13th SDG is a call for action to combat climate change. Moreover, SDGs 14 and 15 call for the protection and conservation of life below water and life on land, respectively. In this work, we provide a literature-based overview of application areas in which computer audition, a powerful technology combining audio signal processing and machine intelligence that has so far received little attention in this context, is employed to monitor our ecosystem, with the potential to identify ecologically critical processes or states. We distinguish between applications related to organisms, such as species richness analysis and plant health monitoring, and applications related to the environment, such as melting ice monitoring or wildfire detection. This work positions computer audition in relation to alternative approaches by discussing methodological strengths and limitations, as well as ethical aspects. We conclude with an urgent call to action to the research community for a greater involvement of audio intelligence methodology in future ecosystem monitoring approaches.

16.
Cell Rep ; 43(2): 113709, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38280196

ABSTRACT

During sensory-guided behavior, an animal's decision-making dynamics unfold through sequences of distinct performance states, even while stimulus-reward contingencies remain static. Little is known about the factors that underlie these changes in task performance. We hypothesize that these decision-making dynamics can be predicted by externally observable measures, such as uninstructed movements and changes in arousal. Here, using computational modeling of visual and auditory task performance data from mice, we uncovered lawful relationships between transitions in strategic task performance states and an animal's arousal and uninstructed movements. Using hidden Markov models applied to behavioral choices during sensory discrimination tasks, we find that animals fluctuate between minutes-long optimal, sub-optimal, and disengaged performance states. Optimal state epochs are predicted by intermediate levels, and reduced variability, of pupil diameter and movement. Our results demonstrate that externally observable uninstructed behaviors can predict optimal performance states and suggest that mice regulate their arousal during optimal performance.
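The state-based analysis can be illustrated with a generic sketch. The code below assumes hmmlearn (recent releases provide CategoricalHMM) and simulated choice data; it is not the authors' model, only an example of fitting a discrete-state HMM to trial-by-trial choices:

```python
# Hypothetical sketch: fitting a 3-state hidden Markov model to a sequence
# of trial-by-trial choices (0 = miss/no-go, 1 = correct, 2 = error), in the
# spirit of the state-based analysis described above.
import numpy as np
from hmmlearn.hmm import CategoricalHMM

rng = np.random.default_rng(0)
# Simulated session: an "optimal" block followed by a "disengaged" block.
choices = np.concatenate([
    rng.choice([0, 1, 2], size=300, p=[0.05, 0.85, 0.10]),   # optimal
    rng.choice([0, 1, 2], size=300, p=[0.50, 0.30, 0.20]),   # disengaged
]).reshape(-1, 1)

model = CategoricalHMM(n_components=3, n_iter=200, random_state=0)
model.fit(choices)
states = model.predict(choices)          # most likely state on each trial
print("inferred emission probabilities per state:")
print(np.round(model.emissionprob_, 2))
```

In an analysis like the one described above, the inferred per-trial states would then be related to external measures such as pupil diameter and movement.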


Subjects
Arousal , Movement , Mice , Animals , Arousal/physiology , Task Performance and Analysis , Computer Simulation
17.
Atten Percept Psychophys ; 86(3): 909-930, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38253985

ABSTRACT

Can synchrony in stimulation guide attention and aid perceptual performance? Here, in a series of three experiments, we tested the influence of visual and auditory synchrony on attentional selection during a novel human foraging task. Human foraging tasks are a recent extension of the classic visual search paradigm in which multiple targets must be located on a given trial, making it possible to capture a wide range of performance metrics. Experiment 1 was performed online, where the task was to forage for 10 (out of 20) vertical lines among 60 randomly oriented distractor lines that changed color between yellow and blue at random intervals. The targets either changed colors in visual synchrony or not. In another condition, a non-spatial sound additionally occurred synchronously with the color change of the targets. Experiment 2 was run in the laboratory (within-subjects) with the same design. When the targets changed color in visual synchrony, foraging times were significantly shorter than when they randomly changed colors, but there was no additional benefit for the sound synchrony, in contrast to predictions from the so-called "pip-and-pop" effect (Van der Burg et al., Journal of Experimental Psychology, 1053-1065, 2008). In Experiment 3, task difficulty was increased as participants foraged for as many 45° rotated lines as possible among lines of different orientations within 10 s, with the same synchrony conditions as in Experiments 1 and 2. Again, there was a large benefit of visual synchrony but no additional benefit for sound synchronization. Our results provide strong evidence that visual synchronization can guide attention during multiple target foraging. This likely reflects the local grouping of the synchronized targets. Importantly, there was no additional benefit for sound synchrony, even when the foraging task was quite difficult (Experiment 3).


Subjects
Attention , Color Perception , Pattern Recognition, Visual , Humans , Attention/physiology , Female , Color Perception/physiology , Male , Young Adult , Pattern Recognition, Visual/physiology , Adult , Auditory Perception/physiology , Reaction Time/physiology , Spatial Orientation/physiology , Adolescent , Orientation
18.
J Neurophysiol ; 131(1): 38-63, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37965933

ABSTRACT

Human speech and vocalizations in animals are rich in joint spectrotemporal (S-T) modulations, wherein acoustic changes in both frequency and time are functionally related. In principle, the primate auditory system could process these complex dynamic sounds based on either an inseparable representation of S-T features or, alternatively, a separable representation. The separability hypothesis implies an independent processing of spectral and temporal modulations. We collected comparative data on the S-T hearing sensitivity in humans and macaque monkeys to a wide range of broadband dynamic spectrotemporal ripple stimuli employing a yes-no signal-detection task. Ripples were systematically varied, as a function of density (spectral modulation frequency), velocity (temporal modulation frequency), or modulation depth, to cover a listener's full S-T modulation sensitivity, derived from a total of 87 psychometric ripple detection curves. Audiograms were measured to control for normal hearing. We determined hearing thresholds, reaction time distributions, and S-T modulation transfer functions (MTFs), both at the ripple detection thresholds and at suprathreshold modulation depths. Our psychophysically derived MTFs are consistent with the hypothesis that both monkeys and humans employ analogous perceptual strategies: S-T acoustic information is primarily processed separably. Singular value decomposition (SVD), however, revealed a small, but consistent, inseparable spectral-temporal interaction. Finally, SVD analysis of the known visual spatiotemporal contrast sensitivity function (CSF) highlights that human vision is space-time inseparable to a much larger extent than is the case for S-T sensitivity in hearing. Thus, the specificity with which the primate brain encodes natural sounds appears to be less strict than is required to adequately deal with natural images. NEW & NOTEWORTHY: We provide comparative data on primate audition of naturalistic sounds comprising hearing thresholds, reaction time distributions, and spectral-temporal modulation transfer functions. Our psychophysical experiments demonstrate that auditory information is primarily processed in a spectral-temporal-independent manner by both monkeys and humans. Singular value decomposition of known visual spatiotemporal contrast sensitivity, in comparison to our auditory spectral-temporal sensitivity, revealed a striking contrast in how the brain encodes natural sounds as opposed to natural images, as vision appears to be space-time inseparable.
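The SVD-based separability test can be made concrete with a small sketch. A perfectly separable MTF sampled on a density-by-velocity grid is a rank-1 outer product, so the share of energy captured by the first singular value indexes separability; the values below are hypothetical, not the study's measurements.

```python
# Hypothetical sketch of the SVD separability test for a spectrotemporal
# modulation transfer function (MTF) sampled on a density x velocity grid.
# A perfectly separable MTF is rank 1: MTF(d, v) = f(d) * g(v).
import numpy as np

rng = np.random.default_rng(1)
density = np.linspace(0.25, 4.0, 8)      # cyc/oct (spectral modulation)
velocity = np.linspace(2.0, 64.0, 10)    # Hz (temporal modulation)

f = np.exp(-density / 2.0)               # hypothetical spectral tuning
g = np.exp(-velocity / 30.0)             # hypothetical temporal tuning
mtf = np.outer(f, g) + 0.02 * rng.standard_normal((8, 10))  # near-separable

s = np.linalg.svd(mtf, compute_uv=False)
separability_index = s[0] ** 2 / np.sum(s ** 2)   # 1.0 = fully separable
print(f"separability index: {separability_index:.3f}")
```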


Subjects
Speech Perception , Time Perception , Animals , Humans , Haplorhini , Auditory Perception , Hearing , Acoustic Stimulation/methods
19.
CoDAS ; 36(1): e20220341, 2024. graf
Article in English | LILACS-Express | LILACS | ID: biblio-1514026

ABSTRACT

Purpose: Due to the Covid-19 pandemic, wearing masks in some public spaces became common. Because masks cover the mouth and nose, visual speech cues are greatly reduced, while the auditory signal is both distorted and attenuated. The present study aimed to analyze the multisensory effects of mask wearing on speech intelligibility and the differences in these effects between participants who spoke one, two, or three languages. Methods: Sentences from the SPIN test were presented to 40 participants, who were asked to report the perceived sentences. There were four conditions: auditory with mask; audiovisual with mask; auditory without mask; audiovisual without mask. Two sessions were conducted, one week apart, each with the same stimuli but a different signal-to-noise ratio. Results: Wearing a mask decreased speech intelligibility, both because of the reduced quality of the auditory stimulus and because of the loss of visual information. Signal-to-noise ratio strongly affected speech intelligibility, and higher ratios were needed in mask-wearing conditions to obtain any degree of intelligibility. Participants who spoke more than one language were less affected by mask wearing, as were younger listeners. Conclusion: Wearing a facial mask reduces speech intelligibility, due to both visual and auditory factors. Older people and people who speak only one language are affected the most.



20.
Ann N Y Acad Sci ; 1532(1): 18-36, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38152040

ABSTRACT

Eye movements have been extensively studied with respect to visual stimulation. However, we live in a multisensory world, and how the eyes are driven by other senses has been explored much less. Here, we review the evidence on how audition can trigger and drive different eye responses and which cortical and subcortical neural correlates are involved. We provide an overview on how different types of sounds, from simple tones and noise bursts to spatially localized sounds and complex linguistic stimuli, influence saccades, microsaccades, smooth pursuit, pupil dilation, and eye blinks. The reviewed evidence reveals how the auditory system interacts with the oculomotor system, both behaviorally and neurally, and how this differs from visually driven eye responses. Some evidence points to multisensory interaction, and potential multisensory integration, but the underlying computational and neural mechanisms are still unclear. While there are marked differences in how the eyes respond to auditory compared to visual stimuli, many aspects of auditory-evoked eye responses remain underexplored, and we summarize the key open questions for future research.


Subjects
Eye Movements , Saccades , Humans , Photic Stimulation/methods , Evoked Potentials, Auditory , Auditory Perception