Results 1 - 20 of 126
1.
Eur J Neurosci ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38711271

ABSTRACT

Regularities in our surroundings lead to predictions about upcoming events. Previous research has shown that omitted sounds in otherwise regular tone sequences elicit frequency-specific neural activity related to the upcoming but omitted tone. We tested whether this neural response depends on the unpredictability of the omission. To this end, we recorded magnetoencephalography (MEG) data while participants listened to ordered or random tone sequences in which omissions occurred either in an ordered or a random fashion. Multivariate pattern analysis shows that the frequency-specific neural pattern during omissions within ordered tone sequences occurs independently of the regularity of the omissions. These results suggest that auditory predictions based on sensory experience are not immediately updated by violations of those expectations.

2.
Nat Commun ; 15(1): 3692, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693186

ABSTRACT

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.


Subject(s)
Attention, Eye Movements, Magnetoencephalography, Speech Perception, Speech, Humans, Attention/physiology, Eye Movements/physiology, Male, Female, Adult, Young Adult, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Brain/physiology, Eye-Tracking Technology
3.
Psychophysiology ; 61(1): e14435, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37691098

ABSTRACT

Predictive processing theories, which model the brain as a "prediction machine", explain a wide range of cognitive functions, including learning, perception and action. Furthermore, it is increasingly accepted that aberrant prediction tendencies play a crucial role in psychiatric disorders. Given this explanatory value for clinical psychiatry, prediction tendencies are often implicitly conceptualized as individual traits or as tendencies that generalize across situations. As this has not yet explicitly been shown, in the current study, we quantify to what extent the individual tendency to anticipate sensory features of high probability generalizes across modalities. Using magnetoencephalography (MEG), we recorded brain activity while participants were presented with a sequence of four different (either visual or auditory) stimuli, which changed according to predefined transitional probabilities of two entropy levels: ordered vs. random. Our results show that, on a group-level, under conditions of low entropy, stimulus features of high probability are preactivated in the auditory but not in the visual modality. Crucially, the magnitude of the individual tendency to predict sensory events seems not to correlate between the two modalities. Furthermore, reliability statistics indicate poor internal consistency, suggesting that the measures from the different modalities are unlikely to reflect a single, common cognitive process. In sum, our findings suggest that quantification and interpretation of individual prediction tendencies cannot be generalized across modalities.
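The abstract's conclusion rests on reliability statistics indicating poor internal consistency. As a hedged illustration only (on synthetic scores, not the study's data, and without claiming this is the exact statistic the authors used), one common internal-consistency measure, Cronbach's alpha, can be computed like this:

```python
# Cronbach's alpha on an (n_subjects x n_items) score matrix.
# Synthetic data: one item set shares a latent trait (consistent),
# the other does not (inconsistent).
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
consistent = trait + 0.3 * rng.normal(size=(200, 4))    # items share a trait
inconsistent = rng.normal(size=(200, 4))                # items unrelated

print(cronbach_alpha(consistent))    # high, close to 1
print(cronbach_alpha(inconsistent))  # near 0
```

A measure that reflects a single trans-modal prediction tendency would behave like the first case; the abstract's finding corresponds to the second.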


Subject(s)
Auditory Perception, Visual Perception, Humans, Reproducibility of Results, Brain, Magnetoencephalography, Acoustic Stimulation
4.
J Cogn Neurosci ; 36(1): 128-142, 2024 01 01.
Article in English | MEDLINE | ID: mdl-37977156

ABSTRACT

Visual speech plays a powerful role in facilitating auditory speech processing and became a topic of public interest with the widespread use of face masks during the COVID-19 pandemic. In a previous magnetoencephalography study, we showed that occluding the mouth area significantly impairs neural speech tracking. To rule out the possibility that this deterioration is due to degraded sound quality, in the present follow-up study we presented participants with audiovisual (AV) and audio-only (A) speech. We further independently manipulated the trials by adding a face mask and a distractor speaker. Our results clearly show that face masks affect speech tracking only in AV conditions, not in A conditions. This indicates that face masks primarily impair speech processing by blocking visual speech, not through acoustic degradation. We further characterize how the spectrogram, lip movements, and lexical units are tracked at the sensor level. We find visual benefits for tracking the spectrogram, especially in the multi-speaker condition. While lip movements show an additional improvement and visual benefit over tracking of the spectrogram only in clear speech conditions, lexical units (phonemes and word onsets) show no visual enhancement at all. We hypothesize that in young, normal-hearing individuals, visual input is used less for specific feature extraction and acts more as a general resource for guiding attention.


Subject(s)
Speech Perception, Humans, Speech, Visual Perception, Follow-Up Studies, Pandemics, Acoustic Stimulation
5.
J Assoc Res Otolaryngol ; 24(6): 531-547, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38015287

ABSTRACT

Tinnitus has been widely investigated in order to draw conclusions about the underlying causes and altered neural activity in various brain regions. Existing studies have based their work on different tinnitus frameworks, ranging from a more local perspective on the auditory cortex to the inclusion of broader networks and various approaches towards tinnitus perception and distress. Magnetoencephalography (MEG) provides a powerful tool for efficiently investigating tinnitus and aberrant neural activity both spatially and temporally. However, results are inconclusive, and studies are rarely mapped to theoretical frameworks. The purpose of this review was, first, to introduce MEG to interested researchers and, second, to provide a synopsis of the current state of the field. We divided recent tinnitus research in MEG into study designs using resting-state measurements and studies implementing tone-stimulation paradigms. The studies were categorized based on their theoretical foundation, and we outlined shortcomings as well as inconsistencies within the different approaches. Finally, we provided future perspectives on how to benefit more efficiently from the enormous potential of MEG. We suggest novel approaches from a theoretical, conceptual, and methodological point of view to allow future research to obtain a more comprehensive understanding of tinnitus and its underlying processes.


Subject(s)
Auditory Cortex, Tinnitus, Humans, Magnetoencephalography/methods, Brain
6.
Ther Adv Neurol Disord ; 16: 17562864231190298, 2023.
Article in English | MEDLINE | ID: mdl-37655227

ABSTRACT

Background: It has been proposed that network topology is altered in brain tumor patients. However, there is no consensus on the pattern of these changes, and evidence on potential drivers is lacking. Objectives: We aimed to characterize neurooncological patients' network topology by analyzing glial brain tumors (GBTs) and brain metastases (BMs) with respect to the presence of structural epilepsy. Methods: Network topology derived from resting-state magnetoencephalography was compared between (1) patients and controls, (2) GBTs and BMs, and (3) patients with (PSEs) and without structural epilepsy (PNSEs). Eligible patients were investigated from February 2019 to March 2021. We calculated whole-brain (WB) connectivity in six frequency bands, network topological parameters (node degree, average shortest path length, local clustering coefficient), and performed a stratification where differences in power were identified. For data analysis, we used FieldTrip, the Brain Connectivity Toolbox for MATLAB, and in-house scripts. Results: We included 41 patients (21 men) with a mean age of 60.1 years (range 23-82): GBTs (n = 23), BMs (n = 14), and other histologies (n = 4). Statistical analysis revealed a significantly decreased WB node degree in patients versus controls in every frequency range at the corrected level (p1-30Hz = 0.002, pγ = 0.002, pβ = 0.002, pα = 0.002, pθ = 0.024, and pδ = 0.002). At the descriptive level, we found a significant increase in WB local clustering coefficient (p1-30Hz = 0.031, pδ = 0.013) in patients compared to controls, which did not survive false discovery rate correction. No differences between the networks of GBTs and BMs were identified. However, we found a significant increase in WB local clustering coefficient (pθ = 0.048) and a decrease in WB node degree (pα = 0.039) in PSEs versus PNSEs at the uncorrected level. Conclusion: Our data suggest that network topology is altered in brain tumor patients. Histology per se might not influence the brain's functional network, but tumor-related epilepsy seems to. Longitudinal studies and analysis of possible confounders are required to substantiate these findings.
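The three topological parameters named in the abstract have simple graph-theoretic definitions. As a hedged sketch (a toy binary graph in plain numpy, not the Brain Connectivity Toolbox pipeline the study actually used on MEG connectivity matrices):

```python
# Node degree, local clustering coefficient, and average shortest path
# length on a small undirected binary graph.
import numpy as np
from collections import deque

def node_degree(A):
    """Number of edges attached to each node (row sums of adjacency)."""
    return A.sum(axis=1)

def local_clustering(A):
    """Fraction of each node's neighbour pairs that are themselves connected."""
    n = A.shape[0]
    C = np.zeros(n)
    for i in range(n):
        nb = np.flatnonzero(A[i])
        k = len(nb)
        if k < 2:
            continue
        links = A[np.ix_(nb, nb)].sum() / 2   # edges among neighbours
        C[i] = 2.0 * links / (k * (k - 1))
    return C

def avg_shortest_path(A):
    """Mean BFS distance over all ordered pairs of connected nodes."""
    n = A.shape[0]
    dists = []
    for s in range(n):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dists += [d[t] for t in d if t != s]
    return float(np.mean(dists))

# Toy graph: a square 0-1-2-3 with one diagonal 0-2.
A = np.zeros((4, 4), int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1

print(node_degree(A))       # [3 2 3 2]
print(local_clustering(A))  # nodes 1 and 3 have both neighbours linked
print(avg_shortest_path(A))
```

Decreased whole-brain node degree, as reported for patients versus controls, corresponds to lower row sums of such (thresholded) connectivity matrices.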

7.
eNeuro ; 10(10)2023 10.
Article in English | MEDLINE | ID: mdl-37775312

ABSTRACT

The auditory system relies on both local and summary representations; acoustic local features exceeding system constraints are compacted into a set of summary statistics. Such compression is pivotal for sound-object recognition. Here, we assessed whether computations subtending local and statistical representations of sounds could be distinguished at the neural level. A computational auditory model was employed to extract auditory statistics from natural sound textures (i.e., fire, rain) and to generate synthetic exemplars where local and statistical properties were controlled. Twenty-four human participants were passively exposed to auditory streams while the electroencephalography (EEG) was recorded. Each stream could consist of short, medium, or long sounds to vary the amount of acoustic information. Short and long sounds were expected to engage local or summary statistics representations, respectively. Data revealed a clear dissociation. Compared with summary-based ones, auditory-evoked responses based on local information were selectively greater in magnitude in short sounds. Opposite patterns emerged for longer sounds. Neural oscillations revealed that local features and summary statistics rely on neural activity occurring at different temporal scales, faster (beta) or slower (theta-alpha). These dissociations emerged automatically without explicit engagement in a discrimination task. Overall, this study demonstrates that the auditory system developed distinct coding mechanisms to discriminate changes in the acoustic environment based on fine structure and summary representations.


Subject(s)
Auditory Cortex, Auditory Perception, Humans, Acoustic Stimulation, Auditory Perception/physiology, Sound, Auditory Evoked Potentials/physiology, Acoustics, Auditory Cortex/physiology
8.
BMC Med ; 21(1): 283, 2023 08 02.
Article in English | MEDLINE | ID: mdl-37533027

ABSTRACT

BACKGROUND: Tinnitus affects 10 to 15% of the population, but its underlying causes are not yet fully understood. Hearing loss has been established as the most important risk factor. Ageing is also known to be accompanied by increased prevalence; however, this risk is normally seen in the context of (age-related) hearing loss. Whether ageing per se is a risk factor has not yet been established. We specifically focused on the effect of ageing and the relationship between age, hearing loss, and tinnitus. METHODS: We used two samples for our analyses. The first, used for exploratory analyses, comprised 2249 Austrian individuals. The second included data from 16,008 people, drawn from a publicly available dataset (NHANES). We used logistic regressions to investigate the effect of age on tinnitus. RESULTS: In both samples, ageing per se was found to be a significant predictor of tinnitus. In the more decisive NHANES sample, there was an additional interaction effect between age and hearing loss. Odds ratio analyses show that, per unit increase of hearing loss, the odds of reporting tinnitus are higher in older people (1.06 vs 1.03). CONCLUSIONS: Expanding previous findings of hearing loss as the main risk factor for tinnitus, we established ageing as a risk factor in its own right. The underlying mechanisms remain unclear, and this work calls for urgent research efforts to link biological ageing processes, hearing loss, and tinnitus. We therefore suggest a novel working hypothesis that integrates these aspects from an ageing-brain viewpoint.
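The analysis logic described (logistic regression with an age x hearing-loss interaction, odds ratios read off as exp of the coefficients) can be sketched as follows. Everything below is synthetic and the coefficients are invented for illustration; the study used the Austrian and NHANES samples, and this is not its actual code:

```python
# Logistic regression of tinnitus status on age, hearing loss (hl),
# and their interaction, fit by plain Newton-Raphson in numpy.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
age = rng.uniform(20, 80, n)
hl = rng.uniform(0, 60, n)                    # hearing loss, e.g. in dB
# Invented ground truth with a positive age x hearing-loss interaction.
lin = -4.0 + 0.02 * age + 0.03 * hl + 0.001 * age * hl
tinnitus = (rng.uniform(size=n) < 1 / (1 + np.exp(-lin))).astype(float)

def fit_logistic(X, y, iters=50):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-np.clip(X @ beta, -30, 30)))
        W = p * (1 - p)
        beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    return beta

X = np.column_stack([np.ones(n), age, hl, age * hl])
beta = fit_logistic(X, tinnitus)

def or_per_unit_hl(a):
    """Odds ratio per unit of hearing loss at a given age (interaction model)."""
    return float(np.exp(beta[2] + beta[3] * a))

print(or_per_unit_hl(30.0), or_per_unit_hl(70.0))
```

With an interaction term, the per-unit odds ratio for hearing loss is age-dependent, which is exactly the pattern the abstract reports (1.06 in older vs 1.03 in younger respondents).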


Subject(s)
Hearing Loss, Tinnitus, Humans, Aged, Tinnitus/epidemiology, Tinnitus/etiology, Nutrition Surveys, Hearing Loss/epidemiology, Aging, Risk Factors
9.
Psychophysiology ; 60(11): e14362, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37350379

ABSTRACT

The most prominent acoustic features in speech are intensity modulations, represented by the amplitude envelope of speech. Synchronization of neural activity with these modulations supports speech comprehension. As the acoustic modulation of speech is related to the production of syllables, investigations of neural speech tracking commonly do not distinguish between lower-level acoustic (envelope modulation) and higher-level linguistic (syllable rate) information. Here we manipulated speech intelligibility using noise-vocoded speech and investigated the spectral dynamics of neural speech processing, across two studies at cortical and subcortical levels of the auditory hierarchy, using magnetoencephalography. Overall, cortical regions mostly track the syllable rate, whereas subcortical regions track the acoustic envelope. Furthermore, with less intelligible speech, tracking of the modulation rate becomes more dominant. Our study highlights the importance of distinguishing between envelope modulation and syllable rate and provides novel possibilities to better understand differences between auditory processing and speech/language processing disorders.
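The amplitude envelope central to this abstract can be illustrated with a minimal sketch: an amplitude-modulated carrier whose envelope is recovered via an FFT-based Hilbert transform. The 4 Hz modulation rate is chosen here only because it is syllable-like; this is an illustration, not the study's pipeline:

```python
# Recover the amplitude envelope of a modulated carrier via the
# analytic signal (Hilbert transform implemented with the FFT).
import numpy as np

def envelope(x):
    """Amplitude envelope = |analytic signal|, built by zeroing negative
    frequencies and doubling positive ones in the spectrum."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.abs(np.fft.ifft(X * h))

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
mod_rate = 4.0                                   # syllable-like rate (Hz)
env_true = 1.0 + 0.8 * np.sin(2 * np.pi * mod_rate * t)
x = env_true * np.sin(2 * np.pi * 100.0 * t)     # 100 Hz carrier

env = envelope(x)
# The envelope's modulation spectrum peaks at the modulation rate.
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
print(freqs[spec.argmax()])   # 4.0 Hz
```

Neural speech tracking analyses correlate band-limited brain signals with exactly this kind of envelope; the abstract's point is that the envelope modulation rate and the linguistic syllable rate should not be conflated.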


Subject(s)
Speech Perception, Speech, Humans, Magnetoencephalography, Noise, Cognition, Acoustic Stimulation, Speech Intelligibility
10.
Psychophysiology ; 60(10): e14353, 2023 10.
Article in English | MEDLINE | ID: mdl-37246813

ABSTRACT

Imagine you are focusing on the traffic on a busy street to ride your bike safely when suddenly you hear the siren of an ambulance. This unexpected sound involuntarily captures your attention and interferes with ongoing performance. We tested whether this type of distraction involves a spatial shift of attention. We measured behavioral data and magnetoencephalographic alpha power during a cross-modal paradigm that combined an exogenous cueing task and a distraction task. In each trial, a task-irrelevant sound preceded a visual target (left or right). The sound was usually the same animal sound (i.e., standard sound). Rarely, it was replaced by an unexpected environmental sound (i.e., deviant sound). Fifty percent of the deviants occurred on the same side as the target, and 50% occurred on the opposite side. Participants responded to the location of the target. As expected, responses were slower to targets that followed a deviant compared to a standard. Crucially, this distraction effect was mitigated by the spatial relationship between the targets and the deviants: responses were faster when targets followed deviants on the same versus different side, indexing a spatial shift of attention. This was further corroborated by a posterior alpha power modulation that was higher in the hemisphere ipsilateral (vs. contralateral) to the location of the attention-capturing deviant. We suggest that this alpha power lateralization reflects a spatial attention bias. Overall, our data support the contention that spatial shifts of attention contribute to deviant distraction.


Subject(s)
Auditory Perception, Sound, Humans, Reaction Time/physiology, Acoustic Stimulation, Auditory Perception/physiology, Magnetoencephalography
11.
Cereb Cortex ; 33(11): 6608-6619, 2023 05 24.
Article in English | MEDLINE | ID: mdl-36617790

ABSTRACT

Listening can be conceptualized as a process of active inference, in which the brain forms internal models to integrate auditory information in a complex interaction of bottom-up and top-down processes. We propose that individuals vary in their "prediction tendency" and that this variation contributes to experiential differences in everyday listening situations and shapes the cortical processing of acoustic input such as speech. Here, we presented tone sequences of varying entropy level to independently quantify auditory prediction tendency (the tendency to anticipate low-level acoustic features) for each individual. This measure was then used to predict cortical speech tracking in a multi-speaker listening task, where participants listened to audiobooks narrated by a target speaker, either in isolation or interfered with by one or two distractors. Furthermore, semantic violations were introduced into the story to also examine effects of word surprisal during speech processing. Our results show that cortical speech tracking is related to prediction tendency. In addition, we find interactions between prediction tendency and background noise as well as word surprisal in disparate brain regions. Our findings suggest that individual prediction tendencies generalize across different listening situations and may serve as a valuable element to explain interindividual differences in natural listening situations.


Subject(s)
Auditory Cortex, Speech Perception, Humans, Speech, Acoustic Stimulation/methods, Noise
12.
J Cogn Neurosci ; 35(4): 588-602, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36626349

ABSTRACT

It is widely established that sensory perception is a rhythmic process as opposed to a continuous one. In the context of auditory perception, this effect has so far been established only at the cortical and behavioral level. Yet, the unique architecture of the auditory sensory system allows its primary sensory cortex to modulate the processes of its sensory receptors at the cochlear level. Previously, we demonstrated the existence of a genuine cochlear theta (∼6-Hz) rhythm that is modulated in amplitude by intermodal selective attention. As that study's paradigm was not suited to assess attentional effects on the oscillatory phase of cochlear activity, the question of whether attention can also affect the temporal organization of the cochlea's ongoing activity remained open. The present study uses an interaural attention paradigm to investigate ongoing otoacoustic activity during a stimulus-free cue-target interval and an omission period of the auditory target in humans. We replicated the existence of the cochlear theta rhythm. Importantly, we found significant phase opposition between the two ears and attention conditions in anticipatory as well as cochlear oscillatory activity during target presentation. The amplitude, however, was unaffected by interaural attention. These results are the first to demonstrate that intermodal and interaural attention deploy different aspects of excitation and inhibition at the first level of auditory processing: whereas intermodal attention modulates the level of cochlear activity, interaural attention modulates its timing.


Subject(s)
Auditory Perception, Cochlea, Humans, Psychological Inhibition, Theta Rhythm
13.
Neuroimage ; 268: 119894, 2023 03.
Article in English | MEDLINE | ID: mdl-36693596

ABSTRACT

Listening to speech with poor signal quality is challenging. Neural speech tracking of degraded speech has been used to advance the understanding of how brain processes and speech intelligibility are interrelated. However, the temporal dynamics of neural speech tracking and their relation to speech intelligibility are not clear. In the present MEG study, we exploited temporal response functions (TRFs), which have been used to describe the time course of speech tracking, on a gradient from intelligible to unintelligible degraded speech. In addition, we used interrelated facets of neural speech tracking (e.g., speech envelope reconstruction, speech-brain coherence, and components of broadband coherence spectra) to corroborate our TRF findings. Our TRF analysis yielded marked temporally differential effects of vocoding: ∼50-110 ms (M50TRF), ∼175-230 ms (M200TRF), and ∼315-380 ms (M350TRF). Reduction of intelligibility went along with large increases in the early peak responses M50TRF, but strongly reduced responses in M200TRF. In the late responses M350TRF, the maximum response occurred for degraded speech that was still comprehensible and then declined with reduced intelligibility. Furthermore, we related the TRF components to our other neural "tracking" measures and found that M50TRF and M200TRF play a differential role in the shifting center frequency of the broadband coherence spectra. Overall, our study highlights the importance of time-resolved computation of neural speech tracking and decomposition of coherence spectra and provides a better understanding of degraded speech processing.
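At its core, a TRF is a set of regression weights from time-lagged copies of a stimulus feature (e.g., the speech envelope) to a neural channel, typically estimated with ridge regression. A hedged sketch on synthetic data follows; the two-peak ground-truth kernel, sampling rate, and regularization are invented for illustration and do not reproduce the study's pipeline:

```python
# Estimate a temporal response function (TRF) by ridge regression
# from a lagged stimulus design matrix to a simulated response.
import numpy as np

def lagged_matrix(stim, n_lags):
    """Columns are the stimulus delayed by 0..n_lags-1 samples."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:n - lag]
    return X

def fit_trf(stim, resp, n_lags, lam=1.0):
    """Ridge solution beta = (X'X + lam*I)^(-1) X'y, one weight per lag."""
    X = lagged_matrix(stim, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)

rng = np.random.default_rng(2)
fs = 100                       # Hz, so one lag = 10 ms
stim = rng.normal(size=3000)   # stand-in for a speech envelope
true_trf = np.zeros(30)
true_trf[5] = 1.0              # "M50"-like peak at 50 ms
true_trf[20] = -0.6            # "M200"-like deflection at 200 ms
resp = np.convolve(stim, true_trf)[:len(stim)] + 0.5 * rng.normal(size=3000)

est = fit_trf(stim, resp, n_lags=30)
print(est[5], est[20])         # should approximate 1.0 and -0.6
```

The abstract's M50TRF/M200TRF/M350TRF components correspond to peaks of such a lag-resolved weight vector in different latency windows.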


Subject(s)
Speech Intelligibility, Speech Perception, Humans, Speech Intelligibility/physiology, Speech Perception/physiology, Brain/physiology, Auditory Perception, Cognition, Acoustic Stimulation
14.
Cereb Cortex ; 33(7): 3478-3489, 2023 03 21.
Article in English | MEDLINE | ID: mdl-35972419

ABSTRACT

Spatially selective modulation of alpha power (8-14 Hz) is a robust finding in electrophysiological studies of visual attention and has recently been generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible when derived from signals capturing saccadic activity. A control experiment on auditory spatial attention confirmed that, in the absence of any visual/auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention.


Subject(s)
Auditory Perception, Sound Localization, Humans, Auditory Perception/physiology, Alpha Rhythm/physiology, Brain/physiology, Sound Localization/physiology, Sound
15.
Curr Biol ; 32(24): R1347-R1349, 2022 12 19.
Article in English | MEDLINE | ID: mdl-36538886

ABSTRACT

Categories help us make sense of sensory input. A new study has directly compared category-related brain signals between human infants and adults, discovering delayed and temporally highly compressed processing in infants.


Subject(s)
Brain, Eye, Adult, Humans, Infant
16.
PLoS One ; 17(9): e0275585, 2022.
Article in English | MEDLINE | ID: mdl-36178907

ABSTRACT

Visual input is crucial for understanding speech under noisy conditions, but there are hardly any tools to assess the individual ability to lipread. With this study, we aimed to (1) investigate how linguistic characteristics of the language on the one hand, and hearing impairment on the other, impact lipreading abilities, and (2) provide a tool to assess lipreading abilities in German speakers. A total of 170 participants (22 prelingually deaf) completed the online assessment, which consisted of a subjective hearing impairment scale and silent videos in which different item categories (numbers, words, and sentences) were spoken. The task for our participants was to recognize the spoken stimuli by visual inspection alone. We used different versions of one test and investigated the impact of item category, word frequency in the spoken language, articulation, sentence frequency in the spoken language, sentence length, and differences between speakers on the recognition score. We found an effect of item category, articulation, sentence frequency, and sentence length on the recognition score. With respect to hearing impairment, we found that higher subjective hearing impairment is associated with a higher test score. We did not find any evidence that prelingually deaf individuals show enhanced lipreading skills over people with postlingually acquired hearing impairment. However, we see an interaction with education only in the prelingually deaf, not in the population with postlingually acquired hearing loss. This points to the fact that different factors contribute to enhanced lipreading abilities depending on the onset of hearing impairment (prelingual vs. postlingual). Overall, lipreading skills vary strongly in the general population, independent of hearing impairment. Based on our findings, we constructed a new and efficient lipreading assessment tool (SaLT) that can be used to test behavioral lipreading abilities in the German-speaking population.


Subject(s)
Deafness, Hearing Loss, Speech Perception, Humans, Language, Linguistics, Lipreading, Speech, Visual Perception
17.
Psychophysiology ; 59(5): e14052, 2022 05.
Article in English | MEDLINE | ID: mdl-35398913

ABSTRACT

Since its beginnings in the early 20th century, the psychophysiological study of human brain function has included research into the spectral properties of electrical and magnetic brain signals. Now, dramatic advances in digital signal processing, biophysics, and computer science have enabled increasingly sophisticated methodology for neural time series analysis. Innovations in hardware and recording techniques have further expanded the range of tools available to researchers interested in measuring, quantifying, modeling, and altering the spectral properties of neural time series. These tools are increasingly used in the field, by a growing number of researchers who vary in their training, background, and research interests. Implementation and reporting standards also vary greatly in the published literature, causing challenges for authors, readers, reviewers, and editors alike. The present report addresses this issue by providing recommendations for the use of these methods, with a focus on foundational aspects of frequency domain and time-frequency analyses. It also provides publication guidelines, which aim to (1) foster replication and scientific rigor, (2) assist new researchers who wish to enter the field of brain oscillations, and (3) facilitate communication among authors, reviewers, and editors.
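The frequency-domain methods these guidelines address can be illustrated with one foundational estimator. Below is a hedged, numpy-only sketch of a Welch-style power spectral density (averaged Hann-windowed periodograms of overlapping segments), applied to a 10 Hz "alpha" oscillation in noise; in practice one would use a vetted routine such as scipy.signal.welch, and the parameters here are arbitrary:

```python
# Welch-style PSD: average the periodograms of 50%-overlapping,
# Hann-windowed segments.
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Return (freqs, psd) from averaged windowed periodograms."""
    win = np.hanning(nperseg)
    step = nperseg // 2                      # 50% overlap
    scale = fs * (win ** 2).sum()            # density normalization
    psds = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg] * win
        psds.append(np.abs(np.fft.rfft(seg)) ** 2 / scale)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, np.mean(psds, axis=0)

fs = 256.0
t = np.arange(0, 8.0, 1 / fs)
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 10.0 * t) + rng.normal(size=len(t))  # alpha + noise

freqs, psd = welch_psd(x, fs)
print(freqs[psd.argmax()])   # peak near 10 Hz
```

Reporting exactly such parameters (window type, segment length, overlap, normalization) is the kind of detail the guidelines ask authors to standardize.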


Subject(s)
Brain, Psychophysiology, Humans, Research Design, Time Factors
19.
Neuroimage ; 252: 119044, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35240298

ABSTRACT

Multisensory integration enables stimulus representation even when the sensory input in a single modality is weak. In the context of speech, when confronted with a degraded acoustic signal, congruent visual input promotes comprehension. When this input is masked, speech comprehension consequently becomes more difficult. However, it remains unclear which levels of speech processing are affected by occluding the mouth area, and under which circumstances. To answer this question, we conducted an audiovisual (AV) multi-speaker experiment using naturalistic speech. In half of the trials, the target speaker wore a (surgical) face mask, while we measured the brain activity of normal-hearing participants via magnetoencephalography (MEG). In half of the trials, we additionally added a distractor speaker in order to create an ecologically difficult listening situation. A decoding model was trained on the clear AV speech and used to reconstruct crucial speech features in each condition. We found significant main effects of face masks on the reconstruction of acoustic features, such as the speech envelope and spectral speech features (i.e., pitch and formant frequencies), while reconstruction of higher-level features of speech segmentation (phoneme and word onsets) was especially impaired by masks in difficult listening situations. As we used surgical face masks, which have only mild effects on speech acoustics, we interpret our findings as the result of the missing visual input. Our findings extend previous behavioural results by demonstrating the complex, contextual effects of occluding relevant visual information on speech processing.


Subject(s)
Speech Perception, Speech, Acoustic Stimulation, Acoustics, Humans, Mouth, Visual Perception