Results 1 - 20 of 29
1.
BMC Ophthalmol ; 23(1): 220, 2023 May 17.
Article in English | MEDLINE | ID: mdl-37198558

ABSTRACT

BACKGROUND: Amblyopia is the most common developmental vision disorder in children. The initial treatment consists of refractive correction. When this is insufficient, occlusion therapy may further improve visual acuity. However, the challenges and compliance issues associated with occlusion therapy may result in treatment failure and residual amblyopia. Virtual reality (VR) games developed to improve visual function have shown positive preliminary results. The aim of this study is to determine the efficacy of these games in improving vision, attention, and motor skills in patients with residual amblyopia, and to identify associated brain changes. We hypothesize that VR-based training with the proposed ingredients (3D cues and rich feedback), combined with increasing difficulty levels and a variety of games in a home-based environment, is crucial for the efficacy of vision recovery and may be particularly effective in children. METHODS: The AMBER study is a randomized, cross-over, controlled trial designed to assess the effect of binocular stimulation (VR-based stereoscopic serious games) on vision, selective attention, and motor control skills in individuals with residual amblyopia (n = 30, 6-35 years of age), compared with refractive correction. The amblyopic cohort will additionally be compared with a control group of age-matched healthy individuals (n = 30) to isolate the specific benefit of VR-based serious games. All participants will play serious games 30 min per day, 5 days per week, for 8 weeks. The games are delivered with the Vivid Vision Home software. The amblyopic cohort will receive both treatments in a randomized order stratified by type of amblyopia, while the control group will receive only the VR-based stereoscopic serious games. The primary outcome is visual acuity in the amblyopic eye. Secondary outcomes include stereoacuity, functional vision, cortical visual responses, selective attention, and motor control. The outcomes will be measured before and after each treatment, with an 8-week follow-up. DISCUSSION: The VR-based games used in this study have been designed to deliver binocular visual stimulation tailored to the individual visual needs of the patient, which will potentially result in improved basic and functional vision skills as well as visual attention and motor control skills. TRIAL REGISTRATION: This protocol is registered on ClinicalTrials.gov (identifier: NCT05114252) and in the Swiss National Clinical Trials Portal (identifier: SNCTP000005024).
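The protocol's "randomized order according to the type of amblyopia" describes a stratified crossover allocation. Below is a minimal, hypothetical sketch of how such an allocation could be implemented; participant IDs, stratum labels, and sequence names are illustrative assumptions, not the trial's actual code.

```python
# Hypothetical sketch of stratified treatment-order allocation for a two-period
# crossover trial: within each amblyopia-type stratum, shuffled participants are
# assigned alternating sequences so the two orders stay balanced.
# Not the AMBER trial's actual allocation procedure.
import random

def randomize_orders(participants, seed=42):
    """participants: list of (participant_id, amblyopia_type) tuples."""
    rng = random.Random(seed)
    strata, orders = {}, {}
    for pid, subtype in participants:
        strata.setdefault(subtype, []).append(pid)
    for subtype, pids in strata.items():
        rng.shuffle(pids)  # random order within the stratum
        # Alternate the two sequences to balance them within each stratum.
        sequences = ["VR-first", "refraction-first"] * (len(pids) // 2 + 1)
        for pid, seq in zip(pids, sequences):
            orders[pid] = seq
    return orders

cohort = [("P01", "anisometropic"), ("P02", "strabismic"), ("P03", "anisometropic")]
print(randomize_orders(cohort))
```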


Subject(s)
Amblyopia , Video Games , Child , Humans , Amblyopia/therapy , Vision, Binocular/physiology , Visual Acuity , Treatment Outcome , Randomized Controlled Trials as Topic
2.
Neuroimage ; 244: 118556, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34492292

ABSTRACT

Research on attentional control has largely focused on single senses and on the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both of which likely influence attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and two contextual factors: the stimuli's semantic relationship and their temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) the stimuli's goal-relevance via the distractor's colour (matching vs. mismatching the target), 2) the stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent), and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing effects served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and within a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, both in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven modulations of brain responses occurred long before the N2pc time window, with topographic (network-based) modulations at ∼30 ms followed by strength-based modulations at ∼100 ms post-distractor onset. Our results reveal that both stimulus meaning and predictability modulate attentional selection, and that they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations attention is controlled by an interplay between one's goals and the stimuli's perceptual salience, meaning, and predictability. Our study calls for a revision of attentional control theories to account for the role of contextual and multisensory control.
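For readers unfamiliar with the two markers used here, the following is a minimal sketch, on synthetic data, of how a reaction-time spatial cueing effect and an N2pc amplitude are typically computed; the electrode pairing, time window, and numbers are illustrative assumptions rather than this study's exact pipeline.

```python
# Sketch of the two attention markers: the behavioural spatial cueing effect
# (RT at uncued minus cued locations) and the N2pc, the contralateral-minus-
# ipsilateral ERP over lateral posterior sites (commonly PO7/PO8) at ~170-270 ms.
import numpy as np

rng = np.random.default_rng(0)
# Behaviour: reaction times (ms) for targets at cued vs. uncued locations.
rt_cued = rng.normal(480, 40, 200)
rt_uncued = rng.normal(510, 40, 200)
cueing_effect = rt_uncued.mean() - rt_cued.mean()  # positive = capture by the cue

# EEG: trials x time, one sample per ms from -100 to 499 ms post-distractor.
times = np.arange(-100, 500)
contra = rng.normal(0, 1, (200, times.size))  # electrode contralateral to distractor
ipsi = rng.normal(0, 1, (200, times.size))    # electrode ipsilateral to distractor
win = (times >= 170) & (times <= 270)
n2pc_wave = (contra - ipsi).mean(axis=0)      # contralateral-minus-ipsilateral wave
n2pc_amplitude = n2pc_wave[win].mean()        # mean amplitude in the N2pc window

print(f"cueing effect: {cueing_effect:.1f} ms, N2pc: {n2pc_amplitude:.2f} uV")
```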


Subject(s)
Attention/physiology , Visual Perception/physiology , Adult , Cues , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Motivation , Reaction Time , Time Perception , Young Adult
3.
Dev Cogn Neurosci ; 48: 100930, 2021 04.
Article in English | MEDLINE | ID: mdl-33561691

ABSTRACT

Outside the laboratory, people need to pay attention to relevant objects that are typically multisensory, but it remains poorly understood how the underlying neurocognitive mechanisms develop. We investigated when adult-like mechanisms controlling attentional selection of visual and multisensory objects emerge across childhood. Five-, 7-, and 9-year-olds were compared with adults in their performance on a computer-game-like multisensory spatial cueing task, while 129-channel EEG was recorded simultaneously. Markers of attentional control were behavioural spatial cueing effects and the N2pc ERP component (analysed traditionally and within a multivariate electrical neuroimaging framework). In behaviour, adult-like visual attentional control was present from age 7 onwards, whereas multisensory control was absent in all child groups. In EEG, multivariate analyses of activity over the N2pc time window revealed stable brain activity patterns in children. Adult-like EEG patterns of visual attentional control were present from age 7 onwards, while patterns of multisensory control were found in 9-year-olds (although behavioural measures showed no effects). By combining rigorous yet naturalistic paradigms with multivariate signal analyses, we demonstrated that visual attentional control seems to reach an adult-like state at ∼7 years, before adult-like multisensory control, which emerges at ∼9 years. These results enrich our understanding of how attention in naturalistic settings develops.


Subject(s)
Neuroimaging , Adult , Auditory Perception , Child , Child, Preschool , Cognition , Cues , Electroencephalography , Female , Humans , Male , Photic Stimulation , Visual Perception , Young Adult
4.
Neuropsychologia ; 144: 107498, 2020 07.
Article in English | MEDLINE | ID: mdl-32442445

ABSTRACT

Contemporary schemas of brain organization now include multisensory processes both in low-level cortices and at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices, a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some have claimed that this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to the spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended to and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity, or syllable. Sound acoustics were held constant, and sound location was always equiprobable (50% left, 50% right). The only manipulation was the sound dimension to which participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task relevance and spatial (un)predictability in determining the presence of this cross-modal activation of visual cortices.
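A hedged sketch of how the presence of an ACOP could be tested per attention condition follows; the window, trial counts, and the simulated location-only effect are assumptions for illustration, not this study's data or pipeline.

```python
# Sketch: test, per attended dimension, whether the contralateral-minus-
# ipsilateral occipital voltage (an ACOP-like index) differs from zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for cond in ["location", "pitch", "speaker", "syllable"]:
    # Per-trial mean contra-minus-ipsi occipital voltage in an assumed ACOP window.
    true_effect = 0.8 if cond == "location" else 0.0  # simulate a location-only ACOP
    diffs = rng.normal(true_effect, 1.0, 120)
    t, p = stats.ttest_1samp(diffs, 0.0)
    print(f"attend-{cond}: mean={diffs.mean():.2f} uV, t={t:.2f}, p={p:.3f}")
```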


Subject(s)
Attention/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Sound , Visual Cortex/physiology , Acoustic Stimulation , Acoustics , Adult , Attentional Bias , Electroencephalography , Female , Humans , Male , Middle Aged , Young Adult
5.
Sci Rep ; 10(1): 1394, 2020 02 04.
Article in English | MEDLINE | ID: mdl-32019951

ABSTRACT

The capacity to integrate information from different senses is central to coherent perception across the lifespan, from infancy onwards. Later in life, multisensory processes are related to cognitive functions such as speech and social communication. During learning, multisensory processes can in fact enhance subsequent recognition memory for unisensory objects. These benefits can even be predicted: adults' recognition memory performance is shaped by earlier responses in the same task to multisensory, but not unisensory, information. Everyday environments where learning occurs, such as classrooms, are inherently multisensory in nature. Multisensory processes may therefore scaffold healthy cognitive development. Here, we provide the first evidence of a predictive relationship between multisensory benefits in simple detection and higher-level cognition that is present already in schoolchildren. Multiple regression analyses indicated that the extent to which a child (N = 68; aged 4.5-15 years) exhibited multisensory benefits on a simple detection task not only predicted benefits on a continuous recognition task involving naturalistic objects (p = 0.009), even when controlling for age, but also predicted working memory scores (p = 0.023) and fluid intelligence scores (p = 0.033) as measured using age-standardised test batteries. By contrast, gains in unisensory detection did not significantly predict any of the above global cognition measures. Our findings show that low-level multisensory processes predict higher-order memory and cognition already during childhood, even while still subject to ongoing maturation. These results call for a revision of traditional models of cognitive development (and likely also education) to account for the role of multisensory processing, while also opening exciting opportunities to facilitate early learning through multisensory programs. More generally, these data suggest that a simple detection task could provide direct insights into the integrity of global cognition in schoolchildren and could be further developed as a readily implemented and cost-effective screening tool for neurodevelopmental disorders, particularly in cases where standard neuropsychological tests are infeasible or unavailable.
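The regression logic reported here can be sketched in a few lines; the simulated data, variable names, and effect sizes below are assumptions, not the study's dataset.

```python
# Sketch: predict a global cognition score from a child's relative multisensory
# benefit while controlling for age, as in the multiple regressions reported.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 68
age = rng.uniform(4.5, 15, n)                      # years
ms_benefit = rng.normal(0.10, 0.05, n)             # relative multisensory RT gain
wm_score = 90 + 1.5 * age + 60 * ms_benefit + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([age, ms_benefit]))
fit = sm.OLS(wm_score, X).fit()
print(fit.summary(xname=["const", "age", "ms_benefit"]))
```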


Subject(s)
Cognition , Perception , Psychology, Child/statistics & numerical data , Adolescent , Child , Child Development , Child, Preschool , Female , Humans , Intelligence , Male , Memory, Short-Term , Recognition, Psychology , Regression Analysis
6.
Article in English | MEDLINE | ID: mdl-30930756

ABSTRACT

Sensory substitution is an effective means of rehabilitating many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof of concept for a new type of technology (hereafter, haptic tablet) that renders haptic feedback by using ultrasonic vibrations to modulate the friction of a flat screen, creating the sensation of textures of varying shapes when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information, and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated the letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Such haptic tablets can thus offer a new avenue to mitigate visual impairments and to train skills that depend on mental object-based representations and their spatial manipulation.
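The signature mental-rotation analysis, response time increasing with angular deviation from the upright view, can be sketched as follows on simulated data; all numbers are illustrative assumptions.

```python
# Sketch: regress RT on angular deviation from 0 degrees, folding rotations onto
# the shorter arc (so 270 maps to 90). A positive slope indicates mental rotation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
angles = np.repeat([0, 90, 180, 270], 40)              # presented rotations (deg)
deviation = np.minimum(angles, 360 - angles)           # 0, 90, 180, 90
rt = 900 + 4.0 * deviation + rng.normal(0, 150, angles.size)  # simulated RTs (ms)

slope, intercept, r, p, se = stats.linregress(deviation, rt)
print(f"rotation slope: {slope:.2f} ms/deg (r={r:.2f}, p={p:.3g})")
```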

7.
BMC Pediatr ; 19(1): 81, 2019 03 19.
Article in English | MEDLINE | ID: mdl-30890132

ABSTRACT

BACKGROUND: Premature infants are at risk for abnormal sensory development due to brain immaturity at birth and atypical early sensory experiences in the Neonatal Intensive Care Unit (NICU). This altered sensory development can have downstream effects on other, more complex developmental processes. There are currently no interventions that address rehabilitation of sensory function in the neonatal period. METHODS: This study is a randomized controlled trial in which preterm infants are enrolled at 32-36 weeks postmenstrual age and assigned to either standard care or standard care plus a multisensory intervention, in order to study the effect of the multisensory intervention compared with standard care alone. The study population will consist of 100 preterm infants in each group (total n = 200). Both groups will receive standard care, consisting of non-contingent playback of a recording of the parent's voice and skin-to-skin care provided by the parent. The multisensory group will also receive contemporaneous holding and light-pressure containment for tactile stimulation, playing of the mother's voice contingent on the infant's pacifier sucking for auditory stimulation, exposure to a parent-scented cloth for olfactory stimulation, and exposure to carefully regulated therapist breathing that is mindful of and responsive to the child's condition for vestibular stimulation. The primary outcome is a brain-based measure of multisensory processing, measured using time-locked EEG. Secondary outcomes include sensory adaptation, tactile processing, speech sound differentiation, and motor and language function, measured at one and two years corrected gestational age. DISCUSSION: This is the first randomized controlled trial of a multisensory intervention to use brain-based measurements to examine the causal effects of the intervention on the neural processing changes that may mediate neurodevelopmental outcomes in former preterm infants. In addition to contributing a critical link in our understanding of these processes, the protocolized multisensory intervention in this study is therapist-administered, parent-supported, and leverages simple technology. This multisensory intervention therefore has the potential to be widely implemented in various NICU settings, with the opportunity to improve the neurodevelopment of premature infants. TRIAL REGISTRATION: ClinicalTrials.gov: NCT03232931. Registered July 2017.


Subject(s)
Infant, Premature , Language Development , Motor Skills , Neurodevelopmental Disorders/prevention & control , Electroencephalography , Female , Humans , Infant, Newborn , Intensive Care Units, Neonatal , Male , Nervous System Physiological Phenomena , Parents
8.
Cognition ; 186: 171-177, 2019 05.
Article in English | MEDLINE | ID: mdl-30782550

ABSTRACT

Traditional models developed within cognitive psychology suggest that attention is deployed flexibly and irrespective of differences in expertise with to-be-attended stimuli. However, everyday environments are inherently multisensory, and observers differ in their familiarity with particular unisensory representations (e.g., number words, in contrast with digits). To test whether the predictions of the traditional models extend to such naturalistic settings, 6-year-olds, 11-year-olds, and young adults (N = 83) searched for predefined numerals amongst a small or large number of distractor digits, while distractor number words, digits, or their combination were presented peripherally. Concurrently presented number words and audiovisual stimuli that were compatible with the target digit facilitated young children's selective attention. In contrast, for older children and young adults, number words and audiovisual stimuli that were incompatible with the visual targets produced a reaction-time cost. These findings suggest that multisensory and familiarity-based influences interact dynamically as they shape selective attention. Models of selective attention should therefore include multisensory and familiarity-dependent constraints: more or less familiar object representations across modalities will be attended to differently, with their effects visible as predominant benefits for attention at one level but costs at another.
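A compatibility effect of the kind measured here is simply the reaction-time difference between incompatible and compatible distractor trials, computed per age group; the sketch below uses simulated trials, with group labels and effect magnitudes as arbitrary assumptions.

```python
# Sketch: per-group compatibility effect (incompatible minus compatible RT).
# Positive values indicate a cost, negative values a benefit. Simulated data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
rows = []
shift_by_group = {"6yo": -25, "11yo": 15, "adult": 20}   # arbitrary RT shifts (ms)
for group, shift in shift_by_group.items():
    for comp, delta in [("compatible", 0), ("incompatible", shift)]:
        for rt in rng.normal(700 + delta, 80, 100):
            rows.append({"group": group, "compatibility": comp, "rt": rt})

means = pd.DataFrame(rows).groupby(["group", "compatibility"])["rt"].mean().unstack()
print(means["incompatible"] - means["compatible"])       # compatibility effect (ms)
```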


Subject(s)
Attention , Auditory Perception , Pattern Recognition, Visual , Acoustic Stimulation , Adult , Age Factors , Child , Female , Humans , Male , Mathematical Concepts , Photic Stimulation , Psychomotor Performance , Reaction Time , Young Adult
9.
J Cogn Neurosci ; 31(3): 327-338, 2019 03.
Article in English | MEDLINE | ID: mdl-29916793

ABSTRACT

Real-world environments are typically dynamic, complex, and multisensory in nature, and they require the support of top-down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered in a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, research has over the years departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or functional brain organization have been challenged by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation or dynamically changing task demands. Here, we present the state of the field within the emerging, heterogeneous domain of real-world neuroscience. Specifically, the aim of this Special Focus is to bring together a variety of the emerging "real-world neuroscientific" approaches, which differ in their principal aims, assumptions, and even definitions of "real-world neuroscience" research. We showcase the commonalities and distinctive features of the different "real-world neuroscience" approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium of the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.


Subject(s)
Brain/physiology , Cognition/physiology , Environment , Neurosciences , Attention/physiology , Humans
10.
J Cogn Neurosci ; 31(3): 412-430, 2019 03.
Article in English | MEDLINE | ID: mdl-30513045

ABSTRACT

In real-world environments, information is typically multisensory, and objects are a primary unit of information processing. Object recognition and action necessitate attentional selection of task-relevant from among task-irrelevant objects. However, the brain and cognitive mechanisms governing these processes remain poorly understood. Here, we demonstrate that attentional selection of visual objects is controlled by integrated top-down audiovisual object representations ("attentional templates"), while revealing a new brain mechanism through which they can operate. In multistimulus (visual) arrays, attentional selection of objects in humans and animal models is traditionally quantified via the "N2pc component": spatially selective enhancements of neural processing of objects within ventral visual cortices at approximately 150-300 msec poststimulus. In our adaptation of Folk et al.'s [Folk, C. L., Remington, R. W., & Johnston, J. C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030-1044, 1992] spatial cueing paradigm, visual cues elicited weaker behavioral attention capture and an attenuated N2pc during audiovisual versus visual search. To provide direct evidence for the brain, and thus cognitive, mechanisms underlying top-down control in multisensory search, we analyzed global features of the electrical field at the scalp across our N2pc measurements. In the N2pc time window (170-270 msec), color cues elicited brain responses that differed in both strength and topography; the latter finding is indicative of changes in active brain sources. Thus, in multisensory environments, attentional selection is controlled via integrated top-down object representations, and not only by separate sensory-specific top-down feature templates (as suggested by traditional N2pc analyses). We discuss how the electrical neuroimaging approach can aid research on top-down attentional control in naturalistic, multisensory settings and on other neurocognitive functions in the growing area of real-world neuroscience.
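Topographic differences of the kind reported here are commonly tested with a randomization procedure on global map dissimilarity (sometimes called a TANOVA); a minimal sketch on synthetic data follows, with channel and trial counts as assumptions.

```python
# Sketch: permutation test on global map dissimilarity (DISS) between two
# conditions' GFP-normalised mean topographies. A reliable DISS difference
# implies different configurations of underlying sources.
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_chan = 80, 129
maps_a = rng.normal(0, 1, (n_trials, n_chan)) + np.sin(np.arange(n_chan) / 10)
maps_b = rng.normal(0, 1, (n_trials, n_chan)) + np.cos(np.arange(n_chan) / 10)

def diss(u, v):
    """Dissimilarity between two average-referenced, GFP-normalised maps."""
    def norm(m):
        m = m - m.mean()                      # average reference
        return m / np.sqrt((m ** 2).mean())   # unit global field power
    return np.sqrt(((norm(u) - norm(v)) ** 2).mean())

observed = diss(maps_a.mean(0), maps_b.mean(0))
pooled = np.vstack([maps_a, maps_b])
null = []
for _ in range(2000):                          # shuffle condition labels
    idx = rng.permutation(2 * n_trials)
    null.append(diss(pooled[idx[:n_trials]].mean(0), pooled[idx[n_trials:]].mean(0)))
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"DISS = {observed:.3f}, permutation p = {p:.4f}")
```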


Subject(s)
Attention/physiology , Brain/physiology , Cognition/physiology , Visual Perception/physiology , Adult , Cues , Electroencephalography , Female , Humans , Male , Neuroimaging , Photic Stimulation , Reaction Time/physiology , Young Adult
11.
Neural Plast ; 2018: 1891978, 2018.
Article in English | MEDLINE | ID: mdl-30532772

ABSTRACT

Cerebral palsy (CP) is predominantly a disorder of movement, with evidence of sensory-motor dysfunction. Constraint-induced movement therapy (CIMT) is a widely used treatment for hemiplegic CP. However, the effects of CIMT on somatosensory processing remain unclear. To examine potential CIMT-induced changes in cortical tactile processing, we designed a prospective study during which 10 children with hemiplegic CP (5 to 8 years old) underwent an intensive, one-week-long, nonremovable hard-constraint CIMT. Before and directly after the treatment, we recorded their cortical event-related potential (ERP) responses to calibrated light touch (versus a control stimulus) at the more and the less affected hand. To provide insights into the core neurophysiological deficits in light-touch processing in CP, as well as into the plasticity of this function following CIMT, we analyzed the ERPs within an electrical neuroimaging framework. After CIMT, brain areas governing the more affected hand responded to touch in configurations similar to those activated by the hemisphere controlling the less affected hand before CIMT. This contrasted with the less affected hand, where post-CIMT configurations resembled those of the more affected hand before CIMT. Furthermore, dysfunctional patterns of brain activity, identified using hierarchical ERP cluster analyses, appeared reduced after CIMT in proportion to changes in sensory-motor measures (grip or pinch movements). These novel results suggest recovery of functional sensory activation as one possible mechanism underlying the effectiveness of intensive constraint-based therapy on motor functions in the more affected upper extremity in CP. However, maladaptive effects on the less affected, constrained extremity may also have occurred. Our findings also highlight electrical neuroimaging as a feasible methodology for measuring changes in tactile function after treatment, even in young children, as it does not require active participation.


Subject(s)
Cerebral Palsy/physiopathology , Cerebral Palsy/therapy , Motion Therapy, Continuous Passive/methods , Neuronal Plasticity/physiology , Somatosensory Cortex/physiology , Cerebral Palsy/diagnosis , Child , Child, Preschool , Electroencephalography/methods , Evoked Potentials, Somatosensory/physiology , Female , Humans , Male , Physical Therapy Modalities , Prospective Studies , Range of Motion, Articular/physiology
12.
Neuroimage ; 179: 480-488, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29959049

ABSTRACT

Everyday vision includes the detection of stimuli, figure-ground segregation, and object localization and recognition. Such processes must often surmount impoverished or noisy conditions; borders are perceived despite occlusion or absent contrast gradients. These illusory contours (ICs) are an example of so-called mid-level vision, with an event-related potential (ERP) correlate at ∼100-150 ms post-stimulus onset originating within lateral occipital cortices (the "IC effect"). Presently, the visual completion processes supporting IC perception are considered exclusively visual; any influence from other sensory modalities is currently unknown. It is now well established that multisensory processes can influence both low-level vision (e.g. detection) and higher-level object recognition. By contrast, it is unknown whether mid-level vision exhibits multisensory benefits and, if so, through what mechanisms. We hypothesized that sounds would impact the IC effect. We recorded 128-channel ERPs from 17 healthy, sighted participants who viewed ICs or no-contour (NC) counterparts either in the presence or absence of task-irrelevant sounds. The IC effect was enhanced by sounds and resulted in the recruitment of a distinct configuration of active brain areas over the 70-170 ms post-stimulus period. IC-related source-level activity within the lateral occipital cortex (LOC), the inferior parietal lobe (IPL), and primary visual cortex (V1) was enhanced by sounds. Moreover, the activity in these regions was correlated when sounds were present, but not when they were absent. Results from a control experiment, which employed amodal variants of the stimuli, suggested that sounds impact the perceived brightness of the IC rather than shape formation per se. We provide the first demonstration that multisensory processes augment mid-level vision and everyday visual completion processes, and that one of the mechanisms is brightness enhancement. These results have important implications for the design of treatments and/or visual aids for low-vision patients.


Subject(s)
Brain/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Photic Stimulation , Sound , Young Adult
13.
Sci Rep ; 8(1): 8901, 2018 06 11.
Article in English | MEDLINE | ID: mdl-29891964

ABSTRACT

Multisensory information typically confers neural and behavioural advantages over unisensory information. We used a simple audio-visual detection task to compare healthy young (HY), healthy older (HO), and mild cognitive impairment (MCI) individuals. Neuropsychological tests assessed individuals' learning and memory impairments. First, we provide much-needed clarification regarding the presence of enhanced multisensory benefits in both healthily and abnormally aging individuals. The pattern of sensory dominance shifted with healthy and abnormal aging towards a propensity for auditory-dominant behaviour (i.e., detecting sounds faster than flashes). Notably, multisensory benefits were larger in healthy older than in younger individuals, but only among those who were visually dominant. Second, we demonstrate that the multisensory detection task offers benefits as a time- and resource-economic MCI screening tool. Receiver operating characteristic (ROC) analysis demonstrated that MCI diagnosis could be reliably achieved based on the combination of indices of multisensory integration and indices of sensory dominance. Our findings showcase the importance of sensory profiles in determining multisensory benefits in healthy and abnormal aging. Crucially, they open an exciting possibility for multisensory detection tasks to be used as a cost-effective screening tool. These findings clarify the relationships between multisensory and memory functions in aging, while offering new avenues for improved dementia diagnostics.
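The ROC logic described here can be sketched as follows: combine the two kinds of indices in a simple classifier and quantify discrimination by the area under the ROC curve. Feature names, sample sizes, and effect sizes below are assumptions on simulated data, not the study's measurements.

```python
# Sketch: MCI vs. healthy-older discrimination from a multisensory-integration
# index plus a sensory-dominance index, evaluated with ROC analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(6)
n = 60
x_healthy = np.column_stack([rng.normal(0.15, 0.05, n), rng.normal(0.0, 1.0, n)])
x_mci = np.column_stack([rng.normal(0.08, 0.05, n), rng.normal(0.8, 1.0, n)])
X = np.vstack([x_healthy, x_mci])
y = np.r_[np.zeros(n), np.ones(n)]             # 0 = healthy older, 1 = MCI

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, scores)
best = thresholds[(tpr - fpr).argmax()]        # Youden-optimal operating point
print(f"AUC = {roc_auc_score(y, scores):.2f}, threshold = {best:.2f}")
# In-sample AUC shown for brevity; a real screening tool would cross-validate.
```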


Subject(s)
Aging/pathology , Aging/physiology , Cognitive Dysfunction/diagnosis , Mass Screening/methods , Neuropsychological Tests , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Auditory Perception , Female , Humans , Male , Photic Stimulation , ROC Curve , Visual Perception , Young Adult
14.
Neuroimage ; 176: 29-40, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29678759

ABSTRACT

Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (i.e., who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended to and discriminated sounds according to their pitch, speaker identity, uttered syllable ('what' dimensions), or their location ('where'). Sound acoustics were held constant across blocks; the only manipulation involved the sound dimension that participants had to attend to, which was varied across blocks. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography, the latter of which necessarily follow from changes in the configuration of underlying sources. There were no behavioural differences in the discrimination of sounds across the four feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them.
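The strength-versus-topography distinction at the heart of this framework can be sketched with two measures: global field power (GFP) for strength, and the similarity of GFP-normalised maps for topography. The sketch below uses synthetic ERPs; channel and time dimensions are assumptions.

```python
# Sketch: GFP (spatial SD of the average-referenced map) indexes response
# strength; correlating GFP-normalised maps indexes topographic similarity.
import numpy as np

rng = np.random.default_rng(7)
n_chan, n_time = 129, 400
erp_a = rng.normal(0, 1, (n_chan, n_time))
erp_b = 1.3 * erp_a + rng.normal(0, 0.3, (n_chan, n_time))  # stronger, similar map

ref_a = erp_a - erp_a.mean(axis=0)       # average reference
ref_b = erp_b - erp_b.mean(axis=0)
gfp_a, gfp_b = ref_a.std(axis=0), ref_b.std(axis=0)
topo_corr = np.array([np.corrcoef(ref_a[:, t] / gfp_a[t],
                                  ref_b[:, t] / gfp_b[t])[0, 1]
                      for t in range(n_time)])

print(f"mean GFP ratio: {(gfp_b / gfp_a).mean():.2f}, "
      f"mean map correlation: {topo_corr.mean():.2f}")
```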


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials, Auditory , Female , Humans , Male , Middle Aged , Sound Localization/physiology , Sound Spectrography , Young Adult
15.
Neuropsychologia ; 105: 243-252, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28400327

ABSTRACT

Traditional studies of memory and object recognition involved objects presented within a single sensory modality (i.e., purely visual or purely auditory objects). However, in naturalistic settings, objects are often evaluated and processed in a multisensory manner. This raises the question of how object representations that combine information from the different senses are created and utilised by memory functions. Here we review research demonstrating that a single multisensory exposure can influence memory for both visual and auditory objects. In an old/new object discrimination task, objects that were initially presented with a task-irrelevant stimulus in another sense were better remembered than stimuli presented alone, most notably when the two stimuli were semantically congruent. The brain discriminates between these two types of object representations within the first 100 ms post-stimulus onset, indicating early "tagging" of objects/events by the brain based on the nature of their initial presentation context. Interestingly, the specific brain networks supporting the improved object recognition vary based on a variety of factors, including the effectiveness of the initial multisensory presentation and the sense that is task-relevant. We specify the requisite conditions for multisensory contexts to improve object discrimination following single exposures, and the individual differences that exist with respect to these improvements. Our results shed light on how memory operates on the multisensory nature of object representations, as well as on how the brain stores and retrieves memories of objects.


Subject(s)
Auditory Perception/physiology , Brain/physiology , Recognition, Psychology , Visual Perception/physiology , Acoustic Stimulation , Brain/diagnostic imaging , Discrimination, Psychological/physiology , Humans , Photic Stimulation
16.
Curr Biol ; 27(7): 1048-1054, 2017 Apr 03.
Article in English | MEDLINE | ID: mdl-28318973

ABSTRACT

Every year, 15 million preterm infants are born, and most spend their first weeks in neonatal intensive care units (NICUs) [1]. Although essential for the support and survival of these infants, NICU sensory environments are dramatically different from those in which full-term infants mature and thus likely impact the development of functional brain organization [2]. Yet the integrity of sensory systems determines effective perception and behavior [3, 4]. In neonates, touch is a cornerstone of interpersonal interactions and sensory-cognitive development [5-7]. NICU treatments used to improve neurodevelopmental outcomes rely heavily on touch [8]. However, we understand little of how brain maturation at birth (i.e., prematurity) and the quality of early-life experiences (e.g., supportive versus painful touch) interact to shape the development of the somatosensory system [9]. Here, we identified the spatial, temporal, and amplitude characteristics of cortical responses to light touch that differentiate them from sham stimuli in full-term infants. We then utilized this data-driven analytical framework to show that the degree of prematurity at birth determines the extent to which brain responses to light touch (but not sham) are attenuated at the time of discharge from the hospital. Building on these results, we showed that, when controlling for prematurity and analgesics, supportive experiences (e.g., breastfeeding, skin-to-skin care) are associated with stronger brain responses, whereas painful experiences (e.g., skin punctures, tube insertions) are associated with reduced brain responses to the same touch stimuli. Our results provide crucial insights into the mechanisms through which common early perinatal experiences may shape the somatosensory scaffolding of later perceptual, cognitive, and social development.


Subject(s)
Brain/physiology , Infant, Newborn/physiology , Infant, Premature/physiology , Touch Perception , Cohort Studies , Electroencephalography , Evoked Potentials , Female , Humans , Male , Term Birth
17.
Curr Biol ; 26(13): R519-R520, 2016 07 11.
Article in English | MEDLINE | ID: mdl-27404234

ABSTRACT

Diagnosing heart conditions by auscultation is an important clinical skill commonly learnt by medical students. Clinical proficiency in this skill is in decline [1], and new teaching methods are needed. Successful discrimination of heartbeat sounds is believed to benefit mainly from acoustical training [2]. From recent studies of auditory training [3,4], we hypothesized that semantic representations outside the auditory cortex contribute to diagnostic accuracy in cardiac auscultation. To test this hypothesis, we analysed auditory evoked potentials (AEPs) recorded from medical students while they diagnosed quadruplets of heartbeat cycles. The comparison of trials with a correct (Hits) versus an incorrect diagnosis (Misses) revealed a significant difference in brain activity at 280-310 ms after the onset of the second cycle within the left middle frontal gyrus (MFG) and the right prefrontal cortex. This timing and locus suggest that semantic rather than acoustic representations contribute critically to auscultation skills. Teaching auscultation should therefore emphasize the link between a heartbeat sound and its meaning. Beyond cardiac auscultation, this issue is of interest to all fields where subtle but complex perceptual differences identify items in a well-known semantic context.


Subject(s)
Auditory Perception/physiology , Clinical Competence , Heart Auscultation/standards , Heart Diseases/diagnosis , Students, Medical , Humans , Learning , Physical Examination
18.
Exp Brain Res ; 234(5): 1307-23, 2016 May.
Article in English | MEDLINE | ID: mdl-26931340

ABSTRACT

Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes, such as attention, memory, and expectations, that influence sensory processing via activity from higher-order brain areas. As the two topics have traditionally been studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and cognitive levels. These forms of control include sensitivity to stimulus context as well as the detection of matches (or a lack thereof) between a multisensory stimulus and the categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings demonstrating the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.


Subject(s)
Attention , Brain Mapping , Brain/physiology , Goals , Perception/physiology , Female , Humans , Male , Physical Stimulation
19.
Neuroimage ; 129: 335-344, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26827814

ABSTRACT

Objects' borders are readily perceived despite absent contrast gradients, e.g. due to poor lighting or occlusion. In humans, a visual evoked potential (VEP) correlate of illusory contour (IC) sensitivity, the "IC effect", has been identified with an onset at ~90 ms and generators within the bilateral lateral occipital cortices (LOC). The IC effect is observed across a wide range of stimulus parameters, though until now it has always involved high-contrast achromatic stimuli. Whether IC perception and its brain mechanisms differ as a function of the type of stimulus cue remains unknown. Resolving this will provide insight into whether there is a single solution, or multiple solutions, to how the brain binds spatially fractionated information into a cohesive perception. Here, participants discriminated IC from no-contour (NC) control stimuli that were composed either of low-contrast achromatic stimuli or of isoluminant chromatic-contrast stimuli (presumably biasing processing towards the magnocellular and parvocellular pathways, respectively) on separate blocks of trials. Behavioural analyses revealed that ICs were readily perceived independently of the stimulus cue, i.e. when defined by either chromatic or luminance contrast. VEPs were analysed within an electrical neuroimaging framework and revealed a generally similar timing of IC effects across both stimulus contrasts (i.e. at ~90 ms). Additionally, an overall phase shift of the VEP on the order of ~30 ms was consistently observed in response to chromatic vs. luminance contrast, independently of the presence/absence of ICs. Critically, topographic differences in the IC effect were observed over the ~110-160 ms period; different configurations of intracranial sources contributed to IC sensitivity as a function of stimulus contrast. Distributed source estimations localized these differences to the LOC as well as V1/V2. The present data expand current models by demonstrating the existence of multiple, cue-dependent circuits in the brain for generating perceptions of illusory contours.


Subject(s)
Brain/physiology , Evoked Potentials, Visual/physiology , Form Perception/physiology , Adult , Brain Mapping/methods , Cues , Electroencephalography , Female , Humans , Male , Photic Stimulation , Young Adult
20.
Neuroimage ; 125: 996-1004, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26564531

ABSTRACT

Real-world environments are nearly always multisensory in nature. Processing in such situations confers perceptual advantages, but its automaticity remains poorly understood. Automaticity has been invoked to explain the activation of visual cortices by laterally presented sounds, which has been observed even when the sounds were task-irrelevant and spatially uninformative about subsequent targets. An auditory-evoked contralateral occipital positivity (ACOP) at ~250 ms post-sound onset has been postulated as the event-related potential (ERP) correlate of this cross-modal effect. However, the spatial dimension of the stimuli was relevant in virtually all prior studies in which the ACOP was observed. By manipulating the implicit predictability of the location of lateralised sounds in a passive auditory paradigm, we tested the automaticity of these cross-modal activations of visual cortices. 128-channel ERP data from healthy participants were analysed within an electrical neuroimaging framework. The timing, topography, and localisation of the effects resembled previous characterisations of the ACOP. However, the cross-modal activations of visual cortices by sounds critically depended on whether the sound location was (un)predictable. Our results are the first direct evidence that this particular cross-modal process is not (fully) automatic; instead, it is context-contingent. More generally, the present findings provide novel insights into the importance of context-related factors in controlling information processing across the senses, and call for a revision of current models of automaticity in the cognitive sciences.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Evoked Potentials/physiology , Visual Cortex/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Female , Humans , Male , Young Adult