Results 1 - 7 of 7
1.
J Exp Child Psychol; 182: 166-186, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30831382

ABSTRACT

Although much research suggests that adults, infants, and nonhuman primates process number (among other properties) across distinct modalities, few studies have explored children's abilities to integrate multisensory information when making judgments about number. In the current study, 3- to 6-year-old children performed numerical matching or numerical discrimination tasks in which numerical information was presented either unimodally (visual only), cross-modally (comparing auditory with visual input), or bimodally (simultaneously presenting auditory and visual input). In three experiments, we investigated children's multimodal numerical processing across distinct task demands and difficulty levels. In contrast to previous work, results indicate that even the youngest children (3 and 4 years) performed above chance across all three modality presentations. In addition, the current study contributes two other novel findings, namely that (a) children exhibit a cross-modal disadvantage when numerical comparisons are easy and that (b) bimodal presentation led to more accurate numerical judgments under more difficult circumstances, particularly for the youngest participants and when precise numerical matching was required. Importantly, findings from this study extend the literature on children's numerical cross-modal abilities to reveal that, like their adult counterparts, children readily track and compare visual and auditory numerical information, although their abilities to do so are not perfect and are affected by task demands and trial difficulty.
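
As a rough illustration of the design space this abstract describes, the sketch below builds a trial list crossing the three modality presentations with numerosity pairs, tagging each trial with its ratio as a difficulty index. The condition names, numerosities, and ratio bins are illustrative assumptions, not the study's actual stimuli.

```python
# Hypothetical trial-list builder for the three modality presentations
# described above. All names and values are assumptions for illustration.
import itertools
import random

MODALITIES = ["unimodal_visual", "cross_modal", "bimodal"]
NUMEROSITY_PAIRS = [(2, 3), (4, 6), (6, 7), (8, 12)]  # mix of easy and hard ratios

def make_trials(n_repeats: int = 4, seed: int = 0) -> list[dict]:
    """Cross each modality condition with each numerosity pair; tag each
    trial with its ratio (smaller/larger) as a difficulty index."""
    rng = random.Random(seed)
    trials = []
    for modality, (a, b) in itertools.product(MODALITIES, NUMEROSITY_PAIRS):
        for _ in range(n_repeats):
            trials.append({
                "modality": modality,
                "numerosities": (a, b),
                "ratio": min(a, b) / max(a, b),  # closer to 1 = harder
            })
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    trials = make_trials()
    print(len(trials), "trials; first:", trials[0])
```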


Subject(s)
Auditory Perception/physiology; Judgment/physiology; Mathematics/methods; Visual Perception/physiology; Age Factors; Child; Child, Preschool; Female; Humans; Male
2.
Infant Behav Dev; 52: 32-44, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29807236

ABSTRACT

Two-system theory, the dominant approach in the field of infant numerical representation, is characterized by three features: precise representation of small sets of objects, approximate representation of large magnitudes, and failure to compare small and large sets. Comparison of single- and multimodal numerical abilities suggests that infants' performance in multimodal conditions is consistent with these three features. Nevertheless, multimodal stimulation influences infants' numerical representation in two ways: it prevents the formation of perceptual overlaps across different sensory modalities, which can lead to an understanding of the numerical values of small sets, and it creates a conceptual overlap about number that increases infants' accuracy in discriminating quantities when numerical information is presented bimodally and synchronously. Such multisensory benefits provide numerical capabilities beyond what is depicted by the two-system view.
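
The three features of the two-system view lend themselves to a toy simulation: an object-file system represents sets up to a small capacity exactly, an approximate number system (ANS) represents larger sets with scalar variability, and comparisons that span the two systems fall to chance. The capacity limit and Weber fraction below are illustrative assumptions, not fitted values.

```python
# Toy simulation of the two-system view sketched above. The capacity
# limit and Weber fraction are assumptions chosen for illustration.
import random

OBJECT_FILE_LIMIT = 3   # assumed small-set capacity
WEBER_FRACTION = 0.3    # assumed infant-like ANS noise

def represent(n: int, rng: random.Random) -> tuple[str, float]:
    """Return (system, internal magnitude) for a set of size n."""
    if n <= OBJECT_FILE_LIMIT:
        return "object-file", float(n)              # exact representation
    return "ANS", rng.gauss(n, WEBER_FRACTION * n)  # noisy, scalar variability

def discriminate(a: int, b: int, rng: random.Random) -> bool:
    """Judge which set is larger; comparisons spanning the two systems
    are returned as guesses, matching the third feature above."""
    sys_a, mag_a = represent(a, rng)
    sys_b, mag_b = represent(b, rng)
    if sys_a != sys_b:
        return rng.random() < 0.5   # small vs. large: chance performance
    return (mag_a > mag_b) == (a > b)

if __name__ == "__main__":
    rng = random.Random(1)
    for a, b in [(2, 3), (8, 16), (2, 8)]:
        acc = sum(discriminate(a, b, rng) for _ in range(2000)) / 2000
        print(f"{a} vs {b}: accuracy {acc:.2f}")
```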


Subject(s)
Child Development/physiology; Cognition/physiology; Mathematics/education; Female; Humans; Infant; Knowledge; Male
3.
Adv Child Dev Behav ; 52: 227-268, 2017.
Article in English | MEDLINE | ID: mdl-28215286

ABSTRACT

Touch is the first of our senses to develop, providing us with the sensory scaffold on which we come to perceive our own bodies and our sense of self. Touch also provides us with direct access to the external world of physical objects, via haptic exploration. Furthermore, a recent area of interest in tactile research across studies of developing children and adults is its social function, mediating interpersonal bonding. Although there is a range of demonstrations of early competence with touch, particularly in the domain of haptics, the review presented here indicates that many of the tactile perceptual skills that we take for granted as adults (e.g., perceiving touches in the external world as well as on the body) take some time to develop in the first months of postnatal life, likely as a result of an extended process of connection with other sense modalities that provide new kinds of information from birth (e.g., vision and audition). Here, we argue that because touch is of such fundamental importance across a wide range of social and cognitive domains, it should be placed much more centrally in the study of early perceptual development than it currently is.


Subject(s)
Touch Perception; Adult; Body Image; Child; Child, Preschool; Concept Formation; Feedback, Sensory; Female; Functional Laterality; Humans; Infant; Infant, Newborn; Interpersonal Relations; Memory; Object Attachment; Orientation; Pregnancy; Proprioception; Self Concept; Visual Perception
4.
Multisens Res; 30(6): 579-600, 2017 Jan 01.
Article in English | MEDLINE | ID: mdl-31287085

ABSTRACT

Sensory substitution devices were developed in the context of perceptual rehabilitation; they aim to compensate for one or several functions of a deficient sensory modality by converting stimuli normally accessed through that modality into stimuli accessible by another sensory modality. For instance, they can convert visual information into sounds or tactile stimuli. In this article, we review studies that investigated individual differences at the behavioural, neural, and phenomenological levels when using a sensory substitution device. We highlight how taking individual differences into account has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and emerging from each user's pre-existing sensory and cognitive capacities.
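
To make the visual-to-auditory conversion concrete, the sketch below implements one classic mapping of this kind (in the style of scan-based devices such as the vOICe: image columns are scanned left to right over time, row height maps to pitch, and pixel brightness maps to loudness). The sampling rate, scan duration, and frequency range are illustrative assumptions, not any particular device's settings.

```python
# A minimal visual-to-auditory substitution sketch under assumed parameters.
import numpy as np

def image_to_sound(image: np.ndarray, sr: int = 22050,
                   scan_seconds: float = 1.0,
                   f_lo: float = 200.0, f_hi: float = 4000.0) -> np.ndarray:
    """image: 2D array of brightness in [0, 1], row 0 = top.
    Returns a mono waveform scanning the image left to right."""
    n_rows, n_cols = image.shape
    col_len = int(sr * scan_seconds / n_cols)
    t = np.arange(col_len) / sr
    # Top rows get high frequencies; log spacing matches pitch perception.
    freqs = np.geomspace(f_hi, f_lo, n_rows)
    out = []
    for c in range(n_cols):
        col = image[:, c]  # brightness per row drives each tone's amplitude
        tone = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(0)
        out.append(tone / max(n_rows, 1))
    return np.concatenate(out)

if __name__ == "__main__":
    img = np.zeros((16, 32))
    img[4, :] = 1.0  # a bright horizontal line -> a steady high pitch
    wave = image_to_sound(img)
    print(wave.shape, float(wave.min()), float(wave.max()))
```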

5.
J Phon; 56: 66-74, 2016 May.
Article in English | MEDLINE | ID: mdl-28867850

ABSTRACT

One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g., the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers, and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome the obstacles posed by this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants with an audiovisual continuum between /aba/ and /ada/. During familiarization, the "B-face" mouthed /aba/ when an ambiguous token was played, while the "D-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "B-face" than with an image of the "D-face." This was not the case in the control condition, in which the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.
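
A standard way to quantify such a speaker-specific boundary shift is to fit a logistic psychometric function to identification responses along the continuum separately for each face context and compare the 50% category boundaries. The sketch below shows that logic; the response proportions are fabricated placeholders for illustration, not the study's data.

```python
# Hedged sketch: fit a logistic psychometric function per face context
# and compare category boundaries. Data below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """P(respond /ada/) as a function of continuum step."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8)  # assumed 7-step /aba/-/ada/ continuum
p_ada_bface = np.array([.02, .05, .10, .30, .60, .90, .97])  # placeholder
p_ada_dface = np.array([.05, .12, .35, .65, .88, .96, .99])  # placeholder

for label, p in [("B-face", p_ada_bface), ("D-face", p_ada_dface)]:
    (boundary, slope), _ = curve_fit(logistic, steps, p, p0=[4.0, 1.0])
    print(f"{label}: category boundary at step {boundary:.2f}")
# A higher boundary for the B-face means more of the continuum is heard
# as /aba/ in that context, mirroring the retuning described above.
```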

6.
Neuropsychologia; 77: 366-379, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26382751

ABSTRACT

We studied the time course of the cerebral integration of olfaction into the visual processing of emotional faces during an orthogonal task requiring detection of red-colored faces among expressive faces. Happy, angry, disgusted, fearful, sad, and neutral faces were displayed in pleasant, aversive, or no-odor control olfactory contexts while EEG was recorded to extract event-related potentials (ERPs). Results indicated that the expressive faces modulated the cerebral responses at occipito-parietal, central, and centro-parietal electrodes from around 100 ms until 480 ms after face onset. The response was divided into successive stages corresponding to different ERP components (P100, N170, P200, N250 (EPN), and LPP). The olfactory contexts influenced the ERPs in response to facial expressions in two phases. First, regardless of their emotional content, the response to faces was enhanced by both odors compared with no odor approximately 160 ms after face onset at several central, centro-parietal, and left lateral electrodes. The topography of this effect clearly depended on the valence of the odors. A second phase then occurred, but only in the aversive olfactory context, which differentially modulated the P200 at occipital sites (starting approximately 200 ms post-stimulus) by amplifying the differential response to expressions, especially between emotional neutrality and both happiness and disgust. Overall, the present study suggests that the olfactory context first elicits an undifferentiated effect around 160 ms after face onset, followed by a specific modulation at 200 ms induced by the aversive odor on neutral and affectively congruent/incongruent expressions.
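
The core ERP pipeline implied here is standard: epoch the continuous EEG around each face onset, baseline-correct against the pre-stimulus interval, average within condition, and measure component amplitude in a post-stimulus window. The sketch below shows that logic on synthetic data; the sampling rate, epoch bounds, and the 180-260 ms P200 window are assumptions, not the study's parameters.

```python
# Minimal ERP epoching and P200 measurement sketch (assumed parameters).
import numpy as np

SR = 500              # sampling rate in Hz (assumed)
PRE, POST = 0.2, 0.6  # epoch from -200 ms to +600 ms around onset

def epoch_and_average(eeg: np.ndarray, onsets: np.ndarray) -> np.ndarray:
    """eeg: 1D signal from one electrode; onsets: sample indices of face
    onsets. Returns the baseline-corrected average (the ERP)."""
    pre, post = int(PRE * SR), int(POST * SR)
    epochs = np.stack([eeg[o - pre:o + post] for o in onsets])
    baseline = epochs[:, :pre].mean(axis=1, keepdims=True)
    return (epochs - baseline).mean(axis=0)

def p200_amplitude(erp: np.ndarray, lo_ms: int = 180, hi_ms: int = 260) -> float:
    """Mean amplitude in an assumed 180-260 ms post-stimulus window."""
    pre = int(PRE * SR)
    lo = pre + int(lo_ms / 1000 * SR)
    hi = pre + int(hi_ms / 1000 * SR)
    return float(erp[lo:hi].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.normal(0, 1, 60 * SR)          # 60 s of synthetic noise
    onsets = np.arange(5 * SR, 55 * SR, SR)  # one face per second
    erp = epoch_and_average(eeg, onsets)
    print("P200 window amplitude:", p200_amplitude(erp))
```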


Subject(s)
Brain/physiology; Facial Recognition/physiology; Olfactory Perception/physiology; Adolescent; Adult; Electroencephalography; Emotions; Evoked Potentials; Facial Expression; Female; Humans; Male; Odorants; Physical Stimulation; Young Adult
7.
Atten Percept Psychophys; 77(7): 2356-2376, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26001381

ABSTRACT

Visual processing is most effective at the location of our attentional focus. It has long been known that various spatial cues can direct visuospatial attention and influence the detection of auditory targets. Cross-modal cueing, however, seems to depend on the type of visual cue: facilitation effects have been reported for endogenous visual cues, while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1, we used endogenous cues to investigate their effect on the detection of auditory, visual, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2, we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3, we used predictive exogenous cues to examine the possibility that cue-target contingencies were responsible for the difference between Experiments 1 and 2. In all experiments, we investigated whether a response time model can explain the data and tested whether the observed cueing effects were modality-dependent. The results observed with endogenous cues imply that the perception of multisensory signals is modulated by a single, supramodal system operating in a top-down manner (Experiment 1). In contrast, bottom-up control of attention, as observed in the exogenous cueing task of Experiment 2, mainly exerts its influence through modality-specific subsystems. Experiment 3 showed that this striking difference does not depend on contingencies between cue and target.
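
The abstract does not name its response time model, but one standard tool for audiovisual redundant targets in this literature is Miller's race-model inequality, which bounds the redundant-target RT distribution by the sum of the unimodal CDFs: F_AV(t) <= F_A(t) + F_V(t). Violations of the bound suggest coactivation rather than a parallel race. The sketch below computes the test on synthetic placeholder RTs.

```python
# Hedged sketch of the race-model inequality test; RTs are placeholders.
import numpy as np

def ecdf(rts: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Empirical cumulative distribution of RTs evaluated on grid (ms)."""
    return np.searchsorted(np.sort(rts), grid, side="right") / len(rts)

rng = np.random.default_rng(2)
rt_a = rng.normal(320, 40, 500)   # auditory-only RTs (placeholder)
rt_v = rng.normal(340, 40, 500)   # visual-only RTs (placeholder)
rt_av = rng.normal(280, 35, 500)  # audiovisual RTs (placeholder)

grid = np.linspace(150, 500, 71)
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)  # race-model bound
violation = ecdf(rt_av, grid) - bound
print("max race-model violation:", violation.max())  # > 0 implies coactivation
```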


Subject(s)
Attention/physiology; Auditory Perception/physiology; Cues; Visual Perception/physiology; Adult; Female; Humans; Reaction Time/physiology; Young Adult