1.
Front Psychol ; 15: 1373191, 2024.
Article in English | MEDLINE | ID: mdl-38550642

ABSTRACT

Introduction: A substantial amount of research from the last two decades suggests that infants' attention to the eyes and mouth regions of talking faces could be a supporting mechanism by which they acquire their native language(s). Importantly, attentional strategies seem to be sensitive to three types of constraints: the properties of the stimulus, the infants' attentional control skills (which improve with age and brain maturation), and their previous linguistic and non-linguistic knowledge. The goal of the present paper is to present a probabilistic model that simulates infants' control of visual attention to talking faces as a function of their language-learning environment (monolingual vs. bilingual), attention maturation (i.e., age), and their increasing knowledge of the task at hand (detecting and learning to anticipate information displayed in the eyes or the mouth region of the speaker). Methods: To test the model, we first considered experimental eye-tracking data from monolingual and bilingual infants (aged between 12 and 18 months; in part already published) exploring a face speaking in their native language. In each of these conditions, we compared the proportion of total looking time on each of the two areas of interest (eyes vs. mouth of the speaker). Results: In line with previous studies, our experimental results show a strong bias for the mouth (over the eyes) region of the speaker, regardless of age. Furthermore, monolingual and bilingual infants appear to have different developmental trajectories, which is consistent with, and extends, previous results observed in the first year. Comparison of model simulations with experimental data shows that the model successfully captures the patterns of visuo-attentional orientation through the three parameters that modulate the simulated visuo-attentional behavior. Discussion: We interpret the parameter values and find that they adequately reflect the evolution of the strength and speed of anticipatory learning; we further discuss their descriptive and explanatory power.

2.
Vision Res ; 207: 108211, 2023 06.
Article in English | MEDLINE | ID: mdl-36990012

ABSTRACT

During reading acquisition, beginning readers transition from serial to more parallel processing. The acquisition of word-specific knowledge through orthographic learning is critical for this transition. However, the processes by which orthographic representations are acquired and fine-tuned as learning progresses are not well understood. Our aim was to explore the role of visual attention in this transition through computational modeling. We used BRAID-Learn, a Bayesian model of visual word recognition, to simulate the orthographic learning of 700 known and novel English words of 4 to 10 letters, each presented 5 times to the model. The quantity of visual attention available for letter identification was manipulated in the simulations to assess its influence on the learning process. We measured the overall processing time and the number of attentional fixations simulated by the model across exposures, and their impact on two markers of serial processing (the lexicality and length effects), as a function of visual attention quantity. Results showed that both the lexicality and length effects were modulated by visual attention quantity. The quantity of visual attention available for processing further modulated novel-word orthographic learning and the evolution of the length effect on processing time and number of attentional fixations across repeated exposures to novel words. The simulated patterns are consistent with behavioral data and with the developmental trajectories reported during reading acquisition. Overall, the model predicts that the efficacy of orthographic learning depends on visual attention quantity and that visual attention may be critical to explain the transition from serial to more parallel processing.


Subjects
Language; Learning; Humans; Bayes Theorem; Reading; Pattern Recognition, Visual
3.
Psychon Bull Rev ; 29(5): 1649-1672, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35318586

ABSTRACT

How is orthographic knowledge acquired? In line with the self-teaching hypothesis, most computational models assume that phonological recoding plays a pivotal role in orthographic learning. However, these models make simplifying assumptions about the mechanisms involved in visuo-orthographic processing. Contrary to evidence from eye-movement data collected during orthographic learning, they assume that orthographic information on novel words is immediately available and accurately encoded after a single exposure. In this paper, we describe BRAID-Learn, a new computational model of orthographic learning. BRAID-Learn is a probabilistic, hierarchical model that incorporates the mechanisms of visual acuity, lateral interference, and visual attention involved in word recognition. Orthographic learning in the model rests on three main mechanisms: first, visual attention moves over the input string to optimize the gain of information on letter identity at each fixation; second, top-down lexical influence is modulated as a function of stimulus familiarity; third, after exploration, the perceived information is used to create a new orthographic representation or to stabilize a better-specified representation of the input word. BRAID-Learn was challenged on its capacity to simulate the eye-movement patterns reported in humans during incidental orthographic learning. In line with the behavioral data, the model predicts a larger decline across exposures in the number of fixations and in processing time for novel words than for known words. For novel words, most changes occur between the first and second exposure, that is, after a new orthographic representation has been created in memory. Beyond phonological recoding, our results suggest that visuo-attentional exploration is an intrinsic part of orthographic learning that is seldom taken into consideration by models or theoretical accounts.
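
As a loose illustration of the first mechanism above (visual attention moving over the string to optimize the gain of information on letter identity), here is a minimal sketch. It is not the BRAID-Learn implementation: it assumes a made-up acuity gradient, an independent letter-identity belief per position, and a greedy fixation rule targeting the largest expected reduction in uncertainty.

```python
import numpy as np

WORD = "planet"            # hypothetical novel word
N, A = len(WORD), 26       # positions, alphabet size

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log2(p)).sum(axis=-1)

def acuity(fix, pos, sigma=1.2):
    # Toy acuity gradient: fraction of a letter's uncertainty resolved
    # by one fixation, decreasing with eccentricity.
    return np.exp(-0.5 * ((pos - fix) / sigma) ** 2)

belief = np.full((N, A), 1.0 / A)                       # uniform prior on each letter
true_ids = np.array([ord(c) - ord("a") for c in WORD])

fixations = []
while entropy(belief).sum() > 0.5 and len(fixations) < 20:
    # Greedy policy: fixate where the acuity window covers the most remaining
    # uncertainty (a proxy for expected information gain).
    gains = [np.sum(acuity(f, np.arange(N)) * entropy(belief)) for f in range(N)]
    fix = int(np.argmax(gains))
    fixations.append(fix)
    # Perceptual update: each position moves toward the true letter in
    # proportion to the acuity it receives from this fixation.
    for pos in range(N):
        w = acuity(fix, pos)
        target = np.zeros(A)
        target[true_ids[pos]] = 1.0
        belief[pos] = (1 - w) * belief[pos] + w * target
        belief[pos] /= belief[pos].sum()

print("fixation sequence:", fixations)
print("residual uncertainty (bits):", round(float(entropy(belief).sum()), 2))
```

On a second exposure the loop would start from the sharper belief left by the first one, which is the kind of effect behind the decline in fixations and processing time described above.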


Subjects
Phonetics; Reading; Humans; Learning; Recognition, Psychology
4.
Cogn Neuropsychol ; 38(5): 319-335, 2021 07.
Article in English | MEDLINE | ID: mdl-34818988

ABSTRACT

The probability of recognizing a word depends on the position of fixation during processing. In typical readers, the resulting word-recognition curves are asymmetrical, showing a left-of-centre optimal viewing position (OVP). First, we report behavioural results from dyslexic participants who show atypical word-recognition curves, characterized by an OVP right of centre and by recognition probability being higher on the rightmost than on the leftmost letters. Second, we used BRAID, a Bayesian model of word recognition that implements gaze position, an acuity gradient, lateral interference and a visual attention component, to examine how variations in the deployment of visual attention affect the OVP curves. We show that the atypical dyslexic curves are well simulated by assuming a narrow distribution of visual attention shifted towards the left visual field. These behavioural and modelling findings are discussed in light of current theories of visual attention deficits in developmental dyslexia.
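
To make the attentional account concrete, here is a toy sketch (not BRAID itself): it combines a made-up acuity gradient with a Gaussian attentional gain over letter positions, and scores word recognition by the support received by the least visible letter. The decay rates, widths, shift and threshold are all invented for illustration.

```python
import numpy as np

N = 7                                   # word length (hypothetical)
pos = np.arange(N)

def acuity(fix):
    # Toy acuity gradient: letter visibility drops with eccentricity.
    return np.exp(-0.25 * np.abs(pos - fix))

def attn_gain(fix, sigma, shift):
    # Gaussian attentional gain over letter positions; `shift` displaces
    # the attention window relative to the fixated letter.
    return np.exp(-0.5 * ((pos - (fix + shift)) / sigma) ** 2)

def p_word(fix, sigma, shift, threshold=0.1):
    # Toy recognition rule: the word is identified only if its least
    # supported letter still receives enough acuity-weighted attention.
    support = acuity(fix) * attn_gain(fix, sigma, shift)
    return min(1.0, float(support.min()) / threshold)

typical  = [p_word(f, sigma=3.0, shift=0.0)  for f in pos]   # broad, centred attention
atypical = [p_word(f, sigma=1.2, shift=-1.5) for f in pos]   # narrow, left-shifted attention

for f in pos:
    print(f"fixation on letter {f}:  typical={typical[f]:.3f}  atypical={atypical[f]:.3f}")
```

With the broad, centred setting the curve peaks near the word centre; with the narrow, left-shifted setting the peak moves right of centre and fixations on the rightmost letters outperform fixations on the leftmost ones, which is the qualitative pattern reported for the dyslexic participants.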


Subjects
Dyslexia; Bayes Theorem; Humans; Reading; Recognition, Psychology; Visual Fields
5.
Front Syst Neurosci ; 15: 653975, 2021.
Article in English | MEDLINE | ID: mdl-34421549

ABSTRACT

Recent neurocognitive models commonly consider speech perception as a hierarchy of processes, each corresponding to a specific temporal scale of collective oscillatory activity in the cortex: 30-80 Hz gamma oscillations in charge of phonetic analysis, 4-9 Hz theta oscillations in charge of syllabic segmentation, 1-2 Hz delta oscillations processing prosodic/syntactic units, and the 15-20 Hz beta channel possibly involved in top-down predictions. Several recent neuro-computational models thus feature theta oscillations, driven by the speech acoustic envelope, to achieve syllabic parsing before lexical access. However, it is unlikely that such syllabic parsing, performed in a purely bottom-up manner from envelope variations, would be fully efficient in all situations, especially in adverse sensory conditions. We present a new probabilistic model of spoken word recognition, called COSMO-Onset, in which syllabic parsing relies on the fusion of top-down, lexical prediction of onset events with bottom-up onset detection from the acoustic envelope. We report preliminary simulations, analyzing how the model performs syllabic parsing and phone, syllable and word recognition. We show that, while purely bottom-up onset detection is sufficient for word recognition in nominal conditions, top-down prediction of syllabic onset events makes it possible to overcome challenging adverse conditions, such as when the acoustic envelope is degraded, leading to either spurious or missing onset events in the sensory signal. This provides a proposal for a possible computational, functional role of top-down predictive processes during speech recognition, consistent with recent models of neuronal oscillatory processes.
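
The fusion of top-down and bottom-up onset evidence can be illustrated with a minimal sketch. This is not the COSMO-Onset model (which works with full probability distributions over onset timing), and all numbers below are invented.

```python
import numpy as np

# Per-frame probability of a syllabic onset from two sources (toy values):
# bottom-up detection from the acoustic envelope, and top-down lexical prediction.
p_bottom_up = np.array([0.05, 0.10, 0.70, 0.15, 0.05, 0.08, 0.60, 0.10])
p_top_down  = np.array([0.05, 0.10, 0.60, 0.30, 0.05, 0.05, 0.80, 0.20])

def fuse(p_bu, p_td):
    # Naive-Bayes fusion of two independent sources of evidence about the
    # binary event "an onset occurs in this frame".
    on = p_bu * p_td
    off = (1.0 - p_bu) * (1.0 - p_td)
    return on / (on + off)

# Adverse condition: the envelope is degraded, so the bottom-up detector only
# weakly signals the second onset (frame 6); the lexical prediction keeps the
# fused probability above a 0.5 detection threshold.
p_degraded = p_bottom_up.copy()
p_degraded[6] = 0.30

print("fused, clean envelope   :", np.round(fuse(p_bottom_up, p_top_down), 2))
print("fused, degraded envelope:", np.round(fuse(p_degraded, p_top_down), 2))
```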

6.
Proc Natl Acad Sci U S A ; 117(11): 6255-6263, 2020 03 17.
Article in English | MEDLINE | ID: mdl-32123070

ABSTRACT

Auditory speech perception enables listeners to access phonological categories from speech sounds. During speech production and speech motor learning, speakers experience matched auditory and somatosensory inputs. Accordingly, access to phonetic units might also be provided by somatosensory information. The present study assessed whether humans can identify vowels using somatosensory feedback, without auditory feedback. A tongue-positioning task was used in which participants were required to achieve different tongue postures within the /e, ε, a/ articulatory range, in a procedure that was totally non-speech-like, involving distorted visual feedback of tongue shape. Tongue postures were measured using electromagnetic articulography. At the end of each tongue-positioning trial, subjects were required to whisper the corresponding vocal tract configuration with masked auditory feedback and to identify the vowel associated with the reached tongue posture. Masked auditory feedback ensured that vowel categorization was based on somatosensory rather than auditory feedback. A separate group of subjects was required to classify the whispered sounds auditorily. In addition, we modeled the link between vowel categories and tongue postures in normal speech production with a Bayesian classifier based on the tongue postures recorded from the same speakers for several repetitions of the /e, ε, a/ vowels during a separate speech production task. Overall, our results indicate that vowel categorization is possible with somatosensory feedback alone, with an accuracy similar to that of the auditory perception of whispered sounds, and in congruence with normal speech articulation, as accounted for by the Bayesian classifier.
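
The Bayesian classifier referred to above can be sketched as a simple Gaussian classifier over articulographic coordinates. The means, covariance, dimensionality and sample sizes below are invented, not the values estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical articulographic data: 2-D tongue-sensor coordinates recorded
# during repetitions of /e/, /ε/ and /a/ (invented means and spread).
means = {"e": np.array([0.0, 10.0]), "ε": np.array([2.0, 6.0]), "a": np.array([4.0, 2.0])}
cov = np.eye(2) * 1.5
train = {v: rng.multivariate_normal(m, cov, size=30) for v, m in means.items()}

# Fit one Gaussian per vowel (maximum-likelihood estimates).
params = {v: (x.mean(axis=0), np.cov(x.T)) for v, x in train.items()}

def log_gauss(x, mu, sigma):
    d = x - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (d @ np.linalg.solve(sigma, d) + logdet + len(x) * np.log(2 * np.pi))

def classify(posture):
    # Posterior over vowels with a flat prior: argmax of the class likelihood.
    scores = {v: log_gauss(posture, mu, s) for v, (mu, s) in params.items()}
    return max(scores, key=scores.get)

# A tongue posture reached at the end of a positioning trial (hypothetical).
print(classify(np.array([1.8, 6.5])))   # expected: 'ε'
```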


Subjects
Feedback, Physiological; Phonetics; Sensation/physiology; Speech Perception/physiology; Tongue/physiology; Adult; Female; Humans; Male; Palate/physiology; Speech Production Measurement; Young Adult
7.
Front Psychol ; 10: 2339, 2019.
Article in English | MEDLINE | ID: mdl-31708828

ABSTRACT

Experimental studies of speech production involving compensations for auditory and somatosensory perturbations, and adaptation after training, suggest that both types of sensory information are taken into account to plan and monitor speech production. Interestingly, individual sensory preferences have been observed in this context: subjects who compensate less for somatosensory perturbations compensate more for auditory perturbations, and vice versa. We propose to integrate this sensory preference phenomenon into a probabilistic model of speech motor planning in which speech units are characterized in both auditory and somatosensory terms. Sensory preference is implemented in the model according to two approaches. In the first approach, often used in motor control models accounting for sensory integration, sensory preference is attributed to the relative precision (i.e., inverse of the variance) of the sensory characterization of the speech motor goals associated with phonological units (phonemes, in the context of this paper). In the second, more original variant, sensory preference is implemented by modulating the sensitivity of the comparison between the predicted sensory consequences of motor commands and the sensory characterizations of the phonemes. We present simulation results for these two variants in the context of adaptation to an auditory perturbation, implemented in a 2-dimensional biomechanical model of the tongue. Simulation results show that both variants lead to qualitatively similar behavior; distinguishing them experimentally would require precise analyses of partial compensation patterns. However, the second variant implements sensory preference without changing the sensory characterizations of the phonemes. This dissociates sensory preference from the sensory characterizations of the phonemes and makes the account of sensory preference more flexible. Indeed, in the second variant the sensory characterizations of the phonemes can remain stable while sensory preference varies as a response to cognitive or attentional control. This opens new perspectives for capturing speech production variability associated with aging, disorders and speaking conditions.
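
A one-dimensional caricature can help see why the two variants behave similarly. This is not the paper's model (which uses a 2-D biomechanical tongue model); the perturbation size, variances and gains below are arbitrary.

```python
import numpy as np

PERTURB = 1.0   # auditory feedback perturbation (arbitrary units)

def plan(w_aud, w_som):
    # One-dimensional toy planning: the motor command m predicts an auditory
    # consequence (m + PERTURB, because of the perturbation) and a somatosensory
    # consequence (m). Both sensory targets are 0; w_* weight the two errors.
    m = np.linspace(-2.0, 2.0, 4001)
    cost = w_aud * (m + PERTURB) ** 2 + w_som * m ** 2
    return m[np.argmin(cost)]

# Variant 1: preference as the relative precision (inverse variance) of the
# auditory vs. somatosensory characterizations of the phoneme.
var_aud, var_som = 0.5, 1.5
m1 = plan(1 / var_aud, 1 / var_som)

# Variant 2: same characterizations, but the *comparison* with the auditory
# prediction is made more sensitive (larger gain) than the somatosensory one.
m2 = plan(3.0, 1.0)

print("compensation, variant 1:", round(-m1 / PERTURB, 2))   # ≈ 0.75
print("compensation, variant 2:", round(-m2 / PERTURB, 2))   # ≈ 0.75
```

Both settings produce the same partial compensation, which is the point made above: the two implementations are hard to distinguish without fine-grained analyses of compensation patterns.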

8.
Vision Res ; 159: 10-20, 2019 06.
Article in English | MEDLINE | ID: mdl-30904615

ABSTRACT

The word length effect in Lexical Decision (LD) has been studied in many behavioral experiments, but no computational model has yet simulated this effect. We use a new Bayesian model of visual word recognition, the BRAID model, that simulates expert readers' performance. BRAID integrates an attentional component modeled as a Gaussian probability distribution, a mechanism of lateral interference between adjacent letters, and an acuity gradient, but no phonological component. We explored the role of visual attention in the word length effect using 1,200 French words of 4 to 11 letters. A series of five simulations was carried out to assess (a) the impact of a single attentional focus versus multiple shifts of attention on the word length effect and (b) how this effect is modulated by variations in the distribution of attention. Results show that the model successfully simulates the word length effect reported for humans in the French Lexicon Project when multiple shifts of attention are allowed for longer words. The magnitude and direction of the effect can be modulated depending on whether a uniform or a narrow distribution of attention is used. The present study provides evidence that visual attention is critical for the recognition of single words and that a narrowing of the attention distribution might account for the exaggerated length effect reported in some reading disorders.


Subjects
Attention/physiology; Reading; Visual Perception/physiology; Analysis of Variance; Bayes Theorem; Humans; Recognition, Psychology/physiology; Semantics
9.
PLoS One ; 14(1): e0210302, 2019.
Article in English | MEDLINE | ID: mdl-30633745

ABSTRACT

The existence of a functional relationship between speech perception and production systems is now widely accepted, but the exact nature and role of this relationship remains quite unclear. The existence of idiosyncrasies in production and in perception sheds interesting light on the nature of the link. Indeed, a number of studies explore inter-individual variability in auditory and motor prototypes within a given language, and provide evidence for a link between both sets. In this paper, we attempt to simulate one study on coupled idiosyncrasies in the perception and production of French oral vowels, within COSMO, a Bayesian computational model of speech communication. First, we show that if the learning process in COSMO includes a communicative mechanism between a Learning Agent and a Master Agent, vowel production does display idiosyncrasies. Second, we implement within COSMO three models for speech perception that are, respectively, auditory, motor and perceptuo-motor. We show that no idiosyncrasy in perception can be obtained in the auditory model, since it is optimally tuned to the learning environment, which does not include the motor variability of the Learning Agent. On the contrary, motor and perceptuo-motor models provide perception idiosyncrasies correlated with idiosyncrasies in production. We draw conclusions about the role and importance of motor processes in speech perception, and propose a perceptuo-motor model in which auditory processing would enable optimal processing of learned sounds and motor processing would be helpful in unlearned adverse conditions.


Subjects
Models, Psychological; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation; Bayes Theorem; Communication; Computer Simulation; Humans; Learning; Machine Learning; Models, Neurological; Models, Statistical
10.
PLoS Comput Biol ; 14(1): e1005942, 2018 01.
Article in English | MEDLINE | ID: mdl-29357357

ABSTRACT

Shifts in perceptual boundaries resulting from speech motor learning induced by perturbations of the auditory feedback have been taken as evidence for the involvement of motor functions in auditory speech perception. Beyond this general statement, the precise mechanisms underlying this involvement are not yet fully understood. In this paper we propose a quantitative evaluation of several hypotheses concerning the motor and auditory updates that could result from motor learning, in the context of various assumptions about the roles of the auditory and somatosensory pathways in speech perception. This analysis was made possible by a Bayesian model that implements these hypotheses by expressing the relationships between speech production and speech perception in a joint probability distribution. The evaluation focuses on how the hypotheses can (1) predict the location of perceptual boundary shifts once the perturbation has been removed, (2) account for the magnitude of the compensation in the presence of the perturbation, and (3) describe the correlation between these two behavioral characteristics. Experimental findings about changes in speech perception following adaptation to auditory feedback perturbations serve as a reference. Simulations suggest that these findings are compatible with a framework in which motor adaptation updates both the auditory-motor internal model and the auditory characterization of the perturbed phoneme, and in which perception involves both auditory and somatosensory pathways.


Subjects
Bayes Theorem; Speech Perception/physiology; Speech; Acoustic Stimulation; Auditory Perception; Computational Biology; Computer Simulation; Feedback, Sensory; Hearing; Humans; Models, Biological; Models, Statistical; Motor Skills; Normal Distribution; Speech Acoustics
11.
Brain Lang ; 187: 19-32, 2018 12.
Article in English | MEDLINE | ID: mdl-29241588

ABSTRACT

While neurocognitive data provide clear evidence for the involvement of the motor system in speech perception, its precise role and the way motor information contributes to perceptual decisions remain unclear. In this paper, we discuss some recent experimental results in light of COSMO, a Bayesian perceptuo-motor model of speech communication. COSMO enables us to model both speech perception and speech production with probability distributions relating phonological units to sensory and motor variables. Speech perception is conceived as a sensory-motor architecture combining an auditory and a motor decoder through a Bayesian fusion process. We propose a sketch of a neuroanatomical architecture for COSMO, and we capitalize on properties of the auditory vs. motor decoders to address three neurocognitive studies from the literature. Altogether, this computational study reinforces functional arguments supporting the role of a motor decoding branch in the speech perception process.
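
The Bayesian fusion of two decoding branches has the generic form P(phoneme | sound) ∝ P_auditory(phoneme | sound) · P_motor(phoneme | sound) / P(phoneme). The sketch below only illustrates this generic form with invented numbers; it is not the COSMO implementation.

```python
import numpy as np

PHONEMES = ["b", "d", "g"]

# Toy posteriors over the phoneme for one acoustic input, one per decoding
# branch (invented numbers): the auditory decoder is sharper for familiar
# stimuli, the motor decoder flatter but still informative.
p_auditory = np.array([0.70, 0.20, 0.10])
p_motor    = np.array([0.45, 0.40, 0.15])

def fuse(p_a, p_m, prior=None):
    # Generic Bayesian fusion of two conditionally independent branches:
    # posterior proportional to P_A(phi | s) * P_M(phi | s) / P(phi).
    prior = np.full_like(p_a, 1.0 / len(p_a)) if prior is None else prior
    post = p_a * p_m / prior
    return post / post.sum()

print(dict(zip(PHONEMES, np.round(fuse(p_auditory, p_motor), 3))))
```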


Subjects
Models, Neurological; Psychomotor Performance; Speech Perception; Bayes Theorem; Cognition; Humans
12.
Psychol Rev ; 124(5): 572-602, 2017 10.
Article in English | MEDLINE | ID: mdl-28471206

ABSTRACT

There is a consensus that both auditory and motor representations intervene in the perceptual processing of speech units. However, the question of the functional role of each of these systems is seldom addressed and remains poorly understood. We capitalized on the formal framework of Bayesian Programming to develop COSMO (Communicating Objects using Sensory-Motor Operations), an integrative model that allows principled comparisons of purely motor or purely auditory implementations of a speech perception task and tests the gain in efficiency provided by their Bayesian fusion. Here, we show three main results: (a) in a set of precisely defined "perfect conditions," auditory and motor theories of speech perception are indistinguishable; (b) when a learning process that mimics speech development is introduced into COSMO, it departs from these perfect conditions; auditory recognition then becomes more efficient than motor recognition in dealing with learned stimuli, while motor recognition is more efficient in adverse conditions. We interpret this result as a general "auditory-narrowband versus motor-wideband" property; and (c) simulations of plosive-vowel syllable recognition reveal possible cues from motor recognition for the invariant specification of the place of plosive articulation in context, cues that are lacking in the auditory pathway. This provides COSMO with a second property, whereby auditory cues would be more efficient for vowel decoding and motor cues for plosive articulation decoding. These simulations yield several predictions, which are in good agreement with experimental data and suggest a natural complementarity between auditory and motor processing within a perceptuo-motor theory of speech perception.


Subjects
Acoustic Stimulation; Auditory Perception; Bayes Theorem; Speech Perception/physiology; Speech/physiology; Cues; Humans; Language
14.
Biol Cybern ; 109(6): 611-26, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26497359

ABSTRACT

The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which makes it possible to produce similar acoustic properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the intra-speaker token-to-token variability observed experimentally. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing the result with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first; from these, the Bayesian model is constructed step by step. The performance of the Bayesian model is evaluated through computer simulations and compared to that of the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.
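
The move from a cost-minimizing to a probabilistic formulation can be caricatured in one dimension: instead of returning the single command that minimizes the cost, the planner treats exp(-beta·cost) as a posterior and samples from it, so repeated plans naturally show token-to-token variability. The cost function, target and beta below are invented; this is not GEPPETO or its Bayesian reformulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(m):
    # Toy accuracy + effort cost over a 1-D motor command; target at 0.6.
    return 4.0 * (m - 0.6) ** 2 + 0.5 * m ** 2

m_grid = np.linspace(-1.0, 2.0, 3001)

# Deterministic optimal control: one command, no token-to-token variability.
m_opt = m_grid[np.argmin(cost(m_grid))]

# Bayesian reformulation: exp(-beta * cost) defines a posterior over commands;
# planning a token = drawing from this posterior, which yields structured variability.
beta = 20.0
posterior = np.exp(-beta * cost(m_grid))
posterior /= posterior.sum()
tokens = rng.choice(m_grid, size=5, p=posterior)

print("optimal command:", round(float(m_opt), 3))
print("sampled tokens :", np.round(tokens, 3))
```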


Subjects
Bayes Theorem; Motor Activity; Speech; Models, Theoretical
15.
Front Psychol ; 4: 843, 2013.
Article in English | MEDLINE | ID: mdl-24273525

ABSTRACT

This research involves a novel apparatus in which the user is presented with an illusion-inducing visual stimulus. The user perceives illusory movement that can be followed by the eye, so that smooth pursuit eye movements can be sustained in arbitrary directions and free-flow trajectories of any shape can be traced. In other words, coupled with an eye-tracking device, this apparatus enables "eye writing," which appears to be a novel object of study. We adapt a previous model of reading and writing to this context and describe a probabilistic model called the Bayesian Action-Perception for Eye On-Line model (BAP-EOL). It encodes probabilistic knowledge about isolated letter trajectories, their size, the high-frequency components of the produced trajectory, and pupil diameter. We show how Bayesian inference, in this single model, can be used to solve several tasks, such as letter recognition and novelty detection (i.e., recognizing when a presented character is not part of the learned database). We are interested in the potential use of the eye-writing apparatus by motor-impaired patients: the final task we solve by Bayesian inference is disability assessment (i.e., measuring and tracking the evolution of motor characteristics of the produced trajectories). Preliminary experimental results are presented which illustrate the method and show the feasibility of character recognition in the context of eye writing. We then show experimentally how a model of the unknown character can be used to detect trajectories that are likely to be new symbols, and how disability assessment can be performed by opportunistically observing characteristics of fine motor control as letters are being traced. Experimental analyses also help identify specificities of eye writing, as compared to handwriting, and the resulting technical challenges.
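
Letter recognition and novelty detection of the kind described above can be sketched with one Gaussian model per learned letter and a likelihood threshold for unknown symbols. The feature space, letter prototypes and threshold below are hypothetical and unrelated to the actual BAP-EOL variables.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical feature vectors summarizing eye-written trajectories
# (e.g., a few shape descriptors); one Gaussian model per learned letter.
letters = {"a": np.array([1.0, 0.2]), "b": np.array([-0.5, 1.0]), "c": np.array([0.0, -1.0])}
train = {k: rng.normal(mu, 0.15, size=(40, 2)) for k, mu in letters.items()}
models = {k: (x.mean(axis=0), x.std(axis=0) + 1e-6) for k, x in train.items()}

def log_lik(x, mu, sd):
    return float(np.sum(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd) - 0.5 * np.log(2 * np.pi)))

def recognize(x, novelty_margin=-8.0):
    # Letter recognition: argmax of the likelihood over the learned letter models.
    # Novelty detection: if even the best model explains the trace poorly,
    # report that the trace is likely a new, unlearned symbol.
    scores = {k: log_lik(x, mu, sd) for k, (mu, sd) in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > novelty_margin else "<new symbol>"

print(recognize(np.array([0.95, 0.25])))   # close to the 'a' model
print(recognize(np.array([3.0, 3.0])))     # far from every learned letter
```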

16.
Behav Brain Sci ; 36(4): 364-5, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23790121

ABSTRACT

We consider a computational model comparing the possible roles of "association" and "simulation" in phonetic decoding, demonstrating that these two routes can contain similar information in some "perfect" communication situations and highlighting situations where their decoding performance differs. We conclude that optimal decoding should involve some sort of fusion of association and simulation in the human brain.


Subjects
Comprehension/physiology; Models, Theoretical; Speech Perception/physiology; Speech/physiology; Humans
17.
PLoS One ; 6(6): e20387, 2011.
Article in English | MEDLINE | ID: mdl-21674043

ABSTRACT

In this paper, we study the collaboration between the perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation of the whole perception-action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action-Perception (BAP) model. As a model of both perception and action processes, its purpose is to study the interaction between these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement; motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments.


Subjects
Computer Simulation; Handwriting; Pattern Recognition, Visual/physiology; Bayes Theorem; Humans; Learning/physiology; Movement; Online Systems; Reading; Robotics; Semantics
18.
Acta Biotheor ; 58(2-3): 191-216, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20658175

ABSTRACT

How can an incomplete and uncertain model of the environment be used to perceive, infer, decide and act efficiently? This is the challenge that both living and artificial cognitive systems have to face. Symbolic logic is, by its nature, unable to deal with this question. The subjectivist approach to probability is an extension to logic that is designed specifically to face this challenge. In this paper, we review a number of frequently encountered cognitive issues and cast them into a common Bayesian formalism. The concepts we review are ambiguities, fusion, multimodality, conflicts, modularity, hierarchies and loops. First, each of these concepts is introduced briefly using some examples from the neuroscience, psychophysics or robotics literature. Then, the concept is formalized using a template Bayesian model. The assumptions and common features of these models, as well as their major differences, are outlined and discussed.
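
As an example of the kind of template model reviewed here, sensor fusion under conditional independence reduces to multiplying a prior by one likelihood per cue; the states, cue names and numbers below are invented for illustration.

```python
import numpy as np

# Template Bayesian fusion of two sensory cues about a common state S,
# assuming conditional independence of the cues given S (toy values).
states       = ["near", "mid", "far"]
prior        = np.array([1/3, 1/3, 1/3])
lik_visual   = np.array([0.60, 0.30, 0.10])   # P(visual cue   | S)
lik_auditory = np.array([0.20, 0.50, 0.30])   # P(auditory cue | S)

posterior = prior * lik_visual * lik_auditory
posterior /= posterior.sum()
print(dict(zip(states, np.round(posterior, 3))))
```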


Subjects
Cognition; Models, Psychological; Bayes Theorem; Decision Theory; Humans; Logic; Models, Statistical