Results 1 - 20 of 29
1.
Nature ; 591(7851): 604-609, 2021 03.
Article in English | MEDLINE | ID: mdl-33473215

ABSTRACT

In dynamic environments, subjects often integrate multiple samples of a signal and combine them to reach a categorical judgment [1]. The process of deliberation can be described by a time-varying decision variable (DV), decoded from neural population activity, that predicts a subject's upcoming decision [2]. Within single trials, however, there are large moment-to-moment fluctuations in the DV, the behavioural significance of which is unclear. Here, using real-time, neural feedback control of stimulus duration, we show that within-trial DV fluctuations, decoded from motor cortex, are tightly linked to decision state in macaques, predicting behavioural choices substantially better than the condition-averaged DV or the visual stimulus alone. Furthermore, robust changes in DV sign have the statistical regularities expected from behavioural studies of changes of mind [3]. Probing the decision process on single trials with weak stimulus pulses, we find evidence for time-varying absorbing decision bounds, enabling us to distinguish between specific models of decision making.
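The decoded decision variable described here can be illustrated with a minimal, hypothetical sketch: a linear decoder (logistic regression, chosen only for illustration; the abstract does not specify the decoder) applied to binned population spike counts yields a time-varying DV whose within-trial fluctuations and sign changes can then be examined.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data (illustrative only): spike counts per trial x time bin x neuron, and a binary choice.
rng = np.random.default_rng(0)
n_trials, n_bins, n_neurons = 200, 50, 96
X = rng.poisson(2.0, size=(n_trials, n_bins, n_neurons)).astype(float)
choice = rng.integers(0, 2, size=n_trials)

# Fit one decoder on time-averaged activity (a simplification), then read out a
# time-varying decision variable (DV) as the decoder's log-odds in every time bin.
clf = LogisticRegression(max_iter=1000).fit(X.mean(axis=1), choice)
dv = np.einsum('tbn,n->tb', X, clf.coef_[0]) + clf.intercept_[0]   # trials x bins

# Within-trial DV fluctuations; a sign flip would mark a putative change of mind.
print(dv[0])
```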


Subject(s)
Decision Making/physiology; Models, Neurological; Animals; Choice Behavior/physiology; Discrimination, Psychological; Judgment; Macaca/physiology; Motion; Motion Perception; Photic Stimulation; Time Factors
2.
J Neurosci ; 39(8): 1420-1435, 2019 02 20.
Article in English | MEDLINE | ID: mdl-30606756

ABSTRACT

Neural activity in the premotor and motor cortices shows prominent structure in the beta frequency range (13-30 Hz). Currently, the behavioral relevance of this beta band activity (BBA) is debated. The underlying source of motor BBA and how it changes as a function of cortical depth are also not completely understood. Here, we addressed these unresolved questions by investigating BBA recorded using laminar electrodes in the dorsal premotor cortex of 2 male rhesus macaques performing a visual reaction time (RT) reach discrimination task. We observed robust BBA before and after the onset of the visual stimulus but not during the arm movement. While poststimulus BBA was positively correlated with RT throughout the beta frequency range, prestimulus correlation varied by frequency. Low beta frequencies (∼12-20 Hz) were positively correlated with RT, and high beta frequencies (∼22-30 Hz) were negatively correlated with RT. Analysis and simulations suggested that these frequency-dependent correlations could emerge due to a shift in the component frequencies of the prestimulus BBA as a function of RT, such that faster RTs are accompanied by greater power in high beta frequencies. We also observed a laminar dependence of BBA, with deeper electrodes demonstrating stronger power in low beta frequencies both prestimulus and poststimulus. The heterogeneous nature of BBA and the changing relationship between BBA and RT in different task epochs may be a sign of the differential network dynamics involved in cue expectation, decision-making, motor preparation, and movement execution.

SIGNIFICANCE STATEMENT: Beta band activity (BBA) has been implicated in motor tasks, in disease states, and as a potential signal for brain-machine interfaces. However, the behavioral relevance of BBA and its laminar organization in premotor cortex have not been completely elucidated. Here we addressed these unresolved issues using simultaneous recordings from multiple cortical layers of the premotor cortex of monkeys performing a decision-making task. Our key finding is that BBA is not a monolithic signal. Instead, BBA consists of at least two frequency bands. The relationship between BBA and eventual behavior, such as reaction time, also dynamically changes depending on task epoch. We also provide further evidence that BBA is laminarly organized, with greater power in deeper electrodes for low beta frequencies.
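A minimal sketch of the kind of band-limited power versus RT analysis described above; the sampling rate, filter design, and band edges are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0  # Hz, assumed sampling rate

def band_power(lfp, lo, hi):
    """Mean power of lfp (trials x samples) in the [lo, hi] Hz band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return (filtfilt(b, a, lfp, axis=-1) ** 2).mean(axis=-1)

# Toy prestimulus LFP segments and reaction times.
rng = np.random.default_rng(1)
lfp = rng.standard_normal((300, 500))
rt = rng.uniform(0.3, 0.8, size=300)

low_beta = band_power(lfp, 12, 20)    # ~12-20 Hz
high_beta = band_power(lfp, 22, 30)   # ~22-30 Hz
print(np.corrcoef(low_beta, rt)[0, 1], np.corrcoef(high_beta, rt)[0, 1])
```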


Subject(s)
Beta Rhythm/physiology; Color Perception/physiology; Decision Making/physiology; Discrimination, Psychological/physiology; Motor Cortex/physiology; Psychomotor Performance/physiology; Reaction Time/physiology; Animals; Attention/physiology; Computer Simulation; Cues; Gamma Rhythm/physiology; Hand Strength; Macaca mulatta; Male; Models, Neurological; Models, Psychological; Photic Stimulation; Wavelet Analysis
3.
Proc Natl Acad Sci U S A ; 110(48): E4668-77, 2013 Nov 26.
Article in English | MEDLINE | ID: mdl-24218574

ABSTRACT

How low-level sensory areas help mediate the detection and discrimination advantages of integrating faces and voices is the subject of intense debate. To gain insights, we investigated the role of the auditory cortex in face/voice integration in macaque monkeys performing a vocal-detection task. Behaviorally, subjects were slower to detect vocalizations as the signal-to-noise ratio decreased, but seeing mouth movements associated with vocalizations sped up detection. Paralleling this behavioral relationship, as the signal-to-noise ratio decreased, the onsets of spiking responses were delayed and their magnitudes decreased. However, when mouth motion accompanied the vocalization, these responses were uniformly faster. Conversely, and at odds with previous assumptions regarding the neural basis of face/voice integration, changes in the magnitude of neural responses were not related consistently to audiovisual behavior. Taken together, our data reveal that facilitation of spike latency is a means by which the auditory cortex partially mediates the reaction time benefits of combining faces and voices.


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Face; Macaca fascicularis/physiology; Visual Perception/physiology; Acoustic Stimulation; Animals; Male; Photic Stimulation; Psychomotor Performance; Reaction Time
4.
bioRxiv ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-37662199

ABSTRACT

The cognitive processes supporting complex animal behavior are closely associated with ubiquitous movements responsible for our posture, facial expressions, ability to actively sample our sensory environments, and other critical processes. These movements are strongly related to neural activity across much of the brain and are often highly correlated with ongoing cognitive processes, making it challenging to dissociate the neural dynamics that support cognitive processes from those supporting related movements. In such cases, a critical issue is whether cognitive processes are separable from related movements, or if they are driven by common neural mechanisms. Here, we demonstrate how the separability of cognitive and motor processes can be assessed, and, when separable, how the neural dynamics associated with each component can be isolated. We establish a novel two-context behavioral task in mice that involves multiple cognitive processes and show that commonly observed dynamics taken to support cognitive processes are strongly contaminated by movements. When cognitive and motor components are isolated using a novel approach for subspace decomposition, we find that they exhibit distinct dynamical trajectories. Further, properly accounting for movement revealed that largely separate populations of cells encode cognitive and motor variables, in contrast to the 'mixed selectivity' often reported. Accurately isolating the dynamics associated with particular cognitive and motor processes will be essential for developing conceptual and computational models of neural circuit function and evaluating the function of the cell types of which neural circuits are composed.
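The paper's subspace decomposition is not specified in this abstract; as a loose, hypothetical illustration of the general idea (estimating the movement-related component of neural activity and examining the residual), a simple regression-based projection looks like the sketch below. It should not be read as the authors' method.

```python
import numpy as np

# Toy data: neural activity (time x neurons) and simultaneously tracked movement features.
rng = np.random.default_rng(2)
T, n_neurons, n_move = 5000, 100, 6
M = rng.standard_normal((T, n_move))                      # movement variables
X = M @ rng.standard_normal((n_move, n_neurons)) \
    + 0.5 * rng.standard_normal((T, n_neurons))           # neural activity

# Least-squares estimate of the movement-related component; the residual is a
# crude stand-in for the "cognitive" part. A generic projection, not the paper's method.
B, *_ = np.linalg.lstsq(M, X, rcond=None)
X_move = M @ B
X_residual = X - X_move
print(X_move.var(), X_residual.var())
```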

5.
STAR Protoc ; 4(2): 102320, 2023 May 22.
Article in English | MEDLINE | ID: mdl-37220000

ABSTRACT

Action potential spike widths are used to classify cell types as either excitatory or inhibitory; however, this approach obscures other differences in waveform shape useful for identifying more fine-grained cell types. Here, we present a protocol for using WaveMAP to generate nuanced average waveform clusters more closely linked to underlying cell types. We describe steps for installing WaveMAP, preprocessing data, and clustering waveforms into putative cell types. We also detail cluster evaluation for functional differences and interpretation of WaveMAP output. For complete details on the use and execution of this protocol, please refer to Lee et al. (2021) [1].
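A minimal sketch of the kind of workflow the protocol describes (non-linear embedding of average waveforms followed by graph-based clustering), assuming the umap-learn, scikit-learn, networkx, and python-louvain packages are installed. The published WaveMAP code should be consulted for the exact graph construction and parameters; this only approximates the idea.

```python
import numpy as np
import umap                                   # umap-learn
import networkx as nx
import community as community_louvain         # python-louvain
from sklearn.neighbors import kneighbors_graph

# Toy waveforms: units x samples (each row a trough-aligned, normalized mean waveform).
rng = np.random.default_rng(3)
waveforms = rng.standard_normal((500, 48))

# Non-linear dimensionality reduction with UMAP.
embedding = umap.UMAP(n_neighbors=20, min_dist=0.1, random_state=3).fit_transform(waveforms)

# k-nearest-neighbor graph on the embedding, then Louvain community detection
# to obtain putative waveform clusters (candidate cell types).
adj = kneighbors_graph(embedding, n_neighbors=15, mode='connectivity')
G = nx.from_scipy_sparse_array(adj)
partition = community_louvain.best_partition(G)
labels = np.array([partition[i] for i in range(len(waveforms))])
print(np.unique(labels))
```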

6.
bioRxiv ; 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37546748

ABSTRACT

The brain represents sensory variables in the coordinated activity of neural populations, in which tuning curves of single neurons define the geometry of the population code. Whether the same coding principle holds for dynamic cognitive variables remains unknown because internal cognitive processes unfold with a unique time course on single trials observed only in the irregular spiking of heterogeneous neural populations. Here we show the existence of such a population code for the dynamics of choice formation in the primate premotor cortex. We developed an approach to simultaneously infer population dynamics and tuning functions of single neurons to the population state. Applied to spike data recorded during decision-making, our model revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Our results reveal a common geometric principle for neural encoding of sensory and dynamic cognitive variables.

7.
Nat Commun ; 14(1): 6510, 2023 10 16.
Article in English | MEDLINE | ID: mdl-37845221

ABSTRACT

We used a dynamical systems perspective to understand decision-related neural activity, a fundamentally unresolved problem. This perspective posits that time-varying neural activity is described by a state equation with an initial condition and evolves in time by combining, at each time step, recurrent activity and inputs. We hypothesized various dynamical mechanisms of decisions, simulated them in models to derive predictions, and evaluated these predictions by examining firing rates of neurons in the dorsal premotor cortex (PMd) of monkeys performing a perceptual decision-making task. Prestimulus neural activity (i.e., the initial condition) predicted poststimulus neural trajectories, covaried with reaction time (RT) and the outcome of the previous trial, but not with choice. Poststimulus dynamics depended on both the sensory evidence and initial condition, with easier stimuli and fast initial conditions leading to the fastest choice-related dynamics. Together, these results suggest that initial conditions combine with sensory evidence to induce decision-related dynamics in PMd.
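The state equation referred to here can be written in a generic discrete-time form (a standard formulation for this kind of model, not necessarily the paper's exact notation):

```latex
x_{t+1} = f\!\left(A\,x_t + B\,u_t\right), \qquad x_0 = \text{initial condition},
```

where x_t is the population state, A captures the recurrent dynamics, u_t is the input (e.g., sensory evidence), and f is a possibly nonlinear transfer function; the prestimulus state x_0 and the input jointly determine the poststimulus trajectory.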


Subject(s)
Motor Cortex; Motor Cortex/physiology; Neurons/physiology
8.
bioRxiv ; 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37502862

ABSTRACT

Decision-making emerges from distributed computations across multiple brain areas, but it is unclear why the brain distributes the computation. In deep learning, artificial neural networks use multiple areas (or layers) to form optimal representations of task inputs. These optimal representations are sufficient to perform the task well, but minimal, so they are invariant to other, irrelevant variables. We recorded single neurons and multiunits in dorsolateral prefrontal cortex (DLPFC) and dorsal premotor cortex (PMd) in monkeys during a perceptual decision-making task. We found that while DLPFC represents task-related inputs required to compute the choice, the downstream PMd contains a minimal sufficient, or optimal, representation of the choice. To identify a mechanism for how cortex may form these optimal representations, we trained a multi-area recurrent neural network (RNN) to perform the task. Remarkably, DLPFC- and PMd-resembling representations emerged in the early and late areas of the multi-area RNN, respectively. The DLPFC-resembling area partially orthogonalized choice information and task inputs, and this choice information was preferentially propagated to downstream areas through selective alignment with inter-area connections, while remaining task information was not. Our results suggest that cortex uses multi-area computation to form minimal sufficient representations by preferential propagation of relevant information between areas.
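As a structural illustration only, a toy two-area recurrent network with dense within-area recurrence and sparse feedforward connections from the first to the second area is sketched below; the trained multi-area RNNs in the paper differ in size, training objective, and connectivity constraints.

```python
import numpy as np

rng = np.random.default_rng(4)
n_per_area, n_inputs = 100, 4

# Dense within-area recurrence; sparse feedforward connections from area 1 to area 2.
W11 = 0.1 * rng.standard_normal((n_per_area, n_per_area))
W22 = 0.1 * rng.standard_normal((n_per_area, n_per_area))
W21 = 0.05 * rng.standard_normal((n_per_area, n_per_area)) * (rng.random((n_per_area, n_per_area)) < 0.1)
W_in = rng.standard_normal((n_per_area, n_inputs))

def step(x1, x2, u, dt=0.01, tau=0.05):
    """One Euler step of a rate network: area 1 receives the input, area 2 receives area 1."""
    r1, r2 = np.tanh(x1), np.tanh(x2)
    x1 = x1 + dt / tau * (-x1 + W11 @ r1 + W_in @ u)
    x2 = x2 + dt / tau * (-x2 + W22 @ r2 + W21 @ r1)
    return x1, x2

x1, x2 = np.zeros(n_per_area), np.zeros(n_per_area)
u = rng.standard_normal(n_inputs)
for _ in range(100):
    x1, x2 = step(x1, x2, u)
print(x1[:3], x2[:3])
```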

9.
PLoS Comput Biol ; 7(9): e1002165, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21998576

ABSTRACT

Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
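A toy simulation contrasting the two model classes named above. The parameters are arbitrary, and the "superposition" here is reduced to summing two drift terms in one accumulator, which only illustrates why summed drives beat a race of independent detectors; the paper's superposition model operates on measured activity patterns.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_steps, dt, thresh, noise = 2000, 3000, 0.001, 1.0, 1.5
drift_a, drift_v = 1.2, 1.0   # arbitrary unisensory drift rates

def detection_times(drift):
    """First threshold-crossing times for a noisy accumulator (NaN if never crossed)."""
    steps = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    crossed = np.cumsum(steps, axis=1) >= thresh
    t = crossed.argmax(axis=1).astype(float) * dt
    t[~crossed.any(axis=1)] = np.nan
    return t

# Race model: audiovisual detection time is the faster of two independent unisensory races.
rt_race = np.fmin(detection_times(drift_a), detection_times(drift_v))

# Superposition-style model: auditory and visual drives add within a single accumulator.
rt_super = detection_times(drift_a + drift_v)

print(np.nanmean(rt_race), np.nanmean(rt_super))
```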


Subject(s)
Macaca fascicularis/physiology; Macaca fascicularis/psychology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Animals; Computational Biology; Face; Female; Humans; Male; Models, Neurological; Models, Psychological; Photic Stimulation; Reaction Time/physiology; Species Specificity; Vocalization, Animal/physiology
10.
Neuron ; 53(2): 162-4, 2007 Jan 18.
Article in English | MEDLINE | ID: mdl-17224399

ABSTRACT

Most, if not all, of the neocortex is multisensory, but the mechanisms by which different cortical areas - association versus sensory, for instance - integrate multisensory inputs are not known. The study by Lakatos et al. reveals that, in the primary auditory cortex, the phase of neural oscillations is reset by somatosensory inputs, and subsequent auditory inputs are enhanced or suppressed, depending on their timing relative to the oscillatory cycle.


Subject(s)
Auditory Cortex/physiology; Neocortex/physiology; Sensation/physiology; Somatosensory Cortex/physiology; Afferent Pathways/physiology; Animals; Oscillometry
11.
J Neurosci ; 30(42): 13919-31, 2010 Oct 20.
Article in English | MEDLINE | ID: mdl-20962214

ABSTRACT

The efficient cortical encoding of natural scenes is essential for guiding adaptive behavior. Because natural scenes and network activity in cortical circuits share similar temporal scales, it is necessary to understand how the temporal structure of natural scenes influences network dynamics in cortical circuits and spiking output. We examined the relationship between the structure of natural acoustic scenes and its impact on network activity [as indexed by local field potentials (LFPs)] and spiking responses in macaque primary auditory cortex. Natural auditory scenes led to a change in the power of the LFP in the 2-9 and 16-30 Hz frequency ranges relative to the ongoing activity. In contrast, ongoing rhythmic activity in the 9-16 Hz range was essentially unaffected by the natural scene. Phase coherence analysis showed that scene-related changes in LFP power were at least partially attributable to the locking of the LFP and spiking activity to the temporal structure in the scene, with locking extending up to 25 Hz for some scenes and cortical sites. Consistent with distributed place and temporal coding schemes, a key predictor of phase locking and power changes was the overlap between the spectral selectivity of a cortical site and the spectral structure of the scene. Finally, during the processing of natural acoustic scenes, spikes were locked to LFP phase at frequencies up to 30 Hz. These results are consistent with the idea that the cortical representation of natural scenes emerges from an interaction between network activity and stimulus dynamics.
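One common way to quantify the spike-to-LFP-phase locking mentioned here is the mean resultant length of band-limited LFP phases sampled at spike times; the sketch below assumes this metric and a generic band-pass filter, not the paper's exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0  # Hz, assumed sampling rate

def spike_lfp_locking(lfp, spike_times, lo, hi):
    """Mean resultant length of LFP phase at spike times within the [lo, hi] Hz band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))
    idx = (np.asarray(spike_times) * fs).astype(int)
    idx = idx[idx < len(lfp)]
    return np.abs(np.mean(np.exp(1j * phase[idx])))

# Toy data: 10 s of LFP and random spike times.
rng = np.random.default_rng(6)
lfp = rng.standard_normal(10000)
spikes = np.sort(rng.uniform(0, 10, size=200))
print(spike_lfp_locking(lfp, spikes, 16, 30))
```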


Subject(s)
Auditory Cortex/physiology; Environment; Acoustic Stimulation; Animals; Data Interpretation, Statistical; Ecosystem; Evoked Potentials, Auditory/physiology; Macaca fascicularis; Photic Stimulation
12.
Curr Biol ; 18(13): 963-8, 2008 Jul 08.
Article in English | MEDLINE | ID: mdl-18585039

ABSTRACT

The ability to integrate information across multiple sensory systems offers several behavioral advantages, from quicker reaction times and more accurate responses to better detection and more robust learning. At the neural level, multisensory integration requires large-scale interactions between different brain regions--the convergence of information from separate sensory modalities, represented by distinct neuronal populations. The interactions between these neuronal populations must be fast and flexible, so that behaviorally relevant signals belonging to the same object or event can be immediately integrated and integration of unrelated signals can be prevented. Looming signals are a particular class of signals that are behaviorally relevant for animals and that occur in both the auditory and visual domain. These signals indicate the rapid approach of objects and provide highly salient warning cues about impending impact. We show here that multisensory integration of auditory and visual looming signals may be mediated by functional interactions between auditory cortex and the superior temporal sulcus, two areas involved in integrating behaviorally relevant auditory-visual signals. Audiovisual looming signals elicited increased gamma-band coherence between these areas, relative to unimodal or receding-motion signals. This suggests that the neocortex uses fast, flexible intercortical interactions to mediate multisensory integration.


Subject(s)
Auditory Cortex/physiology; Auditory Perception; Temporal Lobe/physiology; Visual Perception; Animals; Macaca mulatta; Male
13.
Elife ; 10, 2021 08 06.
Article in English | MEDLINE | ID: mdl-34355695

ABSTRACT

Cortical circuits are thought to contain a large number of cell types that coordinate to produce behavior. Current in vivo methods rely on clustering of specified features of extracellular waveforms to identify putative cell types, but these capture only a small amount of variation. Here, we develop a new method (WaveMAP) that combines non-linear dimensionality reduction with graph clustering to identify putative cell types. We apply WaveMAP to extracellular waveforms recorded from dorsal premotor cortex of macaque monkeys performing a decision-making task. Using WaveMAP, we robustly establish eight waveform clusters and show that these clusters recapitulate previously identified narrow- and broad-spiking types while revealing previously unknown diversity within these subtypes. The eight clusters exhibited distinct laminar distributions, characteristic firing rate patterns, and decision-related dynamics. Such insights were weaker when using feature-based approaches. WaveMAP therefore provides a more nuanced understanding of the dynamics of cell types in cortical circuits.


Subject(s)
Motor Cortex; Neural Pathways/physiology; Animals; Decision Making/physiology; Macaca mulatta; Machine Learning; Male; Motor Cortex/cytology; Motor Cortex/physiology; Neurons/physiology; Nonlinear Dynamics; Software; Task Performance and Analysis
14.
Eur J Neurosci ; 31(10): 1807-17, 2010 May.
Article in English | MEDLINE | ID: mdl-20584185

ABSTRACT

Audiovisual speech has a stereotypical rhythm that is between 2 and 7 Hz, and deviations from this frequency range in either modality reduce intelligibility. Understanding how audiovisual speech evolved requires investigating the origins of this rhythmic structure. One hypothesis is that the rhythm of speech evolved through the modification of some pre-existing cyclical jaw movements in a primate ancestor. We tested this hypothesis by investigating the temporal structure of lipsmacks and teeth-grinds of macaque monkeys and the neural responses to these facial gestures in the superior temporal sulcus (STS), a region implicated in the processing of audiovisual communication signals in both humans and monkeys. We found that both lipsmacks and teeth-grinds have consistent but distinct peak frequencies and that both fall well within the 2-7 Hz range of mouth movements associated with audiovisual speech. Single neurons and local field potentials of the STS of monkeys readily responded to such facial rhythms, but also responded just as robustly to yawns, a nonrhythmic but dynamic facial expression. All expressions elicited enhanced power in the delta (0-3 Hz), theta (3-8 Hz), alpha (8-14 Hz) and gamma (>60 Hz) frequency ranges, and suppressed power in the beta (20-40 Hz) range. Thus, STS is sensitive to, but not selective for, rhythmic facial gestures. Taken together, these data provide support for the idea that audiovisual speech evolved (at least in part) from the rhythmic facial gestures of an ancestral primate and that the STS was sensitive to and thus 'prepared' for the advent of rhythmic audiovisual communication.


Subject(s)
Biological Evolution; Facial Expression; Speech/physiology; Temporal Lobe/physiology; Acoustic Stimulation; Animals; Data Interpretation, Statistical; Electroencephalography; Evoked Potentials/physiology; Gestures; Macaca mulatta; Magnetic Resonance Imaging; Male; Mouth/physiology; Movement/physiology; Neurons/physiology; Photic Stimulation
15.
PLoS Comput Biol ; 5(7): e1000436, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19609344

ABSTRACT

Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
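The mouth-area/envelope relationship described above is, in essence, a cross-correlation measurement. The sketch below uses synthetic signals and an assumed 100 Hz frame rate purely to show how the visual lead of mouth motion over the voice could be estimated.

```python
import numpy as np

fs = 100.0        # Hz; assumed common rate for the mouth-area and envelope signals
lag_true = 0.15   # s; synthetic visual lead of mouth motion over the voice
rng = np.random.default_rng(7)

# Synthetic "mouth-opening area": slowly varying smoothed noise.
n = 2000
kernel = np.hanning(25)
mouth_area = np.convolve(rng.standard_normal(n + 200), kernel / kernel.sum(), mode='same')[:n]

# Synthetic "acoustic envelope": the same signal delayed by lag_true, plus noise.
audio_env = np.roll(mouth_area, int(lag_true * fs)) + 0.05 * rng.standard_normal(n)

# Cross-correlate; the lag at the peak estimates how far the mouth leads the voice.
ma = mouth_area - mouth_area.mean()
ae = audio_env - audio_env.mean()
xcorr = np.correlate(ae, ma, mode='full')
lags = (np.arange(xcorr.size) - (n - 1)) / fs
print(lags[np.argmax(xcorr)])   # ~ +0.15 s on this toy data
```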


Subject(s)
Image Processing, Computer-Assisted/methods; Mouth/physiology; Speech/physiology; Voice/physiology; Databases, Factual; Fourier Analysis; Humans; Models, Statistical; Signal Processing, Computer-Assisted; Speech Perception/physiology; Time Factors; Visual Perception/physiology
16.
J Neurosci ; 28(17): 4457-69, 2008 Apr 23.
Article in English | MEDLINE | ID: mdl-18434524

ABSTRACT

The existence of multiple nodes in the cortical network that integrate faces and voices suggests that they may be interacting and influencing each other during communication. To test the hypothesis that multisensory responses in auditory cortex are influenced by visual inputs from the superior temporal sulcus (STS), an association area, we recorded local field potentials and single neurons from both structures concurrently in monkeys. The functional interactions between the auditory cortex and the STS, as measured by spectral analyses, increased in strength during presentations of dynamic faces and voices relative to either communication signal alone. These interactions were not solely modulations of response strength, because the phase relationships were significantly less variable in the multisensory condition as well. A similar analysis of functional interactions within the auditory cortex revealed no similar interactions as a function of stimulus condition, nor did a control condition in which the dynamic face was replaced with a dynamic disk mimicking mouth movements. Single neuron data revealed that these intercortical interactions were reflected in the spiking output of auditory cortex and that such spiking output was coordinated with oscillations in the STS. The vast majority of single neurons that were responsive to voices showed integrative responses when faces, but not control stimuli, were presented in conjunction. Our data suggest that the integration of faces and voices is mediated at least in part by neuronal cooperation between auditory cortex and the STS and that interactions between these structures are a fast and efficient way of dealing with the multisensory communication signals.


Subject(s)
Auditory Cortex/physiology; Face; Speech Perception/physiology; Temporal Lobe/physiology; Visual Perception/physiology; Acoustic Stimulation/methods; Animals; Macaca mulatta; Male; Photic Stimulation/methods; Vocalization, Animal/physiology
17.
J Neurosci Methods ; 328: 108432, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31586868

ABSTRACT

BACKGROUND: Decision-making is the process of choosing and performing actions in response to sensory cues to achieve behavioral goals. Many mathematical models have been developed to describe the choice behavior and response time (RT) distributions of observers performing decision-making tasks. However, relatively few researchers use these models because doing so demands expertise in various numerical, statistical, and software techniques.
NEW METHOD: We present a toolbox - Choices and Response Times in R, or ChaRTr - that provides the user the ability to implement and test a wide variety of decision-making models, ranging from classic to modern versions of the diffusion decision model, to models with urgency signals or collapsing boundaries.
RESULTS: In three different case studies, we demonstrate how ChaRTr can be used to effortlessly discriminate between multiple models of decision-making behavior. We also provide guidance on how to extend the toolbox to incorporate future developments in decision-making models.
COMPARISON WITH EXISTING METHOD(S): Existing software packages have surmounted some of the numerical issues but have often focused on the classical decision-making model, the diffusion decision model. Recent models that posit roles for urgency, time-varying decision thresholds, noise in various aspects of the decision-formation process, or low-pass filtering of sensory evidence have proven challenging to incorporate in a coherent software framework that permits quantitative evaluation among these competing classes of decision-making models.
CONCLUSION: ChaRTr can be used to make insightful statements about the cognitive processes underlying observed decision-making behavior and, ultimately, to gain deeper insights into decision mechanisms.
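ChaRTr itself is an R package, so the sketch below is not its interface; it is a plain-Python illustration, under arbitrary parameter values, of the basic diffusion decision model that such toolboxes simulate and fit.

```python
import numpy as np

def simulate_ddm(drift, bound, ndt, n_trials=500, dt=0.001, noise=1.0, seed=0):
    """Simulate choices and response times from a basic two-bound diffusion decision model.
    Illustrative only; not the ChaRTr (R) interface."""
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x > 0)          # upper-bound vs lower-bound choice
        rts[i] = t + ndt                 # add non-decision time
    return choices, rts

choices, rts = simulate_ddm(drift=1.0, bound=1.0, ndt=0.3)
print(choices.mean(), rts.mean())
```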


Subject(s)
Decision Making/physiology; Models, Theoretical; Neurosciences/methods; Reaction Time/physiology; Task Performance and Analysis; Choice Behavior/physiology; Humans; Neurosciences/instrumentation; Software
18.
J Math Psychol ; 91: 159-175, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31404455

ABSTRACT

In the redundant signals task, two target stimuli are associated with the same response. If both targets are presented together, redundancy gains are observed, as compared with single-target presentation. Different models explain these redundancy gains, including race and coactivation models (e.g., the Wiener diffusion superposition model, Schwarz, 1994, Journal of Mathematical Psychology, and the Ornstein-Uhlenbeck diffusion superposition model, Diederich, 1995, Journal of Mathematical Psychology). In the present study, two monkeys performed a simple detection task with auditory, visual and audiovisual stimuli of different intensities and onset asynchronies. In its basic form, a Wiener diffusion superposition model provided only a poor description of the observed data, especially of the detection rate (i.e., accuracy or hit rate) for low stimulus intensity. We expanded the model in two ways, by (A) adding a temporal deadline, that is, restricting the evidence accumulation process to a stopping time, and (B) adding a second "nogo" barrier representing target absence. We present closed-form solutions for the mean absorption times and absorption probabilities for a Wiener diffusion process with a drift towards a single barrier in the presence of a temporal deadline (A), and numerically improved solutions for the two-barrier model (B). The best description of the data was obtained from the deadline model, which substantially outperformed the two-barrier approach.
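For orientation, the single-barrier building block referred to in (A) is the first-passage problem of a Wiener process with drift μ and diffusion coefficient σ² started at 0; its probability of being absorbed at a barrier a > 0 before a deadline T is the standard inverse-Gaussian first-passage result (the paper's closed-form expressions for mean absorption times under the deadline are more involved):

```latex
P(\tau_a \le T) \;=\; \Phi\!\left(\frac{\mu T - a}{\sigma\sqrt{T}}\right)
\;+\; e^{2\mu a/\sigma^{2}}\,\Phi\!\left(\frac{-\mu T - a}{\sigma\sqrt{T}}\right),
```

where Φ denotes the standard normal cumulative distribution function.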

19.
Nat Commun ; 10(1): 1793, 2019 04 17.
Article in English | MEDLINE | ID: mdl-30996222

ABSTRACT

How deliberation on sensory cues and action selection interact in decision-related brain areas is still not well understood. Here, monkeys reached to one of two targets, whose colors alternated randomly between trials, by discriminating the dominant color of a checkerboard cue composed of different numbers of squares of the two target colors in different trials. In a Targets First task the colored targets appeared first, followed by the checkerboard; in a Checkerboard First task, this order was reversed. After both cues appeared in both tasks, responses of dorsal premotor cortex (PMd) units covaried with action choices, strength of evidence for action choices, and reaction times (RTs), hallmarks of decision-related activity. However, very few units were modulated by checkerboard color composition or the color of the chosen target, even during the checkerboard deliberation epoch of the Checkerboard First task. These findings implicate PMd in the action-selection but not the perceptual components of the decision-making process in these tasks.


Subject(s)
Behavior, Animal/physiology; Choice Behavior/physiology; Macaca mulatta/physiology; Motor Cortex/physiology; Psychomotor Performance/physiology; Animals; Cues; Male; Neurons/physiology; Photic Stimulation; Reaction Time