ABSTRACT
About 30 years ago, the Dynamical Hypothesis instigated a variety of insights and transformations in cognitive science. One of them was the simple observation that, quite unlike trial-based tasks in a laboratory, natural ecologically valid behaviors almost never have context-free starting points. Instead, they produce lengthy time series data that can be recorded with dense-sampling measures, such as heart rate, eye movements, and EEG. That emphasis on studying the temporal dynamics of extended behaviors may have been the trigger that led to a rethinking of what a "representation" is, and then of what a "cognitive agent" is. This most recent and perhaps most revolutionary transformation is the idea that a cognitive agent need not be a singular physiological organism. Perhaps a group of organisms, such as several people working on a joint task, can temporarily function as one cognitive agent, at least while they are working adaptively and successfully.
ABSTRACT
While the cognitivist school of thought holds that the mind is analogous to a computer, performing logical operations over internal representations, the tradition of ecological psychology contends that organisms can directly "resonate" to information for action and perception without the need for a representational intermediary. The concept of resonance has played an important role in ecological psychology, but it remains a metaphor. Supplying a mechanistic account of resonance requires a non-representational account of central nervous system (CNS) dynamics. Towards this, we present a series of simple models in which a reservoir network with homeostatic nodes is used to control a simple agent embedded in an environment. This network spontaneously produces behaviors that are adaptive in each context, including (1) visually tracking a moving object, (2) performing substantially above chance in the arcade game Pong, and (3) avoiding walls while controlling a mobile agent. Upon analyzing the dynamics of the networks, we find that behavioral stability can be maintained without the formation of stable or recurring patterns of network activity that could be identified as neural representations. These results may represent a useful step towards a mechanistic grounding of resonance and a view of the CNS that is compatible with ecological psychology.
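The control scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' model or code: the network size, adaptation rate, and set point are all assumed for the example, and the agent's environment is reduced to a scalar sensory signal driving a few input nodes.

```python
import numpy as np

# Minimal sketch of a reservoir with homeostatic nodes (illustrative
# parameters; not the authors' model or code).
rng = np.random.default_rng(0)
N = 50
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # fixed random recurrent weights
x = rng.uniform(-0.1, 0.1, N)                # node activations
bias = np.zeros(N)                           # per-node homeostatic bias
SET_POINT, ETA = 0.0, 0.05                   # homeostatic target and rate

def step(inp):
    """One update: recurrent drive plus input, then bias adaptation."""
    global x, bias
    x = np.tanh(W @ x + inp + bias)
    bias += ETA * (SET_POINT - x)  # each node nudges itself toward its set point
    return x

# Drive a few 'sensory' nodes with a slow sinusoid, as a stand-in for
# the agent's changing environment.
inp = np.zeros(N)
for t in range(500):
    inp[:5] = np.sin(0.1 * t)
    step(inp)
```

Reading out motor commands from such a reservoir (e.g., as a weighted sum of `x`) is one common way to turn ongoing network dynamics into behavior without training an explicit internal representation.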
ABSTRACT
According to accounts of neural reuse and embodied cognition, higher-level cognitive abilities recycle evolutionarily ancient mechanisms for perception and action. Here, building on these accounts, we investigate whether creativity builds on our capacity to forage in space ("creativity as strategic foraging"). We report systematic connections between specific forms of creative thinking (divergent and convergent) and corresponding strategies for searching in space. U.S. American adults completed two tasks designed to measure creativity. Before each creativity trial, participants completed an unrelated search of a city map. Between subjects, we manipulated the search pattern, with some participants seeking multiple, dispersed spatial locations and others repeatedly converging on the same location. Participants who searched divergently in space were better at divergent thinking but worse at convergent thinking; this pattern reversed for participants who had converged repeatedly on a single location. These results demonstrate a targeted link between foraging and creativity, thus advancing our understanding of the origins and mechanisms of high-level cognition.
Subjects
Creativity, Humans, Adult, Male, Female, Young Adult, Cognition, Thinking/physiology, Space Perception/physiology
ABSTRACT
Informed by theories of embodied cognition, in the present study, we designed a novel priming technique to investigate the impact of spatial diversity and script direction on searching through concepts in both English and Persian (i.e., two languages with opposite script directions). First, participants connected a target dot either to one other dot (linear condition) or to multiple other dots (diverse condition) and either from left to right (rightward condition) or from right to left (leftward condition) on a computer touchscreen using their dominant hand's forefinger. Following the spatial prime, they were asked to generate as many words as possible using two-letter cues (e.g., "lo" → "love," "lobster") in 20 s. We hypothesized that greater spatial diversity, and consistency with script direction, should facilitate conceptual search and result in a higher number of word productions. In both languages, word production performance was superior for the diverse prime relative to the linear prime, suggesting that searching through lexical memory is facilitated by spatial diversity. Although some effects were observed for the directionality of the spatial prime, they were not consistent across experiments and did not correlate with script direction. This pattern of results suggests that a spatial prime that promotes diverse paths can improve word retrieval from lexical memory and lends empirical support to the embodied cognition framework, in which spatial relations play a crucial role in the conceptual system.
Subjects
Cognition, Cues, Humans, Reaction Time, Language, Semantics
ABSTRACT
Despite its many twists and turns, the arc of cognitive science generally bends toward progress, thanks to its interdisciplinary nature. By glancing at the last few decades of experimental and computational advances, it can be argued that-far from failing to converge on a shared set of conceptual assumptions-the field is indeed making steady consensual progress toward what can broadly be referred to as interactive frameworks. This inclination is apparent in the subfields of psycholinguistics, visual perception, embodied cognition, extended cognition, neural networks, dynamical systems theory, and more. This pictorial essay briefly documents this steady progress both from a bird's eye view and from the trenches. The conclusion is one of optimism that cognitive science is getting there, albeit slowly and arduously, like any good science should.
Subjects
Cognition, Visual Perception, Humans, Psycholinguistics, Neural Networks, Computer, Cognitive Science
ABSTRACT
While the notion of the brain as a prediction machine has been extremely influential and productive in cognitive science, there are competing accounts of how best to model and understand the predictive capabilities of brains. One prominent framework is of a "Bayesian brain" that explicitly generates predictions and uses resultant errors to guide adaptation. We suggest that the prediction-generation component of this framework may involve little more than a pattern completion process. We first describe pattern completion in the domain of visual perception, highlighting its temporal extension, and show how this can entail a form of prediction in time. Next, we describe the forward momentum of entrained dynamical systems as a model for the emergence of predictive processing in non-predictive systems. Then, we apply this reasoning to the domain of language, where explicitly predictive models are perhaps most popular. Here, we demonstrate how a connectionist model, TRACE, exhibits hallmarks of predictive processing without any representations of predictions or errors. Finally, we present a novel neural network model, inspired by reservoir computing models, that is entirely unsupervised and memoryless, but nonetheless exhibits prediction-like behavior in its pursuit of homeostasis. These explorations demonstrate that brain-like systems can get prediction "for free," without the need to posit formal logical representations with Bayesian probabilities or an inference machine that holds them in working memory.
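The pattern-completion process invoked above can be made concrete with a textbook example. The following tiny Hopfield-style network (an illustration only; this is not TRACE, nor the reservoir model described in the abstract) restores a stored pattern from a corrupted probe:

```python
import numpy as np

# Textbook Hopfield-style pattern completion: illustrative only.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # stored +/-1 pattern
W = np.outer(pattern, pattern).astype(float)        # Hebbian weights
np.fill_diagonal(W, 0)                              # no self-connections

probe = pattern.copy()
probe[:3] = 1                       # corrupt part of the pattern
state = probe.astype(float)
for _ in range(5):                  # settle toward the stored attractor
    state = np.sign(W @ state)

print(bool((state == pattern).all()))  # the stored pattern is completed
```

If the network's state unfolds in time, completing the later portion of a temporally extended pattern from its earlier portion is, behaviorally, a prediction, with no explicit representation of a prediction or an error anywhere in the system.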
Subjects
Cognition/physiology, Pattern Recognition, Physiological/physiology, Problem Solving/physiology, Bayes Theorem, Brain/physiology, Humans, Memory, Short-Term/physiology, Models, Neurological, Probability, Visual Perception/physiology
ABSTRACT
Heyes' book is an important contribution that rightly integrates cognitive development and cultural evolution. However, understanding the cultural evolution of cognitive gadgets requires a deeper appreciation of complexity, feedback, and self-organization than her book exhibits.
ABSTRACT
A few decades ago, cognitive psychologists generally took for granted that the reason we perceive our visual environment as one contiguous stable whole (i.e., space constancy) is because we have an internal mental representation of the visual environment as one contiguous stable whole. They supposed that the non-contiguous visual images that are gathered during the brief fixations that intervene between pairs of saccadic eye movements (a few times every second) are somehow stitched together to construct this contiguous internal mental representation. Determining how exactly the brain does this proved to be a vexing puzzle for vision researchers. Bruce Bridgeman's research career is the story of how meticulous psychophysical experimentation, and a genius theoretical insight, eventually solved this puzzle. The reason that it was so difficult for researchers to figure out how the brain stitches together these visual snapshots into one accurately-rendered mental representation of the visual environment is that it doesn't do that. Bruce discovered that the brain couldn't do that if it tried. The neural information that codes for saccade amplitude and direction is simply too inaccurate to determine exact relative locations of each fixation. Rather than the perception of space constancy being the result of an internal representation, Bruce determined that it is the result of a brain that simply assumes that external space remains constant, and it rarely checks to verify this assumption. In our extension of Bridgeman's formulation, we suggest that objects in the world often serve as their own representations, and cognitive operations can be performed on those objects themselves, rather than on mental representations of them.
Subjects
Brain/physiology, Cognition/physiology, Space Perception/physiology, Visual Perception/physiology, Humans, Object Attachment, Psychophysics, Saccades
ABSTRACT
A number of studies have suggested that perception of actions is accompanied by motor simulation of those actions. To further explore this proposal, we applied transcranial magnetic stimulation (TMS) to the left primary motor cortex during the observation of handwritten and typed language stimuli, including words and non-word consonant clusters. We recorded motor-evoked potentials (MEPs) from the right first dorsal interosseous (FDI) muscle to measure cortico-spinal excitability during written text perception. We observed a facilitation in MEPs for handwritten stimuli, regardless of whether the stimuli were words or non-words, suggesting potential motor simulation during observation. We did not observe a similar facilitation for the typed stimuli, suggesting that motor simulation was not occurring during observation of typed text. By demonstrating potential simulation of written language text during observation, these findings add to a growing literature suggesting that the motor system plays a strong role in the perception of written language.
Subjects
Motor Cortex/physiology, Pattern Recognition, Visual/physiology, Pyramidal Tracts/physiology, Writing, Adolescent, Adult, Evoked Potentials, Motor, Female, Hand/innervation, Hand/physiology, Handwriting, Humans, Male, Muscle, Skeletal/physiology, Transcranial Magnetic Stimulation, Young Adult
ABSTRACT
The main question that Firestone & Scholl (F&S) pose is whether "what and how we see is functionally independent from what and how we think, know, desire, act, and so forth" (sect. 2, para. 1). We synthesize a collection of concerns from an interdisciplinary set of coauthors regarding F&S's assumptions and appeals to intuition, resulting in their treatment of visual perception as context-free.
Subjects
Intuition, Visual Perception, Humans, Vision, Ocular
ABSTRACT
Although relational reasoning has been described as a process at the heart of human cognition, the exact character of relational representations remains an open debate. Symbolic-connectionist models of relational cognition suggest that relations are structured representations, but that they are ultimately grounded in feature sets; thus, they predict that activating those features can affect the trajectory of the relational reasoning process. The present work points out that such models do not necessarily specify what those features are, and endeavors to show that spatial information is likely among them. To this end, it presents 2 experiments that used visuospatial priming to affect the course of relational reasoning. The first is a relational category-learning experiment in which this type of priming was shown to affect which spatial relation was learned when multiple relations were possible. The second used crossmapping analogy problems, paired with this same type of priming, to show that visuospatial cues can make participants more likely to map analogs based on relational roles, even with short presentation times.
Subjects
Problem Solving, Repetition Priming, Humans, Learning, Pattern Recognition, Visual, Photic Stimulation, Psychological Tests
ABSTRACT
Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees, the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes, the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time, we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
Subjects
Decision Making, Eye Movements, Morals, Adult, Bias, Choice Behavior, Cognition, Eye, Female, Fixation, Ocular, Humans, Male, Probability, Reproducibility of Results, Vision, Ocular, Young Adult
ABSTRACT
Learning of feature-based categories is known to interact with feature-variation in a variety of ways, depending on the type of variation (e.g., Markman and Maddox, 2003). However, relational categories are distinct from feature-based categories in that they determine membership based on structural similarities. As a result, the way that they interact with feature variation is unclear. This paper explores both experimental and computational data and argues that, despite its reliance on structural factors, relational category-learning should still be affected by the type of feature variation present during the learning process. It specifically suggests that within-feature and across-feature variation should produce different learning trajectories due to a difference in representational cost. The paper then uses the DORA model (Doumas et al., 2008) to discuss how this account might function in a cognitive system before presenting an experiment aimed at testing this account. The experiment was a relational category-learning task and was run on human participants and then simulated in DORA. Both sets of results indicated that learning a relational category from a training set with a lower amount of variation is easier, but that learning from a training set with increased within-feature variation is significantly less challenging than learning from a set with increased across-feature variation. These results support the claim that, like feature-based category-learning, relational category-learning is sensitive to the type of feature variation in the training set.
ABSTRACT
When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes throughout the brain and body that operate over a wide range of timescales, and these processes combine to be expressed as a general source of 1/f noise. To test this explanation, the present study investigated the coupling vs. independence of 1/f noise in timing deviations, key-press durations, pupil dilations, and heartbeat intervals while tapping to an audiovisual metronome. All four dependent measures exhibited clear 1/f noise, regardless of whether tapping was synchronized or syncopated. 1/f spectra for timing deviations were found to match those for key-press durations on an individual basis, and 1/f spectra for pupil dilations matched those in heartbeat intervals. Results indicate a complex, multiscale relationship among 1/f noises arising from common sources, such as timing functions vs. autonomic nervous system (ANS) functions. Results also provide further evidence against the default hypothesis that 1/f noise in human timing is just the additive combination of processes throughout the brain and body. Our findings are better accommodated by theories of complexity matching that begin to formalize multiscale coordination as a foundation of human behavior.
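The 1/f characterization itself is standard: fit a line to log power versus log frequency and read the exponent off the slope. A minimal sketch, with assumed simplifications (no detrending or windowing, unlike a full analysis pipeline):

```python
import numpy as np

# Estimate the spectral exponent of a time series by regressing
# log power on log frequency (simplified; no windowing or detrending).
def spectral_slope(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x))[1:]        # drop the DC bin
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.normal(size=8192)                  # slope near 0
brown = np.cumsum(rng.normal(size=8192))       # random walk: slope near -2
print(spectral_slope(white), spectral_slope(brown))
```

1/f noise proper sits between these two benchmarks, with a slope near -1, which is what distinguishes long-range correlated timing fluctuations from both white noise and a random walk.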
ABSTRACT
Recent studies have shown that, instead of a dichotomy between parallel and serial search strategies, in many instances we see a combination of both search strategies utilized. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional serial-parallel descriptions to a continuum from 'efficient' to 'inefficient' search. One of the findings, consistent with this blurring of the serial-parallel distinction, is that concurrent spoken linguistic input influences the efficiency of visual search. In our first experiment we replicate those findings using a between-subjects design. Next, we utilize a localist attractor network to simulate the results from the first experiment, and then employ the network to make quantitative predictions about the influence of subtle timing differences of real-time language processing on visual search. These model predictions are then tested and confirmed in our second experiment. The results provide further evidence toward understanding linguistically mediated influences on real-time visual search processing and support an interactive processing account of visual search and language comprehension.
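A localist attractor dynamic of the kind invoked above can be sketched as normalized competition among item nodes, with linguistic support arriving partway through settling. Everything here (salience values, onsets, threshold, the target index) is assumed for illustration; this is not the paper's model:

```python
import numpy as np

# Hypothetical localist attractor sketch: display items compete via
# multiplicative support integration and normalization; spoken input
# arriving at linguistic_onset adds support for the target (index 0).
def settle(visual, linguistic_onset, thresh=0.9, max_steps=500):
    visual = np.asarray(visual, dtype=float)
    act = visual / visual.sum()
    ling = np.zeros_like(visual)
    for t in range(1, max_steps + 1):
        if t >= linguistic_onset:
            ling[0] = 0.2                  # linguistic support for the target
        act = act * (visual + ling)        # integrate support multiplicatively
        act = act / act.sum()              # normalization implements competition
        if act.max() > thresh:             # settled into an attractor
            return t, int(act.argmax())
    return max_steps, int(act.argmax())

display = [0.28, 0.26, 0.24, 0.22]           # bottom-up salience per item
print(settle(display, linguistic_onset=5))   # early speech: settles sooner
print(settle(display, linguistic_onset=20))  # later speech: settles later
```

Earlier linguistic onset shortens settling time, which is the kind of quantitative timing prediction such a network can be used to generate and test against search latencies.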
Subjects
Linguistics, Pattern Recognition, Visual, Speech Perception, Time Perception, Adult, Color Perception, Computer Simulation, Female, Humans, Male, Orientation, Psychophysics, Reaction Time, Young Adult
ABSTRACT
Eyes move to gather visual information for the purpose of guiding behavior. This guidance takes the form of perceptual-motor interactions on short timescales for behaviors like locomotion and hand-eye coordination. More complex behaviors require perceptual-motor interactions on longer timescales mediated by memory, such as navigation, or designing and building artifacts. In the present study, the task of sketching images of natural scenes from memory was used to examine and compare perceptual-motor interactions on shorter and longer timescales. Eye and pen trajectories were found to be coordinated in time on shorter timescales during drawing, and also on longer timescales spanning study and drawing periods. The latter type of coordination was found by developing a purely spatial analysis that yielded measures of similarity between images, eye trajectories, and pen trajectories. These results challenge the notion that coordination only unfolds on short timescales. Rather, the task of drawing from memory evokes perceptual-motor encodings of visual images that preserve coarse-grained spatial information over relatively long timescales as well.
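One simple way to realize a purely spatial similarity measure of the kind described, binning each trajectory into an occupancy grid and correlating grids, is sketched below. This is an assumed implementation for illustration, not the authors' analysis; the grid size and synthetic trajectories are invented:

```python
import numpy as np

# Assumed, minimal version of a purely spatial comparison: bin each
# trajectory's (x, y) samples into an occupancy grid, then correlate grids.
def occupancy(points, bins=8):
    grid, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    return grid.ravel() / grid.sum()

def spatial_similarity(a, b, bins=8):
    ga, gb = occupancy(a, bins), occupancy(b, bins)
    return float(np.corrcoef(ga, gb)[0, 1])

rng = np.random.default_rng(2)
eye = rng.uniform(0, 0.5, size=(300, 2))         # samples in one quadrant
pen = eye + rng.normal(0, 0.02, size=eye.shape)  # noisy retrace of same region
other = rng.uniform(0.5, 1.0, size=(300, 2))     # opposite quadrant
print(spatial_similarity(eye, pen), spatial_similarity(eye, other))
```

Because the measure discards temporal order entirely, high eye-pen similarity across separate study and drawing periods would reflect coarse-grained spatial information preserved over long timescales, as the abstract describes.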
Subjects
Eye, Hand/physiology, Memory/physiology, Movement/physiology, Ocular Physiological Phenomena, Psychomotor Performance/physiology, Adolescent, Adult, Female, Humans, Male
ABSTRACT
Spatial formats of information are ubiquitous in the cognitive and neural sciences. There are neural uses of space in the topographic maps found throughout cortex. There are metaphorical uses of space in cognitive linguistics, physical uses of space in ecological psychology, and mathematical uses of space in dynamical systems theory. These varied informational uses of space each provide a single contiguous medium through which cognitive processes can be shared across subsystems. As we further develop our understanding of how the human mind processes information in real time, the continuous sharing and cascading of information patterns between brain areas can be extended to a sharing and cascading of information between multiple brains and bodies to produce coordinated behavior. Essentially, the way you and the people around you negotiate your shared space affects the way you think, because space is a fundamental part of how you think. It is via space that the mental processes of one mind can form an intersection with the mental processes of another mind.