Results 1 - 20 of 47

1.
Behav Brain Res ; 471: 115096, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38849007

ABSTRACT

BACKGROUND: Theoretical models and behavioural studies indicate faster approach behaviour for high-calorie food (approach bias) among healthy participants. A previous study with Virtual Reality (VR) and online motion-capture quantified this approach bias towards food and non-food cues in a controlled VR environment with hand movements. The aim of this study was to test the specificity of a manual approach bias for high-calorie food in grasp movements compared to low-calorie food and neutral objects of different complexity, namely, simple balls and geometrically more complex office tools. METHODS: In a VR setting, healthy participants (N = 27) repeatedly grasped or pushed high-calorie food, low-calorie food, balls and office tools in randomized order with 30 item repetitions. All objects were rated for valence and arousal. RESULTS: High-calorie food was less attractive and more arousing in subjective ratings than low-calorie food and neutral objects. Movement onset was faster for high-calorie food in push-trials, but overall push responses were comparable. In contrast, responses to high-calorie food relative to low-calorie food and to control objects were faster in grasp trials for later stages of interaction (grasp and collect). Non-parametric tests confirmed an approach bias for high-calorie food. CONCLUSION: A behavioural bias for food was specific to high-calorie food objects. The results confirm the presence of bottom-up advantages in motor-cognitive behaviour for high-calorie food in a non-clinical population. More systematic variations of object fidelity, as well as studies in clinical populations, remain to be conducted. This study confirms the utility of VR in assessing approach behaviour by exploring manual interactions in a controlled environment.
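
As a rough illustration of the kind of paired non-parametric comparison reported here, the following sketch runs a Wilcoxon signed-rank test on per-participant median grasp-onset times. All data, names, and effect sizes are invented; this is not the study's analysis code.

```python
# Hedged sketch: paired non-parametric test of an approach bias, comparing
# invented per-participant median response times (ms) for high-calorie food
# versus control objects.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_participants = 27
# Simulated medians: grasp responses ~15 ms faster for high-calorie food.
rt_high_calorie = rng.normal(620, 40, n_participants)
rt_control = rt_high_calorie + rng.normal(15, 10, n_participants)

stat, p = wilcoxon(rt_high_calorie, rt_control)  # signed-rank test on pairs
bias_ms = np.mean(rt_control - rt_high_calorie)
print(f"mean approach bias: {bias_ms:.1f} ms, W = {stat:.1f}, p = {p:.4f}")
```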

2.
Psychol Res ; 88(2): 307-337, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37847268

ABSTRACT

Accounting for how the human mind represents the internal and external world is a crucial feature of many theories of human cognition. Central to this question is the distinction between modal as opposed to amodal representational formats. It has often been assumed that one but not both of these two types of representations underlie processing in specific domains of cognition (e.g., perception, mental imagery, and language). However, in this paper, we suggest that both formats play a major role in most cognitive domains. We believe that a comprehensive theory of cognition requires a solid understanding of these representational formats and their functional roles within and across different domains of cognition, the developmental trajectory of these representational formats, and their role in dysfunctional behavior. Here we sketch such an overarching perspective that brings together research from diverse subdisciplines of psychology on modal and amodal representational formats so as to unravel their functional principles and their interactions.


Subject(s)
Cognition; Humans
3.
Open Mind (Camb) ; 7: 111-129, 2023.
Article in English | MEDLINE | ID: mdl-37416076

ABSTRACT

Human behavioral choices can reveal intrinsic and extrinsic decision-influencing factors. We investigate the inference of choice priors in situations of referential ambiguity. In particular, we use the scenario of signaling games and investigate to what extent study participants profit from actively engaging in the task. Previous work has revealed that speakers are able to infer listeners' choice priors upon observing ambiguity resolution. However, it was also shown that only a small group of participants was able to strategically construct ambiguous situations to create learning opportunities. This paper sets out to address how prior inference unfolds in more complex learning scenarios. In Experiment 1, we examine whether participants accumulate evidence about inferred choice priors across a series of four consecutive trials. Despite the intuitive simplicity of the task, information integration turns out to be only partially successful. Integration errors result from a variety of sources, including transitivity failure and recency bias. In Experiment 2, we investigate how the ability to actively construct learning scenarios affects the success of prior inference and whether the iterative settings improve the ability to choose utterances strategically. The results suggest that full task engagement and explicit access to the reasoning pipeline facilitate the invocation of optimal utterance choices as well as the accurate inference of listeners' choice priors.
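
The trial-by-trial evidence accumulation examined in Experiment 1 can be pictured as sequential Bayesian updating, where each trial's posterior over the listener's preference becomes the next trial's prior. The toy model below is a hedged sketch; the likelihood function, object set, and all numbers are invented for illustration, not the paper's model.

```python
# Invented toy: a Bayesian observer updates a belief about which of three
# objects a listener prefers, trial by trial, from the listener's choices
# under ambiguous utterances.
import numpy as np

objects = ["A", "B", "C"]
belief = np.array([1/3, 1/3, 1/3])       # prior over "listener prefers X"

def likelihood(picked, ambiguous_set, preferred):
    # P(listener picks object | utterance ambiguous over set, prefers X).
    if picked not in ambiguous_set:
        return 1e-9
    if preferred in ambiguous_set:        # preference resolves the ambiguity
        return 0.9 if picked == preferred else 0.1 / (len(ambiguous_set) - 1)
    return 1.0 / len(ambiguous_set)       # otherwise pick uniformly

trials = [({"A", "B"}, "A"), ({"A", "C"}, "A"), ({"B", "C"}, "B"), ({"A", "B"}, "A")]
for ambiguous_set, picked in trials:
    lik = np.array([likelihood(picked, ambiguous_set, x) for x in objects])
    belief = belief * lik
    belief /= belief.sum()                # posterior becomes the next prior
print(dict(zip(objects, belief.round(3))))
```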

4.
Front Rehabil Sci ; 3: 806114, 2022.
Article in English | MEDLINE | ID: mdl-36189032

ABSTRACT

Currently, there is neither a standardized mode for the documentation of phantom sensations and phantom limb pain, nor for their visualization as perceived by patients. We have therefore created a tool that allows for both, as well as for the quantification of the patient's visible and invisible body image. A first version provides the principal functions: (1) adapting a 3D avatar for self-identification of the patient; (2) modeling the shape of the phantom limb; (3) adjusting the position of the phantom limb; (4) drawing pain and cramps directly onto the avatar; and (5) quantifying their respective intensities. Our tool (C.A.L.A.) was evaluated with 33 occupational therapists, physiotherapists, and other medical staff. Participants were presented with two cases in which the appearance and the position of the phantom had to be modeled and pain and cramps had to be drawn. The usability of the software was evaluated using the System Usability Scale and its functional range was evaluated using a self-developed questionnaire and semi-structured interview. In addition, our tool was evaluated on 22 patients with limb amputations. For each patient, body image as well as phantom sensation and pain were modeled to evaluate the software's functional scope. The accuracy of the created body image was evaluated using a self-developed questionnaire and semi-structured interview. Additionally, pain sensation was assessed using the SF-McGill Pain Questionnaire. The System Usability Scale reached a score of 81 out of 100, indicating high usability. Observing the participants, though, revealed several operational difficulties. While the provided functions were considered useful by most participants, the semi-structured interviews revealed the need for an improved pain documentation component. In conclusion, our tool allows for an accurate visualization of phantom limbs and phantom limb sensations. It can be used as both a descriptive and quantitative documentation tool for analyzing and monitoring phantom limbs. Thus, it can help to bridge the gap between the therapist's conception and the patient's perception. Based on the collected requirements, an improved version with extended functionality will be developed.
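
For context on the usability figure, the System Usability Scale has a fixed, well-known scoring rule: ten items rated 1 to 5, odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is scaled by 2.5 onto a 0-100 range. Below is a minimal reference implementation; the example ratings are invented, not the study's data.

```python
# Standard System Usability Scale (SUS) scoring.
def sus_score(ratings):
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    odd = sum(r - 1 for r in ratings[0::2])   # items 1,3,5,7,9
    even = sum(5 - r for r in ratings[1::2])  # items 2,4,6,8,10
    return 2.5 * (odd + even)                 # scale to 0-100

# Hypothetical respondent; scores above ~68 are conventionally "above average".
print(sus_score([5, 2, 4, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```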

5.
Front Neurorobot ; 16: 881673, 2022.
Article in English | MEDLINE | ID: mdl-36035589

ABSTRACT

Flexible, goal-directed behavior is a fundamental aspect of human life. Based on the free energy minimization principle, the theory of active inference formalizes the generation of such behavior from a computational neuroscience perspective. Building on this theory, we introduce an output-probabilistic, temporally predictive, modular artificial neural network architecture, which processes sensorimotor information, infers behavior-relevant aspects of its world, and invokes highly flexible, goal-directed behavior. We show that our architecture, which is trained end-to-end to minimize an approximation of free energy, develops latent states that can be interpreted as affordance maps. That is, the emerging latent states signal which actions lead to which effects dependent on the local context. In combination with active inference, we show that flexible, goal-directed behavior can be invoked, incorporating the emerging affordance maps. As a result, our simulated agent flexibly steers through continuous spaces, avoids collisions with obstacles, and prefers pathways that lead to the goal with high certainty. Additionally, we show that the learned agent is highly suitable for zero-shot generalization across environments: after training the agent in a handful of fixed environments with obstacles and other terrains affecting its behavior, it performs similarly well in procedurally generated environments containing different amounts of obstacles and terrains of various sizes at different locations.
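
To make the control loop concrete, here is a heavily simplified, hypothetical stand-in for the described scheme: a sampled-rollout model-predictive planner over a toy 2D forward model, used as a crude proxy for expected-free-energy minimization. The dynamics, cost, and all parameters are invented; the paper's architecture is a learned, modular recurrent network, not this hand-written model.

```python
# Hedged sketch: goal-directed control via model-predictive rollouts.
import numpy as np

rng = np.random.default_rng(1)

def forward_model(state, action):
    # Stand-in for a learned temporally predictive network: simple
    # additive point dynamics with small noise.
    return state + 0.1 * action + rng.normal(0, 0.01, 2)

def plan(state, goal, horizon=5, n_candidates=64):
    # Sample candidate action sequences, roll each out through the model,
    # and keep the sequence whose terminal state is closest to the goal
    # (a crude proxy for minimizing expected free energy).
    best_seq, best_cost = None, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, (horizon, 2))
        s = state.copy()
        for a in seq:
            s = forward_model(s, a)
        cost = np.linalg.norm(s - goal)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

state, goal = np.zeros(2), np.array([1.0, 0.5])
for step in range(20):
    action = plan(state, goal)[0]   # receding-horizon control
    state = forward_model(state, action)
print("final distance to goal:", np.linalg.norm(state - goal))
```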

6.
Front Psychol ; 13: 867328, 2022.
Article in English | MEDLINE | ID: mdl-35846607

ABSTRACT

Pursuing a precise, focused train of thought requires cognitive effort. Even more effort is necessary when more alternatives need to be considered or when the imagined situation becomes more complex. The cognitive resources available to us limit the cognitive effort we can spend. In line with previous work, an information-theoretic, Bayesian brain approach to cognitive effort is pursued: to solve tasks in our environment, our brain needs to invest information, that is, negative entropy, to impose structure, or focus, away from a uniform structure or other task-incompatible, latent structures. To obtain a more complete formalization of cognitive effort, a resourceful event-predictive inference model (REPI) is introduced, which offers computational and algorithmic explanations about the latent structure of our generative models, the active inference dynamics that unfold within, and the cognitive effort required to steer the dynamics, for example, to purposefully process sensory signals, decide on responses, and invoke their execution. REPI suggests that we invest cognitive resources to infer preparatory priors, activate responses, and anticipate action consequences. Due to our limited resources, though, the inference dynamics are prone to task-irrelevant distractions. For example, the task-irrelevant side of the imperative stimulus causes the Simon effect and, for similar reasons, we fail to optimally switch between tasks. An actual model implementation simulates such task interactions and offers first estimates of the involved cognitive effort. The approach may be studied further and promises to offer deeper explanations of why we quickly become exhausted from multitasking, how we are influenced by irrelevant stimulus modalities, why we exhibit magnitude interference, and, during social interactions, why we often fail to take the perspective of others into account.
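
One common way to cash out "investing negative entropy to impose focus" is the divergence of a focused task distribution q from a uniform distribution u over N alternatives. This is a hedged, generic formalization, not necessarily the paper's exact definition of REPI's effort measure:

```latex
\[
  C(q) \;=\; D_{\mathrm{KL}}\!\left(q \,\|\, u\right)
        \;=\; \sum_{i=1}^{N} q_i \log \frac{q_i}{1/N}
        \;=\; \log N - H(q),
  \qquad H(q) = -\sum_{i=1}^{N} q_i \log q_i ,
\]
```

so effort grows with the entropy reduction achieved relative to an unfocused, uniform state.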

7.
Cognition ; 218: 104862, 2022 01.
Article in English | MEDLINE | ID: mdl-34634532

ABSTRACT

Bayesian accounts of social cognition successfully model the human ability to infer goals and intentions of others on the basis of their behavior. In this paper, we extend this paradigm to the analysis of ambiguity resolution during brief communicative exchanges. In a reference game experimental setup, we observed that participants were able to infer listeners' preferences when analyzing their choice of object given referential ambiguity. Moreover, a subset of speakers was able to strategically choose ambiguous over unambiguous utterances in an epistemic manner, although a different group preferred unambiguous utterances. We show that a modified Rational Speech Act model closely approximates the data on both the inference of listeners' preferences and their utterance choices. In particular, the observed preference inference is modeled by Bayesian inference, which computes posteriors over hypothetical, behavior-influencing inner states of conversation partners (such as their knowledge, beliefs, intentions, or preferences) after observing their utterance-interpretation behavior. Utterance choice is modeled by anticipating social information gain, which we formalize as the expected knowledge change when choosing a particular utterance and watching the listener's response. Taken together, our results demonstrate how social conversations allow us to (sometimes strategically) learn about each other when observing interpretations of ambiguous utterances.
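
For readers unfamiliar with the framework, the vanilla Rational Speech Act recursion that the paper's modified model builds on can be written in a few lines. The toy lexicon, preference prior, and rationality parameter below are invented for illustration; the paper's model additionally scores utterances by expected social information gain.

```python
# Vanilla Rational Speech Act recursion on an invented toy lexicon.
import numpy as np

# Rows: utterances; columns: objects. 1 = utterance literally applies.
lexicon = np.array([[1., 1., 0.],    # "ambiguous" fits objects A and B
                    [1., 0., 0.],    # "unambiguous-A"
                    [0., 0., 1.]])   # "unambiguous-C"
prior = np.array([0.6, 0.3, 0.1])    # listener's object preferences
alpha = 4.0                          # speaker rationality

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

L0 = normalize(lexicon * prior, axis=1)                       # literal listener P(obj|utt)
S1 = normalize(np.exp(alpha * np.log(L0.T + 1e-12)), axis=1)  # pragmatic speaker P(utt|obj)
L1 = normalize(S1.T * prior, axis=1)                          # pragmatic listener P(obj|utt)

print("pragmatic listener:\n", L1.round(3))
```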


Subject(s)
Comprehension; Speech Perception; Bayes Theorem; Humans; Learning; Speech
8.
Cogn Sci ; 45(8): e13016, 2021 08.
Article in English | MEDLINE | ID: mdl-34379329

ABSTRACT

From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative event-predictive and event-boundary-predictive models, and, meanwhile, (ii) choose actions with the objective of minimizing predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active inference principle chooses actions by aiming at minimizing expected future uncertainty. We train our system on multiple object-manipulation events. As a result, the generation of goal-anticipatory gaze shifts emerges while learning about object manipulations: the model starts fixating the inferred goal already at the start of an observed event after having sampled some experience with possible events and when a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) that is performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.


Subject(s)
Goals; Motivation; Hand; Humans; Infant; Learning; Movement
9.
Front Psychol ; 12: 695550, 2021.
Article in English | MEDLINE | ID: mdl-34447336

ABSTRACT

During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) compared to unfamiliar (e.g., mechanical claw). These findings implicate a crucial role of the developing agentive self for infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action-event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool-use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (least experienced age group), but did occur in the 18-month-olds (most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction also for observed tool-use actions.

10.
Top Cogn Sci ; 13(1): 10-24, 2021 01.
Article in English | MEDLINE | ID: mdl-33274596

ABSTRACT

Our minds navigate a continuous stream of sensorimotor experiences, selectively compressing them into events. Event-predictive encodings and processing abilities have evolved because they mirror interactions between agents and objects, and the pursuit or avoidance of critical interactions lies at the heart of survival and reproduction. However, it appears that these abilities have evolved not only to pursue life-enhancing events and to avoid threatening events, but also to distinguish food sources, to produce and to use tools, to cooperate, and to communicate. They may have even set the stage for the formation of larger societies and the development of cultural identities. Research on event-predictive cognition investigates how events and conceptualizations thereof are learned, structured, and processed dynamically. It suggests that event-predictive encodings and processes optimally mediate between sensorimotor processes and language. On the one hand, they enable us to perceive and control physical interactions with our world in a highly adaptive, versatile, goal-directed manner. On the other hand, they allow us to coordinate complex social interactions and, in particular, to comprehend and produce language. Event-predictive learning segments sensorimotor experiences into event-predictive encodings. Once first encodings are formed, the mind learns progressively higher-order compositional structures, which allow reflecting on the past, reasoning, and planning on multiple levels of abstraction. We conclude that human conceptual thought may be grounded in the principles of event-predictive cognition, which constitute its root.


Subject(s)
Cognition; Learning; Humans; Language; Motivation
11.
Vision (Basel) ; 3(2)2019 Apr 18.
Article in English | MEDLINE | ID: mdl-31735816

ABSTRACT

According to theories of anticipatory behavior control, actions are initiated by predicting their sensory outcomes. From the perspective of event-predictive cognition and active inference, predictive processes activate currently desired events and event boundaries, as well as the expected sensorimotor mappings necessary to realize them, dependent on the involved predicted uncertainties before actual motor control unfolds. Accordingly, we asked whether peripersonal hand space is remapped in an uncertainty-anticipating manner while grasping and placing bottles in a virtual reality (VR) setup. To investigate, we combined the crossmodal congruency paradigm with virtual object interactions in two experiments. As expected, an anticipatory crossmodal congruency effect (aCCE) at the future finger position on the bottle was detected. Moreover, a manipulation of the visuo-motor mapping of the participants' virtual hand while approaching the bottle selectively reduced the aCCE at movement onset. Our results support theories of event-predictive, anticipatory behavior control and active inference, showing that expected uncertainties in movement control indeed influence anticipatory stimulus processing.

12.
Neural Netw ; 117: 135-144, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31158645

ABSTRACT

We introduce REPRISE, a REtrospective and PRospective Inference SchEme, which learns temporal event-predictive models of dynamical systems. REPRISE infers the unobservable contextual event state and accompanying temporal predictive models that best explain the recently encountered sensorimotor experiences retrospectively. Meanwhile, it optimizes upcoming motor activities prospectively in a goal-directed manner. Here, REPRISE is implemented by a recurrent neural network (RNN), which learns temporal forward models of the sensorimotor contingencies generated by different simulated dynamic vehicles. The RNN is augmented with contextual neurons, which enable the encoding of distinct, but related, sensorimotor dynamics as compact event codes. We show that REPRISE concurrently learns to separate and approximate the encountered sensorimotor dynamics: it analyzes sensorimotor error signals, adapting both internal contextual neural activities and connection weight values. Moreover, we show that REPRISE can exploit the learned model to induce goal-directed, model-predictive control, that is, approximate active inference: given a goal state, the system imagines a motor command sequence, optimizing it prospectively to minimize the distance to the goal. The RNN activities thus continuously imagine the upcoming future and reflect on the recent past, optimizing the predictive model, the hidden neural state activities, and the upcoming motor activities. As a result, event-predictive neural encodings develop, which allow the invocation of highly effective and adaptive goal-directed sensorimotor control.
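
The retrospective/prospective split can be illustrated on a drastically reduced toy problem. In this invented sketch (not the paper's RNN), the hidden "context" is a scalar motor gain inferred by gradient descent on prediction errors, after which the adapted model is inverted to reach a goal:

```python
# Invented miniature of REPRISE's two inference directions.
import numpy as np

rng = np.random.default_rng(2)
true_gain = 1.5                         # hidden property of the "vehicle"

def world(x, u):                        # actual sensorimotor contingency
    return x + true_gain * u

def model(x, u, c):                     # internal forward model, context c
    return x + c * u

x, c, lr = 0.0, 0.5, 0.1
# Retrospective inference: explain observed outcomes of random commands
# by gradient descent on the squared prediction error with respect to c.
for _ in range(100):
    u = rng.uniform(-1.0, 1.0)
    x_next = world(x, u)
    err = model(x, u, c) - x_next       # equals (c - true_gain) * u
    c -= lr * 2.0 * err * u
    x = x_next

# Prospective inference: with the context inferred, invert the model to
# select bounded commands that drive the state to a goal.
goal = 3.0
for _ in range(30):
    u = float(np.clip((goal - x) / c, -1.0, 1.0))
    x = world(x, u)

print(f"inferred context gain: {c:.2f} (true {true_gain}); final state {x:.2f} (goal {goal})")
```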


Subject(s)
Machine Learning; Models, Neurological; Humans; Learning
13.
J Exp Psychol Learn Mem Cogn ; 45(7): 1205-1223, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30047770

ABSTRACT

Most studies on spatial memory refer to the horizontal plane, leaving an open question as to whether findings generalize to vertical spaces where gravity and the visual upright of our surrounding space are salient orientation cues. In three experiments, we examined which reference frame is used to organize memory for vertical locations: the one based on the body vertical, the visual-room vertical, or the direction of gravity. Participants judged interobject spatial relationships learned from a vertical layout in a virtual room. During learning and testing, we varied the orientation of the participant's body (upright vs. lying sideways) and the visually presented room relative to gravity (e.g., rotated by 90° along the frontal plane). Across all experiments, participants made quicker or more accurate judgments when the room was oriented in the same way as during learning with respect to their body, irrespective of their orientations relative to gravity. This suggests that participants employed an egocentric body-based reference frame for representing vertical object locations. Our study also revealed an effect of body-gravity alignment during testing. Participants recalled spatial relations more accurately when upright, regardless of the body and visual-room orientation during learning. This finding is consistent with a hypothesis of selection conflict between different reference frames. Overall, our results suggest that a body-based reference frame is preferred over salient allocentric reference frames in memory for vertical locations perceived from a single view. Further, memory of vertical space seems to be tuned to work best in the default upright body orientation.


Subject(s)
Mental Recall/physiology; Posture/physiology; Space Perception/physiology; Spatial Memory/physiology; Visual Perception/physiology; Adult; Female; Humans; Male; Young Adult
14.
Front Psychol ; 9: 622, 2018.
Article in English | MEDLINE | ID: mdl-29765347

ABSTRACT

Spatial, physical, and semantic magnitude dimensions can influence action decisions in human cognitive processing and interact with each other. For example, in the spatial-numerical association of response codes (SNARC) effect, semantic numerical magnitude facilitates left-hand or right-hand responding dependent on the small or large magnitude of number symbols. SNARC-like interactions of numerical magnitudes with the radial spatial dimension (depth) were postulated early on. Usually, the SNARC effect in any direction is investigated using fronto-parallel computer monitors for the presentation of stimuli. In such 2D setups, however, the metaphorical and literal interpretations of the radial depth axis with seemingly close/far stimuli or responses are not distinct. Hence, it is difficult to draw clear conclusions with respect to the contribution of different spatial mappings to the SNARC effect. In order to disentangle the different mappings in a natural way, we studied parametrical interactions between semantic numerical magnitude, horizontal directional responses, and perceptual distance by means of stereoscopic depth in an immersive virtual reality (VR). Two VR experiments show horizontal SNARC effects across all spatial displacements in traditional latency measures and kinematic response parameters. No indications of a SNARC effect along the depth axis, as it would be predicted by a direct-mapping account, were observed, but the results show a non-linear relationship between horizontal SNARC slopes and physical distance. The steepest SNARC slopes were observed for digits presented close to the hands. We conclude that spatial-numerical processing is susceptible to effector-based processes but relatively resilient to task-irrelevant variations of radial-spatial magnitudes.
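
The "SNARC slopes" referred to here come from a standard individual-regression analysis: regress each participant's right-minus-left response-time difference onto number magnitude, then test the slopes against zero. A hedged sketch on invented data:

```python
# Individual-slopes analysis of a horizontal SNARC effect on invented data:
# dRT = RT(right hand) - RT(left hand) regressed onto digit magnitude;
# negative slopes mean faster right-hand responses to larger numbers.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(3)
digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
n_participants = 24

slopes = []
for _ in range(n_participants):
    true_slope = rng.normal(-8, 4)                 # ms per magnitude unit
    drt = true_slope * digits + rng.normal(0, 20, digits.size)
    slopes.append(np.polyfit(digits, drt, 1)[0])   # linear slope per person

t, p = ttest_1samp(slopes, 0.0)                    # slopes below zero overall?
print(f"mean slope = {np.mean(slopes):.1f} ms/unit, t = {t:.2f}, p = {p:.4f}")
```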

15.
Cognition ; 176: 65-73, 2018 07.
Article in English | MEDLINE | ID: mdl-29549760

ABSTRACT

It has been suggested that our mind anticipates the future to act in a goal-directed, event-oriented manner. Here we asked whether peripersonal hand space, that is, the space surrounding one's hands, is dynamically and adaptively mapped into the future while planning and executing a goal-directed object manipulation. We thus combined the crossmodal congruency paradigm (CCP), which has been used to study selective interactions between vision and touch within peripersonal space, with an object manipulation task. We expected crossmodal interactions in anticipation of the upcoming, currently planned object grasp, which varied trial-by-trial depending on the object's orientation. Our results confirm that visual distractors close to the future finger positions selectively influence vibrotactile perceptions. Moreover, vibrotactile stimulation influences gaze behavior in the light of the anticipated grasp. Both influences partly become apparent even before the hand starts to move, soon after visual target object onset. These results thus support theories of event encodings and anticipatory behavior, showing that peripersonal hand space is flexibly remapped onto a future, currently actively inferred hand position.
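
The crossmodal congruency effect (CCE) underlying such analyses is conventionally the response-time cost of visual distractors at positions incongruent with the vibrotactile stimulus. A minimal sketch on invented per-participant means, not the study's data:

```python
# Minimal computation of a crossmodal congruency effect on invented data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
n = 24
# Simulated per-participant mean RTs (ms) for vibrotactile judgments.
rt_congruent = rng.normal(540, 35, n)                   # distractor at stimulated position
rt_incongruent = rt_congruent + rng.normal(40, 15, n)   # distractor elsewhere

cce = rt_incongruent - rt_congruent                     # per-participant effect
t, p = ttest_rel(rt_incongruent, rt_congruent)
print(f"mean CCE = {cce.mean():.1f} ms, t({n-1}) = {t:.2f}, p = {p:.4f}")
```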


Subject(s)
Anticipation, Psychological; Goals; Personal Space; Psychomotor Performance; Space Perception; Adult; Female; Fixation, Ocular; Hand; Humans; Male; Models, Psychological; Photic Stimulation; Physical Stimulation; Time Factors; Touch Perception; Visual Perception; Young Adult
16.
Mem Cognit ; 46(1): 158-171, 2018 01.
Article in English | MEDLINE | ID: mdl-28875474

ABSTRACT

Previous behavioral and neurophysiological research has shown better memory for horizontal than for vertical locations. In these studies, participants navigated toward these locations. In the present study we investigated whether the orientation of the spatial plane per se was responsible for this difference. We thus had participants learn locations visually from a single perspective and retrieve them from multiple viewpoints. In three experiments, participants studied colored tags on a horizontally or vertically oriented board within a virtual room and recalled these locations with different layout orientations (Exp. 1) or from different room-based perspectives (Exps. 2 and 3). All experiments revealed evidence for equal recall performance in horizontal and vertical memory. In addition, the patterns for recall from different test orientations were rather similar. Consequently, our results suggest that memory is qualitatively similar for both vertical and horizontal two-dimensional locations, given that these locations are learned from a single viewpoint. Thus, prior differences in spatial memory may have originated from the structure of the space or the fact that participants navigated through it. Additionally, the strong performance advantages for perspective shifts (Exps. 2 and 3) relative to layout rotations (Exp. 1) suggest that configurational judgments are not only based on memory of the relations between target objects, but also encompass the relations between target objects and the surrounding room, for example, in the form of a memorized view.


Subject(s)
Mental Recall/physiology; Space Perception/physiology; Spatial Learning/physiology; Spatial Memory/physiology; Adult; Female; Humans; Male; Middle Aged; Young Adult
17.
Cogn Process ; 18(3): 211-228, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28349249

ABSTRACT

According to embodied cognition, bodily interactions with our environment shape the perception and representation of our body and the surrounding space, that is, peripersonal space. To investigate the adaptive nature of these spatial representations, we introduced a multisensory conflict between vision and proprioception in an immersive virtual reality. During individual bimanual interaction trials, we gradually shifted the visual hand representation. As a result, participants unknowingly shifted their actual hands to compensate for the visual shift. We then measured the adaptation to the invoked multisensory conflict by means of a self-localization and an external localization task. While effects of the conflict were observed in both tasks, the effects systematically interacted with the type of localization task and the available visual information while performing the localization task (i.e., the visibility of the virtual hands). The results imply that the localization of one's own hands is based on a multisensory integration process, which is modulated by the saliency of the currently most relevant sensory modality and the involved frame of reference. Moreover, the results suggest that our brain strives for consistency between its body and spatial estimates, thereby adapting multiple, related frames of reference, and the spatial estimates within, due to a sensory conflict in one of them.


Subject(s)
Brain/physiology; Proprioception/physiology; Spatial Processing/physiology; Visual Perception/physiology; Adolescent; Adult; Female; Hand; Humans; Male; Personal Space; Space Perception; Virtual Reality; Vision, Ocular; Young Adult
18.
Top Cogn Sci ; 9(2): 343-373, 2017 04.
Article in English | MEDLINE | ID: mdl-28176449

ABSTRACT

In line with Allen Newell's challenge to develop complete cognitive architectures, and motivated by a recent proposal for a unifying subsymbolic computational theory of cognition, we introduce the cognitive control architecture SEMLINCS. SEMLINCS models the development of an embodied cognitive agent that learns discrete production rule-like structures from its own, autonomously gathered, continuous sensorimotor experiences. Moreover, the agent uses the developing knowledge to plan and control environmental interactions in a versatile, goal-directed, and self-motivated manner. Thus, in contrast to several well-known symbolic cognitive architectures, SEMLINCS is not provided with production rules and the involved symbols, but it learns them. In this paper, an actual implementation of SEMLINCS learns and exerts self-motivated, autonomous behavioral control of the game figure Mario in a clone of the computer game Super Mario Bros. Our evaluations highlight the successful development of behavioral versatility as well as the learning of suitable production rules and the involved symbols from sensorimotor experiences. Moreover, knowledge- and motivation-dependent individualizations of the agents' behavioral tendencies are shown. Finally, interaction sequences can be planned on the sensorimotor-grounded production rule level. Current limitations directly point toward the need for several further enhancements, which may be integrated into SEMLINCS in the near future. Overall, SEMLINCS may be viewed as an architecture that allows the functional and computational modeling of embodied cognitive development, whereby the current main focus lies on the development of production rules from sensorimotor experiences.
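
As a schematic picture of "learning production rules from sensorimotor experience and planning with them", the sketch below compresses observed transitions into discrete precondition-action-effect rules and searches over them. It is an invented miniature under simplifying assumptions (exact state matching, predicate sets), not SEMLINCS itself.

```python
# Invented toy: compress experiences into production rules, then plan.
from collections import deque

# Observed (state, action, next_state) experiences; states are predicate sets.
experiences = [
    ({"at_ground"}, "jump", {"in_air"}),
    ({"in_air"}, "land", {"at_ground"}),
    ({"at_ground", "near_block"}, "hit_block", {"at_ground", "coin"}),
]

# "Learning": compress experiences into rules (precondition, action) -> effect.
rules = {(frozenset(s), a): frozenset(s2) for s, a, s2 in experiences}

def plan(start, goal_pred):
    # Breadth-first search over the learned rule graph.
    queue, seen = deque([(frozenset(start), [])]), set()
    while queue:
        state, actions = queue.popleft()
        if goal_pred in state:
            return actions
        if state in seen:
            continue
        seen.add(state)
        for (pre, action), post in rules.items():
            if pre == state:
                queue.append((post, actions + [action]))
    return None

print(plan({"at_ground", "near_block"}, "coin"))  # -> ['hit_block']
```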


Subject(s)
Cognition; Learning; Humans; Motivation
19.
Exp Brain Res ; 235(4): 1063-1079, 2017 04.
Article in English | MEDLINE | ID: mdl-28078359

ABSTRACT

Although several process models have described the cognitive processing stages that are involved in mentally rotating objects, the exact nature of the rotation process itself remains elusive. According to embodied cognition, cognitive functions are deeply grounded in the sensorimotor system. We thus hypothesized that modal rotation perceptions should influence mental rotations. We conducted two studies in which participants had to judge whether a rotated letter was visually presented canonically or mirrored. Concurrently, participants had to judge whether a tactile rotation on their palm changed direction during the trial. The results show that tactile rotations can systematically influence mental rotation performance in that rotations in the same direction are favored. In addition, the results show that mental rotations produce a response compatibility effect: clockwise mental rotations facilitate responses to the right, while counterclockwise mental rotations facilitate responses to the left. We conclude that the execution of mental rotations activates cognitive mechanisms that are also used to perceive rotations in different modalities and that are associated with directional motor control processes.


Subject(s)
Imagination/physiology; Mental Processes/physiology; Rotation; Space Perception/physiology; Touch/physiology; Adolescent; Analysis of Variance; Female; Humans; Male; Photic Stimulation; Physical Stimulation/instrumentation; Psychomotor Performance; Reaction Time/physiology; Young Adult
20.
Front Psychol ; 7: 925, 2016.
Article in English | MEDLINE | ID: mdl-27445895

ABSTRACT

This paper proposes how various disciplinary theories of cognition may be combined into a unifying, sub-symbolic, computational theory of cognition. The following theories are considered for integration: psychological theories, including the theory of event coding, event segmentation theory, the theory of anticipatory behavioral control, and concept development; artificial intelligence and machine learning theories, including reinforcement learning and generative artificial neural networks; and theories from theoretical and computational neuroscience, including predictive coding and free energy-based inference. In the light of such a potential unification, it is discussed how abstract cognitive, conceptualized knowledge and understanding may be learned from actively gathered sensorimotor experiences. The unification rests on the free energy-based inference principle, which essentially implies that the brain builds a predictive, generative model of its environment. Neural activity-oriented inference causes the continuous adaptation of the currently active predictive encodings. Neural structure-oriented inference causes the longer-term adaptation of the developing generative model as a whole. Finally, active inference strives to maintain internal homeostasis, causing goal-directed motor behavior. To learn abstract, hierarchical encodings, however, it is proposed that free energy-based inference needs to be enhanced with structural priors, which bias cognitive development toward the formation of particular, behaviorally suitable encoding structures. As a result, it is hypothesized how abstract concepts can develop from, and thus how they are structured by and grounded in, sensorimotor experiences. Moreover, it is sketched out how symbol-like thought can be generated by a temporarily active set of predictive encodings, which constitute a distributed neural attractor in the form of an interactive free-energy minimum. The activated, interactive network attractor essentially characterizes the semantics of a concept or a concept composition, such as an actual or imagined situation in our environment. Temporal successions of attractors then encode unfolding semantics, which may be generated by a behavioral or mental interaction with an actual or imagined situation in our environment. Implications, further predictions, possible verifications and falsifications, as well as potential enhancements into a fully spelled-out unified theory of cognition are discussed at the end of the paper.
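
The free energy invoked throughout is usually the variational free energy of a generative model. In generic textbook notation (not notation specific to this paper):

```latex
\[
  F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
    \;=\; D_{\mathrm{KL}}\!\left(q(s)\,\|\,p(s \mid o)\right) \;-\; \ln p(o),
\]
```

so minimizing F with respect to the approximate posterior q (inference) tightens the bound on the log evidence ln p(o), while minimizing it through action (active inference) makes observations conform to the model's predictions.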
