Results 1 - 20 of 27
1.
Psychol Res ; 83(3): 498-513, 2019 Apr.
Article in English | MEDLINE | ID: mdl-28770385

ABSTRACT

Theories of enactivism propose an action-oriented approach to understanding human cognition. So far, however, empirical evidence supporting these theories has been sparse. Here, we investigate whether spatial navigation based on allocentric reference frames that are independent of the observer's physical body can be understood within an action-oriented approach. To this end, we performed three experiments testing knowledge of the absolute orientation of houses and streets towards north, the relative orientation of two houses and two streets, respectively, and the location of houses relative to each other in a pointing task. Our results demonstrate that under time pressure, the relative orientation of two houses can be retrieved more accurately than the absolute orientation of single houses. With unlimited time for cognitive reasoning, performance with house stimuli increased greatly for absolute orientation and surpassed the slightly improved performance in the relative orientation task. In contrast, with streets as stimuli, participants performed better under time pressure in the absolute orientation task. Overall, pointing from one house to another house yielded the best performance. This suggests, first, that orientation and location information about houses is primarily coded in house-to-house relations, whereas cardinal information is deduced via cognitive reasoning. Second, orientation information for streets is preferentially coded as absolute orientations. Thus, our results suggest that spatial information about house and street orientation is coded differently, and that house orientation and location are primarily learned in an action-oriented way, in line with an enactive framework for human cognition.


Subject(s)
Cognition/physiology , Learning/physiology , Orientation, Spatial/physiology , Problem Solving/physiology , Space Perception/physiology , Spatial Navigation/physiology , Adult , Female , Germany , Humans , Male , Young Adult
2.
Exp Brain Res ; 236(10): 2811-2827, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30030590

ABSTRACT

A growing number of studies have investigated anisotropies in representations of horizontal and vertical spaces. In humans, compelling evidence for such anisotropies exists for representations of multi-floor buildings. In contrast, evidence regarding open spaces is indecisive. Our study aimed to further the understanding of horizontal and vertical spatial representations in open spaces using a simple traveled-distance estimation paradigm. Blindfolded participants were moved along various directions in the sagittal plane. Subsequently, participants passively reproduced the traveled distance from memory. Participants performed this task in an upright and in a 30° backward-pitched orientation. The accuracy of distance estimates in the upright orientation showed a horizontal-vertical anisotropy, with higher accuracy along the horizontal axis than along the vertical axis. The backward-pitched orientation enabled us to investigate whether this anisotropy was body- or earth-centered. The accuracy patterns of the upright condition were positively correlated with the body-relative (not the earth-relative) coordinate mapping of the backward-pitched condition, suggesting a body-centered anisotropy. Overall, this is consistent with findings on motion perception. It suggests that the distance-estimation sub-process of path integration is subject to a horizontal-vertical anisotropy. Based on previous studies that showed isotropy in open spaces, we speculate that real physical self-movements or categorical versus isometric encoding are crucial factors for (an)isotropies in spatial representations.
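A minimal sketch of how the body- versus earth-centered comparison described above could be set up, assuming a direction-wise error profile and a 30° remapping between frames; the error model, sampling, and all numbers are invented, and this is not the authors' analysis code.

```python
# Hypothetical sketch: compare an upright accuracy profile with body- vs
# earth-relative mappings of a 30-degree backward-pitched condition.
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Movement directions in the sagittal plane (degrees; 0 = body-forward,
# 90 = along the body's head axis), sampled every 30 degrees.
directions = list(range(0, 360, 30))

def error_upright(d):
    # Invented profile: larger reproduction error for vertical movements.
    return 0.10 + 0.15 * abs(math.sin(math.radians(d)))

def error_pitched(d_body):
    # Invented profile for the pitched posture, expressed in BODY coordinates;
    # it follows the body axis, i.e., a body-centered anisotropy.
    return 0.12 + 0.15 * abs(math.sin(math.radians(d_body)))

upright = [error_upright(d) for d in directions]

# Body-centered hypothesis: match trials by body-relative direction.
body_aligned = [error_pitched(d) for d in directions]
# Earth-centered hypothesis: when pitched 30 degrees backward, body direction
# d_body corresponds to earth direction d_body + 30, so match the upright
# direction d with the pitched trial whose body direction is d - 30.
earth_aligned = [error_pitched((d - 30) % 360) for d in directions]

print("body-centered  r =", round(pearson(upright, body_aligned), 3))
print("earth-centered r =", round(pearson(upright, earth_aligned), 3))
```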


Subject(s)
Distance Perception/physiology , Motion Perception/physiology , Orientation/physiology , Adult , Analysis of Variance , Anisotropy , Eye Movements/physiology , Female , Humans , Male , Middle Aged , Models, Theoretical , Posture , Reflex, Vestibulo-Ocular , Rotation , Space Perception , Young Adult
3.
Mem Cognit ; 46(1): 158-171, 2018 01.
Article in English | MEDLINE | ID: mdl-28875474

ABSTRACT

Previous behavioral and neurophysiological research has shown better memory for horizontal than for vertical locations. In these studies, participants navigated toward these locations. In the present study we investigated whether the orientation of the spatial plane per se was responsible for this difference. We thus had participants learn locations visually from a single perspective and retrieve them from multiple viewpoints. In three experiments, participants studied colored tags on a horizontally or vertically oriented board within a virtual room and recalled these locations with different layout orientations (Exp. 1) or from different room-based perspectives (Exps. 2 and 3). All experiments revealed evidence for equal recall performance in horizontal and vertical memory. In addition, the patterns for recall from different test orientations were rather similar. Consequently, our results suggest that memory is qualitatively similar for both vertical and horizontal two-dimensional locations, given that these locations are learned from a single viewpoint. Thus, prior differences in spatial memory may have originated from the structure of the space or the fact that participants navigated through it. Additionally, the strong performance advantages for perspective shifts (Exps. 2 and 3) relative to layout rotations (Exp. 1) suggest that configurational judgments are not only based on memory of the relations between target objects, but also encompass the relations between target objects and the surrounding room, for example in the form of a memorized view.


Subject(s)
Mental Recall/physiology , Space Perception/physiology , Spatial Learning/physiology , Spatial Memory/physiology , Adult , Female , Humans , Male , Middle Aged , Young Adult
4.
Psychol Res ; 79(6): 1000-8, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25416007

ABSTRACT

In everyday life, navigators often consult a map before they navigate to a destination (e.g., a hotel or a room). However, not much is known about how humans gain spatial knowledge from seeing a map and from direct navigation together. In the present experiments, participants learned a simple multiple-corridor space from a map only, from walking through the virtual environment only, first from the map and then from navigation, or first from navigation and then from the map. Afterwards, they conducted a pointing task from multiple body orientations to infer the underlying reference frames. We constructed the learning experiences such that map-only learning and navigation-only learning triggered spatial memory organized along different reference frame orientations. When learning from maps before and during navigation, participants employed a map- rather than a navigation-based reference frame in the subsequent pointing task. Thus, maps caused the employment of a map-oriented reference frame, as found in memory for highly familiar urban environments, ruling out explanations based on environmental structure or a north preference. When learning from navigation first and then from the map, the pattern of results reversed and participants employed a navigation-based reference frame. This priority of learning order suggests that, despite considerable differences between map and navigation learning, participants did not use the more salient or generally more useful information, but relied on the reference frame established first.


Subject(s)
Maps as Topic , Orientation , Spatial Learning , Spatial Navigation , Adult , Female , Humans , Male , Problem Solving , Reference Values , Social Environment , Space Perception , User-Computer Interface , Young Adult
5.
Psychol Sci ; 23(2): 120-5, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22207644

ABSTRACT

We examined how a highly familiar environmental space, one's city of residence, is represented in memory. Twenty-six participants faced a photo-realistic virtual model of their hometown and completed a task in which they pointed to familiar target locations from various orientations. Each participant's performance was most accurate when he or she was facing north, and errors increased as participants' deviation from a north-facing orientation increased. Pointing errors and latencies were not related to the distance between participants' initial locations and the target locations. Our results are inconsistent with accounts of orientation-free memory and with theories assuming that the storage of spatial knowledge depends on local reference frames. Although participants recognized familiar local views in their initial locations, their strategy for pointing relied on a single, north-oriented reference frame that was likely acquired from maps rather than from daily exploration. Even though participants had spent significantly more time navigating the city than looking at maps, their pointing behavior seemed to rely on a north-oriented mental map.
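For illustration, a small sketch of how pointing error and its relation to facing direction can be quantified with circular angle differences; the coordinate convention, trial values, and function names are assumptions, not taken from the study.

```python
# Illustrative sketch: signed circular pointing error, and how far the facing
# direction deviates from north. Angles are compass bearings in degrees.
import math

def signed_angular_error(pointed_deg, true_deg):
    """Smallest signed difference between two bearings, in (-180, 180]."""
    return (pointed_deg - true_deg + 180.0) % 360.0 - 180.0

def true_bearing(standpoint, target):
    """Compass bearing from a standpoint to a target on a flat city map,
    with +x = east and +y = north (0 deg = north, 90 deg = east)."""
    dx, dy = target[0] - standpoint[0], target[1] - standpoint[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

# Hypothetical trials: (facing direction, standpoint, target, pointed bearing)
trials = [
    (0,   (0, 0), (100, 300), 20.0),   # facing north
    (90,  (0, 0), (100, 300), 55.0),   # facing east
    (180, (0, 0), (-200, 50), 230.0),  # facing south
]

for facing, standpoint, target, pointed in trials:
    err = signed_angular_error(pointed, true_bearing(standpoint, target))
    deviation_from_north = abs(signed_angular_error(facing, 0.0))
    print(f"facing {facing:3d} deg | deviation from north {deviation_from_north:5.1f} deg "
          f"| pointing error {err:+6.1f} deg")
```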


Subject(s)
Memory/physiology , Space Perception/physiology , Adolescent , Adult , Female , Humans , Male , Middle Aged , Psychomotor Performance
6.
Mem Cognit ; 39(6): 1042-54, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21472477

ABSTRACT

The integration of spatial information perceived from different viewpoints is a frequent, yet largely unexplored, cognitive ability. In two experiments, participants saw two presentations, each consisting of three targets (illuminated tiles on the floor), before walking the shortest possible path across all targets. In Experiment 1, participants viewed the targets either from the same viewpoint or from different viewpoints. Errors in recalling targets increased if participants changed their viewpoints between presentations, suggesting that memory acquired from different viewpoints had to be aligned for integration. Furthermore, the error pattern indicates that memory for the first presentation was transformed into the reference frame of the second presentation. In Experiment 2, we examined whether this transformation occurred because new information was integrated already during encoding or because memorized information was integrated when required. Results suggest that the latter is the case. This might serve as a strategy for avoiding additional alignments.
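A hedged sketch of the alignment step suggested by the error pattern, i.e., re-expressing locations encoded from one viewpoint in the reference frame of another via a 2D rigid transform; viewpoint poses, target coordinates, and function names are invented.

```python
# Hypothetical sketch (not the authors' code): transform targets remembered
# from viewpoint 1 into the egocentric frame of viewpoint 2.
import math

def world_to_ego(point, vp_pos, vp_heading_deg):
    """Express a world-frame 2D point in an egocentric frame centred on the
    viewpoint, with +x along the viewing direction and +y to the left.
    Heading is measured counterclockwise from the world +x axis."""
    dx, dy = point[0] - vp_pos[0], point[1] - vp_pos[1]
    h = math.radians(vp_heading_deg)
    return (dx * math.cos(h) + dy * math.sin(h),
            -dx * math.sin(h) + dy * math.cos(h))

def ego_to_world(point, vp_pos, vp_heading_deg):
    """Inverse of world_to_ego."""
    h = math.radians(vp_heading_deg)
    x = point[0] * math.cos(h) - point[1] * math.sin(h)
    y = point[0] * math.sin(h) + point[1] * math.cos(h)
    return (x + vp_pos[0], y + vp_pos[1])

# Targets (illuminated floor tiles) encoded from viewpoint 1.
targets_world = [(1.0, 2.0), (2.5, 1.0), (3.0, 3.5)]
vp1 = ((0.0, 0.0), 90.0)   # (position, heading) of the first presentation
vp2 = ((4.0, 0.0), 180.0)  # second presentation seen from elsewhere

# Memory of presentation 1, stored egocentrically from viewpoint 1 ...
remembered_ego1 = [world_to_ego(t, *vp1) for t in targets_world]
# ... later transformed into viewpoint 2's frame when both sets are combined.
in_vp2_frame = [world_to_ego(ego_to_world(p, *vp1), *vp2) for p in remembered_ego1]
print(in_vp2_frame)
```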


Subject(s)
Memory, Short-Term/physiology , Orientation/physiology , Space Perception/physiology , Adult , Female , Humans , Male , Neuropsychological Tests , Visual Perception
7.
Acta Psychol (Amst) ; 210: 103168, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32919093

ABSTRACT

The goal of new adaptive technologies is to allow humans to interact with technical devices, such as robots, in natural ways akin to human-human interaction. Essential for achieving this goal is an understanding of the factors that support natural interaction. Here, we examined whether human motor control is linked to the visual appearance of the interaction partner. Motor control theories consider kinematic information, but not visual appearance, as important for the control of motor movements (Flash & Hogan, 1985; Harris & Wolpert, 1998; Viviani & Terzuolo, 1982). We investigated the sensitivity of motor control to visual appearance during the execution of a social interaction, i.e., a high-five. In a novel mixed-reality setup, participants executed a high-five with a three-dimensional, life-size human- or robot-looking avatar. Our results demonstrate that movement trajectories and adjustments to perturbations depended on the visual appearance of the avatar, despite both avatars carrying out identical movements. Moreover, two well-known motor theories (minimum jerk, two-thirds power law) better predict robot- than human-interaction trajectories. The dependence of motor control on the human likeness of the interaction partner suggests that different motor control principles might be at work in object-directed and human-directed interactions.
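The two motor laws named above can be stated compactly: the two-thirds power law relates angular velocity A to path curvature C as A = K * C^(2/3), and the minimum-jerk model predicts the smooth position profile x(t) = x0 + (x1 - x0)(10t^3 - 15t^4 + 6t^5) in normalized time. The sketch below evaluates both under invented parameters (gain, duration, endpoints) and is not the study's implementation.

```python
# Illustrative sketch of the two classical motor laws, with invented values.
import math

def two_thirds_power_law(curvature, gain_k=20.0):
    """Predicted angular velocity (deg/s) for a given path curvature (1/m)."""
    return gain_k * curvature ** (2.0 / 3.0)

def minimum_jerk_position(x0, x1, t, duration):
    """Minimum-jerk position along one axis at time t of a movement from x0
    to x1 lasting `duration` seconds."""
    tau = min(max(t / duration, 0.0), 1.0)
    return x0 + (x1 - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# Example: predicted angular velocity along a curved hand path.
for c in (0.5, 1.0, 2.0):
    print(f"curvature {c:.1f} 1/m -> {two_thirds_power_law(c):.1f} deg/s")

# Example: a 0.6 s reach from 0.0 m to 0.4 m sampled every 0.1 s.
for i in range(7):
    t = 0.1 * i
    print(f"t={t:.1f}s  x={minimum_jerk_position(0.0, 0.4, t, 0.6):.3f} m")
```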


Subject(s)
Movement , Social Interaction , Biomechanical Phenomena , Humans
8.
J Exp Psychol Learn Mem Cogn ; 45(6): 993-1013, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30179037

ABSTRACT

Objects learned within single enclosed spaces (e.g., rooms) can be represented within a single reference frame. In contrast, the representation of navigable spaces (multiple interconnected enclosed spaces) is less well understood. In this study we examined different levels of integration in memory (local, regional, global) when learning object locations in navigable space. Participants consecutively learned two distinct regions of a virtual environment that eventually converged at a common transition point, and subsequently solved a pointing task. In Experiment 1, pointing latency increased with corridor distance to the target and additionally when pointing into the other region. Further, alignment with local and regional reference frames accelerated pointing within a region, whereas alignment with a global reference frame accelerated pointing across regional boundaries. Thus, participants memorized local corridors, clustered corridors into regions, and integrated globally across the entire environment. Introducing the transition point at the beginning of learning each region in Experiment 2 caused the previous region effects to vanish. Our findings emphasize the importance of locally confined spaces for structuring spatial memory and suggest that the opportunity to integrate novel spatial information into existing representations early during learning may influence unit formation at the regional level. Further, global representations seem to be consulted only when accessing spatial information beyond regional borders. Our results are inconsistent with conceptions of spatial memory for large-scale environments based either exclusively on local reference frames or on a single reference frame encompassing the whole environment, but rather support a hierarchical representation of space.
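As an illustration of the corridor-distance predictor, a brief sketch computing graph distance between corridors with breadth-first search; the corridor layout and labels are invented, not the experimental environment.

```python
# Hypothetical sketch: corridor distance as the number of corridor-to-corridor
# steps along a connectivity graph, the kind of distance measure that pointing
# latency increased with.
from collections import deque

# Corridors as nodes; edges connect corridors that share a junction.
# Two "regions" (A and B) meet at the transition corridor "T".
corridor_graph = {
    "A1": ["A2"], "A2": ["A1", "A3"], "A3": ["A2", "T"],
    "B1": ["B2"], "B2": ["B1", "B3"], "B3": ["B2", "T"],
    "T":  ["A3", "B3"],
}

def corridor_distance(graph, start, goal):
    """Fewest corridor-to-corridor steps from start to goal (BFS)."""
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # unreachable

print(corridor_distance(corridor_graph, "A1", "A3"))  # within a region: 2
print(corridor_distance(corridor_graph, "A1", "B1"))  # across regions: 6
```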


Subject(s)
Spatial Memory , Spatial Navigation , Adult , Female , Humans , Male , Psychological Theory , Spatial Learning , Virtual Reality
9.
J Exp Psychol Learn Mem Cogn ; 45(7): 1205-1223, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30047770

ABSTRACT

Most studies on spatial memory refer to the horizontal plane, leaving an open question as to whether findings generalize to vertical spaces where gravity and the visual upright of our surrounding space are salient orientation cues. In three experiments, we examined which reference frame is used to organize memory for vertical locations: the one based on the body vertical, the visual-room vertical, or the direction of gravity. Participants judged interobject spatial relationships learned from a vertical layout in a virtual room. During learning and testing, we varied the orientation of the participant's body (upright vs. lying sideways) and the visually presented room relative to gravity (e.g., rotated by 90° along the frontal plane). Across all experiments, participants made quicker or more accurate judgments when the room was oriented in the same way as during learning with respect to their body, irrespective of their orientations relative to gravity. This suggests that participants employed an egocentric body-based reference frame for representing vertical object locations. Our study also revealed an effect of body-gravity alignment during testing. Participants recalled spatial relations more accurately when upright, regardless of the body and visual-room orientation during learning. This finding is consistent with a hypothesis of selection conflict between different reference frames. Overall, our results suggest that a body-based reference frame is preferred over salient allocentric reference frames in memory for vertical locations perceived from a single view. Further, memory of vertical space seems to be tuned to work best in the default upright body orientation.


Subject(s)
Mental Recall/physiology , Posture/physiology , Space Perception/physiology , Spatial Memory/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult
10.
Cogn Sci ; 32(4): 755-70, 2008 Jun.
Article in English | MEDLINE | ID: mdl-21635352

ABSTRACT

This study examines the working memory systems involved in human wayfinding. In the learning phase, 24 participants learned two routes in a novel photorealistic virtual environment displayed on a 220° screen while they were disrupted by a visual, a spatial, a verbal, or, in a control group, no secondary task. In the following wayfinding phase, the participants had to find and to "virtually walk" the two routes again. During this wayfinding phase, a number of dependent measures were recorded. This research shows that encoding wayfinding knowledge interfered with the verbal and with the spatial secondary task. These interferences were even stronger than the interference of wayfinding knowledge with the visual secondary task. These findings are consistent with a dual-coding approach to wayfinding knowledge.

11.
Front Psychol ; 9: 1374, 2018.
Article in English | MEDLINE | ID: mdl-30123172

ABSTRACT

The object orientation effect describes shorter perceived distances to the front than to the back of oriented objects. The present work extends previous studies by showing that the object orientation effect occurs not only for egocentric distances between an observer and an object, but also for exocentric distances, that is, distances between two oriented objects. Participants watched animated virtual humans (avatars) that were either facing each other or looking away, and afterward adjusted a bar to estimate the perceived length. In two experiments, participants judged avatars facing each other as closer than avatars facing away from each other. As the judged distance was between two objects and did not involve the observer, the results rule out the explanation that observers perceive object fronts as closer in order to prepare for future interaction with them. The second experiment tested an explanation based on predictive coding, that is, the extrapolation of the current state of affairs to likely future states, here that the avatars move forward. We used avatars standing on bridges that either connected them or ran orthogonal to the inter-avatar line, thus preventing forward movement. This variation of walkability did not influence participants' judgments. We conclude that if participants used predictive coding, they did not consider the whole scene layout for prediction, but concentrated on the avatars. Another potential explanation of the effect assumes a generally asymmetrical distribution of interpersonal distances: people facing each other might typically be closer to each other than people facing away, and this asymmetry might be reflected as a bias in perception.

12.
Sci Rep ; 7(1): 17037, 2017 12 06.
Article in English | MEDLINE | ID: mdl-29213057

ABSTRACT

The perception of relative target movement by a dynamic observer is an unexamined psychological three-body problem. To test the applicability of explanations for two moving bodies, participants repeatedly judged the relative movements of two runners chasing each other in video clips displayed on a stationary screen. The chased person always ran at 3 m/s, with an observer camera following or leading at 4.5, 3, 1.5, or 0 m/s. We adjusted the chaser's speed in an adaptive staircase to determine the point of subjectively equal movement speed between the runners and observed (i) an underestimation of chaser speed if the runners moved towards the viewer, and (ii) an overestimation of chaser speed if the runners moved away from the viewer, leading to a catch-up illusion in the case of equidistant runners. The bias was independent of the richness of available self-movement cues. The results are inconsistent with computing individual speeds or relying on constant visual angles, expansion rates, occlusions, or relative distances, but are consistent with inducing the impression of relative movement by perceptually compressing and enlarging the inter-runner distance. This mechanism should be considered when predicting human behavior in complex situations with multiple objects moving in depth, such as driving or team sports.
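A generic sketch of an adaptive staircase converging on a point of subjective equality, assuming a simple 1-up/1-down rule with a simulated observer; step sizes, the noise model, and the stopping rule are assumptions rather than the study's exact procedure.

```python
# Generic 1-up/1-down staircase sketch: adjust the chaser's speed after each
# "chaser looks faster/slower" judgment; the mean of the reversal speeds
# estimates the point of subjective equality (PSE). Values are invented.
import random

def simulated_judgment(chaser_speed, perceived_equal_at=2.6):
    """Returns True if the (simulated) observer reports the chaser as faster.
    Gaussian noise stands in for trial-to-trial variability."""
    return chaser_speed + random.gauss(0, 0.15) > perceived_equal_at

def run_staircase(start_speed=3.0, step=0.3, n_reversals=8):
    speed = start_speed
    last_response = None
    reversal_speeds = []
    while len(reversal_speeds) < n_reversals:
        faster = simulated_judgment(speed)
        if last_response is not None and faster != last_response:
            reversal_speeds.append(speed)
            step = max(step * 0.7, 0.05)   # shrink the step after each reversal
        # 1-up/1-down rule: lower the speed if judged faster, raise it otherwise.
        speed += -step if faster else step
        last_response = faster
    return sum(reversal_speeds) / len(reversal_speeds)

random.seed(1)
print(f"estimated PSE: {run_staircase():.2f} m/s (physically equal would be 3.0 m/s)")
```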


Subject(s)
Distance Perception , Illusions , Motion Perception , Adult , Female , Humans , Male , Photic Stimulation , Young Adult
13.
Neuropsychologia ; 103: 154-161, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28684296

ABSTRACT

OBJECTIVE: In a recent systematic review, Claessen and van der Ham (2017) have analyzed the types of navigation impairment in the single-case study literature. Three dissociable types related to landmarks, locations, and paths were identified. This recent model as well as previous models of navigation impairment have never been verified in a systematic manner. The aim of the current study was thus to investigate the prevalence of landmark-based, location-based, and path-based navigation impairment in a large sample of stroke patients. METHOD: Navigation ability of 77 stroke patients in the chronic phase and 60 healthy participants was comprehensively evaluated using the Virtual Tübingen test, which contains twelve subtasks addressing various aspects of knowledge about landmarks, locations, and paths based on a newly learned virtual route. Participants also filled out the Wayfinding Questionnaire to allow for making a distinction between stroke patients with and without significant subjective navigation-related complaints. RESULTS: Analysis of responses on the Wayfinding Questionnaire indicated that 33 of the 77 participating stroke patients had significant navigation-related complaints. An examination of their performance on the Virtual Tübingen test established objective evidence for navigation impairment in 27 patients. Both landmark-based and path-based navigation impairment occurred in isolation, while location-based navigation impairment was only found along with the other two types. CONCLUSIONS: The current study provides the first empirical support for the distinction between landmark-based, location-based, and path-based navigation impairment. Future research relying on other assessment instruments of navigation ability might be helpful to further validate this distinction.


Subject(s)
Spatial Navigation , Stroke/psychology , Adult , Aged , Aged, 80 and over , Chronic Disease , Cognition Disorders/etiology , Female , Humans , Male , Middle Aged , Models, Neurological , Neuropsychological Tests , Pattern Recognition, Visual , Recognition, Psychology , Stroke/complications , Surveys and Questionnaires , Virtual Reality , Young Adult
14.
PLoS One ; 11(4): e0154088, 2016.
Article in English | MEDLINE | ID: mdl-27101011

ABSTRACT

Prior results on the spatial integration of layouts within a room differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments, we showed that the reference frames used for integration varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements, paralleling results in the 3D case. Participants saw part of an object layout on screen 1, another part on screen 2, and responded to the integrated layout on screen 3. Layout presentations between two screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors and latencies. The error/latency pattern was thus indicative of the reference frame used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized integration within, respectively, the reference frame of the initial presentation, the reference frame updated later, and the reference frame of the location from which participants acted. Participants also relied heavily on layout-intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions.


Subject(s)
Memory, Short-Term , Space Perception , Adolescent , Adult , Female , Humans , Male , Young Adult
15.
Cognition ; 155: 77-95, 2016 10.
Article in English | MEDLINE | ID: mdl-27367592

ABSTRACT

Two classes of space define our everyday experience within our surrounding environment: vista spaces, such as rooms or streets which can be perceived from one vantage point, and environmental spaces, for example, buildings and towns which are grasped from multiple views acquired during locomotion. However, theories of spatial representations often treat both spaces as equal. The present experiments show that this assumption cannot be upheld. Participants learned exactly the same layout of objects either within a single room or spread across multiple corridors. By utilizing a pointing and a placement task we tested the acquired configurational memory. In Experiment 1 retrieving memory of the object layout acquired in environmental space was affected by the distance of the traveled path and the order in which the objects were learned. In contrast, memory retrieval of objects learned in vista space was not bound to distance and relied on different ordering schemes (e.g., along the layout structure). Furthermore, spatial memory of both spaces differed with respect to the employed reference frame orientation. Environmental space memory was organized along the learning experience rather than layout intrinsic structure. In Experiment 2 participants memorized the object layout presented within the vista space room of Experiment 1 while the learning procedure emulated environmental space learning (movement, successive object presentation). Neither factor rendered similar results as found in environmental space learning. This shows that memory differences between vista and environmental space originated mainly from the spatial compartmentalization which was unique to environmental space learning. Our results suggest that transferring conclusions from findings obtained in vista space to environmental spaces and vice versa should be made with caution.


Subject(s)
Orientation , Space Perception , Spatial Learning , Spatial Memory , Adult , Environment , Female , Humans , Male , Young Adult
16.
Psychon Bull Rev ; 23(1): 246-52, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26088668

ABSTRACT

Reference frames in spatial memory encoding have been examined intensively in recent years. However, their importance for recall has received considerably less attention. In the present study, passersby used tags to arrange a configuration map of prominent city center landmarks. It has been shown that such configurational knowledge is memorized within a north-up reference frame. However, participants adjusted their maps according to their body orientations. For example, when participants faced south, the maps were likely to face south-up. Participants also constructed maps along their location perspective, that is, the self-target direction. If, for instance, they were east of the represented area, their maps were oriented west-up. If the location perspective and body orientation were in opposite directions (i.e., if participants faced away from the city center), participants relied on location perspective. The results indicate that reference frames in spatial recall depend on the current situation rather than on the organization in long-term memory. These results cannot be explained by activation spread within a view graph, which had been used to explain similar results in the recall of city plazas. However, the results are consistent with forming and transforming a spatial image of nonvisible city locations from the current location. Furthermore, prior research has almost exclusively focused on body- and environment-based reference frames. The strong influence of location perspective in an everyday navigational context indicates that such a reference frame should be considered more often when examining human spatial cognition.


Subject(s)
Mental Recall/physiology , Space Perception/physiology , Spatial Memory/physiology , Spatial Navigation/physiology , Adult , Humans , Orientation/physiology
17.
Front Psychol ; 7: 76, 2016.
Article in English | MEDLINE | ID: mdl-26869975

ABSTRACT

Establishing verbal memory traces for non-verbal stimuli has been reported to either facilitate or inhibit memory for the non-verbal stimuli. We show that these effects are also observed in a domain not examined before: wayfinding. Fifty-three participants followed a guided route in a virtual environment. They were asked to remember half of the intersections by relying on the visual impression only. At the other 50% of the intersections, participants additionally heard a place name, which they were asked to memorize. For testing, participants were teleported to the intersections and were asked to indicate the subsequent direction of the learned route. In Experiment 1, the intersections' names were arbitrary (i.e., not related to the visual impression). Here, participants performed more accurately at unnamed intersections. In Experiment 2, the intersections' names were descriptive, and participants' route memory was more accurate at named intersections. The results have implications for naming places in a city and for the design of wayfinding aids.

18.
Front Psychol ; 7: 217, 2016.
Article in English | MEDLINE | ID: mdl-27014108

ABSTRACT

People maintain larger distances to other people's fronts than to their backs. We investigated whether humans also judge another person as closer when viewing their front than their back. Participants watched animated virtual characters (avatars) and, after the avatar was removed, moved a virtual plane toward its location. In Experiment 1, participants judged avatars that were facing them as closer, and made quicker estimates for them, than avatars looking away. In Experiment 2, avatars were rotated in 30-degree steps around the vertical axis. Observers judged avatars roughly facing them (i.e., looking at most 60 degrees away) as closer than avatars roughly looking away. No particular effect was observed for avatars directly facing and also gazing at the observer. We conclude that body orientation was sufficient to generate the asymmetry. Sensitivity of the orientation effect to gaze and to interpersonal distance would have suggested the involvement of social processing, but this was not observed. We discuss social and lower-level processing as potential explanations for the effect.

19.
PLoS One ; 11(12): e0166647, 2016.
Article in English | MEDLINE | ID: mdl-27959914

ABSTRACT

Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in their natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as with increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and in brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.
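As a rough illustration of the belt's core mapping, a sketch that converts a compass heading into the index of the waist-mounted vibromotor pointing toward magnetic north; the motor count, index convention, and heading source are assumptions for illustration, and this is not the feelSpace firmware.

```python
# Illustrative sketch: which vibromotor on the waist should buzz so that the
# wearer feels the direction of magnetic north. Motor 0 sits at the navel and
# indices increase clockwise (toward the wearer's right).
N_MOTORS = 30  # assumed number of evenly spaced motors around the waist

def motor_toward_north(heading_deg, n_motors=N_MOTORS):
    """Index of the motor pointing toward magnetic north.

    heading_deg is the wearer's facing direction as a compass bearing
    (0 = facing north). North then lies at -heading_deg relative to the body
    midline (measured clockwise), so wrap that angle onto the motor ring."""
    north_relative_to_body = (-heading_deg) % 360.0
    return round(north_relative_to_body / (360.0 / n_motors)) % n_motors

for heading in (0, 45, 90, 180, 270):
    print(f"facing {heading:3d} deg -> vibrate motor {motor_toward_north(heading)}")
```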


Subject(s)
Consciousness , Learning , Sensorimotor Cortex/physiology , Space Perception , Adult , Cognition , Female , Humans , Magnetic Resonance Imaging/instrumentation , Magnetic Resonance Imaging/methods , Male , Sleep
20.
Accid Anal Prev ; 34(5): 649-54, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12214959

ABSTRACT

The risk of a collision with another vehicle due to the presence of passengers is analysed in detail in a large sample of accidents from Mittelfranken, Germany, from the years 1984 to 1997. Using a responsibility analysis, the overall effect of the presence of passengers and the influence of modifying variables are examined. While a general protective effect of the presence of passengers is found, this effect is reduced for young drivers, during darkness, in slow traffic, and at crossroads, especially when disregarding the right of way or passing a car. These findings are interpreted as a generally positive effect of the presence of passengers, who influence the driver's behaviour towards more cautious and thus safer driving. However, passengers may also distract the driver's attention to an extent that cannot be compensated for by cautious driving in all situations and by all drivers. Besides educational measures, a potential solution to this problem may be driver assistance systems that give an adapted kind of support when passengers are present.
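To make the responsibility-analysis logic concrete, a toy sketch comparing passenger presence between responsible and non-responsible drivers via an odds ratio; all counts are invented and this is not the paper's data or model.

```python
# Toy sketch of the responsibility-analysis logic: if passengers are
# protective, drivers judged responsible for a two-vehicle collision should
# carry passengers less often than drivers judged not responsible (who
# approximate normal exposure). All counts below are invented.
from math import exp, log, sqrt

# 2 x 2 table: rows = responsible / not responsible, columns = passenger / alone
responsible     = {"passenger": 210, "alone": 790}
not_responsible = {"passenger": 310, "alone": 690}

odds_responsible     = responsible["passenger"] / responsible["alone"]
odds_not_responsible = not_responsible["passenger"] / not_responsible["alone"]
odds_ratio = odds_responsible / odds_not_responsible   # < 1 suggests a protective effect

# Approximate 95% confidence interval via the standard error of the log odds ratio.
se = sqrt(sum(1.0 / n for n in (*responsible.values(), *not_responsible.values())))
lo, hi = exp(log(odds_ratio) - 1.96 * se), exp(log(odds_ratio) + 1.96 * se)

print(f"odds ratio {odds_ratio:.2f}  (95% CI {lo:.2f}-{hi:.2f})")
```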


Subject(s)
Accidents, Traffic/statistics & numerical data , Automobile Driving , Attention , Humans , Logistic Models , Risk Assessment , Risk Factors