ABSTRACT
Taking a motor planning perspective, this study investigates whether haptic force cues displayed on the steering wheel are more effective than visual cues in signaling the direction of an upcoming lane change. Licensed drivers drove in a fixed-base driving simulator equipped with an active steering system for realistic force feedback. They were instructed to make lane changes upon registering a directional cue. Cues were delivered according to the movement precuing technique, employing pairs of precues and imperative cues that could be either visual, haptic, or crossmodal (a visual precue with a haptic imperative cue, or vice versa). The main dependent variable was response time. Additional analyses were conducted on steering wheel angle profiles and the rate of initial steering errors. Conditions with a haptic imperative cue produced considerably faster responses than conditions with a visual imperative cue, irrespective of the precue modality. Valid and invalid precues produced the typical gains and costs, with one exception: there appeared to be little cost in response time or initial steering errors associated with invalid cueing when both cues were haptic. The results are consistent with the hypothesis that imperative haptic cues facilitate action selection, while visual stimuli require additional time-consuming cognitive processing.
Subjects
Automobile Driving, Humans, Automobile Driving/psychology, Haptic Technology, Reaction Time, Cues, Movement
ABSTRACT
This paper aims to improve the steering performance of an Ackermann personal mobility scooter using a new meta-heuristic optimization algorithm named Differential Harris Hawks Optimization (DHHO) and a model of the steering encoder. The steering response of the Ackermann mechanism is crucial for automated driving systems (ADS), especially in the localization and path-planning phases. Various methods presented in the literature are used to control the steering, and meta-heuristic optimization algorithms have achieved prominent results. The Harris Hawks Optimization (HHO) algorithm is a recent algorithm that outperforms state-of-the-art algorithms in various optimization applications; however, it has not yet been applied to steering control. The research in this paper was conducted in three stages. First, practical experiments were performed on the steering encoder sensor that measures the steering angle of the Landlex mobility scooter, and supervised learning was applied to model the results obtained for steering control. Second, the DHHO algorithm is proposed by introducing mutation between hawks in the exploration phase instead of the hawks' perch technique, improving population diversity and reducing premature convergence. Simulation results on the CEC2021 benchmark functions showed that the DHHO algorithm outperforms the HHO, PSO, BAS, and CMAES algorithms. The mean error of DHHO improved over the original HHO with confidence levels of 99.8047% and 91.6016% in the 10-dimension and 20-dimension problems, respectively. Third, DHHO is implemented for interactive real-time PID tuning to control the steering of the Ackermann scooter. The practical transient-response results showed that the settling time improved by 89.31% compared to the original response, with no overshoot or steady-state error, demonstrating the superior performance of the DHHO algorithm compared to traditional control methods.
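The abstract does not give the exact form of the DHHO exploration operator; the sketch below illustrates one plausible reading, in which the original HHO perch-based exploration is replaced by a differential-evolution-style mutation between hawks with greedy selection. The scale factor F, the search bounds, and the mutation form are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def dhho_exploration_step(hawks, objective, F=0.5, lb=-100.0, ub=100.0, rng=None):
    """One exploration-phase update: mutate each hawk using two random peers and
    keep the mutant only if it improves the objective (illustrative DHHO reading)."""
    rng = np.random.default_rng() if rng is None else rng
    n, _ = hawks.shape
    for i in range(n):
        # pick two distinct peers different from hawk i
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        # differential mutation between hawks (assumed operator), clipped to bounds
        trial = np.clip(hawks[i] + F * (hawks[r1] - hawks[r2]), lb, ub)
        if objective(trial) < objective(hawks[i]):
            hawks[i] = trial
    return hawks
```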
ABSTRACT
Recent research indicates that installing shoulders on rural roads for safety purposes causes drivers to steer further inside on right bends and thus exceed lane boundaries. The present simulator study examined whether continuous rather than broken edge-line delineation would help drivers to keep their vehicles within the lane. The results indicated that continuous delineation significantly impacts the drivers' gaze and steering trajectories. Drivers looked more towards the lane centre and shifted their steering trajectories accordingly. This was accompanied by a significant decrease in lane-departure frequency when driving on a 3.50-m lane but not on a 2.75-m lane. Overall, the findings provide evidence that continuous delineation influences steering control by altering the visual processes underlying trajectory planning. It is concluded that continuous edge-line delineation between lanes and shoulders may induce safer driver behaviour on right bends, with potential implications for preventing run-off-road crashes and improving cyclist safety. Practitioner summary: This study examined how continuous and broken edge lines influence driving behaviour around bends with shoulders. With continuous delineation, drivers gazed and steered further from the edge line in the bend and thus had fewer lane departures. Continuous marking can therefore help prevent run-off-road crashes and improve cyclists' safety.
ABSTRACT
Improvements in vehicle safety require understanding of the neural systems that support the complex, dynamic task of real-world driving. We used functional near-infrared spectroscopy (fNIRS) and pupillometry to quantify cortical and physiological responses during a realistic, simulated driving task in which vehicle dynamics were manipulated. Our results elucidate compensatory changes in driver behavior in response to changes in vehicle handling. We also describe associated neural and physiological responses under different levels of mental workload. The increased cortical activation we observed during the late phase of the experiment may indicate motor learning in prefrontal-parietal networks. Finally, relationships among cortical activation, steering control, and individual personality traits suggest that individual brain states and traits may be useful in predicting a driver's response to changes in vehicle dynamics. Results such as these will be useful for informing the design of automated safety systems that facilitate safe and supportive driver-car communication.
Subjects
Automobile Driving, Cerebral Cortex/physiology, Functional Neuroimaging/methods, Learning/physiology, Man-Machine Systems, Personality/physiology, Psychomotor Performance/physiology, Pupil/physiology, Near-Infrared Spectroscopy/methods, Adolescent, Adult, Cerebral Cortex/diagnostic imaging, Female, Humans, Male, Young Adult
ABSTRACT
OBJECTIVE: The aim of this study was to investigate steering control in a low-cost driving simulator with and without a virtual vehicle cab. BACKGROUND: In low-cost simulators, the lack of a vehicle cab denies drivers access to vehicle-width information, which could affect steering control, insofar as locomotor adjustments are known to be based on action-scaled visual judgments of the environment. METHOD: Two experiments were conducted in which steering control with and without a virtual vehicle cab was investigated in a within-subject design, using cornering and straight-lane-keeping tasks. RESULTS: Driving around curves without vehicle cab information made drivers deviate more from the lane center toward the inner edge in right curves (virtual cab = 4 ± 19 cm; no cab = 42 ± 28 cm; at the apex of the curve, p < .001) but not in left curves. More lateral deviation from the lane center toward the edge line was also found when driving without the virtual cab on straight roads (virtual cab = 21 ± 28 cm; no cab = 36 ± 27 cm; p < .001), whereas driving stability and presence ratings were not affected. In both experiments, the greater lateral deviation in the no-cab condition led to significantly more time driving off the lane. CONCLUSION: The findings strongly suggest that without cab information, participants underestimate the distance to the right edge of the car (in contrast to the left edge) and thus the vehicle width. This produces considerable differences in the steering trajectory. APPLICATION: Providing a virtual vehicle cab should be encouraged to capture drivers' steering control more effectively in low-cost simulators.
Subjects
Automobile Driving, Automobiles, Psychomotor Performance/physiology, Simulation Training/standards, Space Perception/physiology, Virtual Reality, Adult, Female, Humans, Male, Young Adult
ABSTRACT
Steering demands rapid responses to heading deviations and uses optic flow to redirect self-movement toward the intended destination. We trained monkeys in a naturalistic steering paradigm and recorded dorsal medial superior temporal area (MSTd) cortical neuronal responses to the visual motion and spatial location cues in optic flow. We found that neuronal responses to the initial heading direction are dominated by the optic flow's global radial pattern cue. Responses to subsequently imposed heading deviations are dominated by the local direction of motion cue. Finally, as the monkey steers its heading back to the goal location, responses are dominated by the spatial location cue, the screen location of the flow field's center of motion. We conclude that MSTd responses are not rigidly linked to specific stimuli, but rather are transformed by the task relevance of cues that guide performance in learned, naturalistic behaviors. SIGNIFICANCE STATEMENT: Unplanned heading changes trigger lifesaving steering back to a goal. Conventionally, such behaviors are thought of as cortical sensory-motor reflex arcs. We find that a more reciprocal process underlies such cycles of perception and action, rapidly transforming visual processing to suit each stage of the task. When monkeys monitor their simulated self-movement, dorsal medial superior temporal area (MSTd) neurons represent their current heading direction. When monkeys steer to recover from an unplanned change in heading direction, MSTd shifts toward representing the goal location. We hypothesize that this transformation reflects the reweighting of bottom-up visual motion signals and top-down spatial location signals, reshaping MSTd's response properties through task-dependent interactions with adjacent cortical areas.
Subjects
Intention, Motion Perception/physiology, Movement/physiology, Orientation/physiology, Visual Cortex/physiology, Action Potentials/physiology, Analysis of Variance, Animals, Cues, Female, Macaca mulatta, Male, Neurons/physiology, Optic Flow, Photic Stimulation, Visual Cortex/cytology
ABSTRACT
Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches rely on quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution remains a challenging problem. This work proposes the use of a single monocular camera sensor for automatic steering control, speed assistance for the driver, and localization of the vehicle on the road. Herein, we assume that the vehicle mainly travels along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver or any interruption.
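The abstract does not describe the detection pipeline in detail; as a rough illustration of the line-detection step, the sketch below thresholds a camera frame, finds line segments with a Hough transform, and derives a proportional steering correction from the line's lateral offset. The thresholds, the region of interest, and the steering gain are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

def detect_guide_line(frame_bgr, steering_gain=0.004):
    """Detect a bright painted line in the lower half of a camera frame and
    return a proportional steering command from its lateral offset (a sketch)."""
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[h // 2:, :]                                # road region only
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # bright paint
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    if lines is None:
        return None                                             # no line this frame
    # mean x-position of the detected segments = lateral position of the line
    xs = [(x1 + x2) / 2 for x1, y1, x2, y2 in lines[:, 0]]
    offset_px = np.mean(xs) - w / 2          # positive: line lies to the right
    return steering_gain * offset_px         # simple proportional steering command
```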
ABSTRACT
The pigeon robot has attracted significant attention in the field of animal robotics thanks to its outstanding mobility and adaptive capability in complex environments. However, research on pigeon robots is currently facing bottlenecks, and achieving fine control over the motion behavior of pigeon robots through brain-machine interfaces remains challenging. Here, we systematically quantify the relationship between electrical stimulation and stimulus-induced motion behaviors, and provide an analytical method to demonstrate the effectiveness of pigeon robots based on electrical stimulation. In this study, we investigated the influence of gradient voltage intensity (1.2-3.0 V) on the indoor steering motion control of pigeon robots. Additionally, we discussed the response time of electrical stimulation and the effective period of the brain-machine interface. The results indicate that pigeon robots typically exhibit noticeable behavioral responses at a 2.0 V stimulus. Increasing the stimulation intensity significantly controls the steering angle and turning radius (p < 0.05), enabling precise control of pigeon robot steering motion through regulation of the stimulation intensity. When the threshold voltage is reached, the average response time of a pigeon robot to the electrical stimulation is 220 ms. This study quantifies the role of each stimulation parameter in controlling pigeon robot steering behavior, providing valuable reference information for the precise steering control of pigeon robots. Based on these findings, we offer a solution for achieving precise control of pigeon robot steering motion and contribute to solving the problem of encoding complex trajectory motion in pigeon robots.
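Purely as an illustrative sketch (not the authors' stimulation protocol), the reported parameters could be folded into a simple open-loop command rule: choose a stimulation voltage within the tested 1.2-3.0 V range according to the desired turn sharpness, reject commands below the ~2.0 V response threshold, and allow for the ~220 ms response latency before issuing the next command. The mapping and all function names, including the `stimulator.pulse` driver call, are hypothetical.

```python
import time

# Values reported in the abstract; the linear mapping below is an assumption.
V_MIN, V_MAX = 1.2, 3.0          # tested stimulation range (V)
V_THRESHOLD = 2.0                # voltage at which responses become reliable (V)
RESPONSE_LATENCY_S = 0.220       # average behavioral response time (s)

def steering_voltage(turn_sharpness):
    """Map a desired turn sharpness in [0, 1] to a stimulation voltage.
    Returns None if the command would fall below the response threshold."""
    v = V_MIN + turn_sharpness * (V_MAX - V_MIN)
    return v if v >= V_THRESHOLD else None

def issue_turn(stimulator, side, turn_sharpness):
    """Send one steering stimulus and wait out the reported response latency.
    `stimulator.pulse(side, volts)` is a hypothetical hardware-driver call."""
    v = steering_voltage(turn_sharpness)
    if v is not None:
        stimulator.pulse(side, v)
        time.sleep(RESPONSE_LATENCY_S)
```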
ABSTRACT
Spherical movable tensegrity robots, owing to their intrinsic light weight and resilience, have exhibited tremendous potential for exploring unpredictable terrains and extreme environments where traditional robots often struggle. The geometry of spherical tensegrities is suitable for rolling locomotion, which allows the system to react to changing demands, navigate unexplored terrain, and perform missions even after suffering massive damage. The objective of this article is to enrich the types of spherical movable tensegrity robots with multiple kinematic gait patterns and to obtain superior motion paths that conform to the intrinsic features of structural rolling locomotion. To this end, three 12-rod spherical tensegrities with multi-gait patterns are investigated, and dynamic simulations of independent (or evolutionary) gait patterns are conducted and verified in ADAMS. The routing spaces and the blind zones formed by a single kinematic gait are compared to assess the suitability of the assigned kinematic gait pattern. Accordingly, we develop a trajectory planning method that embeds a steering control strategy into a modified rapidly exploring random tree (MRRT) algorithm to produce qualified marching routes. In addition, two important evaluation indicators, applicable to multi-gait tensegrities, are introduced to search for the optimal gait patterns that conform to specified needs. The techniques are illustrated and validated in simulation with comparisons on several prototypes of tensegrity robots, indicating that the proposed method is a viable means of obtaining marching routes for the rolling locomotion of spherical movable tensegrity robots.
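The abstract does not detail the MRRT algorithm itself; the sketch below shows the generic shape of an RRT planner whose extension step is filtered by a steering-feasibility constraint, which is roughly the kind of embedding the abstract describes. The maximum turn per step, the fixed ±10 m sampling workspace, and all names are illustrative assumptions.

```python
import math
import random

def plan_mrrt_like(start, goal, is_free, max_step=0.5, max_turn_rad=math.pi / 6,
                   goal_tol=0.5, iters=5000):
    """RRT-style planner whose extend step clamps the heading change so that no
    expansion exceeds what the rolling gait could execute (illustrative)."""
    nodes = [start]                     # each node: (x, y, heading)
    parents = {0: None}
    for _ in range(iters):
        sample = (random.uniform(-10, 10), random.uniform(-10, 10))
        # nearest existing node to the random sample
        i = min(range(len(nodes)),
                key=lambda k: (nodes[k][0] - sample[0])**2 + (nodes[k][1] - sample[1])**2)
        x, y, heading = nodes[i]
        desired = math.atan2(sample[1] - y, sample[0] - x)
        turn = (desired - heading + math.pi) % (2 * math.pi) - math.pi
        # steering control strategy embedded in the extend step: clamp the turn
        turn = max(-max_turn_rad, min(max_turn_rad, turn))
        new_heading = heading + turn
        new = (x + max_step * math.cos(new_heading),
               y + max_step * math.sin(new_heading), new_heading)
        if not is_free(new[0], new[1]):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = i
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < goal_tol:
            # reconstruct the marching route by walking back through parents
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j][:2])
                j = parents[j]
            return path[::-1]
    return None
```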
ABSTRACT
Human-machine interfaces (HMIs) can be used to decode a user's motor intention to control an external device. People who suffer from motor disabilities, such as spinal cord injury, can benefit from the use of these interfaces. While many solutions can be found in this direction, there is still room for improvement from decoding, hardware, and subject motor-learning perspectives. Here we show, in a series of experiments with non-disabled participants, a novel decoding and training paradigm allowing naïve participants to use their auricular muscles (AM) to control two degrees of freedom with a virtual cursor. AMs are particularly interesting because they are vestigial muscles and are often preserved after neurological diseases. Our method relies on surface electromyographic recordings, using the contraction levels of both AMs to modulate the velocity and direction of a cursor in a two-dimensional paradigm. We used a locking mechanism to fix the current position of each axis separately, enabling the user to stop the cursor at a certain location. A five-session training procedure (20-30 min per session) with a 2D center-out task was performed by five volunteers. All participants increased their success rate (initial: 52.78 ± 5.56%; final: 72.22 ± 6.67%; median ± median absolute deviation) and improved their trajectory performance throughout the training. We implemented a dual task with visual distractors to assess the mental challenge of controlling the cursor while executing another task; our results suggest that the participants could perform the task in cognitively demanding conditions (success rate of 66.67 ± 5.56%). Finally, using the NASA Task Load Index questionnaire, we found that participants reported lower mental demand and effort in the last two sessions. To summarize, all subjects could learn to control the movement of a cursor with two degrees of freedom using their AMs, with a low impact on cognitive load. Our study is a first step in developing AM-based decoders for HMIs for people with motor disabilities, such as spinal cord injury.
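The abstract states that the contraction levels of the two AMs drive cursor velocity and direction, with a per-axis locking mechanism, but not the exact mapping. The sketch below shows one minimal reading in which each muscle drives one axis and locked axes are frozen; the muscle-to-axis assignment, gain, rest threshold, and class names are assumptions for illustration, not the study's decoder.

```python
import numpy as np

class AuricularCursorDecoder:
    """Illustrative 2-DOF cursor decoder driven by the contraction levels of the
    left and right auricular muscles. The mapping (one axis per muscle, per-axis
    locking, gain, and threshold) is an assumed simplification of the paradigm."""

    def __init__(self, gain=80.0, rest_threshold=0.1, dt=0.05):
        self.gain = gain                        # px/s per unit contraction
        self.rest_threshold = rest_threshold    # contraction below this = rest
        self.dt = dt                            # update period (s)
        self.pos = np.zeros(2)                  # cursor position (x, y) in px
        self.locked = np.array([False, False])  # per-axis locking mechanism

    def toggle_lock(self, axis):
        """Freeze or release one axis so the user can stop the cursor there."""
        self.locked[axis] = not self.locked[axis]

    def update(self, left_level, right_level):
        """Advance the cursor one step from normalized contraction levels [0, 1]."""
        levels = np.array([right_level, left_level])  # assumed muscle-to-axis map
        velocity = np.where(levels > self.rest_threshold, self.gain * levels, 0.0)
        velocity[self.locked] = 0.0                   # locked axes do not move
        self.pos += velocity * self.dt
        return self.pos.copy()
```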
ABSTRACT
Previous studies argued that body turns are executed in an ordered sequence: the eyes turn first, followed by the head and then by the trunk. The purpose of this study was to find out whether this sequence holds even if body turns are not explicitly instructed, but nevertheless are necessary to reach an instructed distal goal. We asked participants to shop for grocery products in a simulated supermarket. To retrieve each product, they had to walk down an aisle and then turn left or right into a corridor that led towards the target shelf. The need to make a turn was never mentioned by the experimenter, but it nevertheless was required in order to approach the target shelf. The main variables of interest were the delay between eye and head turns towards the target shelf, as well as the delay between head and trunk turns towards the target shelf. We found that both delays were consistently positive, and that their magnitude was near the top of the range reported in the literature. We conclude that the ordered sequence of eye, then head, then trunk turns can be observed not only with a proximal but also with a distal goal.
Subjects
Head Movements, Walking, Biomechanical Phenomena, Head, Humans, Orientation, Torso
ABSTRACT
To avoid crashes caused by driver error when avoiding obstacles, a driver-centered steering assist controller with an adaptive authority allocation system is proposed for cooperative control. First, the concept of space collision risk (SCR) is introduced to assess the vehicle's safety status, taking into account the relative distance and relative angle between the vehicle and the obstacle. An SCR-based authority allocation system is then designed to adaptively allocate steering authority between the human driver and the assist controller so as to reduce the SCR to zero as quickly as possible. After that, an autonomous steering controller based on the model predictive control (MPC) technique and the artificial potential field (APF) method, considering not only vehicle stability constraints but also road and obstacle constraints, is developed to aid the human driver when necessary. Finally, the proposed algorithm is simulated on the CarSim-Simulink co-simulation platform under a series of typical scenarios, demonstrating the feasibility and effectiveness of the presented shared steering control method.
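The abstract gives no equations for the SCR or the blending rule; the sketch below is a minimal illustration, assuming that risk rises as the obstacle gets closer and lies nearer the heading direction, and that steering authority is blended in proportion to that risk. The functional forms, the 30 m safe distance, and the example numbers are assumptions, not the paper's formulation.

```python
import math

def space_collision_risk(rel_distance, rel_angle, d_safe=30.0):
    """Illustrative SCR in [0, 1]: risk rises as the obstacle gets closer and as it
    lies closer to the vehicle's heading. The abstract only states that SCR
    depends on relative distance and relative angle; this form is assumed."""
    distance_term = max(0.0, 1.0 - rel_distance / d_safe)
    angle_term = max(0.0, math.cos(rel_angle))      # obstacle dead ahead -> 1
    return distance_term * angle_term

def shared_steering(driver_cmd, assist_cmd, scr):
    """Adaptive authority allocation: the assist controller's share of the steering
    command grows with SCR, so it takes over smoothly as risk increases."""
    authority = min(1.0, max(0.0, scr))             # assist controller's share
    return (1.0 - authority) * driver_cmd + authority * assist_cmd

# Example: obstacle 12 m ahead, 5 degrees off the current heading
scr = space_collision_risk(12.0, math.radians(5.0))
steer = shared_steering(driver_cmd=0.02, assist_cmd=-0.15, scr=scr)
```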
Subjects
Traffic Accidents, Algorithms, Traffic Accidents/prevention & control, Automation, Computer Simulation, Humans
ABSTRACT
When negotiating bends, car drivers perform gaze polling: their gaze shifts between guiding fixations (GFs; gaze directed 1-2 s ahead) and look-ahead fixations (LAFs; longer time headway). How might this behavior change in autonomous vehicles, where the need for constant active visual guidance is removed? In this driving simulator study, we analyzed this gaze behavior both when the driver was in charge of steering and when steering was delegated to automation, separately for the bend approach (straight section) and the bend entry (turn), and at various speeds. The analysis of gaze distributions relative to bend sections and driving conditions indicates that visual anticipation (through LAFs) is most prominent before entering the bend. Passive driving increased the proportion of LAFs, with a concomitant decrease in GFs, and increased the gaze polling frequency. Gaze polling frequency also increased at higher speeds, in particular during the bend approach when steering was not performed. LAFs encompassed a wide range of eccentricities. To account for this heterogeneity, two sub-categories serving distinct information requirements are proposed: mid-eccentricity LAFs could be more useful for anticipatory planning of steering actions, and far-eccentricity LAFs for monitoring potential hazards. The results support the idea that gaze and steering coordination may be strongly impacted in autonomous vehicles.
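As a rough illustration of the kind of classification the abstract describes, the sketch below labels road fixations as GFs or LAFs by time headway and counts GF/LAF alternations as a simple polling-rate measure. The 1-2 s guiding-fixation window comes from the abstract; treating every longer headway as an LAF and counting transitions per second are assumed operationalizations, not necessarily the study's definitions.

```python
def classify_fixation(time_headway_s, gf_max_s=2.0):
    """Label a road fixation by its time headway (seconds ahead of the vehicle)."""
    if time_headway_s <= gf_max_s:
        return "GF"      # guiding fixation: gaze ~1-2 s ahead, used for steering
    return "LAF"         # look-ahead fixation: longer headway, anticipatory

def gaze_polling_frequency(labels, duration_s):
    """Count GF<->LAF transitions per second as a simple polling-rate measure."""
    switches = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return switches / duration_s
```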