DriveLLaVA: Human-Level Behavior Decisions via Vision Language Model.
Sensors (Basel); 24(13), 2024 Jun 25.
Article in En | MEDLINE | ID: mdl-39000891
ABSTRACT
Human-level driving is the ultimate goal of autonomous driving. As the top-level decision-making component of autonomous driving, behavior decision establishes short-term driving strategies by evaluating road structure, adhering to traffic rules, and analyzing the intentions of other traffic participants. Existing behavior decision methods are primarily rule-based and exhibit insufficient generalization when faced with new, unseen driving scenarios. In this paper, we propose a novel behavior decision method that leverages the inherent generalization and commonsense reasoning abilities of vision language models (VLMs) to learn and simulate the behavior decision process of human driving. We constructed a novel instruction-following dataset, pairing a large number of image-text instructions with corresponding driving behavior labels, to support the training of the Drive Large Language and Vision Assistant (DriveLLaVA) and to enhance the transparency and interpretability of the entire decision process. DriveLLaVA is fine-tuned on this dataset using Low-Rank Adaptation (LoRA), which reduces the number of trainable parameters and significantly lowers training costs. We conducted extensive experiments on a large-scale instruction-following dataset; compared with state-of-the-art methods, DriveLLaVA demonstrated excellent behavior decision performance. DriveLLaVA is capable of handling various complex driving scenarios, showing strong robustness and generalization abilities.
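The abstract describes fine-tuning a LLaVA-style VLM with LoRA on image-text instruction data. Since the record contains no code, the Python sketch below shows how such LoRA fine-tuning is commonly configured with the Hugging Face transformers and peft libraries; the checkpoint name, adapter rank, target modules, and other hyperparameters are illustrative assumptions, not values reported in the paper.

    # Minimal sketch of LoRA fine-tuning for a LLaVA-style VLM, assuming
    # Hugging Face transformers + peft. Checkpoint name, rank, and target
    # modules are assumptions for illustration, not the paper's settings.
    import torch
    from transformers import LlavaForConditionalGeneration, AutoProcessor
    from peft import LoraConfig, get_peft_model

    base = "llava-hf/llava-1.5-7b-hf"  # assumed base checkpoint
    processor = AutoProcessor.from_pretrained(base)
    model = LlavaForConditionalGeneration.from_pretrained(
        base, torch_dtype=torch.float16
    )

    # LoRA injects low-rank adapters into the attention projections, so only
    # a small fraction of the parameters is trained. This is the mechanism
    # behind the reduced trainable-parameter count and training cost the
    # abstract reports.
    lora_cfg = LoraConfig(
        r=16,                                 # adapter rank (assumed)
        lora_alpha=32,                        # scaling factor (assumed)
        target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()  # trainable share is typically well under 1%

After this setup, the adapted model would be trained on image-text instruction pairs with driving behavior labels (as the abstract describes) using a standard causal language modeling loss; only the adapter weights are updated.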
Collection: 01-internacional
Database: MEDLINE
Main subject: Automobile Driving / Decision Making
Limits: Humans
Language: En
Journal: Sensors (Basel)
Year: 2024
Document type: Article
Affiliation country: China
Country of publication: Switzerland