
DriveMLM: Aligning Multi-Modal Large Language Models with Behavioral Planning States for Autonomous Driving



By Wenhai Wang and 15 other authors

Abstract: Large language models (LLMs) have opened up new possibilities for intelligent agents, endowing them with human-like thinking and cognitive abilities. In this work, we delve into the potential of LLMs in autonomous driving (AD). We introduce DriveMLM, an LLM-based AD framework that can perform closed-loop autonomous driving in realistic simulators. To this end, (1) we bridge the gap between language decisions and vehicle control commands by standardizing the decision states according to the off-the-shelf motion planning module; (2) we employ a multi-modal LLM (MLLM) to model the behavior planning module of a modular AD system, which takes driving rules, user commands, and inputs from various sensors (e.g., camera, lidar) and makes driving decisions while providing explanations; this model can be plugged into existing AD systems such as Apollo for closed-loop driving; (3) we design an effective data engine to collect a dataset that includes decision states and the corresponding explanation annotations for model training and evaluation. We conduct extensive experiments and show that our model achieves a 76.1 driving score on the CARLA Town05 Long benchmark, surpassing the Apollo baseline by 4.7 points under the same settings, demonstrating the effectiveness of our model. We hope this work can serve as a baseline for autonomous driving with LLMs. Code and models shall be released at this https URL.
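
To make the alignment idea concrete, the sketch below shows one way an MLLM's free-form driving decision could be mapped onto a small set of standardized behavioral planning states that an off-the-shelf motion planner consumes, as the abstract describes. This is a minimal illustration only: the state names, the `parse_decision` helper, and the planner calls in the usage comment are assumptions, not the paper's actual decision taxonomy or API.

```python
# Hypothetical sketch of decision-state alignment: the MLLM's textual decision
# is normalized into standardized planning states that a conventional motion
# planner can act on. All names here are illustrative assumptions.

from enum import Enum
from dataclasses import dataclass


class SpeedState(Enum):          # hypothetical speed decision states
    KEEP = "keep"
    ACCELERATE = "accelerate"
    DECELERATE = "decelerate"
    STOP = "stop"


class PathState(Enum):           # hypothetical path decision states
    FOLLOW_LANE = "follow_lane"
    CHANGE_LEFT = "change_left"
    CHANGE_RIGHT = "change_right"


@dataclass
class DecisionState:
    speed: SpeedState
    path: PathState
    explanation: str             # natural-language justification from the MLLM


def parse_decision(mllm_output: str) -> DecisionState:
    """Map the MLLM's textual decision onto standardized planning states."""
    text = mllm_output.lower()
    speed = next((s for s in SpeedState if s.value in text), SpeedState.KEEP)
    path = next((p for p in PathState if p.value in text), PathState.FOLLOW_LANE)
    return DecisionState(speed=speed, path=path, explanation=mllm_output)


# Usage sketch (hypothetical): the resulting state is handed to an existing
# motion planner, e.g. in an Apollo-style stack, closing the loop from
# language decision to vehicle control.
# state = parse_decision(query_mllm(camera_frames, lidar_points, user_command))
# trajectory = motion_planner.plan(state.speed, state.path)
```

Standardizing on a small, discrete state vocabulary is what lets the language model remain plug-and-play: any planner that already accepts such behavioral states can consume the MLLM's output without retraining the control stack.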

Submission history

From: Wenhai Wang
[v1] Thu, 14 Dec 2023 18:59:05 UTC (19,105 KB)
[v2] Mon, 25 Dec 2023 15:50:52 UTC (18,850 KB)


