Learning to Drive (L2D) as a Low-Cost Benchmark for Real-World Reinforcement Learning

Ari Viitala, Rinu Boney, Yi Zhao, Alexander Ilin, Juho Kannala

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


We present Learning to Drive (L2D), a low-cost benchmark for real-world reinforcement learning (RL). L2D involves a simple and reproducible experimental setup in which an RL agent has to learn to drive a Donkey car around three miniature tracks, given only monocular image observations and the speed of the car. The agent has to learn to drive from disengagements, which occur when it drives off the track. We present and open-source our training pipeline, which makes it straightforward to apply any existing RL algorithm to the task of autonomous driving with a Donkey car. We test imitation learning, state-of-the-art model-free, and model-based algorithms on the proposed L2D benchmark. Our results show that existing RL algorithms can learn to drive the car from scratch in less than five minutes of interaction. We demonstrate that RL algorithms can learn from sparse and noisy disengagement signals to drive even faster than imitation learning and a human operator.
Original language: English
Title of host publication: 20th International Conference on Advanced Robotics, ICAR
Number of pages: 7
ISBN (Electronic): 978-1-6654-3684-7
ISBN (Print): 978-1-6654-3685-4
Publication status: Published - Jan 2022
MoE publication type: A4 Article in a conference publication
Event: International Conference on Advanced Robotics - Virtual, online, Ljubljana, Slovenia
Duration: 7 Dec 2021 - 10 Dec 2021
Conference number: 20


Conference: International Conference on Advanced Robotics
Abbreviated title: ICAR

