Reinforcement Learning-Based Plug-in Electric Vehicle Charging with Forecasted Price

Research output: Contribution to journal › Article › Scientific › Peer-reviewed

Abstract

This paper proposes a novel demand response method that aims at reducing the long-term cost of charging the battery of an individual plug-in electric vehicle (PEV). The problem is cast as a daily decision-making problem of choosing the amount of energy to be charged into the PEV battery within a day. We model the problem as a Markov decision process with unknown transition probabilities. A batch reinforcement learning algorithm is proposed for learning an optimal cost-reducing charging policy from a batch of transition samples and for making cost-reducing charging decisions in new situations. To capture the day-to-day variation in electricity charging costs, the method makes use of actual electricity prices for the current day and predicted electricity prices for the following day. A Bayesian neural network is employed for predicting the electricity prices. The reinforcement learning training data set is constructed from historical prices, and a linear programming based method is developed for creating a data set of optimal charging decisions. Different charging scenarios are simulated for each day of the historical time frame using the set of past electricity prices. Simulation results using real-world pricing data demonstrate cost savings of 10%-50% for the PEV owner when using the proposed charging method.
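The linear programming step mentioned in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical formulation (the paper's exact constraints, variables, and parameter values are not given here): it chooses hourly charging amounts that meet a fixed energy demand at minimum cost, given known hourly prices and a charger power limit. All names and numbers (`max_rate`, `demand`, the price vector) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_charging(prices, max_rate=6.0, demand=18.0):
    """Hypothetical LP sketch: pick hourly charging amounts x_t (kWh)
    minimizing sum_t price_t * x_t, subject to meeting a total energy
    demand and a per-hour charger power limit. Not the paper's exact model."""
    n = len(prices)
    c = np.asarray(prices, dtype=float)   # objective: hourly prices
    A_eq = np.ones((1, n))                # total charged energy ...
    b_eq = [demand]                       # ... must equal the demand
    bounds = [(0.0, max_rate)] * n        # 0 <= x_t <= charger limit
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x, res.fun

# Illustrative 24-hour price profile (EUR/kWh) with cheap midday hours:
prices = [0.30] * 8 + [0.10] * 4 + [0.30] * 12
schedule, cost = optimal_charging(prices)
```

With 4 cheap hours at a 6 kWh/h limit, the LP places all 18 kWh of demand into the cheap hours, yielding a cost of 1.8. Repeating such a solve for every day of the historical price record produces the kind of optimal-decision data set the abstract describes as training data for the batch reinforcement learning algorithm.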

Details

Original language: English
Article number: 7553556
Pages (from-to): 3674-3684
Journal: IEEE Transactions on Vehicular Technology
Volume: 66
Issue number: 5
Early online date: 2016
Publication status: Published - 2017
MoE publication type: A1 Journal article-refereed

Research areas

• cost reduction, demand response, plug-in electric vehicle, price prediction, reinforcement learning, smart charging
