Q-Learning Based Autonomous Control of the Auxiliary Power Network of a Ship

Research output: Journal article, peer-reviewed

Standard

Q-Learning Based Autonomous Control of the Auxiliary Power Network of a Ship. / Huotari, Janne; Ritari, Antti; Ojala, Risto; Vepsäläinen, Jari; Tammi, Kari.

In: IEEE Access, Vol. 7, 30.10.2019, pp. 152879-152890.

BibTeX

@article{de23476f491f4061a9cf6c304cca769c,
title = "Q-Learning Based Autonomous Control of the Auxiliary Power Network of a Ship",
abstract = "We present a reinforcement learning (RL) model that is based on Q-learning for the autonomous control of ship auxiliary power networks. The development and application of the proposed model is demonstrated using a case-study ship as the platform. The auxiliary power network of the ship is represented as a Markov Decision Process (MDP). Q-learning is then used to teach an agent to operate in this MDP by choosing actions in each operating state which would minimize fuel consumption while also respecting the boundary conditions of the network. The presented work is based on an extensive data set received from one of the cruise-line operators on the Baltic Sea. This data set was preprocessed to extract information for the state representation of the auxiliary network, which was used for training and validating the model. As a result, it is shown that the developed method produces an autonomous control policy for the auxiliary power network that outperforms the current human operated manual control of the case-study ship. An average of 0.9 {\%} fuel oil savings are attained over the analyzed round-trips with control that displayed similar robustness against blackouts as the current operation of the ship. This amounts to 32 tons of fuel oil saved annually. In addition, it is shown that the developed model can be reconfigured for different levels of robustness, depending on the preferred trade-off between maintained reserve power and fuel savings.",
keywords = "Autonomous shipping, energy consumption reduction, Ferry, machinery, Q-learning, reinforcement learning, ship",
author = "Janne Huotari and Antti Ritari and Risto Ojala and Jari Veps{\"a}l{\"a}inen and Kari Tammi",
year = "2019",
month = "10",
day = "30",
doi = "10.1109/ACCESS.2019.2947686",
language = "English",
volume = "7",
pages = "152879--152890",
journal = "IEEE Access",
issn = "2169-3536",
}
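
For readers who want to connect the abstract to concrete mechanics, the sketch below shows plain tabular Q-learning with an epsilon-greedy policy and a constraint penalty, which is the general technique the paper builds on. Everything specific in it (the state and action counts, the toy fuel and reserve-power models, all constants) is an illustrative assumption, not the paper's actual model of the ship's auxiliary power network.

import numpy as np

N_STATES = 16      # assumed: discretized operating states (e.g. power demand bins)
N_ACTIONS = 4      # assumed: generator on/off configurations
ALPHA = 0.1        # learning rate
GAMMA = 0.95       # discount factor
EPSILON = 0.1      # exploration rate
PENALTY = 100.0    # assumed penalty for violating the reserve-power constraint

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Toy stand-in for the data-driven environment: returns the next state
    and a reward (negative fuel consumption, minus a penalty if the chosen
    configuration leaves too little reserve power). Purely illustrative."""
    next_state = rng.integers(N_STATES)                # placeholder dynamics
    fuel = 1.0 + 0.2 * action + 0.05 * state           # placeholder fuel model
    violates_reserve = action == 0 and state > N_STATES // 2  # placeholder constraint
    reward = -fuel - (PENALTY if violates_reserve else 0.0)
    return next_state, reward

for episode in range(5000):
    state = rng.integers(N_STATES)
    for t in range(100):
        # Epsilon-greedy action selection.
        if rng.random() < EPSILON:
            action = rng.integers(N_ACTIONS)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Standard Q-learning update toward the bootstrapped target.
        Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
        state = next_state

policy = Q.argmax(axis=1)  # greedy control policy per operating state

The greedy policy extracted at the end maps each discretized operating state to a generator configuration; a penalty weight like PENALTY above is the natural knob for the robustness-versus-fuel-savings trade-off the abstract mentions.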

RIS

TY - JOUR

T1 - Q-Learning Based Autonomous Control of the Auxiliary Power Network of a Ship

AU - Huotari, Janne

AU - Ritari, Antti

AU - Ojala, Risto

AU - Vepsäläinen, Jari

AU - Tammi, Kari

PY - 2019/10/30

Y1 - 2019/10/30

N2 - We present a reinforcement learning (RL) model that is based on Q-learning for the autonomous control of ship auxiliary power networks. The development and application of the proposed model is demonstrated using a case-study ship as the platform. The auxiliary power network of the ship is represented as a Markov Decision Process (MDP). Q-learning is then used to teach an agent to operate in this MDP by choosing actions in each operating state which would minimize fuel consumption while also respecting the boundary conditions of the network. The presented work is based on an extensive data set received from one of the cruise-line operators on the Baltic Sea. This data set was preprocessed to extract information for the state representation of the auxiliary network, which was used for training and validating the model. As a result, it is shown that the developed method produces an autonomous control policy for the auxiliary power network that outperforms the current human operated manual control of the case-study ship. An average of 0.9 % fuel oil savings are attained over the analyzed round-trips with control that displayed similar robustness against blackouts as the current operation of the ship. This amounts to 32 tons of fuel oil saved annually. In addition, it is shown that the developed model can be reconfigured for different levels of robustness, depending on the preferred trade-off between maintained reserve power and fuel savings.

AB - We present a reinforcement learning (RL) model that is based on Q-learning for the autonomous control of ship auxiliary power networks. The development and application of the proposed model is demonstrated using a case-study ship as the platform. The auxiliary power network of the ship is represented as a Markov Decision Process (MDP). Q-learning is then used to teach an agent to operate in this MDP by choosing actions in each operating state which would minimize fuel consumption while also respecting the boundary conditions of the network. The presented work is based on an extensive data set received from one of the cruise-line operators on the Baltic Sea. This data set was preprocessed to extract information for the state representation of the auxiliary network, which was used for training and validating the model. As a result, it is shown that the developed method produces an autonomous control policy for the auxiliary power network that outperforms the current human operated manual control of the case-study ship. An average of 0.9 % fuel oil savings are attained over the analyzed round-trips with control that displayed similar robustness against blackouts as the current operation of the ship. This amounts to 32 tons of fuel oil saved annually. In addition, it is shown that the developed model can be reconfigured for different levels of robustness, depending on the preferred trade-off between maintained reserve power and fuel savings.

KW - Autonomous shipping

KW - energy consumption reduction

KW - Ferry

KW - machinery

KW - Q-learning

KW - reinforcement learning

KW - ship

U2 - 10.1109/ACCESS.2019.2947686

DO - 10.1109/ACCESS.2019.2947686

M3 - Article

VL - 7

SP - 152879

EP - 152890

JO - IEEE Access

JF - IEEE Access

SN - 2169-3536

ER -
