Abstract
When deploying machine learning algorithms in the real world, guaranteeing safety is an essential requirement. Existing safe learning approaches typically consider continuous variables, i.e., regression tasks. In practice, however, robotic systems are also subject to discrete, external environmental changes, e.g., having to carry objects of certain weights or operating on frozen, wet, or dry surfaces. Such influences can be modeled as discrete context variables. In the existing literature, such contexts are, if considered at all, mostly assumed to be known. In this work, we drop this assumption and show how to perform safe learning when the context variables cannot be measured directly. To achieve this, we derive frequentist guarantees for multiclass classification, allowing us to estimate the current context from measurements. Furthermore, we propose an approach for identifying contexts through experiments. We discuss under which conditions the theoretical guarantees are retained and demonstrate the applicability of our algorithm on a Furuta pendulum with camera measurements, where different attached weights serve as contexts.
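The core idea of the abstract (estimate a discrete context from noisy measurements with a multiclass classifier, attach a frequentist bound to its error, and only act when that bound is small enough) can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' method: the three Gaussian "contexts", the nearest-centroid classifier, and the 0.2 safety threshold are all illustrative assumptions, and the bound used is a standard Hoeffding confidence bound on the held-out misclassification rate.

```python
import numpy as np

def hoeffding_upper_bound(emp_err, n, delta):
    """Frequentist bound: with probability >= 1 - delta over the n
    validation samples, true_err <= emp_err + sqrt(ln(1/delta) / (2n))."""
    return emp_err + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

rng = np.random.default_rng(0)

# Hypothetical setup: 3 discrete contexts (e.g., carried weights), each
# producing scalar noisy measurements around a context-specific mean.
means = np.array([0.0, 2.0, 4.0])
n_train, n_val = 200, 200

def sample(n):
    c = rng.integers(0, 3, size=n)          # true (hidden) context labels
    x = means[c] + rng.normal(0.0, 0.5, n)  # noisy camera-like measurements
    return x, c

x_tr, c_tr = sample(n_train)
x_va, c_va = sample(n_val)

# Simple multiclass classifier (nearest centroid), standing in for
# whatever classifier the guarantees are derived for.
centroids = np.array([x_tr[c_tr == k].mean() for k in range(3)])
pred = np.argmin(np.abs(x_va[:, None] - centroids[None, :]), axis=1)

emp_err = np.mean(pred != c_va)
bound = hoeffding_upper_bound(emp_err, n_val, delta=0.05)

# Gate the downstream safe-learning step on the high-confidence bound,
# not on the raw empirical error.
safe_to_act = bound < 0.2
print(emp_err, round(bound, 3), safe_to_act)
```

The design point is that the decision to trust the estimated context uses the upper confidence bound rather than the empirical error, so the failure probability of the gating step is controlled at level `delta` in the frequentist sense.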
| Original language | English |
| --- | --- |
| Pages (from-to) | 1828-1841 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Robotics |
| Volume | 40 |
| Early online date | 15 Jan 2024 |
| DOIs | |
| Publication status | Published - 2024 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- multiclass classification
- safe reinforcement learning
- frequentist bounds