Abstract
In vision-based behaviour cloning (BC), conventional image augmentations like Random Crop and Colour Jitter often fall short when addressing substantial visual domain shifts, such as variations in shadows, distractors, and backgrounds. Superimposition-based augmentations, which blend in-domain and out-of-domain images, have shown promise for improving model generalisation in the computer vision community, but their suitability for BC remains uncertain due to the need to preserve task-critical semantics, spatio-temporal relationships, and agent-target interactions. To address this, we introduce RoboSaGA, a Saliency-Guided Augmentation method within the superimposition family, tailored to vision-based BC. RoboSaGA dynamically adjusts augmentation intensity per pixel based on policy-driven saliency, enabling aggressive augmentation in task-trivial areas while preserving task-critical information. Moreover, it integrates seamlessly into existing architectures without requiring structural changes or additional learning objectives. Empirical evaluations in both simulated and real-world settings show that RoboSaGA maintains in-domain performance while significantly enhancing robustness to visual domain shifts, including distractors, background variations, and changes in lighting and shadow. Code available at: https://github.com/Zheyu-Zhuang/RoboSaGA.
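The core idea of per-pixel, saliency-weighted superimposition lends itself to a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's actual implementation (see the linked repository for that): the function names, the `max_alpha` parameter, the tensor shapes, and the gradient-based saliency extraction are all assumptions made for illustration.

```python
import torch

def saliency_guided_blend(obs, overlay, saliency, max_alpha=0.5):
    """Blend an out-of-domain image into the observation, scaling the
    per-pixel blend weight by (1 - saliency): aggressive augmentation
    where the policy does not attend, little where it does.

    obs, overlay: (C, H, W) tensors in [0, 1]
    saliency:     (1, H, W) tensor in [0, 1], high in task-critical regions
    """
    alpha = max_alpha * (1.0 - saliency)   # per-pixel augmentation intensity
    return (1.0 - alpha) * obs + alpha * overlay

def policy_saliency(policy, obs):
    """One plausible way to obtain a policy-driven saliency map: the
    normalised gradient magnitude of the policy output w.r.t. the input
    pixels. (Assumption: the paper's exact saliency extraction may differ.)"""
    obs = obs.clone().requires_grad_(True)
    policy(obs.unsqueeze(0)).sum().backward()
    sal = obs.grad.abs().mean(dim=0, keepdim=True)   # (1, H, W)
    return sal / (sal.max() + 1e-8)

# Usage with random stand-ins for the observation and overlay image:
obs = torch.rand(3, 128, 128)
overlay = torch.rand(3, 128, 128)
saliency = torch.rand(1, 128, 128)   # stand-in for a policy-derived map
aug = saliency_guided_blend(obs, overlay, saliency)
```

Because the blend is a pure function of the inputs, it slots into an existing data-loading pipeline without architectural changes or extra losses, consistent with the abstract's claim of seamless integration.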
| Original language | English |
| --- | --- |
| Pages (from-to) | 4314-4331 |
| Number of pages | 18 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 270 |
| Publication status | Published - 2025 |
| MoE publication type | A4 Conference publication |
| Event | Conference on Robot Learning, Munich, Germany. Duration: 6 Nov 2024 → 9 Nov 2024. https://www.corl.org/ |
Keywords
- Behaviour Cloning
- Data Augmentation
- Visual Generalisation