RP1M: A Large-Scale Motion Dataset for Piano Playing with Bimanual Dexterous Robot Hands

Yi Zhao*, Le Chen*, Jan Schneider, Quankai Gao, Juho Kannala, Bernhard Schölkopf, Joni Pajarinen, Dieter Büchler

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Scientific › peer-review


Abstract

It has been a long-standing research goal to endow robot hands with human-level dexterity. Bimanual robot piano playing combines the challenges of dynamic tasks, such as generating fast yet precise motions, with slower but contact-rich manipulation problems. Although reinforcement learning-based approaches have shown promising results on single tasks, these methods struggle in a multi-song setting. Our work aims to close this gap and thereby enable imitation learning approaches for robot piano playing at scale. To this end, we introduce the Robot Piano 1 Million (RP1M) dataset, containing bimanual robot piano playing motion data of more than one million trajectories. We formulate finger placement as an optimal transport problem, thus enabling automatic annotation of vast amounts of unlabeled songs.
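The abstract does not spell out the optimal transport formulation, but the core idea of matching fingers to keys can be illustrated with a minimal sketch. The snippet below assumes a toy 1-D setting (finger and key positions along the keyboard axis, squared-distance cost) and solves the resulting discrete assignment problem by brute force; the function name and cost are illustrative assumptions, not the paper's actual implementation.

```python
import itertools

def assign_fingers(finger_pos, key_pos):
    """Toy finger-placement-as-assignment sketch (not the paper's method).

    Given 1-D positions of the fingers and of the keys that must be
    pressed, find the finger-to-key assignment that minimizes the total
    squared travel distance -- a discrete special case of optimal
    transport. Returns a dict mapping key index -> finger index.
    """
    n_keys = len(key_pos)
    best_cost, best_perm = float("inf"), None
    # Enumerate every ordered choice of n_keys distinct fingers.
    for perm in itertools.permutations(range(len(finger_pos)), n_keys):
        cost = sum((finger_pos[f] - key_pos[k]) ** 2
                   for k, f in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return {k: f for k, f in enumerate(best_perm)}

# Five fingers resting at positions 0..8; two keys to press.
placement = assign_fingers([0, 2, 4, 6, 8], [1.9, 8.2])
# Each pressed key is served by the nearest available finger.
```

For realistic problem sizes one would replace the brute-force search with a polynomial-time assignment solver (e.g. the Hungarian algorithm); the sketch only conveys the matching objective.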

Original language: English
Pages (from-to): 5184-5203
Number of pages: 20
Journal: Proceedings of Machine Learning Research
Volume: 270
Publication status: Published - 2025
MoE publication type: A4 Conference publication
Event: Conference on Robot Learning - Munich, Germany
Duration: 6 Nov 2024 - 9 Nov 2024
https://www.corl.org/

Keywords

  • Bimanual dexterous robot hands
  • dataset for robot piano playing
  • imitation learning
  • robot learning at scale
