DDGC: Generative Deep Dexterous Grasping in Clutter

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Recent advances in multi-fingered robotic grasping have enabled fast 6-degrees-of-freedom (DOF) single-object grasping. Multi-finger grasping in cluttered scenes, on the other hand, remains largely unexplored because of the added difficulty of reasoning over obstacles, which greatly increases the computational time needed to generate high-quality collision-free grasps. In this work, we address these limitations by introducing DDGC, a fast generative multi-finger grasp sampling method that can generate high-quality grasps in cluttered scenes from a single RGB-D image. DDGC is built as a network that encodes scene information to produce coarse-to-fine collision-free grasp poses and configurations. We experimentally benchmark DDGC against two state-of-the-art methods on 1200 simulated cluttered scenes and 7 real-world scenes. The results show that DDGC outperforms the baselines in synthesizing high-quality grasps and removing clutter, and that it is 4-5 times faster than GraspIt!. This, in turn, opens the door to using multi-finger grasps in practical applications, which has so far been limited by the excessive computation time required by other methods. Code and videos are available at https://irobotics.aalto.fi/ddgc/.
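
To make the abstract's input-output contract concrete, the sketch below shows, under stated assumptions, what a generative multi-finger grasp sampler of this kind consumes and produces: a single RGB-D image in, a ranked set of 6-DOF palm poses plus finger joint configurations out. The names (Grasp, sample_grasps) and the 16-DOF hand are illustrative assumptions, not DDGC's actual API, and the learned forward pass is replaced by random placeholders so the example stays self-contained.

import numpy as np
from dataclasses import dataclass
from typing import List

@dataclass
class Grasp:
    # Hypothetical container; DDGC's real output format may differ.
    pose: np.ndarray          # 4x4 homogeneous palm transform (the 6-DOF grasp pose)
    joint_config: np.ndarray  # finger joint angles, one entry per actuated hand DOF
    score: float              # predicted grasp quality, higher is better

def sample_grasps(rgb: np.ndarray, depth: np.ndarray, num_samples: int = 32) -> List[Grasp]:
    """Stand-in for a learned sampler: a real implementation would encode the
    RGB-D scene with a network and decode coarse-to-fine, collision-free grasp
    candidates. Random placeholders are returned here to show the expected shapes."""
    assert rgb.shape[:2] == depth.shape, "RGB and depth must be pixel-aligned"
    rng = np.random.default_rng(0)
    grasps = []
    for _ in range(num_samples):
        pose = np.eye(4)
        pose[:3, 3] = rng.uniform(-0.5, 0.5, size=3)  # placeholder palm translation (m)
        joints = rng.uniform(0.0, 1.5, size=16)       # e.g. a 16-DOF multi-finger hand
        grasps.append(Grasp(pose, joints, float(rng.random())))
    # A downstream system would typically attempt the highest-scoring candidate first.
    return sorted(grasps, key=lambda g: g.score, reverse=True)

Calling sample_grasps with a pixel-aligned H x W x 3 color image and H x W depth map returns candidates sorted by predicted quality; in a real pipeline the placeholder body would be the network's encode-decode step.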

Original language: English
Article number: 9483683
Pages (from-to): 6899-6906
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 4
DOIs
Publication status: Published - Oct 2021
MoE publication type: A1 Journal article-refereed

Keywords

  • Clutter
  • Collision avoidance
  • Deep Learning in Grasping and Manipulation
  • Dexterous Manipulation
  • Grasping
  • Grippers
  • Image coding
  • Robots
  • Shape
