Beyond Top-Grasps Through Scene Completion

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review



Current end-to-end grasp planning methods propose grasps within milliseconds to seconds and attain high grasp success rates on a diverse set of objects, but often at the cost of constraining the workspace to top-grasps. In this work, we present a method that allows end-to-end top-grasp planning methods to generate full six-degree-of-freedom grasps using a single RGB-D view as input. This is achieved by estimating the complete shape of the object to be grasped, simulating different viewpoints of the completed object, passing the simulated viewpoints to an end-to-end grasp generation method, and finally executing the overall best grasp. The method was experimentally validated on a Franka Emika Panda by comparing 429 grasps generated by the state-of-the-art Fully Convolutional Grasp Quality CNN on both simulated and real camera viewpoints. The results show statistically significant improvements in grasp success rate when using simulated viewpoints instead of real camera viewpoints, especially when the real camera viewpoint is angled.
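The pipeline the abstract describes — complete the object's shape from one RGB-D view, render simulated viewpoints of the completed shape, score grasps per viewpoint with an end-to-end planner, and execute the best one — can be sketched as follows. This is a minimal illustration of the data flow only: `complete_shape` and `score_grasps` are hypothetical stand-ins (the paper uses a learned shape-completion network and the Fully Convolutional Grasp Quality CNN, respectively), and the viewpoint "rendering" here is just a rotation of the point cloud.

```python
import numpy as np


def complete_shape(partial_points):
    # Stand-in for learned shape completion: mirror the partial cloud
    # about its centroid so the interface (partial cloud in, denser
    # cloud out) is illustrated. NOT the paper's actual method.
    centroid = partial_points.mean(axis=0)
    mirrored = 2.0 * centroid - partial_points
    return np.vstack([partial_points, mirrored])


def simulate_viewpoints(completed_points, n_views=8):
    # "Render" the completed object from n_views virtual cameras spaced
    # evenly around the vertical axis (rotation of the cloud stands in
    # for an actual depth-image render).
    views = []
    for angle in np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False):
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        views.append(completed_points @ rot.T)
    return views


def score_grasps(view):
    # Hypothetical stand-in for an end-to-end top-grasp planner such as
    # a grasp-quality CNN: return (best grasp point, quality) for one
    # simulated view. Here: grasp the highest point, quality = its height.
    idx = int(np.argmax(view[:, 2]))
    return view[idx], float(view[idx, 2])


def plan_best_grasp(partial_points, n_views=8):
    # Full pipeline: complete shape -> simulate views -> score each view
    # with the grasp planner -> keep the overall best grasp.
    completed = complete_shape(partial_points)
    views = simulate_viewpoints(completed, n_views)
    scored = [score_grasps(v) for v in views]
    return max(scored, key=lambda grasp_quality: grasp_quality[1])
```

In the real system the winning grasp would then be transformed back into the robot's frame and executed; here the sketch only returns it.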
Original language: English
Title of host publication: Proceedings of the IEEE Conference on Robotics and Automation, ICRA 2020
Number of pages: 7
ISBN (Electronic): 978-1-7281-7395-5
Publication status: Published - 2020
MoE publication type: A4 Article in a conference publication
Event: IEEE International Conference on Robotics and Automation - Online
Duration: 31 May 2020 – 31 Aug 2020

Publication series

Name: IEEE International Conference on Robotics and Automation
ISSN (Print): 2152-4092
ISSN (Electronic): 2379-9552


Conference: IEEE International Conference on Robotics and Automation
Abbreviated title: ICRA


  • Shape
  • Cameras
  • Grasping
  • Planning
  • Robot vision systems
  • Pipelines

