Learning task constraints for robot grasping using graphical models

D. Song*, K. Huebner, V. Kyrki, D. Kragic

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

58 Citations (Scopus)


This paper studies the learning of task constraints that allow grasps to be generated in a goal-directed manner. We show how an object representation and a grasp generated on it can be integrated with the task requirements. The scientific problems tackled are (i) the identification and modeling of such task constraints, and (ii) the integration between a semantically expressed goal of a task and quantitative constraint functions defined in the continuous object-action domains. We first define constraint functions given a set of object and action attributes, and then model the relationships between object, action, and constraint features and the task using Bayesian networks. The probabilistic framework deals with uncertainty, combines a priori knowledge with observed data, and allows inference on target attributes given only partial observations. We present a system designed to structure the data generation and constraint learning processes so that it is applicable to new tasks, embodiments, and sensory data. The application of the task constraint model is demonstrated in a goal-directed imitation experiment.
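The central idea in the abstract, inferring target attributes from partial observations via a Bayesian network over task, object, and action variables, can be illustrated with a toy sketch. The network structure, variable names, and probabilities below are invented for illustration and are not the paper's actual model or data; the sketch only shows how conditioning on a partial observation (here, an object attribute alone) yields a posterior over tasks by enumeration.

```python
# Toy sketch (hypothetical variables and numbers, not the paper's model):
# a naive-Bayes-style network Task -> {ObjCategory, GraspPos}, queried
# with partial evidence, mirroring the inference pattern the paper describes.

P_task = {"pour": 0.5, "hand_over": 0.5}  # prior P(T)

P_obj = {  # P(O | T): object attribute given task
    "pour":      {"container": 0.9, "other": 0.1},
    "hand_over": {"container": 0.4, "other": 0.6},
}

P_grasp = {  # P(A | T): grasp position given task
    "pour":      {"side": 0.8, "top": 0.2},
    "hand_over": {"side": 0.5, "top": 0.5},
}

def posterior_task(obj=None, grasp=None):
    """P(T | partial evidence) by enumerating the joint and normalizing.

    Unobserved children are simply left out of the product, which is
    equivalent to marginalizing them in this tree-structured network.
    """
    scores = {}
    for task, prior in P_task.items():
        s = prior
        if obj is not None:
            s *= P_obj[task][obj]
        if grasp is not None:
            s *= P_grasp[task][grasp]
        scores[task] = s
    z = sum(scores.values())
    return {task: s / z for task, s in scores.items()}

# Partial observation: we only see that the object is a container.
print(posterior_task(obj="container"))  # pour becomes the more likely task
```

For the larger networks used in practice, exact enumeration is replaced by standard inference algorithms (e.g. variable elimination), but the conditioning pattern is the same.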

Original language: English
Title of host publication: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings
Number of pages: 7
Publication status: Published - 2010
MoE publication type: A4 Article in a conference publication
Event: IEEE/RSJ International Conference on Intelligent Robots and Systems - Taipei, Taiwan, Republic of China
Duration: 18 Oct 2010 – 22 Oct 2010
Conference number: 23


Conference: IEEE/RSJ International Conference on Intelligent Robots and Systems
Abbreviated title: IROS
Country: Taiwan, Republic of China

