Interactive Perception-Action-Learning for Modelling Objects

Project Details


Manipulating everyday objects without detailed prior models is still beyond the capabilities of existing robots. This is due to the many challenges posed by diverse types of objects: manipulation requires an understanding and an accurate model of physical properties such as shape, mass, friction, and elasticity. Many objects are deformable, articulated, or even organic with no well-defined shape (e.g., plants), so a fixed model is insufficient. On top of this, objects may be difficult to perceive, typically because of cluttered scenes or complex lighting and reflectance properties such as specularity or partial transparency. Creating such rich representations of objects is beyond the current datasets and benchmarking practices used for grasping and manipulation. In this project we will develop an automated interactive perception pipeline for building such rich digitizations.
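As an illustration only, the kind of rich object representation described above could be sketched as a record that starts with unknown physical properties and is filled in through interaction. All names and fields below are hypothetical assumptions, not the project's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectModel:
    """Hypothetical per-object record; properties begin unknown and are
    estimated through interactive perception (e.g., pushing, lifting)."""
    name: str
    mass_kg: Optional[float] = None          # unknown until estimated
    friction_coeff: Optional[float] = None   # e.g., estimated by pushing
    elasticity: Optional[float] = None       # e.g., estimated by squeezing
    is_deformable: bool = False
    is_articulated: bool = False
    mesh_vertices: list = field(default_factory=list)  # shape, refined over time

    def fixed_model_sufficient(self) -> bool:
        """A fixed rigid model only suffices when all properties are known
        and the object is neither deformable nor articulated."""
        known = all(v is not None for v in
                    (self.mass_kg, self.friction_coeff, self.elasticity))
        return known and not (self.is_deformable or self.is_articulated)
```

In this sketch, a plant or a pair of scissors would report that a fixed model is insufficient regardless of how many properties have been measured, reflecting the paragraph's point about deformable and articulated objects.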
Effective start/end date: 01/05/2019 to 30/11/2022

Collaborative partners

  • Aalto University (lead)
  • Suomen Akatemia (Project partner)
  • Imperial College of Science, Technology and Medicine

