About the Workshop

Since its early development, artificial perception in robotics has been reduced to sensor signal processing and pattern recognition. This approach has produced feature extraction and classification methods that allow robots to detect predetermined objects or events in their environment, as long as the robot remains well within expected working conditions. More recently, representation learning methods (such as deep learning) have removed from the engineer the burden of selecting adequate feature representations, instead extracting suitable representations automatically from data. These methods further improve pattern recognition performance by discovering feature representations that would have been difficult for a human designer to formalize. However, although these data-driven methods are often claimed to be unsupervised, they still require a human to complete the (artificial) perception process: the human has to attach labels to the extracted patterns to provide them with abstract semantic content. This dependency on the human's interpretation of the sensory input reveals a major difference between what roboticists call perception and Perception understood as the ability of humans and other animals to capture the state of their environment. Artificial perception, while efficient at detecting certain patterns, so far remains unusable without a human providing the meaning. By contrast, animals explore their world and autonomously learn what it is made of and how its content can be used, without external influence. To overcome this major limitation, we must reconsider the question: what is perception? This fundamental problem should be of particular concern to the developmental robotics community, as understanding perception is a cornerstone of modeling cognitive development in robots. Recently, novel accounts of perception have been proposed in the cognitive science literature.
In particular, the Sensorimotor Contingencies (SMC) theory and Predictive Processing (PP) theories promise to provide research directions towards genuinely perceiving robots. The former argues that perception should be understood as an intrinsic component of an agent's interaction with the world. The latter suggests that perception relies on building a hierarchical generative model of the agent's experience. This workshop aims to explore these (arguably compatible) theories and to discuss how they can be applied to robots and cognitive systems.