Embodiment-Based Object Recognition for Vision-Based Mobile Agents
Kazunori Terada, Takayuki Nakamura, Hideaki Takeda, and Tsukasa Ogasawara
Dept. of Information Systems, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara 630-0101, Japan
In this paper, we propose a new architecture for object recognition based on the concept of “embodiment” as a primitive function for a cognitive robot. We define the term “embodiment” as the physical extent of the agent itself, its locomotive ability, and its sensors. Based on this concept, an object is represented by reaching action paths, which correspond to a set of movement sequences by which the agent reaches the object. Such behavior is acquired by trial and error based on visual and tactile information. Visual information is used to obtain a sensorimotor mapping, which represents the relationship between changes in an object’s appearance and the agent’s movement. Tactile information is used to evaluate the change in the object’s physical condition caused by such movement. In this way, the agent can recognize an object regardless of its position and orientation in the environment. To demonstrate the feasibility of our method, we present experimental results from computer simulation.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Copyright © 2001 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.