
JRM Vol.13 No.1 pp. 88-95 (2001)
doi: 10.20965/jrm.2001.p0088

Paper:

Embodiment-Based Object Recognition for Vision-Based Mobile Agents

Kazunori Terada, Takayuki Nakamura, Hideaki Takeda, and Tsukasa Ogasawara

Dept. of Information Systems, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara 630-0101, Japan

Received: August 31, 2000
Accepted: December 15, 2000
Published: February 20, 2001
Keywords: object cognition, embodiment, vision
Abstract
In this paper, we propose a new architecture for object recognition based on the concept of "embodiment" as a primitive function for a cognitive robot. We define "embodiment" as the extent of the agent itself, its locomotive ability, and its sensors. Based on this concept, an object is represented by reaching action paths, i.e., a set of movement sequences by which the agent reaches the object. Such behavior is acquired by trial-and-error using visual and tactile information. Visual information is used to obtain a sensorimotor mapping that represents the relationship between changes in the object's appearance and the movements of the agent. Tactile information is used to evaluate the change in the object's physical condition caused by such movements. In this way, the agent can recognize an object regardless of its position and orientation in the environment. To demonstrate the feasibility of our method, we present experimental results from computer simulations.
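As a rough illustration of the idea only, and not the authors' implementation, the Python sketch below shows one way the trial-and-error acquisition of reaching action paths could be simulated: an agent executes random movements in a toy grid world, records the resulting (appearance change, movement) pairs as a crude sensorimotor mapping, and keeps any sequence that ends in tactile contact with the object. The grid world, the feedback models, and all function names are illustrative assumptions.

# Hypothetical sketch of the embodiment-based representation described in the
# abstract: an object is represented by a set of "reaching action paths"
# (movement sequences that bring the agent into contact with the object),
# acquired by trial-and-error from simulated visual and tactile feedback.
# Everything here is an illustrative assumption, not the paper's method.

import random

ACTIONS = {"forward": (0, 1), "back": (0, -1), "left": (-1, 0), "right": (1, 0)}

def appearance(agent_pos, object_pos):
    """Crude stand-in for visual input: the object's offset in the agent's view."""
    return (object_pos[0] - agent_pos[0], object_pos[1] - agent_pos[1])

def tactile_contact(agent_pos, object_pos):
    """Crude stand-in for tactile input: True when the agent touches the object."""
    return agent_pos == object_pos

def try_random_path(start, object_pos, max_steps=10):
    """One trial: execute random movements; return the path if contact occurs."""
    pos, path = start, []
    for _ in range(max_steps):
        name, (dx, dy) = random.choice(list(ACTIONS.items()))
        before = appearance(pos, object_pos)
        pos = (pos[0] + dx, pos[1] + dy)
        after = appearance(pos, object_pos)
        # Sensorimotor pair: (appearance change, movement command).
        path.append(((before, after), name))
        if tactile_contact(pos, object_pos):
            return path
    return None  # trial failed: no contact within max_steps

def acquire_reaching_paths(object_pos, trials=500):
    """Trial-and-error acquisition of the object's set of reaching action paths."""
    paths = []
    for _ in range(trials):
        start = (random.randint(-3, 3), random.randint(-3, 3))
        path = try_random_path(start, object_pos)
        if path:
            paths.append(path)
    return paths

if __name__ == "__main__":
    # The accumulated set of paths serves as the object's representation,
    # independent of any particular start position or orientation.
    representation = acquire_reaching_paths(object_pos=(0, 0))
    print(f"acquired {len(representation)} reaching action paths")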
Cite this article as:
K. Terada, T. Nakamura, H. Takeda, and T. Ogasawara, “Embodiment-Based Object Recognition for Vision-Based Mobile Agents,” J. Robot. Mechatron., Vol.13 No.1, pp. 88-95, 2001.