
JRM Vol.8 No.3 pp. 272-277
doi: 10.20965/jrm.1996.p0272
(1996)

Paper:

State Estimation for Mobile Robot using Partially Observable Markov Decision Process

Daehee Kang, Hideki Hashimoto and Fumio Harashima

Institute of Industrial Science, The University of Tokyo, 7-22-1, Roppongi, Minato-ku, Tokyo 106, Japan

Received:
January 24, 1996
Accepted:
February 10, 1996
Published:
June 20, 1996
Keywords:
Partially observable Markov decision process, Mobile robot, Position estimation
Abstract
Dead reckoning has been commonly used for position estimation. However, this method has an inherent problem: it always accumulates estimation errors. In this paper, we propose a new method to estimate the current mobile robot state using a Partially Observable Markov Decision Process (POMDP). POMDP generalizes the Markov Decision Process (MDP) framework to the case where the agent must make its decisions in partial ignorance of its current situation. Here, the robot state means the robot position, or the current subgoal at which the mobile robot is located. Through a case study, it is shown that the mobile robot state can be estimated precisely and robustly, even if the environment changes slightly.
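The core of the approach described in the abstract is maintaining a probability distribution (belief) over discrete robot states and updating it after each action and observation. The following is a minimal sketch of such a POMDP belief update; the three-subgoal state space, the "forward" action, and the "door"/"wall" observation models are illustrative assumptions, not the models used in the paper.

```python
def belief_update(belief, action, observation, T, O):
    """One Bayes-filter step over discrete states: predict, correct, normalize.

    belief[s]         -- prior probability of being in state s
    T[action][s][s2]  -- P(next state s2 | state s, action)
    O[observation][s] -- P(observation | state s)
    """
    n = len(belief)
    # Prediction: propagate the belief through the transition model.
    predicted = [sum(T[action][s][s2] * belief[s] for s in range(n))
                 for s2 in range(n)]
    # Correction: weight each state by the likelihood of the observation.
    corrected = [O[observation][s2] * predicted[s2] for s2 in range(n)]
    total = sum(corrected)
    return [c / total for c in corrected]

# Hypothetical example: three subgoals along a corridor.
# "forward" usually advances the robot by one subgoal.
T = {"forward": [[0.1, 0.8, 0.1],
                 [0.0, 0.2, 0.8],
                 [0.0, 0.1, 0.9]]}
# Noisy sensor: "door" is likelier near subgoal 0, "wall" near subgoal 2.
O = {"door": [0.7, 0.2, 0.1],
     "wall": [0.1, 0.3, 0.8]}

belief = [1 / 3, 1 / 3, 1 / 3]  # uniform initial ignorance
belief = belief_update(belief, "forward", "wall", T, O)
```

Unlike dead reckoning, the observation step lets the belief re-concentrate on the correct state, so errors do not accumulate without bound.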
Cite this article as:
D. Kang, H. Hashimoto, and F. Harashima, “State Estimation for Mobile Robot using Partially Observable Markov Decision Process,” J. Robot. Mechatron., Vol.8 No.3, pp. 272-277, 1996.