Paper:
Obstacle Location Classification and Self-Localization by Using a Mobile Omnidirectional Camera Based on Tracked Floor Boundary Points and Tracked Scale-Rotation Invariant Feature Points
Tsuyoshi Tasaki, Seiji Tokura, Takafumi Sonoura,
Fumio Ozaki, and Nobuto Matsuhira
Toshiba Research & Development Center, Toshiba Corporation, 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki 212-8582, Japan
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Copyright © 2011 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.