Paper:
Human Intention Detection and Activity Support System for Ubiquitous Sensor Room
Yasushi Nakauchi*, Katsunori Noguchi**, Pongsak Somwong**,
and Takashi Matsubara**
*Dept. of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba, Ibaraki 305-8573, Japan
**Department of Computer Science, National Defense Academy, 1-10-20 Hashirimizu, Yokosuka 239-8686, Japan
In this paper, we propose Vivid Room, a human behavior detection and activity support environment. Behavior in Vivid Room is detected by numerous sensors built into the room, i.e., magnetic sensors for doors and drawers, microswitches for chairs, and ID tags for persons, and the information is collected by a sensor server via an RF tag system and a LAN. To recognize meaningful behavior, e.g., studying, eating, and resting, we use an ID4-based learning system. We also developed activity support using sound and voice that takes human behavior in the room into account. Experimental results confirmed the accuracy of behavior recognition and the quality of the support.
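To make the recognition step concrete: the paper uses ID4 [13], an incremental variant of decision-tree induction, to map binary sensor readings to behavior labels. The sketch below is a minimal illustration using the simpler batch (ID3-style) induction; the sensor names, training snapshots, and labels are hypothetical assumptions for illustration, not the authors' implementation or data.

```python
import math
from collections import Counter

# Hypothetical binary sensor snapshots: 1 = sensor fired (drawer magnetic
# sensor open, chair microswitch pressed, ID tag detected), 0 = otherwise.
# Attribute names, examples, and labels are illustrative, not from the paper.
ATTRIBUTES = ["drawer_open", "chair_occupied", "fridge_open", "person_present"]
EXAMPLES = [
    ({"drawer_open": 1, "chair_occupied": 1, "fridge_open": 0, "person_present": 1}, "studying"),
    ({"drawer_open": 0, "chair_occupied": 1, "fridge_open": 0, "person_present": 1}, "studying"),
    ({"drawer_open": 0, "chair_occupied": 1, "fridge_open": 1, "person_present": 1}, "eating"),
    ({"drawer_open": 0, "chair_occupied": 0, "fridge_open": 1, "person_present": 1}, "eating"),
    ({"drawer_open": 0, "chair_occupied": 0, "fridge_open": 0, "person_present": 1}, "resting"),
    ({"drawer_open": 0, "chair_occupied": 0, "fridge_open": 0, "person_present": 0}, "absent"),
]

def entropy(examples):
    """Shannon entropy of the behavior labels in a set of examples."""
    counts = Counter(label for _, label in examples)
    total = len(examples)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def information_gain(examples, attr):
    """Entropy reduction obtained by splitting the examples on one sensor."""
    subsets = {}
    for ex, label in examples:
        subsets.setdefault(ex[attr], []).append((ex, label))
    remainder = sum(len(s) / len(examples) * entropy(s) for s in subsets.values())
    return entropy(examples) - remainder

def majority(examples):
    return Counter(label for _, label in examples).most_common(1)[0][0]

def build_tree(examples, attributes):
    """ID3-style induction: recursively split on the most informative sensor."""
    labels = {label for _, label in examples}
    if len(labels) == 1:
        return labels.pop()          # pure leaf: a single behavior remains
    if not attributes:
        return majority(examples)    # no sensors left: predict majority label
    attr = max(attributes, key=lambda a: information_gain(examples, a))
    rest = [a for a in attributes if a != attr]
    children = {}
    for value in (0, 1):
        subset = [(ex, l) for ex, l in examples if ex[attr] == value]
        children[value] = build_tree(subset, rest) if subset else majority(examples)
    return {"attr": attr, "children": children}

def classify(node, snapshot):
    """Follow sensor tests from the root down to a behavior label."""
    while isinstance(node, dict):
        node = node["children"][snapshot[node["attr"]]]
    return node

tree = build_tree(EXAMPLES, ATTRIBUTES)
print(classify(tree, {"drawer_open": 1, "chair_occupied": 1,
                      "fridge_open": 0, "person_present": 1}))  # -> studying
```

An incremental learner like ID4 would update the tree as each new sensor snapshot arrives rather than rebuilding it from the full example set, which suits a room that collects readings continuously.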
- [1] K. Asaki, Y. Kishimoto, T. Sato, and T. Mori, “One-Room-Type Sensing System for Recognition and Accumulation of Human Behavior –Proposal of Behavior Recognition Techniques–,” Proc. of JSME ROBOMEC’00, 2P1-76-119, 2000.
- [2] B. Brumitt et al., “EasyLiving: Technologies for Intelligent Environments,” Proc. of International Symposium on Handheld and Ubiquitous Computing, 2000.
- [3] I. A. Essa, “Ubiquitous sensing for smart and aware environments: technologies towards the building of an aware home,” Position Paper for the DARPA/NSF/NIST Workshop on Smart Environments, 1999.
- [4] J. Krumm, S. Harris, B. Meyers, B. Brumitt, M. Hale, and S. Shafer, “Multi-Camera Multi-Person Tracking for EasyLiving,” Proc. of 3rd IEEE International Workshop on Visual Surveillance, pp. 3-10, 2000.
- [5] J. Lee, N. Ando, and H. Hashimoto, “Design Policy for Intelligent Space,” Proc. of IEEE SMC’99, 1999.
- [6] D. J. Moore, I. A. Essa, and M. H. Hayes III, “ObjectSpaces: Context Management for Human Activity Recognition,” Georgia Institute of Technology, Graphics, Visualization and Usability Center, Technical Report #GIT-GVU-98-26, 1998.
- [7] D. J. Moore, I. A. Essa, and M. H. Hayes III, “Exploiting Human Actions and Object Context for Recognition Tasks,” Proc. of The 7th IEEE International Conference on Computer Vision, pp. 80-86, 1999.
- [8] T. Mori, T. Sato et al., “One-Room-Type Sensing System for Recognition and Accumulation of Human Behavior,” Proc. of IROS’00, pp. 345-350, 2000.
- [9] A. Pentland, “Smart Rooms,” Scientific American, pp. 54-62, 1996.
- [10] A. Pentland, R. Picard, and P. Maes, “Smart Rooms, Desks, and Clothes: Toward Seamlessly Networked Living,” British Telecommunications Engineering, Vol.15, pp. 168-172, July 1996.
- [11] J. R. Quinlan, “C4.5: Programs for Machine Learning,” Morgan Kaufmann Publishers, 1993.
- [12] T. Sato, Y. Nishida, and H. Mizoguchi, “Robotic Room: Symbiosis with human through behavior media,” Robotics and Autonomous Systems, Vol.18 (Special Issue: International Workshop on Biorobotics: Human-Robot Symbiosis), Elsevier, pp. 185-194, 1996.
- [13] J. C. Schlimmer and D. Fisher, “A Case Study of Incremental Concept Induction,” Proc. of the 5th National Conference on Artificial Intelligence, pp. 496-501, 1986.
- [14] M. Shimosaka et al., “Recognition of Human Daily Life Action and Its Performance Adjustment based on Support Vector Learning,” Proc. of the Third IEEE International Conference on Humanoid Robots, 2003.
- [15] N. Teraura, “Technologies and Materials for EDLC and Electrochemical Supercapacitors,” CMC Publishing Co., Ltd., 2003.
- [16] M. Weiser, “The Computer for the Twenty-First Century,” Scientific American, pp. 94-104, September 1991.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Copyright© 2004 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.