JRM Vol.21 No.4 pp. 498-506
doi: 10.20965/jrm.2009.p0498


Cooking Procedure Recognition and Support by Ubiquitous Sensors

Sho Murakami, Takuo Suzuki, Akira Tokumasu, and Yasushi Nakauchi

Graduate School of Systems and Information Engineering, University of Tsukuba
Tsukuba, Ibaraki 305-8573, Japan

Received: January 15, 2009
Accepted: July 13, 2009
Published: August 20, 2009

Keywords: intelligent environment, ubiquitous sensors, cooking support, machine learning

This paper proposes cooking support using ubiquitous sensors. We developed a machine learning algorithm that recognizes cooking procedures, taking into account widely varying sensor information and user behavior. To provide appropriate instructions to users, we also developed a Markov-model-based behavior prediction algorithm. Using these algorithms, we built a cooking support system that automatically displays cooking instruction videos according to the user's progress. Experimental results confirmed the feasibility of the proposed cooking support.
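The paper itself does not detail the prediction algorithm on this page; as an illustration only, a first-order Markov model over cooking steps, in the spirit of the behavior prediction described in the abstract, can be sketched as follows (the step names and training sequences are hypothetical):

```python
from collections import defaultdict

def train_transition_model(step_sequences):
    """Estimate first-order Markov transition probabilities from
    observed sequences of cooking steps."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in step_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    # Normalize counts into conditional probabilities P(next | current).
    model = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        model[cur] = {n: c / total for n, c in nxts.items()}
    return model

def predict_next(model, current_step):
    """Return the most probable next step, or None if the step is unseen."""
    nxts = model.get(current_step)
    if not nxts:
        return None
    return max(nxts, key=nxts.get)

# Hypothetical recognized step sequences from past cooking sessions.
sequences = [
    ["wash", "cut", "boil", "serve"],
    ["wash", "cut", "fry", "serve"],
    ["wash", "cut", "boil", "serve"],
]
model = train_transition_model(sequences)
print(predict_next(model, "cut"))  # most frequent successor of "cut"
```

Given the recognized current step, such a model would let a support system preload the instruction video for the most likely next step; the actual system presumably uses procedures recognized by the ubiquitous sensors as its input sequences.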

Cite this article as:
Sho Murakami, Takuo Suzuki, Akira Tokumasu, and Yasushi Nakauchi, “Cooking Procedure Recognition and Support by Ubiquitous Sensors,” J. Robot. Mechatron., Vol.21, No.4, pp. 498-506, 2009.
References:
  [1] M. Weiser, “The Computer for the 21st Century,” Scientific American, pp. 94-104, September 1991.
  [2] D. J. Moore, I. A. Essa, and M. H. Hayes III, “Exploiting Human Actions and Object Context for Recognition Tasks,” Proc. of the 7th IEEE Int. Conf. on Computer Vision, pp. 80-86, 1999.
  [3] T. Sato, Y. Nishida, and H. Mizoguchi, “Robotic Room: Symbiosis with human through behavior media,” Robotics and Autonomous Systems, Vol.18 (Int. Workshop on Biorobotics, Human-Robot Symbiosis), Elsevier, pp. 185-194, 1996.
  [4] M. Minoh, “Human Daily Life Support at a Ubiquitous Computing Home,” Journal of Japanese Society for Artificial Intelligence, Vol.20, No.5, pp. 579-586, 2005.
  [5] T. Fukuda, Y. Nakauchi, K. Noguchi, and T. Matsubara, “Sequential Human Behaviors Recognition for Cooking-Support Robots,” Journal of Robotics and Mechatronics, Vol.17, No.6, pp. 717-724, 2005.
  [6] Y. Yamakata, H. Ohara, A. Sawada, K. Kakusyo, and M. Minoh, “Peeling or Cutting Action and Its Target Food Product Recognition Based on Relationship between the Action and the Food Product,” IEICE Transactions on Information and Systems, Vol.90, No.9, pp. 2550-2561, 2007.
  [7] Y. Yamakata, T. Shoji, K. Kakusyo, and M. Minoh, “Automatic Cooking Archiving with Spoken Dialogue with Assistant Agent,” IEICE Transactions on Information and Systems, Vol.90, No.10, pp. 2817-2829, 2007.
  [8] I. Siio, R. Hamada, and N. Mima, “Kitchen of the Future and Its Applications,” Computer Software, Vol.23, No.4, pp. 78-83, 2006.
  [9] R. Hamada et al., “Cooking navi: assistant for daily cooking in kitchen,” Proc. of the 13th Annual ACM Int. Conf. on Multimedia, pp. 371-374, 2005.
  [10] Complete Cooking Recipes, “All about the Typical Recipes and Cooking Instructions at Home,” NHK, 2004 (in Japanese).
  [11] J. R. Quinlan, “Data Mining Tools See5 and C5.0,”
  [12] A. R. Smith, “Color gamut transform pairs,” Proc. of the 5th Annual Conf. on Computer Graphics and Interactive Techniques, pp. 12-19, 1978.

