
JRM Vol.18 No.6 pp. 772-778
doi: 10.20965/jrm.2006.p0772
(2006)

Paper:

Violent Action Detection for Elevator

Kentaro Hayashi, Makito Seki, Takahide Hirai,
Koichi Takeuchi, and Koichi Sasakawa

Mitsubishi Electric Co., 8-1-1 Tsukaguchi-Honmachi, Amagasaki, Hyogo 661-8661, Japan

Received:
March 31, 2006
Accepted:
September 5, 2006
Published:
December 20, 2006
Keywords:
violent action, optical flow, texture background subtraction, built-in device
Abstract
This paper presents a new critical event detection method simplified for embedding into elevators. We first define a critical event as an unusual action, such as violent action or counteraction, and introduce the violent action level (VA level). We use an optical-flow-based method to analyze the current state of motion through an ITV (Industrial TeleVision) camera. After motion analysis, we calculate a normalized statistical value, which is the VA level. The raw statistic is the product of the optical flow direction variance, the optical flow magnitude variance, and the optical flow area. Our method then calculates the variance of this statistic over time and normalizes the statistic by that variance. Finally, we detect critical events by thresholding the VA level. We then implement this method on a built-in device. The device has an A/D converter with a specially designed frame buffer, a 400-MIPS high-performance microprocessor, dynamic memory, and flash ROM. Since the method must run at 4 Hz or faster to maintain detection performance, we shrink the images to 80 by 60 pixels, introduce recursive correlation, and then analyze the optical flows. The specially designed frame buffer enables us to capture two sequential images at any time. With these measures, we achieved a processing rate of 8 Hz on the device. Our method detects 80% of critical events at a maximum false acceptance rate of 6%.
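
As a rough illustration of the VA-level computation described above, the following Python sketch derives the raw statistic from a dense optical-flow field and normalizes it over a running history. It is a minimal sketch under stated assumptions, not the authors' implementation: the function name va_level, the motion cutoff mag_thresh, and the history window are all illustrative, since the abstract specifies only that the raw statistic is the product of the flow direction variance, the flow magnitude variance, and the flow area, normalized by the variance of that statistic over time.

import numpy as np

def va_level(flow, history, mag_thresh=0.5, window=100, eps=1e-6):
    """Sketch of the VA-level statistic (illustrative, not the authors' code).

    flow:    (H, W, 2) optical-flow field for the current frame pair.
    history: list of past raw statistics, kept for normalization.
    """
    u, v = flow[..., 0], flow[..., 1]
    mag = np.hypot(u, v)
    moving = mag > mag_thresh            # pixels treated as "in motion"
    area = int(moving.sum())             # optical-flow area
    if area == 0:
        raw = 0.0
    else:
        ang = np.arctan2(v[moving], u[moving])
        # Product of direction variance, magnitude variance, and area.
        # (np.var on raw angles ignores wrap-around; the abstract does not
        # say whether a circular variance is used.)
        raw = float(np.var(ang) * np.var(mag[moving]) * area)
    history.append(raw)
    del history[:-window]                # bound the history length
    # Normalize by the variance of the statistic itself over time.
    return raw / (np.var(history) + eps)

A critical event would then be flagged whenever the returned VA level exceeds a fixed threshold; the cutoff values and the history length above are assumptions, as the abstract does not report the actual parameters.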
Cite this article as:
K. Hayashi, M. Seki, T. Hirai, K. Takeuchi, and K. Sasakawa, “Violent Action Detection for Elevator,” J. Robot. Mechatron., Vol.18 No.6, pp. 772-778, 2006.
References
  1. [1] “Web site of Japan elevator association (in Japanese),” http://www.n-elekyo.or.jp/.
  2. [2] A. Datta, M. Shah, and N. D. V. Lobo, “Person-on-person violence detection in video data,” Proc. of ICPR, pp. 433-438, 2002.
  3. [3] N. Jojic, M. Turk, and T. S. Huang, “Tracking self-occluding articulated objects in dense disparity maps,” Proc. of ICCV, Vol.1, pp. 123-130, 1999.
  4. [4] H. Takada, “Technology trends of embedded system development,” Systems, Control and Information, Vol.45, No.3, pp. 115-117, 2001.
  5. [5] K. Takahashi, S. Seki, H. Kojima, and R. Oka, “Spotting recognition of human gestures from time-varying images,” IEICE Trans. on Information and Systems D-II, Vol.J77-D-II, No.8, pp. 1552-1561, 1994.
  6. [6] T. Hata, Y. Iwai, and M. Yachida, “Robust gesture recognition by using image motion and data compression,” IEICE Trans. on Information and Systems D-II, Vol.J81-D-II, No.9, pp. 1983-1992, 1998.
  7. [7] M. Gengyu and L. Xueyin, “Canonical sequence extraction and HMM model building based on hierarchical clustering,” The 6th International Conference on Automatic Face and Gesture Recognition (FGR2004), pp. 595-601, IEEE, 2004.
  8. [8] J. Gao and J. Shi, “Multiple frame motion inference using belief propagation,” The 6th International Conference on Automatic Face and Gesture Recognition (FGR2004), pp. 875-880, IEEE, 2004.
  9. [9] O. Faugeras, B. Hotz, H. Mathieu, T. Vieville, Z. Zhang, P. Fua, E. Theron, L. Moll, G. Berry, J. Vuillemin, P. Bertin, and C. Proy, “Real time correlation-based stereo: Algorithm, implementations and applications,” Tech. Rep. No.2013, INRIA, 1993.
