JACIII Vol.22 No.4 pp. 483-490
doi: 10.20965/jaciii.2018.p0483


Research on Continuous Sign Language Sentence Recognition Algorithm Based on Weighted Key-Frame

Xin-Xin Xu*, Yuan-Yuan Huang*, and Zuo-Jin Hu**,†

*Institute of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics
29 General Avenue, Jiangning District, Nanjing, China

**School of Mathematics and Information Science, Nanjing Normal University of Special Education
No.1 Shennong Road, Qixia District, Nanjing, China

†Corresponding author

Received: August 25, 2017
Accepted: April 9, 2018
Published: July 20, 2018
Keywords: sign language sentence recognition, key-frame, gesture trace, motion-control device


At present, most dynamic sign language recognition targets isolated sign words; research on continuous sign language sentence recognition, and the corresponding results, remain scarce because segmenting such sentences is very difficult. This paper proposes a sign language sentence recognition algorithm based on weighted key-frames. Key-frames can be regarded as the basic units of sign words; from the key-frames we can obtain the related vocabulary, and we can then organize that vocabulary into meaningful sentences, thereby avoiding the hard problem of segmenting a sign language sentence directly. With the help of Kinect, a motion-control device, a self-adaptive key-frame extraction algorithm based on the trajectory of the sign language gesture is presented. Each key-frame is then assigned a weight according to its semantic contribution. Finally, a recognition algorithm is designed on these weighted key-frames to obtain the continuous sign language sentence. Experiments show that the algorithm designed in this paper achieves real-time recognition of continuous sign language sentences.
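The pipeline the abstract describes (gesture trajectory → key-frame extraction → weighting → sentence recognition) could be sketched roughly as follows. The speed-threshold extraction rule and the displacement-based weights below are illustrative assumptions standing in for the paper's self-adaptive criterion and semantic-contribution measure, not the authors' actual algorithm.

```python
import math

def speeds(traj):
    """Per-frame speed of a 3D hand trajectory (list of (x, y, z) tuples)."""
    return [math.dist(a, b) for a, b in zip(traj, traj[1:])]

def extract_key_frames(traj, ratio=0.5):
    """Pick frames where the hand slows below ratio * mean speed:
    pauses and direction changes tend to mark the informative poses of a sign."""
    v = speeds(traj)
    if not v:
        return []
    thresh = ratio * (sum(v) / len(v))
    return [i + 1 for i, s in enumerate(v) if s < thresh]

def weight_key_frames(traj, keys):
    """Assign each key-frame a weight proportional to the displacement covered
    since the previous key-frame (a crude proxy for semantic contribution),
    normalised so the weights sum to 1."""
    if not keys:
        return {}
    prev, raw = 0, {}
    for k in keys:
        raw[k] = math.dist(traj[prev], traj[k]) + 1e-9
        prev = k
    total = sum(raw.values())
    return {k: w / total for k, w in raw.items()}

# A trajectory with a fast sweep followed by a near-pause: the pause frames
# become key-frames, and the first one (ending the long sweep) gets most weight.
traj = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2.01, 0, 0), (2.02, 0, 0), (3, 0, 0)]
keys = extract_key_frames(traj)
weights = weight_key_frames(traj, keys)
```

A full recognizer would then match these weighted key-frames against per-word key-frame templates, letting the weights dominate the matching score, and assemble the recognized words into a sentence.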

Cite this article as:
X. Xu, Y. Huang, and Z. Hu, “Research on Continuous Sign Language Sentence Recognition Algorithm Based on Weighted Key-Frame,” J. Adv. Comput. Intell. Intell. Inform., Vol.22, No.4, pp. 483-490, 2018.
References:
  1. [1] T. Starner and A. Pentland, “Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.20, No.12, pp. 1371-1375, 1998.
  2. [2] U. Von Agris, J. Zieren, U. Canzler, et al., “Recent developments in visual sign language recognition,” Universal Access in the Information Society, Vol.6, No.4, pp. 323-362, 2008.
  3. [3] K. Grobel and M. Assan, “Isolated sign language recognition using hidden Markov models,” Proc. IEEE Int. Conf. on Systems, Man, and Cybernetics, Vol.1, pp. 162-167, 1997.
  4. [4] J. Ravikiran, K. Mahesh, S. Mahishi, et al., “Finger Detection for Sign Language Recognition,” Int. Association of Engineers, pp. 489-493, 2009.
  5. [5] X. L. Guo and T. T. Yang, “Gesture recognition based on HMM-FNN model using a Kinect,” J. on Multimodal User Interfaces, Vol.10, No.2, pp. 1-7, 2016.
  6. [6] J. Tan and W. Xu, “Fingertip detection and gesture recognition method based on Kinect,” J. of Computer Applications, Vol.35, No.6, pp. 1795-1800, 2015.
  7. [7] Z. Halim and G. Abbas, “A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language,” Assistive Technology: the Official J. of RESNA, Vol.27, No.1, pp. 34-43, 2015.
  8. [8] H. Yan, M. Zhang, J. Tong, et al., “Real time robust multi-fingertips tracking in 3D space using Kinect,” J. of Computer-Aided Design and Computer Graphics, Vol.25, No.12, pp. 1801-1809, 2013.
  9. [9] F. Jiang, W. Gao, C. L. Wang, et al., “Development in Signer-Independent Sign Language Recognition and the Ideas of Solving Some Key Problems,” J. of Software, Vol.18, No.3, pp. 477-489, 2007.
  10. [10] S. Nasri, A. Behrad, and F. Razzazi, “A novel approach for dynamic hand gesture recognition using contour-based similarity images,” Int. J. of Computer Mathematics, Vol.92, No.4, pp. 662-685, 2014.
  11. [11] Y. Lin, X. Chai, Y. Zhou, et al., “Curve Matching from the View of Manifold for Sign Language Recognition,” Lecture Notes in Computer Science, Vol.9010, pp. 233-246, 2014.
  12. [12] J. Pu, W. Zhou, J. Zhang, et al., “Sign Language Recognition Based on Trajectory Modeling with HMMs,” MultiMedia Modeling, pp. 686-697, 2016.
  13. [13] T. Starner, “Visual Recognition of American Sign Language Using Hidden Markov Models,” MIT Media Lab, 1995.
  14. [14] G. Fang, W. Gao, X. Chen, C. Wang, and J. Ma, “Signer-Independent Continuous Sign Language Recognition Based on SRN/HMM,” Gesture and Sign Language in Human-Computer Interaction, pp. 76-85, 2002.
  15. [15] The Department for Education and Employment of China Disabled Persons’ Federation, China Association of the Deaf and Hard of Hearing, “Chinese Sign Language,” Huaxia Publishing House, 2003.
  16. [16] L. Shurong, H. Yuanyuan, H. Zuojin, and D. Qun, “Key Frame Detection Algorithm based on Dynamic Sign Language Video for the Non Specific Population,” Int. J. of Signal Processing, Image Processing and Pattern Recognition, Vol.8, No.12, pp. 135-148, 2015.
  17. [17] S. Manman, H. Yuanyuan, H. Zuojin, and D. Qun, “Dynamic Sign Language Recognition Algorithm Using Weighted Gesture Units,” J. of Information and Computational Science, Vol.12, No.15, pp. 5611-5621, 2015.


Last updated on Aug. 19, 2018