
JACIII Vol.28 No.2 pp. 255-264 (2024)
doi: 10.20965/jaciii.2024.p0255

Research Paper:

Greeting Gesture Classification Using Machine Learning Based on Politeness Perspective in Japan

Angga Wahyu Wibowo*,†, Kurnianingsih**, Azhar Aulia Saputra*, Eri Sato-Shimokawara*, Yasufumi Takama*, and Naoyuki Kubota*

*Tokyo Metropolitan University
6-6 Asahigaoka, Hino, Tokyo 191-0065, Japan

†Corresponding author

**Politeknik Negeri Semarang
Jl. Prof. H. Soedarto, SH Tembalang, Semarang, Indonesia

Received: July 25, 2023
Accepted: October 4, 2023
Published: March 20, 2024
Keywords: greeting, gesture, culture, bowing, waving hand
Abstract

Understanding traditional culture is important. Various methods have been used to achieve better cross-cultural understanding, and several researchers have studied human behavior; however, behavior does not always represent a culture. Therefore, our study aims to understand Japanese greeting culture by classifying greeting gestures through machine learning. The contributions of our study are as follows: (1) it is the first study to analyze cultural differences in greeting gestures based on the politeness levels of Japanese people by classifying them; (2) it classifies the Japanese greeting gestures eshaku, keirei, saikeirei, and waving hand; and (3) it analyzes the performance of machine learning and deep learning methods. Our study found that bowing and waving are behaviors that can symbolize Japanese culture. In conclusion, first, this is the first study to analyze the eshaku, keirei, saikeirei, and waving hand greeting gestures. Second, this study complements several human activity recognition studies that have been conducted but do not focus on behavior representing a culture. Third, according to our analysis on a small dataset, the SVM and CNN methods provide better results than k-nearest neighbors (k-NN) with Euclidean distance, k-NN with dynamic time warping (DTW), logistic regression, and LightGBM in classifying the greeting gestures eshaku, keirei, saikeirei, and waving hand. In the future, we will investigate other behaviors from different perspectives using other methods to understand cultural differences.
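
As context for the method comparison described in the abstract, the sketch below illustrates in broad strokes how two of the named classifiers, an SVM and a k-NN with DTW distance, can be compared on gesture sequence data. It is a minimal illustration, not the authors' implementation: the synthetic data, the flattened joint-angle feature layout, the four-class labeling, and all hyperparameters are assumptions made for the example.

# Minimal sketch (not the paper's released code): comparing an SVM and a
# DTW-based k-NN on gesture sequences, assuming each sample is a fixed-length
# vector of skeleton joint angles over time. Labels 0-3 stand in for the
# eshaku, keirei, saikeirei, and waving hand classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Synthetic stand-in data: 80 samples x 60 time steps (e.g., one joint angle
# per frame); real skeleton features would replace this.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 60))
y = rng.integers(0, 4, size=80)  # 4 gesture classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
knn_dtw = KNeighborsClassifier(n_neighbors=3, metric=dtw_distance,
                               algorithm="brute").fit(X_tr, y_tr)
print("SVM accuracy:    ", svm.score(X_te, y_te))
print("k-NN+DTW accuracy:", knn_dtw.score(X_te, y_te))

On a small dataset such as the one the abstract describes, an RBF-kernel SVM over fixed-length features is often a strong baseline, while a DTW metric lets k-NN tolerate timing differences between slow, deep bows and quick, shallow ones; which wins in practice depends on the data.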

Cite this article as:
A. W. Wibowo, Kurnianingsih, A. A. Saputra, E. Sato-Shimokawara, Y. Takama, and N. Kubota, “Greeting Gesture Classification Using Machine Learning Based on Politeness Perspective in Japan,” J. Adv. Comput. Intell. Intell. Inform., Vol.28 No.2, pp. 255-264, 2024.
References
[1] H. Gaspard, Y. Jiang, H. Piesch, B. Nagengast, N. Jia, J. Lee, and M. Bong, “Assessing Students’ Values and Costs in Three Countries: Gender and Age Differences Within Countries and Structural Differences Across Countries,” Learning and Individual Differences, Vol.79, Article No.101836, 2020. https://doi.org/10.1016/j.lindif.2020.101836
[2] S. Zhao, D. Kirk, S. Bowen, D. Chatting, and P. Wright, “Supporting the Cross-Cultural Appreciation of Traditional Chinese Puppetry Through a Digital Gesture Library,” J. on Computing and Cultural Heritage, Vol.12, No.4, 2019. https://doi.org/10.1145/3341882
[3] H. Wu, J. Gai, Y. Wang, J. Liu, J. Qiu, J. Wang, and X. Zhang, “Influence of Cultural Factors on Freehand Gesture Design,” Int. J. of Human-Computer Studies, Vol.143, Article No.102502, 2020. https://doi.org/10.1016/j.ijhcs.2020.102502
[4] A. Y. Kwon, C. D. Vallotton, M. Kiegelmann, and K. H. Wilhelm, “Cultural Diversification of Communicative Gestures Through Early Childhood: A Comparison of Children in English-, German-, and Chinese-Speaking Families,” Infant Behavior and Development, Vol.50, pp. 328-339, 2018. https://doi.org/10.1016/j.infbeh.2017.10.003
[5] E. Nicoladis, J. Nagpal, P. Marentette, and B. Hauer, “Gesture Frequency Is Linked to Story-Telling Style: Evidence from Bilinguals,” Language and Cognition, Vol.10, No.4, pp. 661-664, 2018. https://doi.org/10.1017/langcog.2018.25
[6] D. McNeill, “Gesture in Linguistics,” J. D. Wright (Ed.), “Int. Encyclopedia of the Social & Behavioral Sciences,” Elsevier, pp. 109-120, 2015. https://doi.org/10.1016/B978-0-08-097086-8.53050-5
[7] R. Li, J. Lee, W. Woo, and T. Starner, “KissGlass: Greeting Gesture Recognition Using Smart Glasses,” Proc. of the Augmented Humans Int. Conf. (AHs’20), 2020. https://doi.org/10.1145/3384657.3384801
[8] M. Bâce, S. Staal, G. Sörös, and G. Corbellini, “Collocated Multi-User Gestural Interactions with Unmodified Wearable Devices,” Augmented Human Research, Vol.2, No.1, Article No.6, 2017. https://doi.org/10.1007/s41133-017-0009-z
[9] A. Melnyk and P. Hénaff, “Physical Analysis of Handshaking Between Humans: Mutual Synchronisation and Social Context,” Int. J. of Social Robotics, Vol.11, No.4, pp. 541-554, 2019. https://doi.org/10.1007/s12369-019-00525-y
[10] S. Aina, K. V. Sholesi, A. R. Lawal, S. D. Okegbile, and A. I. Oluwaranti, “Gesture Recognition System for Nigerian Tribal Greeting Postures Using Support Vector Machine,” Malaysian J. of Computing, Vol.5, No.2, Article No.609, 2020. https://doi.org/10.24191/mjoc.v5i2.10347
[11] G. Trovato, M. Zecca, M. Do, Ö. Terlemez, M. Kuramochi, A. Waibel, T. Asfour, and A. Takanishi, “A Novel Greeting Selection System for a Culture-Adaptive Humanoid Robot,” Int. J. of Advanced Robotic Systems, Vol.12, Issue 4, 2015. https://doi.org/10.5772/60117
[12] T. Osugi and J. I. Kawahara, “The Spill-Over Effect of Formal Bowing Motion on Subjective Facial Attractiveness,” Japanese Psychological Research, Vol.65, No.1, pp. 37-47, 2021. https://doi.org/10.1111/jpr.12347
[13] M. Amri, “Ojigi: The Ethics of Japanese Community’s Nonverbal Language,” Proc. of the Social Sciences, Humanities, and Education Conf. (SoSHEC 2019), 2019. https://doi.org/10.2991/soshec-19.2019.9
[14] G. Gusnawaty, L. Lukman, A. Nurwati, A. Adha, N. Nurhawara, and A. Edy, “Strategy of Kinship Terms as a Politeness Model in Maintaining Social Interaction: Local Values Towards Global Harmony,” Heliyon, Vol.8, No.9, Article No.e10650, 2022. https://doi.org/10.1016/j.heliyon.2022.e10650
[15] Z. Ye, “The Politeness Bias and the Society of Strangers,” Language Sciences, Vol.76, Article No.101183, 2019. https://doi.org/10.1016/j.langsci.2018.06.009
[16] P. G.-C. Blitvich and M. Sifianou, “Im/politeness and Discursive Pragmatics,” J. of Pragmatics, Vol.145, pp. 91-101, 2019. https://doi.org/10.1016/j.pragma.2019.03.015
[17] M. Hendon, L. Powell, and H. Wimmer, “Emotional Intelligence and Communication Levels in Information Technology Professionals,” Computers in Human Behavior, Vol.71, pp. 165-171, 2017. https://doi.org/10.1016/j.chb.2017.01.048
[18] P. Xie and L. Deng, “Simulated Analysis of Modeling of Driving Behavior Characteristics Based on Satellite Positioning Data,” J. Adv. Comput. Intell. Intell. Inform., Vol.23, No.1, pp. 114-118, 2019. https://doi.org/10.20965/jaciii.2019.p0114
[19] C. Hofmann, C. Patschkowski, B. Haefner, and G. Lanza, “Machine Learning Based Activity Recognition to Identify Wasteful Activities in Production,” Procedia Manufacturing, Vol.45, pp. 171-176, 2020. https://doi.org/10.1016/j.promfg.2020.04.090
[20] D. Katagami, Y. Ikeda, and K. Nitta, “Behavior Generation and Evaluation of Negotiation Agent Based on Negotiation Dialogue Instances,” J. Adv. Comput. Intell. Intell. Inform., Vol.14, No.7, pp. 840-851, 2010. https://doi.org/10.20965/jaciii.2010.p0840
[21] Q. Xu, W. Zheng, Y. Song, C. Zhang, X. Yuan, and Y. Li, “Scene Image and Human Skeleton-Based Dual-Stream Human Action Recognition,” Pattern Recognition Letters, Vol.148, pp. 136-145, 2021. https://doi.org/10.1016/j.patrec.2021.06.003
[22] Y. Fuse, H. Takenouchi, and M. Tokumaru, “A Robot in a Human–Robot Group Learns Group Norms and Makes Decisions Through Indirect Mutual Interaction With Humans,” J. Adv. Comput. Intell. Intell. Inform., Vol.24, No.1, pp. 169-178, 2020. https://doi.org/10.20965/jaciii.2020.p0169
[23] Y. Li, W. F. Hsieh, E. Sato-Shimokawara, and T. Yamaguchi, “Expression and Identification of Confidence Based on Individual Verbal and Non-Verbal Features in Human-Robot Interaction,” J. Adv. Comput. Intell. Intell. Inform., Vol.23, No.6, pp. 1089-1097, 2019. https://doi.org/10.20965/jaciii.2019.p1089
[24] K. Ohkura, T. Yasuda, and Y. Matsumura, “Generating Cooperative Collective Behavior in Swarm Robotic Systems,” J. Adv. Comput. Intell. Intell. Inform., Vol.17, No.5, pp. 699-706, 2013. https://doi.org/10.20965/jaciii.2013.p0699
[25] S. Hoshino and K. Niimura, “Optical Flow for Real-Time Human Detection and Action Recognition Based on CNN Classifiers,” J. Adv. Comput. Intell. Intell. Inform., Vol.23, No.4, pp. 735-742, 2019. https://doi.org/10.20965/jaciii.2019.p0735
[26] Z. Chen, X. Ma, Z. Peng, Y. Zhou, M. Yao, Z. Ma, C. Wang, Z. Gao, and M. Shen, “User-Defined Gestures for Gestural Interaction: Extending from Hands to Other Body Parts,” Int. J. of Human-Computer Interaction, Vol.34, No.3, pp. 238-250, 2018. https://doi.org/10.1080/10447318.2017.1342943
[27] S. Shao, N. Kubota, K. Hotta, and T. Sawayama, “Behavior Estimation Based on Multiple Vibration Sensors for Elderly Monitoring Systems,” J. Adv. Comput. Intell. Intell. Inform., Vol.25, No.4, pp. 489-497, 2021. https://doi.org/10.20965/jaciii.2021.p0489
[28] S. Shao and N. Kubota, “A Fuzzy Inference-Based Spiking Neural Network for Behavior Estimation in Elderly Health Care System,” J. Adv. Comput. Intell. Intell. Inform., Vol.23, No.3, pp. 528-535, 2019. https://doi.org/10.20965/jaciii.2019.p0528
[29] H. Igarashi, Y. Adachi, and K. Takahashi, “Adaptive Cooperation for Multi Agent Systems Based on Human Social Behavior,” J. Adv. Comput. Intell. Intell. Inform., Vol.16, No.1, pp. 139-146, 2012. https://doi.org/10.20965/jaciii.2012.p0139
[30] P. Li, Q. Fei, Z. Chen, X. Yao, and Y. Zhang, “Characteristic Behavior of Human Multi-Joint Spatial Trajectory in Slalom Skiing,” J. Adv. Comput. Intell. Intell. Inform., Vol.26, No.5, pp. 801-807, 2022. https://doi.org/10.20965/jaciii.2022.p0801
[31] K. Zhang, Y. Maeda, and Y. Takahashi, “Cooperative Behavior Learning Based on Social Interaction of State Conversion and Reward Exchange Among Multi-Agents,” J. Adv. Comput. Intell. Intell. Inform., Vol.15, No.5, pp. 606-616, 2011. https://doi.org/10.20965/jaciii.2011.p0606
[32] K. Zhang, Y. Maeda, and Y. Takahashi, “Group Behavior Learning in Multi-Agent Systems Based on Social Interaction Among Agents,” J. Adv. Comput. Intell. Intell. Inform., Vol.15, No.7, pp. 896-903, 2011. https://doi.org/10.20965/jaciii.2011.p0896
[33] K. Sakai, F. D. Libera, Y. Yoshikawa, and H. Ishiguro, “Generation of Bystander Robot Actions Based on Analysis of Relative Probability of Human Actions,” J. Adv. Comput. Intell. Intell. Inform., Vol.21, No.4, pp. 686-696, 2017. https://doi.org/10.20965/jaciii.2017.p0686
[34] N. Kubota, T. Obo, and H. Liu, “Human Behavior Measurement Based on Sensor Network and Robot Partners,” J. Adv. Comput. Intell. Intell. Inform., Vol.14, No.3, pp. 309-315, 2010. https://doi.org/10.20965/jaciii.2010.p0309
[35] Z. Benhaili, Y. Balouki, and L. Moumoun, “A Hybrid Deep Neural Network for Human Activity Recognition Based on IoT Sensors,” Int. J. of Advanced Computer Science and Applications, Vol.12, No.11, 2021. https://doi.org/10.14569/IJACSA.2021.0121129
[36] P. Tokas, “Machine Learning Based Text Neck Syndrome Detection Using Microsoft Kinect Sensor,” Materials Today: Proc., Vol.80, pp. 3751-3756, 2023. https://doi.org/10.1016/j.matpr.2021.07.373
[37] Y. J. Luwe, C. P. Lee, and K. M. Lim, “Wearable Sensor-Based Human Activity Recognition with Ensemble Learning: A Comparison Study,” Int. J. of Electrical and Computer Engineering (IJECE), Vol.13, No.4, Article No.4029, 2023. https://doi.org/10.11591/ijece.v13i4.pp4029-4040
[38] H. Los, G. S. Mendes, D. Cordeiro, N. Grosso, H. Costa, P. Benevides, and M. Caetano, “Evaluation of Xgboost and LGBM Performance in Tree Species Classification with Sentinel-2 Data,” 2021 IEEE Int. Geoscience and Remote Sensing Symp. (IGARSS), pp. 5803-5806, 2021. https://doi.org/10.1109/IGARSS47720.2021.9553031
[39] D. K. Basuki, A. A. Saputra, N. Kubota, and K. Wada, “Joint Angle-Based Activity Recognition System for Paro Therapy Observation,” IFAC-PapersOnLine, Vol.56, No.2, pp. 1145-1151, 2023. https://doi.org/10.1016/j.ifacol.2023.10.1718
[40] Ö. F. İnce, I. F. Ince, M. E. Yıldırım, J. S. Park, J. K. Song, and B. W. Yoon, “Human Activity Recognition with Analysis of Angles Between Skeletal Joints Using a RGB-Depth Sensor,” ETRI J., Vol.42, No.1, pp. 78-89, 2020. https://doi.org/10.4218/etrij.2018-0577
[41] J. Zhou and T. Komuro, “An Asymmetrical-Structure Auto-Encoder for Unsupervised Representation Learning of Skeleton Sequences,” Computer Vision and Image Understanding, Vol.222, Article No.103491, 2022. https://doi.org/10.1016/j.cviu.2022.103491
[42] Y.-H. Lee, C.-P. Wei, T.-H. Cheng, and C.-T. Yang, “Nearest-Neighbor-Based Approach to Time-Series Classification,” Decision Support Systems, Vol.53, No.1, pp. 207-217, 2012. https://doi.org/10.1016/j.dss.2011.12.014
[43] Z. Geler, V. Kurbalija, M. Ivanovic, and M. Radovanovic, “Time-Series Classification with Constrained DTW Distance and Inverse-Square Weighted k-NN,” 2020 Int. Conf. on Innovations in Intelligent Systems and Applications (INISTA), 2020. https://doi.org/10.1109/INISTA49547.2020.9194639
[44] S. Ghodsi, H. Mohammadzade, and E. Korki, “Simultaneous Joint and Object Trajectory Templates for Human Activity Recognition from 3-D data,” J. of Visual Communication and Image Representation, Vol.55, pp. 729-741, 2018. https://doi.org/10.1016/j.jvcir.2018.08.001
[45] L. Lo Presti and M. La Cascia, “3D Skeleton-Based Human Action Classification: A Survey,” Pattern Recognition, Vol.53, pp. 130-147, 2016. https://doi.org/10.1016/j.patcog.2015.11.019
[46] M. R. Widyanto, S. N. Endah, and K. Hirota, “Human Behavior Classification Using Thinning Algorithm and Support Vector Machine,” J. Adv. Comput. Intell. Intell. Inform., Vol.14, No.1, pp. 28-33, 2010. https://doi.org/10.20965/jaciii.2010.p0028
[47] A. A. Saputra, A. R. Besari, and N. Kubota, “Human Joint Skeleton Tracking Using Multiple Kinect Azure,” 2022 Int. Electronics Symp. (IES), pp. 430-435, 2022. https://doi.org/10.1109/IES55876.2022.9888532
[48] P. C. Huu, L. Q. Khanh, and L. T. Ha, “Human Action Recognition Using Dynamic Time Warping and Voting Algorithm (1),” VNU J. of Science: Comp. Science & Com. Eng., Vol.30, Issue 3, 2014.
[49] S. K. Yadav, K. Tiwari, H. M. Pandey, and S. A. Akbar, “Skeleton-Based Human Activity Recognition Using CONVLSTM and Guided Feature Learning,” Soft Computing, Vol.26, No.2, pp. 877-890, 2022. https://doi.org/10.1007/s00500-021-06238-7
[50] J. K. Aggarwal and L. Xia, “Human Activity Recognition from 3D Data: A Review,” Pattern Recognition Letters, Vol.48, pp. 70-80, 2014. https://doi.org/10.1016/j.patrec.2014.04.011
[51] S. Gaglio, G. L. Re, and M. Morana, “Human Activity Recognition Process Using 3-D Posture Data,” IEEE Trans. on Human-Machine Systems, Vol.45, No.5, pp. 586-597, 2015. https://doi.org/10.1109/THMS.2014.2377111
[52] E. Cippitelli, S. Gasparrini, E. Gambi, and S. Spinsante, “A Human Activity Recognition System Using Skeleton Data from RGBD Sensors,” Computational Intelligence and Neuroscience, Vol.2016, 2016. https://doi.org/10.1155/2016/4351435
[53] G. Hu, B. Cui, and S. Yu, “Joint Learning in the Spatio-Temporal and Frequency Domains for Skeleton-Based Action Recognition,” IEEE Trans. on Multimedia, Vol.22, No.9, pp. 2207-2220, 2020. https://doi.org/10.1109/TMM.2019.2953325
[54] B. M. V. Guerra, S. Ramat, R. Gandolfi, G. Beltrami, and M. Schmid, “Skeleton Data Pre-Processing for Human Pose Recognition Using Neural Network,” 2020 42nd Annual Int. Conf. of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 4265-4268, 2020. https://doi.org/10.1109/EMBC44109.2020.9175588
[55] G. Hu, B. Cui, and S. Yu, “Skeleton-Based Action Recognition with Synchronous Local and Non-Local Spatio-Temporal Learning and Frequency Attention,” 2019 IEEE Int. Conf. on Multimedia and Expo (ICME), pp. 1216-1221, 2019. https://doi.org/10.1109/ICME.2019.00212
[56] I. Rodríguez-Moreno, J. M. Martínez-Otzeta, I. Goienetxea, I. Rodriguez-Rodriguez, and B. Sierra, “Shedding Light on People Action Recognition in Social Robotics by Means of Common Spatial Patterns,” Sensors, Vol.20, No.8, Article No.2436, 2020. https://doi.org/10.3390/s20082436
[57] L.-Y. Hu, M.-W. Huang, S.-W. Ke, and C.-F. Tsai, “The Distance Function Effect on k-Nearest Neighbor Classification for Medical Datasets,” SpringerPlus, Vol.5, No.1, Article No.1304, 2016. https://doi.org/10.1186/s40064-016-2941-7
[58] S. Celebi, T. T. Temiz, A. S. Aydin, and T. Arici, “Gesture Recognition Using Skeleton Data with Weighted Dynamic Time Warping,” Proc. of the Int. Conf. on Computer Vision Theory and Applications, 2013. https://doi.org/10.5220/0004217606200625
[59] H. Basly, W. Ouarda, F. E. Sayadi, B. Ouni, and A. M. Alimi, “CNN-SVM Learning Approach Based Human Activity Recognition,” Lecture Notes in Computer Science, pp. 271-281, 2020. https://doi.org/10.1007/978-3-030-51935-3_29
[60] H. Qian, Y. Mao, W. Xiang, and Z. Wang, “Recognition of Human Activities Using SVM Multi-Class Classifier,” Pattern Recognition Letters, Vol.31, No.2, pp. 100-111, 2010. https://doi.org/10.1016/j.patrec.2009.09.019
[61] C. Schuldt, I. Laptev, and B. Caputo, “Recognizing Human Actions: A Local SVM Approach,” Proc. of the 17th Int. Conf. on Pattern Recognition (ICPR 2004), Vol.3, pp. 32-36, 2004. https://doi.org/10.1109/ICPR.2004.1334462
[62] Y. Jung, “Multiple Predicting k-Fold Cross-Validation for Model Selection,” J. of Nonparametric Statistics, Vol.30, No.1, pp. 197-215, 2018. https://doi.org/10.1080/10485252.2017.1404598
