
JACIII Vol.24, No.5, pp. 685-702, 2020
doi: 10.20965/jaciii.2020.p0685

Paper:

Smartphone Naïve Bayes Human Activity Recognition Using Personalized Datasets

Moses L. Gadebe, Okuthe P. Kogeda, and Sunday O. Ojo

Department of Computer Science, Tshwane University of Technology
Private Bag X680, Pretoria 0001, South Africa

Received: July 10, 2018
Accepted: July 15, 2020
Published: September 20, 2020
Keywords: tilt angles, signal magnitude vector, real-time, Gaussian distribution function, personalized dataset
Abstract

Recognizing human activity in real time with a limited dataset is possible on a resource-constrained device. However, most classification algorithms, such as Support Vector Machines, C4.5, and K-Nearest Neighbor, require a large dataset to accurately predict human activities. In this paper, we present a novel real-time human activity recognition model based on the Gaussian Naïve Bayes (GNB) algorithm using a personalized JavaScript Object Notation (JSON) dataset extracted from the publicly available Physical Activity Monitoring for Aging People dataset and the University of Southern California Human Activity dataset. With the proposed method, the personalized JSON training dataset is extracted and compressed into a 12×8 multi-dimensional array of time-domain features computed from tri-axial accelerometer sensor data using a signal magnitude vector and tilt angles. The algorithm is implemented on the Android platform using the Cordova cross-platform framework with HTML5 and JavaScript. Leave-one-activity-out cross-validation is implemented as a testTrainer() function, the results of which are presented using a confusion matrix. The testTrainer() function holds out category K as the test subset and uses the remaining K-1 categories as the training dataset to validate the proposed GNB algorithm. The proposed model is inexpensive in terms of memory and computational power owing to its compressed, small training dataset. Each category K was tested five times, and the algorithm produced the same result in every run. The simulation using the tilt angle features shows overall precision, recall, F-measure, and accuracy rates of 90%, 99.6%, 94.18%, and 89.51%, respectively, compared with rates of 36.9%, 75%, 42%, and 36.9% when the signal magnitude vector features were used. The simulation results confirm that, when using the tilt angle dataset, the GNB algorithm outperforms the Support Vector Machine, C4.5, and K-Nearest Neighbor algorithms.
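The features named in the abstract can be illustrated with a short sketch. This is not the authors' code: the function names are hypothetical, and the tilt-angle formulation below (the angle of each axis relative to the total acceleration vector) is one standard definition that is assumed here, not necessarily the exact one used in the paper. The Gaussian density is the standard likelihood term a Gaussian Naïve Bayes classifier evaluates per feature and class.

```javascript
// Hypothetical sketch of the abstract's time-domain features and the
// per-feature Gaussian likelihood used by Gaussian Naive Bayes.

// Signal magnitude vector (SMV) of one tri-axial accelerometer sample.
function signalMagnitudeVector(ax, ay, az) {
  return Math.sqrt(ax * ax + ay * ay + az * az);
}

// Tilt angle of each axis relative to the acceleration vector, in degrees.
// One common definition, assumed here for illustration.
function tiltAngles(ax, ay, az) {
  const smv = signalMagnitudeVector(ax, ay, az);
  const toDeg = 180 / Math.PI;
  return {
    theta: Math.acos(ax / smv) * toDeg, // angle of x-axis
    psi:   Math.acos(ay / smv) * toDeg, // angle of y-axis
    phi:   Math.acos(az / smv) * toDeg  // angle of z-axis
  };
}

// Gaussian density: scores feature value x against a class's mean and
// standard deviation, as in a Gaussian Naive Bayes classifier.
function gaussianLikelihood(x, mean, stdDev) {
  const z = (x - mean) / stdDev;
  return Math.exp(-0.5 * z * z) / (stdDev * Math.sqrt(2 * Math.PI));
}

// Example: device at rest with gravity (1 g) entirely on the z-axis.
const angles = tiltAngles(0, 0, 1);
console.log(signalMagnitudeVector(0, 0, 1)); // 1
console.log(angles.phi);                     // 0 (z aligned with gravity)
```

In the paper's pipeline, such per-sample features would be aggregated into the compressed 12×8 training array, and the class with the highest product of per-feature likelihoods (times the class prior) would be predicted.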

Cite this article as:
M. Gadebe, O. Kogeda, and S. Ojo, “Smartphone Naïve Bayes Human Activity Recognition Using Personalized Datasets,” J. Adv. Comput. Intell. Intell. Inform., Vol.24 No.5, pp. 685-702, 2020.
References
  [1] H. Ellekjær, J. Holmen, E. Ellekjær, and L. Vatten, “Physical activity and stroke mortality in women: ten-year follow-up of the Nord-Trøndelag health survey, 1984–1986,” Stroke, Vol.31, No.1, pp. 14-18, 2000.
  [2] F. B. Hu, M. J. Stampfer, G. A. Colditz, A. Ascherio, K. M. Rexrode, W. C. Willett, and J. E. Manson, “Physical activity and risk of stroke in women,” J. of the American Medical Association, Vol.283, No.22, pp. 2961-2967, 2000.
  [3] I. M. Lee, K. M. Rexrode, N. R. Cook, J. E. Manson, and J. E. Buring, “Physical activity and coronary heart disease in women: Is “no pain, no gain” passé?,” J. of the American Medical Association, Vol.285, No.11, pp. 1447-1454, 2001.
  [4] S. Zhang, “Smartphone Based Activity Recognition System,” Ph.D. dissertation, The Ohio State University, 2012.
  [5] J. W. Lockhart, T. Pulickal, and G. M. Weiss, “Applications of mobile activity recognition,” Proc. of the 2012 ACM Conf. on Ubiquitous Computing, pp. 1054-1058, 2012.
  [6] X. Su, H. Tong, and P. Ji, “Activity recognition with smartphone sensors,” Tsinghua Science and Technology, Vol.19, No.3, pp. 235-249, 2014.
  [7] V. V. Quang, M. T. Hoang, and D. Choi, “Personalization in mobile activity recognition system using K-medoids clustering algorithm,” Int. J. of Distributed Sensor Networks, Vol.9, No.7, 2013.
  [8] J. W. Lockhart and G. M. Weiss, “Limitations with activity recognition methodology & data sets,” Proc. of the 2014 ACM Int. Joint Conf. on Pervasive and Ubiquitous Computing: Adjunct Publication, pp. 747-756, 2014.
  [9] A. Reiss and D. Stricker, “Introducing a new benchmarked dataset for activity monitoring,” 2012 16th Int. Symp. on Wearable Computers, pp. 108-109, 2012.
  [10] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, “A public domain dataset for human activity recognition using smartphones,” Proc. of European Symp. on Artificial Neural Networks, Computational Intelligence and Machine Learning, pp. 437-442, 2013.
  [11] O. Banos, R. Garcia, J. A. Holgado-Terriza, M. Damas, H. Pomares, I. Rojas, A. Saez, and C. Villalonga, “mHealthDroid: a novel framework for agile development of mobile health applications,” Int. Workshop on Ambient Assisted Living, Lecture Notes in Computer Science, Vol.8868, pp. 91-98, 2014.
  [12] J.-L. Reyes-Ortiz, L. Oneto, A. Samà, X. Parra, and D. Anguita, “Transition-aware human activity recognition using smartphones,” Neurocomputing, Vol.171, pp. 754-767, 2016.
  [13] C. A. Martins, M. C. Monard, and E. T. Matsubara, “Reducing the dimensionality of bag-of-words text representation used by learning algorithms,” Proc. of 3rd IASTED Int. Conf. on Artificial Intelligence and Applications, pp. 228-233, 2003.
  [14] Z. Yan, V. Subbaraju, D. Chakraborty, A. Misra, and K. Aberer, “Energy-efficient continuous activity recognition on mobile phones: An activity-adaptive approach,” 2012 16th Int. Symp. on Wearable Computers, pp. 17-24, 2012.
  [15] L. T. Nguyen, M. Zeng, P. Tague, and J. Zhang, “Recognizing new activities with limited training data,” Proc. of the 2015 ACM Int. Symp. on Wearable Computers, pp. 67-74, 2015.
  [16] M. L. Gadebe and O. P. Kogeda, “Personification of Bag-of-Features Dataset for Real Time Activity Recognition,” 2016 3rd Int. Conf. on Soft Computing & Machine Intelligence (ISCMI), pp. 73-78, 2016.
  [17] S. Harous, M. El Menshawy, M. Adel Serhani, and A. Benharref, “Mobile health architecture for obesity management using sensory and social data,” Informatics in Medicine Unlocked, Vol.10, pp. 27-44, 2018.
  [18] S. Harous, M. Adel Serhani, M. El Menshawy, and A. Benharref, “Hybrid obesity monitoring model using sensors and community engagement,” 2017 13th Int. Wireless Communications and Mobile Computing Conf. (IWCMC), pp. 888-893, 2017.
  [19] M. Zhang and A. A. Sawchuk, “USC-HAD: a daily activity dataset for ubiquitous activity recognition using wearable sensors,” Proc. of the 2012 ACM Conf. on Ubiquitous Computing, pp. 1036-1043, 2012.
  [20] J. de Jong, “JSON Editor Online 3.7.5,” http://www.jsoneditoronline.org/ [accessed June 15, 2019]
  [21] H. Junker, P. Lukowicz, and G. Troster, “Sampling frequency, signal resolution and the accuracy of wearable context recognition systems,” 8th Int. Symp. on Wearable Computers, Vol.1, pp. 176-177, 2004.
  [22] D. Figo, P. C. Diniz, D. R. Ferreira, and J. M. P. Cardoso, “Preprocessing techniques for context recognition from accelerometer data,” Personal and Ubiquitous Computing, Vol.14, No.7, pp. 645-662, 2010.
  [23] F. Miao, Y. He, J. Liu, Y. Li, and I. Ayoola, “Identifying typical physical activity on smartphone with varying positions and orientations,” Biomedical Engineering Online, Vol.14, Article No.32, 2015.
  [24] C. Maher, J. Ryan, C. Ambrosi, and S. Edney, “Users’ experiences of wearable activity trackers: a cross-sectional study,” BioMed Central J. of Public Health, Vol.17, Article No.880, 2017.
  [25] Apache Commons Math, 2016, http://commons.apache.org/proper/commons-math/userguide/stat.html [accessed June 15, 2019]
  [26] D. J. Hand and K. Yu, “Idiot’s Bayes: not so stupid after all?,” Int. Statistical Review, Vol.69, No.3, pp. 385-398, 2001.
  [27] P. Langley, W. Iba, and K. Thompson, “An analysis of Bayesian classifiers,” Proc. of the 10th National Conf. on Artificial Intelligence, Vol.90, pp. 223-228, 1992.
  [28] P. Langley and S. Sage, “Induction of selective Bayesian classifiers,” Proc. of the 10th Int. Conf. on Uncertainty in Artificial Intelligence (UAI), pp. 399-406, 1994.
  [29] V. N. Inukollu, D. D. Keshamoni, T. Kang, and M. Inukollu, “Factors influencing quality of mobile apps: Role of mobile app development life cycle,” arXiv preprint, arXiv:1410.4537, 2014.
  [30] A. G. Parada, T. A. Alves, and L. Brisolara, “Modeling Android applications using UML,” Proc. of 27th SIM – South Symp. on Microelectronics, pp. 1-4, 2012.
  [31] J. Jain, “Apache Cordova: Powerful Framework for Hybrid Mobile App Development,” 2016, http://www.codeproject.com/Articles/1069661/Apache-Cordova-Powerful-Framework-for-Hybrid-Mobil [accessed June 15, 2019]
  [32] M. Kuhn, “A Short Introduction to the caret Package,” R Foundation for Statistical Computing, pp. 1-10, 2015.
  [33] J. Huang, J. Lu, and C. X. Ling, “Comparing naive Bayes, decision trees, and SVM with AUC and accuracy,” 3rd IEEE Int. Conf. on Data Mining, pp. 553-556, 2003.
  [34] I. Kononenko, “Machine learning for medical diagnosis: history, state of the art and perspective,” Artificial Intelligence in Medicine, Vol.23, No.1, pp. 89-109, 2001.

