
JACIII Vol.23 No.2 pp. 305-308 (2019)
doi: 10.20965/jaciii.2019.p0305

Short Paper:

The Application of A-CNN in Crowd Counting of Scenic Spots

Wanli Luo and Jialiang Wang

College of Information and Engineering, Sichuan Tourism University
No. 459 Hongling Road, Longquanyi District, Chengdu, Sichuan 610000, China


Received: June 22, 2018
Accepted: August 20, 2018
Published: March 20, 2019
Keywords: crowd counting, scenic spots, CNN, deep learning
Abstract

In places where people are densely concentrated, such as scenic spots, the accuracy of existing crowd-counting algorithms is insufficient. To solve this problem, a crowd-counting algorithm based on an adaptive convolutional neural network (A-CNN) is proposed, built on video monitoring technology. Its pooling process is dynamically adjusted according to different feature maps, and the pooling weights are then adapted to the contents of each pooling region. The CNN can therefore extract more accurate features when processing different pooling regions across different numbers of iterations, finally achieving an adaptive effect. The experimental results show that the proposed A-CNN algorithm improves recognition accuracy.
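
The core of A-CNN, as described above, is a pooling step whose weights depend on the contents of each pooling region rather than on a fixed max or average rule. Below is a minimal Python/NumPy sketch of such content-adaptive pooling; the softmax weighting and all names are illustrative assumptions, not the authors' implementation.

    # A minimal, hypothetical sketch of content-adaptive pooling: each k-by-k
    # window is reduced with softmax weights computed from its own contents,
    # so the pooled value adapts to the pooling region. The softmax rule is an
    # assumption; the paper's exact weight-update scheme is not reproduced here.
    import numpy as np

    def adaptive_pool2d(feature_map, k=2):
        """Pool a 2-D feature map with k-by-k windows whose weights are a
        softmax over the values inside each window."""
        h, w = feature_map.shape
        out = np.empty((h // k, w // k))
        for i in range(0, h - h % k, k):
            for j in range(0, w - w % k, k):
                window = feature_map[i:i + k, j:j + k].ravel()
                weights = np.exp(window - window.max())  # numerically stable softmax
                weights /= weights.sum()
                out[i // k, j // k] = weights @ window   # content-weighted sum
        return out

    # Example: a 4x4 feature map pooled to 2x2. Windows dominated by one large
    # activation behave almost like max pooling; flat windows like average pooling.
    fmap = np.array([[1., 2., 0., 0.],
                     [3., 9., 0., 1.],
                     [0., 0., 5., 5.],
                     [0., 1., 5., 5.]])
    print(adaptive_pool2d(fmap, k=2))

With uniform weights this reduces to average pooling, while sharply peaked windows approach max pooling, which is the adaptive behavior between fixed pooling rules that the abstract describes.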

Figure: The accurate crowd counting statistics in scenic spots.

Cite this article as:
W. Luo and J. Wang, “The Application of A-CNN in Crowd Counting of Scenic Spots,” J. Adv. Comput. Intell. Intell. Inform., Vol.23 No.2, pp. 305-308, 2019.
References
  [1] V. A. Sindagi and V. M. Patel, “A survey of recent advances in CNN-based single image crowd counting and density estimation,” Pattern Recognition Letters, Vol.107, pp. 3-16, 2018.
  [2] N. Ahuja and S. Todorovic, “Extracting texels in 2.1D natural textures,” 2007 IEEE 11th Int. Conf. on Computer Vision (ICCV), pp. 1-8, 2007.
  [3] V. Rabaud and S. Belongie, “Counting crowded moving objects,” 2006 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR’06), Vol.1, pp. 705-711, 2006.
  [4] H. Liu, R. Song, and B. Wang, “A surveillance video crowd counting algorithm based on convolutional neural network,” J. of Anhui University (Natural Science Edition), Vol.3, pp. 47-50, 2015.
  [5] X. Sun, P. Wu, and S. C. H. Hoi, “Face Detection using Deep Learning: An Improved Faster RCNN Approach,” Neurocomputing, Vol.299, pp. 42-50, 2018.
  [6] V. B. Subburaman, A. Descamps, and C. Carincotte, “Counting people in the crowd using a generic head detector,” 2012 IEEE 9th Int. Conf. on Advanced Video and Signal-Based Surveillance (AVSS), pp. 470-475, 2012.
  [7] D. Ryan, S. Denman, S. Sridharan, and C. Fookes, “An evaluation of crowd counting methods, features and regression models,” Computer Vision and Image Understanding, Vol.130, pp. 1-17, 2015.
  [8] V. S. Lempitsky and A. Zisserman, “Learning to count objects in images,” Proc. 24th Annual Conf. on Neural Information Processing Systems 2010, pp. 1324-1332, 2010.
  [9] Z. Yan and Y. Wu, “A Neural N-Gram Network for Text Classification,” J. Adv. Comput. Intell. Intell. Inform., Vol.22, No.3, pp. 380-386, 2018.
  [10] M. Kitahashi and H. Handa, “Estimating Classroom Situations by Using CNN with Environmental Sound Spectrograms,” J. Adv. Comput. Intell. Intell. Inform., Vol.22, No.2, pp. 242-248, 2018.
  [11] L.-W. Jin, Z.-Y. Zhong, and Z. Yang, “Applications of Deep Learning for Handwritten Chinese Character Recognition: A Review,” Acta Automatica Sinica, Vol.42, No.8, pp. 1125-1141, 2016.
  [12] J. Cai, J. Y. Cai, X. D. Liao et al., “Preliminary study on hand gesture recognition based on convolutional neural network,” Computer Systems & Applications, Vol.24, No.4, pp. 113-117, 2015.
  [13] Y. Goldberg, “Neural network methods for natural language processing,” Synthesis Lectures on Human Language Technologies, Vol.10, No.1, pp. 1-309, 2017.
