JACIII Vol.22 No.5 pp. 704-710
doi: 10.20965/jaciii.2018.p0704


Microscopic Road Traffic Scene Analysis Using Computer Vision and Traffic Flow Modelling

Robert Kerwin C. Billones, Argel A. Bandala, Laurence A. Gan Lim, Edwin Sybingco, Alexis M. Fillone, and Elmer P. Dadios

De La Salle University
2401 Taft Avenue, Manila 0922, Philippines

Received: March 13, 2018
Accepted: June 15, 2018
Published: September 20, 2018
Keywords: traffic scene analysis, intelligent transport systems, computer vision, vehicle detection and tracking, microscopic traffic flow modelling

This paper presents the development of a vision-based system for microscopic road traffic scene analysis and understanding using computer vision and computational intelligence techniques. The traffic flow model is calibrated using information obtained from road-side cameras. The system demonstrates traffic scene analysis at several levels, from simple detection, tracking, and classification of traffic agents to higher-level analysis of vehicular and pedestrian dynamics, traffic congestion build-up, and multi-agent interactions. The study used a video dataset suitable for analysis of a T-intersection. Vehicle detection and tracking achieved 88.84% accuracy and 88.20% precision. The system can classify private cars, public utility vehicles, buses, and motorcycles. The vehicular flow of every detected vehicle from origin to destination is also monitored for traffic volume estimation and volume distribution analysis. Lastly, a microscopic traffic model of the T-intersection was developed to simulate traffic response based on actual road scenarios.
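The reported accuracy and precision figures are standard confusion-matrix metrics. The sketch below (not the authors' code) shows how such figures are typically computed from detection counts; the counts used are hypothetical placeholders, not values from the study.

```python
# Illustrative sketch: accuracy and precision of a vehicle detector
# from confusion-matrix counts. All counts below are hypothetical.

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all decisions (detections and rejections) that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp: int, fp: int) -> float:
    """Fraction of reported detections that were actual vehicles."""
    return tp / (tp + fp)

if __name__ == "__main__":
    # Hypothetical counts: true positives, true negatives,
    # false positives, false negatives over a test video.
    tp, tn, fp, fn = 820, 80, 110, 5
    print(f"accuracy:  {accuracy(tp, tn, fp, fn):.2%}")
    print(f"precision: {precision(tp, fp):.2%}")
```

A recall term (tp / (tp + fn)) would complete the picture; the paper reports only accuracy and precision, so only those are shown here.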

Cite this article as:
R. Billones, A. Bandala, L. Lim, E. Sybingco, A. Fillone, and E. Dadios, “Microscopic Road Traffic Scene Analysis Using Computer Vision and Traffic Flow Modelling,” J. Adv. Comput. Intell. Intell. Inform., Vol.22, No.5, pp. 704-710, 2018.

