JRM Vol.24 No.4 pp. 686-698
doi: 10.20965/jrm.2012.p0686


Real-Time Optical Flow Estimation Using Multiple Frame-Straddling Intervals

Lei Chen, Hua Yang, Takeshi Takaki, and Idaku Ishii

Robotics Laboratory, Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527, Japan

Received: December 13, 2011
Accepted: March 12, 2012
Published: August 20, 2012
Keywords: real-time motion estimation, high-frame-rate vision, gradient-based optical flow, accuracy

In this paper, we propose a novel method for accurate real-time optical flow estimation of both high-speed and low-speed moving objects based on High-Frame-Rate (HFR) videos. We introduce a multi-frame-straddling function that selects several pairs of images with different frame intervals from an HFR image sequence, even when the estimated optical flow must be output at standard video rates (NTSC at 30 fps and PAL at 25 fps). The multi-frame-straddling function can remarkably improve the measurable range of velocities in optical flow estimation without heavy computation by adaptively selecting a small frame interval for high-speed objects and a large frame interval for low-speed objects. On the basis of the relationship between the frame interval and the accuracy of the optical flow estimated by the Lucas–Kanade method, we devise a method to determine multiple frame intervals for optical flow estimation and to select an optimal interval from among them according to the amplitude of the estimated optical flow. Our method was implemented in software on a high-speed vision platform, IDP Express. The estimated optical flows were accurately output at intervals of 40 ms in real time using three pairs of 512×512 images; these images were selected by frame-straddling a 2000-fps video with intervals of 0.5, 1.5, and 5 ms. Several experiments on high-speed movements verified that our method remarkably improves the measurable range of velocities in optical flow estimation, compared with optical flows estimated from 25-fps videos using the Lucas–Kanade method.
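The core idea of the abstract — keeping the inter-frame displacement of a gradient-based method within its measurable range by switching among several straddled frame intervals — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate intervals match those quoted in the abstract (0.5, 1.5, and 5 ms), but the displacement bound `MAX_DISPLACEMENT_PX` and the selection rule are illustrative assumptions.

```python
# Hypothetical sketch of multi-frame-straddling interval selection.
# A gradient-based method such as Lucas-Kanade is accurate only for small
# inter-frame displacements, so the selector picks the largest candidate
# interval whose expected displacement stays below an assumed bound:
# large intervals for slow motion (better resolution), small for fast motion.

CANDIDATE_INTERVALS_MS = [0.5, 1.5, 5.0]  # straddled intervals from a 2000-fps stream
MAX_DISPLACEMENT_PX = 1.0                 # assumed measurable-displacement bound

def select_interval(flow_amplitude_px_per_ms):
    """Pick the largest interval keeping displacement within the bound.

    Falls back to the smallest interval when even that exceeds the bound.
    """
    best = CANDIDATE_INTERVALS_MS[0]
    for dt in CANDIDATE_INTERVALS_MS:
        if flow_amplitude_px_per_ms * dt <= MAX_DISPLACEMENT_PX:
            best = dt  # larger interval -> finer velocity resolution
    return best
```

Under these assumptions, a slow object moving at 0.1 px/ms would be estimated over the 5 ms interval, while a fast object at 1 px/ms would fall back to the 0.5 ms interval — mirroring the adaptive small-interval/large-interval behavior the abstract describes.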

  [1] B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence, Vol.17, pp. 185-203, 1981.
  [2] B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. of the DARPA Image Understanding Workshop, pp. 121-130, 1981.
  [3] H. Liu, T. H. Hong, M. Herman, T. Camus, and R. Chellappa, “Accuracy vs. efficiency trade-offs in optical flow algorithms,” Computer Vision and Image Understanding, Vol.72, No.3, pp. 271-286, 1998.
  [4] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, “Performance of optical flow techniques,” Int. J. of Computer Vision, Vol.12, No.1, pp. 43-77, 1994.
  [5] D. T. Lawton, “Processing translational motion sequences,” Computer Vision, Graphics, and Image Processing, Vol.22, pp. 116-144, 1983.
  [6] J. R. Bergen, P. J. Burt, R. Hingorani, and S. Peleg, “Computing two motions from three frames,” in Proc. of Int. Conf. on Computer Vision, pp. 27-32, 1990.
  [7] H. Niitsuma and T. Maruyama, “Real-time detection of moving objects,” Lecture Notes in Computer Science, FPL 2004, Vol.3203, pp. 1153-1157, 2004.
  [8] J. L. Martin, A. Zuloaga, C. Cuadrado, J. L. Lazaro, and U. Bidarte, “Hardware implementation of optical flow constraint equation using FPGAs,” Computer Vision and Image Understanding, Vol.98, pp. 462-490, 2005.
  [9] J. Diaz, E. Ros, F. Pelayo, E. M. Ortigosa, and S. Mota, “FPGA-based real-time optical-flow system,” IEEE Trans. on Circuits and Systems for Video Technology, Vol.16, No.2, pp. 274-279, 2006.
  [10] Z. Wei, D. J. Lee, and B. E. Nelson, “FPGA-based real-time optical flow algorithm design and implementation,” J. of Multimedia, Vol.2, No.5, pp. 38-45, 2007.
  [11] M. V. Correia and A. C. Campilho, “Real-time implementation of an optical flow algorithm,” in Proc. of the Int. Conf. on Pattern Recognition, pp. 247-250, 2002.
  [12] D. J. Fleet, “Measurement of Image Velocity,” Kluwer Academic, Norwell, 1992.
  [13] P. Anandan, “A computational framework and an algorithm for the measurement of visual motion,” Int. J. of Computer Vision, Vol.2, pp. 283-310, 1989.
  [14] M. J. Black and P. Anandan, “The robust estimation of multiple motions: Parametric and piecewise smooth flow fields,” Computer Vision and Image Understanding, Vol.63, No.1, pp. 75-104, 1996.
  [15] E. Memin and P. Perez, “A multi-grid approach for hierarchical motion estimation,” in Proc. of Int. Conf. on Computer Vision, pp. 933-938, 1998.
  [16] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, “High accuracy optical flow estimation based on a theory for warping,” in Proc. of the European Conf. on Computer Vision, pp. 25-36, 2004.
  [17] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. Freeman, “SIFT flow: Dense correspondence across different scenes,” in Proc. of the European Conf. on Computer Vision, pp. 28-42, 2008.
  [18] L. Xu, J. Chen, and J. Jia, “A segmentation based variational model for accurate optical flow estimation,” in Proc. of the European Conf. on Computer Vision, pp. 671-684, 2008.
  [19] X. Ren, “Local grouping for optical flow,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
  [20] T. Brox, C. Bregler, and J. Malik, “Large displacement optical flow,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pp. 41-48, 2009.
  [21] N. Stevanovic, M. Hillebrand, B. J. Hosticka, and A. Teuner, “A CMOS image sensor for high-speed imaging,” in Digest of Technical Papers of IEEE Int. Solid-State Circuits Conf., pp. 104-105, 2000.
  [22] S. Kleinfelder, Y. Chen, K. Kwiatkowski, and A. Shah, “High-speed CMOS image sensor circuits with in-situ frame storage,” IEEE Trans. on Nuclear Science, Vol.51, pp. 1648-1656, 2004.
  [23] M. Furuta, Y. Nishikawa, T. Inoue, and S. Kawahito, “A high-speed, high-sensitivity digital CMOS image sensor with a global shutter and 12-bit column-parallel cyclic A/D converters,” IEEE J. of Solid-State Circuits, Vol.42, No.4, pp. 766-774, 2007.
  [24] M. El-Desouki, M. J. Deen, Q. Fang, L. Liu, F. Tse, and D. Armstrong, “CMOS image sensors for high speed applications,” Sensors, Vol.9, No.1, pp. 430-444, 2009.
  [25] Y. Watanabe, T. Komuro, and M. Ishikawa, “955-fps real-time shape measurement of a moving/deforming object using high-speed vision for numerous-point analysis,” in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3192-3197, 2007.
  [26] S. Hirai, M. Zakoji, A. Masubuchi, and T. Tsuboi, “Realtime FPGA-based vision system,” J. of Robotics and Mechatronics, Vol.17, No.4, pp. 401-409, 2005.
  [27] I. Ishii, R. Sukenobe, T. Taniguchi, and K. Yamamoto, “Development of high-speed and real-time vision platform, H3 Vision,” in Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 3671-3678, 2009.
  [28] I. Ishii, T. Tatebe, Q. Gu, Y. Moriue, T. Takaki, and K. Tajima, “2000 fps real-time vision system with high-frame-rate video recording,” in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 1536-1541, 2010.
  [29] S. Lim, J. G. Apostolopoulos, and A. E. Gamal, “Optical flow estimation using temporally over-sampled video,” IEEE Trans. on Image Processing, Vol.14, No.8, pp. 1074-1087, 2005.
  [30] I. Ishii, T. Taniguchi, K. Yamamoto, and T. Takaki, “1000 fps real-time optical flow detection system,” in Proc. of SPIE-IS&T Electronics Imaging 2010 Meeting, Vol.7538, pp. 75380M-1-75380M-11, 2010.
  [31] L. Chen, T. Takaki, and I. Ishii, “Accuracy of gradient-based optical flow estimation in high-frame-rate video analysis,” IEICE Trans. on Information and Systems, Vol.E95-D, No.4, pp. 1130-1141, 2012.

