JRM Vol.21 No.6, pp. 698-708 (2009)
doi: 10.20965/jrm.2009.p0698

Paper:

Parallel Computation of the Region-Based Level Set Method for Boundary Detection of Moving Objects

Xianfeng Fei*,**, Yasunobu Igarashi***, Makoto Shinkai****,
Masatoshi Ishikawa*****, and Koichi Hashimoto*

*Graduate School of Information Science, Tohoku University, 6-6-01 Aramaki Aza Aoba, Aoba-ku, Sendai 980-8579, Japan

**Electrical Engineering College, Guizhou University, Nanming, Guiyang, Guizhou 550003, China

***Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara 630-0192, Japan

****Corporate Research and Development Group, Sharp Corporation, 1-9-2 Nakase, Mihama-ku, Chiba 261-8520, Japan

*****Graduate School of Information Science and Technology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

Received: April 21, 2009
Accepted: October 19, 2009
Published: December 20, 2009
Keywords: level set method, parallel computing, high-speed camera, high-speed image processing, cell
Abstract
We formulate a parallel, region-based level set model to speed up accurate boundary detection of moving objects in low-contrast images by applying parallelization and discretization to the Chan-Vese (CV) model. We implement the model on a column parallel vision (CPV) system, one of the parallel image-processing systems we developed for robot vision. Using microscopic images of moving paramecia as samples of low-contrast images, our model detects the boundaries of moving paramecia within 2 ms per image. Comparing our model with the CV model on both the CPV system and a nonparallel PC, we found that our model cuts the calculation time of the CV model while achieving similar accuracy in boundary detection of moving objects.
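As a rough illustration of the discretized, pointwise Chan-Vese update the abstract refers to, the sketch below shows one serial iteration in Python/NumPy: the region means c1 and c2 are recomputed, then every pixel of the level set function is updated independently, which is the property that per-pixel parallel hardware such as the CPV system can exploit. The function name, parameter defaults, and the particular curvature and delta discretizations are illustrative assumptions, not the authors' CPV implementation.

import numpy as np

def chan_vese_step(phi, img, mu=0.2, lam1=1.0, lam2=1.0, dt=0.5, eps=1.0):
    # phi : level set function (zero level set = current contour)
    # img : grayscale image, same shape as phi
    # Parameter values are illustrative defaults, not those of the paper.

    # Region means inside (phi >= 0) and outside (phi < 0) the contour.
    inside = phi >= 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0

    # Curvature term div(grad(phi)/|grad(phi)|) via central differences.
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    curv = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

    # Smoothed Dirac delta confines the update to a band around the contour.
    delta = eps / (np.pi * (eps**2 + phi**2))

    # Region force: each pixel is pushed toward the region whose mean it
    # resembles more closely; this part of the update is independent per pixel.
    force = mu * curv - lam1 * (img - c1)**2 + lam2 * (img - c2)**2
    return phi + dt * delta * force

Iterating this step from a simple initial contour (e.g., the signed distance to a circle) until phi stops changing yields the detected boundary; the per-pixel portion of the loop body is the part a column-parallel system can evaluate simultaneously across pixels.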
Cite this article as:
X. Fei, Y. Igarashi, M. Shinkai, M. Ishikawa, and K. Hashimoto, “Parallel Computation of the Region-Based Level Set Method for Boundary Detection of Moving Objects,” J. Robot. Mechatron., Vol.21 No.6, pp. 698-708, 2009.
