IJAT Vol.15 No.6, pp. 794-803 (2021)
doi: 10.20965/ijat.2021.p0794

Paper:

Surrounding Structure Estimation Using Ambient Light

Bilal Ahmed Mir*, Tohru Sasaki**,†, Yusuke Nagahata*, Eri Yamabe*, Naoya Miwa*, and Kenji Terabayashi**

*Graduate School of Science and Engineering for Education, University of Toyama
3190 Gofuku, Toyama, Toyama 930-8555, Japan

**Department of Mechanical and Intellectual Systems Engineering, University of Toyama, Toyama, Japan

†Corresponding author

Received: March 26, 2021
Accepted: July 7, 2021
Published: November 5, 2021
Keywords: image sensing, visual affordance, ambient light, structure estimation
Abstract

Image measurement technology, which is widely used in modern society, has made substantial progress. It obtains information from an image through processes such as image input, target extraction, and measurement of the extracted region. These processes are computationally intensive because they require a large amount of information, such as complex features, which is often an obstacle to improving and speeding up image processing tasks. In contrast, living organisms easily recognize their surroundings in real time. In cognitive science, for example, studies of visual affordance have shown that organisms perceive and recognize their surrounding environment and objects from ambient light, which is formed by light reflected and scattered in the environment. By applying this natural mechanism to image measurement technology, the information necessary to recognize the surrounding environment can be obtained by observing ambient light, without necessarily detecting or recognizing objects. In this study, we propose a direct method of assessing the surrounding environment by capturing ambient light as luminance.
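
As a concrete illustration of the idea described in the abstract, the following Python sketch converts a camera frame to luminance and groups directional luminance samples with plain k-means clustering (the clustering method of refs. [25, 26]). This is only a minimal sketch under stated assumptions, not the authors' method: the functions luminance, sample_ring, and kmeans_1d, the ring-sampling scheme, and all parameter values are illustrative.

import numpy as np

def luminance(rgb):
    """Rec. 601 luma as an approximation of luminance for an RGB image (H, W, 3)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def sample_ring(lum, radius, n_samples=90):
    """Sample luminance along a circle of the given pixel radius around the image center."""
    h, w = lum.shape
    cy, cx = h // 2, w // 2
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, h - 1)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, w - 1)
    return lum[ys, xs]

def kmeans_1d(values, k=2, iters=50):
    """Plain 1-D k-means (cf. refs. [25, 26]): returns one cluster label per sample."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centers[j] = values[labels == j].mean()
    return labels

# Synthetic stand-in for a camera frame: a bright upper half and a darker lower half.
img = np.zeros((240, 320, 3))
img[:120] = 200.0
img[120:] = 40.0

ring = sample_ring(luminance(img), radius=100)
print(kmeans_1d(ring))  # viewing directions grouped by the ambient light they receive

Directions that fall in the same cluster receive similar ambient light, which suggests how directional luminance alone, without object detection or recognition, could indicate coarse surrounding structure.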

Cite this article as:
B. A. Mir, T. Sasaki, Y. Nagahata, E. Yamabe, N. Miwa, and K. Terabayashi, “Surrounding Structure Estimation Using Ambient Light,” Int. J. Automation Technol., Vol.15 No.6, pp. 794-803, 2021.
References
[1] J. J. Gibson, “The Senses Considered as Perceptual Systems,” Houghton Mifflin, 1966.
[2] J. J. Gibson, “The Ecological Approach to Visual Perception,” Lawrence Erlbaum Assoc Inc., 1979.
[3] G. A. Kaplan, “Kinetic disruption of optical texture: The perception of depth at an edge,” Perception and Psychophysics, Vol.6, pp. 193-198, 1969.
[4] J. J. Gibson, G. A. Kaplan, H. N. Reynolds, and K. Wheeler, “The change from visible to invisible,” Perception and Psychophysics, Vol.5, pp. 113-116, 1969.
[5] A. Parker, “Birth of the Eye,” translated by M. Watanabe and Y. Imanishi, Soshisha, 2009.
[6] A. G. Shapiro, J. P. Charles, and M. Shear-Heyman, “Visual illusions based on single-field contrast asynchronies,” J. of Vision, Vol.5, Issue 10, Article 2, 2005.
[7] K. Kitagawa, J. Nishio, and H. Takahashi, “Consideration of pictorial cue for depth perception in the room,” Trans. of AIJ, J. of Architecture, Planning and Environmental Engineering, Vol.73, No.627, pp. 987-994, 2008.
[8] E. Ugur, M. R. Dogar, M. Cakmak, and E. Sahin, “The learning and use of traversability affordance using range images on a mobile robot,” Proc. of 2007 IEEE Int. Conf. on Robotics and Automation (ICRA 2007), pp. 1721-1726, 2007.
[9] D. Kim, J. Sun, S. M. Oh, J. M. Rehg, and A. Bobick, “Traversability classification using unsupervised on-line visual learning for outdoor robot navigation,” Proc. of 2006 IEEE Int. Conf. on Robotics and Automation (ICRA 2006), pp. 518-525, 2006.
[10] A. Chemero and M. T. Turvey, “Gibsonian Affordances for Roboticists,” Adaptive Behavior, Vol.15, No.4, pp. 473-480, 2007.
[11] G. Pezzulo and P. Cisek, “Navigating the affordance landscape: feedback control as a process model of behavior and cognition,” Trends Cogn. Sci., Vol.20, No.6, pp. 414-424, 2016.
[12] M. Hassanin, S. Khan, and M. Tahtali, “Visual affordance and function understanding: a survey,” arXiv preprint arXiv:1807.06775, 2018.
[13] G. Fritz, L. Paletta, M. Kumar, G. Dorffner, R. Breithaupt, and E. Rome, “Visual learning of affordance-based cues,” Proc. of the 9th Int. Conf. on From Animals to Animats: Simulation of Adaptive Behavior (SAB’06), pp. 52-64, 2006.
[14] M. Wang, R. Luo, A. O. Onol, and T. Padir, “Affordance-Based Mobile Robot Navigation Among Movable Obstacles,” Proc. of 2020 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2734-2740, 2020.
[15] M. Naseer, S. H. Khan, and F. Porikli, “Indoor Scene Understanding in 2.5/3D for Autonomous Agents: A Survey,” IEEE Access, Vol.7, No.1, pp. 1859-1887, 2019.
[16] H. Grabner, J. Gall, and L. V. Gool, “What makes a chair a chair?,” Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2011), pp. 1529-1536, 2011.
[17] V. Seib, N. Wojke, M. Knauf, and D. Paulus, “Detecting Fine-Grained Affordances with an Anthropomorphic Agent Model,” L. Agapito, M. Bronstein, and C. Rother (Eds.), “Computer Vision – ECCV 2014 Workshops,” Springer, pp. 413-419, 2014.
[18] T. Hermans, J. M. Rehg, and A. Bobick, “Affordance Prediction via Learned Object Attributes,” Proc. of 2011 IEEE Int. Conf. on Robotics and Automation (ICRA 2011), pp. 1-8, 2011.
[19] A. Myers, C. L. Teo, C. Fermüller, and Y. Aloimonos, “Affordance detection of tool parts from geometric features,” Proc. of 2015 IEEE Int. Conf. on Robotics and Automation (ICRA 2015), pp. 1374-1381, 2015.
[20] S. Akizuki, M. Iizuka, K. Kozai, and M. Hashimoto, “Functional attribute estimation of daily necessities based on the likelihood integration of identification results by local features,” J. of the Japan Society of Precision Engineering, Vol.84, No.7, pp. 658-663, 2018.
[21] R. C. Arkin, “Behavior-Based Robotics,” MIT Press, 1998.
[22] Y. Tazaki, “Navigation of mobile robots using topological map representation,” J. of Robotics Society of Japan, Vol.33, No.10, pp. 773-778, 2015.
[23] Y. Nagahata, E. Yamabe, T. Sasaki, and K. Terabayashi, “Proposal of environmental structure estimation method using ambient light array,” The Japan Society for Precision Engineering Hokuriku Shinetsu Branch Academic Lecture Summary, C32, 2019.
[24] Y. Nagahata, E. Yamabe, N. Miwa, B. A. Mir, T. Sasaki, and K. Terabayashi, “Proposal of Surrounding Structure Estimation Method Using Ambient Light,” Proc. of the 18th Int. Conf. on Precision Engineering, C000172, 2020.
[25] J. MacQueen, “Some methods for classification and analysis of multivariate observations,” Proc. of the 5th Berkeley Symp. on Mathematical Statistics and Probability, Vol.1, No.14, pp. 281-297, 1967.
[26] S. Lloyd, “Least squares quantization in PCM,” IEEE Trans. on Information Theory, Vol.28, No.2, pp. 129-137, 1982.
[27] D. Pelleg and A. Moore, “X-means: Extending k-means with efficient estimation of the number of clusters,” Proc. of the 17th Int. Conf. on Machine Learning, pp. 727-734, 2000.
