Location Detection of Informative Bright Region in Tunnel Scenes Using Lighting and Traffic Lane Cues
Jiajun Lu*, Fangyan Dong**, and Kaoru Hirota*
*Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology
G3-49, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan
**Education Academy of Computational Life Sciences, Tokyo Institute of Technology
J3-141, 4259 Nagatsuta, Midori-ku, Yokohama 226-8501, Japan
To locate the Informative Bright Region (IBR), in which visual information is missing owing to the limited dynamic range of the image sensor, an algorithm is proposed that combines the geometric properties of visual cues, namely tunnel lighting and traffic lanes, into a confidence map. The location of an IBR in a road tunnel scene is estimated in real time even under the condition that most of the visual information inside the IBR is lost. The algorithm is evaluated by comparing the estimated IBR location with locations annotated by multiple human observers on a self-built dataset of tunnel scene videos recorded by a car-mounted camera, and it achieves a running time of 10 ms per frame. The algorithm aims to provide control timing for the imaging sensor on a low-cost platform, such as a vehicle driving recorder, to enhance the visual content captured in over-exposed regions.
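As a rough illustration of the cue-combination idea described above (not the paper's actual implementation), the following Python sketch fuses a saturation-based brightness cue with a hypothetical lane-convergence prior into a per-pixel confidence map and returns the confidence-weighted centroid as the candidate IBR location. All function names, thresholds, and the shape of the prior are illustrative assumptions.

```python
# Illustrative sketch only: combine two cue maps into a confidence map
# and locate the candidate IBR as the confidence-weighted centroid.
# Thresholds and the prior shape are assumptions, not values from the paper.

def brightness_cue(frame, threshold=240):
    """1.0 where the pixel is near saturation (likely over-exposed), else 0.0."""
    return [[1.0 if px >= threshold else 0.0 for px in row] for row in frame]

def lane_prior(height, width):
    """Hypothetical geometric prior: the tunnel exit tends to appear where the
    traffic lanes converge, modeled here as the upper-central part of the frame."""
    cx = (width - 1) / 2.0
    prior = []
    for y in range(height):
        row = []
        for x in range(width):
            horiz = 1.0 - abs(x - cx) / max(cx, 1.0)  # peaks at the center column
            vert = 1.0 - y / max(height - 1, 1)       # peaks at the top of the frame
            row.append(horiz * vert)
        prior.append(row)
    return prior

def locate_ibr(frame):
    """Return the (x, y) centroid of the confidence map, or None if no pixel is bright."""
    h, w = len(frame), len(frame[0])
    bright = brightness_cue(frame)
    prior = lane_prior(h, w)
    total = sx = sy = 0.0
    for y in range(h):
        for x in range(w):
            c = bright[y][x] * prior[y][x]  # multiplicative fusion of the two cues
            total += c
            sx += c * x
            sy += c * y
    if total == 0.0:
        return None
    return (sx / total, sy / total)
```

For example, a frame containing a saturated patch near the top-center yields a centroid inside that patch, while a frame with no saturated pixels yields no detection. A real system would replace the hand-made prior with lane geometry estimated from the image.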