
JACIII Vol.17 No.4 pp. 628-636
doi: 10.20965/jaciii.2013.p0628
(2013)

Paper:

Gradient-Related Non-Photorealistic Rendering for High Dynamic Range Images

Jiajun Lu, Fangyan Dong, and Kaoru Hirota

Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan

Received:
December 25, 2012
Accepted:
May 14, 2013
Published:
July 20, 2013
Keywords:
image processing, rendering, high dynamic range, non-photorealistic
Abstract
A non-photorealistic rendering (NPR) method based on elements, usually strokes, is proposed for rendering high dynamic range (HDR) images to mimic the visual perception of human artists and designers. It allows the strokes generated during rendering to be placed accurately, owing to improved computation of gradient values, especially in regions of particularly high or low luminance. Experimental results on a designed test pattern show that the gradient angles obtained from HDR images reduce the averaged error by up to 57.5% compared with those obtained from conventional digital images. A partial experiment on incorporating HDR images into other NPR styles, such as dithering, demonstrates the broad compatibility of HDR images as a source of information for NPR processes.
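
As a rough illustration of the idea described in the abstract (not the authors' implementation), the minimal Python sketch below computes per-pixel gradient angles from the logarithm of an HDR luminance map, so that very bright and very dark regions still yield usable orientation information for stroke placement. The luminance array, the log-domain choice, and the Sobel-based angle computation are all illustrative assumptions.

import numpy as np

def gradient_angles(luminance, eps=1e-6):
    """Per-pixel gradient angles (radians) of a luminance map via Sobel filters."""
    # Work in the log domain so that very bright and very dark HDR regions
    # contribute comparable gradient magnitudes (illustrative choice).
    log_lum = np.log(luminance + eps)

    # 3x3 Sobel kernels for horizontal and vertical derivatives.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    # Pad and correlate manually to keep the sketch dependency-free.
    padded = np.pad(log_lum, 1, mode="edge")
    gx = np.zeros_like(log_lum)
    gy = np.zeros_like(log_lum)
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + log_lum.shape[0], j:j + log_lum.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch

    # Stroke orientation is typically taken perpendicular to the gradient;
    # here we return the raw gradient angle.
    return np.arctan2(gy, gx)

# Toy HDR-like luminance map spanning several orders of magnitude.
lum = np.logspace(-2, 4, 64).reshape(8, 8)
angles = gradient_angles(lum)
print(angles.shape)

On a conventional 8-bit image, saturated highlights and crushed shadows are flat, so a computation of this kind yields unreliable angles there; HDR data preserves the luminance structure that the strokes need to follow.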
Cite this article as:
J. Lu, F. Dong, and K. Hirota, “Gradient-Related Non-Photorealistic Rendering for High Dynamic Range Images,” J. Adv. Comput. Intell. Intell. Inform., Vol.17 No.4, pp. 628-636, 2013.
