
IJAT Vol.12 No.3, pp. 348-355, 2018
doi: 10.20965/ijat.2018.p0348

Paper:

Application of Stochastic Point-Based Rendering to Laser-Scanned Point Clouds of Various Cultural Heritage Objects

Kyoko Hasegawa*, Liang Li*, Naoya Okamoto*, Shu Yanai*, Hiroshi Yamaguchi**, Atsushi Okamoto***, and Satoshi Tanaka*,†

*College of Information Science and Engineering, Ritsumeikan University
1-1-1 Noji-higashi, Kusatsu-shi, Shiga 525-8577, Japan

†Corresponding author

**Nara National Research Institute for Cultural Properties, Nara, Japan

***History Research Institute, Otemae University, Hyogo, Japan

Received:
August 23, 2017
Accepted:
March 16, 2018
Online released:
May 1, 2018
Published:
May 5, 2018
Keywords:
laser-scanned point cloud, transparent rendering, cultural heritage objects
Abstract

Recently, we proposed stochastic point-based rendering, which enables precise, interactive-speed transparent rendering of large-scale laser-scanned point clouds. This transparent visualization method is free of rendering artifacts and provides a correct sense of depth in the created 3D images.
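In stochastic point-based rendering, transparency emerges from local point density rather than depth sorting: if a point projects to area s on a surface patch of area S, and the patch carries n points distributed over L statistically independent rendering ensembles, the resulting opacity is governed by a relation of the form alpha = 1 - (1 - s/S)^(n/L). The sketch below (our own illustration, not the authors' code; the symbols follow the assumed relation above) inverts this to find the point count needed for a target opacity:

```python
import math

def points_for_opacity(alpha, s, S, L=1):
    """Number of points n needed so that a patch of area S, covered by
    points of projected area s over L independent ensembles, is rendered
    with opacity alpha, assuming alpha = 1 - (1 - s/S)**(n / L)."""
    return L * math.log(1.0 - alpha) / math.log(1.0 - s / S)

# e.g., 50% opacity on a patch 10,000 times larger than one point
n = points_for_opacity(0.5, s=1.0, S=10_000.0)
```

Because each ensemble is rendered opaquely and the results are averaged, no depth sorting of semi-transparent primitives is required, which is what makes the method artifact-free at interactive speed.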

In this paper, we apply the method to several kinds of large-scale laser-scanned point clouds of cultural heritage objects and demonstrate its wide applicability.

In addition, we show that better image quality is achieved by appropriately eliminating points so as to improve their distributional uniformity, that is, the uniformity of the distances between nearest-neighbor points.

We also demonstrate that highlighting feature regions, especially edges, in the transparent visualization helps in understanding the 3D internal structures of complex laser-scanned objects. The feature regions are highlighted by appropriately increasing their local opacity.
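A common PCA-based feature measure for detecting such edge regions is the "surface variation" of each point's local neighborhood: the smallest covariance eigenvalue divided by the eigenvalue sum, which is near zero on flat surfaces and grows at edges and corners. Since point density controls opacity in stochastic point-based rendering, local opacity can then be raised simply by duplicating high-feature points. The following is an illustrative sketch under these assumptions, not the authors' implementation (the feature measure, threshold, and boost factor are our own choices):

```python
import numpy as np

def surface_variation(points, k=10):
    """Per-point edge-ness: lambda_min / (l1 + l2 + l3) of the covariance
    of the k nearest neighbors. ~0 on flat regions, larger at edges."""
    feat = np.empty(len(points))
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]       # k nearest (incl. self)
        w = np.linalg.eigvalsh(np.cov(nbrs.T))  # ascending eigenvalues
        feat[i] = w[0] / max(w.sum(), 1e-12)
    return feat

def upsample_edges(points, feat, boost=3, thresh=0.05):
    """Raise local opacity in point-based rendering by duplicating
    points whose feature value exceeds the threshold."""
    reps = np.where(feat > thresh, boost, 1)
    return np.repeat(points, reps, axis=0)
```

A usage pattern would be `upsample_edges(pts, surface_variation(pts))` before rendering, so edge regions receive proportionally more points and hence higher opacity in the transparent image.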

Cite this article as:
K. Hasegawa, L. Li, N. Okamoto, S. Yanai, H. Yamaguchi, A. Okamoto, and S. Tanaka, “Application of Stochastic Point-Based Rendering to Laser-Scanned Point Clouds of Various Cultural Heritage Objects,” Int. J. Automation Technol., Vol.12 No.3, pp. 348-355, 2018.
