JACIII Vol.17 No.4 pp. 504-510
doi: 10.20965/jaciii.2013.p0504


Nearest Prototype and Nearest Neighbor Clustering with Twofold Memberships Based on Inductive Property

Satoshi Takumi and Sadaaki Miyamoto

Department of Risk Engineering, School of Systems and Information Engineering, University of Tsukuba, Ibaraki 305-8573, Japan

Received: February 28, 2013
Accepted: April 12, 2013
Published: July 20, 2013
Keywords: hierarchical clustering, nearest neighbor classification, K-means, inductive clustering, twofold memberships
The aim of this paper is to study methods of twofold membership clustering using the nearest prototype and the nearest neighbor. The former uses K-means, whereas the latter extends the single linkage in agglomerative hierarchical clustering. Moreover, the concept of inductive clustering is used for both methods, meaning that natural classification rules are derived as results of clustering; a typical example is the Voronoi regions in K-means clustering. When the rule of nearest-prototype allocation in K-means is replaced by nearest-neighbor classification, we obtain inductive clustering related to the single linkage in agglomerative hierarchical clustering. The former method uses K-means or fuzzy c-means with noise clusters, whereby twofold memberships are derived; the latter method also derives two memberships, in a different manner. Theoretical properties of both methods are studied, and illustrative examples show the implications and significance of this concept.
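To illustrate the distinction the abstract draws, the two inductive allocation rules can be sketched in Python. This is not the authors' implementation; the function names and the toy data are assumptions for the example. Plain K-means yields a nearest-prototype rule (a new point goes to its Voronoi region), while the single-linkage-related rule labels a new point by its nearest clustered neighbor.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means: alternate nearest-prototype allocation and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # allocate each point to its nearest prototype (Voronoi region)
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def classify_nearest_prototype(x, centers):
    """Inductive rule from K-means: a new point is allocated to the Voronoi
    region of its nearest prototype."""
    return int(np.argmin(((centers - x) ** 2).sum(-1)))

def classify_nearest_neighbor(x, X, labels):
    """Inductive rule related to single linkage: a new point receives the
    cluster label of its nearest clustered point (1-NN classification)."""
    return int(labels[np.argmin(((X - x) ** 2).sum(-1))])
```

On well-separated data the two rules often agree, but they induce different decision boundaries in general: the nearest-prototype rule gives piecewise-linear Voronoi boundaries, whereas the nearest-neighbor rule follows the shapes of the clustered point sets, which is what relates it to the single linkage.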
Cite this article as:
S. Takumi and S. Miyamoto, “Nearest Prototype and Nearest Neighbor Clustering with Twofold Memberships Based on Inductive Property,” J. Adv. Comput. Intell. Intell. Inform., Vol.17 No.4, pp. 504-510, 2013.
