
JACIII Vol.19 No.6 pp. 759-765
doi: 10.20965/jaciii.2015.p0759
(2015)

Paper:

On a Family of New Sequential Hard Clustering

Yukihiro Hamasuna* and Yasunori Endo**

*Department of Informatics, School of Science and Engineering, Kindai University
3-4-1 Kowakae, Higashi-osaka, Osaka 577-8502, Japan

**Faculty of Engineering, Information and Systems, University of Tsukuba
1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan

Received:
April 27, 2015
Accepted:
July 29, 2015
Published:
November 20, 2015
Keywords:
hard c-means, hard c-medoids, sequential cluster extraction, kernel function, noise parameter
Abstract
This paper presents a new algorithm of sequential cluster extraction based on hard c-means and hard c-medoids clustering. Sequential cluster extraction means that the algorithm extracts ‘one cluster at a time.’ A characteristic parameter, called a noise parameter, is used in noise-clustering-based sequential clustering. We propose a novel sequential clustering method, called new sequential clustering, which extracts an arbitrary number of objects as one cluster by treating the noise parameter as a variable to be optimized. Experimental results with four data sets confirm the effectiveness of our proposal. These results also show that classification results strongly depend on the parameter ν and that our proposal is applicable to the first stage of a two-stage clustering algorithm.
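To illustrate the idea of sequential cluster extraction described above, the following is a minimal NumPy sketch of noise-clustering-based extraction with hard c-means and c = 1: objects whose squared distance to the prototype is below a noise parameter ν join the cluster, the rest fall into the noise cluster, and extracted objects are removed before the next pass. Note that this sketch keeps ν fixed; the paper's proposal instead treats ν as a variable to be optimized, which is not reproduced here. All function names are illustrative, not taken from the paper.

```python
import numpy as np

def extract_one_cluster(X, nu, max_iter=100, seed=0):
    """One pass of noise-clustering-based hard clustering with c = 1.

    Objects with squared distance to the prototype below nu join the
    cluster; all others are assigned to the noise cluster.
    """
    rng = np.random.default_rng(seed)
    v = X[rng.integers(len(X))]          # initial prototype: a random object
    for _ in range(max_iter):
        d2 = np.sum((X - v) ** 2, axis=1)
        member = d2 < nu                 # noise rule: join cluster iff d^2 < nu
        v_new = X[member].mean(axis=0)   # hard c-means prototype update
        if np.allclose(v, v_new):        # prototype stabilized: stop
            break
        v = v_new
    return member

def sequential_clustering(X, nu):
    """Extract clusters 'one at a time' until no objects remain."""
    idx = np.arange(len(X))
    clusters = []
    while len(idx) > 0:
        member = extract_one_cluster(X[idx], nu)
        clusters.append(idx[member])     # indices of the extracted cluster
        idx = idx[~member]               # remove extracted objects, repeat
    return clusters
```

As the abstract notes, the outcome of such a procedure depends strongly on ν: a small ν fragments the data into many small clusters, while a large ν merges distant objects into one extraction.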
Cite this article as:
Y. Hamasuna and Y. Endo, “On a Family of New Sequential Hard Clustering,” J. Adv. Comput. Intell. Intell. Inform., Vol.19 No.6, pp. 759-765, 2015.
References
  1. [1] A. K. Jain, “Data clustering: 50 years beyond K-means,” Pattern Recognition Letters, Vol.31, No.8, pp. 651-666, 2010.
  2. [2] J. C. Bezdek, “Pattern Recognition with Fuzzy Objective Function Algorithms,” Plenum Press, New York, 1981.
  3. [3] S. Miyamoto, H. Ichihashi, and K. Honda, “Algorithms for Fuzzy Clustering,” Springer, Heidelberg, 2008.
  4. [4] R. Krishnapuram and J. M. Keller, “A possibilistic approach to clustering,” IEEE Trans. on Fuzzy Systems, Vol.1, No.2, pp. 98-110, 1993.
  5. [5] R. N. Davé and R. Krishnapuram, “Robust clustering methods: A unified view,” IEEE Trans. on Fuzzy Systems, Vol.5, No.2, pp. 270-293, 1997.
  6. [6] S. Miyamoto, Y. Kuroda, and K. Arai, “Algorithms for Sequential Extraction of Clusters by Possibilistic Method and Comparison with Mountain Clustering,” J. of Advanced Computational Intelligence and Intelligent Informatics (JACIII), Vol.12, No.5, pp. 448-453, 2008.
  7. [7] R. N. Davé, “Characterization and detection of noise in clustering,” Pattern Recognition Letters, Vol.12, No.11, pp. 657-664, 1991.
  8. [8] W. Wang and Y. Zhang, “On fuzzy cluster validity indices,” Fuzzy Sets and Systems, Vol.158, No.19, pp. 2095-2117, 2007.
  9. [9] R. R. Yager and D. Filev, “Approximate clustering via the mountain method,” IEEE Trans. on Systems, Man, and Cybernetics, Vol.24, No.8, pp. 1279-1284, 1994.
  10. [10] Y. Hamasuna and Y. Endo, “Sequential Extraction By Using Two Types of Crisp Possibilistic Clustering,” Proc. of the IEEE Int. Conf. on Systems, Man, and Cybernetics (IEEE SMC 2013), pp. 3505-3510, 2013.
  11. [11] R. Krishnapuram, A. Joshi, O. Nasraoui, and L. Yi, “Low-complexity fuzzy relational clustering algorithms for Web mining,” IEEE Trans. on Fuzzy Systems, Vol.9, No.4, pp. 595-607, 2001.
  12. [12] S. Miyamoto and K. Arai, “Different Sequential Clustering Algorithms and Sequential Regression Models,” Proc. of 2009 IEEE Int. Conf. on Fuzzy Systems (FUZZ-IEEE 2009), pp. 1107-1112, 2009.
  13. [13] L. Kaufman and P. J. Rousseeuw, “Finding Groups in Data: An Introduction to Cluster Analysis,” Wiley, New York, 1990.
  14. [14] M. P. Windham, “Numerical classification of proximity data with assignment measures,” J. of Classification, Vol.2, pp. 157-172, 1985.
  15. [15] M. Girolami, “Mercer kernel-based clustering in feature space,” IEEE Trans. on Neural Networks, Vol.13, No.3, pp. 780-784, 2002.
  16. [16] Y. Endo, H. Haruyama, and T. Okubo, “On some hierarchical clustering algorithms using kernel functions,” Proc. of IEEE Int. Conf. on Fuzzy Systems (FUZZ-IEEE2004), pp. 1513-1518, 2004.
  17. [17] S. Miyamoto and Y. Nakayama, “Algorithms of Hard c-Means Clustering Using Kernel Functions in Support Vector Machines,” J. of Advanced Computational Intelligence and Intelligent Informatics (JACIII), Vol.7, No.1, pp. 19-24, 2003.
  18. [18] S. Miyamoto and A. Terami, “Semi-supervised agglomerative hierarchical clustering algorithms with pairwise constraints,” Proc. of 2010 IEEE Int. Conf. on Fuzzy Systems (FUZZ-IEEE 2010), 2010.
  [uci] M. Lichman, UCI Machine Learning Repository, http://archive.ics.uci.edu/ml [Accessed April 1, 2015], Irvine, CA: University of California, School of Information and Computer Science, 2013.
  19. [19] W. M. Rand, “Objective criteria for the evaluation of clustering methods,” J. of the American Statistical Association, Vol.66, No.336, pp. 846-850, 1971.
  20. [20] Y. Tamura and S. Miyamoto, “Two-stage clustering using one-pass K-medoids and medoid-based agglomerative hierarchical algorithms,” Joint 7th Int. Conf. on Soft Computing and Intelligent Systems and 15th Int. Symp. on Advanced Intelligent Systems (SCIS&ISIS2014), pp. 484-488, 2014.
