Special Issue on Recent Methodological Developments in Fuzzy Clustering and Related Topics
Professor Emeritus, University of Tsukuba, Japan
Various applications of data analysis and their effects have been reported recently. With the remarkable progress in classification methods, support vector machines being one example, clustering as the main method of unsupervised classification has also been studied extensively. Consequently, fuzzy methods of clustering are becoming a standard technique. However, unsolved theoretical and methodological problems in fuzzy clustering remain and must be studied more deeply. This issue collects five papers concerned with fuzzy clustering and related fields, and in all of them the main interest is methodology.

Kondo and Kanzawa consider fuzzy clustering with a new objective function based on q-divergence, a generalization of the well-known Kullback-Leibler divergence. Among the different data types, they focus on categorical data. They also show the relations among different fuzzy c-means methods. This study thus further generalizes fuzzy clustering methods, exploring the methodological boundaries of what fuzzy clustering models can achieve.

Kitajima, Endo, and Hamasuna propose a method of controlling cluster sizes so that the resulting clusters are of even size, which differs from the optimization of cluster sizes dealt with in other studies. This technique broadens the application fields of clustering to those in which cluster sizes are more important than cluster shapes.

Hamasuna et al. study cluster validity measures for network data. Such measures are generally proposed for points in Euclidean spaces, but the authors consider their application to network data. Several validity measures are modified and adapted to network data, and their effectiveness is examined using simple network examples.

Ubukata et al. propose a new c-means method related to rough sets, based on an idea different from the well-known rough c-means of Lingras.
Finally, Kusunoki, Wakou, and Tatsumi study the maximum margin model for the nearest prototype classifier, which leads to the optimization of a difference of convex functions. All of the papers include methodologically important ideas that deserve further investigation and application to real-world problems.