Adaptive Vector Quantization with Creation and Reduction Grounded in the Equinumber Principle
Michiharu Maeda*, Noritaka Shigei**, and Hiromi Miyajima**
*Kurume National College of Technology, Kurume 830-8555, Japan
**Kagoshima University, Kagoshima 890-0065, Japan
This paper concerns the construction of unit structures in neural networks for adaptive vector quantization. When the numbers of inputs in the partition regions are equal, the partition errors are equal and the average distortion is asymptotically minimized; we term this the equinumber principle. Based on this principle, two types of adaptive vector quantization are presented that avoid dependence on the initial reference vectors. Conventional techniques, such as structural learning with forgetting, keep the same number of output units from start to finish. In contrast, our approach explicitly changes the number of output units until a predetermined number is reached, without neighboring relations, so as to equalize the numbers of inputs in the partition regions. In the first technique, output units are sequentially created during learning according to the equinumber principle; in the second, output units are sequentially deleted until the prespecified number is reached. Experimental results demonstrate the effectiveness of both techniques in terms of average distortion. The approaches are also applied to image data, and their feasibility in image coding is confirmed.
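The abstract does not specify the learning rules themselves, so the following is only a rough competitive-learning sketch of the two ideas: growing the codebook by splitting the busiest partition (creation type) and shrinking it by removing the least-used unit (deletion type), in both cases steering toward equal input counts per partition. The function names, the splitting and deletion heuristics, and all parameters are illustrative assumptions, not the authors' algorithms.

```python
import random


def dist2(x, c):
    # Squared Euclidean distance between an input and a reference vector.
    return sum((a - b) ** 2 for a, b in zip(x, c))


def vq_creation(data, target, epochs=30, lr=0.05, seed=0):
    # Creation-type sketch (assumed heuristic): start from one unit and
    # split the most-loaded partition each epoch until `target` is reached,
    # so that input counts per partition stay roughly balanced.
    rng = random.Random(seed)
    book = [list(rng.choice(data))]
    for _ in range(epochs):
        wins = [0] * len(book)
        for x in data:
            j = min(range(len(book)), key=lambda k: dist2(x, book[k]))
            wins[j] += 1
            # Move the winning reference vector toward the input.
            book[j] = [c + lr * (xi - c) for c, xi in zip(book[j], x)]
        if len(book) < target:
            j = max(range(len(book)), key=wins.__getitem__)
            # Create a slightly perturbed copy of the busiest unit.
            book.append([c + rng.uniform(-1e-3, 1e-3) for c in book[j]])
    return book


def vq_deletion(data, target, initial=8, epochs=30, lr=0.05, seed=0):
    # Deletion-type sketch (assumed heuristic): start with surplus units
    # and remove the least-used one each epoch until `target` remain.
    rng = random.Random(seed)
    book = [list(rng.choice(data)) for _ in range(initial)]
    for _ in range(epochs):
        wins = [0] * len(book)
        for x in data:
            j = min(range(len(book)), key=lambda k: dist2(x, book[k]))
            wins[j] += 1
            book[j] = [c + lr * (xi - c) for c, xi in zip(book[j], x)]
        if len(book) > target:
            j = min(range(len(book)), key=wins.__getitem__)
            del book[j]
    return book
```

Both sketches end with exactly the prespecified number of reference vectors; the win counts play the role of the per-partition input counts that the equinumber principle seeks to equalize.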
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.