Translated Multiplicative Neuron: An Extended Multiplicative Neuron that can Translate Decision Surfaces
Eduardo Masato Iyoda, Hajime Nobuhara, and Kaoru Hirota
Department of Computational Intelligence and Systems Science, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology
G3-49, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan
A multiplicative neuron model called the translated multiplicative neuron (πt-neuron) is proposed. Compared to the traditional π-neuron, the πt-neuron presents two advantages: (1) it can generate decision surfaces centered at any point of its input space, and (2) it has a meaningful set of adjustable parameters. Learning rules for πt-neurons are derived using the error backpropagation procedure. It is shown that the XOR and N-bit parity problems can be solved perfectly using a single πt-neuron, with no need for hidden neurons. The πt-neuron is also evaluated on Hwang's regression benchmark problems, where neural networks with πt-neurons in the hidden layer outperform conventional multilayer perceptrons (MLPs) in almost all cases: errors are reduced by an average of 58% using about 33% fewer hidden neurons than the MLPs.
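The parity claim in the abstract can be illustrated with a minimal sketch. The product form over translated inputs, the zero-threshold activation, and the weight choice below are assumptions made for illustration, not necessarily the paper's exact formulation: for binary inputs, each translated factor (x_i − 0.5) is ±0.5, so the sign of the product tracks the number of ones, and a single such neuron reproduces N-bit parity.

```python
import itertools


def pi_t_neuron(x, t, w):
    # Hypothetical realization of a translated multiplicative neuron:
    # a weighted product of translated inputs, thresholded at zero.
    v = w
    for xi, ti in zip(x, t):
        v *= xi - ti
    return 1 if v > 0 else 0


def parity(x):
    return sum(x) % 2


# With every translation set to 0.5, each factor (x_i - 0.5) is +/-0.5,
# so the product's sign flips with each additional one-bit; the weight
# sign (-1)**(N + 1) aligns the output with odd parity.
for n in (2, 3, 4):
    t = [0.5] * n
    w = (-1.0) ** (n + 1)
    assert all(pi_t_neuron(x, t, w) == parity(x)
               for x in itertools.product([0, 1], repeat=n))
```

For N = 2 this is exactly XOR: no hidden layer is needed because the multiplicative interaction, centered by the translations, is itself nonlinear in the inputs.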
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.