Paper:
Solution Space and BP Learning Behavior of Multilayer Networks Whose Units Are Different in Polarity
Hiromu Gotanda, Yoshihiro Ueda and Hiroshi Siratsuchi
Faculty of Engineering, Kinki University in Kyushu, 11-6, Kayanomori, Iizuka-shi, Fukuoka, 820 Japan
Received: May 22, 1995; Accepted: June 2, 1995; Published: August 20, 1995
Keywords:Multilayer network, Back propagation learning, Sigmoid’s polarity, Solution space size, Convergence
Abstract
This paper studies the size of the solution space and the convergence behavior of BP learning for unipolar networks, whose units activate from 0 to 1, and for bipolar networks, whose units activate from -0.5 to 0.5. It points out that both networks exhibit the same error characteristics for the presented patterns if the input space of each layer is divided geometrically in an equivalent manner by the separating hyperplanes and active regions. This implies that their solution spaces are of equal size irrespective of the network's polarity. However, even if learning commences under such equivalent separation, their convergence behavior differs. On the other hand, two networks whose hidden units are of identical polarity but whose output units are of different polarity yield the same convergence behavior when their initial weights and biases are equal. Simulation results show that, for large networks, bipolar networks tend to converge better than unipolar networks even if their input spaces are equivalently separated at the outset of learning.
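The equivalence claim in the abstract rests on the observation that a sigmoid activating in (-0.5, 0.5) is the unipolar sigmoid shifted down by 0.5, and that this shift can be absorbed into the bias of the next layer so that the same separating hyperplanes are realized. The following minimal sketch (not from the paper; it assumes standard logistic sigmoids and NumPy, with all variable names hypothetical) illustrates that bias compensation.

```python
import numpy as np

# Illustrative sketch, assuming logistic sigmoids: a unipolar unit
# activating in (0, 1) and a bipolar unit activating in (-0.5, 0.5).
def unipolar(x):
    return 1.0 / (1.0 + np.exp(-x))

def bipolar(x):
    return unipolar(x) - 0.5  # same shape, shifted down by 0.5

# If the hidden outputs shift from h to h - 0.5, a next-layer unit realizes
# the same net input (hence the same separating hyperplane) by shifting its
# bias:  w.h + b  ==  w.(h - 0.5) + (b + 0.5 * sum(w)).
rng = np.random.default_rng(0)
w = rng.normal(size=5)      # hypothetical next-layer weights
b = 0.3                     # hypothetical next-layer bias
x = rng.normal(size=5)      # hypothetical hidden pre-activations

net_uni = w @ unipolar(x) + b
net_bip = w @ bipolar(x) + (b + 0.5 * w.sum())  # compensated bias
print(np.isclose(net_uni, net_bip))             # True: identical net input
```

Under this correspondence the two polarities admit geometrically equivalent separations of each layer's input space, which is why their solution spaces are of equal size even though, as the abstract notes, their BP convergence behavior can still differ.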
Cite this article as: H. Gotanda, Y. Ueda, and H. Siratsuchi, “Solution Space and BP Learning Behavior of Multilayer Networks Whose Units Are Different in Polarity,” J. Robot. Mechatron., Vol.7 No.4, pp. 336-343, 1995.