Computing Higher Order Derivatives in Universal Learning Networks
Kotaro Hirasawa, Jinglu Hu, Masanao Ohbayashi, and Junichi Murata
Department of Electrical and Electronic Systems Engineering, Kyushu University, Hakozaki, Fukuoka 812-81, Japan
Received: October 27, 1997 / Accepted: March 3, 1998 / Published: April 20, 1998
Keywords: learning network, neural network, higher order derivative, backward propagation, forward propagation
Abstract
This paper discusses the computation of higher order derivatives for universal learning networks, which form a superset of all kinds of neural networks. Two computing algorithms, backward propagation and forward propagation, are proposed. A technique called "local description" is used to express the proposed algorithms very simply. Numerical simulations demonstrate the usefulness of higher order derivatives in neural network training.
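The abstract's forward propagation of higher order derivatives can be illustrated, under modern terminology, by forward-mode automatic differentiation with second-order Taylor coefficients ("jets"). The sketch below is not the paper's algorithm but a minimal analogue, assuming a single node y = tanh(w·x) whose first and second derivatives with respect to the weight w are propagated forward alongside the value; the class and function names are hypothetical.

```python
import math

class Jet:
    """Carries (f, f', f'') -- a value and its first two derivatives
    with respect to one chosen parameter, propagated forward."""
    def __init__(self, f, d1=0.0, d2=0.0):
        self.f, self.d1, self.d2 = f, d1, d2

    def __mul__(self, other):
        other = other if isinstance(other, Jet) else Jet(other)
        # product rule to second order: (uv)'' = u''v + 2u'v' + uv''
        return Jet(self.f * other.f,
                   self.f * other.d1 + self.d1 * other.f,
                   self.f * other.d2 + 2 * self.d1 * other.d1 + self.d2 * other.f)
    __rmul__ = __mul__

def tanh(j):
    t = math.tanh(j.f)
    dt = 1.0 - t * t        # d/du tanh(u)
    ddt = -2.0 * t * dt     # d^2/du^2 tanh(u)
    # chain rule to second order: (g(u))'' = g''(u) u'^2 + g'(u) u''
    return Jet(t, dt * j.d1, ddt * j.d1 ** 2 + dt * j.d2)

# node output y = tanh(w * x); differentiate with respect to w
x = 0.7
w = Jet(0.5, 1.0, 0.0)      # seed: dw/dw = 1, d^2w/dw^2 = 0
y = tanh(w * x)             # y.d1 = dy/dw, y.d2 = d^2y/dw^2
```

A second derivative obtained this way can feed training schemes that exploit curvature, which is the kind of use the simulations in the paper demonstrate.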
Cite this article as: K. Hirasawa, J. Hu, M. Ohbayashi, and J. Murata, “Computing Higher Order Derivatives in Universal Learning Networks,” J. Adv. Comput. Intell. Intell. Inform., Vol.2 No.2, pp. 47-53, 1998.