JACIII Vol.2 No.2 pp. 47-53
doi: 10.20965/jaciii.1998.p0047


Computing Higher Order Derivatives in Universal Learning Networks

Kotaro Hirasawa, Jinglu Hu, Masanao Ohbayashi, and Junichi Murata

Department of Electrical and Electronic Systems Engineering, Kyushu University, Hakozaki, Fukuoka 812-81, Japan

Received: October 27, 1997
Accepted: March 3, 1998
Published: April 20, 1998
Keywords: learning network, neural network, higher order derivative, backward propagation, forward propagation

This paper discusses the computation of higher order derivatives in universal learning networks, which form a superset of all kinds of neural networks. Two computing algorithms, backward propagation and forward propagation, are proposed. A technique called "local description" allows the proposed algorithms to be expressed very simply. Numerical simulations demonstrate the usefulness of higher order derivatives in neural network training.
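To give a flavor of forward propagation of higher order derivatives, the following is a minimal illustrative sketch (not the paper's universal learning network algorithm): nested dual numbers propagate first and second order derivatives forward through a one-node network y = tanh(w·x). All class and variable names here are hypothetical, chosen only for this example.

```python
import math

class Dual:
    """Forward-mode differentiation via dual numbers.
    Nesting a Dual inside a Dual propagates second-order derivatives."""
    def __init__(self, val, dot=0.0):
        self.val = val   # function value (may itself be a Dual)
        self.dot = dot   # derivative carried alongside the value

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __sub__(self, other):
        other = self._wrap(other)
        return Dual(self.val - other.val, self.dot - other.dot)

    def __rsub__(self, other):
        return self._wrap(other).__sub__(self)

    def __mul__(self, other):  # product rule
        other = self._wrap(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def tanh(x):
    """tanh extended to Dual arguments via the chain rule."""
    if isinstance(x, Dual):
        t = tanh(x.val)
        return Dual(t, (1 - t * t) * x.dot)  # d/dx tanh = 1 - tanh^2
    return math.tanh(x)

# One-node network y = tanh(w * x); derivatives taken with respect to w.
w, x = 0.5, 1.2
ww = Dual(Dual(w, 1.0), 1.0)   # nested seed: first- and second-order tracking
y = tanh(ww * x)

f   = y.val.val   # y
df  = y.val.dot   # dy/dw
d2f = y.dot.dot   # d^2y/dw^2
print(f, df, d2f)
```

Here each multiplication and activation forwards derivative information together with the value, so all derivatives up to the chosen order emerge in a single forward pass; reverse (backward) propagation would instead accumulate them from the output back toward the parameters.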

Cite this article as:
K. Hirasawa, J. Hu, M. Ohbayashi, and J. Murata, “Computing Higher Order Derivatives in Universal Learning Networks,” J. Adv. Comput. Intell. Intell. Inform., Vol.2, No.2, pp. 47-53, 1998.
