JACIII Vol.2 No.2 pp. 47-53
doi: 10.20965/jaciii.1998.p0047
(1998)

Paper:

Computing Higher Order Derivatives in Universal Learning Networks

Kotaro Hirasawa, Jinglu Hu, Masanao Ohbayashi, and Junichi Murata

Department of Electrical and Electronic Systems Engineering, Kyushu University, Hakozaki, Fukuoka 812-81, Japan

Received:
October 27, 1997
Accepted:
March 3, 1998
Published:
April 20, 1998
Keywords:
learning network, neural network, higher order derivative, backward propagation, forward propagation
Abstract
This paper discusses the computation of higher order derivatives in universal learning networks, which form a superset of all kinds of neural networks. Two computing algorithms, backward and forward propagation, are proposed. A technique called "local description" allows the proposed algorithms to be expressed very simply. Numerical simulations demonstrate the usefulness of higher order derivatives in neural network training.
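The paper's own propagation algorithms for universal learning networks are not reproduced in this abstract. As a generic, hand-rolled sketch of the underlying idea of propagating higher order derivatives forward through a network node (all names here are illustrative, not from the paper), one might carry a truncated Taylor triple (value, first, second derivative) through each operation:

```python
import math

class Taylor2:
    """Truncated Taylor triple (f, f', f'') for second-order forward-mode
    derivative propagation. Illustrative only; not the paper's algorithm."""
    def __init__(self, v, d1=0.0, d2=0.0):
        self.v, self.d1, self.d2 = v, d1, d2

    def __mul__(self, other):
        o = other if isinstance(other, Taylor2) else Taylor2(other)
        # Product rule and its derivative: (fg)' = f'g + fg',
        # (fg)'' = f''g + 2 f'g' + fg''
        return Taylor2(self.v * o.v,
                       self.d1 * o.v + self.v * o.d1,
                       self.d2 * o.v + 2 * self.d1 * o.d1 + self.v * o.d2)
    __rmul__ = __mul__

def tanh(t):
    """Propagate (f, f', f'') through y = tanh(u) via the chain rule:
    y' = (1 - y^2) u',  y'' = (1 - y^2) u'' - 2 y (1 - y^2) u'^2."""
    y = math.tanh(t.v)
    s = 1.0 - y * y
    return Taylor2(y, s * t.d1, s * t.d2 - 2.0 * y * s * t.d1 * t.d1)

# A one-node "network" y = tanh(w * x); differentiate twice w.r.t. w.
x = 0.5
w = Taylor2(0.8, d1=1.0)  # seed dw/dw = 1, d2w/dw2 = 0
y = tanh(w * x)
print(y.v, y.d1, y.d2)    # value, dy/dw, d2y/dw2
```

For this example the result can be checked analytically: dy/dw = (1 - tanh²(wx))·x and d²y/dw² = -2·tanh(wx)·(1 - tanh²(wx))·x².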
Cite this article as:
K. Hirasawa, J. Hu, M. Ohbayashi, and J. Murata, “Computing Higher Order Derivatives in Universal Learning Networks,” J. Adv. Comput. Intell. Intell. Inform., Vol.2 No.2, pp. 47-53, 1998.