Memoirs of the Graduate Schools of Engineering and System Informatics Kobe University, No. 6, pp. 13-17, 2014

A Fast Incremental Learning Algorithm for Feed-forward Neural Networks Using Resilient Propagation

Annie anak Joseph and Seiichi Ozawa

Department of Electrical and Electronic Engineering, Graduate School of Engineering, Kobe University

(Received May 27, 2014; Accepted June 24, 2014; Online published June 30, 2014)

Keywords: Incremental Learning, Online Classifier, Gradient Descent, Feed-forward Neural Network, Back-propagation

Fast learning under incremental learning environments is very important in real situations, as data are generated rapidly over time. However, previously trained input-output relationships tend to be destroyed when new data are learned; the information on the previous data is therefore lost when the new data are drawn from a different data distribution. This phenomenon is called "interference" or "catastrophic forgetting." To alleviate this problem, Resource Allocating Network with Long-Term Memory (RAN-LTM) was proposed by Kobayashi et al. to suppress the interference. In the original RAN-LTM, both new training data and memory items are trained with the gradient descent method. However, gradient descent usually leads to slow learning even for simple problems. On the other hand, Resilient Back-propagation (R-prop) performs a direct adaptation of the step size of each weight update based on local gradient information. The principal idea of R-prop is that the signs of two consecutive partial derivatives determine how a connection weight is updated. When the signs of two consecutive partial derivatives coincide, the update value is increased in order to accelerate learning in shallow regions of the error surface. In contrast, when the signs differ, the update value is decreased by a decrease factor. These procedures significantly reduce the number of learning steps compared to the original gradient descent method. Considering these advantages of R-prop, we propose a fast incremental learning algorithm for feed-forward neural networks in which the gradient descent method in RAN-LTM is accelerated based on R-prop. The performance of the proposed method is evaluated on several data sets, and the results demonstrate that the learning time of the extended RAN-LTM is greatly reduced compared to that of the original RAN-LTM.
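The sign-based update rule summarized above can be sketched as follows. This is a minimal, hypothetical scalar illustration of the standard R-prop rule, not code from the paper; the increase/decrease factors and step bounds are the commonly used defaults from the R-prop literature, which this abstract does not specify.

```python
def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_max=50.0, step_min=1e-6):
    """One R-prop step for a single connection weight.

    Compares the sign of the current and previous partial derivatives
    to adapt the per-weight step size directly, as described in the
    abstract. Parameter values are conventional defaults (assumption).
    """
    if grad * prev_grad > 0:
        # Same sign on two consecutive steps: grow the step size
        # to accelerate learning in shallow regions.
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:
        # Sign changed: the last update jumped over a minimum, so
        # shrink the step and skip this weight update (Rprop- variant).
        step = max(step * eta_minus, step_min)
        return w, 0.0, step
    # Move the weight against the sign of the current gradient.
    if grad > 0:
        w -= step
    elif grad < 0:
        w += step
    return w, grad, step
```

Because only the sign of the gradient is used, the magnitude of the partial derivative never scales the update, which is what lets R-prop escape the flat regions where plain gradient descent crawls.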
