Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJACSA.2014.050317
Article Published in International Journal of Advanced Computer Science and Applications (IJACSA), Volume 5, Issue 3, 2014.
Abstract: The tangent plane algorithm for real time recurrent learning (TPA-RTRL) is an effective online training method for fully recurrent neural networks. TPA-RTRL uses the method of approaching tangent planes to accelerate the learning process. Compared to the original gradient descent real time recurrent learning algorithm (GD-RTRL), it is very fast and avoids problems such as local minima in the search space. However, the TPA-RTRL algorithm actively encourages the formation of large weight values, which can be harmful to generalization. This paper presents a new TPA-RTRL variant that encourages small weight values to decay to zero by using a weight elimination procedure built into the geometry of the algorithm. Experimental results show that the new algorithm gives good generalization over a range of network sizes whilst retaining the fast convergence speed of the TPA-RTRL algorithm.
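The paper builds weight elimination into the geometry of the tangent plane algorithm itself; as a loose illustration only, the sketch below shows the standard weight-elimination penalty (in the form popularized by Weigend et al.) attached to an ordinary gradient step. All names and parameter values here are illustrative assumptions, not the paper's formulation: the penalty w²/(w₀² + w²) is near zero for small weights and saturates toward 1 for large ones, so its gradient pushes small weights toward zero while leaving large, useful weights comparatively untouched.

```python
import math

# Illustrative sketch (not the paper's method): the weight-elimination
# penalty term  P(w) = w^2 / (w0^2 + w^2)  added to the training error,
# E_total = E + lam * sum_i P(w_i).  `w0`, `lam`, and `lr` are assumed
# hyperparameters chosen here for illustration.

def elimination_penalty(w, w0=1.0):
    """Penalty contribution of a single weight."""
    return w * w / (w0 * w0 + w * w)

def elimination_gradient(w, w0=1.0):
    """d/dw of the penalty: 2*w*w0^2 / (w0^2 + w^2)^2."""
    return 2.0 * w * w0 * w0 / (w0 * w0 + w * w) ** 2

def weight_update(w, grad_error, lam=0.01, lr=0.1, w0=1.0):
    """One plain gradient-descent step with the penalty included."""
    return w - lr * (grad_error + lam * elimination_gradient(w, w0))
```

Note that the penalty's gradient is largest for weights of magnitude around w₀ and falls off for much larger weights, which is what lets the procedure prune small weights without shrinking the large ones a trained network relies on.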
P. May, E. Zhou and C. W. Lee, "Improved Generalization in Recurrent Neural Networks Using the Tangent Plane Algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), 5(3), 2014. http://dx.doi.org/10.14569/IJACSA.2014.050317