
JACIII Vol.21 No.5 pp. 778-784
doi: 10.20965/jaciii.2017.p0778
(2017)

Paper:

Robustness Analyses and Optimal Sampling Gap of Recurrent Neural Network for Dynamic Matrix Pseudoinversion

Bolin Liao* and Qiuhong Xiang**

*College of Information Science and Engineering, Jishou University
Jishou, Hunan 416000, China

**College of Mathematics and Statistics, Jishou University
Jishou, Hunan 416000, China

Received: January 8, 2017
Accepted: May 29, 2017
Published: September 20, 2017

Keywords: performance analysis, robustness, optimal sampling gap, Zhang neural network (ZNN), dynamic matrix pseudoinverse
Abstract

This study analyses the robustness and convergence characteristics of a recurrent neural network for dynamic matrix pseudoinversion. First, a special class of recurrent neural network (RNN), termed the continuous-time Zhang neural network (CTZNN) model, is presented and investigated for dynamic matrix pseudoinversion. Theoretical analysis demonstrates that the CTZNN model is robust against various types of noise. In addition, considering the requirements of digital implementation and online computation, the optimal sampling gap for the discrete-time Zhang neural network (DTZNN) model in noisy environments is derived. Finally, experimental results are presented that substantiate the theoretical analyses and demonstrate the effectiveness of the proposed ZNN models for computing a dynamic matrix pseudoinverse in noisy environments.
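To make the abstract's idea concrete, the following is a minimal NumPy sketch of an Euler-discretized ZNN (a simple DTZNN) tracking the pseudoinverse of a time-varying matrix. It is illustrative only, not the paper's exact model: the test matrix A(t), the design parameter gamma, and the sampling gap tau are all assumed values. For a full-column-rank A(t), the sketch tracks Y(t) ≈ (AᵀA)⁻¹ via the Zhang function E = (AᵀA)Y − I, so that X = Y Aᵀ approximates the Moore-Penrose pseudoinverse.

```python
import numpy as np

def A(t):
    # Illustrative smoothly time-varying 3x2 matrix with full column rank
    # (assumed example, not from the paper).
    return np.array([[np.sin(t), np.cos(t)],
                     [np.cos(t), np.sin(t)],
                     [1.0, 0.0]])

def M(t):
    # M(t) = A(t)^T A(t); for full column rank, pinv(A) = M^{-1} A^T.
    return A(t).T @ A(t)

def M_dot(t, d=1e-6):
    # Central-difference estimate of dM/dt.
    return (M(t + d) - M(t - d)) / (2.0 * d)

tau, gamma = 1e-3, 100.0   # sampling gap and ZNN design parameter (assumed values)
Y = np.eye(2)              # state: estimate of M(t)^{-1}
t = 0.0
for _ in range(4000):
    E = M(t) @ Y - np.eye(2)                     # Zhang (error) function E = M Y - I
    # ZNN design dE/dt = -gamma * E gives Ydot = -M^{-1} (Mdot Y + gamma E);
    # M^{-1} is approximated by the current state Y.
    Y_dot = -Y @ M_dot(t) @ Y - gamma * (Y @ E)
    Y = Y + tau * Y_dot                          # Euler step -> discrete-time model
    t += tau

X = Y @ A(t).T             # dynamic pseudoinverse estimate X(t) ~ pinv(A(t))
err = np.linalg.norm(X - np.linalg.pinv(A(t)))
print(f"tracking error at t={t:.1f}: {err:.2e}")
```

The derivative-compensation term -Y Ṁ Y is what lets the network track a moving target rather than chase it with a lag; the product gamma*tau plays the role of the step size, which is why the sampling gap matters in noisy digital implementations.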



Last updated on Oct. 16, 2017