
JACIII Vol.24 No.2, pp. 185-198 (2020)
doi: 10.20965/jaciii.2020.p0185

Paper:

Self-Structured Cortical Learning Algorithm by Dynamically Adjusting Columns and Cells

Sotetsu Suzugamine, Takeru Aoki, Keiki Takadama, and Hiroyuki Sato

Graduate School of Informatics and Engineering, The University of Electro-Communications
1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan

Received: June 3, 2019
Accepted: December 6, 2019
Published: March 20, 2020
Keywords: time-series data prediction, cortical learning algorithm, self-structuring algorithm
Abstract

The cortical learning algorithm (CLA) is a type of time-series data prediction algorithm modeled on the human neocortex. A CLA uses multiple columns to represent an input data value at each timestep, and each column has multiple cells to represent the time-series context of the input data. In the conventional CLA, the numbers of columns and cells are user-defined parameters, and their appropriate values depend on the input data, which may be unknown before learning. To avoid having to set these parameters beforehand, we propose a self-structured CLA that dynamically adjusts the numbers of columns and cells according to the input data. Experimental results using the time-series test inputs of a sine wave, a combined sine wave, and logistic map data demonstrate that the proposed self-structured algorithm can dynamically adjust the numbers of columns and cells depending on the input data. Moreover, its prediction accuracy is higher than those of the conventional long short-term memory (LSTM) network and of CLAs with various fixed numbers of columns and cells. Furthermore, experimental results on a multistep prediction of real-world power consumption show that the proposed self-structured CLA achieves a higher prediction accuracy than the conventional LSTM.
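
As a rough, hypothetical illustration of the column/cell structure described in the abstract (not the authors' published algorithm), the Python sketch below grows a new column whenever an input value falls outside the range covered by the existing columns, and grows cells inside a column whenever a new temporal context appears. All class names, parameters, and the growth rules are invented for this example; the actual self-structured CLA combines this idea with the spatial and temporal pooling of HTM.

# Toy sketch only: shows the *idea* of data-dependent column/cell growth.
# Names (Column, ToySelfStructuredCLA, resolution, context_id) are hypothetical.
import math


class Column:
    def __init__(self, center, n_cells=1):
        self.center = center          # input value this column responds to
        self.cells = [0] * n_cells    # one state slot per temporal context

    def add_cell(self):
        self.cells.append(0)


class ToySelfStructuredCLA:
    def __init__(self, resolution=0.1):
        self.resolution = resolution  # how close an input must be to reuse a column
        self.columns = []

    def _nearest(self, x):
        # Column whose center is closest to x, or None if no columns exist yet
        return min(self.columns, key=lambda c: abs(c.center - x), default=None)

    def observe(self, x, context_id):
        """Activate (or create) a column for x; grow cells for unseen contexts."""
        col = self._nearest(x)
        if col is None or abs(col.center - x) > self.resolution:
            col = Column(center=x)            # grow a new column for a novel input value
            self.columns.append(col)
        while len(col.cells) <= context_id:   # grow cells for a novel temporal context
            col.add_cell()
        col.cells[context_id] = 1             # mark this context as seen in this column
        return col


if __name__ == "__main__":
    cla = ToySelfStructuredCLA(resolution=0.05)
    for t in range(200):
        x = math.sin(2 * math.pi * t / 50)    # sine-wave input, as in the paper's tests
        cla.observe(x, context_id=t % 50)
    print("columns grown:", len(cla.columns))
    print("cells in first column:", len(cla.columns[0].cells))

Running this toy on a sine-wave input shows the column count settling at roughly the number of input values distinguishable at the chosen resolution, and the cell count tracking the number of temporal contexts encountered; this data-dependent sizing is what the self-structured CLA automates in place of user-defined column and cell counts.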

Cite this article as:
S. Suzugamine, T. Aoki, K. Takadama, and H. Sato, “Self-Structured Cortical Learning Algorithm by Dynamically Adjusting Columns and Cells,” J. Adv. Comput. Intell. Intell. Inform., Vol.24 No.2, pp. 185-198, 2020.
