
JACIII Vol.15 No.7 pp. 831-837
doi: 10.20965/jaciii.2011.p0831
(2011)

Paper:

Adaptive Nutrient Water Supply Control of Plant Factory System by Reinforcement Learning

Takumi Wakahara* and Sadayoshi Mikami**

*Graduate School, Future University Hakodate, 116-2 Kamedanakano-cho Hakodate, Hokkaido 041-8655, Japan

**Department of Complex and Intelligent Systems, Future University Hakodate, 116-2 Kamedanakano-cho Hakodate, Hokkaido 041-8655, Japan

Received:
March 5, 2011
Accepted:
May 9, 2011
Published:
September 20, 2011
Keywords:
reinforcement learning, intelligent control, plant growth, plant factory
Abstract
An adaptive nutrient control method for a plant factory is proposed. The method is based on Reinforcement Learning (RL), modified for a target in which the same state never recurs within a single episode and the reward arrives only after a very long delay. In applications such as plant growth control, one episode takes a very long time, so rapid convergence to a promising control solution is essential, yet extensive exploration is also needed because no precise model is usually available. A method like reinforcement learning is useful for problems without a reference model, but the need for exploration conflicts with the need for rapid convergence, so a new balancing method is required. In this research, an average reward distribution method is proposed; it is similar to the profit sharing method but has a stronger effect in finding promising solutions early, while still guaranteeing convergence to a rational solution in the long run. An experiment on a simple plant factory system shows that standard reinforcement learning is insufficient for this type of problem. Computer simulations show that the proposed method acquires a promising control policy at an early stage, compared with standard reinforcement learning and the profit sharing method.
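
The abstract does not reproduce the update rule itself. As an illustrative reading only, the following Python sketch shows one plausible form of an "average reward distribution" scheme: the single delayed episode reward is split evenly over every state-action pair visited in that episode, whereas profit sharing would typically weight the credit by a decaying function of the step. All class names, parameters, and the equal-split rule below are assumptions made for illustration, not the authors' formulation.

# Hedged sketch, not the paper's exact method. Assumes an episodic task where one
# delayed reward arrives at the end of the episode and no state repeats within it.
import random
from collections import defaultdict

class AverageRewardDistributionAgent:
    """Episodic learner that credits all visited (state, action) pairs equally."""

    def __init__(self, actions, epsilon=0.1, alpha=0.1):
        self.actions = actions            # available control actions (e.g., nutrient doses)
        self.epsilon = epsilon            # exploration rate
        self.alpha = alpha                # learning rate
        self.values = defaultdict(float)  # value estimate per (state, action)
        self.trajectory = []              # (state, action) pairs of the current episode

    def select_action(self, state):
        # Epsilon-greedy selection over the learned values.
        if random.random() < self.epsilon:
            action = random.choice(self.actions)
        else:
            action = max(self.actions, key=lambda a: self.values[(state, a)])
        self.trajectory.append((state, action))
        return action

    def end_episode(self, reward):
        # Distribute the delayed episode reward evenly over the whole trajectory.
        if not self.trajectory:
            return
        share = reward / len(self.trajectory)   # equal (averaged) credit per step
        for key in self.trajectory:
            self.values[key] += self.alpha * (share - self.values[key])
        self.trajectory.clear()

Under this reading, every visited pair receives the same share of the reward, so a promising early episode raises the value of all of its actions at once; this is consistent with the abstract's emphasis on obtaining a usable policy quickly when the reward is long delayed, though the exact balance against exploration is defined in the paper itself.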
Cite this article as:
T. Wakahara and S. Mikami, “Adaptive Nutrient Water Supply Control of Plant Factory System by Reinforcement Learning,” J. Adv. Comput. Intell. Intell. Inform., Vol.15 No.7, pp. 831-837, 2011.

