JACIII Vol.28 No.1 pp. 94-102 (2024)
doi: 10.20965/jaciii.2024.p0094

Research Paper:

Perceptual Features of Abstract Images for Metaphor Generation

Natsuki Yamamura*1, Junichi Chikazoe*2, Takaaki Yoshimoto*2, Koji Jimura*3, Norihiro Sadato*4, and Asuka Terai*5,†

*1Hokkaido NS Solutions Corporation
Nihon Seimei Kitamonkan Building 10F, 5-1-3 Kita Shijo Nishi, Chuo-ku, Sapporo-shi, Hokkaido 820-8502, Japan

*2Araya Inc.
Sanpo Sakuma Building 6F, 1-11 Kanda Sakuma, Chiyoda, Tokyo 101-0025, Japan

*3Gunma University
4-2 Aramaki-machi, Maebashi, Gunma 371-8510, Japan

*4Ritsumeikan University
1-1-1 Noji-Higashi, Kusatsu, Shiga 525-8577, Japan

*5Future University Hakodate
116-2 Kamedanakano, Hakodate, Hokkaido 041-8655, Japan

†Corresponding author

Received: May 20, 2023
Accepted: August 16, 2023
Published: January 20, 2024
Keywords: metaphor generation, convolutional neural network, object recognition, fine-tuning
Abstract

In this study, the roles of shape and color features in metaphor generation for abstract images were investigated through simulations using retrained convolutional neural network (CNN) models based on the pretrained CNN model, AlexNet. A computational experiment was conducted using five types of retrained object recognition models: an object recognition model using the cleaned ILSVRC-2012 training dataset, one to recognize more shape features using edge-detected images, one to recognize fewer shape features using blurred images, one to recognize fewer color features using grayscale images, and one to recognize only shape features using Canny edge-detected images. The metaphors generated for abstract images were collected from behavioral data obtained in a psychological experiment aimed at investigating the neural mechanisms of metaphor generation for abstract images. In the computational experiment, the simulation results of the five models for abstract images were compared to examine how well they predicted the objects used in the metaphors generated for abstract images in the psychological experiment. The edge-only model using Canny edge-detected images and the color-inhibited model using grayscale images exhibited better performance in metaphor recognition for abstract images than the control condition. This indicates that shape features play a more important role than color features in metaphor generation for abstract images. Furthermore, because Canny edge detection extracts only object outlines, which can be regarded as a caricaturization of the objects, caricatured images based on the shape features of the abstract images likely influence object recognition for metaphor generation.
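
The abstract describes the retraining procedure only at a high level; the paper's exact preprocessing parameters, framework, and training schedule are not reproduced here. As a rough illustration of how the five training-image variants and the AlexNet fine-tuning could be set up, the following Python sketch assumes OpenCV and PyTorch/torchvision; the Canny thresholds, blur kernel, edge-overlay method, and hyperparameters are illustrative assumptions rather than the authors' implementation.

    import cv2
    import torch
    import torch.nn as nn
    from torchvision import models

    def make_variant(image_bgr, variant):
        """Produce the preprocessed training image for one of the five models."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        if variant == "edge_enhanced":   # emphasize shape features (overlay method assumed)
            edges = cv2.cvtColor(cv2.Canny(gray, 100, 200), cv2.COLOR_GRAY2BGR)
            return cv2.addWeighted(image_bgr, 0.7, edges, 0.3, 0)
        if variant == "blurred":         # suppress shape features
            return cv2.GaussianBlur(image_bgr, (15, 15), 0)
        if variant == "grayscale":       # suppress color features
            return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
        if variant == "edge_only":       # keep only object outlines (Canny edges)
            return cv2.cvtColor(cv2.Canny(gray, 100, 200), cv2.COLOR_GRAY2BGR)
        return image_bgr                 # control: original cleaned ILSVRC-2012 image

    # Fine-tune the pretrained AlexNet on one preprocessed copy of the training set.
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    # Training loop omitted: iterate over (make_variant(image, v), label) batches and
    # update the weights; afterwards, feed each abstract image to the retrained model
    # and compare its top-ranked object classes with the objects named in the
    # human-generated metaphors.

Each of the five models would be obtained by repeating this fine-tuning with a different variant argument.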

Cite this article as:
N. Yamamura, J. Chikazoe, T. Yoshimoto, K. Jimura, N. Sadato, and A. Terai, “Perceptual Features of Abstract Images for Metaphor Generation,” J. Adv. Comput. Intell. Intell. Inform., Vol.28 No.1, pp. 94-102, 2024.
References
  [1] N. Hadjikhani, K. Kveraga, P. Naik, and S. P. Ahlfors, “Early (N170) activation of face-specific cortex by face-like objects,” Neuroreport, Vol.20, No.4, pp. 403-407, 2009. https://doi.org/10.1097/WNR.0b013e328325a8e1
  [2] B. F. Bowdle and D. Gentner, “The Career of Metaphor,” Psychological Review, Vol.112, No.1, pp. 193-216, 2005. https://doi.org/10.1037/0033-295X.112.1.193
  [3] G. M. Gottfried, “Comprehending compounds: Evidence for metaphoric skill?” J. of Child Language, Vol.24, No.1, pp. 163-186, 1997. https://doi.org/10.1017/S0305000996002942
  [4] K. A. Pierce and B. Gholson, “Surface similarity and relational similarity in the development of analogical problem solving: Isomorphic and nonisomorphic transfer,” Developmental Psychology, Vol.30, No.5, pp. 724-737, 1994. https://doi.org/10.1037/0012-1649.30.5.724
  [5] K. J. Holyoak and K. Koh, “Surface and structural similarity in analogical transfer,” Memory & Cognition, Vol.15, No.4, pp. 332-340, 1987. https://doi.org/10.3758/BF03197035
  [6] B. Indurkhya and S. Ogawa, “An Empirical Study on the Mechanisms of Creativity in Visual Arts,” Proc. of the Annual Meeting of the Cognitive Science Society, Vol.34, No.34, 2012.
  [7] B. Indurkhya, “Emergent representations, interaction theory and the cognitive force of metaphor,” New Ideas in Psychology, Vol.24, No.2, pp. 133-162, 2006. https://doi.org/10.1016/j.newideapsych.2006.07.004
  [8] B. Indurkhya and A. Ojha, “An Empirical Study on the Role of Perceptual Similarity in Visual Metaphors and Creativity,” Metaphor and Symbol, Vol.28, No.4, pp. 233-253, 2013. https://doi.org/10.1080/10926488.2013.826554
  [9] B. Indurkhya, K. Kattalay, A. Ojha, and P. Tandon, “Experiments with a Creativity-Support System based on Perceptual Similarity,” New Trends in Software Methodologies, Tools and Techniques Proc. of the Seventh SoMeT 2008, pp. 316-327, 2008. https://doi.org/10.3233/978-1-58603-916-5-316
  [10] K. Takahashi, T. Sasada, T. Funatomi, and S. Mori, “Generating Metaphors by a Shape Feature Dictionary,” IPSJ SIG Technical Report, Vol.2015-NL-223, No.14, 2015 (in Japanese).
  [11] H. Ozawa, H. Okamoto, and H. Saito, “Iro/Keijo Joho wo Mochiita Hiyu Seisei (Metaphor Generation Using Color/Shape Information),” Proc. of the 13th Annual Meeting of the Association for Natural Language Processing, 2007 (in Japanese).
  [12] J. Okamoto and S. Ishizaki, “Construction of associative concept dictionary with distance information, and comparison with electronic concept dictionary,” J. of Natural Language Processing, Vol.8, No.4, pp. 37-54, 2001 (in Japanese). https://doi.org/10.5715/jnlp.8.4_37
  [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, Vol.60, No.6, pp. 84-90, 2017. https://doi.org/10.1145/3065386
  [14] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014. https://doi.org/10.48550/arXiv.1409.1556
  [15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” 2016 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016. https://doi.org/10.1109/CVPR.2016.90
  [16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” Int. J. of Computer Vision, Vol.115, No.3, pp. 211-252, 2015. https://doi.org/10.1007/s11263-015-0816-y
  [17] A. Babenko, A. Slesarev, A. Chigorin, and V. Lempitsky, “Neural codes for image retrieval,” Proc. European Conf. on Computer Vision (ECCV-2014), pp. 584-599, 2014.
  [18] R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel, “ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness,” arXiv:1811.12231, 2018. https://doi.org/10.48550/arXiv.1811.12231
  [19] A. Terai, N. Yamamura, J. Chikazoe, T. Yoshimoto, N. Sadato, and K. Jimura, “On the role of shape features in metaphor generation for abstract images,” 2022 Joint 12th Int. Conf. on Soft Computing and Intelligent Systems and 23rd Int. Symp. on Advanced Intelligent Systems (SCIS&ISIS), 2022. https://doi.org/10.1109/SCISISIS55246.2022.10002047
  [20] L. Beyer, O. J. Hénaff, A. Kolesnikov, X. Zhai, and A. van den Oord, “Are we done with ImageNet?,” arXiv:2006.07159, 2020. https://doi.org/10.48550/arXiv.2006.07159
  [21] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” 2009 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR-2009), pp. 248-255, 2009. https://doi.org/10.1109/CVPR.2009.5206848
  [22] T. Asari, S. Konishi, K. Jimura, J. Chikazoe, N. Nakamura, and Y. Miyashita, “Right temporopolar activation associated with unique perception,” NeuroImage, Vol.41, No.1, pp. 145-152, 2008. https://doi.org/10.1016/j.neuroimage.2008.01.059
  [23] Shutterstock. https://www.shutterstock.com [Accessed October 16, 2017]
  [24] G. A. Miller, “WordNet: A lexical database for English,” Communications of the ACM, Vol.38, No.11, pp. 39-41, 1995. https://doi.org/10.1145/219717.219748
  [25] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, A. Kolesnikov, T. Duerig, and V. Ferrari, “The Open Images Dataset V4,” Int. J. of Computer Vision, Vol.128, No.7, pp. 1956-1981, 2020. https://doi.org/10.1007/s11263-020-01316-z
