IJAT Vol.16 No.6 pp. 807-813
doi: 10.20965/ijat.2022.p0807


Extraction and Design of Favorite Products Through Analyzing Customer Latent Preferences

Ryosui Koga and Hideki Aoyama

Keio University
3-14-1 Hiyoshi, Kohoku-ku, Yokohama, Kanagawa 223-8522, Japan

Corresponding author

March 19, 2022
September 28, 2022
November 5, 2022
Keywords: product design, latent preference, behavior observation, neural network

In the wake of rapid advances in design and production technologies, differentiating products by quality alone has become difficult. Against this backdrop, design has become an important factor in determining product value. Design is a creative activity shaped by the experience and sensitivity of designers, who must understand the preferences and needs of customers and reflect them in their designs. Accordingly, there is a need to determine customer preferences efficiently. Although customers' apparent preferences can be extracted through interviews and questionnaires, the responses obtained may be arbitrary. Moreover, given the recent diversification of customer preferences, understanding apparent preferences alone is not enough; latent preferences must also be extracted. Latent preferences, however, are vague and cannot be verbalized by the customers themselves, and no practical method for extracting them has yet been established. In this study, we propose a method for extracting latent customer preferences. Based on this method, we develop a system that recommends existing products a customer is likely to prefer, and a system that generates original product designs the customer is expected to prefer. We experimentally verify the usefulness of the proposed method.
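To make the idea concrete, the sketch below illustrates one way latent preferences could be estimated from behavior logs and used to rank products. Everything here is a hypothetical assumption for illustration: the feature names (dwell time, pickup count), the toy data, and the use of a simple logistic scorer in place of the neural network the paper actually employs.

```python
# Hypothetical sketch: ranking products by latent preference inferred
# from behavior logs. Features, data, and the logistic model are
# illustrative assumptions, not the authors' actual system.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.5, epochs=2000):
    """Fit weights w and bias b by stochastic gradient descent on log loss.
    samples: list of feature vectors, e.g. [dwell_time, pickup_count]."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def preference_score(x, w, b):
    """Predicted probability that the customer prefers a product."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy behavior logs: [normalized dwell time, normalized pickup count],
# labeled 1/0 by whether the customer later reported liking the product.
logs  = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
liked = [1, 1, 0, 0]
w, b = train_logistic(logs, liked)

# Rank unseen candidate products by predicted preference.
candidates = {"A": [0.85, 0.7], "B": [0.15, 0.3]}
ranked = sorted(candidates,
                key=lambda k: preference_score(candidates[k], w, b),
                reverse=True)
print(ranked)  # product "A" ranks first
```

In the paper's setting, the hand-picked features would be replaced by representations learned by a neural network, and the ranking step corresponds to the recommendation system; the generative-design system goes a step further and synthesizes new designs rather than scoring existing ones.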

Cite this article as:
R. Koga and H. Aoyama, “Extraction and Design of Favorite Products Through Analyzing Customer Latent Preferences,” Int. J. Automation Technol., Vol.16 No.6, pp. 807-813, 2022.

