
JACIII Vol.27 No.6 pp. 1151-1158
doi: 10.20965/jaciii.2023.p1151
(2023)

Research Paper:

Entity and Entity Type Composition Representation Learning for Knowledge Graph Completion

Runyu Ni, Hiroki Shibata, and Yasufumi Takama

Graduate School of Systems Design, Tokyo Metropolitan University
6-6 Asahigaoka, Hino, Tokyo 191-0065, Japan

Received: April 20, 2023
Accepted: July 20, 2023
Published: November 20, 2023
Keywords: representation learning, knowledge graph embedding, link prediction, entity types
Abstract

This paper proposes a simple knowledge graph embedding (KGE) framework that considers entity type information without requiring additional resources. KGE learns vector representations of entities and relations from the structured information in triples, and the obtained vectors are used to predict missing links in a knowledge graph (KG). Although many KGs contain entity type information, most existing methods ignore its potential for the link prediction task. The proposed framework, called entity and entity type composition representation learning (EETCRL), obtains vector representations of both entities and entity types, which are combined and used for link prediction. Experimental results on three datasets show that EETCRL outperforms the baseline methods in most cases. Furthermore, results obtained with different model sizes show that the proposed framework achieves high performance even with a small model. This paper also discusses the effect of considering entity type information on the link prediction task through an analysis of the experimental results.

Cite this article as:
R. Ni, H. Shibata, and Y. Takama, “Entity and Entity Type Composition Representation Learning for Knowledge Graph Completion,” J. Adv. Comput. Intell. Intell. Inform., Vol.27 No.6, pp. 1151-1158, 2023.
