JACIII Vol.25 No.4 pp. 442-449
doi: 10.20965/jaciii.2021.p0442


Improved Chinese Sentence Semantic Similarity Calculation Method Based on Multi-Feature Fusion

Liqi Liu, Qinglin Wang, and Yuan Li

School of Automation, Beijing Institute of Technology
5 South Zhongguancun Street, Haidian District, Beijing 100081, China


Received: December 27, 2018
Accepted: May 10, 2021
Published: July 20, 2021

Keywords: LSTM, semantic similarity, syntactic component, relative position embedding

In this paper, an improved long short-term memory (LSTM)-based deep neural network structure is proposed for learning the semantic similarity of variable-length Chinese sentences. The Siamese LSTM, a sequence-insensitive deep neural network model, has a limited ability to capture the semantics of natural language because it struggles to distinguish semantic differences that arise from differences in syntactic structure or word order within a sentence. Therefore, the proposed model integrates the syntactic component features of the words in a sentence into the word vector representation layer to express the syntactic structure of the sentence and the interdependence between words. Moreover, a relative position embedding layer is introduced into the model: the relative positions of the words in the sentence are mapped into a high-dimensional space to capture local positional information. The model uses a parallel structure to map two sentences into the same high-dimensional space and obtain fixed-length sentence vector representations. After aggregation, the sentence similarity is computed in the output layer. Experiments on Chinese sentences show that the model achieves good results in semantic similarity calculation.
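The architecture described in the abstract, fusing word, syntactic-component, and relative-position embeddings before a weight-shared (Siamese) LSTM encoder, can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the toy vocabulary, the tag set, the embedding sizes, the use of the time index as a stand-in for relative position, and the Manhattan-distance similarity exp(-||h1 - h2||_1) (the form used in the Siamese LSTM of Mueller and Thyagarajan) are not taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy inputs (illustrative only; the paper uses pretrained word
# vectors and syntactic components from a Chinese parser).
VOCAB = {"我": 0, "喜欢": 1, "音乐": 2, "他": 3}
SYN_TAGS = {"SBJ": 0, "PRED": 1, "OBJ": 2}   # syntactic component per word
MAX_REL_POS = 8                               # positions mapped to embeddings

D_WORD, D_SYN, D_POS, D_HID = 8, 4, 4, 16
E_word = rng.normal(size=(len(VOCAB), D_WORD))
E_syn = rng.normal(size=(len(SYN_TAGS), D_SYN))
E_pos = rng.normal(size=(MAX_REL_POS, D_POS))

D_IN = D_WORD + D_SYN + D_POS
# One set of LSTM weights shared by both branches (Siamese weight tying).
W = rng.normal(scale=0.1, size=(4 * D_HID, D_IN + D_HID))
b = np.zeros(4 * D_HID)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(word_ids, syn_ids):
    """Fuse the three embeddings per word, then run an LSTM and
    return the final hidden state as the sentence vector."""
    h = np.zeros(D_HID)
    c = np.zeros(D_HID)
    for t, (w_id, s_id) in enumerate(zip(word_ids, syn_ids)):
        # Multi-feature fusion: concatenate word, syntactic-component,
        # and (approximated) relative-position embeddings.
        x = np.concatenate([E_word[w_id], E_syn[s_id], E_pos[t % MAX_REL_POS]])
        z = W @ np.concatenate([x, h]) + b
        i = sigmoid(z[:D_HID])                 # input gate
        f = sigmoid(z[D_HID:2 * D_HID])        # forget gate
        o = sigmoid(z[2 * D_HID:3 * D_HID])    # output gate
        g = np.tanh(z[3 * D_HID:])             # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def similarity(s1, t1, s2, t2):
    """Manhattan-distance similarity exp(-||h1 - h2||_1), in (0, 1]."""
    h1, h2 = encode(s1, t1), encode(s2, t2)
    return float(np.exp(-np.abs(h1 - h2).sum()))

# Identical sentences score 1.0; changing the subject lowers the score.
sim_same = similarity([0, 1, 2], [0, 1, 2], [0, 1, 2], [0, 1, 2])
sim_diff = similarity([0, 1, 2], [0, 1, 2], [3, 1, 2], [0, 1, 2])
print(sim_same, sim_diff)
```

Because both branches share the same weights, the two sentence vectors live in the same space and their distance is directly comparable; in the actual model the parameters would be trained end-to-end on labeled sentence pairs rather than drawn at random.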

Cite this article as:
L. Liu, Q. Wang, and Y. Li, “Improved Chinese Sentence Semantic Similarity Calculation Method Based on Multi-Feature Fusion,” J. Adv. Comput. Intell. Intell. Inform., Vol.25 No.4, pp. 442-449, 2021.

