JACIII Vol.27 No.3 pp. 421-430
doi: 10.20965/jaciii.2023.p0421

Research Paper:

Causality Extraction Cascade Model Based on Dual Labeling

Fengxiao Yan, Bo Shen, and Chenyang Dai

Key Laboratory of Communication and Information Systems, Beijing Municipal Commission of Education, Beijing Jiaotong University
3 Shangyuancun, Haidian District, Beijing 100044, China


Received: June 1, 2022
Accepted: January 17, 2023
Published: May 20, 2023

Keywords: causality extraction, named entity recognition, BiLSTM, ACNN

Causal relation extraction is a crucial task in natural language processing. Existing extraction methods suffer from low accuracy in dividing causal events and from incorrect extraction of important semantic features. This study combines the bidirectional long short-term memory (BiLSTM) and attentive convolutional neural network (ACNN) models to construct a cascaded causal relationship extraction model that improves extraction precision. The model uses two kinds of labels: it first determines the relationship between the preceding and following causal events and then divides the causal-event boundaries. It automatically learns semantic features from sentences, reducing the dependence on external knowledge and improving extraction precision. The experimental results demonstrate that the precision of causality extraction reaches 81.67% and the F1 score reaches 83.2%.
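The dual-labeling idea summarized above can be illustrated with a small sketch: one label layer marks causal-event boundaries with BIO-style tags, which are then decoded into cause and effect spans. The tag names and the decoding helper below are assumptions for illustration only, not the paper's exact scheme.

```python
def decode_spans(tokens, tags):
    """Group BIO-tagged tokens into (type, span-text) pairs.

    Tags follow the common IOB2 convention: "B-X" opens a span of
    type X, "I-X" continues it, and "O" marks tokens outside any span.
    """
    spans, cur, cur_type = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur:  # close any span still open
                spans.append((cur_type, " ".join(cur)))
            cur, cur_type = [tok], tag[2:]
        elif tag.startswith("I-") and cur_type == tag[2:]:
            cur.append(tok)  # continue the current span
        else:
            if cur:
                spans.append((cur_type, " ".join(cur)))
            cur, cur_type = [], None
    if cur:  # flush a span that runs to the end of the sentence
        spans.append((cur_type, " ".join(cur)))
    return spans


# Hypothetical output of the boundary-labeling stage for one sentence:
tokens = ["Heavy", "rain", "caused", "severe", "flooding"]
tags = ["B-Cause", "I-Cause", "O", "B-Effect", "I-Effect"]
print(decode_spans(tokens, tags))
# → [('Cause', 'Heavy rain'), ('Effect', 'severe flooding')]
```

In the cascade described in the abstract, a second label (the relation between the preceding and following events) would then be assigned to each extracted cause–effect pair; here only the span-decoding step is shown.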

Cite this article as:
F. Yan, B. Shen, and C. Dai, “Causality Extraction Cascade Model Based on Dual Labeling,” J. Adv. Comput. Intell. Intell. Inform., Vol.27 No.3, pp. 421-430, 2023.
