Research Paper:
A CNN-BiGRU-GNN Model for Sentiment Analysis of Complex English Texts
Na Lin
School of Humanities and International Education, Xi’an Peihua University
No.888 Changning Street, Chang’an, Xi’an, Shaanxi 710125, China
Current deep-learning approaches to English literary analysis often suffer from low accuracy, information loss, and semantic ambiguity. This study therefore introduces a novel model combining convolutional neural networks (CNNs), bidirectional gated recurrent units (BiGRUs), and graph neural networks (GNNs). The CNN component efficiently extracts local features from the input text, while the BiGRU layer captures bidirectional dependencies, enabling a more comprehensive understanding of the textual context. The GNN component models global dependencies within the text, which is crucial for tasks such as sentiment analysis and topic recognition. To further enhance feature extraction, Bidirectional Encoder Representations from Transformers (BERT) is integrated into the model, leveraging its deep contextual representations. For sequence labeling tasks, conditional random fields (CRFs) are used to improve prediction accuracy by capturing interdependencies between labels. The GNN hyperparameters are tuned with the particle swarm optimization (PSO) algorithm, ensuring the model is fine-tuned for better performance. Experimental evaluation on the Microsoft Academic Research Corpus dataset demonstrates the efficacy of the proposed model. Compared with baseline models such as BERT-BiLSTM, BERT-CNN-BiGRU, and BERT-CNN-BiLSTM-ATT, the CNN-BiGRU-GNN model improves accuracy by 4.57%, 4.11%, and 3.01%, respectively. These results highlight the model's ability to effectively address the complexities of English literary analysis.
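To give a concrete picture of the pipeline, the following is a minimal PyTorch sketch of the CNN-BiGRU-GNN classifier outlined above. The layer sizes, the single dense graph-convolution step, the use of precomputed BERT token embeddings as input, and the mean-pooled classification head are illustrative assumptions rather than the exact implementation reported in the paper; the CRF decoding and the PSO hyperparameter search are omitted.

```python
# Minimal sketch of a CNN-BiGRU-GNN sentiment classifier.
# Layer sizes, the dense-adjacency graph convolution, and the use of
# precomputed BERT embeddings as input are assumptions for illustration.
import torch
import torch.nn as nn


class CNNBiGRUGNN(nn.Module):
    def __init__(self, bert_dim=768, conv_channels=128, gru_hidden=128,
                 gnn_hidden=128, num_classes=2):
        super().__init__()
        # CNN: local n-gram feature extraction over BERT token embeddings
        self.conv = nn.Conv1d(bert_dim, conv_channels, kernel_size=3, padding=1)
        # BiGRU: bidirectional context over the convolved sequence
        self.bigru = nn.GRU(conv_channels, gru_hidden, batch_first=True,
                            bidirectional=True)
        # GNN: one graph-convolution step (normalized A @ X @ W) over a
        # token-level adjacency matrix, e.g. from dependency parses
        self.gnn_w = nn.Linear(2 * gru_hidden, gnn_hidden)
        self.classifier = nn.Linear(gnn_hidden, num_classes)

    def forward(self, bert_embeddings, adjacency):
        # bert_embeddings: (B, L, bert_dim); adjacency: (B, L, L)
        x = self.conv(bert_embeddings.transpose(1, 2)).transpose(1, 2)  # (B, L, C)
        x, _ = self.bigru(torch.relu(x))                                # (B, L, 2H)
        # Row-normalize the adjacency so each node averages its neighbors
        deg = adjacency.sum(dim=-1, keepdim=True).clamp(min=1.0)
        x = torch.relu(self.gnn_w((adjacency / deg) @ x))               # (B, L, G)
        # Mean-pool node states into a document vector, then classify
        return self.classifier(x.mean(dim=1))                           # (B, num_classes)


if __name__ == "__main__":
    model = CNNBiGRUGNN()
    emb = torch.randn(2, 16, 768)          # stand-in for BERT outputs
    adj = torch.eye(16).expand(2, -1, -1)  # stand-in for a text graph
    print(model(emb, adj).shape)           # torch.Size([2, 2])
```

In this sketch the graph structure is supplied externally; in practice it would be derived from the text (for example, syntactic dependencies or word co-occurrence), and PSO would search over settings such as the hidden sizes of the graph layer.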
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.