
JACIII Vol.29 No.6, pp. 1552-1564 (2025)
doi: 10.20965/jaciii.2025.p1552

Research Paper:

On User’s Reception of Local Explanation: An Argumentation Analysis

Nguyen Duy Hung*, Thanaruk Theeramunkong*, and Van-Nam Huynh**

*Sirindhorn International Institute of Technology, Thammasat University
99 Moo 18, Km. 41 on Paholyothin Highway, Khlong Luang, Pathum Thani 12120, Thailand

**Japan Advanced Institute of Science and Technology
1-1 Asahidai, Nomi, Ishikawa 923-1292, Japan

Received: April 9, 2025
Accepted: August 7, 2025
Published: November 20, 2025
Keywords: local explanation, analogical arguments, user-centered XAI, argumentation analysis
Abstract

A local explanation method (LE) in explainable artificial intelligence (XAI) is essentially a two-step procedure: first construct a naively explainable model that approximates the black-box model in need of explanation; then extract an explanation from that approximate model. Since an expert user knows that the extracted explanation is meant to be merely analogous to the target/ideal explanation, the user must rely on analogical arguments to transfer properties observed in the former to the latter. In this paper, assuming an expert user whose knowledge satisfies certain conjectures, we reconstruct the “reason, therefore conclusion” structures of these analogical arguments and study the conditions under which the reason is true, the conditions under which the conclusion follows necessarily from the reason, and the counter-arguments the user has to consider. We argue that these findings shed light on the internal reasoning of an expert user at the end of a user-LE dialogue. More broadly, the paper suggests a promising direction for extending existing explanation methods, which are system-centered (focusing on generating explanations), toward user-centered XAI, which must also attend to how users receive explanations.
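
The two-step procedure described above can be made concrete with a small sketch. The following Python example is only illustrative, in the spirit of surrogate-based methods such as LIME; the function name, the Gaussian neighborhood sampling, and the ridge-regression surrogate are assumptions of this sketch, not the method analyzed in the paper.

import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(black_box_predict, x, n_samples=500, sigma=0.5, seed=0):
    """Explain black_box_predict at instance x via a local linear surrogate."""
    rng = np.random.default_rng(seed)
    # Step 1a: sample a neighborhood around the instance being explained.
    X_local = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    # Step 1b: fit the naively explainable model to the black box's answers.
    surrogate = Ridge(alpha=1.0).fit(X_local, black_box_predict(X_local))
    # Step 2: extract the explanation, here the surrogate's feature weights.
    return surrogate.coef_

# Toy nonlinear "black box" and a query instance.
f = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2
print(local_explanation(f, np.array([0.0, 1.0])))

Note that the returned weights explain the surrogate, not the black box itself; they are at best analogous to the ideal explanation, which is precisely the gap the paper's analogical arguments are meant to bridge.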

Cite this article as:
N. D. Hung, T. Theeramunkong, and V.-N. Huynh, “On User’s Reception of Local Explanation: An Argumentation Analysis,” J. Adv. Comput. Intell. Intell. Inform., Vol.29, No.6, pp. 1552-1564, 2025.

