Enhancing Transparency in Recommender Systems: An Explainable AI Approach Using MovieLens
DOI: https://doi.org/10.70062/globalscience.v1i4.190

Keywords: Explainable AI, Recommender Systems, Context Awareness, MovieLens, User Transparency

Abstract
Recommender systems play a critical role in shaping user decisions across digital platforms; however, the increasing complexity of recommendation algorithms has raised serious concerns regarding transparency, trust, and accountability. This study focuses on enhancing the transparency of recommender systems by integrating Explainable Artificial Intelligence (XAI) techniques within a MovieLens-based recommendation framework. The primary problem addressed is the opacity of conventional recommendation models, which limits user understanding of why certain items are recommended and may reduce trust, perceived fairness, and system acceptance. Accordingly, the main objective of this research is to design and evaluate a hybrid explainable recommender system that balances predictive accuracy with human-understandable explanations. The proposed approach combines Matrix Factorization, feature-importance-aware neural networks, and knowledge graph embeddings to construct a robust recommendation model. To enhance explainability, multiple XAI strategies are integrated, including model-agnostic methods (LIME, SHAP, and CLIME), argumentation-based explanations, and context-aware personalized explanations. A comprehensive evaluation framework is employed, incorporating algorithmic metrics (accuracy, fidelity, robustness, counterfactual consistency, and fairness) alongside human-centered evaluations measuring trust, transparency, cognitive load, and perceived usefulness. Experimental results demonstrate that the knowledge graph–enhanced hybrid model achieves superior recommendation accuracy compared to baseline approaches. Moreover, context-aware explanations consistently outperform other methods in terms of fidelity, robustness, and user-perceived transparency, while argumentation-based explanations are found to be the most persuasive. CLIME offers a strong balance between technical stability and interpretability. 
The findings indicate that no single explainability technique is universally optimal; instead, hybrid and adaptive explanation strategies are most effective. In conclusion, this study confirms that human-centered, context-adaptive XAI significantly improves transparency and user trust in recommender systems, highlighting explainability as a fundamental component rather than an optional enhancement.
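The core of the pipeline described above, matrix factorization for prediction plus a model-agnostic, perturbation-based local explanation in the spirit of LIME, can be sketched in a few lines. This is a minimal illustration under assumed settings (tiny synthetic ratings, 3 latent factors, SGD hyperparameters chosen for the toy example), not the paper's actual implementation:

```python
import numpy as np

def train_mf(R, mask, k=3, lr=0.01, reg=0.02, epochs=200, seed=0):
    """Factorize rating matrix R ~ P @ Q.T via SGD on observed entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    users, items = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

def perturbation_importance(P, Q, u, i, n_samples=500, seed=0):
    """LIME-style sketch: perturb the user's latent features around P[u],
    query the (black-box) predictor, fit a linear surrogate by least
    squares, and read local feature importance off its coefficients."""
    rng = np.random.default_rng(seed)
    Z = P[u] + rng.normal(scale=0.1, size=(n_samples, P.shape[1]))
    y = Z @ Q[i]                            # black-box predictions
    X = np.column_stack([Z, np.ones(n_samples)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[:-1]                        # per-feature local weights

# Tiny synthetic example (5 users x 4 items; zeros mark missing ratings).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0
P, Q = train_mf(R, mask)
pred = P[0] @ Q[2]                          # predict an unobserved cell
weights = perturbation_importance(P, Q, u=0, i=2)
print(f"predicted rating: {pred:.2f}")
print("local feature weights:", np.round(weights, 3))
```

Because the factorization model is linear in the user's latent features, the surrogate here recovers the item vector exactly; with a nonlinear predictor (such as the feature-importance-aware neural network the study employs), the same perturb-and-fit procedure yields an approximate local explanation instead.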
License
Copyright (c) 2026 Global Science: Journal of Information Technology and Computer Science

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

