Title
Cheruvalath, S. S., Laporte, M., Bombassei De Bona, F., Hassan, T., & Gjoreski, M. (2025, October). Generating Explanations for Models Predicting Student Exam Performance. In Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing (pp. 1679-1684).
Abstract
Understanding and improving student performance is a central concern in education, and predictive models can provide valuable insights—provided their decisions are transparent and explainable. However, many machine learning (ML) models used for this purpose lack interpretability, limiting their practical utility. In this work, we apply two complementary eXplainable AI (XAI) methods, SHapley Additive exPlanations (SHAP) and the Bayesian Counterfactual Generator (BayCon), to explain the predictions of an ML model trained on multimodal data to forecast student exam outcomes. SHAP is used to identify and visualise the most influential features contributing to each prediction, while BayCon generates actionable counterfactuals. These counterfactuals are then converted into natural language using a large language model (LLM), making them easier to understand. Our approach is designed to support both students and educators by offering clear, personalised insights into the factors affecting academic performance. We present the explanation generation pipeline and discuss its interpretability and potential benefits for educational contexts.
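To illustrate the attribution idea behind SHAP referenced in the abstract, the following is a minimal sketch (not the authors' code or the `shap` library itself): it computes exact Shapley values by brute force over feature coalitions for a toy exam-score model. The model, feature names, and values are all hypothetical placeholders.

```python
# Hedged sketch: brute-force Shapley values for a toy exam-score model.
# All feature names, values, and the model itself are hypothetical.
from itertools import combinations
from math import factorial

# Hypothetical student instance and a reference ("baseline") student.
features = {"study_hours": 8.0, "sleep_hours": 6.0, "attendance": 0.9}
baseline = {"study_hours": 4.0, "sleep_hours": 7.0, "attendance": 0.7}

def model(x):
    # Toy score model: weighted sum plus one interaction term.
    return (5 * x["study_hours"] + 2 * x["sleep_hours"]
            + 30 * x["attendance"]
            + 0.5 * x["study_hours"] * x["attendance"])

names = list(features)
n = len(names)

def value(subset):
    # Evaluate the model with features in `subset` taken from the
    # instance and all remaining features held at baseline values.
    x = {f: (features[f] if f in subset else baseline[f]) for f in names}
    return model(x)

# Shapley value of each feature: weighted average of its marginal
# contribution over all coalitions of the other features.
shapley = {}
for f in names:
    others = [g for g in names if g != f]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += w * (value(set(S) | {f}) - value(set(S)))
    shapley[f] = phi

# Efficiency property: attributions sum to prediction minus baseline.
assert abs(sum(shapley.values()) - (model(features) - model(baseline))) < 1e-9
```

In practice the paper's pipeline would use an optimized estimator (such as the `shap` package) rather than this exponential enumeration, which is only tractable for a handful of features; the brute-force form is shown purely to make the coalition-weighting explicit.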