In recent years, Artificial Intelligence (AI) solutions for Modulation and Coding Scheme (MCS) selection have predominantly been black-box models, whose limited interpretability hinders trust in these algorithms. Moreover, most existing eXplainable AI (XAI) research emphasizes explainability without concurrently improving model performance, treating performance and interpretability as a tradeoff. This paper addresses these issues by employing counterfactual and causal analysis to increase the interpretability and trustworthiness of black-box models. In particular, we propose CounterFactual Retrain (CF-Retrain), the first algorithm that uses counterfactual explanations to improve model performance while making the process of performance enhancement more interpretable. Additionally, we conduct a causal analysis and compare the results with those of an analysis based on SHapley Additive exPlanations (SHAP) value feature importance. This comparison yields novel hypotheses and insights for model optimization in future research.
Part of ISBN 9798350370218
QC 20240709