In recent years, artificial intelligence (AI) has made great progress. However, despite impressive results, modern data-driven AI systems are often very difficult to understand, which complicates their use in the software business and has prompted the emergence of the explainable AI (XAI) field. This paper explores how machine learning (ML) students explain their models and draws implications for practice. Data was collected from ML master's students, who were given a two-part assignment. First, they developed a model predicting insurance claims based on an existing data set; then they received a request for an explanation of insurance premiums in accordance with the GDPR right to meaningful information and had to produce such an explanation. The students also peer-graded each other's explanations. Analyzing this data set and comparing it to responses from actual insurance firms in a previous study illustrates some potential pitfalls, namely a narrow technical focus and the offering of mere data dumps. There were also some promising directions, such as feature importance, graphics, and what-if scenarios, where software business practice could benefit from taking inspiration from the students. The paper concludes with a reflection on the importance of multiple kinds of expertise and team efforts for making the most of XAI in practice.