How Do ML Students Explain Their Models and What Can We Learn from This?
Franke, Ulrik
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden, 164 29, Kista, Sweden. ORCID iD: 0000-0003-2017-7914
2025 (English). In: Software Business - 15th International Conference, ICSOB 2024, Proceedings, Springer Nature, 2025, p. 351-365. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, artificial intelligence (AI) has made great progress. However, despite impressive results, modern data-driven AI systems are often very difficult to understand, challenging their use in software business and prompting the emergence of the explainable AI (XAI) field. This paper explores how machine learning (ML) students explain their models and draws implications for practice. Data was collected from ML master's students, who were given a two-part assignment. First, they developed a model predicting insurance claims based on an existing data set; then they received a request for an explanation of insurance premiums in accordance with the GDPR right to meaningful information and had to produce such an explanation. The students also peer-graded each other's explanations. Analyzing this data set and comparing it to responses from actual insurance firms in a previous study illustrates some potential pitfalls, such as a narrow technical focus and offering mere data dumps. There were also some promising directions, namely feature importance, graphics, and what-if scenarios, where software business practice could benefit from being inspired by the students. The paper concludes with a reflection on the importance of multiple kinds of expertise and team efforts for making the most of XAI in practice.

Place, publisher, year, edition, pages
Springer Nature, 2025, p. 351-365
Keywords [en]
experiment, explainable AI, GDPR, insurance
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-362218
DOI: 10.1007/978-3-031-85849-9_28
ISI: 001476891400026
Scopus ID: 2-s2.0-105001269309
OAI: oai:DiVA.org:kth-362218
DiVA, id: diva2:1951012
Conference
15th International Conference on Software Business, ICSOB 2024, Utrecht, the Netherlands, November 18-20, 2024
Note

QC 20250414

Available from: 2025-04-09. Created: 2025-04-09. Last updated: 2025-07-01. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Franke, Ulrik

