Investigating the Contribution of Privileged Information in Knowledge Transfer LUPI by Explainable Machine Learning
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-4446-2800
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0001-8382-0300
2023 (English). In: Proceedings of the 12th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2023, ML Research Press, 2023, p. 470-484. Conference paper, Published paper (Refereed).
Abstract [en]

Learning Under Privileged Information (LUPI) is a framework that exploits information that is available during training only, i.e., the privileged information (PI), to improve the classification of objects for which this information is not available. Knowledge transfer LUPI (KT-LUPI) extends the framework by inferring PI for the test objects through separate predictive models. Although the effectiveness of the framework has been thoroughly demonstrated, current investigations have provided only limited insights regarding what parts of the transferred PI contribute to the improved performance. A better understanding of this could not only lead to computational savings but potentially also to novel strategies for exploiting PI. We approach the problem by exploring the use of explainable machine learning, through the state-of-the-art technique SHAP, to analyze the contribution of the transferred privileged information. We present results from experiments with five classification and three regression datasets, in which we compare the Shapley values of the PI computed in two different settings: one where the PI is assumed to be available during both training and testing, hence representing an ideal scenario, and a second setting in which the PI is available during training only but is transferred to test objects through KT-LUPI. The results indicate that explainable machine learning indeed has potential as a tool to gain insights regarding the effectiveness of KT-LUPI.
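
The comparison described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' code: the synthetic dataset, the split into regular and privileged columns, the gradient-boosting models, and the use of the shap package's TreeExplainer are all illustrative assumptions.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: the first 8 columns play the role of regular
# features (X), the last 2 the role of privileged information (PI).
X_all, y = make_classification(n_samples=1000, n_features=10,
                               n_informative=6, random_state=0)
X, PI = X_all[:, :8], X_all[:, 8:]
X_tr, X_te, PI_tr, PI_te, y_tr, y_te = train_test_split(
    X, PI, y, test_size=0.3, random_state=0)

# Knowledge transfer (KT-LUPI): one regressor per privileged feature,
# trained to predict that feature from the regular features and used
# to infer PI for the test objects.
kt_models = [GradientBoostingRegressor(random_state=0).fit(X_tr, PI_tr[:, j])
             for j in range(PI_tr.shape[1])]
PI_te_hat = np.column_stack([m.predict(X_te) for m in kt_models])

# Classifier trained on regular + privileged features.
clf = GradientBoostingClassifier(random_state=0).fit(
    np.hstack([X_tr, PI_tr]), y_tr)
explainer = shap.TreeExplainer(clf)

# Ideal setting: true PI assumed available for the test objects.
shap_ideal = explainer.shap_values(np.hstack([X_te, PI_te]))
# KT-LUPI setting: PI replaced by the transferred (predicted) values.
shap_kt = explainer.shap_values(np.hstack([X_te, PI_te_hat]))

# Compare mean absolute Shapley values of the privileged columns.
pi_cols = slice(X.shape[1], X.shape[1] + PI.shape[1])
print("mean |SHAP| of PI, ideal:  ", np.abs(shap_ideal[:, pi_cols]).mean(axis=0))
print("mean |SHAP| of PI, KT-LUPI:", np.abs(shap_kt[:, pi_cols]).mean(axis=0))

In a sketch like this, similar mean absolute Shapley values for the PI columns in the two settings would suggest that the transferred PI carries much of the contribution of the true PI, which is the kind of insight the paper investigates.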

Place, publisher, year, edition, pages
ML Research Press, 2023, p. 470-484.
Keywords [en]
Knowledge Transfer, Learning Under Privileged Information, Shapley Value
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-340793
ISI: 001221733900032
Scopus ID: 2-s2.0-85178660180
OAI: oai:DiVA.org:kth-340793
DiVA, id: diva2:1819815
Conference
12th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2023, Limassol, Cyprus, September 13-15, 2023
Note

QC 20231215

Available from: 2023-12-15. Created: 2023-12-15. Last updated: 2024-07-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Gauraha, Niharika; Boström, Henrik

