Transparent but incomprehensible: Investigating the relation between transparency, explanations, and usability in automated decision-making
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. ORCID iD: 0000-0003-0738-2737
2022 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Transparency is almost always seen as a desirable state of affairs. Governments should be more transparent towards their citizens, and corporations should be more transparent towards both public authorities and their customers. More transparency means more information that citizens can use to make decisions about their daily lives, and with increasing amounts of information in society, those citizens would be able to make more and more choices that align with their preferences. The story is just slightly too good to be true. Instead, citizens are skeptical towards increased data collection, demand stricter transparency requirements, and seem to lack both the time and the ability to properly engage with all the information available.

In this thesis, the relation between transparency, explanations and usability is investigated within the context of automated decision-making. Aside from showing the benefits that transparency can have, the thesis presents a wide array of problems with transparency and shows how transparency can be harder to accomplish than most assume. It explores explanations, which often make up the transparency, and their limitations; developments in automation and algorithmic decision-making; and how society tends to regulate such phenomena. It then applies these frameworks to investigate how human-computer interaction in general, and usability in particular, can help transparency deliver the many benefits it promises.

Four papers are presented that study the topic from various perspectives. Paper I looks at how governments give guidance on achieving competitive advantages with ethical AI, while Paper II studies how insurance professionals view the benefits and limitations of transparency. Papers III and IV both study transparency in practice through requests for information under the GDPR. While Paper III provides a comparative study of GDPR implementation in five countries, Paper IV shows and explores how transparency can fail, and considers why.

The thesis concludes by showing that while transparency does indeed have many benefits, it also has limitations. Companies and other actors need to be aware that transparency is sometimes simply not the right solution, and that explanations have limitations both for machines and for humans. Transparency as a tool can reach certain goals, but good transparency requires good strategies, active choices and an awareness of what users need.

Abstract [sv]

Transparency is usually seen as a desirable quality. The public sector should be transparent towards the citizen, and companies should be transparent towards authorities as well as customers. More transparency means that more information is available to the citizen, so that she can make her own, active choices in life, and as more and more information becomes available in society, the citizen can also make more choices that align with her preferences. Unfortunately, this is a story that is too good to be true. More often, the citizen seems to be skeptical towards increased data collection, she wants both the state and companies to become more transparent, and she lacks both the time and the skills to truly understand all the information available around her.

This thesis investigates the relation between transparency, explanations and usability, with a focus on how these phenomena play out in the context of automated decisions and algorithms. Besides showing the advantages of transparency, the thesis presents a range of problems with transparency and shows how transparency can be harder to put into practice than many assume. It explores explanations, which transparency often consists of, and their limitations; developments within automation and algorithmic decision-making; and how society tends to regulate such phenomena. The thesis then uses these models and concepts to investigate how human-computer interaction in general, and usability in particular, can be used to improve transparency and realize its promised benefits.

Four studies are presented that examine the topic from different perspectives. Paper I examines how governments and public authorities use AI strategies to achieve the competitive advantages of ethically sustainable AI, while Paper II studies how insurance professionals view the advantages and disadvantages of transparency. Papers III and IV both examine transparency in practice by requesting explanations of automated decisions, based on rights in the GDPR. While Paper III compares implementation in five different countries, Paper IV instead shows how transparency can fail and tries to explain why.

The thesis concludes by showing that even though transparency has many advantages, it also has limitations. Companies and other actors must be aware that transparency may not always be the right solution, and that explanations also have limited effect in machines as well as in humans. Transparency is a tool that can be used to reach certain goals, but good transparency requires good strategies, active choices and an awareness of what the user wants.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2022, p. 97
Series
TRITA-EECS-AVL ; 2022:44
Keywords [en]
Transparency, explanations, algorithms, automated decision-making, AI, HCI
National subject category
Human-Computer Interaction (Interaction Design)
Research subject
Human-Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-313825, ISBN: 978-91-8040-294-1 (printed), OAI: oai:DiVA.org:kth-313825, DiVA id: diva2:1667908
Public defence
2022-09-16, F3, Lindstedtsvägen 26, Stockholm, 13:30 (English)
Opponent
Supervisors
Note

QC 20220613

Available from: 2022-06-13 Created: 2022-06-11 Last updated: 2022-10-04 Bibliographically approved
List of papers
1. Nordic lights? National AI policies for doing well by doing good
2020 (English) In: Journal of Cyber Policy, ISSN 2373-8871, Vol. 5, pp. 332-349. Article in journal (Refereed) Published
Abstract [en]

Getting ahead on the global stage of AI technologies requires vast resources or novel approaches. The Nordic countries have tried to find a novel path, claiming that responsible and ethical AI is not only morally right but confers a competitive advantage. In this article, eight official AI policy documents from Denmark, Finland, Norway and Sweden are analysed according to the AI4People taxonomy, which proposes five ethical principles for AI: beneficence, non-maleficence, autonomy, justice and explicability. The principles are described in terms such as growth, innovation, efficiency gains, cybersecurity, malicious use or misuse of AI systems, data use, effects on labour markets, and regulatory environments. The authors also analyse how the strategies describe the link between ethical principles and a competitive advantage, and what measures are proposed to facilitate that link. Links such as a first-mover advantage and measures such as influencing international standards and regulations are identified. The article concludes by showing that while ethical principles are present, neither the ethical principles nor the links and measures are made explicit in the policy documents.

Place, publisher, year, edition, pages
Taylor & Francis, 2020
Keywords
National strategies; artificial intelligence; ethics; competition; AI governance
National subject category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-295000 (URN), 10.1080/23738871.2020.1856160 (DOI)
Project
Transparenta algoritmer i försäkringsbranschen (TALFÖR)
Research funder
Länsförsäkringar AB, P4/18
Note

QC 20220620

Available from: 2021-05-18 Created: 2021-05-18 Last updated: 2022-06-25 Bibliographically approved
2. Transparency and insurance professionals: a study of Swedish insurance practice attitudes and future development
2021 (English) In: The Geneva Papers on Risk and Insurance - Issues and Practice, ISSN 1018-5895, E-ISSN 1468-0440, Vol. 46, no. 4, pp. 547-572. Article in journal (Refereed) Published
Abstract [en]

The insurance industry is being challenged by increased adoption of automated decision-making. AI advances could conceivably automate everything: marketing, customer service, underwriting and claims management alike. However, such automation challenges consumer trust, as there is considerable public and scholarly debate over the ‘black box’ character of many algorithms. Insurance being a business of trust, this suggests a dilemma. One suggested solution involves adopting algorithms in a transparent manner. This article reports a study of how Swedish insurers deal with this dilemma, based on (i) eight interviews with insurance professionals representing four companies with a joint market share of 45–50% of the Swedish property insurance market and (ii) a questionnaire answered by 71 professionals in a Swedish insurance company. The results show that while transparency is seen as potentially valuable, most Swedish insurers do not use it to gain a competitive advantage, do not identify clear limits to transparency, and are not using AI extensively.

Place, publisher, year, edition, pages
Springer Nature, 2021
Keywords
Transparency, Openness, Trust, Insurance, Competitive advantage, Sweden
National subject category
Interaction Technologies
Identifiers
urn:nbn:se:kth:diva-312959 (URN), 10.1057/s41288-021-00207-9 (DOI), 000625812500001 (), 33686323 (PubMedID), 2-s2.0-85118075190 (Scopus ID)
Note

QC 20220530

Available from: 2022-05-25 Created: 2022-05-25 Last updated: 2023-06-26 Bibliographically approved
3. Explaining automated decision-making: a multinational study of the GDPR right to meaningful information
2022 (English) In: The Geneva Papers on Risk and Insurance - Issues and Practice, ISSN 1018-5895, E-ISSN 1468-0440. Article in journal (Refereed) Published
Abstract [en]

The General Data Protection Regulation (GDPR) establishes a right for individuals to get access to information about automated decision-making based on their personal data. However, the application of this right comes with caveats. This paper investigates how European insurance companies have navigated these obstacles. With the help of volunteering insurance customers, requests for information about how insurance premiums are set were sent to 26 insurance companies in Denmark, Finland, the Netherlands, Poland and Sweden. The findings illustrate the practice of responding to GDPR information requests, and the paper identifies possible explanations for shortcomings and omissions in the responses. The paper also adds to existing research by showing how the wordings in the different language versions of the GDPR could lead to different interpretations. Finally, the paper discusses what can reasonably be expected from explanations in consumer-oriented information.

Place, publisher, year, edition, pages
Springer Nature, 2022
Keywords
GDPR, Right of access, Meaningful information, Transparency, Insurance, Automated decision-making
National subject category
Human-Computer Interaction (Interaction Design)
Identifiers
urn:nbn:se:kth:diva-312960 (URN), 10.1057/s41288-022-00271-9 (DOI), 000790193100003 (), 2-s2.0-85129328785 (Scopus ID)
Note

QC 20220530

Available from: 2022-05-25 Created: 2022-05-25 Last updated: 2023-06-30 Bibliographically approved
4. Transparency hurdles: investigating explanations of automated decision-making in practice
(English) Manuscript (preprint) (Other academic)
Abstract [en]

The study investigates how companies respond to transparency requests under the right of access regarding automated decision-making. With the increasing use of automated decision-making, the ability of consumers to understand how and why such decisions are made becomes ever more important for achieving informed consent and maintaining autonomy in the digital space. Transparency might be one way to achieve this. The article investigates responses to transparency requests in practice, which, combined with a literature review, suggests that the right of access in the GDPR is hard for consumers to realize. The authors made real requests for explanations about automated decision-making to 24 companies, using their rights as consumers as stipulated in GDPR Article 15(1)(h). The replies from the companies were analysed and reference interviews were conducted. Only two companies explained how they use automated decision-making, while four claimed they had no such automation. Six had a different legal interpretation of the question, and 12 failed to answer the question altogether. Based on the lackluster responses from the companies, the authors present nine hurdles that consumers face when requesting transparency. These hurdles explain why it is difficult to get adequate explanations regarding automated decision-making and show that much remains to be done in order to realize adequate transparency for consumers.

Keywords
Transparency, GDPR, right of access, meaningful information, automated decision-making, explanations
National subject category
Human-Computer Interaction (Interaction Design)
Research subject
Human-Computer Interaction
Identifiers
urn:nbn:se:kth:diva-313580 (URN)
Note

QC 20220613

Available from: 2022-06-07 Created: 2022-06-07 Last updated: 2022-06-25 Bibliographically approved

Open Access in DiVA

Transparent but incomprehensible (6971 kB), 696 downloads
File information
File name: FULLTEXT01.pdf, File size: 6971 kB, Checksum: SHA-512
34dfbd8ae669e817d39e9d91984a5c9ee34c31840ec4bbcea6da2d2b1b1b8a174aff4f61bb0b30863f8fef0ee9daa25bdad3b28d2e9a8276571c714b7fe8644a
Type: fulltext, Mimetype: application/pdf

Person

Dexe, Jacob
