Explaining automated decision-making: a multinational study of the GDPR right to meaningful information
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE - Research Institutes of Sweden. ORCID iD: 0000-0003-0738-2737
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE - Research Institutes of Sweden. ORCID iD: 0000-0003-2017-7914
Lund University.
Aalborg University.
2022 (English). In: Geneva papers on risk and insurance. Issues and practice, ISSN 1018-5895, E-ISSN 1468-0440. Article in journal (Refereed). Published.
Abstract [en]

The General Data Protection Regulation (GDPR) establishes a right for individuals to access information about automated decision-making based on their personal data. However, the application of this right comes with caveats. This paper investigates how European insurance companies have navigated these obstacles. With the help of volunteering insurance customers, requests for information about how insurance premiums are set were sent to 26 insurance companies in Denmark, Finland, the Netherlands, Poland and Sweden. The findings illustrate the practice of responding to GDPR information requests, and the paper identifies possible explanations for shortcomings and omissions in the responses. The paper also adds to existing research by showing how the wordings of the different language versions of the GDPR could lead to different interpretations. Finally, the paper discusses what can reasonably be expected from explanations in consumer-oriented information.

Place, publisher, year, edition, pages
Springer Nature, 2022.
Keywords [en]
GDPR, Right of access, Meaningful information, Transparency, Insurance, Automated decision-making
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-312960
DOI: 10.1057/s41288-022-00271-9
ISI: 000790193100003
Scopus ID: 2-s2.0-85129328785
OAI: oai:DiVA.org:kth-312960
DiVA, id: diva2:1661198
Note

QC 20220530

Available from: 2022-05-25. Created: 2022-05-25. Last updated: 2023-06-30. Bibliographically approved.
In thesis
1. Transparent but incomprehensible: Investigating the relation between transparency, explanations, and usability in automated decision-making
2022 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Transparency is almost always seen as a desirable state of affairs. Governments should be more transparent towards their citizens, and corporations should be more transparent towards both public authorities and their customers. More transparency means more information that citizens can use to make decisions about their daily lives, and with increasing amounts of information in society, citizens should be able to make more and more choices that align with their preferences. The story, however, is slightly too good to be true. Instead, citizens are skeptical of increased data collection, demand stricter transparency requirements, and seem to lack both the time and the ability to properly engage with all the information available.

In this thesis, the relation between transparency, explanations and usability is investigated within the context of automated decision-making. Aside from demonstrating the benefits that transparency can have, the thesis shows a wide array of problems with transparency, and how transparency can be harder to accomplish than most assume. It explores explanations, which often constitute the transparency, and their limitations; developments in automation and algorithmic decision-making; and how society tends to regulate such phenomena. It then applies these frameworks and investigates how human-computer interaction in general, and usability in particular, can help transparency deliver the many benefits it promises.

Four papers are presented that study the topic from various perspectives. Paper I looks at how governments give guidance on achieving competitive advantages with ethical AI, while Paper II studies how insurance professionals view the benefits and limitations of transparency. Papers III and IV both study transparency in practice through information requests under the GDPR. But while Paper III provides a comparative study of GDPR implementation in five countries, Paper IV shows how transparency can fail and explores why.

The thesis concludes by showing that while transparency does indeed have many benefits, it also has limitations. Companies and other actors need to be aware that transparency is sometimes simply not the right solution, and that explanations have limitations both for machines and for humans. Transparency as a tool can reach certain goals, but good transparency requires good strategies, active choices and an awareness of what users need.

Abstract [sv]

That something is transparent is usually seen as a desirable quality. The public sector should be transparent towards citizens, and companies should be transparent towards authorities and customers alike. More transparency means more information is available to citizens, so that they can make their own active choices in life, and as more and more information becomes available in society, citizens can also make more choices that align with their preferences. Unfortunately, this is a story too good to be true. More often, citizens seem skeptical of increased data collection, want both the state and companies to become more transparent, and lack both the time and the skills to truly understand all the information available around them.

This thesis investigates the relation between transparency, explanations and usability, focusing on how these phenomena manifest in automated decisions and algorithms. Besides showing the benefits transparency has, the thesis demonstrates a range of problems with transparency, and how transparency can be harder to put into practice than many assume. It explores explanations, which transparency often consists of, and their limitations; developments in automation and algorithmic decision-making; and how society tends to regulate such phenomena. The thesis then uses these models and concepts to examine how human-computer interaction in general, and usability in particular, can be used to improve transparency and realize its promised benefits.

Four studies are presented that examine the topic from different perspectives. Paper I examines how governments and public authorities use AI strategies to achieve the competitive advantages of ethically sustainable AI, while Paper II studies how insurance professionals relate to the benefits and drawbacks of transparency. Papers III and IV both examine transparency in practice by requesting explanations of automated decisions based on rights in the GDPR. But where Paper III compares implementation in five different countries, Paper IV instead shows how transparency can fail and seeks to explain why.

The thesis concludes by showing that although transparency does have many benefits, it also has limitations. Companies and other actors must be aware that transparency may not always be the right solution, and that explanations also have limited effect in both machines and humans. Transparency is a tool that can be used to reach certain goals, but good transparency requires good strategies, active choices and an awareness of what users want.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2022. p. 97
Series
TRITA-EECS-AVL ; 2022:44
Keywords
Transparency, explanations, algorithms, automated decision-making, AI, HCI
National Category
Human Computer Interaction
Research subject
Human-computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-313825
ISBN: 978-91-8040-294-1
Public defence
2022-09-16, F3, Lindstedtsvägen 26, Stockholm, 13:30 (English)
Note

QC 20220613

Available from: 2022-06-13. Created: 2022-06-11. Last updated: 2022-10-04. Bibliographically approved.

Open Access in DiVA

fulltext (1002 kB), 298 downloads
File information
File name: FULLTEXT01.pdf
File size: 1002 kB
Checksum: SHA-512
c95464843c5f49bb0e52205187fd9d0c83186e4a0ce3b977127a037d356bd017c588d86cf546e106084d8265b5c27d2d8fa216f0702a175e5f3bd7db08591f9b
Type: fulltext
Mimetype: application/pdf
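The record lists a SHA-512 checksum for the full-text PDF, which can be used to verify the integrity of a downloaded copy. A minimal Python sketch, assuming the file has been saved locally under the name given in the record:

```python
import hashlib

def sha512_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-512 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha512()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum listed in the record, e.g.:
# sha512_of_file("FULLTEXT01.pdf") == "c95464843c5f49bb..."
```

A match confirms the download is bit-identical to the archived file; any difference indicates corruption or a changed version.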


Authority records

Dexe, Jacob; Franke, Ulrik

