kth.se Publications
1 - 8 of 8
  • 1.
    Dexe, Jacob
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Transparent but incomprehensible: Investigating the relation between transparency, explanations, and usability in automated decision-making. 2022. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Transparency is almost always seen as a desirable state of affairs. Governments should be more transparent towards their citizens, and corporations should be more transparent towards both public authorities and their customers. More transparency means more information which citizens can use to make decisions about their daily lives, and with increasing amounts of information in society, those citizens would be able to make more and more choices that align with their preferences. It is just that the story is slightly too good to be true. Instead, citizens are skeptical towards increased data collection, demand harsher transparency requirements and seem to lack both time and ability to properly engage with all the information available.

    In this thesis, the relation between transparency, explanations and usability is investigated within the context of automated decision-making. Aside from showing the benefits that transparency can have, it shows a wide array of problems with transparency, and how transparency can be harder to accomplish than most assume. The thesis explores the explanations that often make up the transparency, and their limitations; developments in automation and algorithmic decisions; and how society tends to regulate such things. It then applies these frameworks and investigates how human-computer interaction in general, and usability in particular, can help transparency deliver the many benefits it promises.

    Four papers are presented that study the topic from various perspectives. Paper I looks at how governments give guidance on achieving competitive advantages with ethical AI, while Paper II studies how insurance professionals view the benefits and limitations of transparency. Papers III and IV both study transparency in practice through requests for information under the GDPR. But while Paper III provides a comparative study of GDPR implementation in five countries, Paper IV instead shows and explores how transparency can fail and ponders why.

    The thesis concludes by showing that while transparency does indeed have many benefits, it also has limitations. Companies and other actors need to be aware that transparency is sometimes simply not the right solution, and that explanations have limitations both for automation and for humans. Transparency as a tool can reach certain goals, but good transparency requires good strategies, active choices and an awareness of what users need.

  • 2.
    Dexe, Jacob
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden.
    Eriksson, Magnus
    Knaving, Kristina
    Transparency hurdles: investigating explanations of automated decision-making in practice. Manuscript (preprint) (Other academic).
    Abstract [en]

    The study investigates how companies respond to transparency requests for the right to access regarding automated decision-making. With the increasing use of automated decision-making, the ability for consumers to understand how and why such decisions are made becomes increasingly important for achieving informed consent and maintaining autonomy in the digital space. Transparency might be one way to achieve this. The article investigates responses to transparency requests in practice which, combined with a literature review, suggests that the right to access in the GDPR is hard to realize for consumers. The authors made real requests for explanations about automated decision-making to 24 companies, using their rights as consumers as stipulated in GDPR Article 15(1)(h). The replies from the companies were analysed and reference interviews were conducted. Only two companies explained how they use automated decision-making, while four claimed they had no such automation. Six had a different legal interpretation of the question and 12 failed to answer the question altogether. Based on the lackluster responses from the companies, the authors present nine hurdles that consumers face when requesting transparency. These hurdles explain why it is difficult to get adequate explanations regarding automated decision-making, and show that much remains to be done to realize adequate transparency for consumers.

  • 3.
    Dexe, Jacob
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden.
    Franke, Ulrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden.
    Nordic lights? National AI policies for doing well by doing good. 2020. In: Journal of Cyber Policy, ISSN 2373-8871, Vol. 5, p. 332-349. Article in journal (Refereed).
    Abstract [en]

    Getting ahead on the global stage of AI technologies requires vast resources or novel approaches. The Nordic countries have tried to find a novel path, claiming that responsible and ethical AI is not only morally right but confers a competitive advantage. In this article, eight official AI policy documents from Denmark, Finland, Norway and Sweden are analysed according to the AI4People taxonomy, which proposes five ethical principles for AI: beneficence, non-maleficence, autonomy, justice and explicability. The principles are described in terms such as growth, innovation, efficiency gains, cybersecurity, malicious use or misuse of AI systems, data use, effects on labour markets, and regulatory environments. The authors also analyse how the strategies describe the link between ethical principles and a competitive advantage, and what measures are proposed to facilitate that link. Links such as a first-mover advantage and measures such as influencing international standards and regulations are identified. The article concludes by showing that while ethical principles are present, neither the ethical principles nor the links and measures are made explicit in the policy documents.

  • 4.
    Dexe, Jacob
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden, Kista, Sweden.
    Franke, Ulrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden, Kista, Sweden.
    Nöu, A. A.
    Rad, A.
    Towards increased transparency with value sensitive design. 2020. In: Lecture Notes in Computer Science (LNCS, vol. 12217), Springer Nature, 2020, p. 3-15. Conference paper (Refereed).
    Abstract [en]

    In the past few years, the ethics and transparency of AI and other digital systems have received much attention. There is a vivid discussion on explainable AI, both among practitioners and in academia, with contributions from diverse fields such as computer science, human-computer interaction, law, and philosophy. Using the Value Sensitive Design (VSD) method as a point of departure, this paper explores how VSD can be used in the context of transparency. More precisely, it is investigated (i) if the VSD Envisioning Cards facilitate transparency as a pro-ethical condition, (ii) if they can be improved to realize ethical principles through transparency, and (iii) if they can be adapted to facilitate reflection on ethical principles in large groups. The research questions are addressed through a two-fold case study, combining one case where a larger audience participated in a reduced version of VSD with another case where a smaller audience participated in a more traditional VSD workshop. It is concluded that while the Envisioning Cards are effective in promoting ethical reflection in general, the realization of ethical values through transparency is not always similarly promoted. Therefore, it is proposed that a transparency card be added to the Envisioning Card deck. It is also concluded that a lightweight version of VSD seems useful in engaging larger audiences. The paper is concluded with some suggestions for future work.

  • 5.
    Dexe, Jacob
    et al.
    RISE Research Institutes of Sweden.
    Franke, Ulrik
    RISE Research Institutes of Sweden.
    Rad, Alexander
    RISE Research Institutes of Sweden.
    Transparency and insurance professionals: a study of Swedish insurance practice attitudes and future development. 2021. In: Geneva Papers on Risk and Insurance - Issues and Practice, ISSN 1018-5895, E-ISSN 1468-0440, Vol. 46, no 4, p. 547-572. Article in journal (Refereed).
    Abstract [en]

    The insurance industry is being challenged by increased adoption of automated decision-making. AI advances could conceivably automate everything: marketing, customer service, underwriting and claims management alike. However, such automation challenges consumer trust, as there is considerable public and scholarly debate over the ‘black box’ character of many algorithms. Insurance being a business of trust, this suggests a dilemma. One suggested solution involves adopting algorithms in a transparent manner. This article reports a study of how Swedish insurers deal with this dilemma, based on (i) eight interviews with insurance professionals representing four companies with a joint market share of 45–50% of the Swedish property insurance market and (ii) a questionnaire answered by 71 professionals in a Swedish insurance company. The results show that while transparency is seen as potentially valuable, most Swedish insurers do not use it to gain a competitive advantage or identify clear limits to transparency and are not using AI extensively.

  • 6.
    Dexe, Jacob
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden.
    Franke, Ulrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden.
    Söderlund, Kasia
    Lund University.
    van Berkel, Niels
    Aalborg University.
    Jensen, Rikke Hagensby
    Aalborg University.
    Lepinkäinen, Nea
    University of Turku.
    Vaiste, Juho
    University of Turku.
    Explaining automated decision-making: a multinational study of the GDPR right to meaningful information. 2022. In: Geneva Papers on Risk and Insurance - Issues and Practice, ISSN 1018-5895, E-ISSN 1468-0440. Article in journal (Refereed).
    Abstract [en]

    The General Data Protection Regulation (GDPR) establishes a right for individuals to get access to information about automated decision-making based on their personal data. However, the application of this right comes with caveats. This paper investigates how European insurance companies have navigated these obstacles. With the help of volunteering insurance customers, requests for information about how insurance premiums are set were sent to 26 insurance companies in Denmark, Finland, the Netherlands, Poland and Sweden. The findings illustrate the practice of responding to GDPR information requests, and the paper identifies possible explanations for shortcomings and omissions in the responses. The paper also adds to existing research by showing how the wordings in the different language versions of the GDPR could lead to different interpretations. Finally, the paper discusses what can reasonably be expected from explanations in consumer-oriented information.

  • 7.
    Dexe, Jacob
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden.
    Ledendal, Jonas
    Lund University.
    Franke, Ulrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden AB, Kista, Sweden.
    An Empirical Investigation of the Right to Explanation Under GDPR in Insurance. 2020. In: Lecture Notes in Computer Science, Springer Nature, 2020, Vol. 12395, p. 125-139. Conference paper (Refereed).
    Abstract [en]

    The GDPR aims to strengthen the rights of data subjects and to build trust in the digital single market. This is manifested by the introduction of a new principle of transparency. It is, however, not obvious what this means in practice: what kind of answers can be expected to GDPR requests citing the right to “meaningful information”? This is the question addressed in this article. Seven insurance companies, representing 90–95% of the Swedish home insurance market, were asked by consumers to disclose information about how premiums are set. Results are presented by first giving descriptive statistics, then characterizing the pricing information given, and lastly describing the procedural information offered by insurers as part of their answers. Overall, several different approaches to answering the request can be discerned, including different uses of examples, lists, descriptions of logic and legal basis, as well as data related to the process of answering the requests. The results are analysed in light of GDPR requirements. A number of potential improvements are identified: at least three responses are likely to fail the undue-delay requirement. The article concludes with a discussion of future work.

  • 8.
    Franke, Ulrik
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden, P.O. Box 1263, SE-164 29 Kista, Sweden.
    Helgesson Hallström, Celine
    KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden.
    Artman, Henrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Dexe, Jacob
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden, P.O. Box 1263, SE-164 29 Kista, Sweden.
    Requirements on and Procurement of Explainable Algorithms – A Systematic Review of the Literature. 2024. In: New Trends in Disruptive Technologies, Tech Ethics, and Artificial Intelligence (DITTET 2024), ed. de la Iglesia, D. H., Santana, J. F. D., Rivero, A. J. L., Springer Nature, 2024, Vol. 1459, p. 40-52. Conference paper (Refereed).
    Abstract [en]

    Artificial intelligence is making progress, enabling automation of tasks previously the privilege of humans. This brings many benefits but also entails challenges, in particular with respect to 'black box' machine learning algorithms. Therefore, questions of transparency and explainability in these systems receive much attention. However, most organizations do not build their software from scratch, but rather procure it from others. Thus, it becomes imperative to consider not only requirements on but also procurement of explainable algorithms and decision support systems. This article offers a first systematic literature review of this area. Following construction of appropriate search queries, 503 unique items from Scopus, ACM Digital Library, and IEEE Xplore were screened for relevance; 37 items remained in the final analysis. An overview and a synthesis of the literature are offered, and it is concluded that more research is needed, in particular on procurement, human-computer interaction aspects, and different purposes of explainability.
