First- and Second-Level Bias in Automated Decision-making
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. RISE Research Institutes of Sweden, SE-164 29, Kista, Sweden.
ORCID iD: 0000-0003-2017-7914
2022 (English). In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 35, no. 2, article id 21. Article in journal (Refereed). Published.
Abstract [en]

Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying the typology developed, a number of illuminating observations are made. First, it is observed that some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques. However, such techniques have only a limited potential to alleviate first-level discrimination. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. It is argued that detecting and alleviating second-level arbitrariness is a profound problem because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains. 

Place, publisher, year, edition, pages
Springer Nature, 2022. Vol. 35, no. 2, article id 21
Keywords [en]
Arbitrariness, Bias, Decision-support, Discrimination, Explainable artificial intelligence (XAI)
National Category
Computer Sciences; Philosophy
Identifiers
URN: urn:nbn:se:kth:diva-322395
DOI: 10.1007/s13347-022-00500-y
Scopus ID: 2-s2.0-85127109812
OAI: oai:DiVA.org:kth-322395
DiVA id: diva2:1718924
Note

QC 20221214

Available from: 2022-12-14. Created: 2022-12-14. Last updated: 2022-12-14. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Franke, Ulrik
