The invisible work of AI alignment and its historical foundations
KTH, School of Architecture and the Built Environment (ABE), Philosophy and History, History of Science, Technology and Environment. ORCID iD: 0000-0002-5566-503X
2023 (English). In: Panel: Shaping Sustainable Socio-Technical Work Futures: Cultures, Practices, and Imaginaries of Digital Professionals, 2023. Conference paper, oral presentation with published abstract (Other academic)
Abstract [en]

The history of technological automation – its ideas and implementations – is brimming with examples of how human efforts have been made invisible or unimportant in order for technology to come across as self-sufficient. As a field, artificial intelligence (AI) has contributed widely to the downplaying of the labor involved in both the development and the production stages of its systems. Given the grand visions of AI – cognitive performance on par with or beyond that of its makers, and the professed technological unemployment resulting from such accomplishments – the strategic invisibility of human work is here particularly salient. This paper investigates these matters by studying the policies of prominent actors in contemporary AI labs and by situating them in a longer history of making certain forms of work disappear. Specifically, the paper addresses two kinds of labor of critical importance to the current paradigm of machine learning: first, the manual labor required to classify and tag data in a process known as "labeling" and, second, the work of assessing and tuning model output during so-called reinforcement learning from human feedback (RLHF). Moreover, before models have any data on which to train their patterning and predictions, Internet users have invested in the production of digital data for a variety of purposes, data that was subsequently exploited in pre-training. In addition to being principally veiled, these types of work – the toil of labeling and the correcting of model output through RLHF – are understood in this paper as disciplinary in nature: they limit the range of available categories to which human-made information can belong, and they punish undesired behavior.
The ultimate goal of these efforts, this paper argues, is to remove the kinds of mistakes that users of AI models might find incoherent or offensive while masking that such labor was carried out, all in the pursuit of creating the impression of an artificial intelligence reliably performing in an automatic and autonomous manner.

Place, publisher, year, edition, pages
2023.
Keywords [en]
artificial intelligence, history of labor, hidden labor, crowdsourcing
National Category
Technology and Environmental History
Research subject
History of Science, Technology and Environment
Identifiers
URN: urn:nbn:se:kth:diva-347025
OAI: oai:DiVA.org:kth-347025
DiVA, id: diva2:1861329
Conference
4S – Society for Social Studies of Science: Sea, Land, Endangered Ecologies, Solidarities, Honolulu, Nov. 8–11, 2023
Note

QC 20240528

Available from: 2024-05-28 Created: 2024-05-28 Last updated: 2025-02-11 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Authority records

Fredrikzon, Johan
