The Past as Error: Archival Regimes in Cybernetics and AI
2024 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]
Machine learning (ML) systems are increasingly used as “archives”, replacing the conventional search-engine interface to repositories of collected knowledge. Yet, because these systems were never intended for information retrieval, their widespread use in this capacity hints toward a new archival regime. To understand the historical circumstances that have shaped how current ML systems retain and serve data back to users, this chapter revisits the fundamental training methods of cybernetics and connectionism: negative feedback and backpropagation, respectively. By turning to these mechanisms, the investigation highlights the importance of error in the history of artificial intelligence: not the sorts of error typically recognized and embraced by humanistic scholarship (deviations, wanderings, nonconformities), but rather the types of error more often studied in psychology and engineering.
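To make the shared logic of these two mechanisms concrete, the following is a minimal sketch in standard textbook notation (an illustrative reconstruction, not drawn from the chapter itself; the symbols r, y, e, u, K, L, θ and η are introduced here for the example). A negative-feedback controller acts on the error between a reference value and the measured output,

\[ e(t) = r(t) - y(t), \qquad u(t) = K \, e(t), \]

while backpropagation combined with gradient descent nudges a model's parameters against the gradient of a loss that quantifies the error between prediction and target,

\[ \theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t). \]

In both cases, deviation from a predefined goal exists only to be measured and reduced.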
In doing so, it makes the argument that these error-correction techniques, applied in the effort to optimize a learning outcome in the system, effectively treat the past as a series of errors whose only value is to steer a machine or model toward a predefined goal. Critically, this line of inquiry enables the chapter to ask what effects standardized approaches to prediction have on how the past is conceived. The suggestion is made that by implementing systems based on cultures of prediction to act as “archives”, we arrive at an order in which information is organized according to the principle of statistical likelihood. Whatever falls outside of the most probable pattern tends to disappear as error. This is a core property of machine learning. If digital computers introduced a problematic conflation between memory and storage, one intensified by database technology, artificial intelligence forces us to consider this dynamic again. After all, large language models were designed with inspiration from neural networks, not records management. What is desirable for a computational learning task is quite different from what we expect of a database or an archive. The chapter argues that this ongoing shift in the architecture through which data about the world are accessed shapes our informational landscape and, as a consequence, our ability to form an image of the past.
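As a minimal illustration of the likelihood principle at work (again an assumption-level sketch of the standard autoregressive setup, not a formulation taken from the chapter), a large language model serves back the continuation it estimates to be most probable given what came before,

\[ \hat{x}_{t+1} = \arg\max_{x} \; p_{\theta}(x \mid x_1, \dots, x_t), \]

so that whatever the model assigns low probability is, in effect, treated as noise or error and tends to drop out of what such an “archive” returns.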
Place, publisher, year, edition, pages
2024.
Keywords [en]
ai, artificial intelligence, error-correction, error, archive, past, history, cybernetics
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
URN: urn:nbn:se:kth:diva-362585
OAI: oai:DiVA.org:kth-362585
DiVA, id: diva2:1953345
Conference
Questioning History in the Age of Artificial Intelligence, symposium at UC Berkeley, April 11–12, 2024
Funder
Swedish Research Council, 2022-00352_VR
Note
QC 20250428
Available from: 2025-04-21 Created: 2025-04-21 Last updated: 2025-04-28 Bibliographically approved