With generative artificial intelligence being applied in everyday operations, for example as an alternative to web search, the question of models as sites of knowledge must be considered. As a source of information, an AI model is neither an ordered archive nor a database but a statistical engine. In this lecture, I discuss the relevance of error in the production of knowledge in cybernetic and AI systems and how it relates to specific uses of the past. While error correction is crucial to the operation of language models, false outcomes, e.g. hallucinations, can hardly be considered errors or mistakes in an epistemological sense, because the concepts of truth and falsity lie outside the model architecture. This situation can be compared to the earlier cybernetic principle of negative feedback as a method for regulating and controlling a system. Such an approach has been described as the opposite of a traditional archive, effectively producing a highly instrumental use of the past: input used to steer a system toward a more desired outcome. Nevertheless, there have been approaches in the social and human sciences that drew on cybernetic ideas for how to make use of accumulated knowledge. Comparing cybernetic principles with current AI regimes, especially their conceptions of error, this lecture asks: what uses of history (broadly conceived) does each of these paradigms make possible?