In our current generative AI paradigm, so-called hallucinations are typically seen as a kind of nuisance that will eventually be swept away as the technology improves. There are several reasons to question this assumption. One of them is that the very phenomenon is the result of deliberate business decisions by corporations invested in delivering diverse sentence structures through deep learning and generative pre-trained transformers (GPTs). This article urges a fresh view of "hallucinations" by arguing that, rather than being errors in any conventional sense, "hallucinations" are evidence of a probabilistic system incapable of dealing with questions of knowledge. These systems are epistemologically indifferent. Yet, by presenting themselves as errors to users of generative AI, "hallucinations" can function as practical reminders of, and indexes to, the limits of this kind of machine learning. Viewed this way, "hallucinations" remind us that every time we get something reasonable-seeming from a system such as OpenAI's ChatGPT, we might as well have been given something quite outrageous; from the machine's perspective, it is all the same.