Publications (10 of 51)
Fredrikzon, J. (2025). Designing Paranoid Machines: Kenneth Colby and the Tensions Between Error and Intelligence in 20th-century AI. Paper presented at Stanford University, Center for Spatial and Textual Analysis (CESTA), invited lecture series, May 13, 2025.
Designing Paranoid Machines: Kenneth Colby and the Tensions Between Error and Intelligence in 20th-century AI
2025 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]

In the history of conceptualizing intelligence in machines, errors have played a key role. In mid-20th-century cybernetics and computer science, leading figures insisted that the possibility of independent reasoning and creativity in machines hinged on a system’s ability to deviate or break from its original design and programming. A similar idea was espoused in the 1980s critique of artificial intelligence (AI), but this time deviance, or error, as a warrant of intelligence was placed with humans. The way humans err, it was claimed, testifies to a type of mental performance permanently out of reach of machines.

Using these conflicting discourses as framing, this talk turns to the historical period between them and to a series of experiments aimed at creating paranoid computers (essentially software with delusional traits). In focus here are the studies carried out by medical doctor, psychologist, and AI pioneer Kenneth Colby, who worked at the Stanford Artificial Intelligence Lab (SAIL) in the 1960s and 70s. I argue that Colby’s undertakings capture essential aspects of both traditions mentioned, raising questions about what constitutes an error in a machine made to simulate human behavior. In highlighting the entanglements between error and intelligence in the history of AI, the talk notes their relevance for our current sociotechnical landscape. This includes not only the employment of chatbots to act as therapists for human subjects but also conceiving of AI systems themselves as objects worthy of analysis by “AI psychologists” – specialists in interpreting the otherwise inscrutable, alleged “inner lives” of models.

Place, publisher, year, edition, pages
Stanford University, Center for Spatial and Textual Analysis (CESTA), 2025
Series
@CESTAStanford: Video channel for the Center for Spatial and Textual Analysis (CESTA)
Keywords
ai, artificial intelligence, chatbot, paranoia, error, psychiatry, Kenneth Colby, SAIL, Stanford AI Lab
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-369475 (URN)
Conference
Stanford University, Center for Spatial and Textual Analysis (CESTA), invited lecture series, May 13, 2025.
Note

QC 20250919

Available from: 2025-09-08. Created: 2025-09-08. Last updated: 2025-09-19. Bibliographically approved.
Fredrikzon, J. (2025). Prompting the Dead: Technological Spiritualism in the Age of Machine Learning. Paper presented at the AI and Media Roundtable, Digital Aesthetics and Media Studies Colloquium, Stanford Humanities Center, Stanford University, May 20, 2025.
Prompting the Dead: Technological Spiritualism in the Age of Machine Learning
2025 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]

Technical media of recording and playback have, since at least the 19th century, been employed in attempts to contact the spirits of the dead. In these histories of technological spiritualism, humans themselves have often played the role of “media”. In this talk, I compare the mid-20th-century phenomenon of Electronic Voice Phenomena (EVP), in which tape recorders allegedly picked up messages from “the other side”, with so-called deadbots: machine learning systems trained to simulate deceased people. In particular, the talk will note the significance of error and labor in these practices and how they distribute the effort of interpretation between user and machine.

Keywords
ai, artificial intelligence, deadbots, griefbots, AI history, media, medium, spiritualism, jürgenson, tape, magnetic, death, afterlife
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-369476 (URN)
Conference
AI and Media Roundtable, Digital Aesthetics and Media Studies Colloquium, Stanford Humanities Center, Stanford University, May 20, 2025.
Note

QC 20250919

Available from: 2025-09-08. Created: 2025-09-08. Last updated: 2025-09-19. Bibliographically approved.
Fredrikzon, J. (2025). Raderingsvåldet i den AI-drivna populismens epok [The violence of erasure in the epoch of AI-driven populism]. Respons.
Raderingsvåldet i den AI-drivna populismens epok
2025 (Swedish). In: Respons, ISSN 2001-2292. Article in journal (Other (popular science, discussion, etc.)). Published.
Abstract [en]

The article argues that, as of spring 2025, the developments in United States policy toward higher education should be understood as a form of erasure as violence. This is different from traditional fears of bureaucracy. While we are used to thinking of governmental administration as burying citizens in inefficient processes and absurd procedures, the dismantling of administrative infrastructure is worse. The first stage of erasure as violence concerns mainly the wrecking ball directed by DOGE and similar initiatives to tear down the infrastructure upholding governance. The second stage concerns the individual level of research and teaching. Here, we have witnessed a deliberate campaign to make people in academia – students, teachers, researchers – insecure and afraid, resulting in erasure as violence in the form of self-censorship. Even though most actors in academia are not directly threatened by cuts in funding, discontinued courses, or demolished research initiatives, many feel the pressure to adjust as an act of survival. We have only begun to see the consequences of this forced behavior.

Place, publisher, year, edition, pages
Stockholm, 2025
Keywords
artificial intelligence, ai, erasure, populism, trump, usa, university, bureaucracy, doge, silicon valley
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-369472 (URN)
Note

QC 20250908

Available from: 2025-09-08. Created: 2025-09-08. Last updated: 2025-09-08. Bibliographically approved.
Fredrikzon, J. (2025). Rethinking Error: ‘Hallucinations’ and Epistemological Indifference. Critical AI, 3(1)
Rethinking Error: ‘Hallucinations’ and Epistemological Indifference
2025 (English). In: Critical AI, ISSN 2834-703X, Vol. 3, no. 1. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

In our current generative AI paradigm, so-called hallucinations are typically seen as a kind of nuisance that will eventually be swept away as the technology improves. There are several reasons to question this assumption. One of them is that the very phenomenon is the result of deliberate business decisions by corporations invested in delivering diverse sentence structures through deep learning and generative pretrained transformers (GPTs). This article urges a fresh view of “hallucinations” by arguing that, rather than being errors in any conventional sense, “hallucinations” are evidence of a probabilistic system incapable of dealing with questions of knowledge. These systems are epistemologically indifferent. Yet, by presenting as errors to users of generative AI, “hallucinations” can function as practical reminders of, and indexes to, the limits of this kind of machine learning. Viewed this way, “hallucinations” remind us that every time you get something reasonable-seeming from a system such as OpenAI’s ChatGPT, you might as well have been given something quite outrageous; from the machine’s perspective, it’s all the same.

Place, publisher, year, edition, pages
Durham: Duke University Press, 2025
Keywords
ai, hallucination, error, mistake, LLM, large language model, fact
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-362581 (URN)
Funder
Swedish Research Council, 2022-00352_VR
Note

QC 20250428

Available from: 2025-04-21. Created: 2025-04-21. Last updated: 2025-04-28. Bibliographically approved.
Fredrikzon, J. (2025). Training the Deceased: Deadbots and Technological Spiritualism. Paper presented at AI and Social Normativity: Rethinking Error, Bias, and Truth, 28 January 2025, UC Berkeley.
Training the Deceased: Deadbots and Technological Spiritualism
2025 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]

Deadbots—AI systems designed to simulate the dead—clarify how generative AI reshapes temporal sense‑making. Operating in the “digital limit situation” of loss and finitude, they neither preserve memories nor store archives; they sever material traces and outsource the work of remembrance to automated interaction, ultimately fostering forgetting. The talk frames deadbots as a convergence of two traditions. From cybernetics, they inherit the “empty archive,” where feedback replaces retention and provenance is erased during model training. From technological spiritualism, they draw on practices that use technical mediation to confer authenticity, echoing nineteenth‑century séance boards and mid‑twentieth‑century Electronic Voice Phenomena. In both cases, technology gains authority through its apparent objectivity and opacity, inviting speculation about contact with the absent. Yet deadbots diverge from their spiritualist lineage by eliminating the interpretive labor once required to sustain such connections. The user’s role is reduced to passive consumption of a corporate service, while the system’s probabilistic token prediction turns remembrance into chance encounters. Consequently, deadbots function as engines of presentism—the endpoint of an “automation of memory” that dissolves the past into ever‑renewed simulations.

Keywords
deadbot, EVP, technological spiritualism, forgetting, techniques of authenticity, Jürgenson, digital afterlife, error
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-362588 (URN)
Conference
AI and Social Normativity: Rethinking Error, Bias, and Truth, 28 January 2025, UC Berkeley.
Funder
Swedish Research Council, 2022-00352_VR
Note

QC 20250422

Available from: 2025-04-21. Created: 2025-04-21. Last updated: 2025-04-22. Bibliographically approved.
Fredrikzon, J. (2024). ARARAT 1976: The Exhibition as Environing Medium. Journal of Social and Cultural Possibilities (JSCP), 58-79
ARARAT 1976: The Exhibition as Environing Medium
2024 (English). In: Journal of Social and Cultural Possibilities (JSCP), ISSN 2836-7510, p. 58-79. Article in journal (Refereed). Published.
Abstract [en]

How were the problems and promises of technology addressed in the heightened public awareness of environmental issues during the 1970s? ARARAT provides insight here. It was an exhibition arranged at Moderna Museet in Stockholm in 1976. Its goal was to inform visitors about the interconnections between humans, society, and the environment and, on the basis of such knowledge, empower people to seek less wasteful ways of life. To this end, the exhibition invited visitors to become actively involved in experiments and get acquainted with everyday technologies that were less expensive to make, easier to repair, and more transparent in terms of production and life cycle. Previous research on ARARAT has focused on its relevance for current practices in art and architecture. The present article, by contrast, aims to situate ARARAT in contexts that are both more general and more specific than previous work has been able to show. The study argues, first, that ARARAT can be understood as pioneering the field of practical knowledge (“praktisk kunskap”) before it was more formally established around 1980 in Sweden; second, that the ARARAT undertaking amounted to a new kind of popular education (“folkbildning”) with its combination of science, politics, and hands-on experimentation; and, third, that the ARARAT project was a demonstration of the exhibition format as an environing medium, whereby it actively took part in changing the environment by empowering the population in an era with significant collective challenges.

Place, publisher, year, edition, pages
Philadelphia: Temple University Press, 2024
Keywords
ARARAT, environment, environing technology, environing media, appropriate technology, systems ecology, future studies, practical knowledge, popular education, 1970s, Sweden, art
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-362582 (URN)
Note

QC 20250422

Available from: 2025-04-21. Created: 2025-04-21. Last updated: 2025-06-12. Bibliographically approved.
Fredrikzon, J. (2024). History as Error: Uses of the Past in Cultures of Prediction. Paper presented at Fehler und Nichtfunktionieren in (Digitalisierten) Gesellschaften, Technische Universität Darmstadt, Jan 15, 2024.
History as Error: Uses of the Past in Cultures of Prediction
2024 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]

With generative artificial intelligence being applied in everyday operations, e.g. as an alternative to web search, the question of models as sites of knowledge must be considered. As a source of information, an AI model is neither an ordered archive nor a database but a statistical engine. In this lecture, I discuss the relevance of error in the production of knowledge in cybernetic and AI systems and how it relates to specific uses of the past. While error-correction is crucial to the operation of language models, false outcomes, e.g. hallucinations, can hardly be considered errors or mistakes in an epistemological sense because the concepts of truth and falsity are beyond the model architecture. This situation can be compared to an earlier cybernetic principle of negative feedback as a method to regulate and control a system. Such an approach has been described as the opposite of a traditional archive, effectively producing a very instrumental use of the past: input to steer toward a more desired outcome. Nevertheless, there have been approaches in the social and human sciences which drew on cybernetic ideas for how to make use of accumulated knowledge. Comparing cybernetic principles with current AI regimes – especially their conceptions of errors – this lecture asks: what uses of history (broadly conceived) are made possible by each of these paradigms?

Keywords
AI, Cybernetics, Error-correction, archive, past, historical knowledge
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-362589 (URN)
Conference
Fehler und Nichtfunktionieren in (Digitalisierten) Gesellschaften, Technische Universität Darmstadt, Jan 15, 2024
Funder
Swedish Research Council, 2022-00352_VR
Note

QC 20250428

Available from: 2025-04-21. Created: 2025-04-21. Last updated: 2025-04-28. Bibliographically approved.
Fredrikzon, J. (2024). How Evolving Paradigms Reflect Technology’s Role in Law. Paper presented at the XXXIX Nordic Conference on IT and Law, Artificial Intelligence and Legal Methods: Navigating the New Frontier, Nov 5–6, 2024, Stockholm University.
How Evolving Paradigms Reflect Technology’s Role in Law
2024 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]

For more than half a century, the use of artificial intelligence in legal domains has been a topic of interest for scholars and organizations. In this talk, I look at some major trends in these undertakings, asking: How did they conceive of the role of technology in the area of legal work? From the standpoint of how these projects sought to implement AI, how did they imagine the function of the law in society? Then, turning to our current situation and looking forward, I discuss some potential trade-offs which lie before us. If previous attempts to make legal matters computable ran up against the limits of propositional logic, the bluntness of formalization et cetera – what can we expect from deep learning? More specifically, is there a conflict between prediction as a goal in AI and explainability as a requirement in the practice of law? Or – might the efficiency of predictive statistics encourage a new self-image of the legal domain: from one concerned with precision, transparency, and accountability to one which traffics in data-driven forecasting, valorized by the old dream of prevention?

Keywords
ai, artificial intelligence, law, jurisprudence, legal, explainability, interpretability
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-362587 (URN)
Conference
XXXIX Nordic Conference on IT and Law, Artificial Intelligence and Legal Methods: Navigating the New Frontier, Nov 5–6, 2024, Stockholm University
Funder
Swedish Research Council, 2022-00352_VR
Note

QC 20250428

Available from: 2025-04-21. Created: 2025-04-21. Last updated: 2025-04-28. Bibliographically approved.
Fredrikzon, J. (2024). John Durham Peters, Speaking into the Air (1999). In: Stina Bengtsson; Staffan Ericson; Fredrik Stiernstedt (Eds.), Classics in Media Theory (pp. 372-389). London: Informa UK Limited.
John Durham Peters, Speaking into the Air (1999)
2024 (English). In: Classics in Media Theory / [ed] Stina Bengtsson; Staffan Ericson; Fredrik Stiernstedt, London: Informa UK Limited, 2024, p. 372-389. Chapter in book (Other academic)
Abstract [en]

Our notion of what it means to communicate – where does it come from? This chapter visits a seminal work facing this question head on. In Speaking into the Air (1999), composed on the doorstep to the new millennium, historian and philosopher of media John Durham Peters suggests that our understanding of communication rests on an unfortunate view of mediated interpersonal exchange as something inherently broken. The presumed defective state of our interactions across distances, he argues, is based on the misguided idea that a harmonious and flawless union of souls is both possible and desired. To demonstrate his contention, Peters locates an original separation between the dialogues of Socrates – forcing participation, pushing towards agreement – and the dissemination of Jesus as told in the Gospels – spreading the seeds or words for those who have ears to hear. He then carries this ancient bifurcation on an idiosyncratic route between pillars of the Western canon of intellectual history, among them Augustine, Aquinas, Bacon, Locke, Hegel, Marx, Kierkegaard, and Haraway. As Peters interrogates their positions on angels, money, love, law, ethics, labour et cetera, he takes these topics to be, ultimately, problems of communication, whereby he builds on scholars like McLuhan and Kittler while also laying a foundation for continued extensions of concepts of media. Peters’ work productively shows the deep affinities between spiritual practices and technical endeavours in establishing contact with absent entities, as in the cases of telegraphy and telepathy. It reminds its readers that techniques such as phonography, telephony, and photography were received with existential unease and wild speculation regarding the proper and probable locations of mind and matter. All carried fantasies of disembodied paths to the dead and the distant. Critiquing the idea of communication as an enduring shortcoming, Peters proposes that we seek a less ambitious notion of what it means to connect – one that gives up on forced perfection and the desperate technological fixes employed to attain it and, instead, stands in awe of the fact that we are able to reach one another at all: humans and, perhaps, animals, computers, and aliens too. In making this case, Peters draws on the American pragmatist tradition and finds with William James and Ralph Waldo Emerson an attitude towards exchange with others based on “making-do”; communication as a form of work that is ongoing, embodied, trusting, and always attentive to the profound otherness of fellow creatures and environments.


Place, publisher, year, edition, pages
London: Informa UK Limited, 2024
Keywords
john durham peters, media, socrates, communication, dissemination
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-362583 (URN)
10.4324/9781003432272-28 (DOI)
2-s2.0-85195341480 (Scopus ID)
Note

Part of ISBN 9781032557953

QC 20250422

Available from: 2025-04-21. Created: 2025-04-21. Last updated: 2025-07-17. Bibliographically approved.
Fredrikzon, J. (2024). Nu tar tech-brorsorna över USA:s politik [Now the tech bros are taking over US politics]. Tidningen Vi, Article ID 9 november.
Nu tar tech-brorsorna över USA:s politik
2024 (Swedish). In: Tidningen Vi, ISSN 0346-4180, article id 9 november. Article in journal (Other (popular science, discussion, etc.)). Published.
Abstract [en]

The article argues that Donald Trump’s 2024 election victory constitutes a strategic triumph for Silicon Valley’s “tech bros” – Elon Musk, Peter Thiel, Marc Andreessen, and others – whose long-term goal is to replace politics with technology. The author weaves together observations from Berkeley, the story of Ted Kaczynski, and the extreme energy hunger of the modern AI economy to show how a technology-centered mindset increasingly dominates the power structure of the United States. He describes how venture capital backs candidates such as J.D. Vance, how AI giants plan their own nuclear power plants (including the reopening of Three Mile Island), and how Andreessen’s “techno-optimist manifesto” mirrors Kaczynski’s idea of an existential enemy – now directed at institutions, regulation, and democracy. 2024 is presented as the year Silicon Valley “won the election”, with the promise that technology alone will solve society’s problems – but only once politics has been abolished.

Keywords
ai, tech bro, populism, berkeley, us election, trump, harris
National Category
Other Humanities not elsewhere specified
Research subject
History of Science, Technology and Environment
Identifiers
urn:nbn:se:kth:diva-362590 (URN)
Funder
Swedish Research Council, 2022-00352_VR
Note

QC 20250422

Available from: 2025-04-21. Created: 2025-04-21. Last updated: 2025-04-22. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-5566-503X