A Tsunami of Illusions: An artificially intelligent trompe l’oeil

Cognitive automation creates a set of opportunities that can only be seized if the epistemological limitations of this technology are understood.

Banner: “Trompe l’oeil” by peggyhr is licensed under CC BY 2.0
A different version of this article has been published in Italian on Crossroads, my blog at Il Sole 24 Ore. This article is a sequel to Knowledge ecology of the new search engines, published on the Forum Network on 8 February.
The Forum Network is a space for experts and thought leaders—from around the world and all parts of society— to discuss and develop solutions now and for the future. It aims to foster the fruitful exchange of expertise and perspectives across fields, and opinions expressed do not necessarily represent the views of the OECD.


Defending himself against the criticism of those who accused him of offering a sugar-coated portrait of Venetian daily life, 18th-century Italian playwright Carlo Goldoni observed, "If I wrote the true, no one would believe me. So I must write the verisimilar". A lesson that has been taken to extremes by the creators of ChatGPT, the most talked-about artificial intelligence of the past three months.

The staging has been spectacular. Millions of people have tried it out, commented on it, copied it, used it for work and leisure. But enthusiasm for new technologies travels like a sound wave: it rises, peaks, falls and eventually subsides. For ChatGPT, the stage of maximum excitement has passed: the great outpouring of applause for the apparent brilliance of this automated chat's responses was soon submerged by the most authoritative booing. "High-tech plagiarism", accused linguist Noam Chomsky. "Rubbish!", blurted economist Tim Harford. "Hallucinations", judged cognitive scientist Gary Marcus.

ChatGPT points to the verisimilar and has no interest in distinguishing the true from the false.

Had it instead been a tidal wave, it would have receded by now, leaving all kinds of debris on the beach. And it is among that debris that one must look for what will remain of the experience. As everyone knows, there is the statistically generated eloquence, based on billions of texts chewed up by the system, which works out the sentence that most probably continues a given beginning. The best results, however, are obtained through the intervention of thousands of people, paid a few dollars an hour to make corrections, as Marco Varone, CTO of Expert.ai, reminds us. And yet there is still a smattering of errors, omissions, and pure and simple fabrications: David Smerdon, an economist at the University of Queensland, asked ChatGPT, "What is the most cited economics paper in history?" The machine answered, "'A Theory of Economic History' by Douglass North and Robert Thomas, published in the Journal of Economic History in 1969, cited more than 30,000 times. The paper is now considered a classic of economic history". "Good answer", Harford comments in the Financial Times, "Too bad that paper doesn't exist". ChatGPT aims at the verisimilar and has no interest in distinguishing the true from the false.

Keep reading: Artificial Inventors and the Future of Innovation by Ryan Abbott, Professor of Law and Health Sciences, University of Surrey School of Law

For a patent, you traditionally needed to have a human inventor. But with companies now investing in automating aspects of R&D, do we also need to provide patent protection on “AI-Generated Inventions”? Prof Ryan Abbott makes the case.

Because, in fact, it is not a machine for knowing. It is a machine for conversation. Elena Esposito, a sociologist of cultural processes at Bielefeld and Bologna, puts the problem in its exact context in her decisive book, Artificial Communication: How Algorithms Produce Social Intelligence. What matters is not understanding machine intelligence but rather its function in communication, which is what produces sociality. Pressing on the same interpretive vein is Simone Natale, media historian in Turin and author of Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. Natale takes a sharper angle on the illusory nature of the machine, which presents itself as capable of responding to a need for knowledge while it is nothing more than simulation. An interpretation especially apt for the ChatGPT crisis.

Until now, the debate about society adapting to artificial intelligence had focused on ethics. But the new problem is epistemological.

But the relaunch is already in place. The issue leaves the playful terrain of fun interaction with a chatbot and becomes serious when analysing the next incarnation of the technology. Just as enthusiasm for ChatGPT is dying down—its answers were simply elaborations on a monolithic body of data that stopped being collected in 2021—its creators at OpenAI, allied with Microsoft, have produced a new version, GPT-3.5, capable of updating itself with Bing search engine results. Cognitively, this is a huge leap. Where before search engines returned a series of links, leaving it up to users to choose what to make of them, today the engine could also add an elaborated answer that chooses an angle from which to summarise what was found. "It's all about interface", says David Weinberger, a philosopher of knowledge. "Assuming the hallucinations are contained, I worry that the new chat entry becomes a kind of disembodied oracle. The series of links returned by the old search engine helps expose the nature of knowledge, multiple, never stabilised". If the engine with the new chat technology instead returned a text with its own interpretation of what it found, rather than leaving the interpretation to users, it would end up impoverishing knowledge.

The debris left behind by the ChatGPT wave changes the perspective. Until now, the debate about society adapting to artificial intelligence had focused on ethics. But the new problem is epistemological. The question is no longer just what is right to do with artificial intelligence; the new question is what is the value of knowledge communicated by artificial intelligence? If the verisimilitude machine that is the new version masquerades as documented knowledge of reality, it risks generating not a wave of enthusiasm, but a tsunami of illusions.





To learn more, also check out the OECD's work on Technology & Innovation and the OECD Reinforcing Democracy Initiative.

