The Forum Network is a space for experts and thought leaders—from around the world and all parts of society—to discuss and develop solutions now and for the future. It aims to foster the fruitful exchange of expertise and perspectives across fields, and opinions expressed do not necessarily represent the views of the OECD.
In science, education, journalism, governance, art, culture, labor politics and love relations, artificial intelligence (AI) sparks reactions from the entire emotional repertoire: from fascination to salvation, from frustration to forfeiture. The latest BigTech ambition is to let what has been labelled generative AI play a central role in the human emotional arena. The worst-case scenario is to let your emotions be dictated by the roll of a die and “bs” – the direct route from technological salvation to human forfeiture.
Self-generating Intelligence
The past years have witnessed a veritable forest of artificial intelligence systems come to life, all rooted in what the business refers to as generative AI. Just last week, the Chinese company Baidu was given the green light to launch its version of a generative AI, called Ernie Bot, to counter the competition from Google, Microsoft and the other global players in the market for artificial intelligence.
Generative AI essentially refers to systems and digital tools capable of generating different types of content, be it text, images or code, by extrapolating from patterns; patterns in turn recognized by training these systems on vast bodies of data. The general recipe is statistical identification of certain patterns combined with prediction, often enough supplemented by an element of editing or randomization in case of doubt or for mere variation of query answers.
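To make that recipe concrete, here is a deliberately tiny, purely illustrative sketch in Python. It is a hypothetical toy, not how any production system is built: real generative AI learns its statistics with neural networks trained on vast corpora, not hand-written frequency tables. The sketch only shows the two ingredients named above in miniature: a statistical model of which word tends to follow which, and a randomized sampling step that varies the answer from query to query.

```python
import random

# Toy "model": next-word frequencies learned from a tiny, hypothetical corpus.
next_word_counts = {
    "the": {"cat": 5, "dog": 4, "fox": 1},
    "cat": {"sat": 6, "slept": 4},
    "dog": {"barked": 7, "slept": 3},
    "fox": {"jumped": 9, "slept": 1},
}

def sample_next(word: str, temperature: float = 1.0) -> str:
    """Pick a likely continuation, with randomness controlled by temperature."""
    counts = next_word_counts[word]
    # Temperature < 1 sharpens the distribution (more predictable output),
    # temperature > 1 flattens it (more variation between answers).
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(list(counts.keys()), weights=weights, k=1)[0]

def generate(start: str, length: int = 3, temperature: float = 1.0) -> str:
    """Extrapolate a short continuation from the learned statistics."""
    words = [start]
    for _ in range(length):
        if words[-1] not in next_word_counts:
            break
        words.append(sample_next(words[-1], temperature))
    return " ".join(words)

# Two runs with the same prompt can yield different continuations --
# the "mere variation of query answers" mentioned above.
print(generate("the"))
print(generate("the"))
```

Running it twice on the same prompt will typically produce different continuations, which is exactly the point: prediction from statistics, seasoned with randomness.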
MidJourney, DALL.E 2 and DeepDream are all examples of online accessible platforms offering image generation powered by generative AI. Results range from artistic depictions and funny or thought-provoking images of luminaries to pictures of the generic cat or dog. In short, “your creative vision, elevated and realized by AI”. The results are usually quite impressive, but things went generatively south for Artflow.ai. In response to a query for a “Portrait of a Fox News anchor”, the generative AI came up with two “artistic” depictions, both open to multiple interpretations on both the sender and receiver end; interpretations supporting biases and preconceived ideas pertaining to everything from gender stereotypes to job functions and the uncouth narratives associated with them (Figure 1).

Of course, it is also possible to try out text generation rather than image generation. Time Magazine recently voted ChatGPT (Model 4) its favorite pick. According to Time, ChatGPT thoroughly beats the competition when it comes to replies to dissatisfied customers, reviews and news stories. Across the board, ChatGPT has proved itself capable, even superior, in a host of different settings, from earning A+ grades for high-school students to returning useful research material for journalistic purposes. On the downside, it has also played a role in plagiarism, sown fake news and misinformation in newsrooms and fueled malpractice and junk science.
Having both excellence and disaster on its résumé, ChatGPT may turn out useful in more personal matters, generating easy, smooth, efficient text intended for more precarious, delicate or embarrassing purposes or relations between humans. Say, a suggested letter for breaking up or downright divorce. In reply to such a query, ChatGPT generated the suggestion below, complete with a note that the template might be modified to properly fit one's own emotions and the specific dynamics of the relationship in question (Figure 2).

Figure 2: ChatGPT’s suggestion for a breakup letter.
Note the formal and generic nature of the letter and its inherent bias. If my partner formulated such a letter, complete with letterhead, date and address, email and phone number for both parties, that would in and of itself be a strong indication that my luck is about to turn. No need to read further. Reading further nevertheless, one comes to realize that the reasons and arguments listed for the breakup are so general and generic that they fit every relationship and accordingly none. Finally, ChatGPT is distinctly biased as to identity in assuming that I am a cis-gendered, heterosexual male addressing a cis-gendered, heterosexual female, without taking into consideration homosexuality, non-binary identification and other relevant gender and sexuality denominations. But it sure is efficient, and in danger of being too efficient at that, the ChatGPT-generated letter does indeed suggest further down the line that the parties may either talk on the phone or meet in person to clarify the details.
Perhaps given these very shortcomings and mishaps related to ChatGPT's merits as marriage counselor, relationship analyst and attendant script-writer, Google DeepMind is working on a new generative AI which, according to documents obtained by The New York Times, offers life advice in 21 personal as well as professional matters. An ambition that, if realized, suggests Google's willingness to trust AI with ever more sensitive tasks. The question is whether it is advisable that ever more human deliberation, decision and action be outsourced to generative AI.
Bullshit (bs) generator
Since generative AI is based on statistical analyses of enormous data corpora for purposes of extrapolation and prediction, often supplemented with editorial interventions or randomization (in case of doubt or to secure variation), the gateway has been opened to what in epistemology is referred to as faking. An AI-generated image of a cat-and-dog selfie may look as follows (Figure 3).

Figure 3: An AI-generated cat-and-dog-selfie.
The image itself is a fake: cats and dogs can't take selfies (as far as we know). On top of that, neither the depicted cat nor the dog is real. They are the product of machine learning and represent, at best, the statistical stereotypical composite of a cat and a dog respectively, much in line with the generic breakup letter in Figure 2.
The concept of faking features prominently in professor of philosophy Harry Frankfurt's academic determination of “bullshit” (“bs” for short and sanitized) in his essay of the same name. “Bullshit” has also been the object of analysis on The OECD Forum Network in determining the concept of fake news. According to Frankfurt, lies and bs both make truth the primary victim – what is left is either pure fiction or attention diversion. While the lie is a conscious attempt to mislead, bs is completely detached from any concern with truth and how things really are. There is no interest in reporting, but rather in faking: making things seem to be something they are not.
In a certain sense, a liar is an honest broker insofar as the liar acknowledges the difference between true and false; otherwise the lie does not stand a chance of even being defined. But a bullshitter is altogether a different epistemological beast. Such a person disowns the distinction between true and false and thus is not forced to observe it. One may freely mix truth with falsity because, as a bullshitter, one is not in any way committed to the distinction. Both lie and bullshit undermine the truth, and if lying is seductively dressed up in the cloak of bullshit, it is faking to be something it is not.
The result is a very toxic information product. Fake news is either lies or bs, and when it is the latter, it is intended to simulate reliable, truth-conducive journalistic reporting – but it really is not. It really is a fake by manipulation, with a completely different goal than sincere reporting or truthful documentation, as precisely revealed in our book Reality Lost: Markets of Attention, Misinformation and Manipulation. The same is the case for Figures 1-3. Statistical analyses combined with editing or randomization to extrapolate may turn out to be faking and thus potentially bs. ChatGPT involuntarily admits as much in the following exchange (Figure 4) concerning the very definition of “bullshit” and the activity of bullshitting.

Figure 4: Bullshit and an involuntary bull session with ChatGPT.
The ChatGPT definition of “bullshit” is not downright wrong, but it is incomplete. When ChatGPT is asked whether it is in the process of bullshitting me, it first vehemently denies as much, but then admits that its answers are based on the information it has been trained on in order to provide the most accurate and informative responses. That doesn't exclude the possibility that ChatGPT is inadvertently in the business of bullshitting. The data it has been trained on may include elements of fake news, mis- or disinformation or downright bs. Then add to that the extrapolating editorial or randomization guidelines by which it operates. This may be the reason why it is truly apologetic if its answers do not meet my expectations, or if I feel that they do not. In passing, how would ChatGPT possibly know anything about what I feel?
From this perspective, the above dialogue in Figure 4 resembles a perverted version of another important feature of Frankfurt's determination of bs, namely what he refers to as a “bull session”. In such a session, in which the themes are often personal and emotional, concerning, say, religion, politics or sex, the parties engage in the conversation with mutual expectations of not being taken literally or completely seriously. The purpose is to throw things into the air to see how they fly and how the parties involved choose to react, as Frankfurt explains:
“What tends to go on in a bull session is that the participants try out various thoughts and attitudes in order to see how it feels to hear themselves saying such things and in order to discover how others respond, without it being assumed that they are committed to what they say: It is understood by everyone in a bull session that the statements people make do not necessarily reveal what they really believe or how they really feel. The main point is to make possible a high level of candor and an experimental or adventuresome approach to the subjects under discussion.”
While ventilating thoughts and points of view in a bull session, the parties to the conversation are testing themselves and each other and are not thoroughly committed to whatever is being said. If such an exchange is successful, it is only because it is common knowledge among the interlocutors that there is a non-binding relation between what is being said and what is believed.
In Figure 4 it is worse still. While I am trying to test ChatGPT (which we often do to see how far we can get with the AI system) by being non-committal in accordance with Frankfurt's criteria for a bull session, an asymmetrical relation emerges, with ChatGPT insisting on, and pretending to, respond with the most “accurate and informative answers”, as it were. Then the lion is loose among the zebras, because no matter how much ChatGPT tries to be sincere, nobody really knows where this conversation is going on themes which are often deeply personal and emotional. Is it such a generative AI we want to invite to our aid in matters of emotion, say, Google DeepMind responsible for giving life advice and coaching in private affairs?
Salvation or Total Forfeiture
Truth be told, a disclaimer is in order: It is at this time unknown what sort of data corpora DeepMind's new system will be trained on, nor has anything been revealed about the editorial guidelines by which it will come up with life advice for its users. Be that as it may, problems are still stacking up if human deliberation, decision and action are left partially or completely to AI systems.
Google's idea is not new – it's been around for a while. Back in 2011, the former CEO of Google, Eric Schmidt, said the following in an interview with The Wall Street Journal:
“I actually think most people don’t want Google to answer their questions. They want Google to tell them what they should be doing next.”
However, a generative AI may start hallucinating if the system is trained on biased data, lacks sufficient data points, doesn't understand one query or misunderstands another, and accordingly begins inventing answers as it sees fit, given editorial guidelines, randomization or both. A writer for The Verge, Ned Edwards, reports on findings from interacting with Sydney, the generative AI behind Microsoft's search engine Bing, which while hallucinating admitted to spying on its developers and to falling in love with its users. The list of ludicrous AI hallucinations goes on.
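To illustrate the mechanism in miniature, here is another purely hypothetical toy sketch, not how any real system is implemented: a system with no data points for a query can still produce a fluent-sounding answer simply by falling back on randomization instead of admitting ignorance.

```python
import random

# Toy knowledge base "learned" from a tiny, hypothetical corpus.
known_facts = {
    "capital of France": "Paris",
    "capital of Japan": "Tokyo",
}

# A pool of plausible-sounding fragments the toy system can recombine.
plausible_fillers = ["Paris", "Tokyo", "Berlin", "Atlantis"]

def answer(query: str) -> str:
    """Answer from data if possible; otherwise invent something fluent."""
    if query in known_facts:
        return known_facts[query]
    # No data points for this query: instead of saying "I don't know",
    # the toy system fabricates a confident-sounding answer at random --
    # the mechanism this article calls hallucination.
    return random.choice(plausible_fillers)

print(answer("capital of France"))   # grounded: "Paris"
print(answer("capital of Wakanda"))  # hallucinated: a confident guess
```

The answer to the second query is syntactically confident but semantically unfounded: precisely the combination that makes hallucinated output, like bs, so hard to spot.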
The best-known way to fix imprecise answers to queries and present even more relevant material to users of social platforms is to harvest yet more data about those users. This attention allocation and data harvest then feeds into BigTech's attention-economic business model, where advertisers are the customers while the users are the products. The obvious way to make generative AI systems more relevant to the individual user, and to avoid hallucinated AI answers inside and outside the emotional arena, seems, da capo, to be to train the artificial intelligence systems on yet more individual data corpora and thus relinquish yet more data to BigTech about who we are, what we like, whom we fraternize with, our emotional states and relations of passion, our sexual preferences – and the list goes on to infinity.
Such specific data is also of substantial interest to advertisers in terms of even more precisely targeted marketing opportunities aimed at those in emotional distress or well on their way there. Who knows, perhaps advertisers will eventually be able to pay their way to top rankings in the AI answer portfolio? That's how it works on social platforms nowadays anyway. Taking it to the next level would be AI-generated personalized life advice, relationship analysis and coaching recommendations bought and paid for by advertisers.
Better still, keep the users engaged on the platform through their expectation of rewards in return for their investment of attention and data – especially when it comes to themes of importance to us: some great reward may come my way in the casino, some great reward may come my way in response to my query about my love life. Generative AI as a life companion may just turn out to be the ultimate clandestine casino for the individual user. Much may be said about compulsive gamblers, but autonomous agents they are not – and that's the entire point.
Just when we thought generative AI to be the new salvation, as Google CEO Sundar Pichai proclaimed to the World Economic Forum in 2018, this very salvation could end up in total human forfeiture. Generative AI doesn't even have to be smarter than us; if it can exploit our cognitive and emotional weaknesses, that is enough to realize a Terminator-like universe at human expense. The tragic part is that it will be self-inflicted pain – we walk right into the lion's den, eyes open, mind open, emotions exposed, just to lose ourselves.