Expert views about missing AI narratives: is there an AI story crisis?

XR Stories research fellow Dr Jenn Chubb, who works on responsible and ethical science communication and storytelling, explores the stories told about Artificial Intelligence (AI). The topics examined are drawn from research undertaken by Dr Chubb with co-authors Dr Darren Reed and Peter Cowling, published in the journal AI & Society.

Stories are an important indicator of our vision of the future. They can alter and reframe our expectations around technology development. Stories about AI have a long history, explored in particular through research on AI narratives led by the Leverhulme Centre for the Future of Intelligence and the Royal Society. AI in fiction has been collectively consumed by the public, across generations and cultures. Although the existing literature on non-English-speaking representations of AI is sparse, there is growing interest within academia and beyond because of the impact narratives can have on how the public makes sense of technology. One participant in our research proposed a “story crisis” in which narratives compete to shape the public discourse on AI.

Some of the media coverage of humanoid robotic artist Ai-Da, who presented to the House of Lords in October 2022, exemplifies the ways in which AI is depicted in society – as embodied, gendered and just a bit creepy. Ai-Da was alleged to have ‘fallen asleep’ when ‘she’ powered down. Experts say that we really need to be hearing from the roboticists, not the robots. 

Dominant stories are polarized between solutionism and threat, wherein AI is either a ‘silver bullet’ that solves intractable ‘political problems’ or a matter of ‘embodied robots taking our jobs’.

Similarly, science fiction, designed to provoke the imagination, influences current decision-making, which in turn affects the future. This is seen in policy decisions and robot development. There are benefits to science fiction’s influence: such stories can help us think ahead and anticipate new technologies, question why we might want them, and ask ‘what if’ to challenge the possible.

Stories are now being created by AI. GPT-3, OpenAI’s language generator, produces remarkably human-like text. It can generate any kind of text, any kind of story. Often these stories are heavily subject to hype. Take, for instance, the story of AI sentience and LaMDA, Google’s conversational AI. In this example, Blake Lemoine, a Google engineer, came to believe that the company’s AI had come to life:

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.

This story received so much attention because it was the stuff of science fiction. Most academics and AI practitioners, however, say that systems such as LaMDA generate responses based on what humans have already posted online: they are mimicking learned patterns, not experiencing anything. With so much human-produced data in the world to draw on, AI does not need to be sentient to feel real.
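To make the mimicry point concrete, here is a minimal sketch, added for illustration and not part of the original study. It assumes Python and the open-source Hugging Face transformers library, and uses GPT-2, a small, publicly available predecessor of GPT-3. The prompt text is hypothetical; the point is that the continuation is assembled from statistical patterns in human-written training text.

```python
# A minimal sketch of text generation as pattern completion, not sentience.
# Assumes the Hugging Face `transformers` library (pip install transformers torch).
from transformers import pipeline

# GPT-2 is used purely for illustration: it is small, openly available,
# and was trained on large amounts of human-written web text.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with statistically likely next tokens,
# echoing its training data rather than reporting any inner experience.
result = generator("The robot said it was afraid because", max_new_tokens=30)
print(result[0]["generated_text"])
```

Running this produces fluent but derivative text: the model holds no beliefs, only probabilities learned from its training corpus, which is why its output can feel real without being sentient.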

The central storytellers (big tech, popular media, and authors of science fiction) also represent particular demographics and motivations. Many stories, and storytellers, are missing because attention-grabbing headlines like these dominate, and because AI is often framed as some form of magic or voodoo. Everyday stories are omitted from the media because the ‘everyday’ seems a bit mundane. As one of our participants told us, “no-one likes a boring story”.

Our interviews with experts used these dominant narratives to reveal the untold stories and everyday realities of AI. The stories most regularly referred to included themes of Surveillance and Privacy; Embodied AI and Robots; Solutionism; Job Loss and Automation; Superintelligence and Sci-Fi; and Silicon Valley and Big Tech.


Table 1. Themes and rate of occurrence during interviews (Chubb et al., 2022).*

Experts felt that more responsible and nuanced stories about AI were needed to improve understanding of it. A range of domains are missing or diminished in current narratives, notably creativity, art and culture, and science and education. Table 2 shows the themes that arose and the number of times they were mentioned in interviews with experts.


Table 2. Themes and occurrence of related mentions during interviews (Chubb et al., 2022).*

Our research shows that narratives need to express the social implications of AI, such as exposing inequality. One expert asked:

What do we want from AI? The clearer question is who do we include in that ‘we’. Does it impact upon some groups more than others?

Going forward, it is not simply about telling new stories. Instead, it is about listening to existing stories and asking what civil society collectively wants from AI. What is needed are realistic, nuanced, and inclusive stories, working with and for diverse voices, which consider (1) storyteller, (2) genre, and (3) communicative purpose. Such stories can then inspire the next generation of thinkers, technologists, and storytellers.

To find out more about responsible storytelling and AI, contact the authors, XR Stories fellow Dr Jenn Chubb and Dr Darren Reed, and read the full paper (Chubb et al., 2022). Jenn also gave a public lecture about responsible AI narratives, which you can watch on YouTube.

*No changes were made to the tables as published in AI & Society. DOI: 10.1007/s00146-022-01548-2

Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0


Published on 7 November 2022
