XR Stories research fellow Dr Jenn Chubb, who works on responsible and ethical science communication and storytelling, explores the stories told about Artificial Intelligence (AI). The topics examined are drawn from research undertaken by Dr Chubb with co-authors Dr Darren Reed and Peter Cowling, published in an article in the journal AI & Society.
Stories are an important indicator of our vision of the future. Stories can alter and reframe our expectations around technology development. Stories about AI have a long history, explored in particular through research on AI narratives led by the Leverhulme Centre for the Future of Intelligence and the Royal Society. AI in fiction has been collectively consumed by the public, across generations and cultures. Although the existing literature on non-English-speaking representations of AI is sparse, there is growing interest within academia and beyond because of the impact narratives can have on how the public make sense of technology. One participant in our research described a “story crisis” in which narratives compete to shape the public discourse on AI.
Some of the media coverage of humanoid robotic artist Ai-Da, who presented to the House of Lords in October 2022, exemplifies the ways in which AI is depicted in society – as embodied, gendered and just a bit creepy. Ai-Da was alleged to have ‘fallen asleep’ when ‘she’ powered down. Experts say that we really need to be hearing from the roboticists, not the robots.
Dominant stories are polarised between solutionism and threat: AI is framed either as a ‘silver bullet’ that solves intractable ‘political problems’ or as ‘embodied robots taking our jobs’.
Similarly, science fiction – designed to provoke the imagination – influences current decision making, which in turn affects the future. This is seen in policy decisions and robot development. There are benefits to science fiction’s influence. Such stories can help us think ahead and anticipate new technologies; they also help us question why we might want new technologies, and make us ask ‘what if’ to challenge the possible.
Stories are now being created by AI. GPT-3, OpenAI’s language generator, creates human-like text and is incredibly good at doing so. GPT-3 can generate any kind of text, any kind of story. Often these stories are subject to considerable hype. Take, for instance, the story of AI sentience and LaMDA, Google’s conversational AI. In this example, Blake Lemoine, a Google engineer, thought that the company’s AI had come to life:
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.
This story received so much attention because it was the stuff of science fiction. Most academics and AI practitioners, however, say that systems such as LaMDA generate their words and images based on what humans have already posted online. It’s just mimicking and learning. In this instance, because there is so much data out there in the world, AI does not need to be sentient to feel real.
The central storytellers – big tech, popular media, and authors of science fiction – also represent particular demographics and motivations. Many stories, and storytellers, are missing because attention-grabbing headlines of this kind dominate, and because AI is often framed as some form of magic, or voodoo. Everyday stories are omitted from the media because the ‘everyday’ is a bit mundane. As one of our participants told us, “no-one likes a boring story”.
Our interviews with experts used these dominant narratives to reveal the untold stories and everyday realities of AI. The most regularly mentioned stories included themes of surveillance and privacy, embodied AI and robots, solutionism, job loss and automation, superintelligence and science fiction, and Silicon Valley and Big Tech.