Graeme Forbes

Artificial Intelligence: Choose Your Own Adventure

Why AI Is Not a Collaborator


Illustration of several people on smart phones connected by lines and dots
Jamillah Knowles & Reset.Tech Australia / Better Images of AI / Social media content / CC-BY 4.0

Artificial Intelligence, at least in the form of Large Language Models (LLMs), has rapidly changed how we interact with computers. Before, computers were machines that we gave instructions to. Now it seems like we can have conversations with them.


If we can have conversations with them, it sounds like we can collaborate with them, rather than merely commanding them. Those seem like really different relationships. When I command that something be done, I am responsible for what I commanded, and I'm to some extent responsible for making sure my commands were interpreted and followed properly. When we collaborate, the responsibility starts to be shared between us.


When we collaborate, it looks like I have to split the credit.[1] Do we collaborate with LLMs? Here are three reasons to think that we don’t.


Pretending to collaborate

Philosopher Fintan Mallory argues that chatbots are make-believe. The idea that there is a conversationalist we converse with when we interact with an LLM is a fiction. There is no collaborator on the other side of our ‘conversations’ with LLMs; we just pretend there is to make sense of the interaction.


Mallory argues that we need to be able to make sense of the way we interact with LLMs; it really seems like we’re having conversations. We also need to be able to make sense of the fact that we’re learning things from them. So, there is some pressure to accept that the strings of characters they spit out mean something. The solution, he argues, is that we treat our interactions with LLMs as ‘prop-oriented make-believe’. It’s like a child having a tea-party with their dolls, except the dolls can respond in astonishingly sophisticated ways.


Frigidaire poetry

Why not go further, and say that the LLM gets the credit for its end of the conversation? It depends on whether you think the LLM is a contributor to the conversation or something else.


A helpful point of comparison is with Dominic McIver Lopes on ‘Frigidaire poetry’. If you buy one of those packets of words on magnets, often sold from gift shops in museums, you too can become a poet! You just(!) have to rearrange the specially chosen list of words, and something profound and moving will emerge. But the magnetic words aren’t co-authors of the poem, Lopes argues, they are a ‘work-generator’. They are a tool used to generate art, not a collaborator. And what goes for art here can be applied to meaning. The LLM is a meaning-generator, not an author. Granted, it is quite a sophisticated generator of meaning, but not sophisticated enough to count as a collaborator.


Choose your own adventure

Why doesn’t an LLM count as a collaborator? Mainly because whoever trained the LLM did all the collaboration in advance. The LLM is just a structure that you navigate.


Computer scientists Murray Shanahan, Kyle McDonell and Laria Reynolds argue that we should think of LLMs as engaged in roleplay. Where Mallory suggests that we are making believe that we are having a conversation, Shanahan, McDonell and Reynolds argue, as a metaphor at least, that we should think of the LLM as pretending to be a single character when in fact it is a disembodied neural network. They caution readers against the tendency towards anthropomorphism, and the idea that the LLM is engaged in roleplay is meant to distance us from the idea that we’re in conversation with a real character. As far as that purpose goes, Mallory’s suggestion is better. We’re pretending the LLM is in a conversation; the LLM isn’t anywhere near human enough to pretend.


Image of The Cave of Time book by Edward Packard
From Wikipedia entry for "Choose Your Own Adventure"

The idea of roleplay is useful, though. Roleplay games are examples of interactive fiction: the fictional story gets made up based on what the players do, and they often involve a group of people collectively and collaboratively making a story. Even if it’s us, and not the LLM, engaging in roleplay, we can ask the question: ‘Is a conversation with an LLM like a roleplaying game?’.


Interactive fiction comes in different types. Multiplayer computer games and tabletop role-playing games do seem to involve collective collaboration to tell a story. But when it comes to interactive fiction, LLMs resemble something slightly different.


They resemble ‘choose your own adventure’ novels, like Edward Packard’s The Cave of Time. In those books, you would make a decision at the end of a page and be directed through the book along one of a number of different unfolding plots. If you read the pages in order, it would make no sense, but if you navigated the structure in various ways, you could get over 40 different stories.


LLMs are like a crazy-big choose your own adventure novel. They are a structure that exists before you show up, trained so that it encodes the most common routes through it, and the prompts you type in are all choices about how to navigate that structure.

The reason they don’t count as collaborators is that the structure isn’t changing based on your responses; you are just choosing how to navigate the structure that already exists. The next word you are given is probabilistic. So it’s like a choose your own adventure novel where you roll some dice to see where your decision takes you. But the dice aren’t collaborating with you any more than the book is. All the collaboration was done before you showed up, by the person, Edward Packard or whoever, who made the structure and the way it gets navigated.
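The dice-and-book picture can be made concrete with a toy sketch. This is a hypothetical illustration, not how any real LLM is implemented: a small, fixed table of next-word probabilities stands in for the trained structure, and the ‘reader’ merely rolls the dice to pick a route through it.

```python
import random

# A fixed "structure" built before the reader shows up: for each word,
# the possible next words and their probabilities. A toy stand-in for
# an LLM's learned weights; the reader cannot change this table.
STRUCTURE = {
    "you": [("enter", 0.6), ("see", 0.4)],
    "enter": [("the", 1.0)],
    "see": [("a", 1.0)],
    "the": [("cave", 0.7), ("forest", 0.3)],
    "a": [("dragon", 0.5), ("door", 0.5)],
    # Terminal words: the story ends here.
    "cave": [], "forest": [], "dragon": [], "door": [],
}

def navigate(start):
    """Roll the dice at each step to choose a route through the fixed structure."""
    word, story = start, [start]
    while STRUCTURE[word]:
        words, probs = zip(*STRUCTURE[word])
        word = random.choices(words, weights=probs)[0]
        story.append(word)
    return " ".join(story)

print(navigate("you"))  # e.g. "you enter the cave"
```

Nothing in `navigate` adds to the structure; every possible story was fixed when the table was written, which is the sense in which the collaboration happened before you showed up.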


We care about collaboration because it involves sharing responsibility, credit and blame. But given that LLMs don’t involve collaboration, except with whoever trained them, the LLM itself isn’t the right kind of thing to get the credit or take the blame.


About the Author

Graeme A Forbes is a freelance philosopher and Honorary Senior Lecturer at the University of Kent, UK. He is the author of Philosophy of Time: The Basics, in press with Routledge, and The Growing Block View: Philosophy of time, change and the open future, forthcoming with Bloomsbury.


[1] The ideas for this blog post were developed in collaboration with Andrew Laing.


This blog and its content are protected under the Creative Commons license and may be used, adapted, or copied without permission of its creator so long as appropriate credit to the creator is given and an indication of any changes made is stated. The blog and its content cannot be used for commercial purposes.


