Everyone is talking about AI right now. A popular topic of conversation is whether AI will ever become conscious. But what exactly do we mean by consciousness?
Consciousness is hard to define. To put it loosely, it’s a state of awareness of the “self.” It allows us to solve problems by thinking (rather than by trial and error), and to create a narrative about our lives through memory and language.
But the problem with defining consciousness in a broad way is that we can only ever understand it through the lens of the human experience. We don’t really know what it means to be a conscious non-human because we simply cannot imagine the way non-humans experience the world.
Does a rat know it’s a rat?
Scientists and philosophers have struggled with the question of animal consciousness for centuries. This struggle has led to a whole subfield of science, in which researchers look for evidence of consciousness in non-humans through observation and experimentation. But what does this evidence look like? How do we measure something so difficult to define?
There are “markers” of consciousness that we can look for. These are specific things that we think contribute to our own conscious experiences. They include the experience of pain, the ability to remember specific events, being self-aware (e.g., recognising that your reflection is you), and feeling emotions. Many animals seem to meet one, some, or all of these criteria, and every year we find more species that show these markers, overturning our previous assumptions about which animals might be conscious. For example, for a long time we thought that insects could not feel pain, but just last week a paper (not yet peer-reviewed) suggested that bumblebees are likely to feel pain.
Is AI as clever as a rat?
A trend I’ve started to notice recently – in published articles, podcasts, and forums – is the comparison of AI’s “complexity” to that of living creatures. Usually – because humans are a little self-obsessed – this comparison starts with us, with questions such as “Will AI achieve human levels of consciousness?” But, on concluding that AI systems are currently nothing like our own brains, the question often turns to animals: Is AI as clever as a monkey? Is it as clever as a rat? What about an insect? Or a worm?
This popular line of questioning has ethical implications for non-human animals. Asking whether AI is as complex as non-human animals reinforces the idea that animals and machines can be grouped together in some way. Indeed, in a podcast I listened to recently, an AI scientist was asked if AI will ever become conscious. They answered that if machines became conscious, their consciousness would be different to human consciousness in just the same way as animal consciousness is different.
I agree with this in technical terms: we cannot imagine any consciousness that isn’t our own. But I also think this is dangerous territory. The language of this answer creates, intentionally or not, two categories of consciousness: an “us” (humans) and an “other” (non-humans, including machines).
A troubling history
For centuries, animals were thought of as cleverly designed machines: non-conscious, unfeeling, and capable of responding to the environment using only pre-programmed rules. This perspective led people to believe that animals could not suffer, and so animals were granted few or no rights or protections.
Only with dedicated research have we been able to show that non-human species, from birds to monkeys, and insects to pigs, experience the world in ways that suggest they are conscious. This evidence means that many species have been granted rights that they previously lacked.
A recent example of this is the extension of the UK’s Animal Welfare (Sentience) Bill, which now covers decapod crustaceans (e.g., crabs and lobsters) and cephalopods (e.g., squid and octopuses). This means that these animals are now formally recognised as sentient in UK law, which paves the way for them to be legally protected from painful practices such as being boiled alive.
But our perspectives on animals, our empathy toward them, and the rights we grant them evolve slowly, moving forward millimetre by millimetre as we accumulate more and more evidence that various species show markers of consciousness. Mountains of evidence, communication, and lobbying are required to bring about any meaningful change in animal rights. Unfortunately, many people still see animals as unfeeling, unthinking, and unworthy of empathy. We reinforce these perspectives when we talk about animals as if they are “less complex” than humans or “less conscious” than us, or imply that they are more similar to machines than to us.
The power of words
Language is powerful, especially in areas as hard-fought and delicate as animal rights. It might seem unlikely that a brief comment associating machines and animals could have any measurable impact on our ideas about animal minds, but time and again we see the power words have in shaping our perception of the world. If it turns out that machines can become conscious, they will be conscious in their own unique way. Not like a human, or a whale, or a rat. Like a machine.
There is no value judgment here, and no suggestion that conscious machines (if they ever come to exist) would not deserve rights and protections in the same way as humans and non-human animals. The suggestion is simply that we must be careful with our language. In the past, comparing animals to machines gave permission, both in personal interactions and in the eyes of the law, for animals to be treated as unfeeling, unthinking objects.
Being careful with our language seems like a small price to pay to ensure that we don’t backtrack on all the progress we’ve made in understanding and extending empathy to animals.