Any attempt to unravel, mimic or enhance consciousness is an ethical minefield. If an artificial system is conscious, does it have rights and responsibilities, emotions, a sense of free will? Would humans have the right to switch off or kill such a sentient being if we felt threatened?
Running through these troubling questions is an assumption that consciousness isn’t an all-or-nothing phenomenon but comes in degrees. We feel far more remorse killing a dog than a cockroach because we suppose that a cockroach isn’t all that conscious anyway and probably has no sense of self or emotions.
But how can we be sure? And in the highly charged debates about abortion, euthanasia and locked-in syndrome, the actual level of consciousness is usually the critical criterion.
Without a theory of consciousness, however, it is impossible to quantify. Scientists haven't a clue exactly what feature of neural activity it is that supports conscious experience.
Why do the electrical patterns in my head generate sentience and agency, whereas the electrical patterns in Ausgrid NSW don’t? (At least, I don’t think they do.) And we all accept that when we fall asleep, our consciousness is diminished, and may fade away completely.
A few years ago, the neuroscientist and sleep researcher Giulio Tononi at the University of Wisconsin proposed a mathematical theory of consciousness, known as integrated information theory, based on the way information flow is organised (roughly, the arrangement of feedback loops). In principle, it allows a specific quantity of consciousness to be assigned to various physical states and systems.
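To give a flavour of the idea, here is a minimal sketch of a crude "integration" score: how much information a whole system carries about its own immediately preceding state, over and above what its parts carry in isolation. This is only a toy, not Tononi's actual formalism; the two-node network, its update rule and the chosen partition are illustrative assumptions, not anything from his theory.

```python
# Toy illustration only: a crude "integration" score in the spirit of
# integrated information theory, NOT Tononi's actual phi. The two-node
# network, its update rule and the partition are assumptions made up
# for this sketch.
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """Mutual information, in bits, between the two coordinates of a
    uniformly weighted list of (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two binary nodes that each copy the other's previous state: a feedback loop.
def step(state):
    a, b = state
    return (b, a)

past_states = list(product([0, 1], repeat=2))  # uniform over (0,0)..(1,1)

# How much the whole system's present state says about its past.
whole = mutual_information([(s, step(s)) for s in past_states])

# How much each node, taken in isolation, says about its own past
# (marginalising out the other node).
part_a = mutual_information([(s[0], step(s)[0]) for s in past_states])
part_b = mutual_information([(s[1], step(s)[1]) for s in past_states])

integration = whole - (part_a + part_b)
print(f"whole = {whole:.1f} bits, parts = {part_a + part_b:.1f} bits, "
      f"integration = {integration:.1f} bits")
# Prints: whole = 2.0 bits, parts = 0.0 bits, integration = 2.0 bits.
# Each node alone predicts nothing about its own future, yet the coupled
# pair is perfectly predictable: the information lives in the loop.
```

Tononi's real theory is vastly more elaborate (it searches over all possible partitions, among other things), but even this cartoon captures the moral: a system whose parts are individually uninformative can still, as a whole, be richly self-informing.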
Is a thermostat conscious? A dish-brain? ChatGPT? A lobster in a boiling pot? A month-old embryo? A “brain-dead” road accident victim? These vexatious examples might be easier to confront, and to legislate about, if we really understood the physical basis of consciousness.
The foregoing advances, while promising, tell us little about the subjective experiences that attend conscious events, such as the redness of red, the sound of a bell, or the roughness of sandpaper: sensations that philosophers call qualia.
How can we tell if an agent really has an inner life experiencing such qualia, or is just an automaton, a zombie, programmed to respond appropriately to sensory input, for example, by stopping at a red traffic light without actually “seeing red”?
And if one cannot tell from the outside what is going on inside, why does this inner subjective realm exist in the first place? What advantage does it confer in that great genetic lottery called Darwinian evolution? Even if we create a truly conscious AI, that final problem may lie forever beyond our ken.
Paul Davies is Regents’ Professor of Physics at Arizona State University and author of over 30 books, including The Demon in the Machine. He will be speaking at the Sydney Opera House as part of the Your Brain on AI event on August 17. Tickets: sydneyoperahouse.com