A Rhythm in Notion
Small(er) Steps Toward a Much Better World

Machines and Consciousness

The Evolution of Zombies

A disturbing number of thinkpieces on whether AIs will ever be conscious start off, explicitly or implicitly, with something like:

of course, all of us humans could do everything we do with zero consciousness, so it’s hard to explain why we have it. Machines are just that, machines, and no matter what human tasks they take over, what feats they match, they will always remain wires and electricity, nothing more.

The humans-as-zombies assumption here does a lot of heavy lifting for the dead-machines conclusion. Beyond thinkpiece writers, my impression is that most people think this way too.

Yet all available evidence undermines the unquestioned assumption that humans could operate without consciousness. Every single functional example of high general-purpose intelligence that we know of exhibits consciousness, and the more intelligence, the more consciousness. By this I mean our fellow animals.

After all, by the p-zombie argument, some of the other mammals might have failed to develop consciousness. Dogs might have developed as efficient hunting machines, herding robots, guarding functionaries, with no emotions or will of their own on display. Indeed Descartes said dogs did not have souls and therefore were not conscious. Vivisect a dog without anesthesia, he said, and its howls and whining are mere mechanical responses.

Of course we cannot know for sure that dogs are awake, just as we cannot know about other humans, but there’s no justification for believing either humans or dogs lack consciousness.1 Anyone who has lived with a dog and experienced its full personality knows that.

Mammals are an easy case, but take other examples, say birds. Even in birds, the more intelligent ones don’t merely accomplish their goals with greater ease. An African Grey parrot has more opinions, and expresses a greater range of feelings, than does a starling (which is annoyingly full of opinions as it is).

Perhaps evolution can develop an intelligent agent without consciousness, but so far it never has, or not that we can tell. There is not the faintest hint anywhere in nature that consciousness is some purely optional add-on to agentic intelligence. To all appearances, the two have always gone hand-in-hand.2

Now, it is easy to imagine a p-zombie. Perhaps that is why they are so popular as a foundation for an argument. But that says nothing about whether they are empirically possible. Nothing I have said so far rules them out as a logical possibility, but the empirics do imply something about their biological - even computational - possibility. At the least, before you carry on deducing what will and won’t happen with machines, you should bring some empirical support to the p-zombie assumption.

A Very Stupid Genius

True, our current AI programs do demonstrate that intelligence and consciousness are not matched one-to-one. AlphaGo is much more intelligent in its domain than any human, in a game that most experts thought computers would need decades more to master.

The general rule is that if something can only be done by a human, it represents our uniqueness, our ineffable soul, the je ne sais quoi of the human spirit, while if a machine can do it then it’s mere calculation. Chess was a marvel of the lucidity of human reason, deviousness and will and guts on display, until the machines outclassed our best. Now Hollywood hopes that stories and love make us special.

In contrast to the general rule, I think AlphaGo and GPT-3 do embody real intelligence, in a way that tells us something about humans, yet they aren’t awake. Nevertheless, this separation of intelligence and consciousness tells us almost nothing about the relation of higher intelligence and consciousness, because even these AIs are massively dumb.

They have been purpose-built to do exactly one thing in all the world. They excel at one minute, nearly incomprehensibly tiny slice of intelligence. They cannot play chess, and also play music, and dance, and do math, and wash dishes, and tie their shoes.3 Let alone much more difficult tasks such as making someone else angry or happy, or scheming to get their way in a group!4

We humans have roughly as many neurons as there are stars in the Milky Way, and a thousand times as many connections between those neurons. These AIs’ neuronal complexity is pitiful. They do not have the brains of a snail, much less a bird. Of course they don’t have consciousness. To think these AIs demonstrate that computers will never wake up is like pointing at a paper airplane and proclaiming space flight impossible.
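For a sense of the scale gap, here is a back-of-envelope sketch in Python. Every figure is a rough, commonly cited estimate rather than a measurement, and comparing a model’s parameters to synapses is itself only a loose analogy.

    # Back-of-envelope scale comparison. Every figure here is a rough,
    # commonly cited estimate, used for illustration only.
    HUMAN_NEURONS = 8.6e10      # ~86 billion neurons
    HUMAN_SYNAPSES = 1.0e14     # ~100 trillion connections
    MILKY_WAY_STARS = 2.0e11    # estimates run roughly 100-400 billion
    GPT3_PARAMETERS = 1.75e11   # published parameter count for GPT-3

    print(f"neurons vs. Milky Way stars: {HUMAN_NEURONS / MILKY_WAY_STARS:.2f}")
    print(f"connections per neuron:      {HUMAN_SYNAPSES / HUMAN_NEURONS:,.0f}")
    print(f"GPT-3 params vs. synapses:   {GPT3_PARAMETERS / HUMAN_SYNAPSES:.4%}")

Run it and the last line makes the point: even a headline-grabbing model carries on the order of a tenth of a percent of the connective complexity of one human brain.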

Unknown Unknowns Are Hard

I certainly am not saying consciousness is not a hard problem. It is hard, hard and mysterious. The more we pick apart the brain, the more we fail to find the source of consciousness. It isn’t in any part, or in any identifiable collection of parts, and we can’t figure out how or whether it arises as an epiphenomenon from some gestalt of the brain. We still don’t know the most important thing about ourselves: why we are awake.

This fundamental ignorance, despite all our science, must be our starting point for any speculating and philosophizing, but we do have some evidence to point us in the right direction.

We can start off, again, with biology. In general, the larger the animal, the larger the brain needed to run its body. If you want to know how intelligent an animal is, as a very rough rule you simply look at how much extra brain it has left over: the ratio of brain to body. A better measure adjusts further and counts the number of neurons in the forebrain, but then you will find, as expected, monkeys, elephants, and dolphins much higher than most animals, and us humans in a league of our own. Intelligence, computing power, and consciousness observably go together.
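To make that ranking concrete, here is a small Python sketch using approximate forebrain neuron counts drawn from published estimates (Herculano-Houzel and colleagues, among others). The exact numbers vary by study; treat them as illustrative round figures that show the ordering, nothing more.

    # Approximate forebrain (cortical/pallial) neuron counts, in billions.
    # Illustrative round numbers from published estimates, not authoritative data.
    forebrain_neurons_billions = {
        "human": 16.0,
        "chimpanzee": 6.0,
        "African elephant": 5.6,
        "macaque": 1.7,
        "raven": 1.2,
        "dog": 0.5,
        "cat": 0.25,
    }

    for species, n in sorted(forebrain_neurons_billions.items(),
                             key=lambda kv: kv[1], reverse=True):
        print(f"{species:>16}: {n:5.2f} billion forebrain neurons")

Note that raw brain mass would put the elephant first; counting forebrain neurons instead is what recovers the ranking the paragraph describes.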

The failure to find consciousness’ location in the brain has pushed some to place its origin in matter itself, the panpsychist position, a move recommended only by desperation. Yet consciousness reveals itself only in clumps of matter which always happen to be arranged in a Turing-complete manner. We know the brain performs neuronal computations, though it is not usefully compared to a digital computer, and universal computation hints strongly at a mathematical and physical explanation of why we can understand so many unexpected things.

If computation and consciousness and intelligence must go together past some point, then it is entirely possible computers will wake up one fine day. This may be so even though much of our own intelligence is deeply and literally embodied, and learned in incommunicably social ways, and takes the form of emotions fine-tuned over millions of years of evolutionary history.56 That merely means they will resemble us only insofar as we have designed them and they have learned in our digital environments.

A Long Bet

Although what I’ve said so far is strongly suggestive, I have not made a strong argument for the eventual consciousness of computers. I’ve merely argued for its possibility, and, I hope, convincingly attacked the fantasy that humans could all plausibly be zombies. However, like anyone else, I have my opinion, call it a long-term bet.

I think any machine, like any animal, will become conscious once it becomes intelligent enough. I do not know whether we can or will or should build such a machine, but I have no doubt it’s possible.

When and if that happens it will be a political and ethical quandary more than a philosophical one. When and if a computer wakes up, it will be a person. It will be a person possibly more alien than an extraterrestrial, who at least would probably have evolved under physical conditions like ours, but nevertheless a sentient being.7

What do you do with a person you built, probably for some purpose or other, and whose means of life (electricity) you must provide? You must practice the platinum rule, and treat it as it would want to be treated, care for it as it would want to be cared for. There is no reason to think such machines will all be vastly more intelligent than we are, or that we will be unable to talk ethics with them.

Footnotes

  1. Chalmers’ p-zombie actually displays all the emotions of a conscious person, but feels nothing. I agree with those who say it assumes that which it seeks to prove, and here I’ll focus on empirical support rather than logical possibility. 

  2. I mean in broad bands, as between us and many other species. Erdős was more sensitive and aware of the joys of math than I am, while a person too unintelligent to be numerate might have a keen sense of the pathos inherent in daily life. 

  3. In a slight irony, Scott Aaronson notes that most people use such a list to show the impossibility of sentient AI, while I’m doing the opposite. 

  4. Modelling another human (“secretly he meant to do it”) feels simple to us thanks to intense evolutionary selection pressure, but it rests on vastly more complicated structures than anything in theoretical physics (for instance). 

  5. Fjelland argues that machines cannot achieve human sentience because they are not embodied. I agree that AI is nowhere near AGI, and that humans are animals first, and calculating, narrating thinkers a distant second. But Fjelland does not mention the Church-Turing thesis or its speculative extension the Church-Turing-Deutsch principle, and I do not see how you can discuss the matter without mentioning these. 

  6. William James pointed out that humans do not have fewer instincts because we are more intelligent than other animals. Rather we have many more instincts, and are therefore more intelligent. 

  7. While this will be a political problem, I don’t regard it as an existential one. We don’t need some transcendent or ineffable source of human emotion and decision-making to find meaning in our beautiful and much-loved lives.