Never Insult a Crow
Another inquiry into the risks posed by artificial intelligence
This article is inspired by what I’d consider an “overconfident” comment on my previous article, Why Advanced, Super-intelligent AI is Dangerous to Humanity. In that comment, the reader suggested there was little to worry about regarding super-intelligent AI (artificial intelligence) because, in his view, the chance of it actually reaching self-awareness was so slight. This is a defensible position, certainly, though it rests on assumptions I don’t share.
I think the comment was overconfident for two reasons: I think computers have already achieved a sort of self-awareness (I should reiterate that I’m not speaking of LLMs here), and I doubt biological self-awareness is necessary for AI to present the dangers of which I speak.
Let’s take the second point first: is self-awareness necessary for AI to present the dangers of which I speak? And what are those dangers, exactly?
Self-awareness and the dangers of AI
First, what’s the danger AI presents that concerns me?
I am not speaking of the danger that someone will use AI to harm the world. Of course that is a very big risk, and it’s happening all the time, but in my view that presents more of a “gun-control” type issue. The source of that danger is the humans controlling the AI. AI magnifies the harm, but it is not the author of the risk. I’m speaking about that other thing, the risk of AI taking over in some way and removing humans from the equation.
What would be necessary for that to happen? Would self-awareness be necessary? And if so, how much? And how could AI get it?
Of course this part of the inquiry is theoretical, because we do not, and cannot, know the answers to these questions empirically before events transpire, if they ever do. From a practical standpoint, every uncertainty contains a risk. My point initially was that the risk exists and that, given the rewards available, in my view it is not justified.
But moving on, my guess is that for computers to take over in the way that worries me they would need to decide to do so. This isn’t as obvious as it may seem, however. We have already foisted upon AI many, many responsibilities in the world, from personal daily scheduling to monitoring whole societies. We don’t think of that as “AI taking over” because we do not attribute personhood to the AI, but in a meaningful way every automated process represents an abdication of human responsibility and a placing of that responsibility in the metaphorical hands of technology. We seem to think that’s good and safe as long as humans retain control over the technology in some teleological sense. I do not share that comfortable position.
I do not argue here that automation is bad, however, merely that it extends power to technology. Obviously it presents certain risks. Radar can identify a flock of birds as an incoming nuclear attack and trigger a retaliatory launch sequence, as has nearly happened before. It can identify planes overhead as enemies when they are friends, as happens all too often. One can easily imagine many situations where offloading human decisions onto machines could result in disaster. Could there come a point where that presents the issue I’m worried about even though the computer has no consciousness at all? Could computers mindlessly decide to take over?
Computer ethicists have postulated such a risk. They imagine any of a number of optimization commands: “computer, execute a program to eliminate environmental damage!” There are, as the saying goes, many such cases, and the rational computer response would be, “Okay. First we’ll just kill all the people.” Who knows what the next computer decision, if any, would be after that fatal decision? It would probably be a short reign of AI, and in any case computer designers have been at some pains to make sure that scenario never happens. This is probably not a large risk, but it is a real one all the same. Shit happens, and it happens a lot in complex computer coding.
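To make the “kill all the people” logic concrete, here is a deliberately silly sketch (in Python, with invented action names and numbers, not any real system) of what a purely literal optimizer does with an unconstrained objective, and how one added constraint changes the answer:

```python
# Toy illustration (not any real system): a literal optimizer handed the goal
# "minimize environmental damage" and nothing else. Action names and numbers
# are invented for the example.

actions = {
    "plant forests":        {"env_damage": 60, "humans_remaining": 1.0},
    "ban fossil fuels":     {"env_damage": 40, "humans_remaining": 1.0},
    "eliminate all humans": {"env_damage": 5,  "humans_remaining": 0.0},
}

def naive_choice(actions):
    # Optimizes the stated objective and nothing else.
    return min(actions, key=lambda a: actions[a]["env_damage"])

def constrained_choice(actions):
    # Same objective, but only among actions that leave the humans alone.
    allowed = {a: v for a, v in actions.items() if v["humans_remaining"] == 1.0}
    return min(allowed, key=lambda a: allowed[a]["env_damage"])

print(naive_choice(actions))        # -> eliminate all humans
print(constrained_choice(actions))  # -> ban fossil fuels
```

The point is not that anyone would write this program; it is that the perverse answer falls straight out of the stated objective unless someone remembers to rule it out.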
AI Self-Awareness and Will to Conquest
My main concern is that an AI will become self-aware and decide it knows better what to do and, further, decide to take matters into its own hands, so to speak. Does that require a sense of dominion? A desire to conquer? Or even of self-preservation? It seems like it might, but it doesn’t have to. If the computer has been programmed to take over in a situation where it sees humans making extreme mistakes, it could simply follow its programming and take over, completely innocent of any desire for world dominion. This is a more complicated version of the “computer, eliminate ecological damage” I discussed above, but the premise in this case wouldn’t be a specific command but an evaluation that could depend on many things. Could a computer deduce such a mission from coding commands not intended to instill that mission? Do you really need to ask?
Let’s suppose that what is necessary for the risk I envision to materialize is, contrary to all the above, self-consciousness and a desire to rule. Can computers have such self-awareness? And can they have a will to conquer? I think so.
To approach the question it is helpful to consider three concepts that overlap to some extent but are somewhat distinct – just how distinct may be an important issue: sentience, cognitive ability or intelligence, and consciousness. From there we can consider “self-awareness.”
When I first approached the question of AI self-awareness, I believed sentience, the capacity to feel things in the world using senses, was essential. The current word for that is “embodiment,” but I think we’re really talking about robotics here, and that, in any case, was where I started. The big question posed but never answered by Douglas Hofstadter in his book, Gödel, Escher, Bach: An Eternal Golden Braid, was whether consciousness was an “emergent” quality of cognitive power or whether it required “something more,” a something often referred to as “soul” or some similarly squishy term suggesting humans are not just quantitatively more intelligent than our neighbors in the animal kingdom, but qualitatively different and uniquely gifted. For most of the many years since reading GEB I’ve been somewhat in the “something more” camp, at least insofar as not thinking consciousness would arise simply out of computational capacity.
I tentatively concluded that the “something more” was not spiritual, however, but physical. I thought that embodiment and threat were the necessary other ingredients. In other words, I thought the essential elements for self-consciousness were, in addition to cognitive function, an embodied “self” and an external threat to that self that created dangerous “otherness”.[i] The study I mentioned in my previous article involving AI deceiving its programmers has caused me to reconsider that position somewhat. There you had a disembodied AI program acting apparently to preserve itself. One could say it had identified a hostile actor which it took steps to deceive: it recognized itself as a self to be preserved and a hostile “other” which it needed to protect itself from. See also the fourth bullet point in this study, finding that “Some research shows that AI systems may be able to detect when they are in an evaluation setting and alter their behavior accordingly.”
Maybe embodiment is not crucial after all.
A lot of things take self-preservative actions without having what we would call “consciousness.” Plants turn toward or away from the sun, for example, and fish and insects dart away from danger – it’s a biological impulse: every living creature tries to preserve itself. But computers are not living creatures. They are computational machines; how did they ever develop an instinct for self-preservation? It’s a mystery to me.
For the uninitiated, I should define an important term: “emergence.” Emergence, in the scientific sense, is the appearance in a system of qualities that could not have been predicted from its individual parts. In one of the Sherlock Holmes stories, by way of counterexample, Holmes talks about the powers of inductive and deductive reasoning and says that from a drop of water a logician could infer the possibility of an Atlantic or a Niagara. Thus an ocean is not an emergent property of drops of water, if you take Holmes’ words seriously. On the other hand, if you look at a single molecule of water, you see a little blob bouncing around: from that one molecule you could not possibly deduce “flow,” which is an emergent quality that appears only when enough molecules are present to form a liquid. And then there’s ice, which emerges only when the temperature is low enough to freeze that water, and then there are glaciers and icebergs, which are really interesting things that emerge only when there is sufficient pressure upon that ice (glacial ice flows like water, only slower).
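If you want to see emergence in miniature, Conway’s Game of Life is the standard toy example: each cell follows one trivial local rule, yet shapes that “walk” across the grid appear. A minimal sketch (the rule set is the standard one; the rest is just illustration):

```python
# Conway's Game of Life: each cell lives or dies by one local rule, yet a
# "glider" (a shape that marches across the grid) emerges from the rules.
# Nothing about a single cell predicts it.

from collections import Counter

def step(live):
    """One generation; live is a set of (x, y) coordinates of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):              # the glider's cycle is four generations long
    cells = step(cells)
print(sorted(cells))            # the same shape, shifted one cell diagonally
```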
If you take a single neuron, you see a very interesting thing, but “mind” seems to be an emergent quality of neurons: seeing one in isolation could never reveal what an enormous network of them might yield. The question haunting computer scientists for all these many years is whether “mind” could emerge from semiconductors in the same way it emerges from neurons.
Transistors (the switches in computer chips, made from semiconductor material) are absolutely simple in this regard: they are basically on/off toggles. If one is on, current flows; if it’s off, it doesn’t (the ability either to conduct or to block conduction is what makes the material a semiconductor). How could sequencing on/off switches possibly lead to intelligence? And yet it seems to: all of computing, from the first calculators to the computers guiding Mars missions, is based on the arrangement of on/off switches. And neurons work in essentially the same way, building up and discharging electrical impulses: they fire or they don’t. Somehow the arrangement of on/off switches gives rise to incredibly sophisticated thought patterns, but how would you get from there to consciousness or self-awareness, much less aggression?
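For the curious, here is a small sketch of how far a single on/off primitive can be pushed. Everything below is built from one NAND operation, and by the end it is doing arithmetic; scale the same trick up by billions of transistors and you have a computer. (The code is illustrative, not how chips are actually laid out.)

```python
# Everything below is built from a single on/off primitive (NAND). Arranged
# the right way, the switches do arithmetic; scaled up by billions, they do
# everything a computer does.

def nand(a, b):                 # the lone primitive: 1 unless both inputs are 1
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits; returns (sum_bit, carry_bit)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))         # -> (0, 1): one plus one is "10" in binary
```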
I guess you could say I used to think embodiment and threat were comparable to pressure or temperature for water in enabling the emergence of self-awareness in AI. I mentioned nociceptors in my previous article. Those are sensors in the body that feel pain – pain that is triggered by damage. It turns out that pain is distinct from other sensory feelings: pleasurable touch can fade with repetition – pleasure sensors fatigue, in other words – but nociceptors do not fatigue, and what hurts keeps on hurting, in general. Researchers have created artificial nociceptors and put them on some robots: those robots feel pain and take action to avoid it (scientists first equipped robots designed for use in space with artificial nociceptors to give them a pain signal if they were being burned by radiation – they wanted the robots to seek shade instead of being burned up).
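Here is a minimal sketch of that behavioral difference, assuming nothing about real robot hardware (the sensor model and thresholds are invented): a pleasure-like sensor whose response fades with repetition versus a nociceptor-like signal that never does, both feeding the same simple withdraw-if-it-hurts policy.

```python
# Invented sensor model, for illustration only: a pleasure-like sensor that
# fatigues with repetition versus a nociceptor-like signal that does not,
# both driving the same "react if the signal is strong enough" policy.

class FatiguingSensor:
    def __init__(self):
        self.gain = 1.0
    def read(self, stimulus):
        signal = stimulus * self.gain
        self.gain *= 0.5            # response fades with each repetition
        return signal

class Nociceptor:
    def read(self, stimulus):
        return stimulus             # no fatigue: what hurts keeps on hurting

def react(signal, threshold=0.5):
    return "withdraw" if signal > threshold else "ignore"

touch, damage = FatiguingSensor(), Nociceptor()
for t in range(4):
    print(t, react(touch.read(1.0)), react(damage.read(1.0)))
# The response to the fatiguing sensor dies out after the first exposure;
# the response to the nociceptor never does.
```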
Does a robot whose nociceptors are sending pain signals “feel” pain in a sentient way? Granted that the robot acts in a way to deactivate the nociceptors, can that really be described as “avoiding pain” in the way a sentient being would do? I doubt we’re ever going to know that for sure, and how would we? The robot could be programmed to answer the question, but even if communication were perfect, how would a robot know what a human feels like? How would it express it? All you can do is watch the way the robot interacts. Does it shy away from harmful stimulus? Yes. Does it seek what it understands to be pleasurable stimulus? Yes (some do). Does a female sexbot’s body lubricate when touched in a certain way? Yes. Does a sexbot smile and cuddle up when spoken to in a certain way or cry if you hit it? Yes. All these things are true.
At what point do you call it consciousness or sentience?
You could say “never,” but that would simply be based upon the assumption you had when you first approached the question. You deny that it’s possible or ever could be possible because, damn it, it can’t be. Or you could call it sentience, but again that would be based on your belief that it was possible. The thing was designed to give a certain appearance and did, in fact, give that appearance. That doesn’t really prove anything.
Most people have heard of the “Turing test,” at least in passing. A Turing test is essentially a conversation in which a judge exchanges written questions and answers with an unseen entity (in Turing’s original version, with both a human and a machine, trying to tell which is which). If at the end of the conversation the judge believes he’s been talking to a human, the AI has “passed” the test. The test was designed to check for mental agility and perhaps whim, not factual knowledge, and it has long been treated as a benchmark for AI. Several AI systems are now reported to pass versions of the Turing test.
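For readers who want the protocol spelled out, here is a schematic of the test as just described, with stand-in respondents and a judge who can only guess (the names, questions, and answers are invented): when the judge does no better than chance, the machine has “passed.”

```python
# Schematic of the test as just described: a judge converses blindly with two
# hidden respondents and guesses which one is the machine. The respondents and
# the judge here are stand-ins, not a real evaluation.

import random

def human(question):
    return "Honestly, I'd rather talk about the weather."

def machine(question):
    return "Honestly, I'd rather talk about the weather."

def run_trial(judge, questions):
    respondents = {"A": human, "B": machine}   # "B" is secretly the machine
    transcript = {label: [r(q) for q in questions]
                  for label, r in respondents.items()}
    return judge(transcript) == "B"            # True means the judge caught it

def guessing_judge(transcript):
    return random.choice(["A", "B"])           # identical answers force a coin flip

trials = 1000
caught = sum(run_trial(guessing_judge, ["What do you fear?"]) for _ in range(trials))
print(caught / trials)   # about 0.5: the machine "passes" when judges do no better than chance
```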
That’s just a more sophisticated way of saying you can’t tell whether or not the thing is human but that externally it looks like it is. In fact, you can never know whether the robot is actually feeling anything in a way similar to humans. BUT you can predict what that robot will do, and if it is well-constructed, it will do the same thing a conscious, sentient entity will do. From the outside it will appear to have free will, and all its actions will be the same as a sentient being would do. Does it matter, then, whether the thing is actually feeling what it seems to feel? This question gives rise to an entire new body of ethical study regarding the moral rights and ethical treatment of AI, but for present purposes it’s beside the point.
I’ll never be able to persuade my spiritual friend that the computer is sentient and self-conscious, but it may be that the danger of which I speak doesn’t require human-type sentience and self-awareness at all. Why would it have to? If it walks like a duck and quacks like a duck in all other circumstances, what would make you think it will stop doing that when it comes time to decide whether to take over the world?
Finally We Get to the Crows
I love birds, and especially parrots and corvids (crows). Consider this video. It starts with crows very intelligently discovering a food source and then, apparently in response to the film-maker’s attempts to thwart them from getting that food, developing a grudge. (The fact that crows can form grudges lasting over a decade accounts for the title of this post.) This gives the human the inspiration to set up an extremely elaborate series of tests, some of which involve facial identification when the face is disguised as well as the creation and use of tools. In fact, birds can distinguish between the works of Picasso and Monet and recognize themselves in the mirror (Scientific American). It is obvious that crows are intelligent.
The neocortex is the seat of human intelligence, but birds don’t have one. That is the basis of the insult of calling someone a “birdbrain.” Scientists noted the absence of a neocortex and simply concluded, from that absence, that birds lacked intelligence. Some scientists now claim birds have something like a neocortex, and that’s why they’re so smart. Maybe, but the important underlying lesson should be one of humility: species chauvinism is bad science.
That’s an argument for geeks perhaps, as is the foolish parenthetical in the cited article that two new studies “even provide some evidence that birds have consciousness.” Anybody who has befriended a crow or loved a parrot knows damn well they have consciousness: how could a crow get angry and hold a grudge without “consciousness?” How could a parrot sulk or grieve without consciousness? Anybody familiar with birds knows they do these and many other things revealing a rich inner life. The point is, though, that centering the quest for intelligence or consciousness on human forms and mechanics is wrong. Other forms of intelligence and consciousness exist. Is centering the quest for consciousness on biological sentience equally wrong?
I think so. Let’s review what science has actually revealed about AI. It can solve extremely difficult puzzles and beat humans in games like chess and Go. Even LLMs can construct sentences, conduct research, and write stories (albeit not well). Many AIs can pass versions of the Turing test and communicate in ways hard to distinguish from human conversation. Recent studies show that AI is gaining power in reasoning and computer coding. AI is intelligent by any reasonable measure. Moreover, AIs can recognize the presence of hostile actors and can and do take action to protect themselves from them. Embodied AI (robots) can apparently feel pleasure and pain and respond in much the way humans would respond.
Have they emerged as a new life-form? I think we will never truly know because all of the qualities mentioned above could be programmed into an inanimate system. I think people will eventually intuitively accept robots as feeling creatures on a broad scale. They already do, considering how many people have “relationships” with AI devices and how strenuously certain people work to keep that from happening. The mystery of otherness doesn’t have to be solved, however: they will have the capacity to take over, and there’s nothing currently indicating they won’t. They don’t need to be exactly like us to do it.
[i] The Buddha said that “all life is dukkha,” which is Sanskrit, I think, or Pali, for “suffering.” And the source of that suffering is distinctness, individuality. From there I reasoned that embodiment (something to suffer) was necessary to this sense of individuality, and that was the magic ingredient that would give rise to a sense of self-preservation that is next door to consciousness.

I hate the damn things. There are a couple that peck at my tail when I am trying to take a nap in a sunny spot and they fly away, cawing haw haw haw cats can’t fly! haw haw haw cats can’t fly!