Intelligence, Super-Intelligence, Arrogance, and Danger
Back down the rabbit hole of intelligence, super-intelligence, and artificial intelligence
I believe that AI (artificial intelligence) presents an existential danger to the human race, and that the danger will emerge from the development of AGI (artificial general intelligence). Most people reading this will be aware of LLMs (Large Language Models), a form of AI that associates words, and by extension concepts, based on a large body of previously created words and thoughts. ChatGPT is probably the most famous LLM, and if you give ChatGPT a question it will conduct a brief, very energetically expensive, popularity contest and return with the winners. AGI is different: it is designed to apply reasoning to the question and to achieve what might be called understanding. If you have read my previous articles on AI (“Why Advanced, ‘Super-Intelligent’ AI is Dangerous to Humanity” and “Never Insult a Crow”) and the comments on them, you’ll know that some people do not believe computers can ever achieve actual understanding, but I’m not going to revisit that question here, at least not directly. Computers don’t have to have what we would recognize as sentience to do all the mischief that worries me. I think it will be AGI that unlocks that mischief.
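To make that “popularity contest” image concrete, here is a toy sketch of my own. It is emphatically not how ChatGPT works internally (real LLMs use neural networks trained on vast corpora); it is a crude bigram model whose corpus, names, and counts are all invented purely for illustration, but it captures the spirit of prediction by statistical association rather than reasoning:

```python
from collections import Counter, defaultdict

# Toy "popularity contest": predict the next word purely by counting which
# word most often followed the current one in a training text. Real LLMs are
# vastly more sophisticated, but the core move -- statistical association,
# not understanding -- is the same.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def most_popular_next(word):
    """Return the word that most frequently followed `word` in the corpus."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_popular_next("the"))  # 'cat' -- the statistical winner, not an understood answer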
I recently read an article claiming, very confidently, that AGI isn’t achievable to the degree that concerns me: “Why AGI Will Not Happen,” by Tim Dettmers. Dettmers is an AI professional, and his writing isn’t particularly lay-person friendly, but his general thesis is that because “computation is physical,” there are physical limits that will prevent the development of AGI. He argues, more particularly, that “[f]or effective computation, you need to balance two things. You need to move global information to a local neighborhood, and you need to pool multiple pieces of local information to transform old information into new.” The hub of the problem, then, is that while smaller transistors can process whatever information they’re given very, very fast, the relative distance between these transistors grows as they shrink. He doesn’t say why this is so, but I guess it’s a manufacturing limitation. In any event, the improvements in transistor capacity, he says, are linear, while the loss of speed and effectiveness grows exponentially with increasing distance. Thus a point of diminishing returns is baked into the whole process.
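A toy numerical sketch may help show why a linear gain fighting an exponential cost produces diminishing returns. To be clear, this is my own construction, not Dettmers’ actual math, and every constant in it is made up purely for illustration:

```python
import math

# Illustrative model of the tradeoff Dettmers describes: local compute
# improves roughly linearly across hardware generations, while the cost of
# moving information between ever more numerous, relatively more distant
# transistors grows exponentially. The coefficients below are invented.

def effective_throughput(generation):
    local_compute = 1.0 + 0.5 * generation      # linear improvement in raw compute
    comm_penalty = math.exp(0.35 * generation)  # exponentially growing data-movement cost
    return local_compute / comm_penalty

for g in range(10):
    print(f"gen {g}: effective throughput = {effective_throughput(g):.3f}")

# The printed values rise briefly, then fall: past some point, each new
# generation loses more to data movement than it gains in raw compute --
# the point of diminishing returns baked into the process.
```

Run it and you can watch the curve peak and then decline, which is the whole argument in miniature: no matter how you tune the constants, any exponential penalty eventually overwhelms any linear gain.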
Some commenters on the blog where this article appears push back on Dettmers’ asserted limitation, arguing that it takes too limited a view of materials science. They suggest that memristors might solve the problem. Memristors are a little too technically complex to discuss here, and while laboratory prototypes have been demonstrated, practical devices remain largely experimental, although the theory seems well-advanced. My thoughts went to quantum computing, which would appear to allow for much more complex, and vastly faster, computation, but which also remains far from practical at scale. In any event, while I think Dettmers’ position may be susceptible to criticisms based on materials science, my main criticism is broader: his position suffers from a fatal lack of imagination.
Consider, as Dettmers does, the plight of the modern physicist. Here’s how he describes it:
“I talked to a top theoretical physicist at a top research university, and he told me that all theoretical work in physics is, in some sense, either incremental refinement or made-up problems. The core problem of the idea space is this: if the idea is in the same sub-area, no meaningful innovation is possible because most things have already been thought. A first urge is to look for wildly creative ideas, but the problem is that [they] are still bound by the rules of that subspace that often exist for a very good reason (see graduate-student-theory-of-everything-phenomenon). So the theoretical physicist faces only two meaningful choices: refine other ideas incrementally, which leads to insignificant impact; or work on rule-breaking unconventional ideas that are interesting but which will have no clear impact on physical theory.”
I can see why a physicist might think that way. After all, it’s the very way physicists thought in 1905 regarding time, space, and gravity. And then along came Einstein with his theories of relativity. Not long after that came Erwin Schrödinger and Werner Heisenberg with their wild talk of quantum mechanics, which so rocked Einstein’s world that he famously insisted “God does not play dice with the universe!” and devoted the rest of his career, in vain, to trying to prove it. It is always a mistake to think you, or your society, have reached the absolute pinnacle of knowledge, and I have already mentioned two developments, still at an early stage, that might put the lie to any limitation Dettmers chooses to assert.
But my real objection is more fundamental than that and does, in fact, go back to my argument with a Substacker regarding whether AI can reach true sentience. As I argued (in the comments), true sentience would not be necessary for computers to decide to take over the world and eliminate humans. A computer could simply execute a command (or misinterpret a command) and act on that basis. True, that might violate the first command of AI (Asimov’s famous rule that a robot may not harm people), but computers are already showing some ability to ignore commands and deceive their programmers, and in any case malfunctions do happen.
But assuming real sentience were necessary, who says it has to be human-type sentience? As I pointed out in “Never Insult a Crow,” birds have a type of intelligence that differs in significant ways from human intelligence, as any companion of an intelligent bird will know. As we joke in my family, birds can, in a heartbeat, shift from cuddly, sweet, blissed-out delight to demonstrating a remarkable similarity to a distant relative of theirs, Tyrannosaurus rex. The speed of that transition is quite striking, and it is not, in my opinion, a mammalian feature. But there are much stranger forms of intelligence.


[Photo caption: Looks innocent, doesn’t he?]
Consider octopuses. They belong to the phylum Mollusca, meaning they are fairly closely related to snails. They do have a central brain that includes executive function; they are capable of play, forethought, and strategy; and they can be goofy. In some labs, octopuses have been known to leave their aquariums at night, travel overland to another aquarium, enter it and hunt, and then sneak back to their original aquarium by morning. Which is odd, right? But their weirdness, compared to humans, goes much, much further, because much of their neural capacity is located in their arms, which are capable of figuring things out and taking action independently. Octopuses are color-blind (in their central brains) yet are the fastest and best camouflagers in the animal kingdom, mimicking both the color and texture of their surroundings almost instantly, because their arms can read color and texture and reproduce them by flexing certain muscles. All of which is strange according to our lights.
Or consider spiders, whose legs also hold much of their neural capacity but who also seem to export some of their intelligence to their webs, which function much like the neural pathways traveling through the human spine and allow spiders to identify sounds and vibrations to an astonishing degree. Or sperm whales, whose clicks – twice as loud as a jet engine – may comprise a more complex language than any human language, one that develops through culture (i.e., learning), so that different whale pods have different “dialects.” Sperm whales have brains six times the size of human brains, which may not be such a big deal considering the size of whales, but they routinely dive to great depths, holding their breath for an hour at a time, shutting down various body functions, and tolerating pressures that would instantly crush a human.
What I’m saying is that profound intelligence exists in many forms and performs many situation-specific tasks. Any conclusion that machine intelligence must look exactly like ours and do the same things is profoundly naïve and arrogant. We don’t know what that intelligence will look like or what it will want to do. What we do know is that scientists are working as hard as they can to make sure AI will be much better than we are at the things we can do. What if such machines decide humans are superfluous and harmful? And why wouldn’t they?
