Mailbag: Is AI actually intelligent?
It's that time again.
From the inbox:
Hi Carl, I was wondering what your view is in this debate over whether or not artificial intelligence is actually intelligent? I know you’ve made a few comments about it over the years but I can’t find your posts now.
Sadly, Elon banning my Twitter account means that more than a decade of my posts evaporated overnight, so I’m not surprised that anyone is having trouble finding various points I’ve made about intelligence over the years. Since they’re gone now (and since even when they were still up they were scattered all over the place), I might as well lay out my thoughts on the matter here.
The main obstacle that prevents most people from understanding intelligence, in my view, is profound incuriosity. On one hand, evolution has endowed humans with a powerful, pre-rational ability to recognize other human-like creatures, a capacity we usually describe as the ability to recognize intelligence; so we all have a powerful sense that we know on a deep level what intelligence is, even if we can’t explain it. On the other hand, our culture is full of ideas about intelligence that have nothing to do with science and everything to do with culture and ideology. It is widely believed, for example, that being intelligent means being good at math — even though calculation is one of the simplest cognitive functions there is. That belief plainly has little to do with any rigorous theory of intelligence and much more to do with cultural ideas about how useful math is in our economy.
This complex of intuition and ideology makes people so confident that they already know what intelligence is that they are often baffled that anyone would even bother investigating it, much less make counterintuitive or controversial claims about it.
The more we look into the question, however, the more complicated it gets. When we ask about what intelligence is, for example, what we are often asking about is actually human intelligence; when people ask whether AI is intelligent, what prompts the question is their intuition that it seems to be doing human-like things. But humans are plainly not the only intelligent creatures. In fact, most organic life seems to possess some form of intelligence; one might even argue that all organic life is intelligent, though this is harder to say in some cases than others. Are sponges intelligent? How about plants? Bacteria? Viruses?
Here, I think, is where we gain our first real analytical traction on the question. One way to decide whether a virus is intelligent would be to ask whether it has a brain, which it clearly does not. But this answer doesn’t seem satisfactory; most people would say that jellyfish are intelligent in some sense even though they don’t have a central nervous system, and if a virus started reciting Shakespeare we would call it intelligent whether it had a brain or not.
The question really seems to be about how a virus “behaves.” What makes viruses so useful for this inquiry is that they are such incredibly simple creatures, barely distinguishable from inanimate strands of protein, that one point seems clear: insofar as proteins are just complex clusters of molecules engaged in deterministic chemical reactions, it is hard to say that they are intelligent. When, for example, it seemed as if viruses were swimming around looking for a host cell to latch onto and infect, this looked like intelligent behavior. But once we realized that they were just drifting around in the intercellular fluid, and that they were only “latching on” to cells in the same way that one piece of velcro inevitably sticks to another, this stopped seeming like intelligent behavior; it was just a complex physical process that we didn’t know enough about.
This simple insight can take us surprisingly far. Working our way back up the tree of life, we can observe bacteria, and notice that a lot of what bacteria do can be explained as “mindless” chemical reactions. But even with something as simple as bacteria, we find some processes very difficult to explain at the level of chemistry and physics. Some very primitive bacteria find food by swimming towards light, since in the ocean food is more abundant near the surface than in the depths; and while the details get complicated, we can trace a basic causal path from a photon hitting one of its photoreceptors to an electrochemical cascade triggering motion in its flagellum, causing it to swim. But in other bacteria the process gets so complicated that describing it in terms of chemical reactions stops being plausible or even useful from an explanatory perspective; this is especially true when multiple factors (say, the presence of light and of certain trace chemicals) combine in some complex way to trigger the flagellum. When this happens we tend to start describing the physical process with metaphorical analogies to human behavior: when the bacterium receives the right kinds of stimuli, it will “think” about them and “decide” whether or not to “hunt.”
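To make the contrast concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the thresholds, the factors, the formula; real chemotaxis is vastly more complicated); the point is just that both rules are equally deterministic, but only the first is easy to narrate as a simple chain of causes.

```python
# A toy illustration, not biology: both rules below are fully
# deterministic, but only the first reads as a simple causal chain.

def simple_phototaxis(light: float) -> bool:
    """Transparent chain: light above a threshold triggers the
    flagellum. Easy to describe as 'pure chemistry.'"""
    return light > 0.5  # hypothetical threshold

def tangled_trigger(light: float, nutrient: float, toxin: float) -> bool:
    """Several interacting factors combine nonlinearly. Still
    deterministic, but hard to narrate step by step -- this is the
    point where we start saying the cell 'decides' to 'hunt.'"""
    drive = light * nutrient - 2.0 * toxin * (1.0 - light)
    return drive > 0.3  # hypothetical threshold

print(simple_phototaxis(0.8))           # True
print(tangled_trigger(0.8, 0.6, 0.1))   # True (0.44 > 0.3)
```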
Here, I think, is where we arrive at the meaning of intelligence. Some processes in the world are so simple and transparent that we can describe the causes and effects in completely physical terms. Others, however, are so complex and opaque that we have difficulty conceptualizing and describing them in terms of physical causality; at this point, we resort to metaphorical comparisons to human intelligence.
From a psychological point of view, this makes sense. When we introspect on our own behavior, we see something like “thinking” going on, an elaborate process of reasoning and observations along with insights and decisions that seem to come from some deeper part of us that we can’t directly observe. When we look at how other humans behave, we develop a “theory of mind” and assume that something similar is going on with them; that when they cry, for example, there is something going on in their minds that is similar to what we experience when we cry. We assume this precisely because we cannot see what is happening in their brains, and even if we could it would be too complex and elaborate to easily conceptualize in terms of causes and effects.
Here, then, we are simply using this strategy to make sense of what other creatures are doing, too. If I see a slug touch a mound of salt and instantly curl up, this may seem like a purely physical process to me because I can easily understand how dehydration makes some materials change their shape. But if I see it move across the front porch towards a flower and then start nibbling on it, this is very difficult to explain in terms of chemical reactions, so instead I compare whatever is going on to what happens in my head when I “decide” to walk somewhere.
One crucial point to notice here is that my recognition of “intelligence” in something seems to have as much to do with my own ability to conceptualize its behavior as with the behavior itself. One proof of this is the well-documented way in which young children explain physical behavior that they do not yet understand by anthropomorphizing it, as when Piaget documents a child explaining that the sun “decided” to “follow” him home. This was really just the child noticing that the sun seemed to stay in roughly the same relative place in the sky no matter what direction he moved in, but since he had no way to explain this behavior yet, he resorted to the theory that the sun had a mind.
This brings us to another crucial insight: our association of intelligence with life mostly just reflects the fact that even our adult brains have trouble conceptualizing the complexity of biological processes. But the same thing happens outside of biology: we often say things like “the weather was angry today” or “the stock market isn’t cooperating” to describe extremely complex systems like the climate and the economy.
Generally, then, we can say that “intelligence” is really just an explanatory gimmick we use to describe causal processes that are too opaque or complex to trace. When that happens, we analogize the cause to the thing in our own minds that seems to cause our behavior — a thing we don’t understand very well either, but that we are at least familiar with. When we say that something has “human intelligence,” all we really mean is that it engages in behavior that we can’t explain in a causally exhaustive and rigorous way, but that resembles things only humans can do.
This insight, among other things, gives us an easy way to understand arguments over the “intelligence” of AI. When people argue that it is not intelligent, they tend to stress that the causes of its behavior are deterministic and well-understood: it is really just an elaborate big-data processor that dumps out tokens based on calculations of sequential probability. When people argue that it is intelligent, meanwhile, they tend to insist that something more than that is happening; one common claim, for example, is that LLMs exhibit something called “emergent behavior,” a property of complex systems that is often misunderstood to mean “behavior that is not deterministically caused.”
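If it helps, the skeptics’ “sequential probability” picture can be boiled down to a toy like the following Python sketch. The bigram table is invented for illustration and is many orders of magnitude simpler than anything in a real LLM, but the basic operation (pick the next token from a probability distribution over continuations) is the one they have in mind.

```python
import random

# A hypothetical bigram table standing in for an LLM's learned
# distribution: for each token, the probabilities of what comes next.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "meowed": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def next_token(prev: str) -> str:
    # Sample the next token according to its conditional probability.
    dist = BIGRAMS.get(prev, {"the": 1.0})  # crude fallback
    tokens = list(dist)
    return random.choices(tokens, weights=[dist[t] for t in tokens])[0]

text = ["the"]
for _ in range(3):
    text.append(next_token(text[-1]))
print(" ".join(text))  # e.g. "the cat sat the"
```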
Conversely, people who argue that AI is not intelligent tend to make similar quasi-mystical claims about human intelligence: that something “emergent” is happening, or something immaterial, or something that cannot be fundamentally understood. Meanwhile, people who argue that AI is intelligent often gravitate towards strong claims about how the human brain works: we know that it is just neurons firing, or that it is just a neural network, or something similar.
In my view, human behavior only seems “intelligent” because we still do not have a thorough and coherent understanding of how the brain operates; this is an artifact of our current scientific knowledge, and it could change in the future. It is conceivable to me that someday humans will have a more scientific understanding of the brain, one that will probably involve a mechanical understanding of how neural networks operate, a schematic understanding of all of the different parts of the brain and what precisely they do, and a systematic understanding of how this all fits together to create mental phenomena. When that happens, “intelligence” will mostly be remembered as the crude and inarticulate way humans conflated and talked about all kinds of different features of the brain, ranging from its capacity for short-term memory recall to how densely the structures mapping its semantic network are populated. It will be a lot like how prescientific cultures would talk about “resilience” to mean everything from the body’s capacity to produce white blood cells to pain tolerance to metabolic levels; everyone will take for granted that “intelligence” was a primitive colloquialism that we have since replaced with more accurate and precise language.
This also explains why some people think AI possesses “human intelligence” even though this is, by various objective measures and based on our still early understanding of the human brain, clearly not true. A whole genre of online comedy, for example, revolves around showcasing AI’s affliction with what we would in humans call (generously) a severe learning disability: it cannot count to a million, and it cannot even learn to count to a million. AI’s visual intelligence excels in some ways (like object recognition), but in other areas (like motion recognition) it is less intelligent than your average insect. In language use AI excels at common grammatical constructions, but it also gets tripped up on simple linguistic tasks that even children excel at, like interpreting ambiguous statements and constructing sentences that rely on long-distance dependencies.
If your theory of human intelligence just comes down to “it can do some things that humans can do” and you don’t care about the causal mechanics at work, then these points won’t matter to you. But in that case, you’ll also have trouble explaining with any rigor why a calculator doesn’t have human intelligence, or why an MP3 file of me reading this article doesn’t have human intelligence. Once we allow ourselves to be curious about what human intelligence actually is, however, it becomes evident quite quickly that AI doesn’t have it. This is true even though we still know very little about what human intelligence is and how it works. Someday, we may have an understanding of the brain so thorough that it seems more like an elaborate chemical reaction than like the container for some mystical property called “intelligence”; but already, all kinds of simple facts about how it works and how it behaves make it clear that it does very different things than AI.