Jerry Nixon on Windows

Wednesday, March 29, 2023

Is AI really a Mind?

Just a dab of thinking on Artificial Intelligence and Reality.

If we attempt to decompose the human mind into categories and properties, we get obvious members like language, creativity, understanding, synthesis, reasoning, knowledge, and many more. However, this is like trying to answer "what is an elephant?" from a high-definition, three-dimensional hologram. We get members like trunk, head, body, legs, and skin, maybe even respiration, movement, and instinct.

But even if we made the hologram much more detailed, we still could not decompose an elephant exhaustively with observations alone. What we miss is what is inside, not just the organs but how hidden systems work on the macro, micro, and even quantum scales. And we miss those enigmatic and inimitable things like consciousness and simply “being alive.” We might conclude an elephant is alive, but that does not properly capture what it means to be alive and what makes it alive. In other words, there are qualities of an elephant, critical qualities, which we recognize are present but cannot measure or properly observe, regardless of effort or scientific precision.

This matters because if we reassemble our observations in an attempt to create an elephant, the resulting facsimile would not be an elephant. It would lack those very things we cannot measure. Yet if we qualify this new golem using only our meager observations, we could confidently and wrongly conclude, "this is an elephant." And we would say it while inherently knowing it is not so.

The same problem exists when we decompose the human mind. We have large and small categories that, like our observations of the elephant, define the mind in ways that do not truly represent their subject. And as with making an elephant, if we reassemble our categories of the mind into some new creation, we might say, "this is a mind," while knowing our very words fall inanely short of the truth.

But this is exactly what we do with AI. We observe the qualities of AI and map them to the categories we have made of the mind. We then observe, “AI has the qualities of the mind,” and leap to the false conclusion that this assembly of categories somehow is a mind.

The very misstep we make in concluding “this is an elephant” we repeat with AI. Is it a mind? Is it alive? Is it conscious? Does it think? Or are we too easily ascribing higher-order, ethereal, and foundational qualities to a thing that demonstrates only the simpler, perfunctory, and easily measurable attributes of the human mind?

Is any AI a mind? Yes, but only in the same deficient sense that an airplane is a bird, or a car is a horse, or a painting is really a mountain. If it simulates the broad qualities of a mind, we might conclude it is a mind. But is AI a mind, a real, thinking, living, and conscious mind? Yes, but only in the silliest sense.

So, then, I cannot help but wonder why we are so willing to make this frequent mischaracterization. Are we just victims of fiction writers or clickbait titles? Or is there something else that makes us want to see more where there is actually less? I do not know.