Just a dab of thinking on Artificial Intelligence & Reality.
If we attempt to decompose the human mind into categories and properties, we get obvious members like language, creativity, understanding, synthesis, reasoning, and knowledge, among many others. However, this is like trying to define "what is an elephant?" from a high-definition, three-dimensional hologram. We get members like trunk, head, body, legs, and skin - maybe even respiration, movement, and instinct.
But even if we made the hologram far more detailed, we simply cannot decompose an elephant exhaustively through observation alone. What we miss is what is inside - not just the organs, but how hidden systems work at the macro, micro, and even quantum scales - and we miss those enigmatic and inimitable things like consciousness and just "being alive". We might conclude an elephant is alive, but that conclusion does not capture what it means to be alive or what makes it so. That is to say, there are qualities of an elephant - critical qualities - which we recognize are present but which we cannot measure or properly observe, regardless of effort or scientific precision.
What makes this important is that if we reassemble our observations in an attempt to create an elephant, the resulting facsimile will not be an elephant. It will be missing those unseen things we cannot measure. Yet if we evaluate this new golem using only our meager observations, we could confidently - and wrongly - conclude "this is an elephant". And we would say it while knowing, deep down, that what we were saying is not so.
This is similar to our decomposition of the human mind. We have large (and small) categories that are true as far as they go but, like our observations of the elephant, describe the mind without truly representing it. As with making an elephant, if we reassemble our categories of the mind into some new creation, we might say "this is a mind" while knowing our very words fall absurdly short of the truth.
But this is exactly what we mistakenly do with AI. We observe the qualities of AI and map them to the categories we have made of the mind. We then observe that "AI has the qualities of the mind" and leap to the false conclusion that this assembly of categories somehow is a mind.
The very misstep we make in concluding "this is an elephant" we incautiously repeat with AI. Is it a mind? Is it alive? Is it conscious? Does it think? Or are we too readily ascribing higher-order, ethereal, and foundational qualities to a thing that demonstrates only the simpler, perfunctory, and easily measurable attributes of the human mind?
Is any AI a mind? Yes, but only in the same lacking sense that an airplane is a bird, a car is a horse, or a painting of a mountain is a mountain. If it simulates the broad qualities of a mind, we might conclude it is a mind. But is AI a mind - a real, thinking, living, conscious mind? Yes, but only in the silliest sense.
So, then, I can't help but wonder why we are so willing to make this mischaracterization so frequently. Are we simply victims of fiction writers and clickbait titles? Or is there something else that makes us want to see more where there is actually less? I don't know.