When it comes to AI, machine learning, and the rest of that space, I can see why.
Things are changing very quickly.
The fact that we have machines producing text that, as a layperson, you would assume could only come from a consciousness, only to learn it was produced by a very complex process with no consciousness at all, well, it makes you wonder.
Will we know when a machine is conscious? This is the question on everyone's lips, and it still bugs me.
If a machine says it is conscious, can we trust that?
We are happy to trust that our human friends are probably conscious, but will we ever trust that a machine is?
I am certain that when machines are helping out around the house, or acting as surrogate pets, people will treat them as conscious without any concern for whether they actually are. We humans talk to our cars, our dogs, and all manner of other things, so a machine that responds back will definitely be treated as a living thing, even when no part of it is in any way biological, and even when its builders are emphatic that it has no emotions or internal existence.
But something I read about recently was the idea to give machines an internal existence. Give them an internal representation of their space so that they can test their actions 'in their head' before they do them in the real world.
And it made me think about my own experience. This is basically what consciousness feels like to me: a representation of the world that lives inside me. And as far as I can tell from what I have read, that is accurate. Our brains guess at the world, then our sensory data updates the guess.
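That guess-then-update loop, plus the idea of testing actions 'in the head' first, can be sketched in a few lines of code. This is a toy illustration, not any real robotics system; the names (`InternalModel`, `imagine`, and so on) and the one-dimensional world are all my own invention for the sake of the example.

```python
class InternalModel:
    """The agent's guess about the world: here, just its own position."""
    def __init__(self, position: float):
        self.position = position

    def imagine(self, action: float) -> float:
        """Predict where an action would leave us, without actually moving."""
        return self.position + action

    def update(self, observed_position: float, weight: float = 0.5):
        """Blend the internal guess with what the senses report."""
        self.position += weight * (observed_position - self.position)


def choose_action(model: InternalModel, goal: float, candidates):
    """Test each action 'in the head' and pick the one whose imagined
    outcome lands closest to the goal."""
    return min(candidates, key=lambda a: abs(model.imagine(a) - goal))


if __name__ == "__main__":
    model = InternalModel(position=0.0)
    action = choose_action(model, goal=3.0, candidates=[-1.0, 0.0, 1.0, 2.0])
    print("chosen action:", action)  # the agent picks 2.0, the best step toward 3.0
    # After acting, a (noisy) sensor reports back, and the guess is nudged toward it.
    model.update(observed_position=2.2)
    print("updated belief:", model.position)
```

Nothing here is conscious, of course, but the shape of the loop is the interesting part: the machine consults a model of the world, and of itself in it, before touching the world at all.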
A machine operating the same way, with an internal representation of the environment and of itself, is by definition self-aware.
A very interesting proposition.