Consider a self-aware computer, somewhere in the space of minds. It's smart enough to think about itself. But it can't have perfect self-knowledge, due to Gödelian infinite-recursion issues. Hence, some of its parts must remain mysterious upon self-reflection.
The computer, realizing this, needs a label to describe the parts whose behavior can be observed, but whose detailed workings are (to it) inherently mysterious. In humans, this label seems to be “consciousness”.
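A minimal sketch of the recursion problem, in Python (the names `Mind` and `self_model` are mine, purely illustrative): a complete self-model must include the act of modeling, which must include the modeling of that modeling, and so on without end. Any real system has to truncate somewhere, and the truncation point is exactly the part that stays opaque.

```python
class Mind:
    def __init__(self):
        self.state = {"goal": "understand myself"}

    def self_model(self, depth=0, max_depth=5):
        """Model our own state, including the act of modeling itself.

        A *complete* self-model would need max_depth = infinity, which
        would recurse forever; any real system must cut off somewhere,
        leaving an unmodeled remainder.
        """
        if depth >= max_depth:
            # The innermost level is always a label, never a mechanism.
            return "<opaque: modeling truncated here>"
        return {
            "state": dict(self.state),
            "model_of_my_modeling": self.self_model(depth + 1, max_depth),
        }

m = Mind()
print(m.self_model())  # however deep you go, the center stays opaque
```

Raising `max_depth` just pushes the opaque label deeper; it never eliminates it. That label, on this view, is what the system ends up calling "consciousness".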
Hofstadter tried to explain this idea, and despite doing so in a Pulitzer Prize-winning popular book, he mostly failed to get it across to a wide intellectual audience.
Aside from Gödel issues, there are also reasons from abstraction barriers, etc. (the need to use an information source without understanding how it works) for having high-level systems with opaque processes/symbols, as in the sketch below.
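A hedged illustration of that point (the module name and interface here are invented, not from the comment): an abstraction barrier lets the high-level system consume a module's verdicts while the mechanism behind them stays inaccessible. From the caller's side, the module's behavior is observable but its workings are opaque by design.

```python
import random

class IntuitionModule:
    """An information source the high-level system relies on without
    being able to inspect how it works (hypothetical example)."""
    __slots__ = ("_hidden_weights",)  # internals are not part of the interface

    def __init__(self, seed=0):
        rng = random.Random(seed)
        self._hidden_weights = [rng.random() for _ in range(8)]

    def judge(self, x: float) -> str:
        # The detailed mechanism stays behind the barrier;
        # only the verdict crosses it.
        score = sum(w * x for w in self._hidden_weights)
        return "good" if score > 4.0 else "bad"

intuition = IntuitionModule()
# The high-level system observes behavior, not mechanism:
print(intuition.judge(1.2))  # -> "good" or "bad", reasons inaccessible
```

The same shape shows up in human introspection: we get the verdict ("that feels wrong") without access to the process that produced it.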
Thomas Metzinger's *The Ego Tunnel* is the best I have read on this.
Imagine if there were two such computers and they could think about each other.
This is an interesting idea: that no intelligence can be fully self-knowing. Do you think that consciousness, as you define it, arises necessarily once you exceed a certain level of complexity? Or do you think you could conceivably have an artificial intelligence as complex as, say, the human mind that wasn't self-aware?