How does consciousness arise?
I just watched a debate on consciousness. It seems we are no closer to understanding how this emerges.
I had a few thoughts:
Maybe we can understand some precursors of consciousness before trying to explain the whole thing. I think consciousness depends on understanding, which in turn depends on the system's mental states having meaning.
I think we can see how meaning arises - consider hieroglyphics in a pyramid. We can interpret their meaning because there is an awful lot of writing in hieroglyphics. It is possible to guess the meanings of a few symbols and test those guesses against a lot of text. If all the occurrences of a symbol make sense with the assigned meaning, we continue by guessing related symbols; if not, we guess a new meaning. Eventually every symbol is assigned a meaning, and those meanings consistently make sense. Some meanings may be 'fuzzy' - it might be apparent that a symbol relates to water, but not clear whether it refers to a cup of water, a stream or a sea. Analysing more text will eventually tighten the interpretation.
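To make that loop concrete, here is a toy sketch in Python. Everything in it is a made-up stand-in - the symbols, the candidate vocabulary, and the "makes sense" test (membership in a small set of plausible phrases) - but the shape of the process is the one described above: guess a meaning for a symbol, check it against every occurrence, and back up to a different guess when some usage stops making sense.

```python
# Toy sketch of the guess-and-test decipherment loop. The symbols, the
# candidate vocabulary and the "makes sense" test are hypothetical stand-ins.

corpus = [
    ("A", "B"),          # each tuple is a line of undeciphered symbols
    ("A", "C"),
    ("D", "C", "B"),
]

plausible = {             # phrases a correct reading could produce
    ("sun", "rises"),
    ("sun", "sets"),
    ("moon", "sets", "rises"),
}

vocabulary = ["sun", "moon", "rises", "sets", "water"]


def consistent(assignment):
    """An assignment 'makes sense' if every fully decoded line appears
    among the plausible phrases."""
    for line in corpus:
        if all(sym in assignment for sym in line):
            reading = tuple(assignment[sym] for sym in line)
            if reading not in plausible:
                return False
    return True


def decipher(symbols, assignment=None):
    """Guess a meaning for each symbol in turn; whenever some occurrence
    stops making sense, back up and try a different meaning."""
    assignment = assignment or {}
    if not symbols:
        return assignment
    first, rest = symbols[0], symbols[1:]
    for meaning in vocabulary:
        trial = {**assignment, first: meaning}
        if consistent(trial):
            result = decipher(rest, trial)
            if result is not None:
                return result
    return None           # nothing fits: the caller revises an earlier guess


symbols = sorted({sym for line in corpus for sym in line})
print(decipher(symbols))  # e.g. {'A': 'sun', 'B': 'rises', 'C': 'sets', 'D': 'moon'}
```

With only three lines of "text" the toy finds a consistent reading almost immediately; the point of the real process is that a huge corpus leaves very few readings that stay consistent everywhere.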
So meaning emerges from a large body of text, and it gets clearer as the amount of text grows (obviously this ties in well with the emergent 'understanding' of LLMs). The data does not need to be text - it could be recorded sensor and motor data from an agent exploring an environment. This would make the emergent meaning more grounded, and more convincingly like human understanding, much of which is derived from direct observation of the world (although we read too!). In all cases, the meaning emerges from the sheer volume of the data: as more data is added, it becomes increasingly hard to assign an incorrect meaning to a symbol without finding a usage where that meaning does not make sense.
I think LLMs understand much of the text that they process, in a basic way. This still does not amount to consciousness, but it is a step towards it. If they were trained on data from real-world experience (via a robotic interface), I think it would be possible to claim some sentience. If their mental states also self-referentially recorded and summarised their own internal states, I think we could argue for consciousness.
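Purely as an illustration of what that self-referential step might look like structurally (not a claim about any real architecture), here is a minimal sketch: an agent whose input at each step includes a summary of its own previous internal state, so later states are partly about earlier ones. The observations, the state update and the summariser are all placeholders I have invented for the example.

```python
# Minimal sketch of the self-referential loop: each new state is built from
# the outside observation plus a summary of the agent's own previous state.
# The observations, update rule and summariser are placeholders only.

def summarise(state):
    """Placeholder self-summary; a real system might learn a compressed
    description of its internal state."""
    return f"previously attending to {state['focus']}"


def update_state(state, observation, self_report):
    """Placeholder update: the new state depends on the outside world and
    on the agent's own report about its prior state."""
    return {"focus": observation, "self_model": self_report}


state = {"focus": "nothing", "self_model": ""}
for observation in ["a red cup", "running water", "its own gripper"]:
    self_report = summarise(state)          # the self-referential step
    state = update_state(state, observation, self_report)
    print(state)
```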