That's very technical language that's quite hard to follow. Not a criticism, but it is proving a barrier to my understanding. What I'm curious to know is how the models described are constructed; section 1.4 seems to be talking about things going on in the subject's head (brain?), but then section 1.4.1 talks about "interesting results" from artificial intelligence and robotics. That doesn't really seem relevant to how humans operate, and I generally consider myself to be an optimist about the prospects of reproducing consciousness in silico.
I’m genuinely sorry about the technical language. This whole area is a fucking headfuck of complexity, frankly, which is why I didn’t want to try to explain it in my own words. (I’ve read a tonne of recent papers about this stuff, and it was only after getting through loads of them that I actually started to get what they were saying, because they all assumed you knew everything else already. I wish I’d had this textbook first.)
The general theory is this, although there are lots of variations of it:
The field of psycholinguistics tries to make sense of how humans process language: what it is, what happens in the brain and so on. From pretty early on, it was obvious that language is crucial to how humans understand the world (fundamental to consciousness, really), and it also became obvious that human language is, possibly uniquely, abstracted from the immediately concrete. So the idea of “concepts” came into focus as the way that this abstraction happens. People asked: what is a concept? How do humans use them? And so on. At first, it was suggested that a concept is a list of features that every member of the category has to have. But in the early 1970s, a pioneer in the field called Eleanor Rosch noticed that people judge some members of a category as more “typical” than other members; an apple is more typical of “fruit” than an olive is. So conceptual categories can’t just be a matter of include/exclude. At that point, people started to dream up more complex systems built around things like “prototypes” that are typical of a concept. But these raised the questions of how the brain actually copes with typicality and the rest of this fuzziness, and of how you actually process the words that feed into these concepts.
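If it helps to see the include/exclude versus typicality distinction spelled out, here’s a toy sketch in Python. To be clear, this is entirely my own invention, not anything from the textbook or from Rosch: the feature names and weights are made up, and it’s only meant to show the difference between a checklist definition and a graded one.

```python
# Toy contrast between the "checklist" view of a concept and a
# prototype-style graded typicality score. Features and weights are invented.

# Classical view: a concept is a set of necessary features.
FRUIT_CHECKLIST = {"grows_on_plant", "has_seeds", "edible"}

def is_fruit_classical(features) -> bool:
    """Member if and only if it has every defining feature: in or out, no middle."""
    return FRUIT_CHECKLIST <= set(features)

# Prototype view: membership is graded by similarity to a typical exemplar.
FRUIT_PROTOTYPE = {"grows_on_plant": 1.0, "has_seeds": 1.0, "edible": 1.0,
                   "sweet": 1.0, "round": 0.8, "eaten_raw": 0.9}

def typicality(features: dict) -> float:
    """Score between 0 and 1: how closely this thing matches the prototype."""
    shared = sum(min(weight, features.get(f, 0.0))
                 for f, weight in FRUIT_PROTOTYPE.items())
    return shared / sum(FRUIT_PROTOTYPE.values())

apple = {"grows_on_plant": 1.0, "has_seeds": 1.0, "edible": 1.0,
         "sweet": 1.0, "round": 1.0, "eaten_raw": 1.0}
olive = {"grows_on_plant": 1.0, "has_seeds": 1.0, "edible": 1.0,
         "sweet": 0.0, "round": 0.7, "eaten_raw": 0.2}

# Both pass the checklist test, but the apple is the more "typical" fruit.
print(is_fruit_classical(apple), round(typicality(apple), 2))  # True 1.0
print(is_fruit_classical(olive), round(typicality(olive), 2))  # True 0.68
```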
Now, your brain has lots of modular systems through which it interprets sensory input from the world. There is a system that deals with vision, one that deals with moving your hand, one that deals with judging distance, and so on. These systems operate in both directions too: as well as making sense of what comes in, they also simulate future actions so that you can do things like throw a ball, walk, or listen out for a cuckoo. So far, so simple. All animals have these systems, more or less, and it makes sense that they would evolve because they let animals find food and mates and avoid predators more effectively. We now think that when you process words (when you “think”, really) these same simulation systems get recruited to process the meaning more rapidly. We can see this through odd experimental effects, like the different speeds at which you will react to different statements, and through things like fMRI scanning. And what you see is that the processing is contextual and tied to the meaning itself: if you say “NoXion kicks a ball” then the same motor circuits that operate when you actually kick will light up, but if you say “NoXion wants to kick a ball” then they will not.
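To make that last point a bit more concrete, here’s a cartoon version in Python. Again, this is purely my own toy illustration, not a model from any of the papers: the “modules”, word lists and rules are all invented, and real contextual recruitment is obviously nothing like a keyword check. It’s only there to show the shape of the claim: a literal action sentence recruits the motor simulation, while a “wants to” framing doesn’t.

```python
# A cartoon of contextual recruitment: invented "modules" and word lists,
# just to make the idea concrete. Not a real model of anything.

ACTION_VERBS = {"kick", "kicks", "throw", "throws", "grab", "grabs"}
ABSTRACT_MARKERS = {"wants to", "intends to", "imagines"}

def recruited_modules(sentence: str) -> list:
    """Return which toy 'modules' this sentence would recruit in the cartoon."""
    s = sentence.lower()
    modules = ["language"]                          # always involved
    literal_action = any(v in s.split() for v in ACTION_VERBS)
    abstract_frame = any(m in s for m in ABSTRACT_MARKERS)
    if literal_action and not abstract_frame:
        modules.append("motor")                     # simulate actually doing it
    return modules

print(recruited_modules("NoXion kicks a ball"))          # ['language', 'motor']
print(recruited_modules("NoXion wants to kick a ball"))  # ['language']
```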
There are live arguments about whether these modular systems process words by themselves or whether there is an additional non-modular master controller, but that’s not really important to this discussion. The point is that what you think of as “thinking” largely involves the evolved parts of the brain simulating real-world actions, coordinated with chemical and electrical signalling, which in turn produces the bodily chemistry that we then construct as “emotions”. The simulations are based on your real-world experiences of being an embodied human in particular situations.
So what the hell does all this mean? Well, one implication is that there is no single seat of “consciousness”, no little homunculus that decides what you think and feel. It’s a mess of concrete, embodied simulations, evolved for survival in different environments, combined with after-the-event meaning-making that is itself built on the same system of simulated, embodied reactions. Is that free will? I don’t see how it can be, in the sense that purists would have it. But at the same time, the simulation is unique to you, and the recursive way it gets reinterpreted at least produces the illusion of thought, which the system uses to make unique, individual decisions.
(Regarding the stuff from 1.4.1 onwards: I was going to say something in response, but then I realised that I don’t know enough about what it is referring to with respect to the experiments in computational linguistics. The subsequent pages seem to suggest that experiments have been done with programs that can learn language, and that putting these programs together results in the system generating a new language. But I don’t know what to make of that, really, or what the actual parameters of those experiments were.)