
Can Evolutionary Theory Explain Human Consciousness?

BTW - A transistor cannot make a meaningful choice. A transistor can (does) implement a computation. The two are not commensurate.
 
Fruitloop said:
BTW - A transistor cannot make a meaningful choice. A transistor can (does) implement a computation. The two are not commensurate.

But does a neuron make a meaningful choice? Is there anything about a neuron that makes it commensurate with meaning?
 
Jonti said:
Heh! Dennett does take things a tad too far on occasion :D

He seems to think the colour-blind (achromatic) neurologist could nevertheless understand what "redness" is; but I think she would not understand my perception of red at all. And nor would she understand a poem like "Silver" -- and all the semantic mark-up in the world would not help her bridge the explanatory gap.

One thing Dennett is really excellent on is the demolition of the "epiphenomenal" view of consciousness. He points out that epiphenomenalism cannot explain why someone says they are conscious. The reason he gives is that an epiphenomenal consciousness cannot affect behaviour, including verbal behaviour, in any way!

Simple answer to this. Knowledge about electromagnetic radiation and its categorical perception is A; knowledge of what it's like to perceive it is B. Mary (as I think she's called) knows A and not B. No reason to tie yourself in knots.
 
Knotted said:
But does a neuron make a meaningful choice? Is there anything about a neuron that makes it commensurate with meaning?

We're in danger of becoming lost in imprecise language here. A single neuron 'understands' no more than a transistor. There is a transition from syntax to semantics, but it isn't happening at the level of a single neuron.
 
Fruitloop said:
Gurrier - you've still failed to answer (or indeed address) the David Chalmers zombie argument. We know that all the former is possible, we're doing it right now. Is that consciousness? If so then eliminativism is true and there is no question to be answered.
Sorry, I'm ignoring a substantial proportion of the contributions so I must have missed it, will take a look and respond.
 
Knotted said:
But does a neuron make a meaningful choice? Is there anything about a neuron that makes it commensurate with meaning?
Not if it *only* implements a computation, no.

My conjecture is that meaning is added not by processing pre-existing information, but by creating new information. If there is something about a neuron that enables it to create new information, then I reckon one can make a decent argument that the workings of the nervous system enable choice.

The underlying philosophical thinking is that a physical body that can make meaningful choices is conscious.
 
Fruitloop said:
We're in danger of becoming lost in imprecise language here. A single neuron 'understands' no more than a transistor. There is a transition from syntax to semantics, but it isn't happening at the level of a single neuron.
Actually, the evidence suggests that single neurons do encode particular semantics - you may have a single 'cup' neuron, even a 'Bill Clinton' neuron, which maps precisely to the concept (i.e. when you are thinking of a cup, the neuron is excited, otherwise it is not).
 
Jonti said:
Not if it *only* implements a computation, no.

My conjecture is that meaning is added not by processing pre-existing information, but by creating new information. If there is something about a neuron that enables it to create new information, then I reckon one can make a decent argument that the workings of the nervous system enable choice.
Neurons certainly create new information - indeed the molecular mechanism is well known and Eric Kandel won the Nobel prize for identifying it. That, however, does exactly nothing for your argument. Programmes are of course capable of creating new information too.

Jonti said:
The underlying philosophical thinking is that a physical body that can make meaningful choices is conscious.
That makes plants and transistors conscious.
 
Fruitloop said:
BTW - A transistor cannot make a meaningful choice. A transistor can (does) implement a computation. The two are not commensurate.
It most definitely can. It makes the choice but not the meaning. The meaning is a function of what it is connected to.
 
gurrier said:
Sorry, I'm ignoring a substantial proportion of the contributions so I must have missed it, will take a look and respond.
It's a nonsense argument in my view. The problem is his assumption, which he never backs up, that it would be possible to completely emulate our behaviour without being conscious - that consciousness is not a requirement for acting as we do. For a start, without access to feelings or emotional data, you would have no way of actually evaluating plans and your life would simply consist of a random selection of stimulus response behaviours, which would stand out just a tad.
 
Semantics might be a bit like playing chess well.

What is a good move in chess? What does that mean? Can we formulate rules for what a good move is? What if this pawn is here instead of there? Are the rules the same?

Ultimately it is possible to work out what the best move is in every situation; it's just computationally very difficult.

But if we are not allowed such brute force methods, then what meaning can we give to a good move as opposed to a bad move?

Semantic rules are complex and, crucially, context-sensitive. How do we know when we are following the rules?

Wittgenstein's Philosophical Investigations are relevant here. Rule following is open-ended without being ad hoc. I think it should ultimately submit to computation, but it would be disappointing if it were brute force.
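
For what it's worth, here's a toy sketch of what "brute force" amounts to here - not chess, obviously, but exhaustive game-tree search over a trivially small game (21 counters, take one to three each turn, whoever takes the last counter wins). The game, the solve() routine and the %memo cache are just my own illustration; chess is the same idea in principle, only with an astronomically larger tree.

#!/usr/bin/perl
# Toy illustration only: exhaustive minimax search of a small take-away game.
use strict;
use warnings;

my %memo;    # cache of positions already solved

# Returns (score, best_move) for the player about to move:
# score is +1 if that player can force a win from this position, -1 otherwise.
sub solve {
    my ($counters) = @_;
    return (-1, undef) if $counters == 0;    # nothing left: the previous player took the last counter and won
    return @{ $memo{$counters} } if $memo{$counters};

    my ($best_score, $best_move) = (-2, undef);
    for my $take (1 .. 3) {
        next if $take > $counters;
        my ($opponent_score) = solve($counters - $take);
        my $score = -$opponent_score;         # what is bad for the opponent is good for us
        ($best_score, $best_move) = ($score, $take) if $score > $best_score;
    }
    $memo{$counters} = [ $best_score, $best_move ];
    return ($best_score, $best_move);
}

my ($score, $move) = solve(21);
printf "From 21 counters: take %d (forced win: %s)\n", $move, $score > 0 ? "yes" : "no";

The point being that nothing in there knows what a "good move" means; it just grinds out every continuation.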
 
Jonti said:
Not if it *only* implements a computation, no.

My conjecture is that meaning is added not by processing pre-existing information, but by creating new information. If there is something about a neuron that enables it to create new information, then I reckon one can make a decent argument that the workings of the nervous system enable choice.

The underlying philosophical thinking is that a physical body that can make meaningful choices is conscious.

I think I've said this before, but when I type at the keyboard my computer is receiving new information. It's still a computer. If you can identify a new type of information that somehow contains the necessary whateveritisness then you might be going somewhere. I don't think this is likely though.
 
gurrier said:
Neurons certainly create new information - indeed the molecular mechanism is well known and Eric Kandel won the Nobel prize for identifying it. That, however, does exactly nothing for your argument. Programmes are of course capable of creating new information too.

That makes plants and transistors conscious.
All good stuff, but no, just manipulating syntactical relationships (as with a collection of transistors in an adding machine) does not create new information. It only explicates the consequences of pre-existing info. In a similar way, generating the logical consequences of a set of axioms does not tell us anything new, anything that was not already implicit in the axioms.

Such data processing can extract the signal from the noise, but does not create entirely new information.

Of course, this depends on exactly what is meant by information in the first place.
 
Knotted said:
I think I've said this before but when I type at the keyboard my computer is receiving new information....
Not really, no; the information already exists in the world. It is merely transferred to your PC.
 
gurrier said:
Actually, the evidence suggests that single neurons do encode particular semantics - you may have a single 'cup' neuron, even a 'Bill Clinton' neuron, which maps precisely to the concept (i.e. when you are thinking of a cup, the neuron is excited, otherwise it is not).
I'm going to weigh into a very interesting debate way too late, and without having properly read the thread to address this
Single neurons may be found to be activated under very specific conditions, but to say that they definitely encode semantics is pushing the point rather too far for my liking (and that of most neuroscientists, I reckon).
There is no proof that they can be activated only by the specific stimuli, nor is there convincing proof that there are unique neurons for such stimuli.
The notion that high-level concepts can be represented by a single neuron is fine in principle, but there just isn't much evidence to support it. The general consensus is that neural assemblies which code for the properties of a given object are responsible as a group for supraordinate concepts. So, for example, if you see a cup, it's more likely that neurons encoding the shape, size, and colour of it are activated.
There is now controversial but increasing evidence that the representation of action words is supported not just by the "traditional" auditory and speech areas of the brain, but also by the motor regions involved in doing that action (e.g. the word "kick" activates a network which includes motor regions which control the leg).
In the face of evidence like this the "grandmother cell" hypothesis which you state above doesn't really hold up.
 
Jonti said:
Not really, no; the information already exists in the world. It is merely transferred to your PC.
You've gone down another blind alley with this information argument. If we get recursive, we can reduce your argument ad absurdum. Consider the piece of information "the computer contains a copy of this text" - it's hard to argue that this information has not been created. Trying to distinguish between information that is moved around and information that has been created is not a productive avenue for analysis - the recursive case shows that any situation can be considered as either (the piece of information that the information has been moved around has been created).
 
Jonti said:
All good stuff, but no, just manipulating syntactical relationships (as with a collection of transistors in an adding machine) does not create new information. It only explicates the consequences of pre-existing info. In a similar way, generating the logical consequences of a set of axioms does not tell us anything new, anything that was not already implicit in the axioms.

What about evolutionary algorithms (I think they're called) where the machine learns to perform a task better through practice? Is there anything new here? To us certainly. We don't know what tricks the machine will learn before it learns them. Is anything new to the universe? Does it matter for our purposes?
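
For concreteness, here is a minimal sketch of the sort of thing I have in mind: a toy (1+1) evolutionary scheme that mutates a 20-bit string and keeps any child that scores at least as well. The bit-string, fitness() and the loop bounds are just invented for the example.

#!/usr/bin/perl
# Minimal sketch of an evolutionary algorithm: mutate, then keep the variant
# that scores at least as well. Everything here is invented for illustration.
use strict;
use warnings;

my @genome = map { int rand 2 } 1 .. 20;        # random starting individual

sub fitness { my $sum = 0; $sum += $_ for @{ $_[0] }; return $sum }   # count the ones

for my $generation (1 .. 300) {
    my @child = @genome;
    $child[ int rand @child ] ^= 1;              # flip one random bit
    @genome = @child if fitness(\@child) >= fitness(\@genome);   # selection
}

printf "Evolved genome: %s (fitness %d/20)\n", join('', @genome), fitness(\@genome);

We can't say in advance which bits it will fix, or in what order, but there is nothing mysterious inside the loop itself.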
 
gurrier said:
It's a nonsense argument in my view. The problem is his assumption, which he never backs up, that it would be possible to completely emulate our behaviour without being conscious - that consciousness is not a requirement for acting as we do. For a start, without access to feelings or emotional data, you would have no way of actually evaluating plans and your life would simply consist of a random selection of stimulus response behaviours, which would stand out just a tad.
Let me turn this round ...

If consciousness is a requirement for acting as we do, then what is it that we do for which consciousness is a prerequisite??

My answer to that question is "make choices" -- and that's why I think that "nothing but" the manipulation of abstract syntax falls some way short of explaining consciousness.
 
Knotted said:
... Is anything new to the universe? Does it matter for our purposes?
Well yes, I think it does matter for the reason that the strong AI position is also one of hard determinism. Initial conditions are everything, and there is nothing new under the sun.

On the other hand, perhaps the cookie that is the universe can crumble in a variety of different but entropically equivalent ways. If so, new information can come into existence, for the future contains information that cannot be calculated from antecedent conditions.
 
perplexis said:
I'm going to weigh into a very interesting debate way too late, and without having properly read the thread to address this
Single neurons may be found to be activated under very specific conditions, but to say that they definitely encode semantics is pushing the point rather too far for my liking (and that of most neuroscientists, I reckon).
There is no proof that they can be activated only by the specific stimuli, nor is there convincing proof that there are unique neurons for such stimuli.
The notion that high-level concepts can be represented by a single neuron is fine in principle, but there just isn't much evidence to support it. The general consensus is that neural assemblies which code for the properties of a given object are responsible as a group for supraordinate concepts. So, for example, if you see a cup, it's more likely that neurons encoding the shape, size, and colour of it are activated.
There is now controversial but increasing evidence that the representation of action words is supported not just by the "traditional" auditory and speech areas of the brain, but also by the motor regions involved in doing that action (e.g. the word "kick" activates a network which includes motor regions which control the leg).
In the face of evidence like this the "grandmother cell" hypothesis which you state above doesn't really hold up.
Okay, I agree with you - I was simplifying a bit due to the fact that I'm arguing with people who don't accept that the brain is an information processor at all or that one can map semantics onto its structure.

I agree entirely that high-level concepts such as 'cup' are more likely to be encoded as the assemblage of ontological concepts which the abstract cup concept is assembled from. There is, however, some experimental evidence which suggests that there is a one-to-one relationship between some neurons and some high-level concepts. It is, of course, impossible to experimentally verify whether the particular neurons only fire for the particular postulated concepts (you could do it theoretically though if you had an accurate and complete wiring diagram for the brain). The two are not mutually exclusive either - a single neuron could serve as the link between the various lower level ontological concepts that make up the high-level concept and also be an accurate semantic mapping of the concept itself.

The idea of action words being encoded across the brain's architecture is particularly interesting. I have always felt that the NLP people had got it wrong when they saw the subject-action-object model being embedded in language - it makes much more sense to me if its manifestation in language is an indication of the pattern being embedded in a much deeper way in our cognition.
 
Jonti said:
Well yes, I think it does matter for the reason that the strong AI position is also one of hard determinism. Initial conditions are everything, and there is nothing new under the sun.

On the other hand, perhaps the cookie that is the universe can crumble in a variety of different but entropically equivalent ways. If so, new information can come into existence, for the future contains information that cannot be calculated from antecedent conditions.

I think this is a popular myth. Are algorithms by their nature deterministic? The answer is no. Determinism and computationalism are two completely distinct questions.

Stephen Jay Gould also fell for this myth with his run-evolution-all-over-again argument. Would it turn out the same? More to the point, is the question even meaningful? But that aside, nobody has ever claimed that the natural selection algorithm is deterministic. So the argument says nothing.
 
gurrier said:
Okay, I agree with you - I was simplifying a bit due to the fact that I'm arguing with people who don't accept that the brain is an information processor at all or that one can map semantics onto its structure. ...
I think everyone in the discussion (with the possible exception of the romantic dualists) accepts both of these points.
 
gurrier said:
It most definitely can. It makes the choice but not the meaning. The meaning is a function of what it is connected to.

So it makes meaning in the same sense that an ankle plays football?
 
gurrier said:
Okay, I agree with you - I was simplifying a bit due to the fact that I'm arguing with people who don't accept that the brain is an information processor at all or that one can map semantics onto its structure.
No, you've misunderstood what people think. Have you read the thread all the way through?
 
Knotted said:
I think this is a popular myth. Are algorithms by their nature deterministic? The answer is no. Determinism and computationalism are two completely distinct questions.

Stephen Jay Gould also fell for this myth with his run-evolution-all-over-again argument. Would it turn out the same? More to the point, is the question even meaningful? But that aside, nobody has ever claimed that the natural selection algorithm is deterministic. So the argument says nothing.
Steve Jones -- and I suspect Dawkins as well -- would agree with Gould that the outcome of evolution would not be the same if things were replayed.

Although physicists like Hawking seem sure that the information content of the universe never changes, biologists don't seem to share that view. Steve Jones for one is quite explicit that evolution does create information.

But what is this "information" stuff, anyway?
 
Fruitloop said:
No, you've misunderstood what people think. Have you read the thread all the way through?
Yeah, with the exception of a couple of contributors.

What have I missed?
 
gurrier said:
You've gone down another blind alley with this information argument. If we get recursive, we can reduce your argument ad absurdum. Consider the piece of information "the computer contains a copy of this text" - it's hard to argue that this information has not been created. Trying to distinguish between information that is moved around and information that has been created is not a productive avenue for analysis - the recursive case shows that any situation can be considered as either (the piece of information that the information has been moved around has been created).

There's a world of difference between copying and referencing in terms of computer science. In Perl, for example, if @info is ( 'dog', 'cat' ) and you say

@moreinfo = @info

then @moreinfo is ( 'dog', 'cat' ) . However you could also say

$more_info = \@info, and notice that @$more_info is still ( 'dog', 'cat' ) . However, something fundamentally different has happened here - in one sense more 'info' has been created, and in the other it hasn't. Many a newb has fallen foul of this confusion.
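
A self-contained version of the same point, in case anyone wants to run it (the variable names are just the ones from above):

#!/usr/bin/perl
use strict;
use warnings;

my @info = ('dog', 'cat');

my @moreinfo  = @info;       # a copy: a second, independent list now exists
my $more_info = \@info;      # a reference: just another way of pointing at the same list

push @info, 'fish';

print "copy:      @moreinfo\n";      # dog cat        -- the copy is unaffected
print "reference: @$more_info\n";    # dog cat fish   -- the reference sees the change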
 
gurrier said:
It's a nonsense argument in my view. The problem is his assumption, which he never backs up, that it would be possible to completely emulate our behaviour without being conscious - that consciousness is not a requirement for acting as we do. For a start, without access to feelings or emotional data, you would have no way of actually evaluating plans and your life would simply consist of a random selection of stimulus response behaviours, which would stand out just a tad.

OK, we're not talking about consciousness with the same meaning. A Chalmers zombie can have 'feelings', it just doesn't possess the fact of what it's like to have feelings.
 
Jonti said:
Steve Jones -- and I suspect Dawkins as well -- would agree with Gould that the outcome of evolution would not be the same if things were replayed.

Although physicists like Hawking seem sure that the information content of the universe never changes, biologists don't seem to share that view. Steve Jones for one is quite explicit that evolution does create information.

But what is this "information" stuff, anyway?

There's a philosophical answer to the question and an evolutionary answer.

The philosophical answer is that the universe has only been run once. If you want to run it again in your imagination then it is up to you whether it would be exactly the same or not.

The evolutionary answer is that the theory of evolution by natural selection has never claimed to predict exactly how species evolve. It is not that sort of theory. Dennett deals with this question quite nicely.
 
Fruitloop said:
OK, we're not talking about consciousness with the same meaning. A Chalmers zombie can have 'feelings', it just doesn't possess the fact of what it's like to have feelings.
This is my problem though - to me feelings cannot be removed from "what it's like to have feelings" - they are inherently and absolutely teleological and inseparable from what it's like to have them. The whole point of sadness, for example, is that it's unpleasant and that it prompts the organism to seek ways to escape it. Sadness without the subjective unpleasantness is not sadness.
 