Fruitloop said: BTW - A transistor cannot make a meaningful choice. A transistor can (does) implement a computation. The two are not commensurate.
Jonti said: Heh! Dennett does take things a tad too far on occasion.
He seems to think the colour-blind (achromatic) neurologist could nevertheless understand what "redness" is; but I think she would not understand my perception of red at all. And nor would she understand a poem like "Silver" -- and all the semantic mark-up in the world would not help her bridge the explanatory gap.
One thing Dennett is really excellent on is the demolition of the "epiphenomenal" view of consciousness. He points out that epiphenomenalism cannot explain why someone says they are conscious. The reason he gives is that an epiphenomenal consciousness cannot affect behaviour, including verbal behaviour, in any way!
Knotted said: But does a neuron make a meaningful choice? Is there anything about a neuron that makes it commensurate with meaning?
Sorry, I'm ignoring a substantial proportion of the contributions so I must have missed it, will take a look and respond.
Fruitloop said: Gurrier - you've still failed to answer (or indeed address) the David Chalmers zombie argument. We know that all the former is possible, we're doing it right now. Is that consciousness? If so then eliminativism is true and there is no question to be answered.
Not if it *only* implements a computation, no.
Knotted said: But does a neuron make a meaningful choice? Is there anything about a neuron that makes it commensurate with meaning?
Actually, the evidence suggests that single neurons do encode particular semantics - you may have a single 'cup' neuron, even a 'Bill Clinton' neuron, which maps precisely to the concept (i.e. when you are thinking of a cup, the neuron is excited, otherwise it is not).
Fruitloop said: We're in danger of becoming lost in imprecise language here. A single neuron 'understands' no more than a transistor. There is a transition from syntax to semantics, but it isn't happening at the level of a single neuron.
Neurons certainly create new information - indeed the molecular mechanism is well known and Eric Kandel won the Nobel prize for identifying it. That, however, does exactly nothing for your argument. Programmes are of course capable of creating new information too.
Jonti said: Not if it *only* implements a computation, no.
My conjecture is that meaning is added not by processing pre-existing information, but by creating new information. If there is something about a neuron that enables it to create new information, then I reckon one can make a decent argument that the workings of the nervous system enable choice.
That makes plants and transistors conscious.
Jonti said: The underlying philosophical thinking is that a physical body that can make meaningful choices is conscious.
It most definitely can. It makes the choice but not the meaning. The meaning is a factor of what it is connected to.
Fruitloop said: BTW - A transistor cannot make a meaningful choice. A transistor can (does) implement a computation. The two are not commensurate.
It's a nonsense argument in my view. The problem is his assumption, which he never backs up, that it would be possible to completely emulate our behaviour without being conscious - that consciousness is not a requirement for acting as we do. For a start, without access to feelings or emotional data, you would have no way of actually evaluating plans and your life would simply consist of a random selection of stimulus-response behaviours, which would stand out just a tad.
gurrier said: Sorry, I'm ignoring a substantial proportion of the contributions so I must have missed it, will take a look and respond.
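To put that point about evaluating plans in deliberately crude computational terms, here is a toy sketch (the plan names, the scores, and the very idea of a single numeric "value" are invented for illustration, not a claim about how brains actually work): strip out the valuation step and plan selection collapses into an arbitrary pick.

```python
import random

# Toy sketch only: "plans" scored by an affective value versus plans picked
# with no valuation at all. Names and numbers are invented for illustration.
plans = {
    "apologise and repair the friendship": 0.9,
    "ignore the problem": 0.2,
    "lash out": 0.1,
}

def choose_with_valuation(plans):
    """Pick the plan whose imagined outcome is valued most highly."""
    return max(plans, key=plans.get)

def choose_without_valuation(plans):
    """With no access to any valuation, all that is left is an arbitrary pick."""
    return random.choice(list(plans))

print(choose_with_valuation(plans))     # always the highly valued plan
print(choose_without_valuation(plans))  # random - behaviour that "stands out just a tad"
```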
Jonti said: Not if it *only* implements a computation, no.
My conjecture is that meaning is added not by processing pre-existing information, but by creating new information. If there is something about a neuron that enables it to create new information, then I reckon one can make a decent argument that the workings of the nervous system enable choice.
The underlying philosophical thinking is that a physical body that can make meaningful choices is conscious.
All good stuff, but no, just manipulating syntactical relationships (as with a collection of transistors in an adding machine) does not create new information. It only explicates the consequences of pre-existing info. In a similar way, generating the logical consequences of a set of axioms does not tell us anything new, anything that was not explicit in the axioms.
gurrier said: Neurons certainly create new information - indeed the molecular mechanism is well known and Eric Kandel won the Nobel prize for identifying it. That, however, does exactly nothing for your argument. Programmes are of course capable of creating new information too.
That makes plants and transistors conscious.
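A minimal sketch of that point about axioms and their consequences (a toy forward-chaining loop; the propositions and rules are invented for illustration): every fact the loop derives was already fixed by the axioms plus the rules, so the run explicates information without adding any.

```python
# Toy forward chaining over Horn-clause rules. Everything the loop "derives"
# was already implicit in the axioms plus the rules; nothing new is added.
axioms = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

known = set(axioms)
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= known and conclusion not in known:
            known.add(conclusion)  # explicates a consequence; adds no new information
            changed = True

print(sorted(known))
# ['socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die']
```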
Not really, no; the information already exists in the world. It is merely transferred to your PC.
Knotted said: I think I've said this before but when I type at the keyboard my computer is receiving new information....
I'm going to weigh into a very interesting debate way too late, and without having properly read the thread to address this
gurrier said: Actually, the evidence suggests that single neurons do encode particular semantics - you may have a single 'cup' neuron, even a 'Bill Clinton' neuron, which maps precisely to the concept (i.e. when you are thinking of a cup, the neuron is excited, otherwise it is not).
You've gone down another blind alley with this information argument. If we get recursive, we can reduce your argument ad absurdum. Consider the piece of information "the computer contains a copy of this text" - it's hard to argue that this information has not been created. Trying to distinguish between information that is moved around and information that has been created is not a productive avenue for analysis - the recursive case shows that any situation can be considered as either (the piece of information that says the information has been moved around has itself been created).
Jonti said: Not really, no; the information already exists in the world. It is merely transferred to your PC.
Jonti said: All good stuff, but no, just manipulating syntactical relationships (as with a collection of transistors in an adding machine) does not create new information. It only explicates the consequences of pre-existing info. In a similar way, generating the logical consequences of a set of axioms does not tell us anything new, anything that was not explicit in the axioms.
Let me turn this round ...
gurrier said: It's a nonsense argument in my view. The problem is his assumption, which he never backs up, that it would be possible to completely emulate our behaviour without being conscious - that consciousness is not a requirement for acting as we do. For a start, without access to feelings or emotional data, you would have no way of actually evaluating plans and your life would simply consist of a random selection of stimulus-response behaviours, which would stand out just a tad.
Well yes, I think it does matter for the reason that the strong AI position is also one of hard determinism. Initial conditions are everything, and there is nothing new under the sun.
Knotted said: ... Is anything new to the universe? Does it matter for our purposes?
Okay, I agree with you - I was simplifying a bit due to the fact that I'm arguing with people who don't accept that the brain is an information processor at all or that one can map semantics onto its structure.
perplexis said: I'm going to weigh into a very interesting debate way too late, and without having properly read the thread to address this
Single neurons may be found to be activated under very specific conditions, but to say that they definitely encode semantics is pushing the point rather too far for my liking (and that of most neuroscientists, I reckon).
There is no proof that they can be activated only by the specific stimuli, nor is there convincing proof that there are unique neurons for such stimuli.
The notion that high-level concepts can be represented by a single neuron is fine in principle, but there just isn't much evidence to support it. The general consensus is that neural assemblies which code for the properties of a given object are responsible as a group for supraordinate concepts. So, for example, if you see a cup, it's more likely that neurons encoding the shape, size, and colour of it are activated.
There is now controversial but increasing evidence that the representation of action words is supported not just by the "traditional" auditory and speech areas of the brain, but also by the motor regions involved in doing that action (e.g. the word "kick" activates a network which includes motor regions which control the leg).
In the face of evidence like this the "grandmother cell" hypothesis which you state above doesn't really hold up.
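A crude way to picture the contrast (the feature names here are invented, and real population codes are far richer): a "grandmother cell" scheme dedicates one unit to each whole concept, while an assembly scheme represents "cup" as a pattern across feature-selective units.

```python
# Two toy coding schemes for the concept "cup" (feature names invented).

# "Grandmother cell": one dedicated unit per whole concept.
grandmother_code = {"cup": 1, "bill_clinton": 0}

# Assembly coding: the concept is a pattern over feature-selective units.
features = ["curved_shape", "graspable", "holds_liquid", "face", "speaks"]
cup_assembly = {f: int(f in {"curved_shape", "graspable", "holds_liquid"}) for f in features}

print(grandmother_code)
print(cup_assembly)
# Losing one feature unit degrades the assembly pattern gracefully;
# losing the single "cup" unit wipes the concept out entirely.
```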
Jonti said: Well yes, I think it does matter for the reason that the strong AI position is also one of hard determinism. Initial conditions are everything, and there is nothing new under the sun.
On the other hand, perhaps the cookie that is the universe can crumble in a variety of different but entropically equivalent ways. If so, new information can come into existence, for the future contains information that cannot be calculated from antecedent conditions.
I think everyone in the discussion (with the possible exception of the romantic dualists) accepts both of these points.
gurrier said: Okay, I agree with you - I was simplifying a bit due to the fact that I'm arguing with people who don't accept that the brain is an information processor at all or that one can map semantics onto its structure. ...
gurrier said: It most definitely can. It makes the choice but not the meaning. The meaning is a factor of what it is connected to.
No, you've misunderstood what people think. Have you read the thread all the way through?
gurrier said: Okay, I agree with you - I was simplifying a bit due to the fact that I'm arguing with people who don't accept that the brain is an information processor at all or that one can map semantics onto its structure.
Steve Jones -- and I suspect Dawkins as well -- would agree with Gould that the outcome of evolution would not be the same, if things were replayed.
Knotted said: I think this is a popular myth. Are algorithms by their nature deterministic? The answer is no. Determinism and computationalism are two completely distinct questions.
Stephen Jay Gould also fell for this myth with his run-evolution-all-over-again argument. Would it turn out the same? More to the point, is the question even meaningful? But that aside, nobody has ever claimed that the natural selection algorithm is deterministic. So the argument says nothing.
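The non-determinism is easy to make concrete. Here is a toy selection loop (the fitness function, mutation rate, and population size are all invented for illustration): it is unambiguously an algorithm, yet replaying it from the same starting point generally ends somewhere different unless the random seed is pinned down.

```python
import random

# Toy "replay the tape" selection loop. It is an algorithm, but not a
# deterministic one: two runs from the same start rarely agree unless the
# random seed is fixed. All parameters are invented for illustration.

def fitness(genome):
    return sum(genome)  # stand-in fitness: just count the 1s

def evolve(generations=50, pop_size=10, length=20, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        parent = max(population, key=fitness)  # select the fittest
        population = [                         # breed mutated copies
            [1 - bit if random.random() < mutation_rate else bit for bit in parent]
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

print(evolve())  # run it twice: the replayed tape seldom comes out the same
```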
Yeah, with the exception of a couple of contributors.
Fruitloop said: No, you've misunderstood what people think. Have you read the thread all the way through?
gurrier said: You've gone down another blind alley with this information argument. If we get recursive, we can reduce your argument ad absurdum. Consider the piece of information "the computer contains a copy of this text" - it's hard to argue that this information has not been created. Trying to distinguish between information that is moved around and information that has been created is not a productive avenue for analysis - the recursive case shows that any situation can be considered as either (the piece of information that says the information has been moved around has itself been created).
gurrier said: It's a nonsense argument in my view. The problem is his assumption, which he never backs up, that it would be possible to completely emulate our behaviour without being conscious - that consciousness is not a requirement for acting as we do. For a start, without access to feelings or emotional data, you would have no way of actually evaluating plans and your life would simply consist of a random selection of stimulus-response behaviours, which would stand out just a tad.
Jonti said: Steve Jones -- and I suspect Dawkins as well -- would agree with Gould that the outcome of evolution would not be the same, if things were replayed.
Although physicists like Hawking seem sure that the information content of the universe never changes, biologists don't seem to share that view. Steve Jones for one is quite explicit that evolution does create information.
But what is this "information" stuff, anyway?
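One precise (if narrow) answer is Shannon's: the information in an outcome is its surprisal. It says nothing at all about meaning, which is rather the point of the question, but it is presumably closer to what the physicists have in mind. A minimal sketch:

```python
import math

# Shannon's measure: the information in an outcome is its surprisal,
# -log2(probability); entropy is the average surprisal of a source.
def surprisal(p):
    return -math.log2(p)

def entropy(probs):
    return sum(p * surprisal(p) for p in probs if p > 0)

print(surprisal(0.5))         # 1.0 bit - a fair coin flip
print(entropy([0.5, 0.5]))    # 1.0 bit
print(entropy([0.99, 0.01]))  # ~0.08 bits - a very predictable source
```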
This is my problem though - to me feelings cannot be removed from "what it's like to have feelings" - they are inherently and absolutely teleological and inseparable from what it's like to have them. The whole point of sadness, for example, is that it's unpleasant and that it prompts the organism to seek ways to escape it. Sadness without the subjective unpleasantness is not sadness.
Fruitloop said: OK, we're not talking about consciousness with the same meaning. A Chalmers zombie can have 'feelings', it just doesn't possess the fact of what it's like to have feelings.