
Can Evolutionary Theory Explain Human Consciousness?

8ball said:
Any chance of fleshing that out a bit for one unfamiliar with the terms - I think I know what you mean but could be misunderstanding.

The question is whether we can conceive of a world which is physically just like ours but where we don't experience qualia (or where qualia were inverted so that light of the wavelength that we currently associate with redness 'looked' green to us). If light of a particular wavelength had to be associated with the experience of what redness looks like to me (it might of course look different to you anyway - how could I know?) then there would have to be a physical (neural) difference when the qualia were missing or inverted, so the worlds would be physically different. But the first-hand redness (the qual) of red is arbitrary, so physically identical worlds with inverted or absent qualia are conceivable, even if they don't actually exist.
 
littlebabyjesus said:
It's odd how some folk seem to find their own insignificance galling. Inasmuch as I find it anything at all, it is quite reassuring to me.

You don't have to be an animal to be insignificant, though.
 
8ball said:
In which case I'm completely lost as to where you were going with your 'there is nothing to explain' argument. :confused:

I don't agree with you - I just think what you are saying makes sense! I'm trying not to be too argumentative. :)

Is there anything in between subjective experience and what occurs during that subjective experience? We still don't know what subjective experience is, so we can't answer that!
 
Knotted said:
Is there anything in between subjective experience and what occurs during that subjective experience?

It's not a requirement that there be something 'in between'. The fact that there are two things that that thing could be 'in between' is all the explanatory gap you need. Otherwise there would just be one thing, and 'between' would make no sense.
 
Fruitloop said:
The question is whether we can conceive of a world which is physically just like ours but where we don't experience qualia (or where qualia were inverted so that light of the wavelength that we currently associate with redness 'looked' green to us). If light of a particular wavelength had to be associated with the experience of what redness looks like to me (it might of course look different to you anyway - how could I know?) then there would have to be a physical (neural) difference when the qualia were missing or inverted, so the worlds would be physically different. But the first-hand redness (the qual) of red is arbitrary, so physically identical worlds with inverted or absent qualia are conceivable, even if they don't actually exist.

Bizarrely, that's what I thought you meant. :)

This is also why I think a paradigm beyond evolutionary theory will be involved in the explanation, and why I think the answer to the OP's question is 'no'.
 
Fruitloop said:
The main problem with the identity argument that gurrier is making about consciousness and neurology is that it makes the identity of the physical process and consciousness/qualia a 'brute fact' about the world - i.e. it is explanatorily primitive. This may be the case, but if true it is a very strange state of affairs since the only other brute facts are the fundamental physical laws. Gurrier is also wrong that a world that was physically identical to our own but lacked consciousness or qualia is inconceivable - because for that to be true physical facts and phenomenal facts would need to be necessarily related, and we know that they aren't (just ask a bat ;) )

So the question of consciousness is not some philosophical word-game, it's an attempt to explain important phenomena in our experience of the world. Of course you can deny the existence of consciousness at all via a Churchland-style eliminativism, which says that once you have explained the functions of the brain and their integration there is nothing left to describe, but this doesn't really match up with most people's experience of what it is like, first hand, to be them.

I can't really say much else in reply here. There is no tangible problem to be explained. Maybe there is still a problem nevertheless. But one thing I think we should be clear about is that we are doing philosophy, not science here. Until we can define meaningfully what the hard problem is then discussing it as a scientific problem is wrongheaded.
 
8ball said:
It's not a requirement that there be something 'in between'. The fact that there are two things that that thing could be 'in between' is all the explanatory gap you need. Otherwise there would just be one thing, and 'between' would make no sense.

But are there two things? What sort of thing is a qual?
 
Knotted said:
Until we can define meaningfully what the hard problem is then discussing it as a scientific problem is wrongheaded.

Where was it being discussed as a scientific problem? :confused:
 
Knotted said:
But are there two things? What sort of thing is a qual?

You said there were two things - subjective experience and what occurs during that subjective experience - I'm just quoting ;)
 
8ball said:
You said there were two things - subjective experience and what occurs during that subjective experience - I'm just quoting ;)

I'm not sure I did, but if I did then I was being careless.
 
Knotted said:
I'm not sure I did, but if I did then I was being careless.

Ok, cool.

As for defining a 'hard' problem and needing a 'solid' definition of what a 'qual' is before even deciding if there's anything to be explained - I think that's mistaken.

There's no really 'solid' definition of what a 'thought' is either, but no one disputes that there are such things as 'thoughts' - it's just that 'thought' is a term we use all the time, so we're more comfortable with it.
 
Fruitloop said:
and why I think the answer to the OP's question is 'no'.

Which I pointed out about a zillion posts ago - evolutionary theory can offer explanations as to HOW something came to be and why it's retained, but it can't 'explain' what something is...and it doesn't try to...the whole question is a straw man of epic proportions...
 
Fruitloop said:
The main problem with the identity argument that gurrier is making about consciousness and neurology is that it makes the identity of the physical process and consciousness/qualia a 'brute fact' about the world - i.e. it is explanatorily primitive. This may be the case, but if true it is a very strange state of affairs since the only other brute facts are the fundamental physical laws. Gurrier is also wrong that a world that was physically identical to our own but lacked consciousness or qualia is inconceivable - because for that to be true physical facts and phenomenal facts would need to be necessarily related, and we know that they aren't (just ask a bat ;) )

So the question of consciousness is not some philosophical word-game, it's an attempt to explain important phenomena in our experience of the world. Of course you can deny the existence of consciousness at all via a Churchland-style eliminativism, which says that once you have explained the functions of the brain and their integration there is nothing left to describe, but this doesn't really match up with most people's experience of what it is like, first hand, to be them.

To dwyer's nonsense I will return presently.
1. Gurrier is never wrong :)
2. I think that the problem of creating something which was capable of emulating a human would be intractable without qualia (according to how I've defined them). This is unaffected by bats ;)

Anyway, I've probably stated and restated my argument enough times now to leave it, but one last summary of my argument:
a) consciousness can be explained as the brain's long-term planning function
b) qualia are the subset of those concepts, associations and emotions which the brain passes to the consciousness in response to a particular stimulus
c) the consciousness has access to qualia because it needs that information in order to plan.
d) it doesn't matter what they subjectively feel like - we're interested in function here - no more than we care what a particular data-model feels like to a computer. It has to feel like something and the particular form that is used to communicate emotional and associative data to the consciousness is probably a highly fine-tuned and efficient choice, considering the fact that it is dealing with an unimaginably complex context model which we couldn't even dream of replicating.

From this argument, the hard problem disappears.
 
kyser_soze said:
Which I pointed out about a zillion posts ago - evolutionary theory can offer explanations as to HOW something came to be and why it's retained, but it can't 'explain' what something is...and it doesn't try to...the whole question is a straw man of epic proportions...

I think evolutionary theory can offer more insights than just that but I think I agree with your general argument.

I didn't read your posts earlier - I was sort of waiting for the debate to 'mature'* a little after some initial bunfights - so I've not read the whole thread in detail. :oops:


* - not that I'm saying the debate is now 'mature', but it's a bit more temperate at least
 
kyser_soze said:
Which I pointed out about a zillion posts ago - evolutionary theory can offer explanations as to HOW something came to be and why it's retained, but it can't 'explain' what something is...and it doesn't try to...the whole question is a straw man of epic proportions...

Ah yes, you are right. The question should be rephrased: Can Evolutionary Theory Explain How Human Consciousness Came to Be?
 
gurrier said:
1. Gurrier is never wrong :)
2. I think that the problem of creating something which was capable of emulating a human would be intractable without qualia (according to how I've defined them). This is unaffected by bats ;)

Anyway, I've probably stated and restated my argument enough times now to leave it, but one last summary of my argument:
a) consciousness can be explained as the brain's long-term planning function
b) qualia are the subset of those concepts, associations and emotions which the brain passes to the consciousness in response to a particular stimulus
c) the consciousness has access to qualia because it needs that information in order to plan.
d) it doesn't matter what they subjectively feel like - we're interested in function here - no more than we care what a particular data-model feels like to a computer. It has to feel like something and the particular form that is used to communicate emotional and associative data to the consciousness is probably a highly fine-tuned and efficient choice, considering the fact that it is dealing with an unimaginably complex context model which we couldn't even dream of replicating.

From this argument, the hard problem disappears.

a: If this is all consciousness is then the word is superfluous. This kind of reasoning earned Dennett's 'Consciousness Explained' book the moniker 'Consciousness Ignored'. Personally I have no idea why qualia are necessary for long-term planning - there are desires and aversions, a semantic map of some kind, a symbolic order, and an abstract self-object; all of these are quite adequate for long-term planning (unless by planning you mean something radically different from its common usage).

b '...which the brain passes to the consciousness...'??? What fresh madness is this? :eek: This is completely incompatible with the identity theory you were expounding earlier, besides which I have no idea what it could mean. Either the brain is the consciousness, in which case what is being passed to what, or it isn't and you are back with the hard problem. Or some kind of substance or property dualism, neither of which is particularly attractive.

'c', I'm afraid, is bunk. It's trivially easy to imagine a system with a basic set of desires/aversions and a semantic system for overcoming obstacles to achieve them. There's no need for it to know what it feels like to run your hand along the surface of the desk for such a system to function. Such an entity is precisely the 'philosophical zombie'.

With regard to d: it doesn't feel like anything for a computer to implement a 'data model'! That's why we're conscious and they aren't. This is precisely the point that you seem unable to grasp.

Finally, please note that the claim that function is all that needs to be explained is in no sense a scientific claim - it's a philosophical claim through and through. Therefore it needs a philosophical justification, and what I've seen so far from you has been very, very weak.
 
It's possible that gurrier is, in the philosophical sense, a 'zombie', and that qualia is just a name for arbitrary labels that his brain uses to identify the source of various internal signals.
 
Nikolai said:
Ah yes, you are right. The question should be re-phrased: Can Evolutionary Theory Explain How Human Consciousness came to be?
Fuck me, you're not serious? :eek: 440 posts just to establish that the question needs an extra word to be phrased properly...
 
Knotted said:
I can't really say much else in reply here. There is no tangible problem to be explained. Maybe there is still a problem nevertheless. But one thing I think we should be clear about is that we are doing philosophy, not science here. Until we can define meaningfully what the hard problem is then discussing it as a scientific problem is wrongheaded.

You wound me! :( I never said it was a scientific problem. More on this later.....
 
Knotted said:
I think all the qualia argument does is say that we cannot explain what it is to us to have a subjective experience - regardless of whether we use AI or poetry. It does not present us with an explanatory gap. There is nothing to explain.
I think all the qualia argument does is attempt to preserve dualism once you've dropped the idea of the thinking thing distinct from the extended thing.

</broken record>
 
nosos said:
I think all the qualia argument does is attempt to preserve dualism once you've dropped the idea of the thinking thing distinct from the extended thing.

</broken record>

I thought once you dropped the distinction then dualism was kinda fucked either way.

:confused:
 
If you drop dualism, the notion of private subjective experience ceases to make sense and we no longer need qualia. I mean, you have Gurrier on this thread talking about qualia as things we have that enable us to do stuff: substitute "ideas" for "qualia" and you basically have Locke. The whole notion of an inner world of representations (subjective private experiences that serve epistemically to represent the outer objective public world) makes sense when you draw an ontological distinction between mind and world. It's just that people have unfortunately kept the Cartesian epistemology, or at least the basic framework of it, even though they've dropped the ontology. Which is completely and utterly fine for the natural sciences, but it has blighted the human sciences for much of the last century and, with the popularity of evolutionary psychology, not to mention the ever-present feeling that eliminative physicalism represents some sort of radical strike against theism, looks likely to continue.
 
nosos said:
If you drop dualism, the notion of private subjective experience ceases to make sense . . .

I think you'll need to back that up a bit rather than expect everyone to immediately take it seriously.
 
Fruitloop said:
'c', I'm afraid, is bunk. It's trivially easy to imagine a system with a basic set of desires/aversions and a semantic system for overcoming obstacles to achieve them. No need for it to know what it feels like to run your hand along the surface of the desk for such a system to function. The former entity is precisely the 'philosophical zombie'.

I might not be a philosophical zombie but everybody else is. Nobody has feelings but me. Everybody else merely behaves as if they have feelings.

If we use the word 'qualia' with any consistency then the above is not just a contention - it REALLY IS TRUE. Only I have qualia. Only I am subjective. I have nobody else's feelings.

The reason why the qualia belief is nonsense (it's not even wrong!) is not that it logically leads to psychopathy. It is because two different people now correctly believe two opposite things. They each necessarily believe the other to be a zombie. They can't both be right and they can't both be wrong.

It is only because we naturally do not talk about subjective feelings in a strictly consistent and meaningful way that we do not arrive at the above conclusion. We imagine what it is like for other people to have an experience. We imagine experiences are universal. We imagine experience is an entity or quality that we share. This is very natural for social, abstract-thinking beings such as ourselves. This naturalness does not make it a logical argument.
 