
Can Evolutionary Theory Explain Human Consciousness?

gurrier said:
Yes there is. Whatever the qualia that distinguishes pain for a complex robot is - it's going to have to be remarkably similar to our pain. It needs to catch its attention very quickly and it needs to produce a very strong negative effect on the overall goodness score, trumping all other factors, in order to produce a very strong and immediate desire for it to stop.

Dude, these assertions are completely unsupported. Maybe it's a rescue robot and if it emerges from a burning building with a baby in its one remaining arm and every other bit of it burnt to a crisp then that counts as a resounding success. You can't just say this stuff and expect us to take it as gospel.
 
gurrier said:
Your CPU monitor has such a simple model of the universe and such a simple goodness function that its existence is unimaginably emotionally impoverished. Indeed, since it doesn't actually do any planning at all, it doesn't need to have any idea of state at all, or a goodness function - just a handful of rules to follow.

Computation is a handful of rules to follow; it's just that the handful can be large or small. Consciousness in the sense of the hard problem is the question of why we have qualitative experiences to accompany our computations, given that we don't appear to need them.
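
To put it concretely, the monitor's 'handful of rules' could be written out in a few lines - a toy sketch, with invented thresholds and messages:

```python
# A toy CPU monitor: a handful of stateless condition -> action rules.
# No model of the world, no planning, no goodness function.
# (Thresholds and messages are invented for illustration.)

def monitor(cpu_load_percent):
    if cpu_load_percent >= 95:
        return "ALARM: CPU critical"
    if cpu_load_percent >= 80:
        return "WARN: CPU high"
    return "OK"

print(monitor(99))   # -> ALARM: CPU critical
print(monitor(42))   # -> OK
```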
 
Fruitloop said:
Dude, these assertions are completely unsupported. Maybe it's a rescue robot and if it emerges from a burning building with a baby in its one remaining arm and every other bit of it burnt to a crisp then that counts as a resounding success. You can't just say this stuff and expect us to take it as gospel.
In other words - if it is programmed with a different goodness function, then different states will have different meaning to it - that's the whole point.
 
They don't have any 'meaning' to it at all - that's the whole point. It has no more meaning than the ball-cock valve in your toilet cistern. Cistern full - switch off flow. Cistern empty - switch on flow. It's not an explanation of subjective experience, just a denial of it.
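
Spelled out, the entire 'behaviour' of the ball-cock is a single rule (a toy sketch; the threshold value is arbitrary):

```python
# The ball-cock valve's entire "behaviour", written out.
# One rule, no memory, no model: the float level *is* the world.

def flow_on(float_level, full_level=1.0):
    return float_level < full_level   # cistern not full -> flow on

print(flow_on(0.2))   # -> True  (cistern empty-ish: switch on flow)
print(flow_on(1.0))   # -> False (cistern full: switch off flow)
```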
 
I have read some theories about the dimension of time not being "real". As Einstein explained, time is not absolute and varies from observer to observer depending upon how they are moving and accelerating in space.

I find time just as difficult to get my head around as consciousness itself.

What does consciousness mean without time? Would time exist without consciousness?

I reckon they must be the same thing (but that hinges on my own hunch that time does not exist without consciousness).

Can evolutionary theory explain consciousness?
would lead to...
Can evolutionary theory explain time?
would then lead to...
What is evolution without time?
which I find a rather meaningless question, because what is change without time?

To get away from the time problem, perhaps we need to remember that the apparent time in our universe is not the only one (other universes may exist - some without time, some with time, some with time and conscious observers).

I think the only way evolutionary theory could explain it is to consider the "many worlds" approach and apply evolutionary theory to a multiverse of possibilities. It just so happens we are in a particular style of universe which is conducive to the production of conscious beings.
 
Fruitloop said:
They don't have any 'meaning' to it at all - that's the whole point.
That's a totally wrong whole point then. The goodness function gives everything that may influence it a real meaning for the robot.

I mean you could just as easily assert that emotions have no meaning - they're just the output of evolution's goodness function, which will be processed by a bit of code.

The point where I think we diverge is that you do not agree that the meaningfulness of things to us could depend on how they affect our goodness function. Perhaps you can explain to me exactly what rules this out.
 
Fruitloop said:
They don't have any 'meaning' to it at all - that's the whole point. It has no more meaning than the ball-cock valve in your toilet cistern. Cistern full - switch off flow. Cistern empty - switch on flow. It's not an explanation of subjective experience, just a denial of it.


Richard Dawkins has tried to explain it with respect to how complex a system becomes. I am not quoting him, but I remember something along the following lines:

The toilet cistern, like you say, is very simple. But when something gets very, very complex, with billions and billions of inputs and outputs, it needs to include a model of itself in its own model of reality - at this point it becomes self-aware.

Not totally satisfactory - where is the cut-off point?
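
A caricature of the idea, as I remember it (a toy sketch - 'self-aware' here only means that the world model contains an entry for the modeller, which rather underlines the cut-off worry):

```python
# Toy caricature of the Dawkins idea: a system whose model of reality
# comes to contain a model of the system itself.
# (Class and attribute names are invented for illustration.)

class Modeller:
    def __init__(self, name):
        self.name = name
        self.world_model = {}            # its model of reality

    def observe(self, thing, state):
        self.world_model[thing] = state

    def model_self(self):
        # Add an entry for *itself* to its own world model.
        self.world_model[self.name] = {"knows_about": list(self.world_model)}

m = Modeller("robot")
m.observe("cistern", "full")
m.model_self()
print(m.world_model["robot"])   # -> {'knows_about': ['cistern']}
```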
 
Think about the thought experiment of Mary the colour-deprived neurologist. Mary knows everything there is to know about colour and the neural processing of colour, and yet she has lived her whole life in a black-and-white room. Now either you think that Mary learns nothing new about redness when she goes outside and sees a fire-engine (which she instantly recognises as being red) - the no qualia argument - or you think that she learns something new that is ineffable about the first-person phenomenal properties of redness.

Now a robot could recognise redness as a particular section of the electromagnetic spectrum (700nm) without ever gaining the piece of knowledge that Mary gains when she finally goes outside and sees a postbox.

Maybe you think there is no new information (and thus there are no qualia), but the information she has before she leaves the room that enables her to recognise redness (which is what gurrier is referring to) is not qualia - if qualia exist then they can only be the new information she gains when she finally goes outside and sees a red thing.
 
Fruitloop said:
Think about the thought experiment of Mary the colour-deprived neurologist. Mary knows everything there is to know about colour and the neural processing of colour, and yet she has lived her whole life in a black-and-white room. Now either you think that Mary learns nothing new about redness when she goes outside and sees a fire-engine (which she instantly recognises as being red) - the no qualia argument - or you think that she learns something new that is ineffable about the first-person phenomenal properties of redness.
Objection 1. How does she instantly know it is red?

In any case, assuming that Mary has been told that the fire engine is red in advance, Mary does learn something: she discovers that evolution has programmed her brain so that the qualia are weakly associated with a whole range of stuff that is meaningful to her. She may observe that the colour catches her eye more than anything she has seen before - since we know that evolution has programmed our visual processing system so that bright colours, particularly red, are very salient. She may observe that it stimulates a whole range of memories and emotional associations - these will be things that evolution has programmed rather than things that she has learned.

Fruitloop said:
Now a robot could recognise redness as a particular section of the electromagnetic spectrum (700nm) without ever gaining the piece of knowledge that Mary gains when she finally goes outside and sees a postbox.
Once again your assumptions are based upon the robot being exceedingly simple. For any such comparison to be meaningful you really need to imagine the robot to be as complex as need be. So, in this example, the robot learns whatever associations its programmer has defined for this particular visual stimulus. It might, for example, have been programmed to associate redness with danger, or with food, or with communism, or a complex and variegated combination of all of these concepts, each of which is capable of influencing the robot's goodness function. The only thing that stops it from having just as rich an experience as Mary is the limitations of its programmer compared to evolution.
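
To make that concrete, the sort of architecture I have in mind looks something like this toy sketch - the association names and weights are invented placeholders for whatever the programmer (or evolution) supplies:

```python
# Toy sketch of the robot I'm describing: a stimulus triggers whatever
# associations its programmer defined, and each association pushes the
# goodness function up or down. All names and weights are invented.

ASSOCIATIONS = {
    "red": [("danger", -3), ("food", +1), ("communism", +1)],
}

def experience(stimulus):
    links = ASSOCIATIONS.get(stimulus, [])
    aroused = [concept for concept, _ in links]
    goodness_delta = sum(weight for _, weight in links)
    return aroused, goodness_delta

concepts, delta = experience("red")
print(concepts, delta)   # -> ['danger', 'food', 'communism'] -1
```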
 
You're claiming that calculation can give rise to the experience of qualia.

Most people, when they consider the thought experiment explained by Fruitloop, agree that Mary did *not* understand what red means to someone of a normal upbringing (despite the fact that she has access to all the instruments and calculations she desires) until she actually saw a red thing.

You disagree. Interestingly enough, so does Daniel Dennett.
 
Jonti said:
You're claiming that calculation can give rise to the experience of qualia.

Most people, when they consider the thought experiment explained by Fruitloop, agree that Mary did *not* understand what red means to someone of a normal upbringing (despite the fact that she has access to all the instruments and calculations she desires) until she actually saw a red thing.

You disagree. Interestingly enough, so does Daniel Dennett.
This discussion really drives home to me the imprecision of language. It seems that nobody understands my point at all.

I agree that Mary did not understand what red meant to someone of a normal upbringing - she has never experienced the qualia of the stimulus before and she doesn't know what it's like. I thought that was clear? I'm arguing that if we had a sophisticated robot, and we had programmed it to have all the same emotional associations, direct and indirect, that evolution had programmed the stimulus to have for Mary, then it would also not have understood what red means to it until it saw it (assuming that it doesn't have the ability to analyse its own circuitry).
 
Mary sees something that is qualitatively different to anything she has seen before. The calculations she was able to do beforehand gave her no knowledge of what the sensation would be like. That is exactly the point, and it is why philosophers talk about "the explanatory gap".

Even given that Mary understands neurology ("could analyse her own circuitry") this still pertains.
 
Jonti said:
Mary sees something that is qualitatively different to anything she has seen before. The calculations she was able to do beforehand gave her no knowledge of what the sensation would be like. That is exactly the point, and it is why philosophers talk about "the explanatory gap".

Even given that Mary understands neurology ("could analyse her own circuitry") this still pertains.
1. So the robot, too, sees something qualitatively different - it has no idea of the range of meaningful emotions that will be aroused by the stimulus until it sees it.
2. Neurology certainly isn't anywhere near capable of analysing Mary's wiring.
 
I've never understood why this argument is considered persuasive.

We need to understand what we mean when we talk about something being red. If, following Wittgenstein, we seek the meaning of a word in the use of the word rather than in the ostensive definition of the word, then there is no problem.

When, in everyday life, we talk about a red thing, we are talking about redness as we experience it; we use the word in a way which other people can understand. If we are scientists examining the properties of light and we give red light a precise scientific definition, then 'red' has a different meaning, i.e. we use it in a different way. Of course we can talk about the same object being red using both meanings, but we use the meanings in different ways.

So I don't see the problem. There are at least two different meanings of the word 'red' and it is possible to understand one of them, both of them or neither.

The question of qualia is uninteresting to me. It's not a tangible problem. It's not clear what the explanatory gap is that we are trying to explain. We perhaps feel that there is one, but what can we say about it? The problem is not a behavioural one: the robot could behave in exactly the same way as a human and talk about seeing a red something in a convincing way, and we would not know whether it is 'really experiencing red'.

In a sense nobody experiences my qualia except me; qualia are not things. Does this make everybody else zombies relative to me? I cannot put myself into somebody else's shoes, so to speak, because 'I' is not an object.

How can we bat about this idea of qualia when qualia are neither ideas nor objects? Can we conclude anything here? Only that we should not dwell on philosophical questions. There are surely more fruitful lines of inquiry.
 
Does the word 'qualia' mean anything? Is it really just a noise we make when we are startled by the idea of artificial intelligence? Perhaps we should pay attention to this startled feeling? Perhaps not?
 
Knotted said:
Does the word 'qualia' mean anything? Is it really just a noise we make when we are startled by the idea of artificial intelligence? Perhaps we should pay attention to this startled feeling? Perhaps not?
As I'm using it, it means the complex set of associations and feelings that our brain produces when it is presented with stimuli.
 
gurrier said:
As I'm using it, it means the complex set of associations and feelings that our brain produces when it is presented with stimuli.

That's why you don't see eye to eye with Jonti and Fruitloop. There might be something happening when you feel something but that thing that happens is not your subjective experience.

I think all the qualia argument does is say that we cannot explain what it is to us to have a subjective experience - regardless of whether we use AI or poetry. It does not present us with an explanatory gap. There is nothing to explain.
 
I'm inclined to say that the qualia argument is circumstantial evidence for AI. It's an example of something we can't talk about yet feel inclined to talk about. We've found one of our Gödelian funny bones.

I'll shut up about shutting up about qualia now and drink my booze.
 
Knotted said:
That's why you don't see eye to eye with Jonti and Fruitloop. There might be something happening when you feel something but that thing that happens is not your subjective experience.
I think they are the same - or at least virtually the same. To try to be more precise: "a subset of the complex set of associations and feelings that our brain produces when it is presented with stimuli; specifically that subset that the brain presents to the consciousness".

I don't see what, in principle, rules this out as a definition of qualia as subjectively experienced by the consciousness.
 
Knotted said:
There might be something happening when you feel something but that thing that happens is not your subjective experience.

I'm sure there is but it's the bit that is subjective experience that we're struggling to explain.
 
gurrier said:
I think they are the same - or at least virtually the same. To try to be more precise: "a subset of the complex set of associations and feelings that our brain produces when it is presented with stimuli; specifically that subset that the brain presents to the consciousness".

I don't see what, in principle, rules this out as a definition of qualia as subjectively experienced by the consciousness.

Why are we trying to come up with a definition for this sort of thing? Where will this lead? We have qualia defined in terms of feeling and consciousness. What's feeling, what's consciousness?

The definition you give is fine. But does it have any meaning? Is it not just a pure formality that merely looks like something meaningful? The only light I think you have shed is on what is actually going on: there is a complex of associations in the brain, but we don't experience the actual workings of the brain - if we did, there would be no problem.
 
8ball said:
I'm sure there is but it's the bit that is subjective experience that we're struggling to explain.

What sort of thing are we talking about? A process? An object? An idea? We experience sensation, not processes, objects or ideas. It's not any sort of thing at all.
 
littlebabyjesus said:
Did Hegel also cling to the idea of the four humours? How about the brain being there to cool the blood?

You misunderstand. Hegel is not advocating these ideas, he is adumbrating them.
 
gurrier said:
You see, my argument is that pain, and other emotions, are inseparable from the experience of what they are like - emotions are the experience of having them.

And that is your fundamental error, from which your multifarious other errors all flow. Human beings experience through concepts. Human beings are not animals. The ethical implications of treating human beings as animals are clear to everyone except those blinded by dogma, such as yourself.
 
Fruitloop said:
Think about the thought experiment of Mary the colour-deprived neurologist. Mary knows everything there is to know about colour and the neural processing of colour, and yet she has lived her whole life in a black-and-white room. Now either you think that Mary learns nothing new about redness when she goes outside and sees a fire-engine (which she instantly recognises as being red) - the no qualia argument - or you think that she learns something new that is ineffable about the first-person phenomenal properties of redness.

Now a robot could recognise redness as a particular section of the electromagnetic spectrum (700nm) without ever gaining the piece of knowledge that Mary gains when she finally goes outside and sees a postbox.

Maybe you think there is no new information (and thus there are no qualia), but the information she has before she leaves the room that enables her to recognise redness (which is what gurrier is referring to) is not qualia - if qualia exist then they can only be the new information she gains when she finally goes outside and sees a red thing.

But the point is that any human being's experience of redness will be mediated through culture and language - roughly what you call the 'symbolic order'. Human beings have no access to non-cultural or prelinguistic experience. Human beings do not have access to the world as it really is.
 
Knotted said:
What sort of thing are we talking about? A process? An object? An idea? We experience sensation, not processes, objects or ideas.

I know. All I was saying was that I have no doubt that when we experience 'qualia', there is something that happens at the same time that we can 'put our finger on' in a scientific sense, but that thing itself is not the same as the feeling experienced.

My point was that the explanatory gap lies between the correlated physical event and the subjective 'feeling'. Anything evolutionary theory can come up with to 'explain' consciousness, at least that I've seen so far, is to do with the possible benefits of having a model of 'self' within the model of 'world', or of having multiple models similar to the model of 'self' whose inputs are manipulated to emulate the likely responses of conspecific organisms. All of this, however, can be achieved by the purely physical flow of information - there is no requirement for 'qualia', and if they are a 'side-effect', then we need another paradigm aside from normal evolutionary thinking in order to properly explain them.

Hope that makes some kind of sense - I'm a scientist, not a philosopher, plus I've had a few beers. :oops: ;)
 