gurrier said:Yes there is. Whatever the qualia that distinguishes pain for a complex robot is - it's going to have to be remarkably similar to our pain. It needs to catch its attention very quickly and it needs to produce a very strong negative effect on the overall goodness score, trumping all other factors in order to produce a very strong and immediate desire for it to stop.
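To make that claim concrete, here is a minimal sketch (all the state fields and weights are invented purely for illustration, not taken from any real system) of a goodness function in which a pain signal trumps every other factor:

```python
# Minimal sketch: 'pain' as a term in the robot's goodness function whose
# weight dwarfs every other factor, so any predicted outcome that leaves
# the damage signal high is immediately rejected. All names and numbers
# here are invented for illustration.

PAIN_WEIGHT = 1_000_000.0  # large enough to trump all other factors

def goodness(state: dict) -> float:
    """Score a (predicted) world-state; higher is better."""
    score = 0.0
    score += 1.0 * state.get("battery_level", 0.0)          # minor positive factor
    score += 10.0 * state.get("task_progress", 0.0)         # stronger positive factor
    score -= PAIN_WEIGHT * state.get("damage_signal", 0.0)  # pain trumps everything
    return score

def choose_action(state: dict, actions: list, predict) -> object:
    """Pick whichever action's predicted outcome scores best."""
    return max(actions, key=lambda a: goodness(predict(state, a)))
```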
gurrier said:Your CPU monitor has such a simple model of the universe and such a simple goodness function that its existence is unimaginably emotionally impoverished. Indeed, since it doesn't actually do any planning, it doesn't need any idea of state or a goodness function at all, just a handful of rules to follow.
Fruitloop said:Dude, these assertions are completely unsupported. Maybe it's a rescue robot and if it emerges from a burning building with a baby in its one remaining arm and every other bit of it burnt to a crisp then that counts as a resounding success. You can't just say this stuff and expect us to take it as gospel.

In other words - if it is programmed with a different goodness function, then different states will have different meaning to it - that's the whole point.
Fruitloop said:They don't have any 'meaning' to it at all - that's the whole point.
That's a totally wrong 'whole point', then. The goodness function gives everything that may influence it a real meaning to the robot.
I mean, you could just as easily assert that emotions have no meaning - they're just the result of evolution's goodness function, processed by a bit of code.
The point where I think we diverge is that you do not agree that the meaningfulness of things to us could depend on how they affect our goodness function. Perhaps you can explain to me exactly what rules this out.
Fruitloop said:They don't have any 'meaning' to it at all - that's the whole point. It has no more meaning than the ball-cock valve in your toilet cistern. Cistern full - switch off flow. Cistern empty - switch on flow. It's not an explanation of subjective experience, just a denial of it.
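For contrast, the ball-cock valve (or the CPU monitor) really is just a pair of rules with no model of the world, no stored state and no goodness function - something like this purely illustrative sketch:

```python
# The ball-cock valve as described: two rules, no model of the world,
# no goodness function, no stored state - just the current sensor reading.

def ballcock(cistern_full: bool) -> str:
    return "switch off flow" if cistern_full else "switch on flow"
```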
Fruitloop said:Think about the thought experiment of Mary the colour-deprived neurologist. Mary knows everything there is to know about colour and the neural processing of colour, and yet she has lived her whole life in a black-and-white room. Now either you think that Mary learns nothing new about redness when she goes outside and sees a fire-engine (which she instantly recognises as being red) - the no qualia argument - or you think that she learns something new that is ineffable about the first-person phenomenal properties of redness.

Objection 1. How does she instantly know it is red?
Fruitloop said:Now a robot could recognise redness as a particular section of the electromagnetic spectrum (700nm) without ever gaining the piece of knowledge that Mary gains when she finally goes outside and sees a postbox.

Once again your assumptions are based upon the robot being exceedingly simple. For any such comparison to be meaningful you really need to imagine the robot to be as complex as need be. So, in this example, the robot learns whatever associations its programmer has defined for this particular visual stimulus. It might, for example, have been programmed to associate redness with danger, or with food, or with communism, or a complex and variegated combination of all of these concepts, each of which is capable of influencing the robot's goodness function. The only thing that stops it from having just as rich an experience as Mary is the limitations of its programmer compared to evolution.
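A sketch of the kind of programmer-defined association table gurrier describes here (the concepts and weights are invented purely for illustration): a single stimulus fans out into many associations, each of which feeds into the goodness function.

```python
# Illustrative only: a stimulus arouses whatever associations the
# programmer has defined, and each association nudges the goodness score.

ASSOCIATIONS = {
    "red": [("danger", -50.0), ("food", +20.0), ("communism", +1.0)],
}

def process_stimulus(stimulus: str, score: float) -> float:
    """Fold every association aroused by the stimulus into the running score."""
    for concept, weight in ASSOCIATIONS.get(stimulus, []):
        score += weight
    return score
```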
Jonti said:You're claiming that calculation can give rise to the experience of qualia.

This discussion really drives home to me the imprecision of language. It seems that nobody understands my point at all.
Most people, when they consider the thought experiment explained by Fruitloop, agree that Mary did *not* understand what red means to someone of a normal upbringing (despite the fact that she has access to all the instruments and calculations she desires) until she actually saw a red thing.
You disagree. Interestingly enough, so does Daniel Dennett.
Jonti said:Mary sees something that is qualitatively different to anything she has seen before. The calculations she was able to do before gave her no knowledge of what the sensation would be like. That is exactly the point - it is why philosophers talk about "the explanatory gap". Even given that Mary understands neurology ("could analyse her own circuitry"), this still pertains.

1. So does the robot see something qualitatively different - it has no idea of the range of meaningful emotions that will be aroused by the stimulus until it sees it.
Knotted said:Does the word 'qualia' mean anything? Is it really just a noise we make when we are startled by the idea of artificial intelligence? Perhaps we should pay attention to this startled feeling? Perhaps not?

As I'm using it, it means the complex set of associations and feelings that our brain produces when it is presented with stimuli.
gurrier said:As I'm using it, it means the complex set of associations and feelings that our brain produces when it is presented with stimuli.
Knotted said:That's why you don't see eye to eye with Jonti and Fruitloop. There might be something happening when you feel something but that thing that happens is not your subjective experience.

I think they are the same - or at least virtually the same. To try to be more precise: "a subset of the complex set of associations and feelings that our brain produces when it is presented with stimuli; specifically that subset that the brain presents to the consciousness".
Knotted said:There might be something happening when you feel something but that thing that happens is not your subjective experience.
gurrier said:I think they are the same - or at least virtually the same. To try to be more precise: "a subset of the complex set of associations and feelings that our brain produces when it is presented with stimuli; specifically that subset that the brain presents to the consciousness".
I don't see what, in principle, rules this out as a definition of qualia as subjectively experienced by the consciousness.
8ball said:I'm sure there is, but it's the bit that is subjective experience that we're struggling to explain.
Fruitloop said:Zizek on what it means to be a Hegelian film critic (first 30 seconds only). Remind you of anyone you know?
Knotted said:Yes, Newton's theories are among the things that Hegel couldn't fit into his philosophy.
littlebabyjesus said:Did Hegel also cling to the idea of the four humours? How about the brain being there to cool the blood?
gurrier said:You see, my argument is that pain, and other emotions, are inseparable from the experience of what they are like - emotions are the experience of having them.
Fruitloop said:Think about the thought experiment of Mary the colour-deprived neurologist. Mary knows everything there is to know about colour and the neural processing of colour, and yet she has lived her whole life in a black-and-white room. Now either you think that Mary learns nothing new about redness when she goes outside and sees a fire-engine (which she instantly recognises as being red) - the no qualia argument - or you think that she learns something new that is ineffable about the first-person phenomenal properties of redness.
Now a robot could recognise redness as a particular section of the electromagnetic spectrum (700nm) without ever gaining the piece of knowledge that Mary gains when she finally goes outside and sees a postbox.
Maybe you think there is no new information (and thus there are no qualia), but the information she has before she leaves the room that enables her to recognise redness (which is what gurrier is referring to) is not qualia - if qualia exist then they can only be the new information she gains when she finally goes outside and sees a red thing.
Knotted said:What sort of thing are we talking about? A process? An object? An idea? We experience sensation, not processes, objects or ideas.