
Can Evolutionary Theory Explain Human Consciousness?

'To you' is the crucial bit. I said before:

A lot of what is being talked about here is just information processing, which can be explained computationally (computation is when a causal structure in the world corresponds with a logical structure, and is something that even a transistor does - it implements an 'AND'). 'Consciousness' is only the property of having 'something that it is like' to be that thing. This means that 99% of what we do can be explained ultimately by embodiedness (i.e. being embedded in the world with goals that are in their most basic sense pre-programmed; a baby doesn't need to learn that having pins stuck in it is a bad thing, for example) plus the processing of incoming information. The remaining 1% is the hard problem of consciousness, the 'what it is like' to have a pin stuck in you or whatever. There is no direct evolutionary need for there to be a 'what it's like'; it is the outcome of a range of more directly selected-for characteristics.

The subjective unpleasantness 'for you' has to be expressed in the symbolic order, otherwise symbolisation (which has its own advantages) rapidly becomes a disadvantage. That's the point of the zombie thought-experiment - even though it seems absolutely necessary for you, that doesn't make it necessary in all possible worlds.
 
Fruitloop said:
'To you' is the crucial bit. I said before:
You see, I completely disagree with the statement that there is "no direct evolutionary need for there to be a 'what it's like'". In fact I think there is the most obvious and fundamental need for there to be a 'what it's like' and I don't think anything remotely resembling a human being could function without knowing 'what it's like' to be in any particular state - how could you possibly try to make your situation better if you didn't know what it's like?

In AI terms, the zombie thought-experiment is a challenge to build a robot without any goals or any ability to compare its current situation with its desired situation - an absurdity. To me, it's the equivalent of asking us to imagine a world where some cars went around without motors and we had no way of knowing which ones actually had motors. My claim is that the thought experiment is based on an incorrect assumption - we would actually be able to identify the humans without consciousness since they'd be either a) doing absolutely nothing, ever, or b) doing random stuff. Why would they do anything if they didn't have a reason to?
 
Knotted said:
What about evolutionary algorithms (I think they're called), where the machine learns to perform a task better through practice? Is there anything new here? To us, certainly. We don't know what tricks the machine will learn before it learns them. Is anything new to the universe? Does it matter for our purposes?
Yes, I think one can say that evolution, and evolutionary algorithms in general, can and do create new information. The alternative is to imagine that the zebra genome (and all the others!) is implicit in the initial conditions of the universe.
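A toy illustration of that point, in Python; the target pattern, population size and mutation rate are all invented for the example. The interesting bit is that the solutions the loop finds are produced by variation plus selection rather than being written into the program.

```python
import random

# Arbitrary target pattern; the algorithm is never told how to reach it,
# only how to score candidates against it.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from random noise and let selection plus mutation do the work.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                  # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
```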

I don't see that as a problem for the view that a body needs nervous tissue and a nervous system if it is to enjoy(?) anything like our human awareness. Even allowing that the creation of a bit of (new) information is accompanied by the smallest phenomenisca of awareness, there still remains the problem of integrating the phenomenisca into an organised, unified consciousness.
 
To expand a bit and to give a better idea of what I'm getting at.

Let's say that you are designing a robot. If you just give it a list of stimulus-response rules to follow, it will never be capable of doing anything much and it will invariably be pretty crap at what it is you want it to do. This is because the definition of these rules would have to be so complex and so complete to be useful that it is, in practice, impossible to define. For example, early naive attempts to design robots sometimes took this tack. A classic mistake was forgetting to tell the computer that the robot destroying itself was a bad thing - so that the robot would merrily achieve its goal by bashing its arms off.

So, modern cybernetics instead relies on goal states and machine learning. You write rules allowing the robot to evaluate a semantic 'goodness' score for its current state. Then you give it a planning capacity which allows it to reason about how its future actions may affect its state and how its future state in turn might affect its goal states; i.e. you give it the means to tell how 'good' its current state is and how 'good' its future state will be if it adopts a particular plan. The ability to compare current and future goodness gives the robot teleology - it gives it a reason to act and a metric by which it can learn (plan A increased goodness by more than predicted, plan B turned out to lead to a less good state than predicted).
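A minimal sketch of that goal-state-plus-planning loop, assuming a toy dictionary state; all the names here (goodness, predict, choose_action) and the weights are invented for illustration, not taken from any real robotics framework.

```python
def goodness(state):
    """Hand-written rules scoring how 'good' a state is for the robot."""
    score = 0.0
    score -= 10.0 * state["damage"]        # self-destruction is bad
    score += 1.0 * state["task_progress"]  # progress toward the task is good
    return score

def predict(state, action):
    """Crude internal model of how an action would change the state."""
    new_state = dict(state)
    if action == "bash_arms_about":
        new_state["task_progress"] += 2
        new_state["damage"] += 1           # the factor the early designs forgot
    elif action == "work_carefully":
        new_state["task_progress"] += 1
    return new_state

def choose_action(state, actions):
    """Pick the action whose predicted future state scores best."""
    return max(actions, key=lambda a: goodness(predict(state, a)))

state = {"damage": 0, "task_progress": 0}
print(choose_action(state, ["bash_arms_about", "work_carefully", "do_nothing"]))
# -> 'work_carefully': arm-bashing makes more progress but wrecks the goodness score
```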

The Zombie thought-experiment basically asks us to consider a situation where the relative goodness of different states cannot be compared by our robot - and it is absurd to think that it might exhibit the same behaviour without this ability - it'll do nothing or do random stuff; why would it do otherwise?
 
We should update Plato and put a sign on this forum saying, "Let no one ignorant of software engineering enter."
 
gurrier said:
...
The Zombie thought-experiment basically asks us to consider a situation where the relative goodness of different states can not be compared by our robot ...
No, I don't think it does. It points out that the calculation of "the relative goodness of different states" does not demand consciousness -- only calculation.
 
Jonti said:
No, I don't think it does. It points out that the calculation of "the relative goodness of different states" does not demand consciousness -- only calculation.
That's not the point I was making. The zombie thought experiment rules out access to information about the relative goodness of different states - I was pointing out why I think the thought experiment is flawed. More generally, I've already stated that I consider consciousness to be the sub-routine which handles strategic planning - your objection here thus boils down to "consciousness does not demand consciousness - only calculation" - which is something that I agree with ;)

More broadly, I think that the problem of calculating the relative goodness of different states does demand something that needs to have the various characteristics that we can observe in consciousness.
 
gurrier said:
That's not the point I was making. The zombie thought experiment rules out access to information about the relative goodness of different states - I was pointing out why I think the thought experiment is flawed. More generally, I've already stated that I consider consciousness to be the sub-routine which handles strategic planning - your objection here thus boils down to "consciousness does not demand consciousness - only calculation" - which is something that I agree with ;)

More broadly, I think that the problem of calculating the relative goodness of different states does demand something that needs to have the various characteristics that we can observe in consciousness.
OK, that's your interpretation, but there's nothing in the description of the philosophical zombie to rule out the possibility of weighting different possible internal states to enable the calculation of which is preferable. An AI chess playing program can assess which position, which state of the board, is relatively good, even though it calculates only a few moves ahead.

I consider consciousness to be the sub-routine which handles strategic planning - your objection here thus boils down to "consciousness does not demand consciousness - only calculation" - which is something that I agree with ;)
Yeah, that's your theory, that consciousness demands only calculation. But other definitions are possible. For example, I consider consciousness to be a correlate of the creation of information.

The "consciousness = only calculation" theory holds that the substrate of the calculation (the nature of the calculating body) is irrelevant. The physical structure of brains and nerve cells, the physical and chemical processes at synapses and inside neurons are all irrelevant to the hard problem of consciousness. I find that ... implausible.

The philosophical zombie accepts that the brain does information processing; but it points out that plenty of information processing occurs without a conscious correlate. What's so special about the symbolic representation of strategic planning that manipulating it (but not other calculations) causes consciousness to emerge?
 
Spion said:
I'm not sure that he explained its origin or evolution in that social context either. I'm happy to wait for our Young Hegelians to proffer an exegesis though :)

You should, while you're sitting on yer arse waiting, READ THE ORIGINAL and start at least minimally acquainting yourself with the original text! No substitute for that! http://www.marxists.org/reference/archive/hegel/index.htm

You will see both the emergence/evolution of it [how little you know, K.!!!] in relation to Nature and to the "Other" [Human]!

Make sure you note the mediating element!!!;)

Good luck!:cool:
 
Just one more thing for our "positivistic" friends here...

In his Philosophy Of Nature Hegel has this paragraph:

§ 221. The ineptitude, tastelessness, even dishonesty of Newton's observations and experimentations.

Light behaves as a general identity, initially in this determination of diversity, or the determination by the understanding of the moment of totality, then to concrete matter as an external and other entity, as to darkening. This contact and external darkening of the one by the other is colour.

According to the familiar Newtonian theory, white, or colourless light consists of five or seven colours; - the theory itself can not say exactly how many. One can not express oneself strongly enough about the barbarism, in the first place, of the conception that with light, too, the worst form of reflection, the compound, was seized upon, so that brightness here could consist of seven darknesses, or water could consist of seven forms of earth. Further, the ineptitude, tastelessness, even dishonesty of Newton's observations and experimentations must be addressed, as well as the equally bad tendency to draw inferences, conclusions, and proofs from impure empirical data. Moreover, the blindness of the admiration given to Newton's work for nearly one and a half centuries must be noted, the narrowmindedness of those admirers who defend his conceptions, and, in particular, the thoughtlessness with which a number of the immediate conclusions of that theory (for example, the impossibility of an achromatic telescope) were dropped, although the theory itself is still maintained. Finally, there is the blindness of the prejudice that the theory rests on something mathematical, as if the partly false and one-sided measurements, as well as the quantitative determinations brought into the conclusions, would provide any basis for the theory and the nature of the thing itself.-A major reason why the clear, thorough, and learned illumination by Goethe of this darkness concerning light has not had a more effective reception is doubtlessly because the thoughtlessness and simplemindedness, which one would have to confess for following Newton for so long, would be entirely too great.

Instead of these nonsensical conceptions disappearing, they have recently been compounded by the discoveries of Malus, by the idea of a polarisation of light, the notion of the four-sidedness of sunbeams, and the idea that red beams rotate in a movement to the left, whereas blue beams rotate in a movement to the right. Such simplistic ideas seem justified by the privilege accorded to physics to generate "hypotheses." But even as a joke one does not indulge in stupidities; thus so much the less should stupidities be offered as hypotheses which are not even meant to be jokes.

:D
 
Yes, Newton's theories are among the things that Hegel couldn't fit into his philosophy. Yes, Hegel just took a philistine attitude. And yes, I predicted that you would bring this up at some point.
 
Oh, I just must do this, too...:D

Hegel’s Lectures on the History of Philosophy

Section One: Modern Philosophy in its First Statement
A. BACON.
There was already being accomplished the abandonment of the content which lies beyond us, and which through its form has lost the merit it possessed of being true, and is become of no significance to self-consciousness or the certainty of self and of its actuality; this we see for the first time consciously expressed, though not as yet in a very perfect form, by Francis Bacon, Baron Verulam, Viscount St. Albans. He is therefore instanced as in the fore-front of all this empirical philosophy, and even now our countrymen like to adorn their works with sententious sayings culled from him. Baconian philosophy thus usually means a philosophy which is founded on the observation of the external or spiritual nature of man in his inclinations, desires, rational and judicial qualities. From these conclusions are drawn, and general conceptions, laws pertaining to this domain, are thus discovered. Bacon has entirely set aside and rejected the scholastic method of reasoning from remote abstractions and being blind to what lies before one’s eyes. He takes as his standpoint the sensuous manifestation as it appears to the cultured man, as the latter reflects upon it; and this is conformable to the principle of accepting the finite and worldly as such.

Ahem...

For though he rejected the syllogism and only permitted conclusions to be reached through induction, he unconsciously himself drew deductions; likewise all these champions of empiricism, who followed after him, and who put into practice what he demanded, and thought they could by observations, experiments and experiences, keep the matter in question pure, could neither so do without drawing deductions, nor without introducing conceptions; and they drew their deductions and formed their notions and conceptions all the more freely because they thought that they had nothing to do with conceptions at all; nor did they go forth from deduction to immanent, true knowledge. Thus when Bacon set up induction in opposition to the syllogism, this opposition is formal; each induction is also a deduction, which fact was known even to Aristotle. For if a universal is deduced from a number of things, the first proposition reads, “These bodies have these qualities;” the second, “All these bodies belong to one class;” and thus, in the third place, this class has these qualities. That is a perfect syllogism. Induction always signifies that observations are instituted, experiments made, experience regarded, and from this the universal determination is derived.

Also:

There is another shortcoming of a formal nature, and one of which all empiricists partake, — that is that they believe themselves to be keeping to experience alone; it is to them an unknown fact that in receiving these perceptions they are indulging in metaphysics. Man does not stop short at the individual, nor can he do so. He seeks the universal, but thoughts, even if not Notions likewise, are what constitute the same. The most remarkable thought-form is that of force; we thus speak of the force of electricity, of magnetism, of gravity. Force, however, is a universal and not a perceptible; quite uncritically and unconsciously the empiricists thus permit of determinations such as these.

Well...

Bacon thus does not by any means take the intelligent standpoint of an investigation of nature, being still involved in the grossest superstition, false magic, &c. This we find to be on the whole propounded in an intelligent way, and Bacon thus remains within the conceptions of his time.

This is where he places him:

To a certain extent knowledge from the absolute Notion may assume an air of superiority over this knowledge; but it is essential, as far as the Idea is concerned, that the particularity of the content should be developed. The Notion is an essential matter, but as such its finite side is just as essential. Mind gives presence, external existence, to itself; to come to understand this extension, the world as it is, the sensuous universe, to understand itself as this, i.e., with its manifest, sensuous extension, is one side of things. The other side is the relation to the Idea. Abstraction in and for itself must determine and particularize itself. The Idea is concrete, self-determining, it has the principle of development; and perfect knowledge is always developed. A conditional knowledge in respect of the Idea merely signifies that the working out of the development has not yet advanced very far. But we have to deal with this development; and for this development and determination of the particular from the Idea, so that the knowledge of the universe, of nature, may be cultivated — for this, the knowledge of the particular is necessary. This particularity must be worked out on its own account; we must become acquainted with empirical nature, both with the physical and with the human. The merit of modern times is to have accomplished or furthered these ends; it was in the highest degree unsatisfactory when the ancients attempted the work. Empiricism is not merely an observing, hearing, feeling, etc., a perception of the individual; for it really sets to work to find the species, the universal, to discover laws. Now because it does this, it comes within the territory of the Notion — it begets what pertains to the region of the Idea; it thus prepares the empirical material for the Notion, so that the latter can then receive it ready for its use. If the science is perfected the Idea must certainly issue forth of itself; science as such no longer commences from the empiric. But in order that this science may come into existence, we must have the progression from the individual and particular to the universal — an activity which is a reaction on the given material of empiricism in order to bring about its reconstruction. The demand of a priori knowledge, which seems to imply that the Idea should construct from itself, is thus a reconstruction only, or what is in religion accomplished through sentiment and feeling. Without the working out of the empirical sciences on their own account, Philosophy could not have reached further than with the ancients. The whole of the Idea in itself is science as perfected and complete; but the other side is the beginning, the process of its origination. This process of the origination of science is different from its process in itself when it is complete, just as is the process of the history of Philosophy and that of Philosophy itself. In every science principles are commenced with; at the first these are the results of the particular, but if the science is completed they are made the beginning. The case is similar with Philosophy; the working out of the empirical side has really become the conditioning of the Idea, so that this last may reach its full development and determination. For instance, in order that the history of the Philosophy of modern times may exist, we must have a history of Philosophy in general, the process of Philosophy during so many thousand years; mind must have followed this long, road in order that the Philosophy may be produced. 
In consciousness it then adopts the attitude of having cut away the bridge from behind it; it appears to be free to launch forth in its other only, and to develop without resistance in this medium; but it is another matter to attain to this ether and to development in it. We must not overlook the fact that Philosophy would not have come into existence without this process, for mind is essentially a working upon something different.
 
Jonti said:
OK, that's your interpretation, but there's nothing in the description of the philosophical zombie to rule out the possibility of weighting different possible internal states to enable the calculation of which is preferable.
There is, you know: "...it does not actually have the experience of pain as a person normally does". You see, my argument is that pain, and other emotions, are inseparable from the experience of what they are like - emotions are the experience of having them.

In your quote, you mention that the zombie could calculate "which [state] is preferable" - but unless it knows what each state is like to experience, how on earth can it have any notion of which is preferable - an inherently value-laden term? This is why I find zombie-based arguments to be absurd - if nothing has meaning to the zombie then it simply won't act like a person - it will do random stuff or do nothing since any situation will be as good to it as any other situation. It needs to have the ability to decide which states are preferable too since the problem of enunciating every possible situation as a rule is intractable.
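The intractability point can be made with one line of arithmetic; the figure of 40 binary features is invented purely for illustration.

```python
# Even a crude world description blows up combinatorially if every
# situation needs its own hand-written rule.
n_features = 40                # binary facts the robot can sense (illustrative)
situations = 2 ** n_features
print(f"{situations:,} distinct situations would each need a rule")
# -> 1,099,511,627,776 distinct situations would each need a rule
```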

Jonti said:
An AI chess playing program can assess which position, which state of the board, is relatively good, even though it calculates only a few moves ahead.

A zombie couldn't do that - why would it bother playing chess? Why would it try to win? Our AI chess-player is given a single value - winning is good; that wouldn't work with a zombie who had to emulate a human.

Jonti said:
The "consciousness = only calculation" theory holds that the substrate of the calculation (the nature of the calculating body) is irrelevant. The physical structure of brains and nerve cells, the physical and chemical processes at synapses and inside neurons are all irrelevant to the hard problem of consciousness. I find that ... implausible.
It does not hold that the physical substrate is irrelevant! Just that the physical substrate could be emulated on a general purpose computation device. It is either correct or there is something seriously weird and unknown going on on a physical level in the brain - and since we have zero evidence for this, it's one of those things that we generally assume to be untrue.

Jonti said:
The philosophical zombie accepts that the brain does information processing; but it points out that plenty of information processing occurs without a conscious correlate. What's so special about the symbolic representation of strategic planning that manipulating it (but not other calculations) causes consciousness to emerge?

It's not that it causes consciousness to emerge - I'm claiming that it is consciousness. What is so special is the following:
1. A requirement for value judgements about the states that it finds itself in
2. A requirement for the function to identify with the entire organism - in order to plan properly for the organism, its value judgements must be based around the effects on the whole organism.
3. A requirement for planning, prediction and simulation in order to get as accurate as possible an estimate of the desirability of the future states that will come about as a result of its decisions. This gives us, in turn:
4. A requirement to have access to the brain's entire store of ontological, historical, experiential and situational data, allowing it to compare past results to future possibilities and allowing its planning to learn and to come up with heuristic rules to guide future decisions.
5. A seriously sophisticated filter on the sense-data that it has access to, with the data presented from the senses mapped into high-level concepts in its ontology - a level of abstraction appropriate for planning in whatever situation it finds itself in.

From where I'm sitting, these requirements sound like exactly the sort of things that would cause evolution to construct something like my consciousness - but maybe I'm a zombie!
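Those five requirements can be laid out as a purely illustrative skeleton; every name below is invented for the sketch, and nothing here is claimed to be how a brain, or any real AI system, actually does it.

```python
class StrategicPlanner:
    def __init__(self, memory, ontology):
        self.memory = memory       # (4) store of past experience and learned heuristics
        self.ontology = ontology   # concept name -> detector function

    def value(self, organism_state):
        # (1) value judgement, (2) scored for the whole organism, not one subsystem
        return sum(organism_state.values())

    def perceive(self, raw_sense_data):
        # (5) filter raw input into the high-level concepts used for planning
        return {concept: detector(raw_sense_data)
                for concept, detector in self.ontology.items()}

    def plan(self, situation, candidate_plans, simulate):
        # (3) simulate each plan and pick the one with the best predicted value,
        # recording the outcome so future planning can learn from it (4)
        best = max(candidate_plans,
                   key=lambda p: self.value(simulate(situation, p)))
        self.memory.append((situation, best))
        return best
```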
 
Knotted said:
Yes, Newton's theories are among the things that Hegel couldn't fit into his philosophy. Yes, Hegel just took a philistine attitude. And yes, I predicted that you would bring this up at some point.

Crikey!:rolleyes: :p :D
 
some 19th-century mystic whose idiotic ramblings are best forgotten said:
Light behaves as a general identity, initially in this determination of diversity, or the determination by the understanding of the moment of totality, then to concrete matter as an external and other entity, as to darkening. This contact and external darkening of the one by the other is colour.

Did Hegel also cling to the idea of the four humours? How about the brain being there to cool the blood?


Thank you, Gorski, for introducing the whole thread to your and phildwyer's nonsensical hero. May his name never darken (or should that be 'colour') these boards again.
 
gurrier said:
To expand a bit and to give a better idea of what I'm getting at.

Let's say that you are designing a robot. If you just give it a list of stimulus-response rules to follow, it will never be capable of doing anything much and it will invariably be pretty crap at what it is you want it to do. This is because the definition of these rules would have to be so complex and so complete to be useful that it is, in practice, impossible to define. For example, early naive attempts to design robots sometimes took this tack. A classic mistake was forgetting to tell the computer that the robot destroying itself was a bad thing - so that the robot would merrily achieve its goal by bashing its arms off.

So, modern cybernetics instead relies on goal states and machine learning. You write rules allowing the robot to evaluate a semantic 'goodness' score for its current state. Then you give it a planning capacity which allows it to reason about how its future actions may affect its state and how its future state in turn might affect its goal states; i.e. you give it the means to tell how 'good' its current state is and how 'good' its future state will be if it adopts a particular plan. The ability to compare current and future goodness gives the robot teleology - it gives it a reason to act and a metric by which it can learn (plan A increased goodness by more than predicted, plan B turned out to lead to a less good state than predicted).

The Zombie thought-experiment basically asks us to consider a situation where the relative goodness of different states cannot be compared by our robot - and it is absurd to think that it might exhibit the same behaviour without this ability - it'll do nothing or do random stuff; why would it do otherwise?


You seem to me to be undermining your own point. The robot has goals and calculates the 'goodness' of a particular future state (a bit like the way Deep Fritz plays chess), but there is no point in asking what it's like to be that robot, because it's not like anything at all.

The zombie thought experiment asks what it would be like to have no qualia, to have no 'what it's like to be' that thing; it has nothing to do with the goodness of future states, which could be expressed by a single integer.

edit: Jonti said this already. Oops.
 
Fruitloop said:
You seem to me to be undermining your own point. The robot has goals and calculates the 'goodness' of a particular future state (a bit like the way Deep Fritz plays chess), but there is no point in asking what it's like to be that robot, because it's not like anything at all.

The zombie thought experiment asks what it would be like to have no qualia, to have no 'what it's like to be' that thing; it has nothing to do with the goodness of future states, which could be expressed by a single integer.

How good a state is _is_ what it's like - something's goodness is inherently a semantic value judgement.

Pursuing your line of argument, if our single integer goodness value depends on a complex combination of factors (as it inevitably would have to in anything even approaching the sophistication of a human), then each of these factors has meaning to the robot - and by extension anything in the universe that can, directly or indirectly, affect the goodness score has real semantic content to the robot. The semantic and emotional richness of the robot's view of the universe is limited only by the constraints its creator imposes when defining how its "goodness score" is calculated. Our chess-playing robot can only attach semantics to chess positions - its view of the universe is constrained so that these are the only things that have any meaning to it. But if we were to design even a trivial automaton capable of not destroying itself, we would need to design its 'goodness function' to take into account a complex mix of factors, and the automaton's world would thus be imbued with meaning. Since changing situations might cause some "goodness factors" to increase and others to decrease, and since they might be combined in complex formulae, the world and the automaton's choices in it become full of emotional and semantic richness.
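A sketch of the 'single integer from many factors' point, with made-up factor names and weights; the only thing it illustrates is that one comparable number can compress a complex mix of value-laden trade-offs.

```python
# Invented factors and weights, purely for illustration.
FACTOR_WEIGHTS = {
    "bodily_integrity": 5.0,
    "energy_reserves":  2.0,
    "task_progress":    1.0,
    "social_approval":  1.5,
}

def goodness_score(factors):
    """Collapse many competing factors into a single comparable integer."""
    return round(sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items()))

# Two situations that trade factors off against each other can still be compared:
print(goodness_score({"bodily_integrity": 1.0, "energy_reserves": 0.2,
                      "task_progress": 0.0, "social_approval": 0.5}))   # 6
print(goodness_score({"bodily_integrity": 0.4, "energy_reserves": 0.9,
                      "task_progress": 1.0, "social_approval": 0.1}))   # 5
```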
 
gurrier said:
There is, you know: "...it does not actually have the experience of pain as a person normally does". You see, my argument is that pain, and other emotions, are inseparable from the experience of what they are like - emotions are the experience of having them.

In your quote, you mention that the zombie could calculate "which [state] is preferable" - but unless it knows what each state is like to experience, how on earth can it have any notion of which is preferable - an inherently value-laden term? This is why I find zombie-based arguments to be absurd - if nothing has meaning to the zombie then it simply won't act like a person - it will do random stuff or do nothing since any situation will be as good to it as any other situation. It needs to have the ability to decide which states are preferable too since the problem of enunciating every possible situation as a rule is intractable.



A zombie couldn't do that - why would it bother playing chess? Why would it try to win? Our AI chess-player is given a single value - winning is good; that wouldn't work with a zombie who had to emulate a human.


It does not hold that the physical substrate is irrelevant! Just that the physical substrate could be emulated on a general purpose computation device. It is either correct or there is something seriously weird and unknown going on on a physical level in the brain - and since we have zero evidence for this, it's one of those things that we generally assume to be untrue.



It's not that it causes consciousness to emerge - I'm claiming that it is consciousness. What is so special is the following:
1. A requirement for value judgements about the states that it finds itself in
2. A requirement for the function to identify with the entire organism - in order to plan properly for the organism, its value judgements must be based around the effects on the whole organism.
3. A requirement for planning, prediction and simulation in order to get as accurate as possible an estimate of the desirability of the future states that will come about as a result of its decisions. This gives us, in turn:
4. A requirement to have access to the brain's entire store of ontological, historical, experiential and situational data, allowing it to compare past results to future possibilities and allowing its planning to learn and to come up with heuristic rules to guide future decisions.
5. A seriously sophisticated filter on the sense-data that it has access to, with the data presented from the senses mapped into high-level concepts in its ontology - a level of abstraction appropriate for planning in whatever situation it finds itself in.

From where I'm sitting, these requirements sound like exactly the sort of things that would cause evolution to construct something like my consciousness - but maybe I'm a zombie!

You're an eliminativist. If this computation is all there is, or this kind of computation is consciousness, then there is no hard problem - it hasn't been explained but rather 'explained away'. Eliminativism is attractive but not without its problems, foremost among which is the one already mentioned: why do we talk about 'what it's like' to undergo a set of computations when there isn't a 'what it's like'?

Also there's the philosophical question of the possibility of inverted-qualia or absent-qualia worlds. Are these possible, or does any world where sufficiently complex computations take place require qualia? If so, why?
 
gurrier said:
How good a state is _is_ what it's like - something's goodness is inherently a semantic value judgement.

Pursuing your line of argument, if our single integer goodness value depends on a complex combination of factors (as it inevitably would have to in anything even approaching the sophistication of a human), then each of these factors has meaning to the robot - and by extension anything in the universe that can, directly or indirectly, affect the goodness score has real semantic content to the robot. The semantic and emotional richness of the robot's view of the universe is limited only by the constraints its creator imposes when defining how its "goodness score" is calculated. Our chess-playing robot can only attach semantics to chess positions - its view of the universe is constrained so that these are the only things that have any meaning to it. But if we were to design even a trivial automaton capable of not destroying itself, we would need to design its 'goodness function' to take into account a complex mix of factors, and the automaton's world would thus be imbued with meaning. Since changing situations might cause some "goodness factors" to increase and others to decrease, and since they might be combined in complex formulae, the world and the automaton's choices in it become full of emotional and semantic richness.

Deep Fritz or whatever has no semantic notions whatsoever - no idea that it is 'playing chess'. All that's going on is syntactic manipulations within a given rule-bound system with a pre-programmed goal; it calculates ahead a given number of steps and chooses the move with the highest 'score'. This process is repeated until the game is won or lost.

Edit: It's true that the value judgement that at a particular stage in the game a move that yields an internal score of 235 is 'better' than one that yields a score of 115 is indeed semantic, but this is pre-programmed by a human who knows about chess - Deep Fritz is none the wiser.
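The general shape of what is being described here (not how Deep Fritz is actually implemented) can be put in a few lines: a fixed-depth lookahead that maximises a number supplied by its programmer, with nothing in the code that 'knows' it is playing chess. The position format and evaluation weights are invented for the sketch; the game-specific functions are supplied by the caller.

```python
def evaluate(position):
    # Pre-programmed by a human who knows the game; to the program it's just a number.
    return position["material"] + 0.1 * position["mobility"]

def lookahead(position, legal_moves, apply_move, depth, maximising=True):
    """Fixed-depth minimax over whatever game the caller's functions describe."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_score = float("-inf") if maximising else float("inf")
    best_move = None
    for move in moves:
        score, _ = lookahead(apply_move(position, move), legal_moves,
                             apply_move, depth - 1, not maximising)
        if (maximising and score > best_score) or (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```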
 
Fruitloop said:
You're an eliminativist. If this computation is all there is, or this kind of computation is consciousness, then there is no hard problem - it hasn't been explained but rather 'explained away'. Eliminativism is attractive but not without its problems, foremost among which is the one already mentioned: why do we talk about 'what it's like' to undergo a set of computations when there isn't a 'what it's like'?
I've been trying to explain, for some time now, that there is a 'what it's like' and that there has to be. I really don't know how to explain it further. My explanation also does not eliminate hard problems - trying to design a 'goodness function' which even allows a robot to not destroy itself is pretty damn hard. A 'goodness function' which managed to incorporate the vast range of stuff that might affect a human's emotions is about as hard a problem as you could get (not necessarily computationally hard, but architecturally and algorithmically).

Fruitloop said:
Also there's the philosophical question of the possibility of inverted-qualia or absent-qualia worlds. Are these possible, or does any world where sufficiently complex computations take place require qualia? If so, why?
The consciousness requires information about its situation in order to plan, obviously. The qualia that are presented to it are an executive summary of the situation. Anything that doesn't have access to qualia, of some sort, is not going to be able to know anything about the world in which it finds itself and isn't going to be able to do a whole heap of stuff.
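The 'executive summary' idea in miniature, with invented sensor names and thresholds: raw sense-data reduced to the handful of high-level flags a planner could actually use.

```python
def executive_summary(raw):
    # Invented mapping from raw readings to planner-level concepts.
    return {
        "in_pain":     raw["nociceptor_activity"] > 0.7,
        "hungry":      raw["blood_sugar"] < 0.3,
        "threat_near": raw["looming_object_speed"] > 2.0,
    }

print(executive_summary({"nociceptor_activity": 0.9,
                         "blood_sugar": 0.5,
                         "looming_object_speed": 0.1}))
# -> {'in_pain': True, 'hungry': False, 'threat_near': False}
```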
 
Fruitloop said:
Edit: It's true that the value judgement that at a particular stage in the game a move that yields an internal score of 235 is 'better' than one that yields a score of 115 is indeed semantic, but this is pre-programmed by a human who knows about chess - Deep Fritz is none the wiser.
We are pre-programmed by evolution to associate certain states with certain goodness scores. We supplement this with learning. Only difference is that our semantics are much richer and evolution is a better programmer than us!

We don't like sex because we've figured out that it will lead to us reproducing. Evolution has programmed us to like sex - just like Fritz, we do it because our programmer has told us it's good!
 
There is no point in this flip-flopping about from us to machines. We have semantics and a symbolic order; besides, it's all further back in the thread. A robot could be programmed to recognise unplanned alterations to 'its' (need to program some boundaries of the self, abjection etc - a tall order!!) material body as bad, and seek to avoid, repair, prevent repetition etc. Still there is nothing that it's like for the robot to feel pain the way we do.

People on the other hand may seek out pain because they have a symbolic order, and thus perversion, a multiplicity of drives competing and being deformed within their semantic existence. So for the sake of sanity, let's talk about one or the other.
 
Your argument leads to absurdities. My computer monitors the CPU temperature already, and takes steps to avoid damage when the temperature becomes too high (first it notifies me, then it shuts down). What's it like for a computer to have an overheating CPU?
 
Fruitloop said:
There is no point in this flip-flopping about from us to machines. We have semantics and a symbolic order; besides, it's all further back in the thread. A robot could be programmed to recognise unplanned alterations to 'its' (need to program some boundaries of the self, abjection etc - a tall order!!) material body as bad, and seek to avoid, repair, prevent repetition etc. Still there is nothing that it's like for the robot to feel pain the way we do.

Yes there is. Whatever the quale that distinguishes pain for a complex robot is, it's going to have to be remarkably similar to our pain. It needs to catch its attention very quickly and it needs to produce a very strong negative effect on the overall goodness score, trumping all other factors in order to produce a very strong and immediate desire for it to stop.

Incidentally, we are mixing up two different uses of the word 'semantics'. In computer science and AI, semantics normally refers to formal models of the concepts in a domain and their inter-relationships. What we are talking about here is whether things have meaningful value to the program.

Fruitloop said:
People on the other hand may seek out pain because they have a symbolic order, and thus perversion, a multiplicity of drives competing and being deformed within their semantic existence. So for the sake of sanity, let's talk about one or the other.
Robots might do so as well if they learned that pain would, in some circumstances, lead to a high 'goodness score'. For example, if the robot was raised in a household where the only time it was cared for was in the aftermath of a beating.
 
More on Hegel...

You have 2 Cows...

Hegelian Analysis
You have two cows. The having of two cows is the -thesis-, and their very existence brings about in the World Spirit necessarily their negation, or -antithesis-, which is Mad cow disease. These two combine and form a -synthesis-, which is you not having any cows but instead insurance money, which is itself a new thesis, and as such necessitates the existence of its own antithesis. These will one day combine and form a synthesis, which is its own thesis... ad infinitum, until you have a Farm, which is the ultimate ethical ideal and the final state of your agriculture.
 
Fruitloop said:
Your argument leads to absurdities. My computer monitors the CPU temperature already, and takes steps to avoid damage when the temperature becomes too high (first it notifies me, then it shuts down). What's it like for a computer to have an overheating CPU?
Your CPU monitor has such a simple model of the universe and such a simple goodness function that its existence is unimaginably emotionally impoverished. Indeed, since it doesn't actually do any planning at all, it doesn't need to have any idea of state at all, or a goodness function - just a handful of rules to follow.
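The contrast in code, with invented thresholds: the monitor really is just a couple of stimulus-response rules, with no state model, no goodness function and no planning.

```python
def cpu_monitor(temperature_c):
    if temperature_c > 95:
        return "shut_down"     # rule 2: avoid damage
    if temperature_c > 80:
        return "notify_user"   # rule 1: warn the user first
    return "do_nothing"
```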
 
gurrier said:
Anything that doesn't have access to qualia, of some sort, is not going to be able to know anything
Qualia: a 'thing' you have 'access' to that 'enables' you to do stuff? It's just 17th Century empiricism expressed in 'modern' terminology. Are we not past this yet? :confused:
 