
"Darwin made it possible to be an intellectually satisfied atheist"

Do you agree with Dawkins' statement?


kyser_soze said:
Ever hear of a thing called a succinct answer, which is generally what a point is, not half an essay.
Nope. Can you show me what one is?

I know humans create a story. What I am saying is that the question which makes me create the story I create is fundamentally unanswerable, or at any rate, most of the people who have provided answers so far have answered different, but related questions. The question is, simply, why does anything have subjective awareness of anything, when it is completely unnecessary to the functioning of the universe?
kyser_soze said:
you can't know the answer and question at the same time...mmm...maybe a late observation, but do you think he was alluding to the problems of measuring the quantum realm, where you can know an object's position or velocity, but not both at the same time?
Probably, although actually you can know both, but only to a limited degree of precision (which is why macroscopic objects, like grains of sand, usually look like they have determinate positions and momenta).
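To put rough numbers on it, here's a back-of-envelope sketch in Python (the grain's mass and the position uncertainty are just plausible guesses):

```python
# Heisenberg bound: dx * dp >= hbar / 2, so the smallest momentum
# uncertainty compatible with a position uncertainty dx is hbar / (2 * dx).
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_velocity_uncertainty(mass_kg, dx_m):
    """Smallest velocity uncertainty allowed for a given position uncertainty."""
    dp_min = HBAR / (2 * dx_m)
    return dp_min / mass_kg

# A grain of sand (~1 mg, assumed) located to within a micron:
print(min_velocity_uncertainty(1e-6, 1e-6))      # ~5e-23 m/s -- unnoticeable
# An electron located to within the same micron:
print(min_velocity_uncertainty(9.109e-31, 1e-6)) # ~58 m/s -- very noticeable
```

Which is why the grain of sand looks like it sits still while the electron refuses to.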

He may also have been referring to certain problems in philosophy and logic -- if a sufficiently powerful formal system can prove, using its own rules, that it is logically self-consistent, then it isn't.
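That's Gödel's second incompleteness theorem, roughly. In symbols, for any formal system $T$ strong enough to encode arithmetic:

$$T \vdash \mathrm{Con}(T) \;\Longrightarrow\; T \text{ is inconsistent.}$$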

On a cheesily-related note, I made a drawing of the Infinite Improbability Drive that used something called a Bolswami inversion to make maximum indeterminacy out of minimum indeterminacy. Pseudoscience -- I love it!

http://www.ripe-fruit.co.uk/drawings/hitch/heartofgold.shtml
 
gurrier said:
It exists for a reason, but it is an engineering one rather than a mystical one. Things must have meaning for your consciousness or else it would not be able to estimate the emotional desirability of the predicted outcomes of the brain's planning processes and hence would not be able to come up with survival behaviour in such a complex environment.
The use of the phrase "emotional desirability" seems to weight the model in favour of a conscious solution. This is what I meant when I said I thought your argument was circular. An unconscious system could do exactly the same engineering job, but we wouldn't call its constraints "emotions" and we wouldn't call its decision procedure "conscious", although in this universe, I suspect any system with such constraints and such a decision procedure would actually be conscious (intuitively, for the same reasons Turing did). I just think a universe in which such a decision procedure was unaware and its constraints were not emotions is entirely conceivable, and the organism would be as "capable", as we understand the term, as we ourselves are.
gurrier said:
There are lots of things that the universe could get along quite happily without and it doesn't mean that they are anything special.
This is, of course, true. My argument has no pretensions to being watertight. I just think you can't write it off as insane or illogical. Consciousness is very special to me (and, I imagine, to you) so I consider it important. The rest of the universe may not give two shits.
 
The question is, simply, why does anything have subjective awareness of anything, when it is completely unnecessary to the functioning of the universe?

And for me at least, this comes back to 'random' chance - or at least a series of astronomically improbable events forming a cause-and-effect chain that led to the sludgy goo that we call 'life'...

I remember reading somewhere that, if you go back thru the generations, the chances of you being born are astronomically improbable and that we forget this cos there's so many people on Earth. Or something.
 
I don't have much time, but I just have to say that Gurrier's reasoning above is a textbook example of capitalist ideology expressed in pseudo-scientific terms. He unthinkingly endorses the objectification which is the prime psychological effect of capitalism. He believes that human beings are *things,* and he believes this because that is what capitalist society does to people--it makes them into things.
 
kyser_soze said:
And for me at least, this comes back to 'random' chance - or at least a series of astronomically improbable events forming a cause-and-effect chain that led to the sludgy goo that we call 'life'...
But in this case, I think the random chance (whatever its likelihood) predates evolution or life. The capacity for consciousness in a complex system doesn't seem to me to be the kind of thing which could have "just happened". So it had to be there, potentially, all along. Even if there had been no living organisms in this universe, that property would still have been remarkable, although of course nobody would have remarked on it.
kyser_soze said:
I remember reading somewhere that, if you go back thru the generations, the chances of you being born are astronomically improbable and that we forget this cos there's so many people on Earth. Or something.
It depends on what you compare the event against. If you compare it against all the other possible atomic configurations of all possible universes, it is nearly impossible! If you compare it with all the universes which were identical up until about a month before you were born, it is rather more likely :)
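A toy illustration with coin flips (made-up numbers, purely to show how the choice of reference class changes the odds):

```python
from fractions import Fraction

# One specific sequence of 30 fair coin flips, compared against
# every possible 30-flip sequence:
print(Fraction(1, 2) ** 30)  # 1/1073741824 -- "nearly impossible"

# The same sequence, compared only against sequences that already
# agree on the first 29 flips:
print(Fraction(1, 2) ** 1)   # 1/2 -- "rather more likely"
```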
 
But in this case, I think the random chance (whatever its likelihood) predates evolution or life. The capacity for consciousness in a complex system doesn't seem to me to be the kind of thing which could have "just happened". So it had to be there, potentially, all along. Even if there had been no living organisms in this universe, that property would still have been remarkable, although of course nobody would have remarked on it.

I'd explain that as saying 'all the ingredients were there, it just needed the right circumstances to bring them together' - if 'random' chance hadn't brought them together in the right way/circumstances/blah we wouldn't be sat typing 'Well isn't *that* remarkable' :D

What really noodles my noggin is knowing that Sol is a 3rd generation star, and that there (potentially) have been countless intelligent lifeforms that may have risen, conquered galaxies and then died off.

Not every Pulsar, Supernova and black hole is necessarily natural...
 
C S Lewis said:
But it doesn't make any sense to say that simple nervous systems aren't conscious, while brains that are sufficiently complex to analyse and select their sensory input for further processing just are. Why should they be? It's still just a highly complex biological mechanism. What need for the consciousness if it doesn't actually do anything? Or if it does do something, how does it do it?
If you read the very brief outline of my hypothetical model again, you will see that I am claiming that consciousness very much does something:

"Consciousness being the functional sub-system which receives a high-level summary of sensory data and memory data, transmits this to the planning centres, performs an emotional weighting of the resultant predicted outcomes and acts upon the one that comes closest to an emotionally stable / desirable state."

Can you think of a quality of consciousness that can't be covered by such a functional explanation?

How does it do this? I don't even need to be particularly speculative about that - it does it by passing electric charges through synapses, dendrites and neurons, by regulating the concentration of the various chemicals in the intersynaptic medium and by slowly changing the strengths of connections that are used frequently.
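To make that concrete, here is a crude toy sketch of the loop I'm describing (all the names and numbers are invented for illustration; this is not a claim about real neural wiring):

```python
# Toy model: summarise inputs, predict an outcome for each candidate
# action, weight the outcomes emotionally, act on the most desirable.

def emotional_weight(outcome):
    """Invented scoring: how desirable a predicted outcome feels."""
    scores = {"fed": 1.0, "safe": 0.8, "hungry": -0.5, "eaten": -10.0}
    return scores.get(outcome, 0.0)

def predict(summary, action):
    """Invented world model: map (situation, action) to a predicted outcome."""
    if action == "eat":
        return "eaten" if "wolf" in summary else "fed"
    return "safe" if action == "flee" else "hungry"

def act(summary, actions):
    """Pick the action whose predicted outcome is emotionally most desirable."""
    return max(actions, key=lambda a: emotional_weight(predict(summary, a)))

print(act({"berries", "wolf"}, ["eat", "flee", "wait"]))  # flee
print(act({"berries"}, ["eat", "flee", "wait"]))          # eat
```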
 
gurrier said:
Can you think of a quality of consciousness that can't be covered by such a functional explanation?
Sentience. All the processing doesn't need anyone (or anything) to be aware, at least conceptually.

(Sorry to harp on. I am not trying to win -- I am just trying to prove that winning is impossible!)
 
andrewwyld said:
The use of the phrase "emotional desirability" seems to weight the model in favour of a conscious solution. This is what I meant when I said I thought your argument was circular. An unconscious system could do exactly the same engineering job, but we wouldn't call its constraints "emotions" and we wouldn't call its decision procedure "conscious", although in this universe, I suspect any system with such constraints and such a decision procedure would actually be conscious (intuitively, for the same reasons Turing did).
The 'desirability' of emotions and the particular emotions that are attached to particular outcomes are a consequence of evolution, not consciousness (an emergent property). Once a critter's behaviour gets complex enough, there is no point in evolution attempting to influence behaviour directly; instead it affects the 'weight' that we attach to the outcomes of decisions. We call this weighting emotion, and the more complex the critter, the more variables the word covers. I don't think that this is circular and I think it's fairly straightforward to conceive how such a thing could evolve in minute steps. If I get a chance, I'll illustrate this diagrammatically for you.
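In the meantime, here's a rough toy sketch of the idea (the fitness function and step sizes are invented, just to show that minute, blind adjustments to the weights can do the work):

```python
import random

# Evolution never touches behaviour directly here; it only nudges the
# emotional weights attached to outcomes, and selection keeps good nudges.
random.seed(0)

def fitness(weights):
    """Invented stand-in for survival success given a set of weights."""
    ideal = {"fed": 1.0, "eaten": -10.0}  # what a well-adapted critter needs
    return -sum((weights[k] - ideal[k]) ** 2 for k in ideal)

weights = {"fed": 0.0, "eaten": 0.0}  # an emotionally indifferent ancestor
for generation in range(5000):
    mutant = {k: v + random.gauss(0, 0.05) for k, v in weights.items()}
    if fitness(mutant) > fitness(weights):  # selection, in minute steps
        weights = mutant

print(weights)  # ends up close to {"fed": 1.0, "eaten": -10.0}
```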

andrewwyld said:
I just think a universe in which such a decision procedure was unaware and its constraints were not emotions is entirely conceivable, and the organism would be as "capable", as we understand the term, as we ourselves are.
I think that the decision procedure must be "aware" or it would not be able to manage the requirements of behaviour in complex species. Whether it is inevitable that awareness takes the form of our consciousnesses I don't know, but I suspect that there aren't too many engineering solutions available to evolution to allow the management of such complex behaviour and that consciousness was inevitably going to look something like this.
 
andrewwyld said:
Sentience. All the processing doesn't need anyone (or anything) to be aware, at least conceptually.

(Sorry to harp on. I am not trying to win -- I am just trying to prove that winning is impossible!)
Now that is a circular argument! Recall that what you call sentience is what I call consciousness! (I'm pretty sure that mine is the standard terminology).

From an engineering point of view, the system requires a functional unit which receives a high level summary of sensory data and evaluates choices in terms of the desirability of predicted outcome states. If we look at it purely as an engineering problem, our requirements for the consciousness functional processing unit would have all the features of what we experience as consciousness. In short, consciousness is like it is because it is the best available solution to a very difficult engineering problem.
 
gurrier said:
Now that is a circular argument! Recall that what you call sentience is what I call consciousness! (I'm pretty sure that mine is the standard terminology).
No, I have thought about it some more and I think I am talking about what you call sentience. However, you have taken a purely scientific view of sentience, regarding it as the fact that people and animals have access to data about their surroundings, whereas I am looking at the phenomenon philosophically, and making a sharp distinction between objective fact (systems having access to information) and subjective fact (agents experiencing information). That these two are intimately related, I do not dispute, but they are distinct and distinguishable.
gurrier said:
From an engineering point of view, the system requires a functional unit which receives a high level summary of sensory data and evaluates choices in terms of the desirability of predicted outcome states. If we look at it purely as an engineering problem, our requirements for the consciousness functional processing unit would have all the features of what we experience as consciousness. In short, consciousness is like it is because it is the best available solution to a very difficult engineering problem.
The whole point I am trying to make is that what you describe, while absolutely necessary from an engineering viewpoint (on this much we agree!) does not require anything anywhere to have a subjective viewpoint.

To show you what I mean, you are probably already aware that I may experience what you experience as red and green swapped. We would never know, since every object in the universe which you saw as red, I would also call red, since I had only ever heard it called that. We would see yellow the same, but actually if I could be inside your head, I would realize that your experience of green was utterly unlike mine.

Now, here is my point. It is easy for us to conceive that our subjective experiences of objective phenomena may differ wildly, but since we can never explain ourselves except in terms of objective phenomena -- or assumptions about relations between objective phenomena and subjective phenomena (such as that I dislike the experience I have when I make a face like you make when you dislike an experience) -- we will never know. Someone might even lack subjective phenomena altogether. To think this of a real person is clearly a silly assumption, but to think it of a person in a thought-experiment -- even of a universe of such people -- is quite manageable.

It is the existence of subjective phenomena at all -- not their physiological counterparts -- which is scientifically inexplicable. I think my chosen explanation can be rejected easily (and I do not expect you to change your view in this regard), but I think the fact that this is inexplicable cannot be rejected so simply. You may prefer not to explain them, regarding explanation as unnecessary, but I feel that the existence of such phenomena, and the fact that I can generate meaning about objective events, suggests that meaning is a part of the universe, because it is a part of me, who am a part of the universe.
 
andrewwyld said:
The whole point I am trying to make is that what you describe, while absolutely necessary from an engineering viewpoint (on this much we agree!) does not require anything anywhere to have a subjective viewpoint.
I think that in the model which I presented, the consciousness processor needs to have the illusion that it is experiencing the qualia in order for it to properly identify with the whole and to create the 'I' entity in the emotional measurements and planning phases. I don't see any easier way to engineer a functional sub-system which plans as if it were the subject (rather than just seeing itself as a small functional sub-system). I think that the long term planning part of the system needs to have an illusion of itself as being the subject of its plans. There is an enormous amount of evidence that suggests that much of our conscious experience is an illusion created by functionally lower parts of the brain. To some extent, these illusions are designed to allow the consciousness to experience the world as if it was the subject. Once again, I don't see any reason to look for a more mystical reason for subjective experience than "system engineering requirements".
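A crude way to picture what I mean (names invented; the point is only that the planner scores predicted states from the 'I' entity's point of view, not in the abstract):

```python
# Toy model: the planner evaluates each option as "what this means for me",
# i.e. it scores the predicted state of a self entity it treats as the subject.

def predicted_self_state(world, action):
    """Invented prediction of what happens to 'me' if I take this action."""
    if action == "cross_road" and world.get("traffic"):
        return {"injured": True, "at_goal": False}
    return {"injured": False, "at_goal": action == "cross_road"}

def desirability(self_state):
    return (-100 if self_state["injured"] else 0) + (5 if self_state["at_goal"] else 0)

def plan(world, actions):
    return max(actions, key=lambda a: desirability(predicted_self_state(world, a)))

print(plan({"traffic": True}, ["cross_road", "wait"]))   # wait
print(plan({"traffic": False}, ["cross_road", "wait"]))  # cross_road
```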

andrewwyld said:
To show you what I mean, you are probably already aware that I may experience what you experience as red and green swapped. We would never know, since every object in the universe which you saw as red, I would also call red, since I had only ever heard it called that. We would see yellow the same, but actually if I could be inside your head, I would realize that your experience of green was utterly unlike mine.
Not a great example in this case since we could just measure the wavelength of the light reflected from the surface and spot the anomaly, but I accept the general point.

andrewwyld said:
Now, here is my point. It is easy for us to conceive that our subjective experiences of objective phenomena may differ wildly, but since we can never explain ourselves except in terms of objective phenomena -- or assumptions about relations between objective phenomena and subjective phenomena (such as that I dislike the experience I have when I make a face like you make when you dislike an experience) -- we will never know. Someone might even lack subjective phenomena altogether. To think this of a real person is clearly a silly assumption, but to think it of a person in a thought-experiment -- even of a universe of such people -- is quite manageable.
The philosopher's zombie! I don't know if this holds though. There is no guarantee that a system which lacked a high-level planning centre which operated under the illusion of subjectivity could replicate the behaviour of a system that had such a sub-system. It could be a basic requirement without which it is impossible to solve the engineering problem of managing such complex behaviour.

andrewwyld said:
It is the existence of subjective phenomena at all -- not their physiological counterparts -- which is scientifically inexplicable. I think my chosen explanation can be rejected easily (and I do not expect you to change your view in this regard), but I think the fact that this is inexplicable cannot be rejected so simply. You may prefer not to explain them, regarding explanation as unnecessary, but I feel that the existence of such phenomena, and the fact that I can generate meaning about objective events, suggests that meaning is a part of the universe, because it is a part of me, who am a part of the universe.
I hope that I have explained adequately above why I don't think the existence of the 'subjective' illusion needs mysterious explanations. The meaning of the objective events (or their 'emotional weight' as I would put it) is something that I also don't have too much difficulty in explaining. Once the behavioural requirements of the brain reach a certain level of complexity, evolution can't influence behaviour directly any more by adding stimulus-response rules. It can only fiddle with emotional weights attached to situations and produce tendencies to respond in certain ways. I consider it an extension of the idea that complex non-linear systems are beyond deterministic analysis and instead require probabilistic analysis.
 