
Can Evolutionary Theory Explain Human Consciousness?

Kizmet said:
As a by-product of the evolution of other tools is a perfectly good explanation.

Because it's not necessary for it to be so.

And Occam's razor defines what one should do with the unnecessary.

I don't think you're right. I think it's a perfect non-explanation.

And not only does it not explain why it's the kind of universe that can support consciousness. It also doesn't explain why it's the kind of universe where consciousness can make a difference.

These are both things that need to be explained. Personally, I think Occam's razor is much abused - but if you want to invoke it, it means: don't invoke a more complicated explanation than you need for the things that need to be explained.

In the position you've put, I reckon there are two things that need to be explained but aren't.
 
Fruitloop said:
She sounds fantastic :oops:

She is. :oops:

The symbolic order and particularly writing for us are an amazing transformation of the potential of computation, in that they originate in but can also transcend our embodied nature (which is the real meaning of Freud's Death Drive, which no-one at the time, not even Freud, really understood). Perversion lives there, and we (as discursive subjects) live there with it. :)

Good spiritualist word there... 'transcends'.

Means 'goes beyond'...

means you kinda agree that there's somewhere to go that's 'beyond'.

:D
 
Demosthenes said:
I don't think you're right. I think it's a perfect non-explanation.

Oh well at least it was a perfect something.

:D

And not only does it not explain why it's the kind of universe that can support consciousness. It also doesn't explain why it's the kind of universe where consciousness can make a difference.

Any universe can. Can you conceive of an alternative?
 
Kizmet said:
Any universe can. Can you conceive of an alternative?

God no, - but at least I can tell you why I don't think any alternative is conceivable.

But tbh, at the mo, I'm finding the subject of the dominatrix more interesting. ;)
 
Kizmet said:
Up to a point. Otherwise why haven't more species done it?
How many species have "done it" is dependent on how you define it. Any reasonable definition will conclude that many other species have probably "done it". Wherever you want to draw the line, the answer to your question is that simple stimulus-response behaviours and behavioural patterns go an awful long way. You need pretty sophisticated behavioural requirements before consciousness is an advantage - it is slow and ponderous; an ant who's worried about how injuries may affect his appearance and future reproductive chances is less likely to be a good fighter than one who has a 'fight' program hard-wired.

A good analogy is that of a high-level tennis player: if he controls his shots with his consciousness, he will be appalling (he will never return a serve, as it will have hit the wall behind him by the time he's aware it happened). He programs all the stimulus-response patterns he will need into his sub-conscious through obsessive training and leaves his consciousness free to think about strategy. Sports psychology is largely about turning off one's consciousness and allowing the programmed behaviour to act unimpeded.

It is also worth noting that almost everything you do is a stimulus response or a behavioural pattern - our consciousness really has very little influence on the little details of what we do - it just formulates our long-term strategy and lets the lower-level programmed stuff take care of the details.

This is also pretty much all experimentally proven. The philosophical debate really centres on the definition of consciousness - my definition would not be universally accepted.
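The division of labour gurrier describes - fast hard-wired stimulus-response rules handling the details while a slow "strategy" layer only picks goals - can be caricatured in a few lines of code. Everything here (the rule names, the strategy function) is invented purely for illustration, not taken from any real cognitive model:

```python
# Toy sketch: fast hard-wired reflexes vs. slow deliberate strategy.
# All names here are made up for the illustration.

REFLEXES = {
    "ball_incoming": "swing_forehand",   # pre-trained, fires immediately
    "sharp_pain":    "withdraw_limb",
    "food_nearby":   "approach",
}

def react(stimulus):
    """Fast path: a direct table lookup, no deliberation involved."""
    return REFLEXES.get(stimulus)

def strategise(match_state):
    """Slow path: only sets the high-level goal; the details are left
    to the reflex table, as with the tennis player's game plan."""
    if match_state["opponent_at_net"]:
        return "play_lobs"
    return "play_baseline_rallies"

print(react("ball_incoming"))                # handled without 'thought'
print(strategise({"opponent_at_net": True})) # the conscious-level choice
```

The point of the sketch is only that the two layers operate on different timescales and different levels of abstraction - the reflex table never consults the strategy, and the strategy never picks individual shots.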
 
gurrier said:
It was implicit in what I said - it's the functional division of your brain which deals with long-term planning and strategy, the stuff that is too complex to be encoded in stimulus response behaviours (move away from pain) or patterns (seek food).

It's obvious how having such a capability is evolutionarily advantageous and it is the way it is because that's the simplest way for evolution to engineer it.
Are you saying human consciousness is the functional division of the brain? That sounds like a category error!

I accept an elaborate physiological structure of the nervous system can be advantageous to an organism, and that it could help with long-term planning and strategy. But, given that the aims and available means are well-defined syntactically (!) all that could go in in the dark, as if processed by some electronic data processing machinery.

Indeed, the phenomenon of blindsight demonstrates that this sort of thing can indeed happen (in the case of particular types of brain injury, for example). So why bother with consciousness at all?

What is it doing?
 
Demosthenes said:
God no, - but at least I can tell you why I don't think any alternative is conceivable.

... because consciousness always was and will be. :)

It's a decent answer... but open to one unanswerable question - 'Where did it come from?'

Whereas the physical reality of a functioning universe eventually developing an evolved consciousness leads only to the question 'where will it end?'

Any environment in which life can develop can develop consciousness given enough time.

But tbh, at the mo, I'm finding the subject of the dominatrix more interesting. ;)

As she says... 'where pain ends consciousness begins.'

:D

Accountant by day, she-devil by night.

Who'd a thunk it?

:)
 
Jonti said:
Are you saying human consciousness is the functional division of the brain? That sounds like a category error!
I am you know - but I don't think it's a category error at all. You could substitute "computer program" for "functional division" and it would probably be clearer what I mean. There's no reason to suspect that the brain does anything non-computable - none of the objections stand up - and simply loads of reasons for assuming that it can be modelled as a computer program. I'm saying that consciousness is a sub-routine (function, method, service, whatever) which does long term strategy and planning.

Jonti said:
I accept an elaborate physiological structure of the nervous system can be advantageous to an organism, and that it could help with long-term planning and strategy. But, given that the aims and available means are well-defined syntactically (!) all that could go in in the dark, as if processed by some electronic data processing machinery.

It does all go on in the dark! It looks the way it does from the inside because it has to look like that for the program to work. The consciousness subroutine needs access to all of your emotional data, memory data and sensory data - what we feel and perceive is simply an extraordinarily sophisticated data structure, which you could describe as a contextual model. It's represented in the way that it is because such a volume of information needs to be processed that you need all sorts of different levels and methods of representation. We don't see what we see; we see what our sub-conscious has mapped into our internal data structures (this is also experimentally proven).
 
gurrier: Lions do this trained-subroutines / higher-level-strategy thing when they hunt. No one could reasonably suggest they make a conscious decision about which claw to stick into the prey first; they do what they learnt as little cubs and have semi-automated, and they go for the kill as individuals and as part of a team.

What was all that about human consciousness again?
 
gurrier said:
How many species have "done it" is dependent on how you define it.
My definition of 'it' in this context would probably include 'writing a book' :)

Any reasonable definition will conclude that many other species have probably "done it". Wherever you want to draw the line, the answer to your question is that simple stimulus-response behaviours and behavioural patterns go an awful long way. You need pretty sophisticated behavioural requirements before consciousness is an advantage - it is slow and ponderous; an ant who's worried about how injuries may affect his appearance and future reproductive chances is less likely to be a good fighter than one who has a 'fight' program hard-wired.

A good analogy is that of a high-level tennis player: if he controls his shots with his consciousness, he will be appalling (he will never return a serve, as it will have hit the wall behind him by the time he's aware it happened). He programs all the stimulus-response patterns he will need into his sub-conscious through obsessive training and leaves his consciousness free to think about strategy. Sports psychology is largely about turning off one's consciousness and allowing the programmed behaviour to act unimpeded.

It is also worth noting that almost everything you do is a stimulus response or a behavioural pattern - our consciousness really has very little influence on the little details of what we do - it just formulates our long-term strategy and lets the lower-level programmed stuff take care of the details.

This is also pretty much all experimentally proven. The philosophical debate really centres on the definition of consciousness - my definition would not be universally accepted.

You just said it was evolutionarily advantageous and that it was obviously simple... now you're saying it's not always.

Which is all I meant by 'up to a point'.

:)
 
I'm not that happy with some of the above, but time is a bit short. Firstly, flow is not non-conscious; it's more that it's unimpeded by verbal instructions from the symbol-processing parts of the brain - or rather by self-consciousness, which is a discursive awareness of our own self-object. Having studied Buddhist meditation a fair bit - which was a major model for Csíkszentmihályi, along with music and rock-climbing, which weirdly I also spent a lot of time on - I'd say the aim is definitely not a state of non-consciousness but more a kind of luminous but contentless presentness. (Sorry about that sentence, btw).

Neither is there any need to assert that something non-computable is going on - but therein lies exactly the problem of the hard problem, in my opinion: causally, all you need is computation, and yet there is (perhaps) this added dimension of how it is for me to compute in that way. That seems like something extraneous, and yet I know it to be true, because I'm experiencing it right now. The extra-ness is the most mysterious part of it, not a way of downplaying it or explaining it away.

However, to paraphrase David Chalmers, I'm still an eliminativist on Tuesdays and Thursdays.
 
The question was 'can evolutionary theory explain human consciousness?'

Well it can explain the development of rudimentary consciousness given enough time.

What it can't explain, to my satisfaction, is why human consciousness is so far ahead of all other life on this planet.

That's the bit I don't get. Why have we had an accelerated progress? If it was an accident when was it? What happened?

Or is there a tipping point?
If so... when was it?

Or were we pushed?
 
Jonti said:
Indeed, the phenomenon of blindsight demonstrates that this sort of thing can indeed happen (in the case of particular types of brain injury, for example). So why bother with consciousness at all?

Sorry, could you expand on that - I am having trouble figuring out exactly what you mean here. Some of the posts on this thread are suspiciously jargon-filled - something that always rings alarm bells for me!

What do you think blindsight shows?
 
... that perception and an informed response to it can take place without conscious awareness.
 
Jonti said:
Are you saying human consciousness is the functional division of the brain? That sounds like a category error!
gurrier said:
I am you know - but I don't think it's a category error at all. You could substitute "computer program" for "functional division" and it would probably be clearer what I mean. ...
Whoa!

"Computer programs" (for the sake of precision and clarity, I mean algorithms that run on the kind of determinate engines commonly used for data processing) manipulate meaningless symbols according to a rigidly defined syntax.

But whatever-it-is-that-consciousness-does is deeply concerned with intention and meaning. Semantics and not just syntax. That's just not the same sort of thing at all. The former can be done mechanically; the latter, qua Gödel, cannot.

I should warn you that, should you disagree, I may have to read you some of my poetry!!
 
Just thinking aloud here, the brain=computer view implies that the pattern of electrical(?) activity in the brain somehow gives rise to conscious awareness; and that this pattern is all that matters.

In other words, that if it could be replicated on another substrate (usually thought of as an awful lot of transistors etc) then consciousness would "automagically" emerge.

But there's a heck of a lot of assumptions hidden in that view; no evidence for it; and no trace of any kind of theoretical justification.

To prove just that last point -- how should I program one of my spare 'puters to make it even a teensy weensy bit conscious?
 
Jonti said:
Whoa!

"Computer programs" (for the sake of precision and clarity, I mean algorithms that run on the kind of determinate engines commonly used for data processing) manipulate meaningless symbols according to a rigidly defined syntax.

But whatever-it-is-that-consciousness-does is deeply concerned with intention and meaning. Semantics and not just syntax. That's just not the same sort of thing at all. The former can be done mechanically; the latter, qua Gödel, cannot.

I should warn you that, should you disagree, I may have to read you some of my poetry!!
You can bind whatever semantics you want to whatever symbols you want. A robot's state variables have whatever semantics the designer decides they will have. For example, it could have a "threat to survival" variable, and the only thing stopping this having as good a semantic mapping onto the real world as our own 'threat to survival' variable is our puny programming abilities compared to evolution's.
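As a concrete (and entirely made-up) sketch of what binding semantics to a symbol could look like: the variable below "means" threat-to-survival only because the designer maps sensor readings onto it and conditions behaviour on it. The sensor names and thresholds are invented for the example:

```python
# Hypothetical robot state variable: "threat_to_survival".
# The symbol means nothing in itself; the designer's mapping from
# sensors to the variable, and from the variable to behaviour,
# is what gives it its (crude) semantics.

def threat_to_survival(sensors):
    """Map raw readings onto a 0..1 'threat' value (designer's choice)."""
    threat = 0.0
    if sensors.get("temperature_c", 20) > 80:
        threat += 0.5                      # overheating endangers the robot
    if sensors.get("battery_pct", 100) < 10:
        threat += 0.3                      # running flat endangers it too
    if sensors.get("impact_detected", False):
        threat += 0.4
    return min(threat, 1.0)

def behave(sensors):
    """Condition behaviour on the variable, completing the mapping."""
    if threat_to_survival(sensors) > 0.5:
        return "retreat_and_recharge"
    return "carry_on"

print(behave({"temperature_c": 95, "battery_pct": 8}))
```

The gap gurrier points at is that evolution's version of this mapping is vastly richer than three hand-picked thresholds - but the *kind* of thing being done is arguably the same.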
 
Jonti said:
... that perception and an informed response to it can take place without conscious awareness.

Which helps establish that human consciousness is something special how?
 
gurrier said:
You can bind whatever semantics you want to whatever symbols you want. A robot's state variables have whatever semantics the designer decides they will have. For example, it could have a "threat to survival" variable, and the only thing stopping this having as good a semantic mapping onto the real world as our own 'threat to survival' variable is our puny programming abilities compared to evolution's.

it needs to be self programming.
 
Jonti said:
Just thinking aloud here, the brain=computer view implies that the pattern of electrical(?) activity in the brain somehow gives rise to conscious awareness; and that this pattern is all that matters.

In other words, that if it could be replicated on another substrate (usually thought of as an awful lot of transistors etc) then consciousness would "automagically" emerge. ...

I think there is a problem with this. I myself love to use computer analogies for brain type activity but I am always put down by the arguments that the brain is actually nothing like a computer.

For a start, the brain is massively parallel, whereas a computer has a largely sequential processor. Nothing AFAIK approaching the parallelism of the brain has ever yet been laid down in silicon or on circuit boards.

That said, I do like the idea that the brain has underlying source code and lower-level programming, which might equate to binary at the level of neurons and synapses. Then, much higher up, there are basic beliefs about self and perceptions of reality from which to model the world around us (still encoded, and not in English); these affect what we think in, which is still quite abstract. Then there are higher-level brain processing languages, which are actually to do with human language and communication, internally and externally, and so on.

For me it follows that questioning fundamental perceptions and beliefs, perhaps all at the same time, might be the equivalent of trying to FDISK the brain while trying to keep the operating system going to do some work - something that I think may explain some mental problems I had myself some time ago :)
 
Jonti said:
Just thinking aloud here, the brain=computer view implies that the pattern of electrical(?) activity in the brain somehow gives rise to conscious awareness; and that this pattern is all that matters.

In other words, that if it could be replicated on another substrate (usually thought of as an awful lot of transistors etc) then consciousness would "automagically" emerge.

No more automagically than its embryonic emergence.

Jonti said:
But there's a heck of a lot of assumptions hidden in that view; no evidence for it; and no trace of any kind of theoretical justification.

You presumably don't believe that sperm and eggs have consciousness. I assume that you also accept that consciousness can't exist without a functioning brain. So you already accept that consciousness does indeed emerge at some stage of the brain's development. There's no reason to suspect that the brain can't be modelled as a computer programme, and many, many reasons to suspect that it can (if it can't, it's due to some hitherto unknown feature of the universe). Until somebody can come up with a concrete objection and give a mildly plausible explanation of what this unknown feature is, it's safe to assume that we're talking about a computer programme of such complexity and sophistication that we don't even have the analytic tools to understand it properly, never mind replicate it.

Every day another bundle of evidence lands on my side of the scales, while the other side remains empty. Nowadays, neuroscientists can trace electrical pulses from the retina, and identify the specific neurons where the patterns of excitation are abstracted into lines, then into shapes, then into specific objects - say a cup, then up to the consciousness where the only thing reported might be 'my favourite cup' - the consciousness doesn't actually "see" any of the detail, it paints the picture from its abstract conception of the cup.
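The retina-to-"my favourite cup" hierarchy described here can be caricatured as a pipeline of abstraction stages, each discarding detail and passing a summary upward, so the top level only ever receives the final label. The stages and labels below are of course invented for the sketch, not real neuroscience:

```python
# Caricature of the abstraction hierarchy: raw excitation -> lines
# -> shapes -> object label. Each stage throws detail away, so the
# top level only ever 'sees' the final abstract label.

def extract_lines(pixels):
    # pretend edge detection: two units of excitation make one edge
    return ["edge"] * (sum(pixels) // 2)

def group_shapes(lines):
    # pretend grouping: enough edges get read as a cup-like outline
    return ["cylinder", "loop"] if len(lines) >= 2 else []

def recognise(shapes):
    return "my favourite cup" if shapes == ["cylinder", "loop"] else "unknown"

def perceive(pixels):
    # the 'conscious' report is the end of the chain, not the pixels
    return recognise(group_shapes(extract_lines(pixels)))

print(perceive([1, 1, 1, 1]))
```

Each function's input is already an abstraction of the one before - which is the sense in which "the consciousness doesn't actually see any of the detail".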

So, we can actually see in minute detail a lot of the information processing that goes on in the brain. We _know_ that the brain processes information and sends signals to the body; we have no evidence that it does anything else. We also know that it is of a design that is too complex for us to even formally analyse, and we have every reason to suspect that its design and power would be expected to produce immensely sophisticated decision making - there's every reason to suppose that consciousness is a computer programme.

Jonti said:
To prove just that last point -- how should I program one of my spare 'puters to make it even a teensy weensy bit conscious?

You'd just write a programme which was responsible for coming up with plans to direct the selection and orchestration of the vast number of sub-conscious routines that are hard-wired or have been programmed through training and are available to you but are not part of you. Once you had your sub-routines in place, it'd be plain sailing ;-)
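A minimal sketch of the architecture being gestured at here: a "conscious" planner that only selects and sequences pre-built sub-routines, never doing the low-level work itself. The routines, goals and state keys are all invented for the example:

```python
# Sketch: a planner that orchestrates canned sub-routines.
# The planner never touches the details - it only picks which
# hard-wired/trained routine runs next.

SUBROUTINES = {
    "scan_surroundings": lambda s: {**s, "map": "built"},
    "walk_to_food":      lambda s: {**s, "at_food": True},
    "eat":               lambda s: {**s, "hungry": False},
}

def plan(state):
    """The 'conscious' layer: choose the next routine by name only."""
    if "map" not in state:
        return "scan_surroundings"
    if not state.get("at_food"):
        return "walk_to_food"
    if state.get("hungry"):
        return "eat"
    return None  # goal satisfied; nothing left to orchestrate

def run(state):
    while (name := plan(state)) is not None:
        state = SUBROUTINES[name](state)   # details handled below 'awareness'
    return state

print(run({"hungry": True}))
```

Note the asymmetry: `plan` sees only coarse state and emits only routine names, while the routines do all the actual work - which is the "plain sailing" being joked about, since in practice building the routines is the hard part.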
 
weltweit said:
I think there is a problem with this. I myself love to use computer analogies for brain type activity but I am always put down by the arguments that the brain is actually nothing like a computer.

For a start, the brain is massively parallel, whereas a computer has a largely sequential processor. Nothing AFAIK approaching the parallelism of the brain has ever yet been laid down in silicon or on circuit boards.

That's not the important difference. There is nothing computable with parallel processing that can't be computed on a single processor - indeed, any parallel architecture can be simulated on a sequential processor, just much more slowly. The big architectural difference between the brain and the stuff that we build is the presence of numerous feedback loops and cross-functional links in the brain.
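The claim that any parallel architecture can be simulated sequentially (only more slowly) is easy to demonstrate with a toy: units that would update simultaneously are stepped one by one, each reading the *previous* state, so the result matches a truly parallel update. The little network below is invented for the illustration:

```python
# Sequential simulation of a parallel update: every unit's new state
# is computed from the frozen pre-step state, exactly as if all units
# had fired at once - just one after another in time.

def parallel_step(state, weights):
    old = dict(state)                      # freeze the pre-step values
    new = {}
    for unit, inputs in weights.items():
        # each unit sums its (old) weighted inputs and thresholds them
        total = sum(old[src] * w for src, w in inputs)
        new[unit] = 1 if total >= 1 else 0
    return new

# Tiny made-up network: c fires when both a and b fired last step.
weights = {
    "a": [("a", 1)],                       # a holds its own value
    "b": [("b", 1)],
    "c": [("a", 0.5), ("b", 0.5)],
}

state = parallel_step({"a": 1, "b": 1, "c": 0}, weights)
print(state)
```

The only cost of the sequential version is time proportional to the number of units per step - the computed function is identical, which is the sense in which parallelism isn't the important difference.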

We build systems by bundling stuff up in isolated functional units, with the inner workings as obscure as possible, and we make every effort to ensure that we do not interfere with the internals of any other functional unit. We also attempt as much as possible to eliminate state from our architectures - we do not generally want the inputs to our functions to depend on previous outputs. We do it like this because it renders the system amenable to analysis. The complexity of our systems is already such that if we don't do it like this, they become far, far too complex for us to programme.

The brain, by contrast, depends heavily upon feedback loops - with attention and qualia suppression both depending heavily upon feedback. In neural network simulations, as soon as we introduce a few feedback loops, we run into hideous problems with cycles and so on - problems which we can't even analyse properly. If we tried to build even the simplest thing in such a manner, it would have the equivalent of permanent, total epilepsy.

Also, neurons tend to be tightly coupled to other neurons in functional clusters, but there are seemingly random interconnections all over the place - stray connections into some deep part of the brain from a visual-processing neuron. In computer programming terms, this is the equivalent of letting the state of all of your variables depend simultaneously on the state of hundreds of other variables, occasionally being influenced by a few million more - not exactly easy to figure all that out when writing code.
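The feedback problem can be felt even in a two-neuron toy: a mutual excitatory loop with gain above 1 blows up into runaway activity at every step (the "permanent, total epilepsy" above), whereas the same pair with gain below 1 quietly dies away. The numbers are purely illustrative:

```python
# Two toy 'neurons' exciting each other - the simplest feedback loop.
# With loop gain > 1 the activity grows without bound at every step;
# with gain < 1 it decays to nothing. Stability depends entirely on
# the loop, not on either neuron in isolation.

def step(x, y, gain):
    return gain * y, gain * x              # each neuron drives the other

x, y = 1.0, 1.0
for _ in range(10):
    x, y = step(x, y, gain=1.5)            # runaway: grows every step

u, v = 1.0, 1.0
for _ in range(10):
    u, v = step(u, v, gain=0.5)            # damped: fades every step

print(x, u)
```

With hundreds of cross-connections per unit, as in the cortex, there is no local way to see which loops are stable - which is why such architectures resist the analysis our engineered systems are designed to permit.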
 
gurrier said:
... You'd just write a programme which was responsible for coming up with plans to direct the selection and orchestration of the vast number of sub-conscious routines that are hard-wired or been programmed through training and are available to you but are not part of you. Once you had your sub-routines in place, it'd be plain sailing ;-)
Yes, that's your *claim* -- I understand that's what you're claiming.

What theory lies behind the claim? Why do you imagine a sufficiently complicated electrical network becomes conscious?
 
weltweit said:
Which helps establish that human consciousness is something special how?
Not a word I used here, in the context of talking about the phenomenon of "blindsight".

Blindsight shows, I suggested, that perception and an informed response to it can take place without conscious awareness.
 
gurrier said:
..................... Also, neurons tend to be tightly coupled to other neurons in functional clusters, but there are seemingly random interconnections all over the place - stray connections into some deep part of the brain from a visual-processing neuron. In computer programming terms, this is the equivalent of letting the state of all of your variables depend simultaneously on the state of hundreds of other variables, occasionally being influenced by a few million more - not exactly easy to figure all that out when writing code.

Interesting - the feedback links and links across separate sections are new to me; I was not aware of that.

I think humans (and other complex animals) are quite amazing for all the sensory inputs that are available to them and the nervous system that transmits it all.

We have sight, colour, hearing, smell, balance, taste, temperature, weight, pressure, pain, millions of hair sensors, skin sensors and so on, and derive pleasure, etc. - and somehow all these sensors or sensor systems manage to stay quiet when we do not need them, or we are able to tune out the noise of unnecessary sensor inputs.

Indeed, mind over matter, when that facility can be learnt - and I have managed to tune out real pain altogether on one occasion myself.

When we try to connect together the different systems on a modern car, we increasingly use multiplex or bus systems rather than direct wiring, and we often end up building a lot more intelligence into each sensing device, such that it only sends data under certain conditions. But we arguably build far more intelligence into each sensing device, and more complexity into the transmission system, than exists in a single human nerve cell attached to a hair and the nervous system that takes its signal to the brain.

Arguably, if you built a computer connected to all the man-made sensor devices that we have these days, plus a way of getting around, it could be made to learn things on its own - indeed, isn't it ASIMO, the Japanese robot, that does just that? It could be aware and it could learn, but still arguably only within the parameters set by its programmers. Thus aware it might be - but conscious? For me, that is another question.
 
weltweit said:
... ASIMO, the Japanese robot ... could be aware and it could learn but still arguably only within the parameters set by its programmers, thus aware it might be but conscious, for me that is another question.
This line of thinking is just evidence and theory free speculation, but it has one hell of a grip on cognitive science right now.

Again, why should a sufficiently complex piece of data processing kit magically become aware? What theory of consciousness leads people to such a conclusion?

Show me the code!
 
Jonti said:
What theory lies behind the claim? Why do you imagine a sufficiently complicated electrical network becomes conscious?
What theory lies behind it? It is a theory! It's premised on the assumption that the brain is an information processor. It's also based on a reasonable knowledge of the state of the art in cognitive neuroscience and AI - in particular, a familiarity with all of the stuff that consciousness definitely doesn't do, or more properly doesn't need to do, since it happens outside of awareness. By a process of elimination, you are left with long-term planning and strategy.

The more you think about what information you would need access to in order to actually write such a program, the more it starts to look like what consciousness looks like from the inside.
 
That "a sufficiently complex electrical network will become conscious" is not exactly a theory of consciousness. It's more of an assertion. What's so special about electricity? Why didn't people say the same about specifically *clockwork* automata. Oh, wait ... they did :D

Such automata may compute, but they do not think, for they only compute with a rigidly defined syntax - one which is quite devoid of meaning to the body of the engine. Semantics ("meaning") has no role to play in such devices. Such Turing machines simply cannot do things that any thinking mathematician can - and Gödel showed this more than fifty years ago!

I'm not saying that "data processing" analogies are completely worthless for understanding brain mechanisms. But that they cannot be the whole story of consciousness seems undeniable.
 