
Can Evolutionary Theory Explain Human Consciousness?

Jonti said:
The comment was addressed to gurrier.

I took your statement that causally all you need is computation to be equivalent to gurrier's position. I understand that to be that consciousness is just a pattern of information processing; when the right algorithms run on electronic data processing equipment (but actually regardless of the physical substrate exercising the algorithm) consciousness would emerge.

Well, I still don't agree. It could do, but it doesn't have to.
 
kyser_soze said:
Well it's pretty fucking easy to work out if you know what narcissism means, so no.

OK then - philosophy is where we hold up an intellectual mirror to ourselves and to what it means to be human, and no matter which way you cut it, and as demonstrated on here, we're not just animals, we're special animals etc.

Narcissistic.

It's actually about more than that.

Metaphysics, ie.
 
kyser_soze said:
I'd prefer different, especially since it comes with a pre-supposition of some kind of superiority (which let's face it is a Judeo-Christian hangover) over every other living thing on Earth.

Nonsense, on both counts.

Animals are way too superior when it comes to sheer power, strength, smell, eyesight etc. etc. - that which they have to be really good at to survive! We're NOT animals, I tell ya! :D

Christian? You should have told that to pre-Christians of Greece etc. I mean, honestly...

kyser_soze said:
Maybe re-heating Hegel would work too...

He's always HOT!!! :D Never really went completely cold... :p

Seriously, no one serious takes that stuff literally, FFS... [Really...:cool: ]
 
Jonti said:
The comment was addressed to gurrier.

I took your statement that causally all you need is computation to be equivalent to gurrier's position. I understand that to be that consciousness is just a pattern of information processing; when the right algorithms run on electronic data processing equipment (but actually regardless of the physical substrate exercising the algorithm) consciousness would emerge.

Well yeah, I agree with you there. The causal structure has to mirror the computational structure, otherwise you end up with the (Dennett?) absurdity that a rock implements every finite-state automaton.
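The rock/automaton absurdity can be sketched as a toy Python snippet (all names here are made up for illustration): if you're allowed to pick the mapping from physical states to computational states *after the fact*, any object that merely passes through distinct states over time "implements" any automaton run you like.

```python
# Toy version of the mapping argument: a rock passes through distinct
# physical microstates at each tick; a gerrymandered relabelling makes
# those microstates "implement" an arbitrary finite-state automaton run.

def rock_implements(fsa_run):
    """Map a rock's successive microstates onto an arbitrary FSA run."""
    # The rock's physical states at ticks 0, 1, 2, ... (all distinct).
    rock_states = [f"microstate_{t}" for t in range(len(fsa_run))]
    # The after-the-fact mapping: rock state at tick t -> FSA state at tick t.
    mapping = dict(zip(rock_states, fsa_run))
    # Under this mapping the rock's evolution reproduces the FSA run exactly.
    return [mapping[s] for s in rock_states]

# Any run of any automaton is "implemented" by the rock under some mapping:
run = ["idle", "counting", "counting", "halt"]
assert rock_implements(run) == run
```

The trick, of course, is that the mapping does all the work, which is why the causal structure (not just a state-to-state correspondence) is said to matter.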
 
We're NOT animals, I tell ya!

Oh, but we are. A different type, but we're an animal and nothing you've ever argued for the converse has convinced me - even the 'special' category smacks too much of ancient-world thought to me.

OK, point taken about the Greeks...indeed, most other ancient civilisation philosophies...
 
Fruitloop said:
Well, I still don't agree. It could do, but it doesn't have to.
Hey, I did say if and the right algorithms! :p

And if it could ... by what theory? What's the justification for the belief that just processing data could cause consciousness to emerge? :confused:
 
Fruitloop said:
Well yeah, I agree with you there. The causal structure has to mirror the computational structure, otherwise you end up with the (Dennett?) absurdity that a rock implements every finite-state automaton.
Heh! Dennett does take things a tad too far on occasion :D

He seems to think the colour-blind (achromatic) neurologist could nevertheless understand what "redness" is; but I think she would not understand my perception of red at all. And nor would she understand a poem like "Silver" -- and all the semantic mark-up in the world would not help her bridge the explanatory gap.

One thing Dennett is really excellent on is the demolition of the "epiphenomenal" view of consciousness. He points out that epiphenomenalism cannot explain why someone says they are conscious. The reason he gives is that an epiphenomenal consciousness cannot affect behaviour, including verbal behaviour, in any way!
 
phildwyer said:
Why do you contrast intelligent design with natural selection? The two are often found within the same theory, as in Paley for example. I suspect that you are identifying intelligent design with creationism or Biblical literalism. That is a serious error. My objection is not to natural selection per se, but to Darwin's monocausal and unidirectional reductionism.

Intelligent design presupposes designs. These designs have some form or other. No matter how small or piecemeal they might be, they are presented as givens. They cannot be described in terms of another design, or they would not be an explanation but merely part of one - they can have no meaning for us. They just are. Any explanation that terminates with a set of given designs has no determination of what those designs are - they are just brute empirical facts.

I have never seen Hegel argue against materialism, reductionism (unidirectional or otherwise), monocausalism or anything else like that. However, he does argue again and again against unmediated ideas, notions of things existing purely in themselves, and the type of crude empiricism that equates form with essence.

If evolution of life is multicausal with intelligent design popping in every now and then, then why is human cultural evolution not subject to intelligent design? Why are there no radical contingencies in human history for Hegel?

Hegel once bemoaned a time when the mere possibility of imagining something differently was considered sufficient grounds to reject it - what is real is rational and what is rational is real. Intelligent design is always a possibility, but never more. It might exist alongside other mechanisms, it might not. This is not Hegel, this is Hume: scepticism and the notion that all ideas are of equal value.

But anyway that's Hegel. He gets a bad press but has some great moments of lucidity.
 
Jonti said:
Heh! Dennett does take things a tad too far on occasion :D

He seems to think the colour-blind (achromatic) neurologist could nevertheless understand what "redness" is; but I think she would not understand my perception of red at all. And nor would she understand a poem like "Silver" -- and all the semantic mark-up in the world would not help her bridge the explanatory gap.

One thing Dennett is really excellent on is the demolition of the "epiphenomenal" view of consciousness. He points out that epiphenomenalism cannot explain why someone says they are conscious. The reason he gives is that an epiphenomenal consciousness cannot affect behaviour, including verbal behaviour, in any way!

It seems to be difficult for intelligent people to think about this a great deal without going a bit mental. I agree with him to the extent that eliminativism seems to solve a lot of problems, but 'hard' eliminativism also seems to create a lot more, and softer eliminativism doesn't answer many useful questions.

Although, going along with an ultra-hard eliminativism, I don't think the above argument is compelling. I mean, maybe consciousness has the same ontological status as Sherlock Holmes or unicorns; no-one wonders how behaviour can be affected by Sherlock Holmes (all those tourists at 221B Baker Street, for example) without Sherlock Holmes actually existing. Or at least, it seems stupid but not paradoxical.
 
gorski said:
:eek:

BAAAAAAHHHHHHHHH!!!!!!!!!!!!!:D :rolleyes: How people don't know just how little they know... And then the narcissism starts!!!:p :D
I'm happy to be corrected. In fact I'm keen that you do. Where does consciousness/the idea come from according to Hegel then?
 
Jonti said:
Well, does the physics and chemistry of real nerve cells involve non-computational processes or not?

The thing is, strong AI does indeed make a whole raft of assumptions. It's up to the proponents of strong AI to be explicit about these assumptions, and to justify them. I agree there is a metaphysical position of rigid determinism implicit in the claims of strong AI; that this is another assumption; and that it is unhelpful in this particular context. I think that does need to be tackled head-on, but I don't want to make this post too lengthy.

The strong AI claim is that stirring pure information (whatever that is) about, regardless of its physical representation, can cause consciousness to emerge (presumably the consciousness is nevertheless somehow a property of the physical system in which the information is sloshing around). We're usually invited to imagine this data sloshing around inside some kind of electronic data processing kit, an arithmetical calculating machine of some sort.

Such engines are determinate, so even if a consciousness were to "emerge" from the churning of data inside the circuitry, that consciousness would be unable to choose in any way. The conscious algorithms imagined by strong AI would be ineffectual, helpless witnesses, quite unable to influence their world. That's not the kind of consciousness that could have any role in evolution.

It's not quite absurd, but it does seem perverse to imagine a consciousness that is unable to make any choices. It just brings us back to the question of why does consciousness exist, if it does nothing? On the other hand, it is quite absurd to think that meaningful choices can be made in the absence of consciousness.

So it seems to me that churning information gets us nowhere. I'm more inclined to the view that consciousness adds meaning to data. Or to phrase that a little differently, consciousness accompanies not the processing of information, but the creation of information.

I think you are talking about something quite different here. I don't think strong AI needs to be deterministic, we can have perfectly good algorithms with a random component.

For me the most odd thing is that if there is an algorithm (deterministic or not) then it can be implemented anywhere.

But to answer your first question, I don't know. It seems very far fetched to say that there are non-computational processes going on. Possible but unlikely.
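The point above, that an algorithm (deterministic or not) can be implemented anywhere, is the multiple-realizability claim, and it can be illustrated with a toy sketch: the same abstract algorithm realised on two quite different "substrates" (the function names and the parity example are mine, purely for illustration).

```python
# The same abstract algorithm (parity of a bit string) realised two ways:
# an explicit finite-state transition table, and bare modular arithmetic.
# If only the computational structure matters, both realisations count
# equally as "running the algorithm".

def parity_table_machine(bits):
    # Substrate 1: a finite-state automaton given as a transition table.
    table = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for b in bits:
        state = table[(state, b)]
    return state

def parity_arithmetic_machine(bits):
    # Substrate 2: plain arithmetic, no explicit states at all.
    return "odd" if sum(int(b) for b in bits) % 2 else "even"

# Different realisations, identical behaviour on every input:
for s in ["", "1", "1101", "0000"]:
    assert parity_table_machine(s) == parity_arithmetic_machine(s)
```

Whether consciousness would follow the algorithm across substrates in the same way parity does is, of course, exactly what's in dispute.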
 
kyser_soze said:
I'd prefer different, especially since it comes with a pre-supposition of some kind of superiority (which let's face it is a Judeo-Christian hangover) over every other living thing on Earth.
We could kill just about every other living thing on earth if we wanted to.

But really, it depends how you're defining superiority
 
Spion, I am not evasive, I write a lot and don't mince my words but can't right now.

Early Phenomenology, it's online. Have a go, you'll be very pleasantly surprised just how "materialistic" the old "idealist" is... ;) And given good faith/open mind - you'll learn a helluva lot! :cool:

At least potentially! :D

Kyzer - on the other hand - will never understand the meaning of qualitative difference - animals, living beings, Humans, God...

Cheers!:cool:
 
Fruitloop said:
Well yeah, I agree with you there. The causal structure has to mirror the computational structure, otherwise you end up with the (Dennett?) absurdity that a rock implements every finite-state automaton.

I think that was Putnam.
 
gorski said:
Spion, I am not evasive, I write a lot and don't mince my words but can't right now.

Early Phenomenology, it's online. Have a go, you'll be very pleasantly surprised just how "materialistic" the old "idealist" is... ;) And given good faith/open mind - you'll learn a helluva lot! :cool:
No, come on. In your own words, please, or a quickly-digestible reference. This is a bulletin board not an MA seminar. To constantly say 'go and read this' just makes me think you haven't got a clue. So, come on, prove me wrong
 
Fruitloop said:
Zizek on what it means to be a Hegelian film critic (first 30 seconds only). Remind you of anyone you know?

:D
Teehee, nice one. It's a ludicrous position to take, but I do quite like his stuff
 
Kyzer - on the other hand - will never understand the meaning of qualitative difference - animals, living beings, Humans, God...

Meh. I do recognise the difference, just not that we're 'special' as you seem to think we are.
 
IMO the claim of strong AI was never that computation on its own is sufficient for consciousness. Such a theory should perhaps be renamed 'brain in a vat' AI or something. Computation + Dasein might suffice, but not computation on its own.
 
gorski said:
Of what you, in your confusion, can "get"...;) :cool:

Possibly. I don't think there is very much more. Like Darwinian theory, it is fundamentally very simple, but with laborious and careful application it describes things that are very complex. It just happens that some of your favourite Hegelian baubles are the result of a complex succession of straightforward manoeuvres. I wish you would see the elegance of Hegel's method, not just the boring intricacies of his system.

But anyway that's enough Hegel. I don't see the relevance. As Spion says, he never addressed how consciousness evolved, (except in a social context of course).
 
Knotted said:
As Spion says, he never addressed how consciousness evolved, (except in a social context of course).
I'm not sure that he explained its origin or evolution in that social context either. I'm happy to wait for our Young Hegelians to proffer an exegesis though :)
 
Jonti said:
Well, does the physics and chemistry of real nerve cells involve non-computational processes or not?
There is nothing to suggest that there is anything non-computable going on.

Jonti said:
The thing is, strong AI does indeed make a whole raft of assumptions. it's up to the proponents of strong AI to be explicit about these assumptions, and to justify them. I agree there is a metaphysical position of rigid determinism implicit in the claims of strong AI; that this is another assumption; and that it is unhelpful in this particular context. I think that does need to be tackled head-on, but I don't want to make this post too lengthy.

The assumption is that the brain can be modelled as a computer programme - there is, as I've said before, a whole heap of evidence suggesting this is the case and absolutely nothing to suggest that it is not. The sensible scientific stance is to tentatively assume that the brain can be modelled as a computer programme and pursue this line of investigation until it hits a wall. If the assumption were wrong, we would expect our investigations to be fruitless and to fail to explain observed phenomena through the computational model. This has not been the case - quite the opposite, in fact: the assumption that the brain can be modelled as a computer programme is the theoretical lever which underpins most of the vast body of hard, established facts about how the brain operates.

Jonti said:
The strong AI claim is that stirring pure information (whatever that is) about, regardless of its physical representation, can cause consciousness to emerge (presumably the consciousness is nevertheless somehow a property of the physical system in which the information is sloshing around). We're usually invited to imagine this data sloshing around inside some kind of electronic data processing kit, an arithmetical calculating machine of some sort.
Firstly, it's not pure information - it's algorithms, data structures, inputs and outputs. The idea that we can create abstracted models of computation independent of the physical substrate is not even an assumption - it's a truism - the physical substrata of our computers are designed precisely so that they will obey, as faithfully as possible, the mathematical abstraction of computation. It's the very basis for the entire discipline of computer science (and there can hardly be a discipline which better demonstrates the applicability of its theories in the real world).

Jonti said:
Such engines are determinate, so even if a consciousness were to "emerge" from the churning of data inside the circuitry, that consciousness would be unable to choose in any way. The conscious algorithms imagined by strong AI would be ineffectual, helpless witnesses, quite unable to influence their world. That's not the kind of consciousness that could have any role in evolution.
Rubbish. You can introduce whatever non-determinateness you want into a computer programme - race conditions, randomness, whatever - it's totally simple. There are, for example, simple plates which you can attach to your computer which detect particle impacts and introduce true entropy. You can also design algorithms which contain non-deterministic race conditions or depend on computations of sufficient complexity to produce chaotic outputs.
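As a minimal sketch of the point (the function and its use are illustrative, not any particular system's design): a programme can delegate a choice to entropy drawn from outside its own state, via the operating system's entropy pool, so the outcome is not a function of the programme's prior state.

```python
import os

# A deterministic rule with a non-deterministic step bolted on.
# os.urandom draws on the operating system's entropy pool (which may in
# turn be fed by hardware noise), so the choice below is not determined
# by anything inside the programme itself.

def choose(options):
    """Pick among equally-ranked options using OS-level entropy."""
    seed = os.urandom(4)                          # 4 bytes of external entropy
    index = int.from_bytes(seed, "big") % len(options)
    return options[index]

decision = choose(["left", "right"])
assert decision in ("left", "right")
```

Whether injecting entropy like this buys you anything that matters for free choice, as opposed to mere unpredictability, is a separate question.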

The second claim, of the poverty of consciousness algorithms, makes no sense to me. If your programme has, for example, a very complex set of goals, of different levels of immediacy and importance (say, like a person), then any event in the world has the potential to move the system closer to some of these goals and further away from others, and those events will have real meaning to the system. These events obviously include things that the system does, so the system's actions will have real meaning to itself. If the system's goals are incredibly complex, and events may influence the closeness to these goals in complex ways, then the meaning that events have for the system is complex, subtle and sometimes ambiguous.
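The goal-based account of meaning above can be reduced to a toy sketch (the goals, weights and event are all invented for illustration): an event's "meaning" for the system is its net effect on the system's weighted goals, and the same event can pull in opposite directions at once.

```python
# Toy goal-based meaning: an event matters to the system in proportion to
# how it moves the system toward or away from its weighted goals.

def meaning_of(event_effects, goals):
    """Net significance of an event: weighted sum of its effects on goals."""
    return sum(goals[g]["weight"] * delta
               for g, delta in event_effects.items())

goals = {
    "stay_fed":    {"weight": 5.0},   # immediate and important
    "finish_book": {"weight": 1.0},   # long-term, less pressing
}

# The same event can help one goal and hurt another - its meaning is the net:
lunch_invite = {"stay_fed": +1.0, "finish_book": -0.5}
assert meaning_of(lunch_invite, goals) == 4.5
```

With enough goals interacting like this, the "meaning" of any one event quickly becomes as subtle and ambiguous as the post describes.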

Jonti said:
It's not quite absurd, but it does seem perverse to imagine a consciousness that is unable to make any choices. It just brings us back to the question of why does consciousness exist, if it does nothing? On the other hand, it is quite absurd to think that meaningful choices can be made in the absence of consciousness.
Meaningful choices can, of course, be made in the absence of consciousness. A transistor can make a meaningful choice. A higher-level example is the choice to withdraw one's hand from pain - which operates without the input or prior awareness of consciousness (indeed, in some species this choice is hard-wired into the neurons of the spinal cord - the signal doesn't even have to reach the brain for the decision to be made).
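The reflex example amounts to a fixed local rule that fires before any higher-level processing sees the stimulus. A deliberately trivial sketch (the threshold and units are made up):

```python
# The withdrawal reflex as a "meaningful choice" with no awareness involved:
# a hard-wired local rule decides, and the brain is never consulted.

PAIN_THRESHOLD = 7  # arbitrary units, purely illustrative

def spinal_reflex(pain_level):
    """Local decision: withdraw if pain exceeds the threshold."""
    return "withdraw" if pain_level > PAIN_THRESHOLD else "hold"

assert spinal_reflex(9) == "withdraw"
assert spinal_reflex(3) == "hold"
```

The choice is "meaningful" in the goal-relative sense (it protects the organism) while being exactly as mechanical as a transistor switching.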

Jonti said:
So it seems to me that churning information gets us nowhere. I'm more inclined to the view that consciousness adds meaning to data. Or to phrase that a little differently, consciousness accompanies not the processing of information, but the creation of information.

You are quite confused about the nature of computation - giving meaning to data is a subset of "churning information".
 
gurrier said:
Rubbish. You can introduce whatever non-determinateness that you want into a computer programme - race conditions, randomness, whatever - it's totally simple

Take it from me, it's not just simple, it's damn near unavoidable!
 
Gurrier - you've still failed to answer (or indeed address) the David Chalmers zombie argument. We know that all the former is possible, we're doing it right now. Is that consciousness? If so then eliminativism is true and there is no question to be answered.
 