
Can Evolutionary Theory Explain Human Consciousness?

Jonti said:
This line of thinking is just evidence and theory free speculation, but it has one hell of a grip on cognitive science right now.

Again, why should a sufficiently complex piece of data processing kit magically become aware? What theory of consciousness leads people to such a conclusion?

Show me the code!

Can't, Jonti :) don't have the code!!

Asimov-like creations are certainly aware of all the sensor inputs they are created with; indeed they could be argued to be conscious of the distance to the wall on their right, perhaps by dint of distance information being passed to their processor by an ultrasonic sensor measuring that distance.

But that is consciousness on a very limited scale, of a few hard-wired input devices ... I am sure that the human consciousness people have been discussing in this thread is supposed to be much, much more significant, though I am having very real trouble defining just what it is.
 
Jonti said:
That "a sufficiently complex electrical network will become conscious" is not exactly a theory of consciousness. It's more of an assertion. What's so special about electricity? Why didn't people say the same about specifically *clockwork* automata. Oh, wait ... they did :D
That's actually a category error ;) The programme is the important thing - the algorithms, data structures and the inputs and outputs. You could build it with valves or mirrors if you wanted.

Jonti said:
Such automata may compute, but they do not think, for they only compute with a rigidly defined syntax -- one which is quite devoid of meaning to the body of the engine. Semantics ("meaning") has no role to play in such devices.
Semantics are a diversion. As I explained above, you can bind whatever symbols you like to whatever semantics you want - the hard part is to know how to map the semantics from the real world.

Jonti said:
Such Turing machines simply cannot do things that any thinking mathematician can. And Godel showed this more than fifty years ago!
Most thinking mathematicians, and virtually all thinking cognitive scientists, reject Penrose's conclusions. Godel may have shown that there are certain classes of stopping problem which are not computably decidable, but neither he nor Penrose demonstrated that there is anything non-computable going on in human brains.
 
This post is a bit of an aside:

The original thread question was "Can Evolutionary Theory Explain Human Consciousness?"

For me that immediately raised the question:

What exactly is human consciousness and how can it be defined?

I have just been back and re-read the thread picking bits that seem to have some logic to my mind which I include below. It still seems one could pick and mix between these and come up with quite different definitions of what exactly human consciousness is.

Anyhow, I include the below (hope nobody minds) in case there are others like me struggling with definitions.

........................................

Definitions of human consciousness from the thread so far:

Nikolai: self-aware

Kizmet: We didn't evolve consciousness. We evolved tools. Those tools give us consciousness as a by-product of their function.

goldenecitrone: studies of bonobos and chimps suggest they have a similar form of consciousness to our own.

gorski & Nikolai: Read Hegel, or Sartre, Hartmann, Merleau-Ponty

goldenecitrone: Dasein (the German for "being there")

nosos: Part of our existence is defined by our understanding of our own existence. This is what consciousness is. - - human beings are self-interpreting language animals. We don't just have self-awareness in the sense of understanding that if a bus hits me rather than the other cat then I'll die and it won't. We have self-awareness in the sense of self-interpretation:

littlebabyjesus: We should not consider consciousness as simply present/absent. It is certainly something that develops well after birth, and continues to develop. I am more conscious now than I was when I was 10, and when I was 10 I was more conscious than I was when I was 2.

Kizmet: Consciousness is of oneself.. and awareness is of stimuli.

nosos: I like to think of consciousness as existing on a spectrum from disengagement (abstract reflection, a sense of being "inside ourselves") to the sort of extreme state of engagement described as Samadhi or flow.

Lord Hugh: I am going to define consciousness as the activity of the brain, perhaps in a certain area. What area that is, I'm unsure, but were I to speculate, it would be in / inclusive of the associatory cortex. This is the place in the brain where our sensory inputs get linked together. I believe that this would be the logical place for "consciousness" as we experience it to happen:

Kizmet: You will be conscious of a number of factors.. these may or may not be the specific stimuli.. but that means you possess consciousness. But you're a human.. so you always do.

Fruitloop: What I suspect is that there is a continuum of consciousness, and that our consciousness depends on the basic embedded interaction of sense, processing and motivation that the mosquito displays when it seeks you out once the lights go off, plus memory plus the symbolic order plus the ability to introspect our own mental states.

Knotted: There are obvious advantages to our ability to problem solve for example. Is that an activity that requires 'consciousness'? Of course. Something which is not conscious cannot have problems and therefore has nothing to solve.

gurrier: it's the functional division of your brain which deals with long-term planning and strategy, the stuff that is too complex to be encoded in stimulus response behaviours (move away from pain) or patterns (seek food).
 
Oh, at some point in the thread someone mentioned the mirror test, in which apparently a young human or an animal is presented with a mirror and watched to see whether it recognises that the reflection in the mirror is itself.

When I was young we had two Jack Russell terriers, a mother and daughter. The mother was an adventurous type who loved getting down fox and badger holes, always rushing out to explore the natural world; she ignored things like mirrors as if she knew there was nothing there of any interest or reality.

The daughter, much younger, was not nearly so adventurous, perhaps even a scaredy cat, but she was fascinated by her own reflection. On top of that she used to watch television intently for things she recognised: other animals, people she did not like. She was very animated when watching and would bark loudly at cats whenever they appeared on the box.

So, same creatures: one passed the mirror test (probably), the other simply ignored it because she had better things to do!
 
gurrier said:
Since there's no reason to suspect that the brain can't be modelled as a computer programme, and many, many reasons to suspect that it can (if it can't it's due to some hitherto unknown feature of the universe), until somebody can come up with a concrete objection and can give a mildly plausible explanation of what this unknown feature of the universe is, it's safe to assume that we're talking about a computer programme of such complexity and sophistication that we don't even have the analytic tools to understand it properly, never mind replicating it.

You have to be careful when you assume that.

Yes, the function of the brain can be 'reproduced' by a sufficiently well-written and powerful program.

But that doesn't mean that's how it performs those functions.

The function of computation is to take any particular task.. break it down into its component parts and work through it.

It's a method of solving almost any problem.
 
but neither he nor Penrose demonstrated that there is anything non-computable going on in human brains.

Oh, dunno... Art, Philosophy, even Science [!!!!], Morals, Feelings, Values, the need to be Recognised and Respected, the need for Freedom, the need to Love and Be Loved, Friendship, the Love of Mankind and our Environment... that sorta thing...

Or will you quickly sort it all out for us and then PCs will acquire awareness and consciousness and rule the world?!?
 
gurrier said:
It's obvious how having such a capability is evolutionarily advantageous

No it is not. In fact the reverse is obvious. It is obvious that consciousness is an extremely self-destructive capability for a species to possess. Not only is consciousness the cause of great misery to individual organisms, it has also provided the human species with the means to destroy itself. That fact alone proves that evolutionary theory is inadequate to explain consciousness.
 
The more you think about what information you would need access to in order to actually write such a program, the more it starts to look like what consciousness looks like from the inside.

Now, in these circumstances, I don't know whether to laugh my head off or cringe...:rolleyes: :D

Bold statements. You know what it is - exactly and from the inside. And gradually we will be able to achieve it. Actually, not very long now...:rolleyes:

Since there's no reason to suspect that the brain can't be modelled as a computer programme, and many, many reasons to suspect that it can (if it can't it's due to some hitherto unknown feature of the universe), until somebody can come up with a concrete objection and can give a mildly plausible explanation of what this unknown feature of the universe is

Well, that's that then... Any day now.........:D

Because:

there's every reason to suppose that consciousness is a computer programme.

Echhhhh........
 
gorski said:
Now, in these circumstances, I don't know whether to laugh my head off or cringe...:rolleyes: :D

Or point out, again, that they ought to read Hegel. They won't, of course, and to be brutally frank, that means there ain't much point in engaging with them. We are dealing with fanatics here my friend...
 
And here it is, for all to inform themselves... ;)

http://www.marxists.org/archive/index.htm

http://www.marxists.org/reference/archive/hegel/index.htm - "Phenomenology" should be a minimum, I guess... :)

Hegel’s Phenomenology of Mind
CONTENTS
Synopsis
Preface
Introduction
A. Consciousness
I. Sense-Certainty, This, & Meaning
II. Perception, Thing, & Deceptiveness
III. Force & Understanding
B. Self-Consciousness
IV. True Nature of Self-Certainty
A. Lordship & Bondage
B. Unhappy Consciousness
C. Free Concrete Mind
(AA). Reason
V. Certainty & Truth of Reason
A. Observation as Reason
a. b. c.
B. Realization of rational self-consciousness
a. b. c.
C. Individuality
a. b. c.
(BB). Spirit
VI. Spirit
A. Objective Spirit: the Ethical order
a. b. c.
B. Culture & civilization
I. World of spirit in self-estrangement
a. b.
II. Enlightenment
a. b.
III. Absolute Freedom & Terror
C. Morality
a. b. c.
(CC). Religion
VII. Religion in General
A. Natural Religion
B. Religion as Art
a. b. c.
C. Revealed Religion
(DD). Absolute Knowledge
VIII. Absolute Knowledge
 
A computer has no consciousness of our world because it is not embedded in it the way we are – no drives, no aversions, no death - nothing to play for, basically. Our consciousness on the other hand starts with this and only much later (on both a personal and an evolutionary scale) do we get symbolic systems capable of self-reflection etc. So the question of why a conscious entity has a life-world is completely backwards – beetles are the successful norm and we are the freaks.

Personally I think that the concentration of AI on making computers that interact with our world like us is pretty wrong-headed - they have their own embeddedness in a different (man-made) environment, and the problem of bringing them into interaction with our environment is one of robotics, not AI. Unfortunately at the moment the raw processing power is less than a mouse's, and the capacity for adaptation beyond predetermined parameters probably about that of a house-fly, so what do you expect? It's a matter of time though, and progress is moving forward on all fronts.

Probably eventually we would have great difficulty convincing them that we have consciousness and qualia, since we are basically made of ham. How could such a thing have an internal life? :D
 
Marx said:
One of the most difficult tasks confronting philosophers is to descend from the world of thought to the actual world. Language is the immediate actuality of thought. Just as philosophers have given thought an independent existence, so they were bound to make language into an independent realm. This is the secret of philosophical language, in which thoughts in the form of words have their own content. The problem of descending from the world of thoughts to the actual world is turned into the problem of descending from language to life.

...

The philosophers have only to dissolve their language into the ordinary language, from which it is abstracted, in order to recognise it, as the distorted language of the actual world, and to realise that neither thoughts nor language in themselves form a realm of their own, that they are only manifestations of actual life.

German Ideology, Chapter 3
 
gurrier said:
That's actually a category error ;) The programme is the important thing - the algorithms, data structures and the inputs and outputs. You could build it with valves or mirrors if you wanted.


Semantics are a diversion. As I explained above, you can bind whatever symbols you like to whatever semantics you want - the hard part is to know how to map the semantics from the real world.


Most thinking mathematicians, and virtually all thinking cognitive scientists, reject Penrose's conclusions. Godel may have shown that there are certain classes of stopping problem which are not computably decidable, but neither he nor Penrose demonstrated that there is anything non-computable going on in human brains.

Well, I'm not sure you're right about that. In any case, it's not much of an argument that most people reject Penrose's conclusions. It's not really any better an argument than the way Penrose supports his conclusions.

And I think there's something he's onto. There's a demonstration of Pythagoras' theorem - not a proof, a demonstration - by getting a set of squares of two different sizes that tessellate, and then drawing lines from corner to corner of the squares, making triangles and larger squares. Then you "see" that it just has to be the case that the area of the square on the hypotenuse is equal to the sum of the areas of the two smaller squares. It's just obvious. But it's not a proof. You just see it, and yet it's easier to follow than a formal proof. But a computer definitely wouldn't get it.

I was quite surprised the other day to find that it's really very difficult to define a reciprocal relationship in prolog.

If you have something like friend(tom,bill). - and that means tom is a friend of bill - obviously that doesn't mean, for the computer, that bill is a friend of tom. But if you try to give the computer the information friend(X,Y) :- friend(Y,X). (meaning: if there's some Y that's a friend of X, then X is a friend of Y), the computer will go into an endless loop.
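To make that concrete, here's a minimal sketch of what happens, plus one standard workaround (SWI-Prolog or similar; the friends/2 wrapper and the names are just mine for illustration):

% The fact, stated only one way round.
friend(tom, bill).

% The naive symmetric rule:
%   friend(X,Y) :- friend(Y,X).
% Any query that ought to fail (or any request for further solutions)
% then recurses forever, because the rule keeps calling itself with
% the arguments swapped.

% One common workaround: leave the facts alone and put the symmetry
% in a separate wrapper predicate that only ever calls the facts.
friends(X, Y) :- friend(X, Y).
friends(X, Y) :- friend(Y, X).

% ?- friends(bill, tom).    succeeds
% ?- friends(tom, alice).   fails cleanly instead of looping

That fixes the loop, but of course it doesn't touch the deeper question of whether the machine means anything by it.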

I don't think semantics are a diversion - all human understanding takes place in a temporal context. Just try for a moment to imagine how you could give a computer a functional understanding of the meaning of past, present and future, and their relationships with all the different tenses, and the different significances of past, present and future - and try to see how you could do it in a non-circular way, grounding it, so that at the end the computer doesn't just connect a bunch of equally meaningless concepts - and you may get some idea of the scale of the problem.
 
Fruitloop said:
Isn't the problem that prolog is only first-order predicate logic? That and the fact that it's mental, of course ;)

Well, yeah, I'm sure it's not entirely surprising that this kind of thing is problematic, and it's not a knock-down argument against AI by any means, I was just trying to use it to illustrate the point that it is difficult to get meaning out of purely syntactic relations.
 
Fruitloop said:
Sure. It's impossible, probably.

Why are you doing prolog, and have you done any Scheme?

I'm doing Prolog because I'm crazy enough to think that I've got insights that'll make it possible to simulate intelligence and talk to computers in English. In a nutshell, I think Chomsky deliberately bamboozled the experts by creating a sharp distinction between syntax and semantics, and thus set up an insoluble problem as far as natural language processing is concerned.

When the boffins invented third-generation programming languages, they generally thought they'd be programming in English before long, but after trying for a while the general conclusion became that it's impossible.

I reckon it might not be, and so I quite want to learn to program so I can try some of my ideas out, because it's not much use me telling them to anyone, because if I'm right, they could always just nick them. (though I'd be happy to form a company with some programmers, if my ideas were protected.)

I'm a rubbish coder, at the mo, - but I think I might be quite good at program design. And there might be a lot of money in my ideas, - e.g., making little AI commanders that you can talk to in a limited domain, and leave to run your virtual empire competently while you go to work or go on holiday.

Prolog's just the AI programming course on offer here - I don't know anything about Scheme; what is it?
 
Fruitloop said:
I would have said that the ability to problem solve doesn't require consciousness in the sense that we're talking about, it just requires computation. For an example of the kind of thing I'm talking about have a look at the Chalmers paper A Computational Foundation for the Study of Cognition

Well I don't think that's obvious. Solving a particular problem, or even learning to solve a particular problem, should be computational. However, understanding a problem in the first place is essentially semantic - to understand that something is a problem to us in the first place we have to have a notion of meaning.

Computation is syntactical. There may be a way to generate semantics computationally but that seems very difficult.

I've scan read the above article. I'll look at it more carefully later. Also if prompted I'll illustrate what I mean above.

However, to go back to what I was saying: if you have a generalised problem-solving machine - that is, a machine that can recognise and judge different priorities in different and novel contexts - then surely this machine is conscious? What's missing?

Of course it depends on what we mean when we talk about consciousness and I don't suppose any of us know what we mean.
 
I googled Scheme - a variant of Lisp apparently, which I know equally little about - except I got the impression people generally think Prolog's better than Lisp.
 
It's a variant of Lisp that's a direct implementation of the lambda calculus, which is a formal language for expressing functions. Like Prolog it's quite revelatory in terms of how it works, how it makes you write stuff, but only crazy people write large applications in it.

A good intro to both it and programming in general is the classic Structure and Interpretation of Computer Programs, which is now free on the web:

http://mitpress.mit.edu/sicp/

Even if you can't be bothered with the examples I can't recommend highly enough giving it a skim-read.

There's also Schelog, which is a Prolog-scheme embedding, but we really are entering murky water here :D
 
Knotted said:
Well I don't think that's obvious. Solving a particular problem, or even learning to solve a particular problem, should be computational. However, understanding a problem in the first place is essentially semantic - to understand that something is a problem to us in the first place we have to have a notion of meaning.

Computation is syntactical. There may be a way to generate semantics computationally but that seems very difficult.

I've scan read the above article. I'll look at it more carefully later. Also if prompted I'll illustrate what I mean above.

However, to go back to what I was saying: if you have a generalised problem-solving machine - that is, a machine that can recognise and judge different priorities in different and novel contexts - then surely this machine is conscious? What's missing?

Of course it depends on what we mean when we talk about consciousness and I don't suppose any of us know what we mean.

Mm, I've had similar thoughts to this myself, but I'm not quite sure what their significance is.
 
An imaginary machine for finding mathematical theorems could stumble on the Fibonacci sequence and then spend eternity on it, because the motivation is as important in its absence as in its presence.
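Something like this, say - a toy Prolog sketch (all illustrative; the SWI-Prolog between/3 call with inf just means "count upwards forever") of a machine grinding out Fibonacci facts with no notion of which ones are worth having:

% Naive Fibonacci.
fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1, N2 is N - 2,
    fib(N1, F1), fib(N2, F2),
    F is F1 + F2.

% Enumerate "results about the sequence" with nothing to say when to
% stop or why any of them matter - it simply runs forever.
% ?- between(2, inf, N), fib(N, F), writeln(N-F), fail.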
 
Can Evolutionary Theory Explain Human Consciousness?
phildwyer said:
Obviously not. Human consciousness can, however, explain evolutionary theory.

That's interesting.

phildwyer can you expand on that? :)
 
He means 'we worked it out.'

:p

It's bollocks, of course.. because, technically - if you believe in him, God probably worked it out.

:D
 
I think most of the answers here have been staring down the wrong end of the telescope. They start by assuming that we know what consciousness is and what is required is a convincing explanation of how it evolved.

My view is that you should go about it in exactly the opposite direction. We know that we evolved and the more we know about that process and about how the brain works the better our understanding of what consciousness is.
 
Demosthenes said:
And I think there's something he's onto. There's a demonstration of Pythagoras' theorem - not a proof, a demonstration - by getting a set of squares of two different sizes that tessellate, and then drawing lines from corner to corner of the squares, making triangles and larger squares. Then you "see" that it just has to be the case that the area of the square on the hypotenuse is equal to the sum of the areas of the two smaller squares. It's just obvious. But it's not a proof. You just see it, and yet it's easier to follow than a formal proof. But a computer definitely wouldn't get it.

If a computer had the ability to compare areas visually and heuristically it could come up with the same conclusion. The problem is that when we 'just know' certain things, we don't actually know them - for an awfully long time we just knew that the world was flat, the sun spun around it and things fell downwards - and we knew these things at least as much as anybody knows that 2 even numbers can never be added to give an odd number or whatever.

Basically, it's really clear that our brains do a huge amount of pattern matching. One of the things we are hard-wired to identify is repeated patterns: if we see something that appears to be a cycle, we assume it is recurring unless we can see something that will interfere with the pattern. Mostly we're right, but sometimes we're wrong - absolutely nothing non-computable going on at all, and simply programmable heuristics too.
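Just to show how programmable that heuristic is, here's a toy version (a Prolog sketch, names all mine; it only handles exact repetition, which is far cruder than whatever the brain does): guess the next element of a sequence by assuming the shortest repeating pattern that fits what's been seen so far.

% guess_next(+Seq, -Next): assume Seq repeats with the shortest
% period that fits, and predict the next element accordingly.
guess_next(Seq, Next) :-
    length(Seq, Len),
    between(1, Len, PLen),          % try the shortest period first
    length(Pattern, PLen),
    append(Pattern, _, Seq),        % candidate period = a prefix of Seq
    repeats(Pattern, Seq),
    Index is Len mod PLen,
    nth0(Index, Pattern, Next),
    !.                              % commit to the first (shortest) fit

% repeats(+Pattern, +Seq): Seq is Pattern repeated over and over,
% possibly cut off part-way through the final repetition.
repeats(_, []).
repeats(Pattern, Seq) :-
    append(Pattern, Rest, Seq),
    repeats(Pattern, Rest).
repeats(Pattern, Seq) :-
    Seq \= [],
    append(Seq, _, Pattern).

% ?- guess_next([a,b,a,b,a], X).         gives X = b
% ?- guess_next([1,1,2,1,1,2,1], X).     gives X = 1

It's wrong whenever the world isn't actually cyclic, which is exactly the point.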

Demosthenes said:
I was quite surprised the other day to find that it's really very difficult to define a reciprocal relationship in prolog.

If you have something like friend(tom,bill). - and that means tom is a friend of bill - obviously that doesn't mean, for the computer, that bill is a friend of tom. But if you try to give the computer the information friend(X,Y) :- friend(Y,X). (meaning: if there's some Y that's a friend of X, then X is a friend of Y), the computer will go into an endless loop.

You would be quite wrong to infer from this example that computers have any problem at all in capturing and representing semantic information, or relationships of any type. There is a whole, vast area of research called the semantic web, with languages like OWL, KAoS, Rei, SWRL, etc., which are capable of expressing extremely complex ontological information.
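To give a crude flavour of that (nowhere near what OWL and friends can do - just a Prolog sketch with made-up names, where properties of a relation are stated as data and a generic rule consults them):

% A toy ontology: declare properties of relations as facts.
symmetric(friend).
transitive(ancestor).

friend(tom, bill).
ancestor(anne, beth).
ancestor(beth, carl).

% A generic query rule that consults those declarations.
holds(Rel, X, Y) :- call(Rel, X, Y).
holds(Rel, X, Y) :- symmetric(Rel), call(Rel, Y, X).
holds(Rel, X, Y) :- transitive(Rel), call(Rel, X, Z), holds(Rel, Z, Y).

% ?- holds(friend, bill, tom).      succeeds via the symmetry declaration
% ?- holds(ancestor, anne, carl).   succeeds via the transitivity declaration

The semantic web languages do this sort of thing on a vastly larger scale and with proper formal semantics behind it.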

Demosthenes said:
I don't think semantics are a diversion - all human understanding takes place in a temporal context. Just try for a moment to imagine how you could give a computer a functional understanding of the meaning of past, present and future, and their relationships with all the different tenses, and the different significances of past, present and future - and try to see how you could do it in a non-circular way, grounding it, so that at the end the computer doesn't just connect a bunch of equally meaningless concepts - and you may get some idea of the scale of the problem.
When I said that semantics are a diversion, I specifically meant that they are useless in distinguishing between human consciousness and AI. Any state in any computer programme can be associated with whatever semantics the designer wants, and this could easily be a much better mapping from the real world than the semantics that individual humans associate with various internal states. For heck's sake, half the computer scientists in the world are working on semantic web technologies - which are concrete ways of supplying all sorts of semantic structures to information that the computer has. What makes the semantic significance of consciousness's states different is that they are extremely rich, way, way, way richer than anything we can now model, and they are often extraordinarily good and sophisticated mappings from reality (e.g. the "worried about personal survival" state of human consciousness is often an almost perfect reflection of how the organism's survival is threatened).
 