
Can Evolutionary Theory Explain Human Consciousness?

phildwyer said:
No it is not. In fact the reverse is obvious. It is obvious that consciousness is an extremely self-destructive capability for a species to possess. Not only is consciousness the cause of great misery to individual organisms, it has also provided the human species with the means to destroy itself. That fact alone proves that evolutionary theory is inadequate to explain consciousness.

Bollocks. Like most things, consciousness can be either good or bad. Just as it's taken us to these dark places, so can it also be our salvation; it can save as well as curse. It all depends on how you look at it.

Quite clearly it has provided some survival and breeding bonuses, cos otherwise we wouldn't all be here to sit around saying 'It's X', 'No it isn't, read Hegel'.
 
gurrier said:
When I said that semantics are a diversion, I specifically meant that they are useless in distinguishing between human consciousness and AI. Any state in any computer programme can be associated with whatever semantics the designer wants, and this could easily be a much better mapping from the real world than the semantics that individual humans associate with various internal states. For heck's sake, half the computer scientists in the world are working on semantic web technologies - which are concrete ways of supplying all sorts of semantic structures to the information that the computer has. What makes the semantic significance of consciousness's states different is that they are extremely rich, way, way, way richer than anything we can now model, and they are often extraordinarily good and sophisticated mappings from reality (i.e. the "worried about personal survival" state of human consciousness is often an almost perfect reflection of how the organism's survival is threatened).

Well, I think I disagree with what you specifically meant.

The example of temporality shows the existence of a bootstrapping problem:

it's as if you already need the experience of temporality for there to be non-conceptual content rich enough to give temporal concepts something to bite onto, so that they can be non-circularly defined in a way that means something.
 
Demosthenes said:
it's as if you already need the experience of temporality for there to be non-conceptual content rich enough to give temporal concepts something to bite onto, so that they can be non-circularly defined in a way that means something.

In Greek?

:D
 
Kizmet said:
Joke.. was saying it looked like Greek to me.. meaning there were a lot of words I just had to look up.. ;)
I didn't have to look up any of the words but still don't have a clue what it means :)
 
Kizmet said:
Joke.. was saying it looked like Greek to me.. meaning there were a lot of words I just had to look up.. ;)

:)

The richness of possible interpretations I could make for "in Greek" meant that, in my case, this particular computer went: malfunction, not interpretable.
 
Incidentally, I just checked and found over a thousand ACM/IEEE papers concerned with temporal ontologies - so either you're wrong about temporal concepts being unrepresentable by computers, which I suspect, or the ACM/IEEE peer-review process is making some big mistakes and publishing endless papers that solve impossible problems.
 
gurrier said:
I didn't have to look up any of the words but still don't have a clue what it means :)


Well, maybe if you have a go at imagining how you might give a computer a basic structure of temporality within which to interpret the significance of everything else, you might start to understand.
 
Demosthenes said:
:)

The richness of possible interpretations I could make for "in Greek" meant that, in my case, this particular computer went: malfunction, not interpretable.
Ah, but where's the reset button?

:)

I tend to write as I would speak.. so most of my posts follow on from the last few words of whatever I quote.
 
gurrier said:
Incidentally, I just checked and found over a thousand ACM/IEEE papers concerned with temporal ontologies - so either you're wrong about temporal concepts being unrepresentable by computers, which I suspect, or the ACM/IEEE peer-review process is making some big mistakes and publishing endless papers that solve impossible problems.


Yes, but you haven't read any of them. If you had, you'd find out just how difficult the problem is.
 
Demosthenes said:
Well, maybe if you have a go at imagining how you might give a computer a basic structure of temporality within which to interpret the significance of everything else, you might start to understand.

I'm still not sure exactly what you're getting at.. but as with anything, if a structure of temporality can be defined, it can be computed.
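
For a concrete flavour of what 'a structure of temporality that can be defined' might look like, here is a minimal illustrative sketch (not from the thread, all names hypothetical) using Allen's interval relations, a classic formal model of time:

```python
# Minimal sketch of a defined temporal structure: a few of James
# Allen's interval relations (1983). Purely illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float

def before(a: Interval, b: Interval) -> bool:
    return a.end < b.start

def overlaps(a: Interval, b: Interval) -> bool:
    return a.start < b.start < a.end < b.end

def during(a: Interval, b: Interval) -> bool:
    return b.start < a.start and a.end < b.end

breakfast = Interval(8.0, 8.5)
workday = Interval(9.0, 17.0)
print(before(breakfast, workday))  # True: once defined, it computes
```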
 
Fruitloop said:
I don't understand why you would want computers to represent temporal ontologies to be honest. :confused:

I wouldn't mind if my computer was still sorry for crashing last Thursday.
 
Kizmet said:
I'm still not sure exactly what you're getting at.. but as with anything, if a structure of temporality can be defined, it can be computed.

Well, I'm not sure you're right. But in any case, I'm not sure that a structure of temporality can be defined; maybe it can only be experienced, or only defined if experienced.

The kind of thing I'm getting at is the kind of thing knotted was getting at when he talked about the role of consciousness maybe being that it shows you that a problem's a problem, because it matters whether you can solve it or not.

The kind of thing I'm thinking about is: sure, you could give a computer networks of implications, like past implies finished, implies can't be changed, implies recorded, implies in memory. But of course, not all things in the past are recorded, or memorised, though the other implications might still hold.

You might try: future implies not yet, implies can be changed, implies it matters. But it might not matter, even though it's in the future. How do you get the computer to understand the basics?
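
A minimal sketch of the kind of implication network being described, with hypothetical rule names, and the way a counterexample breaks the chain:

```python
# Hypothetical 'network of implications' for temporal common sense.
# The chain derives conclusions mechanically, which is exactly the
# problem: not everything past is recorded, but the rules insist.

RULES = {
    "past": ["finished"],
    "finished": ["unchangeable"],
    "unchangeable": ["recorded"],   # the dubious link
    "recorded": ["in_memory"],
}

def infer(fact):
    """Follow implication links transitively from a starting fact."""
    derived, frontier = set(), [fact]
    while frontier:
        for g in RULES.get(frontier.pop(), []):
            if g not in derived:
                derived.add(g)
                frontier.append(g)
    return derived

print(infer("past"))
# Derives finished, unchangeable, recorded, in_memory: the network
# happily concludes that every past event sits in memory - false.
```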

You might think you can program intelligent behaviour and decision-making by programming prioritised goals, from which subgoals are derived and conflicts resolved. And to some extent you can.

But how could you get a computer to understand the futurity of a goal? It's not as if all goals are futural; there are goals that are past, goals I don't have any more, though I did from the perspective of the time. That may seem obvious to me and you, but it's only obvious because we already understand what it is to have a perspective.
 
Demosthenes said:
Well, I'm not sure you're right. But in any case, I'm not sure that a structure of temporality can be defined; maybe it can only be experienced, or only defined if experienced.

If it can be given a structure.. then it can be defined. Surely?

Demosthenes said:
The kind of thing I'm getting at is the kind of thing knotted was getting at when he talked about the role of consciousness maybe being that it shows you that a problem's a problem, because it matters whether you can solve it or not.

The kind of thing I'm thinking about is: sure, you could give a computer networks of implications, like past implies finished, implies can't be changed, implies recorded, implies in memory. But of course, not all things in the past are recorded, or memorised, though the other implications might still hold.

You might try: future implies not yet, implies can be changed, implies it matters. But it might not matter, even though it's in the future. How do you get the computer to understand the basics?

Was, is and will be. Fuzzy logic, value-weighted protocols and priority settings. The effects of tiredness and randomness.

Enough of these sub-routines and you could produce a reasonable facsimile of the function of the brain, if not the brain itself.
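
As a toy illustration of the fuzzy-logic part (the 24-hour scale and the word 'recent' are hypothetical choices, not anything proposed above):

```python
# Hypothetical fuzzy membership function: 'recent' as a matter of
# degree rather than a hard past/present cut-off.

def recent(hours_ago):
    """Degree (0..1) to which an event counts as recent."""
    if hours_ago <= 0:
        return 1.0
    return max(0.0, 1.0 - hours_ago / 24.0)  # arbitrary 24h scale

for h in (1, 6, 12, 30):
    print(h, round(recent(h), 2))  # 0.96, 0.75, 0.5, 0.0 - graded
```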

Demosthenes said:
You might think you can program intelligent behaviour and decision-making by programming prioritised goals, from which subgoals are derived and conflicts resolved. And to some extent you can.

But how could you get a computer to understand the futurity of a goal? It's not as if all goals are futural; there are goals that are past, goals I don't have any more, though I did from the perspective of the time. That may seem obvious to me and you, but it's only obvious because we already understand what it is to have a perspective.

As I said, it's all about states, and using states as modifiers to other routines.

But of course, just because that's how you'd do it computationally.. doesn't mean that's how the brain does it.

I'm rather taken with the new theory of the brain's operation being largely based in electromagnetism, with the subtle interaction of electromagnetic fields upon tissue providing at least 'some' of the bio-feedback loops.

Almost like short circuits. :)

Out of interest.. do you think we learn perspective?
 
Kizmet said:
Out of interest.. do you think we learn perspective?

Tbh, I don't know. I'm kind of convinced that baby consciousness and pre-birth consciousness is quite different from child and adult consciousness. (Lack of memory before a certain age seems to point to that as well, though that's not my reason for being kind of convinced.)

It's as if the embodiment process creates the ego, and then the embodiment gradually comes to dominate the pure consciousness, so that in a way we forget who we are, and come to take the ego as the true self.
(I certainly do, usually.)

I've been quite struck by how, as my daughter's grown up, she's become less and less cosmic; only it seemed to start with her playing at being a little girl, and then gradually taking the role-playing as being the reality.

The electromagnetic waves thing, yes, I think I kind of agree, as I suspect did Popper and Eccles, and I'm fairly sure Libet has been arguing this in his last book.

It does strike me as bizarre when people say that it's totally unreasonable to think that subatomic phenomena have anything to do with understanding the brain. As far as I know, where you have no nerves you have no consciousness, and the way in which nerves and neurons transmit signals is by electrical impulses, isn't it? And understanding electrical impulses is impossible without recourse to subatomic physics, afaik.

Also, take even a very simple three-layer artificial neural network: if you want to understand it, apparently the most effective mathematical way of understanding how it works is to see it as creating a multi-dimensional space.

The brain, therefore, must create a vastly multidimensional state space, and when you look at people trying to describe the maths of such systems, it looks not unlike the kind of maths that people bring to trying to understand the behaviour of some subatomic particles and systems.
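
That 'multi-dimensional space' reading can be made concrete: the hidden layer of a three-layer network maps each input to a point in an n-dimensional space. A minimal numpy sketch, with random untrained weights, purely to show the geometry:

```python
# A 3-layer network viewed as a mapping into a hidden 'state space'.
# Weights are random and untrained; this only illustrates the geometry.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 10))   # input (4-d) -> hidden (10-d)
W2 = rng.normal(size=(10, 2))   # hidden (10-d) -> output (2-d)

x = np.array([0.5, -1.0, 0.3, 0.9])
hidden = np.tanh(x @ W1)        # a point in 10-dimensional space
output = hidden @ W2

print(hidden.shape, output.shape)  # (10,) (2,)
# Each input lands somewhere in the 10-d hidden space; learning
# amounts to reshaping how inputs cluster in that space.
```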
 
Demosthenes said:
Yes, but you haven't read any of them. If you had, you'd find out just how difficult the problem is.
I would say that I've read a good few more than you have! The problem of getting a perfect mapping from natural language expressions to formal temporal models is complex but not impossible, and people don't write papers about it any more because it's now too trivial a problem, not because it's impossible.

Not only do we have all sorts of ways to represent time, we've even got a formal logic system to do it: http://en.wikipedia.org/wiki/Temporal_logic

It's just so totally and utterly wrong to claim that you can't give computers perfectly good temporal models which allow them to distinguish between before and after and so on.
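
For a flavour of what such a formal system gives you, here is a minimal sketch of two linear temporal logic (LTL) operators evaluated over a finite trace; real model checkers handle infinite traces, so this is only the core idea:

```python
# Two LTL operators over a finite trace of states: F ('eventually')
# and G ('always'). A toy version of the linked formalism.

def eventually(p, trace):
    """F p: the property holds at some point in the trace."""
    return any(p(s) for s in trace)

def always(p, trace):
    """G p: the property holds at every point in the trace."""
    return all(p(s) for s in trace)

trace = [{"door": "closed"}, {"door": "open"}, {"door": "closed"}]
print(eventually(lambda s: s["door"] == "open", trace))  # True
print(always(lambda s: s["door"] == "closed", trace))    # False
```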
 
Well, sorry, I just got the impression that you hadn't read any of them because I couldn't see why you'd have had to do a search to find out they existed, if you'd already read them.

I think you're quite mistaken, and that you don't know much about that of which you are speaking, myself.

Of course, in twenty years, perhaps I'll be proved wrong.

But it always seems to be twenty years away. One AI problem is solved after another, there's no denying it, but the hard, mysterious kernel never seems to get any less hard, nor any less mysterious.

Aristotle's temporal logic fails to do justice to human use of temporal language, or to human experience of temporality.
 
Demosthenes said:
Well, sorry, I just got the impression that you hadn't read any of them because I couldn't see why you'd have had to do a search to find out they existed, if you'd already read them.
I thought that your point about computers being incapable of grasping temporal semantics might be undermined somewhat by me alerting you to the presence of hundreds of scientific papers showing exactly how you can do it.

Demosthenes said:
I think you're quite mistaken, and that you don't know much about that of which you are speaking, myself.
I'm afraid that's another mark against your understanding of the subject matter. You can't even spot somebody who knows what he's talking about :D
 
Fruitloop said:
I don't understand why you would want computers to represent temporal ontologies to be honest. :confused:
We want computers to understand everything - including all of the temporal concepts and relationships used by people. We build ontologies - formal, structured models of the relationships between entities, classes and instances in a domain - because this allows a computer to map our words onto its internal formal model of time.
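
A toy illustration of that mapping (the vocabulary and structure here are hypothetical; real temporal ontologies such as W3C's OWL-Time are far richer):

```python
# Toy ontology mapping English temporal words onto a formal model
# of intervals. Illustrative only.

from collections import namedtuple

Interval = namedtuple("Interval", ["start", "end"])

TEMPORAL_ONTOLOGY = {
    "before": lambda a, b: a.end <= b.start,
    "after":  lambda a, b: a.start >= b.end,
    "during": lambda a, b: b.start <= a.start and a.end <= b.end,
}

def relate(word, a, b):
    """Resolve a temporal word against the formal model."""
    return TEMPORAL_ONTOLOGY[word](a, b)

lunch, afternoon = Interval(12, 13), Interval(12, 18)
print(relate("during", lunch, afternoon))  # True
print(relate("before", lunch, afternoon))  # False
```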
 
Demosthenes said:
Tbh, I don't know. I'm kind of convinced that baby consciousness and pre-birth consciousness is quite different from child and adult consciousness. (Lack of memory before a certain age seems to point to that as well, though that's not my reason for being kind of convinced.)

It's as if the embodiment process creates the ego, and then the embodiment gradually comes to dominate the pure consciousness, so that in a way we forget who we are, and come to take the ego as the true self.
(I certainly do, usually.)

I've been quite struck by how, as my daughter's grown up, she's become less and less cosmic; only it seemed to start with her playing at being a little girl, and then gradually taking the role-playing as being the reality.

littlebabyjesus said something similar earlier about consciousness being on a scale.

I can kind of see that.. but it's also a state, isn't it?

One either is or is not a conscious being.

Whether you are specifically conscious at the moment of asking isn't relevant.. it's the potentiality of consciousness that denotes which you are.
In terms of growth.. of course a creature can have varying levels of consciousness.. but rather than these being seen as strata, I think they are better seen as varying levels of abilities.

Granted your daughter has seemingly become more conscious as she has grown.. but isn't it also fair to say that when she was very little, and received the maximum input and did the most processing, she was also highly conscious?
 
gurrier said:
I thought that your point about computers being incapable of grasping temporal semantics might be undermined somewhat by me alerting you to the presence of hundreds of scientific papers showing exactly how you can do it.


I'm afraid that's another mark against your understanding of the subject matter. You can't even spot somebody who knows what he's talking about :D

I can spot someone who says he knows what he's talking about, and maybe even believes it.

Nonetheless, you are mistaken. Those papers do not give an adequate computational account of human understanding. They just attempt to.

But no formalisation will ever catch the effect of context.

If someone comes up to me and says, 'What's on your mind?', and I reply, 'I'm writing a book', or whatever, he doesn't say, 'No you're not, you're talking to me.'

But any formalisation of the meaning of the present tense would have to rule what I said as untrue. Or if it didn't, then it would fail to catch the basic notion of the present tense as what's happening now. So there are already two notions of the present, and how can you decide which one is right? Only by human intelligence, context and understanding of human temporality.

Which you might try to approximate in a computer, but it's fairly obvious that it wouldn't get it, unless it already had the kind of existence that a human does.

It's so absurd, what you're saying really.

Can you not see the tremendous difficulty of trying to make some competitor for the Turing test that could have a decent crack at answering the questions 'Do you exist?' and 'What do you think happens to you when you die?' (purely by virtue of getting the meaning of the sentence, thinking about it and constructing a reply - no pre-programmed responses)?

ffs. Not that I deny that the brain is a kind of computer btw.

eta: If the problem's trivial and that's why no one bothers to write papers on it any more, it seems strange that there are hundreds of papers on the subject; you'd have thought one would suffice for a trivial problem.
 
Kizmet said:
littlebabyjesus said something similar earlier about consciousness being on a scale.

I can kind of see that.. but it's also a state, isn't it?

One either is or is not a conscious being.

Whether you are specifically conscious at the moment of asking isn't relevant.. it's the potentiality of consciousness that denotes which you are.
In terms of growth.. of course a creature can have varying levels of consciousness.. but rather than these being seen as strata, I think they are better seen as varying levels of abilities.

Granted your daughter has seemingly become more conscious as she has grown.. but isn't it also fair to say that when she was very little, and received the maximum input and did the most processing, she was also highly conscious?

God, I seem to keep being misunderstood at the mo. I don't think she's become more conscious; I'd almost be inclined to say she's become less conscious, but I wouldn't want to say that either. It's just that the quality of her consciousness has changed, I guess. To be sure, she was highly conscious when she was tiny; you could feel it. But, on the other hand, in a sense, she was more than herself.

eta: But yes, what you said at the beginning of your post sounds about right to me. But it's undeniable that the functional form in which consciousness is embodied affects the quality of the consciousness. My guess is that when the brain is very immature, there's a lot of work in training it to be wired up right.

It's worth noting that although training a simple artificial neural network by means of an external teacher, with a teacher signal, an error signal and back-propagation, results in networks achieving human-like competence, often in strikingly similar ways to the pattern of human development in that domain, it's also the case that no one has much clue how the brain could implement a teacher signal, an error signal and back-propagation; indeed, it's widely admitted that back-propagation is neuronally implausible.
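
For reference, the teacher-signal / error-signal / back-propagation loop being described looks roughly like this minimal numpy sketch. Note the W2.T in the backward pass: the error travels back through the transpose of the forward weights (the 'weight transport' problem), which is one standard reason back-propagation is called neuronally implausible:

```python
# Minimal back-propagation loop for one hidden layer.

import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 2))
x = rng.normal(size=3)
teacher = np.array([1.0, 0.0])         # the external teacher signal
lr = 0.1

for _ in range(200):
    h = np.tanh(x @ W1)                # forward pass
    y = h @ W2
    err = y - teacher                  # error signal at the output
    err_h = (err @ W2.T) * (1 - h**2)  # error pushed backwards: needs W2.T
    W2 -= lr * np.outer(h, err)        # weight updates
    W1 -= lr * np.outer(x, err_h)

print(np.round(np.tanh(x @ W1) @ W2, 3))  # now close to the teacher signal
```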

But isn't it striking that, when you try to learn something new, you really have to use your consciousness to focus on teaching yourself to do the task?
 
Demosthenes said:
God, I seem to keep being misunderstood at the mo. I don't think she's become more conscious; I'd almost be inclined to say she's become less conscious, but I wouldn't want to say that either. It's just that the quality of her consciousness has changed, I guess. To be sure, she was highly conscious when she was tiny; you could feel it. But, on the other hand, in a sense, she was more than herself.

The mode of consciousness, I feel, can change. This probably appears as 'levels'.. but it's not.. it's more of a 'quality' issue, as you say.

It's a state... but once in that state there are many modes.

What makes the human 'mode' of consciousness so different to others on this planet, eh?

Answer me that one!
 
Kizmet said:
The mode of consciousness, I feel, can change. This probably appears as 'levels'.. but it's not.. it's more of a 'quality' issue, as you say.

It's a state... but once in that state there are many modes.

What makes the human 'mode' of consciousness so different to others on this planet, eh?

Answer me that one!

The exact nature of the information-processing system in which universal consciousness is embodied.

(see the edit above)

Possibly where you say 'mode', that's kind of what I meant by 'quality'; I didn't mean quality in an evaluative sense.
 
I imagine we did by selecting our mates on the basis of who we found most attractive (a highly subjective matter).

And maybe the universal consciousness sometimes caused mutations that set us off in the right direction.
 
Demosthenes said:
I imagine we did by selecting our mates on the basis of who we found most attractive (a highly subjective matter).

And maybe by causing our children to have beneficial mutations.

Did you catch my earlier post with all the plaintive questions in it?
 
Demosthenes said:
I can spot someone who says he knows what he's talking about, and maybe even believes it.

Nonetheless, you are mistaken. Those papers do not give an adequate computational account of human understanding. They just attempt to.

I think you have just moved the goalposts a teensie wee bit. In fact you've moved pitches and started playing rugby. You are confusing the attribute of having a semantically accurate temporal model with the ability to understand natural language. Just because you have an accurate semantic time model doesn't mean that you can map all statements in natural language accurately onto your semantic model; even humans can't do that, as natural language is ambiguous.

Demosthenes said:
But no formalisation will ever catch the effect of context.
That's incorrect in principle, though it is currently true in the general case and will remain true for a generation at least; it has, however, nothing at all to do with your original claim about semantic temporal models. In many cases, you can come up with formal contextual models which do capture the significant variables and do effectively capture context. It is just that human brains - without requiring input from consciousness, incidentally - have wondrous capacities for marshalling vast quantities of different types of data into their language comprehension and general cognition.

Demosthenes said:
If someone comes up to me and says, 'What's on your mind?', and I reply, 'I'm writing a book', or whatever, he doesn't say, 'No you're not, you're talking to me.'

But any formalisation of the meaning of the present tense would have to rule what I said as untrue. Or if it didn't, then it would fail to catch the basic notion of the present tense as what's happening now. So there are already two notions of the present, and how can you decide which one is right? Only by human intelligence, context and understanding of human temporality.

First of all, it wouldn't take much intelligence to programme a computer to understand such a sentence; it would just need an ontology which told it that the present continuous can express activities which are ongoing over a range of time. If it knew how long a book takes to write in general, it would not only be able to recognise the ambiguity, but would correctly infer what you had meant. It's a pretty trivial problem, actually; if you were being sarcastic, it would require a whole heap more contextual complexity.
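
A toy version of that inference, with hypothetical typical durations standing in for the ontology's knowledge:

```python
# Toy disambiguation of the present continuous: 'I'm writing a book'
# is read as ongoing-over-months, not happening-this-instant.
# Durations and names are hypothetical illustrations.

TYPICAL_DURATION_DAYS = {
    "writing a book": 365,
    "blinking": 0.00001,
}

def interpret_present_continuous(activity):
    """Pick a reading based on how long the activity typically takes."""
    days = TYPICAL_DURATION_DAYS.get(activity, 0)
    if days >= 1:
        return f"'{activity}': ongoing activity, spanning ~{days} days"
    return f"'{activity}': happening at this instant"

print(interpret_present_continuous("writing a book"))
print(interpret_present_continuous("blinking"))
```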

However, the important point is that this has nothing to do with the computer's semantic temporal model. It's a simple natural language processing problem and we know that we won't solve that particular problem for the general case until we really can replicate brains. The problem is, as you say, context. Every piece of language understanding may require access to any arbitrary piece of memory or sensory stimulus. This data must be interpreted in a specific and highly structured way in order to use it to understand the phrase. We can increasingly build systems which have fantastically elaborate semantic models - within a few years virtually all known concepts will have been formally structured into ontologies. However, we are at least a generation away from having architectures and algorithms capable of emulating the brain's contextual modelling and recall abilities.
 