
If a computer were powerful enough, would it generate consciousness?

Interesting thread :)

I agree with just about everything axon has said here. Bugger :p

There are a lot of huge assumptions about consciousness that people seem to make automatically: that we need to understand how consciousness works before we can replicate it, that replicating a brain with non-biological computery parts will miss out whatever it is that generates consciousness, that a computer convincingly behaving as though it were conscious wouldn't in fact be conscious...

We know that our consciousness is an amazing and fiendishly complex thing because we experience it, but I think that experience gets in the way and makes people jump to unnecessarily complicated conclusions. We look at the lump of meat in our heads that produces this thing that doesn't seem anything like meat and, without any concession to dualism, it seems that neurons must be doing something Almost Unimaginably Complicated. And there are so many neurons doing it, connected in so many different ways, that it's easy to think that no amount of tracking their interactions will reveal a bigger picture. (In fact I believe the bigger picture is that, like the spoon, there is no bigger picture.)

I don't think there are many creationists on these here boards :D so we can probably all agree that our brains evolved. There was nothing guiding brains toward consciousness; nothing that needed prior knowledge of what consciousness is and how it works; it 'just' happened when all the right bits were selected for. So how do you replicate that? I know it sounds trivial at first to answer that you do it by replicating all the bits, but bear with me.

How do we know what needs replicating and what doesn't? You pick somewhere and see. A good place to start would be with the functional interactions of the component parts: neurons, glia (support cells), any parts that are connected and send signals to other parts. This can be done at a reasonable level of abstraction. The materials that make up a car engine are subject to quantum interactions, but at the macro level of space and time it is not useful to consider those interactions when building an engine that works.

As axon described, neurons integrate a (sometimes huge) number of inputs that change the probability of sending a signal onward. It is possible to replicate this action, in software, for a group of artificial neurons. I think this has also been done in hardware. Given that this is possible, it is then 'simply' a question of finding out what is connected to what and how. No small task, but not one that requires some qualitatively different understanding of what neurons do apart from integrate, modulate and fire.
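
(To make that concrete, here's a minimal sketch in Python of the kind of artificial neuron I mean - everything here, from the class name to the leak and threshold numbers, is invented for illustration rather than taken from any real simulator.)

```python
import random

# A toy "integrate, modulate and fire" neuron of the kind described above.
class ToyNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # accumulated input ("membrane potential")
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step

    def step(self, inputs, weights):
        # Integrate: decay the old potential and add the weighted inputs.
        self.potential = self.potential * self.leak + sum(
            i * w for i, w in zip(inputs, weights)
        )
        # Modulate: the nearer the threshold, the likelier we are to fire.
        p_fire = min(1.0, max(0.0, self.potential / self.threshold))
        if random.random() < p_fire:
            self.potential = 0.0  # fire and reset
            return 1              # spike sent onward
        return 0

# Wire three neurons to each other and watch the spikes propagate.
neurons = [ToyNeuron() for _ in range(3)]
signals = [1, 0, 1]  # initial stimulus
for _ in range(5):
    signals = [n.step(signals, [0.5, 0.3, 0.4]) for n in neurons]
    print(signals)
```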

Create a functional replication of a conscious brain and you will have created a conscious brain.

If the question was about whether a PC-type computer - one that can do clever stuff, but in a different way to the brain - will attain consciousness once it has 'enough' processing power, then I think the answer is no.
 
Mation said:
The materials that make up a car engine are subject to quantum interactions, but at the macro level of space and time it is not useful to consider those interactions when building an engine that works.

<Gives Mation a big hug>
 
I found this article - 'the mystery of consciousness' - really interesting; I just coincidentally stumbled across it this morning. It's very relevant to what we are talking about.

Article 1

EDIT: Oh and this one too:

Article 2
 
Mation said:
Interesting thread :)

I agree with just about everything axon has said here. Bugger :p

There are a lot of huge assumptions about consciousness that people seem to make automatically: that we need to understand how consciousness works before we can replicate it, that replicating a brain with non-biological computery parts will miss out whatever it is that generates consciousness, that a computer convincingly behaving as though it were conscious wouldn't in fact be conscious...

We know that our consciousness is an amazing and fiendishly complex thing because we experience it, but I think that experience gets in the way and makes people jump to unnecessarily complicated conclusions. We look at the lump of meat in our heads that produces this thing that doesn't seem anything like meat and, without any concession to dualism, it seems that neurons must be doing something Almost Unimaginably Complicated. And there are so many neurons doing it, connected in so many different ways, that it's easy to think that no amount of tracking their interactions will reveal a bigger picture. (In fact I believe the bigger picture is that, like the spoon, there is no bigger picture.)

I don't think there are many creationists on these here boards :D so we can probably all agree that our brains evolved. There was nothing guiding brains toward consciousness; nothing that needed prior knowledge of what consciousness is and how it works; it 'just' happened when all the right bits were selected for. So how do you replicate that? I know it sounds trivial at first to answer that you do it by replicating all the bits, but bear with me.

How do we know what needs replicating and what doesn't? You pick somewhere and see. A good place to start would be with the functional interactions of the component parts: neurons, glia (support cells), any parts that are connected and send signals to other parts. This can be done at a reasonable level of abstraction. The materials that make up a car engine are subject to quantum interactions, but at the macro level of space and time it is not useful to consider those interactions when building an engine that works.

As axon described, neurons integrate a (sometimes huge) number of inputs that change the probability of sending a signal onward. It is possible to replicate this action, in software, for a group of artificial neurons. I think this has also been done in hardware. Given that this is possible, it is then 'simply' a question of finding out what is connected to what and how. No small task, but not one that requires some qualitatively different understanding of what neurons do apart from integrate, modulate and fire.

Create a functional replication of a conscious brain and you will have created a conscious brain.

If the question was about whether a PC-type computer - one that can do clever stuff, but in a different way to the brain - will attain consciousness once it has 'enough' processing power, then I think the answer is no.

Thank you :D
 
I think one of the problems here is that people are abstracting consciousness from the world it inhabits. Consciousness is intentional and directed towards an object (or objects). It springs forth in the world, and is human in that it is social and is embodied with human purpose and endeavours - we become conscious of something when it impacts upon our projects, so to speak. Replication of the brain may be part of the process of creating consciousness - in the same way as our brains are already, in functional terms, replications of the same biological design - but this potential, the potential to think, reflect and find meaning (i.e. the characteristics of human consciousness), could not be set in motion abstracted from a social world and the interpersonal developmental context in which it flourishes and grows. I think that we are only focusing on half of the problem here, and simplifying the matter through the mind-as-a-computer analogy. Moreover, as I said, giving a spectatorial, third-person account of the science of consciousness says nothing of the phenomenological experience of being a conscious entity in the world - here science has little to say.
 
Something to consider is the top-down vs. bottom-up approach.

The OP is mainly concerned with a very brute-force, bottom-up method for developing consciousness: make a big enough neural net, feed it enough stimuli, and eventually "something" will emerge.

A.I. can also take a top-down approach: implement high-level planning systems (also known as "expert systems") that can solve quite abstract problems. One example of this is SOAR ("a general cognitive architecture for developing systems that exhibit intelligent behavior"); there are other similar ideas out there too.
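
(For flavour, here's a minimal toy of the top-down style in Python - a hand-written production system where if-then rules fire against a working memory. This isn't SOAR's actual architecture; all the rules and names are invented for illustration.)

```python
# Toy production system: condition-action rules fire against working memory.
# First matching rule wins - a crude form of conflict resolution.

rules = [
    (lambda m: "hungry" in m and "has_food" in m, ["eat", "remove:hungry"]),
    (lambda m: "hungry" in m, ["find_food", "add:has_food"]),
    (lambda m: "tired" in m, ["sleep", "remove:tired"]),
]

def run(memory, steps=5):
    for _ in range(steps):
        for condition, actions in rules:
            if condition(memory):
                for act in actions:
                    if act.startswith("add:"):
                        memory.add(act[4:])       # assert a new fact
                    elif act.startswith("remove:"):
                        memory.discard(act[7:])   # retract a fact
                    else:
                        print("action:", act)     # perform an action
                break  # only the first matching rule fires each cycle
        else:
            break  # no rule matched; nothing left to do

run({"hungry", "tired"})
```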

I think you hit the "isn't it just a simulation though?" problem more with top-down approaches, because you're programming in the capability for complex, abstract problem-solving off the bat - however, I don't really think it matters either way, or whether you're using silicon or organic matter: it's the end result that's important. Very little of our intelligence is built in or randomly generated by our brains - it's almost all learned, or "socially programmed" if you like, during our formative years. So how is this different to writing software? It's no different at all.

To summarise, my own conclusions on this subject are that it's not particularly hard to generate or build a conscious system (I don't really think scale is an issue - we have lots of neurones and connections, but also an absolute crap ton of wasted or spare capacity), but that we're still not entirely sure what constitutes consciousness. There's ongoing research a-plenty in this field, such as recent work which, if I recall correctly, found that the 'flow of consciousness' is an illusion, but it's still a very big question mark.

Destination Void by Frank Herbert is a great sci fi book on this subject too :)
 
There's an article on this over at Technology Review ...
Artificial intelligence has been obsessed with several questions from the start: Can we build a mind out of software? If not, why not? If so, what kind of mind are we talking about? A conscious mind? Or an unconscious intelligence that seems to think but experiences nothing and has no inner mental life? These questions are central to our view of computers and how far they can go, of computation and its ultimate meaning - and of the mind and how it works.

They are deep questions with practical implications. ...
Follow the link for more :)
 
Thanks for the link.

That's an interesting description of a possible origin for creativity. Essentially, creativity requires emotion to make links between ideas, and Gelernter lays out very clearly the difficulty of programming emotion into a computer. What he's saying, in effect, is that artificial intelligence cannot experience nostalgia.

However, there are holes in his argument against Daniel Dennett, and his analogy with water and wetness is weak.

He correctly points out that whereas a large number of neurons can produce consciousness, no number of yeast cells could, because yeast cells don't do the right kind of thing. Neurons have a facility for making connections - they can connect with thousands of other neurons and produce a vast network, bringing about the complexity needed for consciousness. Yeast cells cannot do this, but there is a failure of logic in using this as evidence that no other kind of thing can either.

Certainly, the water analogy doesn't strengthen his argument in the way he thinks it does. You can't produce wetness with just any group of molecules, but the H2O molecule isn't the only one that produces wetness. He then says that there is no reason to believe that low-level computer instructions could produce consciousness. But if they can be connective enough, I would have thought that there would be very good reason to believe that they could. He further questions the wisdom of thinking that digital technology is the best tool to investigate for producing consciousness. But a question: is complexity needed to produce consciousness? If yes, and if neurons operate essentially in a digital manner, then I would think that reason enough to use computers for the task. What else would you use?
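
(On neurons operating 'essentially in a digital manner': the classic toy model is the McCulloch-Pitts threshold unit, which treats a neuron as a digital device. A minimal sketch in Python, with the weights and thresholds invented for illustration:)

```python
# A McCulloch-Pitts-style threshold unit: output is 1 if and only if the
# weighted sum of inputs reaches the threshold - a neuron as a digital device.

def threshold_unit(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights, a single unit computes simple logic gates:
AND = lambda a, b: threshold_unit([a, b], [1, 1], 2)
OR  = lambda a, b: threshold_unit([a, b], [1, 1], 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```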
 
littlebabyjesus said:
More likely still, I think, those who actually create artificial intelligence will treat consciousness as an irrelevant illusion and won't worry about it in the slightest as they design intelligent machines.
How depressing :( - Hoorah, we have an army of intelligent robots to do all the farming and cleaning and mining! (Yes, they're in a state of constant psychological terror and intense pain, but don't worry, we found out how to stop it getting in the way.)
 
Mmmm, constant psychological terror and intense pain. Actually sounds a bit like being human. We can stop it getting in the way with chocolate ice cream/EastEnders/iPods/U75*



* Delete as appropriate
 
Crispy said:
How depressing :( - Hoorah, we have an army of intelligent robots to do all the farming and cleaning and mining! (Yes, they're in a state of constant psychological terror and intense pain, but don't worry, we found out how to stop it getting in the way.)
That wasn't quite what I meant. I was thinking the other way round - that those who develop AI may do well to ignore the question of consciousness altogether, that it isn't a fruitful way of thinking.

There then arises, after intelligent robots have been created, the tricky question of whether or not they are conscious, and how exactly we can judge. Robot rights could indeed become a contentious issue.
 
I did see what you meant, I was just being silly :)

Robot rights will be hugely contentious. Do they get time off? The vote? Right to assemble? Stand for government? Crumbs!
 
One of the most convincing - and disconcerting - 'proofs' that consciousness is best treated as an after-the-fact illusion is that our awareness of a decision we have made to act often only occurs after we have begun to act. This has been demonstrated experimentally, and it shakes our commonsense ideas about ourselves to their core.

ETA: busy at work, but when I get a minute, I'll give you a reference.

You're referring to the Libet et al. experiments, I guess - the interpretation of which is not straightforward, and indeed quite controversial. A book I've got includes a discussion of Libet's work by Edoardo Bisiach, a cognitive neuropsychologist who specialises in studies of spatial neglect and its significance for cognitive accounts of consciousness. In his discussion, he says that he himself isn't clear what we should conclude from Libet's experiments - nor was Libet - and that Eccles and Popper (1977) used Libet's experiments as evidence in favour of a dualist metaphysics.
 
Crispy said:
Robot rights will be hugely contentious. Do they get time off? The vote? Right to assemble? Stand for government? Crumbs!

I think it would be wrong of us to feed robots crumbs. Nuts and bolts maybe, but definitely not crumbs. It would be far too demeaning.
 
Crispy said:
I did see what you meant, I was just being silly :)

Robot rights will be hugely contentious. Do they get time off? The vote? Right to assemble? Stand for government? Crumbs!

They will be able to run for government, let alone vote.
Robot Richard Nixon will win the office of Earth President. It's true, it was on Futurama.

Would a computer be better at running the planet than elected officials?
According to one sci-fi novel, no - and the computer in question engineered an underground human rebellion against itself because it came to this conclusion too.
 
Demosthenes said:
You're referring to the Libet et al. experiments, I guess - the interpretation of which is not straightforward, and indeed quite controversial. A book I've got includes a discussion of Libet's work by Edoardo Bisiach, a cognitive neuropsychologist who specialises in studies of spatial neglect and its significance for cognitive accounts of consciousness. In his discussion, he says that he himself isn't clear what we should conclude from Libet's experiments - nor was Libet - and that Eccles and Popper (1977) used Libet's experiments as evidence in favour of a dualist metaphysics.
Yes, I was thinking of Libet. :)

I think it is indeed wise to be very cautious about drawing conclusions from such a result.

I don't know Eccles and Popper (1977) - how do they deal with the 'And then a miracle happens' moment as intention passes from one metaphysical realm to the other?
 
littlebabyjesus said:
Yes, I was thinking of Libet. :)

I think it is indeed wise to be very cautious about drawing conclusions from such a result.

I don't know Eccles and Popper (1977) - how do they deal with the 'And then a miracle happens' moment as intention passes from one metaphysical realm to the other?

Tbh, I don't know, as I haven't read their book, and the very cursory mention of it by Bisiach didn't give me much of a clue.

I think the argument may have been something along these lines: Libet et al.'s results are so bizarre that they ought to be impossible according to a physicalist account of mind - apparently Libet said something of the sort himself, without committing himself to any other account - and Eccles and Popper may have been suggesting that the best explanation of the results is a non-temporal, non-physical mind causing physical events by reaching backwards in time.
 
There is an alternative explanation of quantum effects to the Copenhagen Interpretation, which explains the paradox of Schrödinger's cat not by the 'spooky action at a distance' that Einstein scorned, but by the movement of photons back in time. It explains the effect as well as the CI does, and is really no more weird.

I'm very suspicious of those who leap on difficult scientific problems and assert metaphysical explanations.

Time is a concept we understand very poorly. It's hard. Like many things, we don't have a deep understanding of it. Our explanations are only really 'analogies that work' in any case - it's a mistake to think we are directly describing reality. A little humility is required sometimes.
 
littlebabyjesus said:
There is an alternative explanation of quantum effects to the Copenhagen Interpretation, which explains the paradox of Schrödinger's cat not by the 'spooky action at a distance' that Einstein scorned, but by the movement of photons back in time. It explains the effect as well as the CI does, and is really no more weird.

I'm very suspicious of those who leap on difficult scientific problems and assert metaphysical explanations.

Time is a concept we understand very poorly. It's hard. Like many things, we don't have a deep understanding of it. Our explanations are only really 'analogies that work' in any case - it's a mistake to think we are directly describing reality. A little humility is required sometimes.

I don't really understand what you're saying here.

Can you explain what the Copenhagen interpretation is, and how it is said to explain the Schrödinger's cat paradox?

How is "action at a distance" relevant to the paradox, and how is the movement of photons backwards in time potentially a better explanation?

About "being suspicious of people who leap on difficult scientific problems and assert metaphysical explanations" - what do you mean?

Do you mean that scientific findings are just irrelevant to metaphysical questions, or what?
 
This is a scientific paper I'm going to read when I've got the mental energy to get my head round it.

http://www.ucl.ac.uk/~uctytho/libet1.htm

It appears at the moment to be a paper written by a conventional scientist to refute both Libet's and Popper and Eccles' claims that Libet's findings cast doubt on mind-brain identity theory.

But I'm referencing it here just to demonstrate the surprising fact that Libet thought his findings cast doubt on the identity of mind and brain, in complete contradiction to the conventional wisdom.

ETA: oh, and here's Libet's response to Honderich's paper, which looks equally incomprehensible at first sight.

http://www.ucl.ac.uk/~uctytho/Libethimself.html
 
Well, in fact, having read them now, they're easier to understand than they looked at first.

This paper sets them in context.

http://www.consciousentities.com/libet.htm

The story is: before Libet did his experiments on decision-making - which are the ones lbj was talking about, and the ones discussed on Wikipedia - he did some experiments on perception, using both skin stimulation and cortical stimulation.

He established that it takes 500 ms of neural activity to reach 'neuronal adequacy' - the point at which someone can report experiencing something. But paradoxically (see Honderich's 2.1), if the stimulus lasted long enough to produce a conscious percept, subjects would report that they experienced it pretty much when it happened - well before the pattern of neural activation reached that previously established threshold. If the stimulus was too short-lived to produce neuronal adequacy, then of course people wouldn't notice it at all.

That should sound paradoxical, because it is. Libet's theory was that subjects actually experience the stimulus about 500 ms later, when they reach neuronal adequacy, but then retrospectively antedate it to when it happened.

Eccles and Popper seem to have gone for what Honderich describes as the second hypothesis: subjects really do experience the stimulus before the neurons are sufficiently activated for them to do so, and they are able to do this thanks to an immaterial mind that plays tricks with time (or creates it?).

That's about the best I can do at explaining it. The way it looks to me is that Libet's theory of delayed experience and retrospective antedating is a fudge - but a fudge he had to make in order to retain mainstream academic credibility as an experimental psychologist, which is perhaps why it was left to Eccles and Popper to propose the alternative theory, with its anti-materialist metaphysical claim - a theory which, it seems to me, accounts for the experimental data much more elegantly, if you don't rule it out automatically because you already know what is and isn't scientifically possible.

It does seem overall like an interesting case of how worldviews shape the presentation of scientific data and the account given of it. Libet's experiments on decision-making are better known than his experiments on perception, and are widely held to prove that free will is an illusion.

If you imagine an alternate reality in which the mainstream consensus was that we are spiritual beings, and in which this view informed and motivated scientists, then it seems quite possible that the propaganda account of Libet would be: the man whose experiments on perception proved the existence of the non-temporal, immaterial soul.
 
Sometimes I notice I have a tune going around in my head, and I wonder how long it's been going on for. And then I wonder whether it really has been going on for that long, or have I just constructed the artificial memory of it having been going on for a while. And sometimes I just drink beer and zone out for a bit.
 
Awareness of the decision you have made is one thing; being aware of that awareness is another - which might well be thought to require some further time to develop.
This would appear to me to be a major flaw in Libet's work, one which renders any concrete conclusions impossible.
You now have the chain: decision > awareness of the decision > awareness of the awareness.

In fact, those who claim Libet's findings have consequences for free will seem to have missed the point somehow - as in the example of the tennis player. A pianist would be another good example - an improvising jazz musician, I would contend, is expressing her free will as directly as it is possible to do so. Free will unmediated by consciousness is not a contradiction at all. We do it all the time. I'm doing it now as I type these words. The idea is formulated and typed out. The words appear on the screen, and my awareness of them appears as they appear on the screen, not before. Just as our awareness of the words we speak in a conversation comes at the same time as we say them. 'Did I just say that?' we think sometimes. 'I surprise myself, I really do.'
 
NoEgo said:
I've always wondered what it is that creates consciousness...
I'd suggest ... meaning.

Perhaps consciousness happens when meaning occurs. Or, put another way, when information is created.
 
phildwyer said:
No it would not. Consciousness is and will always remain by definition the preserve of the human spirit.
That's a grand proclamation of faith. As you've demonstrated elsewhere, you imprison your mind inside the dogmatic assertion that thought is only possible through language, closing yourself off from any possible insight into the minds of others, human or not.

If I may patronise you further: you are often keen to point out others' ignorance of Hegel. Well, might I encourage you to read a little Schopenhauer? The important point that you miss is that linguistic rationalisation or interpretation of a decision comes after the decision has been made.
 
I think anyone who has learned, say, to play chess well (or, for that matter, rock-climbing or even driving) will agree that the impulse to act comes first, and the articulation of that impulse comes a little later.

"I want to make this move," is the feeling, and "Ah, I now see why!" is the articulated thought.

What's amusing about the present state of philosophy is the way the starting point (for most 'modern' philosophers, and certainly for the officious sort of 'philosopher' that currently infests these boards) is a sort of disembodied mind!

Well, duh! guys, if that's where you start, that's where you stay.

But if you start by saying "Why are some bodies conscious?" you may just find it a right and useful question to ponder. Some bodies, one might say, are conscious because they add meaning to their perceived world (their sensorium). And this adding of meaning to the given data of sense perception is what enables an organism to choose (pace Dennett) its future, rather than just passively reacting.

That's a pretty big advantage for any body!
 
But in fact you're conflating the adding of meaning to sense data with being conscious. And although there may be some connection, they're not the same thing.

You could have a computer simulation that added meaning to sense data (or whatever was analogous to its sense data). It might be a computer character in an internet RPG, say, and part of its program would be to observe the behaviour of other players, classify them as dangerous, friends, chancers, liars, or whatever, and use this conceptualisation to guide the scripting of its next strategy with regard to them. And, as you say, by doing so it might well gain an advantage within the game - but there's no reason to suppose that it would be conscious simply by virtue of having conceptualised its environment.
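
(A minimal sketch of that kind of character in Python - the categories, rules and numbers are all invented for illustration. It 'conceptualises' the other players and picks strategies accordingly, yet nothing about it invites the label 'conscious'.)

```python
# Toy RPG character: classify other players from observed behaviour,
# then use that conceptualisation to choose the next strategy.

def classify(observations):
    """Label a player from simple counts of observed actions."""
    if observations["attacks"] > 2:
        return "dangerous"
    if observations["broken_promises"] > 0:
        return "liar"
    if observations["gifts"] > 0:
        return "friend"
    return "chancer"

STRATEGY = {
    "dangerous": "keep distance",
    "liar": "refuse trades",
    "friend": "cooperate",
    "chancer": "watch closely",
}

players = {
    "Grond": {"attacks": 5, "broken_promises": 0, "gifts": 0},
    "Elira": {"attacks": 0, "broken_promises": 0, "gifts": 3},
    "Fizz":  {"attacks": 0, "broken_promises": 2, "gifts": 1},
}

for name, obs in players.items():
    label = classify(obs)
    print(f"{name}: classified as {label} -> strategy: {STRATEGY[label]}")
```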
 