Interesting thread
I agree with just about everything axon has said here. Bugger.
There are a lot of huge assumptions about consciousness that people seem to make automatically: that we need to understand how consciousness works before we can replicate it, that replicating a brain with non-biological computery parts will miss out whatever it is that generates consciousness, that a computer convincingly behaving as though it were conscious wouldn't in fact be conscious...
We know that our consciousness is an amazing and fiendishly complex thing because we experience it, but I think that experience gets in the way and makes people jump to unnecessarily complicated conclusions. We look at the lump of meat in our heads that produces this thing that doesn't seem anything like meat and, without any concession to dualism, it seems that neurons must be doing something Almost Unimaginably Complicated. And there are so many neurons doing it, connected in so many different ways, that it's easy to think that no amount of tracking their interactions will reveal a bigger picture. (In fact I believe the bigger picture is that there is no spoon, no bigger picture.)
I don't think there are many creationists on these here boards, so we can probably all agree that our brains evolved. There was nothing guiding brains toward consciousness; nothing that needed prior knowledge of what consciousness is and how it works; it 'just' happened when all the right bits were selected for. So how do you replicate that? I know it sounds trivial at first to answer that you do it by replicating all the bits, but bear with me.
How do we know what needs replicating and what doesn't? You pick somewhere and see. A good place to start would be with the functional interactions of the component parts: neurons, glia (support cells), any parts that are connected and send signals to other parts. This can be done at a reasonable level of abstraction. The materials that make up a car engine are subject to quantum interactions, but at a macro level of space and time you don't need to consider those interactions to build an engine that works.
As axon described, neurons integrate a (sometimes huge) number of inputs that change the probability of sending a signal onward. It is possible to replicate this action, in software, for a group of artificial neurons. I think this has also been done in hardware. Given that this is possible, it is then 'simply' a question of finding out what is connected to what and how. No small task, but not one that requires some qualitatively different understanding of what neurons do apart from integrate, modulate and fire.
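To make that concrete, here's a minimal sketch in Python of the kind of thing I mean: a handful of artificial neurons that do nothing more than integrate weighted inputs, leak a little, and fire with a probability that depends on how close they are to threshold. This is purely illustrative; the Neuron class, the random weights, and the leak/threshold values are my own assumptions for the sketch, not taken from any real brain simulator.

import random

class Neuron:
    """A toy integrate-and-fire neuron: sums weighted inputs, leaks a bit,
    and fires with a probability that rises as its potential nears threshold."""

    def __init__(self, n_inputs, threshold=1.0, leak=0.9):
        # Random synaptic weights stand in for learned connection strengths.
        self.weights = [random.uniform(-0.5, 1.0) for _ in range(n_inputs)]
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step
        self.potential = 0.0

    def step(self, inputs):
        """Integrate one time step of 0/1 input spikes and maybe fire."""
        self.potential = self.potential * self.leak + sum(
            w * x for w, x in zip(self.weights, inputs)
        )
        # Firing probability grows with potential relative to threshold.
        p_fire = min(1.0, max(0.0, self.potential / self.threshold))
        if random.random() < p_fire:
            self.potential = 0.0  # reset after a spike
            return 1
        return 0

# Wire a small group of neurons to the same inputs and watch them fire.
layer = [Neuron(n_inputs=4) for _ in range(3)]
for t in range(5):
    spikes_in = [random.choice([0, 1]) for _ in range(4)]
    spikes_out = [n.step(spikes_in) for n in layer]
    print(t, spikes_in, "->", spikes_out)

Nothing in that sketch needs to 'know' anything about consciousness; the hard part, as I say, is the wiring diagram, not the units.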
Create a functional replication of a conscious brain and you will have created a conscious brain.
If the question was about whether a PC-type computer - one that can do clever stuff, but in a different way to the brain - will attain consciousness once it has 'enough' processing power, then I think the answer is no.