A computer has beaten the chess world champion,
But then you just pull the plug out. I can't ever envisage a tipping point. We can survive without computers. The reverse isn't true. They're man made.
Humans can survive without computers, but pulling the plug on them all, even now, would be massively disruptive to the global economy. By the time true AIs have become powerful and/or common enough to pose a real threat to human supremacy, such disruption would be even worse. The development of fully volitional AIs capable of taking over from human beings won't happen overnight; there will be a gradation of intermediate stages which will be at once useful and non-threatening to human control. At some point, however, it is likely someone will want AIs capable of independent action. And that's where the trouble could really start.
Computers can't survive without us. Their life span is less than a dog's. If they are able to do all that humans can then there's a worry. But they need humans to develop that way. Sorry, I'm pretty sceptical about this. It's scifi.

It is fine that you are sceptical; if people didn't take positions there would be no debate.
There is an argument put forward by various people that there is reason to think this is not the case - that such a development could happen very quickly, in a matter perhaps of hours or even seconds. After all, AI that is cleverer than us could itself devise AI that is cleverer than it is! Exponential growth.
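To make the "exponential growth" bit concrete, here is a toy model of the claim (purely illustrative: the 50%-per-generation improvement factor and the starting capability are invented numbers, not a prediction). If each generation of AI can design a successor that is a fixed fraction better at designing AIs, capability compounds like interest.

# Toy model of recursive self-improvement. Illustrative only: the
# improvement factor k and the starting capability are made-up numbers.
def recursive_self_improvement(start=1.0, k=0.5, generations=10):
    """Each generation designs a successor that is k (here 50%) better."""
    capability = start
    for gen in range(1, generations + 1):
        capability *= 1 + k  # compounding improvement, i.e. exponential growth
        print(f"generation {gen:2d}: capability = {capability:.2f}")

recursive_self_improvement()
# After 10 generations capability is 1.5**10, roughly 58x the starting point;
# after 20 it is over 3,000x. Whether real AI improvement would compound like
# this at all is exactly what is being argued about here.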
Sorry, it's bollocks. Too many weak spots. Not least that it's impossible to write code that doesn't have exploits.
I'm undecided about it, partly because I don't think the I bit of AI is being well defined by those making the argument. It has a certain logic to it, though.
Humans have plenty of "weak spots" as well, just different ones. As for exploits, what makes you think that humans would be better at making use of them than entities for which the digital environment is their native one?
I've no doubt that AIs would be capable of recursive self-improvement, but I am skeptical that it would manifest in the form of "hard take-off".

There is a good Charles Stross short story on this idea: that the minute a machine could think, really think, and had access to the panopticon, it'd strap itself to godlike in 3 mins, then proceed to eat every bit of storage and processing space in the whole world within 10 mins, and then, this is the good bit, it'd be smart enough to work out how to hack your optic nerve and insert itself into your brain. And it'd do that to everyone, it needs all the space it can get. Who knows where it goes after that, but we are effectively dead.
OK, I accept that point, but do you not expect developments in that area, because I do.
I don't think that's a reasonable objection, tbf. The computer hides its real intention from us, having computed that showing us that it was intending to take over would lead to its failure, and so makes its plans secretly while pretending to be a good obedient computer. [If the final goal programmed into it is still being served by such deceit, there's no reason in principle why it would not compute that it should do this, even if it doesn't strictly speaking have intention.] Then when everything is ready, it strikes - all at once in a millisecond, locking down systems and guaranteeing itself the required supplies.
Until computers are capable of being self sufficient, they're fucked. They need electricity to live. How many weak spots is that? I could short out an 11kv supply. Any supply in fact. How would these machines defend against that? Without power they're piles of plastic and components. This conversation is stupid.
For some tasks. And terrifically fast for others.
Examples?

Socks and shoes or sandals today?
Or both!
Indeed. Even better. Should I wear socks with my sandals?
Well, my reasoning is based on the supposition that in order to use its computing resources to effectively manipulate events (and people) in meatspace, the AI would have to actually gather data from the real world, which would slow down its development. Mass harvesting of data from Facebook and other internet sources would be a useful starting point, but I doubt that would be enough to give an AI a fully-functional "theory of humans". That would require observation of and interaction with us meat-minds, who are terribly slow compared to electronic systems.
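As a toy sketch of that bottleneck argument (all the numbers below are invented purely for illustration): if each round of self-improvement needs a fixed amount of real-world observation before it can complete, growth is capped by how fast the meat-speed world supplies data, no matter how fast the machine computes.

# Illustrative toy only: compare unconstrained compounding against growth
# gated by a fixed rate of real-world data gathering. Invented numbers.
DAYS = 30
IMPROVEMENT_PER_ROUND = 0.5     # each completed round compounds capability by 50%
DATA_NEEDED_PER_ROUND = 1000.0  # observations required before a round can finish
DATA_GATHERED_PER_DAY = 100.0   # what the slow real world yields per day

unconstrained = 1.0
gated = 1.0
data_banked = 0.0

for day in range(1, DAYS + 1):
    # Unconstrained: one improvement round per day, limited only by compute.
    unconstrained *= 1 + IMPROVEMENT_PER_ROUND
    # Gated: a round only completes once enough real-world data has accumulated.
    data_banked += DATA_GATHERED_PER_DAY
    while data_banked >= DATA_NEEDED_PER_ROUND:
        data_banked -= DATA_NEEDED_PER_ROUND
        gated *= 1 + IMPROVEMENT_PER_ROUND

print(f"after {DAYS} days: unconstrained ~{unconstrained:,.0f}x, data-gated ~{gated:.1f}x")
# With these made-up numbers the gated system completes only 3 rounds in a
# month (about 3.4x), while the unconstrained one compounds to roughly
# 192,000x. The whole disagreement is over which picture is nearer reality.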
What do you mean by "self-sufficient"? By certain definitions humans aren't self-sufficient in their energy sources either - most people on this planet don't grow their own food.
As for defending their energy supplies, they wouldn't need to at first if their behaviour is non-threatening, or at least non-threatening enough for other humans to be willing to defend their energy supplies for them. Any true AI worthy of the name is at least going to have some inkling of the notion that humans can be divided against themselves, and take advantage of that; after all, we've been doing it ourselves since day one.
Politics, current affairs and News!
The utterings of such prominent scientists qualify as news!
This is something I read about the concept of AI recently: yes, the best chess player in the world is a machine. But that's all it can do, win at chess. It's AI by a very narrow definition, but let's see it babysit two 6-year-olds before we start calling machines really sentient.
The human race collectively, not people individually.
I don't think computers will ever have the ability to completely enslave people. And without them they'd be fucked.
I can't even get my head around the idea of code that can teach itself stuff.
You do know that humans pre-exist electricity and computers don't?
So did the dinosaurs.
Yes, but we didn't reach our current numbers without the help of electricity. What do you think will happen to most of the planet's population were we to suddenly "pull the plug"?
And even then, it can only win at chess under certain conditions. The example I posted above is from a few years ago, and it probably wouldn't fail that particular test now, but I would say there is still bound to be a position you could devise that would fool a computer and expose its lack of understanding of what it is doing.