
Prof Stephen Hawking: thinking machines pose a threat to our very existence

Computers can't survive without us. Their lifespan is less than a dog's. If they are able to do all that humans can, then there's a worry. But they need humans to develop that way. Sorry, I'm pretty sceptical about this. It's sci-fi.
 
a computer has beaten the chess world champion,

Computer chess programmes are an interesting case study, I think, as they highlight how hard it can be to detect the basic absence of understanding that underlies any chess computer's play. It is there, and can be exposed, though. Here is a chess position that defeated Deep Thought long after it had already grown strong enough to defeat chess grandmasters. I think the blunder bears some deep thought! (Position devised by William Hartston)

White to move. What should white do?

[Image: screenshot of the chess position described above]


It's reasonably easy for us to understand that we're defended here by the wall of pawns, and that we're miles behind in material, so the only thing to do is to shuffle the king around. It's a guaranteed draw. Deep Thought took the rook, leaving itself in a hopeless position.
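A toy sketch of why this happens: imagine a hypothetical one-ply "engine" that scores moves purely by the material they win. This is far cruder than Deep Thought's real search, but it shows the same bias, with no concept of a fortress, the capture always looks best. The piece values and move list here are invented for illustration.

```python
# Hypothetical one-ply material-counting "engine": it has no concept
# of a fortress, so winning material always looks like the best move.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def score(move):
    """Material gained by the move; 0 if nothing is captured."""
    return PIECE_VALUES.get(move.get("captures"), 0)

def pick_move(candidate_moves):
    """Greedy choice: the move that wins the most material."""
    return max(candidate_moves, key=score)

# In the Hartston position, shuffling the king keeps the pawn wall
# intact (a dead draw); taking the rook wins material but loses the game.
moves = [
    {"name": "shuffle the king", "captures": None},
    {"name": "take the rook", "captures": "rook"},
]
print(pick_move(moves)["name"])  # -> take the rook
```

A real engine searches many plies deep, of course, but if the losing consequences of the capture lie beyond its search horizon, the material count still wins the argument.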
 
But then you just pull the plug out. I can't ever envisage a tipping point. We can survive without computers. The reverse isn't true. They're man made.

Humans can survive without computers, but pulling the plug on them all, even now, would be massively disruptive to the global economy. By the time true AIs have become powerful and/or common enough to pose a real threat to human supremacy, such disruption would be even worse. The development of fully volitional AIs capable of taking over from human beings won't happen overnight; there will be a gradation of intermediate stages which will be at once useful and non-threatening to human control. At some point, however, it is likely someone will want AIs capable of independent action. And that's where the trouble could really start.
 
Humans can survive without computers, but pulling the plug on them all, even now, would be massively disruptive to the global economy. By the time true AIs have become powerful and/or common enough to pose a real threat to human supremacy, such disruption would be even worse. The development of fully volitional AIs capable of taking over from human beings won't happen overnight; there will be a gradation of intermediate stages which will be at once useful and non-threatening to human control. At some point, however, it is likely someone will want AIs capable of independent action. And that's where the trouble could really start.

Sorry, it's bollocks. Too many weak spots. Not least that it's impossible to write code that doesn't have exploits.
 
Computers can't survive without us. Their lifespan is less than a dog's. If they are able to do all that humans can, then there's a worry. But they need humans to develop that way. Sorry, I'm pretty sceptical about this. It's sci-fi.
It is fine you are sceptical, if people didn't take positions there would be no debate.

But I think you are wrong to discount science fiction. Some of the best science fiction extrapolates from where we are to the future. Sure, there is some fantasy stuff, but when you consider that the inventor of the telephone, Bell, thought telephones would be so popular that eventually EVERY TOWN would have ONE, you can see just how wrong people can be with technology predictions.

And now not only does every individual house have a phone; in many countries every person has one too. And in some cases what phones they are: what technology and processing power in a small handset, more processing than was required to first land humans on the moon.
 
Humans can survive without computers, but pulling the plug on them all, even now, would be massively disruptive to the global economy. By the time true AIs have become powerful and/or common enough to pose a real threat to human supremacy, such disruption would be even worse. The development of fully volitional AIs capable of taking over from human beings won't happen overnight; there will be a gradation of intermediate stages which will be at once useful and non-threatening to human control. At some point, however, it is likely someone will want AIs capable of independent action. And that's where the trouble could really start.
There is an argument put forward by various people that there is reason to think this is not the case - that such a development could happen very quickly, in a matter perhaps of hours or even seconds. After all, AI that is cleverer than us could itself devise AI that is cleverer than it is! Exponential growth.
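The arithmetic behind that argument can be sketched in a few lines. This is a toy model of the "hard take-off" claim, not a prediction: it simply assumes that each AI generation designs a successor whose improvement scales with its own capability, which yields compound growth. The starting capability and gain rate are invented numbers purely for illustration.

```python
# Toy model of recursive self-improvement: each generation designs the
# next, and (by the argument's assumption) the improvement it can make
# is proportional to its own capability -> compound, exponential growth.

def self_improve(capability, gain=0.5, generations=10):
    """Return the capability trajectory over successive generations."""
    history = [capability]
    for _ in range(generations):
        # A smarter designer makes a proportionally smarter successor.
        capability *= (1 + gain)
        history.append(capability)
    return history

trajectory = self_improve(1.0)
print(trajectory[-1])  # 1.5**10, roughly 57.7x the starting capability
```

Whether the gain per generation really stays constant (or grows, or hits diminishing returns) is exactly what the sceptics in this thread dispute; change `gain` to a shrinking value and the take-off flattens out.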
 
Sorry, it's bollocks. Too many weak spots. Not least that it's impossible to write code that doesn't have exploits.

Humans have plenty of "weak spots" as well, just different ones. As for exploits, what makes you think that humans would be better at making use of them than entities for which the digital environment is their native one?
 
There is an argument put forward by various people that there is reason to think this is not the case - that such a development could happen very quickly, in a matter perhaps of hours or even seconds. After all, AI that is cleverer than us could itself devise AI that is cleverer than it is! Exponential growth.

I've no doubt that AIs would be capable of recursive self-improvement, but I am skeptical that it would manifest in the form of "hard take-off".
 
Humans have plenty of "weak spots" as well, just different ones. As for exploits, what makes you think that humans would be better at making use of them than entities for which the digital environment is their native one?

Until computers are capable of being self-sufficient, they're fucked. They need electricity to live. How many weak spots is that? I could short out an 11 kV supply. Any supply, in fact. How would these machines defend against that? Without power they're piles of plastic and components. This conversation is stupid.
 
I've no doubt that AIs would be capable of recursive self-improvement, but I am skeptical that it would manifest in the form of "hard take-off".
there is a good Charles Stross short story on this idea: that the minute a machine could think, really think, and had access to the panopticon, it'd bootstrap itself to godlike in 3 minutes, then proceed to eat every bit of storage and processing space in the whole world within 10 minutes. And then, this is the good bit, it'd be smart enough to work out how to hack your optic nerve and insert itself into your brain. And it'd do that to everyone; it needs all the space it can get. Who knows where it goes after that, but we are effectively dead.

spooky tale
 
Until computers are capable of being self-sufficient, they're fucked. They need electricity to live. How many weak spots is that? I could short out an 11 kV supply. Any supply, in fact. How would these machines defend against that? Without power they're piles of plastic and components. This conversation is stupid.
OK, I accept that point, but do you not expect developments in that area, because I do.
 
Until computers are capable of being self-sufficient, they're fucked. They need electricity to live. How many weak spots is that? I could short out an 11 kV supply. Any supply, in fact. How would these machines defend against that? Without power they're piles of plastic and components. This conversation is stupid.
I don't think that's a reasonable objection, tbf. The computer hides its real intention from us, having computed that showing us that it was intending to take over would lead to its failure, and so makes its plans secretly while pretending to be a good obedient computer. [If the final goal programmed into it is still being served by such deceit, there's no reason in principle why it would not compute that it should do this, even if it doesn't strictly speaking have intention.] Then when everything is ready, it strikes - all at once in a millisecond, locking down systems and guaranteeing itself the required supplies.
 
I'm undecided about it, partly because I don't think the I bit of AI is being well defined by those making the argument. It has a certain logic to it, though.

Well, my reasoning is based on the supposition that in order to use its computing resources effectively to manipulate events (and people) in meatspace, the AI would have to actually gather data from the real world, which would slow down its development. Mass harvesting of data from Facebook and other internet sources would be a useful starting point, but I doubt that would be enough to give an AI a fully-functional "theory of humans". That would require observation of and interaction with us meat-minds, who are terribly slow compared to electronic systems.

Until computers are capable of being self-sufficient, they're fucked. They need electricity to live. How many weak spots is that? I could short out an 11 kV supply. Any supply, in fact. How would these machines defend against that? Without power they're piles of plastic and components. This conversation is stupid.

What do you mean by "self-sufficient"? By certain definitions humans aren't self-sufficient in their energy sources either - most people on this planet don't grow their own food.

As for defending their energy supplies, they wouldn't need to at first if their behaviour is non-threatening, or at least sufficiently non-threatening for other humans to be willing to defend their energy supplies for them. Any true AI worthy of the name is at least going to have some inkling of the notion that humans can be divided against themselves, and take advantage of that; after all, we've been doing it ourselves since day one.
 
That would require observation of and interaction with us meat-minds, who are terribly slow compared to electronic systems.
For some tasks. And terrifically fast for others. :)

We'll see. Quantum computing may well be a total game-changer in this anyway, in ways that we find it hard even to imagine for sci-fi stories.
 
or both!
Indeed. Even better. Should I wear socks with my sandals?

Another example is that chess problem I posted above. Deep Thought computed and computed and computed, and ended up doing a really stupid thing. To any competent chess player, an obviously stupid thing.

These aren't trivial considerations.
 
Just occurred to me: if they do manage it they'll see this, and then most people on this thread are completely fucked.

Which would be a great pity. Because they'll never get to experience what a boon to mankind it would be if computers managed to take over the world and improve us all. I've often thought to myself how great that would be, and how I'd positively look forward to co-operating with our new friends as they order stuff better than we can. Can't wait.
 
Well, my reasoning is based on the supposition that in order to use its computing resources effectively to manipulate events (and people) in meatspace, the AI would have to actually gather data from the real world, which would slow down its development. Mass harvesting of data from Facebook and other internet sources would be a useful starting point, but I doubt that would be enough to give an AI a fully-functional "theory of humans". That would require observation of and interaction with us meat-minds, who are terribly slow compared to electronic systems.



What do you mean by "self-sufficient"? By certain definitions humans aren't self-sufficient in their energy sources either - most people on this planet don't grow their own food.

As for defending their energy supplies, they wouldn't need to at first if their behaviour is non-threatening, or at least sufficiently non-threatening for other humans to be willing to defend their energy supplies for them. Any true AI worthy of the name is at least going to have some inkling of the notion that humans can be divided against themselves, and take advantage of that; after all, we've been doing it ourselves since day one.

The human race collectively, not people individually. I don't think computers will ever have the ability to completely enslave people. And without them they'd be fucked. I can't even get my head around the idea of code that can teach itself stuff.
 
Politics, current affairs and News!

The utterings of such prominent scientists qualify as news!

On whatever ITV's morning show is called, they recently ran a piece over the "agony" One Direction fans face in an upcoming poll over whether to vote for 1D or the one who just left them. So "news" is a very subjective animal.
 
The human race collectively, not people individually.

Well in that case we are just as dependent on electricity as computers are, at least as things currently stand. "Pulling the plug" could well mean the deaths of millions if not billions of humans as well. Or do you think that 7 billion+ people could be sustained on this planet without electricity?

I don't think computers will ever have the ability to completely enslave people. And without them they'd be fucked.

The relationship would at least begin as a symbiotic one. Whether that relationship proves to be stable would depend on a number of factors.

I can't even get my head around the idea of code that can teach itself stuff.

Why not? It's already happening, at least to some degree. While it may be limited at the moment, the basis for more general learning capability is being constructed as we type.
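For anyone who can't picture "code that teaches itself", here is about the smallest honest example: the program is never told the rule y = 2x; it starts with a wrong guess for a single weight and adjusts it from examples alone (one-parameter gradient descent). The data, learning rate, and iteration count are all invented for illustration.

```python
# Minimal "code that teaches itself": the rule y = 2x is never written
# into the program; a single weight is learned from example pairs alone.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, target) pairs

w = 0.0              # initial guess: the program knows nothing
learning_rate = 0.05

for _ in range(200):                     # repeatedly learn from examples
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x   # nudge w to shrink the error

print(round(w, 3))  # -> 2.0, discovered from the data, never hard-coded
```

Modern systems learn millions of such weights rather than one, but the principle, adjusting parameters to reduce error on examples, is the same.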
 
You do know that humans pre-exist electricity and computers don't?

Yes, but we didn't reach our current numbers without the help of electricity. What do you think will happen to most of the planet's population were we to suddenly "pull the plug"?
 
Yes, but we didn't reach our current numbers without the help of electricity. What do you think will happen to most of the planet's population were we to suddenly "pull the plug"?

Where is this going? Pull what plug? We know how to generate electricity now. By various means.
 
I know about electrical maintenance as I do it for a living. I'm not expecting computers to be able to do it any time soon. It isn't production line work.
 
this is something I read about the concept of AI recently: yes, the best chess player in the world is a machine. But that's all it can do, win at chess.
And even then, it can only win at chess under certain conditions. The example I posted above is from a few years ago, and it probably wouldn't fail that particular test now, but I would say that there is bound to still be a position that you could devise that would fool a computer, would expose its lack of understanding about what it is doing.

What understanding is is an important and difficult question, imo, and one whose existence some in AI simply deny - saying that understanding can be computed.

The opposite position, held by Penrose among others, whose arguments to me are very convincing, is that understanding cannot come from any kind of algorithmic computation. There are many examples of the human ability for non-algorithmic understanding that a computer cannot have, and Penrose uses Gödel's incompleteness theorem to show how a computer can never have such a thing. We can see the truth behind a so-called 'Gödel statement'. We can understand it. But a computer can never compute that specific understanding using algorithms, and we can prove that it can't.

I do think quantum computing will be a game-changer, simply because we will be progressing closer to the way we and other life-forms work stuff out - parallel 'try everything, match patterns from other domains' processes that allow for leaps of thought, intuition, inspiration, metaphor.

Show me a computer that understands, and can generate and use, metaphor.
 