
Prof Stephen Hawking: thinking machines pose a threat to our very existence

I always come to these threads with the knowledge of too many sci fi books and not enough real science, but there is always the 'well what is cognition/intelligence/sentience anyway?'. Sooner or later we'll go down the linguistics rabbit hole.
 
Are you saying we have no idea what cognition and understanding are? For example, do you ever dither over approaching a stone and asking it a question? Or wonder whether your comb has feelings? Clearly not, so we have some very good sense of what these things are.
Let's take your stone example: if you took that uncontacted tribe out of the Peruvian jungle and put them in a room with a stone shaped like a robot, and the actual robot from the video I posted, do you think they would consider them equal?

Don't you think they might wonder if the robot that's pushing things, and pushing things in an 'intelligent' way, might understand? I think they would. And I think it would fool millions, perhaps billions of other people, too.
 
I always come to these threads with the knowledge of too many sci fi books and not enough real science, but there is always the 'well what is cognition/intelligence/sentience anyway?'. Sooner or later we'll go down the linguistics rabbit hole.
The idiot mod has dismissed it, but look at Goodstein's theorem. It really does bear thinking about. If you can see how it's true, you can see something that no computer can see, and that no computer could see unless it were a completely different kind of thing from anything currently being built.
 
Perhaps emotion isn't the right word, but our understanding of the world is inseparable from our involvement in it. We do not merely process information, we see things in terms of our existence, as they relate to our projects.
 
The idiot mod has dismissed it, but look at Goodstein's theorem. It really does bear thinking about. If you can see how it's true, you can see something that no computer can see, and that no computer could see unless it were a completely different kind of thing from anything currently being built.
GCSE maths here m8, they lost me at trigonometry.

When I come to think of what we would actually be able to call AI, all the ideas about scary, near-godlike ones that have birthed a new consciousness are good story fodder, though.

But as I have said earlier in the thread, if it's good enough to convince 100% of humans as to its sentience, well, there are fleshbags that do that; Jeremy Hunt, for example. Would he pass a Voight-Kampff? Unlikely.
 
Perhaps emotion isn't the right word, but our understanding of the world is inseparable from our involvement in it. We do not merely process information, we see things in terms of our existence, as they relate to our projects.
so basically self-awareness, the awareness of ourselves as a thing separate from the world and the stimulus that world gives us? Age 4-6 as a child, iirc. A little bit longer to realise your parents don't run the world, but that's the start.
 
so basically self-awareness, the awareness of ourselves as a thing separate from the world and the stimulus that world gives us? Age 4-6 as a child, iirc. A little bit longer to realise your parents don't run the world, but that's the start.
It's not a cognitive thing though - not a conscious belief. This is the stand that we take on the world that underlies everything else - it's what makes beliefs possible.
 
GCSE maths here m8, they lost me at trigonometry.
Fair enough. You'll have to take my word for it, then. It is a mathematical statement that we humans can see to be true, but that has no proof within ordinary (Peano) arithmetic, so no computer reasoning purely within that formal system could establish it. Technically speaking, a Turing machine searching for such a proof would never halt.

That's very significant, I think. It's a practical, concrete example of how understanding works. We may not be able to define it very well, but we can observe it in action.
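For anyone who wants to see it in action rather than take anyone's word for it, here is a minimal Python sketch of a Goodstein sequence (the function names and decomposition are my own, not anything standard): write the number in hereditary base-b notation, bump every b to b+1, subtract 1, and repeat with the next base. The theorem says this always reaches 0, even though the values typically explode first.

```python
def bump(n, b, c):
    """Rewrite n from hereditary base b to hereditary base c:
    digits stay, every occurrence of b (including in exponents)
    becomes c."""
    if n == 0:
        return 0
    result, exp = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            # the exponent itself is rewritten recursively
            result += digit * c ** bump(exp, b, c)
        exp += 1
    return result

def goodstein_step(n, base):
    """One step: bump the base, then subtract 1."""
    return bump(n, base, base + 1) - 1

# The sequence for seed 3 happens to terminate almost immediately;
# seed 4 already takes an astronomical number of steps to reach 0.
n, base = 3, 2
seq = [n]
while n > 0:
    n = goodstein_step(n, base)
    base += 1
    seq.append(n)
print(seq)  # [3, 3, 3, 2, 1, 0]
```

The punchline of the thread's argument is that a machine blindly running this loop on a larger seed would grind on for longer than the universe's lifetime, while the termination proof (via ordinals below epsilon-zero) lives outside Peano arithmetic altogether.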
 
Fair enough. You'll have to take my word for it, then. It is a mathematical statement that we humans can see to be true, but that has no proof within ordinary (Peano) arithmetic, so no computer reasoning purely within that formal system could establish it. Technically speaking, a Turing machine searching for such a proof would never halt.

That's very significant, I think. It's a practical, concrete example of how understanding works. We may not be able to define it very well, but we can observe it in action.
But who's to say there won't be problems/theorems that come out of artificial intelligence that are provable yet beyond our understanding? Might be a piece of piss for the machines, but not for us.
 
It's not a cognitive thing though - not a conscious belief. This is the stand that we take on the world that underlies everything else - it's what makes beliefs possible.
That doesn't preclude the idea of AI doing that, of that platform of unconscious self-awareness you can find in animals and toddlers emerging, given the right amount of tech and the correct work on the rules that guide the thing etc. 'It hasn't happened in the last 40 years' isn't really a good grounding for an argument against the possibility, imo. 40 years ago there was no internet and the cold war still raged. Coldly. A lot can happen, especially when tech research and development is being driven faster than any manufacturing process.
 
What bothers me more isn't AI as such, which I think is a bit of a red herring for now, but what highly sophisticated machines can actually do once we wind them up and watch them go. The autonomous decisions they make don't need to be conscious or intelligent to be dangerous.
 
That doesn't preclude the idea of AI doing that, of that platform of unconscious self-awareness you can find in animals and toddlers emerging, given the right amount of tech and the correct work on the rules that guide the thing etc. 'It hasn't happened in the last 40 years' isn't really a good grounding for an argument against the possibility, imo. 40 years ago there was no internet and the cold war still raged. Coldly. A lot can happen, especially when tech research and development is being driven faster than any manufacturing process.
It's not just that it hasn't happened in the last 40 years, it's that AI experts have been predicting it imminently for the last 40 years, without really understanding what generalised intelligence actually is. For a computer to be sentient it can't just be a more and more complex Turing machine, but that's really all that they're building.
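To make the "more and more complex Turing machine" point concrete: formally, every digital computer ever built, however large, is equivalent to a finite rule table like the toy one below. This is my own illustrative sketch, not anything from the thread; the example machine just increments a binary number.

```python
def run(rules, tape, state="start", head=0, max_steps=10_000):
    """Simulate a Turing machine. `rules` maps
    (state, symbol) -> (symbol_to_write, move 'L'/'R', next_state).
    '_' is the blank symbol."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            lo, hi = min(cells), max(cells)
            return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return None  # step budget exhausted without halting

# A binary incrementer: scan right to the end, then carry from the right.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
print(run(rules, "1011"))  # 1011 + 1 = 1100
```

The sceptical point in the post above is that scaling this rule table up, to millions of states and symbols, changes nothing about the kind of thing it is.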
 
Killer robots: Tech experts warn against AI arms race
http://www.bbc.co.uk/news/technology-33686581
More than 1,000 tech experts, scientists and researchers have written a letter warning about the dangers of autonomous weapons.

In the latest outcry over "killer robots", the letter warns that "a military AI [artificial intelligence] arms race is a bad idea".

Among the signatories are scientist Stephen Hawking, entrepreneur Elon Musk and Apple co-founder Steve Wozniak.

The letter will be presented at an international AI conference today.
...

Everyone apart from the most blinkered knows that the various militaries around the world are salivating at the prospect of robot soldiers/killers/drones/machines, which will mean fewer friendly battlefield casualties. But for most right-thinking people there are lots of worries about the use of advanced technology to kill, even if this concern is mainly fuelled by sci-fi!

I am not happy with the idea of killer robots!
Ban them I say! ban them!
 
The letter will be presented Wednesday at the International Joint Conference on Artificial Intelligence in Buenos Aires. “Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
The letter distinguishes AI weapons, which can select and attack targets without human orders, from drones and cruise missiles whose targets are selected by humans. The letter also says that while artificial intelligence can make war zones safer for members of the military, weapons that can operate without human control would kick off “a global AI arms race.”
...
http://time.com/3973500/elon-musk-stephen-hawking-ai-weapons/

“AI technology has reached a point where the deployment of such systems is, practically if not legally, feasible within years, not decades, and the stakes are high,”
 