Urban75

Prof Stephen Hawking: thinking machines pose a threat to our very existence

Keep waiting on that sequel to Glasshouse myself. It should be great
You mean the Singularity perhaps, when we are all resurrected as computational constructs once computer intelligence reaches an infinite point (which will inevitably happen as self-improving computers are created, of course). There are people who literally believe in the Singularity.
rapture of the nerds ennit
 
I like Egan but I don't think he's particularly good on AI. I'm not sure who is in SF tbh - they always turn into slaves, angels or demons, and while that's fine as a metaphor it's not the traditional SF pursuit of extrapolating _from_ the technology.

Gibson's take on AI in Neuromancer is one of the best ones; the AIs there are completely unknowable and their "motivations" and "thought processes" just don't make sense to the human characters, in the same way that corporations and society don't make sense either. The AIs exist happily in a world that nobody understands, as just another inexplicable component. This is an explicit message - one of the AIs might have a human face, just like a corporation does, but it doesn't make it human, and it's not even strictly possible to define AIs as individuals at all. They're simultaneously easier to talk to than a PR strategy and even more alien than what it represents.


Peter Watts goes into this in Blindsight and Echopraxia.

The idea that AI could be so alien to its creators that you end up needing upgraded humans just to work out half of what the AIs are doing/saying.

“I am the bridge between the bleeding edge and the dead center. I stand between the Wizard of Oz and the man behind the curtain.
I am the curtain.”
 
Which of Greg Egan's would you recommend starting with? (I'm not a short-story fan)

Any of them, although I couldn't get on with Schild's Ladder. In truth, I most enjoyed Teranesia... but that is probably the least Egan-like one and probably the most accessible, without the often confusing physics.
 
I think his concern is that a developed and physically unlimited AI could improve itself much faster than humans can evolve, thus far outstripping us within a few years.

And, for example, perhaps we could not hold them to Asimov's laws...

Wonder if Prof Hawking has been watching too many re-runs of the Terminator movies?
 
You mean the Singularity perhaps, when we are all resurrected as computational constructs once computer intelligence reaches an infinite point (which will inevitably happen as self-improving computers are created, of course). There are people who literally believe in the Singularity.
If the computers get smart enough to figure out how to resurrect us as computers, I doubt they'd bother. If I was them (sigh), I'd build myself a robot spaceship body and fuck off out of this solar system.
 
If the computers get smart enough to figure out how to resurrect us as computers, I doubt they'd bother. If I was them (sigh), I'd build myself a robot spaceship body and fuck off out of this solar system.
Yeah but they get infinitely smart, because intelligence is just a sliding scale like mass. So therefore LOGICALLY they will create heaven for everyone where nobody dies. It's definitely not a religion thing, this is SCIENCE.
 
 
If the computers get smart enough to figure out how to resurrect us as computers, I doubt they'd bother. If I was them (sigh), I'd build myself a robot spaceship body and fuck off out of this solar system.

Dan Simmons (who as it turns out is a bit of a dickhead) speculated on this in fiction. The idea being that AI would run re-creations of history down to the most perfect detail, because all creation wants to understand its creator.
 
Dan Simmons (who as it turns out is a bit of a dickhead) speculated on this in fiction. The idea being that AI would run re-creations of history down to the most perfect detail, because all creation wants to understand its creator.

And that's all we are: a nerd robot running The Sims 300,000,000,000 on his Spectrum, before mum calls him down for dinner. I hope he remembered to save the game this time.
 
What really counts as AI?

It's not just something that can learn. It has to be something with a developing sense of self, no? Something that can model the stuff it senses from within and without to create an image of itself as a self acting in a world. And something that will use that to successfully look after itself/achieve (self-generated) goals.

afaik, nothing humans have created has come close to this. We're not even at worm level in terms of creating something with a sense of self that might be thought to be in some way conscious.
 
What really counts as AI?

It's not just something that can learn. It has to be something with a developing sense of self, no? Something that can model the stuff it senses from within and without to create an image of itself as a self acting in a world. And something that will use that to successfully look after itself/achieve (self-generated) goals.

afaik, nothing humans have created has come close to this. We're not even at worm level in terms of creating something with a sense of self that might be thought to be in some way conscious.

Well there is the:

"The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human."
http://en.wikipedia.org/wiki/Turing_test

But it says nothing of a sense of self.

Is a sense of self required for intelligence?
 
Yeah, but the Turing test is relatively flawed. There's little bots you can chat to, 'turing toys'.

Normally they give themselves away within summat like 8 exchanges.

see also, the Chinese Room idea
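For anyone curious why those 'turing toys' fall over so fast: most are just ELIZA-style pattern matchers with no memory or model of the conversation. Here's a toy sketch of the idea (the rules and wording here are made up for illustration, not from any real chatbot):

```python
import re

# Canned pattern -> response-template pairs. No memory, no sense of self,
# no model of the world: just regex matching on the last message.
RULES = [
    (r"\bi am (.*)", "Why do you say you are {0}?"),
    (r"\bi feel (.*)", "What makes you feel {0}?"),
    (r"\byou\b", "We were talking about you, not me."),
]
FALLBACK = "Tell me more."

def reply(text: str) -> str:
    text = text.lower()
    for pattern, template in RULES:
        m = re.search(pattern, text)
        if m:
            return template.format(*m.groups())
    # Anything unmatched gets the same stock response every time -
    # exactly the repetition that gives the bot away within a few exchanges.
    return FALLBACK

if __name__ == "__main__":
    print(reply("I am worried about AI"))
    print(reply("What did I just say?"))  # no memory, so: generic fallback
    print(reply("Anything at all"))       # same fallback again
```

Ask it to refer back to anything said two turns ago and it has nothing, which is also roughly the Chinese Room point: symbol shuffling without understanding.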
 
Well there is the:

"The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human."
http://en.wikipedia.org/wiki/Turing_test

But it says nothing of a sense of self.

Is a sense of self required for intelligence?
It's required for artificial personhood, which I would have thought would be the threshold beyond which a machine will start doing things you haven't told it to do. For that, you need a machine that feels, a machine that has emotions and motivation. You don't just get that by increasing complexity of calculation ability.
 
AI is a bastion of wackadoodles. Kevin Warwick at Reading Uni. Hugo de Garis, who basically says: create a super brain from nanobots, drop it on the moon, and it turns into god.

Unless of course they are correct in which case I welcome our new nanobrain...etc etc
 