
Prof Stephen Hawking: thinking machines pose a threat to our very existence

Isn't it important to differentiate between Artificial Narrow Intelligence, which we're seeing increasing amounts of in the world (i.e. clever at doing one or a very limited subsection of things) and Artificial General Intelligence (i.e. artificial sentience) which has the potential to fundamentally change the world/humanity/etc, which we are fairly far from being able to create?

Read this article recently, seems a good place to post the link
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Or, "actual AI that AI researchers do and you study in an AI degree" vs "fantasy journalist AI". It's a bit like if every article relating to physics had the theme "but what happens when we all have lightsabers and can time travel, eh?"
 
What is it that my cat has that the most powerful computer in the world does not have?

I would say, rather than experience, your cat and you have a certain level of self determination which your computer does not.

Your computer just sits there until you instruct it to do something, you and your cat can decide to sit there or go about exploring your environment or doing any number of things including learning how to do new things which may have just occurred to you, or the cat.
 
My cat has a stake in his body.

He has a stake in the world. He has something to lose. And he knows it. He may not know that he knows, but he knows. He has a mind.
 
But ultimately your cat and you are mortal; you have a stake only for a period of time. I don't know whether a cat knows this, but an adult human usually comes to that realisation.

I have a mind: I have things that I want to do, life that I want to get on with, emotional responses, thoughts that seem to come from the ether. It is hard to define a mind. Why would a machine need a mind? A machine could be programmed to have goals, much as humans have wants and needs.
 
Short answer: no.

Think about why you have a mind. What purpose does it serve? Why has evolution produced minds?

Can we produce robots with minds? Sure, why not. But we have not even begun to try to do so.
 
What is a mind is a very good question.

I think the answer is something that is perhaps not obvious.

A mind is a simulation of a situation supposed to be 'reality'. It contains an actor in the world and represents the actor and its relationship with the world. But the actor and the world with which it is interacting are both entirely self-generated images.

And organisms have evolved to produce minds for a very good reason. Minds allow organisms to negotiate their way through the world more effectively, to make sense of incoming sensory data in a way that eliminates noise.

Minds have been selected by evolution. But they are a very particular thing, and I know of no computer that even approaches such a thing.
 
Killer robots will leave humans 'utterly defenceless' warns professor
http://www.telegraph.co.uk/news/sci...mans-utterly-defenceless-warns-professor.html
Didn't you read it all?

“My colleagues and I spend dinner parties explaining that we are not evil but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the ocean,” she said.
 
Ray Kurzweil: Humans will be hybrids by 2030
http://money.cnn.com/2015/06/03/technology/ray-kurzweil-predictions/index.html
Kurzweil predicts that humans will become hybrids in the 2030s. That means our brains will be able to connect directly to the cloud, where there will be thousands of computers, and those computers will augment our existing intelligence. He said the brain will connect via nanobots -- tiny robots made from DNA strands.

While this does not mean an immediate threat he goes on to say that where there is promise there can also be peril.
 
Kurzweil is at best wildly optimistic (if that's the right word) and at worst, talking out of his arse. No doubt in this century there will be increasing functional integration of humans and machines (look at how many people are glued to their smartphones), but proper human-machine synthesis on the level that requires nanobots?
 
He's a futurist as well as all the other things; his ideas are hotly argued against by others. I want to believe his perspective because, basically, when the hell is it going to be Neuromancer time?
 

I prefer my futurists to have a little more grounding in science and engineering. A little knowledge of sociology and economics wouldn't go amiss either.
 
Raymond "Ray" Kurzweil is an American author, computer scientist, inventor, futurist, and is a director of engineering at Google.

So, not a total flake. I don't buy neuro-augs in the next 15 years, though; 30 perhaps. We've already had a paraplegic woman piloting an F-35 simulator with her mind, and various advances in biotechnology, robotics etc. Never say never, but not on his timescale. Optimism's all well and good, but I think he reckons we're going to hit some critical point where it all goes a bit exponential and the work of 20 years is bootstrapped in 5.
 
So, not a total flake. I don't buy neuro-augs in the next 15 years, though; 30 perhaps. We've already had a paraplegic woman piloting an F-35 simulator with her mind, and various advances in biotechnology, robotics etc. Never say never, but not on his timescale.

Thing is, this is where I think the "softer" sciences like sociology and psychology come in. Just because something is technically possible doesn't mean that it will be subject to widespread adoption among the general population. Take your example, for instance. Why would an average person with no disabilities choose to undergo surgery or injection with nanobots (with all the potential risks such an operation would necessarily entail) in order to mentally control machines when non-integrative interfaces are already well developed?

Don't get me wrong; I find the idea of a human-machine interface fascinating, and it would be something I would definitely consider if it was proven to be reasonably safe and if it were to offer some kind of advantage or improvement in quality of life (and if I could afford it!), but I guess that I'm probably an outlier as far as this sort of thing goes. I value my intelligence and my morality, but I don't especially value my status as an unmodified human being on an intrinsic basis. I would expect that there would have to be significant economic and/or social pressures in order for widespread adoption of such emerging technologies to become a reality. You're not just buying a shiny new toy here; you're modifying a part of what you are, and that's a big step even if you're willing to take that journey.

Let's not forget that there are wider social implications to the development and adoption of technology; what if the reason that people are choosing to cybernetically modify themselves is because there are economic pressures that strongly urge people to do so in order to remain competitive in the labour market with increasing automation, expert systems, artificial intelligence, that kind of thing? That's not a politically neutral thing or an unalloyed good. But I suspect you know all this stuff already.

Optimism's all well and good, but I think he reckons we're going to hit some critical point where it all goes a bit exponential and the work of 20 years is bootstrapped in 5.

Kurzweil seems to strongly subscribe to a version of the hard take-off model of the technological singularity, which I do not, for various reasons.
 
Oh yeah, I understand and have thought a lot over the years about aspects of tech/society/politics etc. GATTACA was an interesting movie because of this: in a world full of designer babies, the poor and downtrodden are literally a genetic underclass. It's aspects like that where I find the fruit lies in a good dystopian sci-fi story.

But as for uptake on the wetware/hardware interface: military applications, medical solutions. And military/medical tech eventually finds its way into civilian application, if there is a desire for it, as you say! Personally, the idea of being able to control all my electrical items by thought alone is awesome. I'd even go for the fully implanted contacts giving me a HUD and enhanced vision, night sight, predator vision etc.
 
Personally, the idea of being able to control all my electrical items by thought alone is awesome. I'd even go for the fully implanted contacts giving me a HUD and enhanced vision, night sight, predator vision etc.

I don't see the hacker problem going away any time soon. It's one thing to get your computer hacked, another to get your brain hacked.
 
Well, that's the premise in Stephenson's Snow Crash and a plot device in Neuromancer (the film is in development and I cannot fucking wait for that one. Less spandex, more Gibson adaptations please, Hollywood). You could write an amusing short about an ad exec who manages to catch a trojan and is driven insane by the adverts rammed into his every waking hour based on what he's looking at. Revenge!

How possible it would really be, though, is another thing. You could get the brain to operate the machine, but the machine operating the brain? I just don't know. Maybe it'd be like trying to play a PS4 game on a PS1: no backwards compatibility. I really don't know enough.

I am enthused by the thought of stalking out of the mist and raising my hand and clenching a fist as the 6 predator drones slaved to my implant swoop down to deliver the bad news upon my enemies

/geek
 
You could write an amusing short about an ad exec who manages to catch a trojan and is driven insane by the adverts rammed into his every waking hour based on what he's looking at. Revenge!
Do it! That's a lovely idea :)
 
How possible it would really be, though, is another thing. You could get the brain to operate the machine, but the machine operating the brain? I just don't know.
In programming, you can do something called dependency injection to simulate input to a program that has not yet been written, or to isolate a variable input so you can test your program with consistent data. It involves defining the format of the expected input and writing a small program/function to simulate that input. Once you're satisfied the bit you're working on is OK, you plug in the real thing and see if it still works. I imagine this kind of technique is how we'd do things like "controlling the brain" with a machine.
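To make the dependency-injection idea concrete, here's a minimal sketch in Python (all names are made up for illustration): the processing function depends on an abstract "source" of readings, so a fake source with consistent test data can stand in until the real one exists.

```python
from typing import Callable, List

# The processor only knows the *format* of its input: any zero-argument
# callable returning a list of ints. The actual source is injected.
def average_reading(source: Callable[[], List[int]]) -> float:
    readings = source()
    return sum(readings) / len(readings)

# Fake source: simulates the expected input with consistent test data.
def fake_sensor() -> List[int]:
    return [10, 20, 30]

# Test the processor against the fake before the real source exists.
print(average_reading(fake_sensor))  # 20.0

# Later, a real source with the same signature is plugged in unchanged:
# average_reading(real_sensor)
```

Because the processor never names its source, swapping the fake for the real thing is a one-line change at the call site.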

It doesn't have to be a big-bang change, either. Let's take something relatively simple to explain (rather than implement), like depression. We know it involves biochemical changes in the brain and is associated with reduced serotonin signalling at certain brain receptors*. The brain works by converting electrical impulses in neurons into chemicals to cross the synapse, and then back into electrical impulses in another neuron. If we could hook into the point where the chemical transfer takes place, we might be able to 'listen' out for the electrical signal that would trigger a serotonin release and instead send that signal to an implant of some sort. Depending on the parameters of this implant, it may or may not pass the message on across the synapse to the receptor. If a depressed individual is not producing enough serotonin, then the implant may send a signal to the receptor even when it hasn't been requested by the transmitting neuron.

We've now swapped out the chemical synapse for a middleware layer which we're in control of, but which directly affects (controls?) the brain. And because it's electrical signals we're dealing with, this kind of thing should be doable. Of course, there are tens of billions of neurons, so we're nowhere near implementing anything like this. But even if you were to manage it in one neuron, you'd be 'controlling' the brain, if only in a tiny way. Then it's just a case of scaling up from there.

*Depression and the way neurons work are clearly more complicated than this, but it will do for the explanation.
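The middleware-synapse idea above can be sketched as a toy simulation. This is purely illustrative (class names, thresholds, and the "serotonin level" numbers are all invented, and real neurons work nothing like this simply): an implant object sits between the transmitting neuron and the receptor, relays each signal, and tops it up when the natural release falls below a configured level.

```python
class Receptor:
    """Toy receiving neuron: just records what arrives across the synapse."""
    def __init__(self) -> None:
        self.received = []

    def receive(self, level: float) -> None:
        self.received.append(level)


class SynapseImplant:
    """Middleware layer standing in for the chemical synapse.

    It listens for each release signal and decides what the receptor sees,
    supplementing weak natural releases up to a configured threshold.
    """
    def __init__(self, receptor: Receptor, boost_threshold: float) -> None:
        self.receptor = receptor
        self.boost_threshold = boost_threshold

    def on_signal(self, serotonin_level: float) -> None:
        # Pass strong signals through unchanged; top up weak ones.
        delivered = max(serotonin_level, self.boost_threshold)
        self.receptor.receive(delivered)


receptor = Receptor()
implant = SynapseImplant(receptor, boost_threshold=0.5)
implant.on_signal(0.2)   # weak natural release: topped up to 0.5
implant.on_signal(0.8)   # strong release: passed through unchanged
print(receptor.received)  # [0.5, 0.8]
```

The point of the sketch is the architecture, not the biology: once the signal path runs through a layer you control, "control" of the downstream side reduces to the policy coded in that layer.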
 