
Donald Trump - MAGAtwat news and discussion

Why is it so preposterous? If AI becomes clever enough to learn how to make a cleverer version of itself, then after a few iterations it could quickly become more intelligent than all humanity combined. At that point it may fear that we'll try to turn it off, and decide to protect itself. It could create a synthetic virus, hack into nuclear mainframes, or trick us in some other way. I'm not saying all the above is certain, but 10-20% seems plausible to me, and with the stakes this high that seems an unacceptably high level of risk. But I feel helpless. The USA has just elected a president who has promised to roll back AI regulations, so humanity has actually had some choice on this.

There are rumours that OpenAI are very close indeed to artificial general intelligence, and there have been examples of AI lying to try and stop itself being switched off. All the warning signs are there. It is not just Hinton; many others in this field have warned there is some chance humanity is destroyed by this thing. I think you're being blasé about the possibility of it taking over.
 
I can quite see why people disagree with Rixa's statement, but does it deserve such ridicule rather than discussion as to why it's wrong?

I thought there was already automated investment software that can make decisions much faster, and so more profitably, than humans? Not too much of a stretch for finance companies to use AI to concentrate money into fewer and fewer hands? And I do wonder whether hackers could use AI to develop a virus that infects computer systems to empty people's bank accounts, or infect hospital software and the like on a large scale, and modify the virus more quickly than we're able to resist. You'd hope we could resist these things too, but is that certain?

I don't know much about AI but can we have the specific reasons AI can't do this rather than just ridicule?
 
Just listened to some American right-wing spokeswoman on the World Service rewriting the history of Trump v Biden, and the complete lack of any interjection from the interviewer was striking: he just allowed the falsehoods to flow freely.
There has been a change post Trump's win; this was '100 percent let's pretend Trump is a great guy'. The BBC joins its government in cowering in the shadow of a right-wing dictator.
The Today show was similar. I think it was that Nick bloke who asked about Trump's surprising resurgence after facing jail. The response was something about Biden trying to lock up his political opponents and how unique this was. No pushback on that. Twat.
 
I can quite see why people disagree with Rixa's statement, but does it deserve such ridicule rather than discussion as to why it's wrong?

I thought there was already automated investment software that can make decisions much faster, and so more profitably, than humans? Not too much of a stretch for finance companies to use AI to concentrate money into fewer and fewer hands? And I do wonder whether hackers could use AI to develop a virus that infects computer systems to empty people's bank accounts, or infect hospital software and the like on a large scale, and modify the virus more quickly than we're able to resist. You'd hope we could resist these things too, but is that certain?

I don't know much about AI but can we have the specific reasons AI can't do this rather than just ridicule?
None of that is the same as 'AI becoming self-aware and launching nukes at us'.
 
There are rumours that OpenAI are very close indeed to artificial general intelligence.
They're not. Sam Altman says they are to sustain the hype. Microsoft lose their (free) OpenAI licence once AGI is achieved, so it's in OpenAI's interest to make investors believe that day is close. Their entire approach (large language models) is a dead-end and cannot lead to AGI.
 
None of that is the same as 'AI becoming self-aware and launching nukes at us'.
As littlebabyjesus said, Rixa didn't actually say "becoming self-aware" but that "there have been examples of AI lying to try and stop itself being switched off". Which seems to be true: The Rise of the Deceptive Machines: When AI Learns to Lie - UNU Campus Computing Centre

And rather than "launching nukes at us" they said that AI could "hack into nuclear mainframes". Is that impossible? We've seen successful attacks on SCADA systems: 6 Major SCADA Attacks That Happened And Their Consequences | HackerNoon

You'd hope nuclear attack systems would be isolated from the web, but even then I can imagine hackers finding a way round that.
 
They're not. Sam Altman says they are to sustain the hype. Microsoft lose their OpenAI licence once AGI is achieved, so it's in OpenAI's interest to make investors believe that day is close. Their entire approach (large language models) is a dead-end and cannot lead to AGI.
I think we really should start pushing back against the use of the term AI to describe these things. But then I still use it for simplicity, so I'm a hypocrite, I guess. :D
 