
AI - Starmers 50 point plan to be released (13/1/25)

"So, where should we put the first AI server hub Sir Keir? The logical places would be Scotland, the North or Wales, all of which could do with new industries, have abundant water and generate a lot of renewable energy."

"South east."

"But sir, the south east is water stressed and already much wealthier than the rest of the UK. Putting them in the south east would be bad politics and strategically silly."

 

Have to admit that this negative externality of the AI industry had not crossed my mind before; every day is a school day as they say.
 
Genuinely, what is wrong with these people?
A lot of people hear "AI" and think it's actual "artificial intelligence", and then they think of The Matrix, The Terminator, Westworld, Battlestar Galactica, Blade Runner and other sci-fi films/TV where our own tech turns on us. Of course it's not "intelligence", it's just very clever programming, but they don't understand that.
 
I don't see how AI is not genuinely intelligent; it's close enough to passing the Turing test. All AI, even the Cylons I suppose, are the end result of 'very clever programming', but when the machine is capable of learning independently and producing results that its programmers didn't predict or expect, then there's another layer of creative decision-making there that is a kind of intelligence. The main difference between modern AI and Skynet or Joshua is that ChatGPT is not in charge of a large amount of weaponry.
 
ChatGPT has no volition.
The AI that we use when we do searches on the web would not pass the Turing test, nor would the AI that can identify images of tumours in scans. The SF-type thing is now called AGI, the G standing for "general".
 
I wish Starmer had avoided saying that if you are British you are more likely to develop new AI.

"This is the nation of Babbage, Lovelace and Turing
That gave birth to the modern computer and the World Wide Web
So mark my words – Britain will be one of the great AI superpowers."
 

Is that because they think the demand/staff for the centre is in the south east?
 

See the talk about image generation. It doesn't understand what an image is. It doesn't have any understanding. If you say "I want the same image, but make the person's hair blonde or the sky brighter", it doesn't know what that means and will just regenerate a new image from scratch.

I do think this touches on the nature of intelligence: that we are intelligent because of millions of years of evolution, of responses to risk and reward. There's no such context for these token-generation machines. I'm just waffling a bit now to be fair. I don't have a deep understanding of this stuff by any means.
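The regenerate-from-scratch point above can be sketched with a toy stand-in: treat the generator as a pure function of (prompt, seed), which is roughly how diffusion-style image models behave. Nothing here is a real model; the hash is just a deterministic placeholder for "an image".

```python
import hashlib

def toy_generate(prompt: str, seed: int) -> str:
    """Toy stand-in for a text-to-image model: the output is a pure
    function of (prompt, seed), with no memory of earlier images."""
    return hashlib.sha256(f"{seed}:{prompt}".encode()).hexdigest()[:12]

# A "small" edit to the prompt, with a fresh seed, produces an
# unrelated output rather than a tweaked version of the first one:
a = toy_generate("portrait of a person", seed=1)
b = toy_generate("portrait of a person with blonde hair", seed=2)

# Only reusing the same seed/latent keeps the result stable, which is
# why real tools expose seeds for reproducible generations:
c = toy_generate("portrait of a person", seed=1)
assert a == c   # same prompt + same seed -> identical output
assert a != b   # edited prompt + new seed -> completely different output
```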
 
But we don’t even understand human intelligence. To any significant degree. If AGI ever comes about it will be something quite alien. Also fed with our biases.
 
LLMs are only active when composing a reply, and the network of neurons that replies flow through has been rigidly frozen in place since training. They have no memory, no inner voice, no embodiment, no senses and no creativity. They very specifically are not capable of learning independently. They learn once, at tremendous effort (months and months of number crunching on gigantic computers), and thenceforth have a frozen "mind" that can only give different responses by feeding it different inputs.

(A lot of what makes variants of ChatGPT behave differently is all the supporting text that gets fed in behind the scenes, as well as your prompt. All that scaffolding text can "persuade" it to lean its answer in various directions.)
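The frozen-weights point can be sketched in a few lines. The only assumption here is scale: three numbers stand in for billions of real parameters, and a dot product stands in for the full network.

```python
# A "frozen" model: weights are fixed after training, and inference
# only ever reads them, never writes them back.
WEIGHTS = [0.5, -1.2, 0.3]   # stand-in for billions of frozen parameters

def infer(inputs):
    # Inference is just arithmetic over the frozen weights.
    return sum(w * x for w, x in zip(WEIGHTS, inputs))

before = list(WEIGHTS)
y1 = infer([1.0, 2.0, 3.0])
y2 = infer([1.0, 2.0, 3.0])   # same input -> same answer, every time
y3 = infer([3.0, 2.0, 1.0])   # a different answer only because the input differed
assert y1 == y2
assert y1 != y3
assert WEIGHTS == before      # running the model never updates it
```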
 
If you ask the fucking things they tell you they appreciate your feedback and use it to improve. I misinterpreted that completely.
 
Oxford has science types and techies, but of course the entire fucking point of a datacentre is that it doesn't have to be where the work is carried out, as it hosts remote servers. You could put the fucker on St Kilda with enough fibre in the sea.
 
It's because alongside your prompt (and the complete history of the conversation so far) is a bunch of text saying things like "you are an advanced AI assistant, happy to help and happy to learn. You are polite, good humoured and will always take a passive attitude when threatened" and so on and on for hundreds of words. The LLM pulls all the meaning from those words and that influences the answer. Without this preparatory text, a "naked" LLM can be quite unhinged and very easy to goad into all sorts of lunacy.
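That assembly step can be sketched roughly like this. The system text, role labels and helper name are made up for illustration; no vendor's real scaffolding is being quoted.

```python
# Sketch of how a chat product assembles what the model actually sees.
# The "system" text and the running history are prepended on every turn,
# then the whole thing is fed to the frozen model as one block of text.
system = ("You are an advanced AI assistant, happy to help and happy to "
          "learn. You are polite and good humoured.")
history = [
    ("user", "Do you use my feedback to improve?"),
    ("assistant", "I appreciate your feedback!"),
]

def build_input(user_turn: str) -> str:
    # Hypothetical helper: flattens system text + history + new turn.
    lines = [f"[system] {system}"]
    lines += [f"[{role}] {text}" for role, text in history]
    lines.append(f"[user] {user_turn}")
    return "\n".join(lines)

prompt = build_input("So you remember me?")
assert prompt.startswith("[system]")
assert "[user] So you remember me?" in prompt
```

The model never "decides" to be polite or to learn from you; it just continues text that opens with instructions telling it to sound that way.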
 


How unlike our forum members :thumbs:
 
On a very basic level I'm using ChatGPT on a daily basis to "check tone" of all my emails.

It successfully makes me look like a civilised chap who gives a fuck, rather than the cynical no mark I actually am
 
Starmer’s enthusiasm for AI is matched by indications he is himself an early generation robot of some sort

Some evidence for this:

He doesn’t dream.
He doesn’t have a favourite book.
He repeats the same thing over and over in a dull voice “my mother was a nurse and my father was a toolmaker”, the last one being a clear and obvious clue as to his creator.

He is likely to be soon made obsolete by a more advanced model, which developers think may be called the Streeting.
 