
AI - Starmer's 50 point plan to be released (13/1/25)

I think there's a basic naming issue. When people say AI they can mean anything from a chatbot to something identifying symptoms and diagnosing people, or "improving" pictures on your phone, or whatever that pothole thing is that they're talking about.

Basically people use it for anything "clever", because there's loads of funding in it, cos loads of people like Starmer don't know what it means and they're all desperate for something to come along and fix everything and make the line go up again.

And, once again, they've confused "good for the economy" with "good for businesses". If 2 million people lose their jobs then that isn't good for the economy.
 
Basically people use it for anything "clever"
Yep, and it's been deliberate obfuscation on the part of Big Tech, because saying "large language models which are really good at certain things but systemically unreliable in others and very definitely not even approaching human-like levels of reasoning or flexible problem solving" is not great for marketing to gullible politicians.

I mean fuck can you imagine how these things are going to struggle with implementing the constantly shifting, contradictory, self-defeating and cynical machinations of Westminster policy at ground level? Fudged data, 20-year-old computers running key departmental systems on Windows 95, the fact legislation can't possibly account for every edge case you come across etc etc? The mind boggles.
 
Computers are useful, I think we can all agree.
Neural networks are useful. Image recognition, live translation etc.
Large Language Models can be useful when there is a mountain range of good data to train on. I've heard enough programmers say with a straight face that it's revolutionised their work to believe them. Like having a second programmer to bounce ideas off or spit out boring bits of code, a constant buddy. Sounds great, and it's all thanks to the wealth of existing code and discussion of that code.

But the way it works is so horrifyingly inefficient it's like a bad joke. Constantly feeding the same data through the network to get the next token, taking kilowatts to do what a small part of the human brain does with microwatts. This power usage, and paying back the even more tremendous power cost of training, makes them fundamentally uneconomic for the mass market. They've run out of general purpose data to train them on so the capabilities can't improve, the costs aren't going down because the kW/FLOPS is limited by physics, and the general public just aren't willing to actually pay for what they can do.
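
If you've never seen the loop spelled out, it really is that brute-force. Here's a toy Python sketch; the little lookup table is a stand-in for the billions of weights a real model would run, and all the names are made up for illustration:

Code:
import random

# Toy stand-in for an LLM's "what tends to come next" knowledge. A real
# model encodes this in billions of weights rather than a dict.
NEXT = {"the": ["cat", "dog"], "cat": ["sat"], "sat": ["down"]}

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        # The key point: one whole pass through the model buys you ONE
        # token. A real transformer re-reads the entire sequence so far
        # (KV caching softens this, but every token still costs a full
        # pass through all the layers), which is where the kilowatts go.
        next_token = random.choice(NEXT.get(tokens[-1], ["<eos>"]))
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down"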

Meanwhile the tech giants have sunk hundreds of billions into hardware and training, with nothing world-changing to show for it. It's a bubble, it will pop, and it will be ugly for those giants when it does. This is a foolish move by the govt.
 
Computers are useful, I think we can all agree.
Neural networks are useful. Image recognition, live translation etc.
Large Language Models can be useful when there is a mountain range of good data to train on. I've heard enough programmers say with a straight face that it's revolutionised their work to believe them. Like having a second programmer to bounce ideas off or spit out boring bits of code, a constant buddy. Sounds great, and it's all thanks to the wealth of existing code and discussion of that code.

But the way it works is so horrifyingly inefficient it's like a bad joke. Constantly feeding the same data through the network to get the next token, taking kilowatts to do what a small part of the human brain does with microwatts. This power usage, and paying back the even more tremendous power cost of training, makes them fundamentally uneconomic for the mass market. They've run out of general purpose data to train them on so the capabilities can't improve, the costs aren't going down because the kW/FLOPS is limited by physics, and the general public just aren't willing to actually pay for what they can do.

Meanwhile the tech giants have sunk hundreds of billions into hardware and training, with nothing world-changing to show for it. It's a bubble, it will pop, and it will be ugly for those giants when it does. This is a foolish move by the govt.
This x100 essentially. When I was at uni back in the early 90s one of our lecturers had a thing for expert systems. To my mind, what is currently being touted as AI is pretty much a jumped-up version of that with lots more processing power available. Imo we are a long way from genuine AI, and by that I mean a true general-purpose AI.
 
A really good example of the sheer level of inefficiency is if you watch AI try to recreate Minecraft. And I do mean recreate, over and over and over again. Because every time you move in the game it's generating the next frame from scratch, guessing based on a sample. Every time.

The player's comment about "no object permanence" here is spot on, and absolutely destroys its use value in any number of fields.
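
To make the "no object permanence" point concrete, here's a toy Python contrast (all names invented for the example): a normal engine mutates persistent world state, while a generative "world model" guesses each frame from only its last few frames.

Code:
class Engine:
    def __init__(self):
        self.blocks = set()              # persistent world state

    def step(self, action):
        if action.startswith("place "):
            self.blocks.add(action[6:])  # a placed block stays placed
        return sorted(self.blocks)       # "render" from real state

class Generative:
    def __init__(self, window=2):
        self.recent = []                 # only the last few "frames"
        self.window = window

    def step(self, action):
        # Next frame is guessed from self.recent alone: a block you
        # placed three frames ago is no longer in the evidence at all.
        self.recent = (self.recent + [action])[-self.window:]
        return self.recent

eng, gen = Engine(), Generative()
for a in ["place tower", "look left", "look right", "look back"]:
    eng.step(a)
    gen.step(a)
print(eng.step("noop"))  # ['tower'] - still there
print(gen.step("noop"))  # the tower has fallen out of the window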

 
Yeah, I don't know much technical detail about AI either, but I suspect it's the new Bitcoin.

No, because it has a use. It's rather over-hyped at the moment. I suspect in a couple of years we will stop using the term (as it's not actually AI), but it will just be built in to everything we use. You won't think of it as AI; it will be a smarter search, a better way to analyse large amounts of data, etc.
 
I think the reports are that at a consumer level people really don't give a fuck. You'll have noticed how nearly every computer, phone or washing machine now has something in the jargon about AI, but that's not really cutting through and tempting people into the products. The phone industry probably needed a new buzzword; phones have reached such a high standard now, where else can they go?

In the background there probably are ways it can be useful, but I don't know enough about it. I'm all for resisting it when it's threatening jobs though.


Every single financial thing I've seen on it has pretty much pointed out that currently it seems to be a bubble. The owners of ChatGPT have had serious problems, and so have many other companies, when figuring out how to scale it up. It is a bit concerning that this seems to be the best the government can come up with, but as above I don't really know enough.

I expect the people who make AI tools face the problem of having their work ripped off by others without credit or payment. But then that's what AI does to everyone else so tough shit for them I guess.
 
Computers are useful, I think we can all agree.
Neural networks are useful. Image recognition, live translation etc.
Large Language Models can be useful when there is a mountain range of good data to train on. I've heard enough programmers say with a straight face that it's revolutionised their work to believe them. Like having a second programmer to bounce ideas off or spit out boring bits of code, a constant buddy. Sounds great, and it's all thanks to the wealth of existing code and discussion of that code.

But the way it works is so horrifyingly inefficient it's like a bad joke. Constantly feeding the same data through the network to get the next token, taking kilowatts to do what a small part of the human brain does with microwatts. This power usage, and paying back the even more tremendous power cost of training, makes them fundamentally uneconomic for the mass market. They've run out of general purpose data to train them on so the capabilities can't improve, the costs aren't going down because the kW/FLOPS is limited by physics, and the general public just aren't willing to actually pay for what they can do.

Meanwhile the tech giants have sunk hundreds of billions into hardware and training, with nothing world-changing to show for it. It's a bubble, it will pop, and it will be ugly for those giants when it does. This is a foolish move by the govt.

That's why OpenAI are charging $200 PCM for their Pro service. :D Claw something back before the penny drops.
 
There's a quite useful breakdown on Adam Conover's show across two episodes which goes into depth about both the market situation driving Big Tech's AI push and the broader state of things for those with a bit of podcast time on their hands. Zitron is a bit over-bearish about predicting burst bubbles but the base analysis seems pretty sound.





(That'll be it from me on YouTube links)
 
This x100 essentially. When I was at uni back in the early 90s one of our lecturers had a thing for expert systems. To my mind, what is currently being touted as AI is pretty much a jumped-up version of that with lots more processing power available. Imo we are a long way from genuine AI, and by that I mean a true general-purpose AI.
Nah, LLMs are nothing like Expert Systems. That was a pre-neural-network approach where scientists thought that all of human knowledge could be encoded in a systematic way with logical rules, categories, relationships etc. (CAT is a subset of MAMMAL, CAT chases MOUSE, CAT is DOMESTICATED and so on). Big efforts were made and there are still some huge ES databases out there.
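
For anyone who never ran into one, an expert system was literally hand-typed facts plus hand-typed if-then rules, applied over and over until nothing new falls out. A toy Python sketch:

Code:
# Toy expert system: explicit facts, one explicit rule, forward chaining.
# Every scrap of "knowledge" was typed in by a human, which is exactly
# why the approach hit a wall.
facts = {("cat", "is_a", "mammal"),
         ("mammal", "is_a", "animal"),
         ("cat", "chases", "mouse")}

def transitive_is_a(facts):
    # IF X is_a Y AND Y is_a Z THEN X is_a Z
    new = set()
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if r1 == r2 == "is_a" and y == y2:
                new.add((x, "is_a", z))
    return new - facts

while True:                  # keep applying rules until nothing new
    derived = transitive_is_a(facts)
    if not derived:
        break
    facts |= derived

print(("cat", "is_a", "animal") in facts)  # True - derived, not typed in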

LLMs find their own relationships between words and word fragments inside a very high-dimensional space. Places and directions in this space have "meaning" but it all emerges by random association during training, represented by a sea of numbers. It's completely opaque to an outside observer and critically cannot self-modify. If you want to truly encode new information in an LLM, you have to add that information to the training data in sufficient volume for it to be heard over the noise, and re-train.
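
To give a flavour of that "sea of numbers": every token becomes a vector, and relatedness is just geometric nearness. A toy Python sketch with made-up 3-d vectors (a real LLM learns vectors with thousands of dimensions, and nobody assigns them by hand):

Code:
import math

# Invented 3-d "embeddings" purely for illustration.
embedding = {
    "cat":    [0.9, 0.1, 0.3],
    "kitten": [0.8, 0.2, 0.35],
    "tax":    [0.1, 0.9, 0.7],
}

def cosine(a, b):
    # Angle between two vectors: the model's only notion of relatedness.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

print(cosine(embedding["cat"], embedding["kitten"]))  # ~0.99, "near"
print(cosine(embedding["cat"], embedding["tax"]))     # ~0.36, "far"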

I think this is the best short explainer for how they work:

(with tons more detail in his other videos if you're curious)

Some other approach is needed for a truly useful learning AI. Something that doesn't have separate training data and input data, so that it can genuinely learn and adapt.

analyse large amounts of data etc
But LLMs are only any good at information that resembles the information they've been trained on. Get outside the big hump in the middle of the normal distribution and they get much more loosey-goosey, turning into bullshitters. So if your job is just moving boring, non-novel information around and making powerpoints about it, a) an LLM can probably do it and b) it's probably a Bullshit Job (in the Graeber sense).
 
A really good example of the sheer level of inefficiency is if you watch AI try to recreate Minecraft. And I do mean recreate, over and over and over again. Because every time you move in the game it's generating the next frame from scratch, guessing based on a sample. Every time.

The player's comment about "no object permanence" here is spot on, and absolutely destroys its use value in any number of fields.


Yeah that was brilliant.
 
But LLMs are only any good at information that resembles the information they've been trained on. Get outside the big hump in the middle of the normal distribution and they get much more loosey-goosey, turning into bullshitters. So if your job is just moving boring, non-novel information around and making powerpoints about it, a) an LLM can probably do it and b) it's probably a Bullshit Job (in the Graeber sense).

What I'm curious to see is how good it is for querying a SIEM which is plugged into thousands of endpoints, servers and networking gear. So maybe not novel. But potentially quite useful. We'll see anyway. It's not installed yet.
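
If it does get used that way, I'd guess the sane shape is the LLM drafting queries in the SIEM's search language and a human approving them before anything runs. A hypothetical Python sketch (ask_llm and run_search are placeholders, not any real product's API):

Code:
PROMPT = ("Translate the analyst's question into a search query for "
          "our SIEM. Return only the query.\nQuestion: {q}")

def draft_query(question, ask_llm):
    # The model only DRAFTS the query; it never touches the logs itself.
    return ask_llm(PROMPT.format(q=question))

def investigate(question, ask_llm, run_search, approve):
    query = draft_query(question, ask_llm)
    if not approve(query):   # human in the loop, because the model will
        return None          # bullshit confidently on unusual questions
    return run_search(query)

# e.g. investigate("failed logins from new countries this week",
#                  ask_llm=my_llm, run_search=my_siem.search,
#                  approve=lambda q: input(f"Run {q!r}? [y/N] ") == "y")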
 
Or if the damn thing could look at everyone's calendars and tell me when I can get all the people I need in the same room, that would be quite handy.
You Are All Available On Friday At 4.45pm. I Have Scheduled Your Meeting And Made It Mandatory.

"I'm ill"

Based On Recent Workplace Studies This Is Not Accurate. Your Role Has Been Terminated.
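
To be fair, the non-dystopian half of that wish doesn't need AI at all: finding a slot when everyone is free is plain interval arithmetic. A minimal Python sketch (hours as numbers for simplicity):

Code:
def first_common_slot(busy_lists, day_start, day_end, length):
    # Merge everyone's busy intervals into one sorted list, then walk
    # through looking for the first gap big enough for the meeting.
    busy = sorted(iv for person in busy_lists for iv in person)
    cursor = day_start
    for start, end in busy:
        if start - cursor >= length:     # a gap before this busy block
            return (cursor, cursor + length)
        cursor = max(cursor, end)
    if day_end - cursor >= length:       # free time at the end of day
        return (cursor, cursor + length)
    return None                          # no mutual slot today

alice = [(9, 12), (13, 16)]
bob   = [(9, 10), (11, 15)]
print(first_common_slot([alice, bob], 9, 17, 0.75))  # (16, 16.75)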
 
Reading this thread has eased my stress a little so thanks to the u75 boffins.

I also read something about how the huge amounts of power needed for AI means some areas will have to build better infrastructure that can generate and carry that power. Apparently some Musk AI data centre is threatening to cause blackouts in Memphis. The UK has a more robust power network compared to the creaky US ones, but do we have the capacity for the extra energy?
 
Reading this thread has eased my stress a little so thanks to the u75 boffins.

I also read something about how the huge amounts of power needed for AI means some areas will have to build better infrastructure that can generate and carry that power. Apparently some Musk AI data centre is threatening to cause blackouts in Memphis. The UK has a more robust power network compared to the creaky US ones, but do we have the capacity for the extra energy?
Well, that's why they're talking about building "mini nuclear reactors".
 
It's all so mad.

Just so Google can tell me the contents of the top search result without me having to open the top search result.
 
There's a quite useful breakdown on Adam Conover's show across two episodes which goes into depth about both the market situation driving Big Tech's AI push and the broader state of things for those with a bit of podcast time on their hands. Zitron is a bit over-bearish about predicting burst bubbles but the base analysis seems pretty sound.





(That'll be it from me on YouTube links)

This looks interesting, but the two episodes together run to two and a half hours. Anyone fancy fleshing out Rob Ray's summary so I don't have to watch it?
 
This looks interesting, but the two episodes together run to two and a half hours. Anyone fancy fleshing out Rob Ray's summary so I don't have to watch it?
From the show notes:

1.
ChatGPT burns obscene amounts of cash daily with little return, Google's AI dispenses useless and sometimes dangerous advice, and a recent study showed that tech companies will soon run out of new training data to improve their AI models. If AI is really so costly, unreliable, and limited, what happens to the industry that has bet so big on it? This week, Adam talks with journalist and influential tech critic Ed Zitron of wheresyoured.at to discuss the impending burst of the AI bubble, the hubris of Silicon Valley, and how we suffer under big tech's "Rot Economy."

2.
Adam sits with Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton and co-authors of "AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference." Together, they break down everything from tech that's labeled as "AI" but really isn’t, to surprising cases where so-called "AI" is actually just low-paid human labor in disguise. Find Arvind and Sayash's book at factuallypod.com/books
 
This looks interesting, but the two episodes together run to two and a half hours. Anyone fancy fleshing out Rob Ray's summary so I don't have to watch it?
For the Zitron interview, the first half is all about Enshittification in tech (chasing Number Go Up while actively making the actual products worse).
The second half is about the AI bubble.

You could read Zitron instead of listening to him.
1st half: Never Forgive Them
2nd half: The Subprime AI Crisis

He doesn't half write a lot of words though so get a comfy chair!
 