> Is this really about starmer trying to get musk on side?

I doubt it, I think it's literally scrabbling around for a techno fix to the economy.
> Basically people use it for anything "clever"

Yep, and it's been deliberate obfuscation on the part of Big Tech, because saying "large language models which are really good at certain things but systemically unreliable in others, and very definitely not even approaching human-like levels of reasoning or flexible problem solving" is not great for marketing to gullible politicians.
> Computers are useful, I think we can all agree.

> Neural networks are useful. Image recognition, live translation etc.

> Large Language Models can be useful when there is a mountain range of good data to train on. I've heard enough programmers say with a straight face that it's revolutionised their work to believe them. Like having a second programmer to bounce ideas off or spit out boring bits of code, a constant buddy. Sounds great, and it's all thanks to the wealth of existing code and discussion of that code.

> But the way it works is so horrifyingly inefficient it's like a bad joke. Constantly feeding the same data through the network to get the next token, taking kilowatts to do what a small part of the human brain does with microwatts. This power usage, and paying back the even more tremendous power cost of training, makes them fundamentally uneconomic for the mass market. They've run out of general-purpose data to train on so the capabilities can't improve, the costs aren't going down because the energy per FLOP is limited by physics, and the general public just aren't willing to actually pay for what they can do.

> Meanwhile the tech giants have sunk hundreds of billions into hardware and training, with nothing world-changing to show for it. It's a bubble, it will pop, and it will be ugly for those giants when it does. This is a foolish move by the govt.

This x100 essentially. When I was at uni back in the early 90's one of our lecturers had a thing for expert systems. To my mind, what is currently being touted as AI is pretty much a jumped-up version of that with lots more processing power available. Imo we are a long way from genuine AI, and by that I mean a true general-purpose AI.
yeah I don't know much technical detail about AI either but I suspect it's the new bitcoin.
I think the reports are that at a consumer level people really don't give a fuck. You'll have noticed how nearly every computer, phone or washing machine will have something in the jargon about AI, but that's not really cutting through and tempting people into the products. The phone industry probably needed a new buzzword; phones have reached such a high standard now, where else can they go?
In the background there probably are ways it can be useful but I don't know enough about it. I'm all for resisting it when it's threatening jobs though.
Every single financial thing I've seen on it has pretty much pointed out that currently it seems to be a bubble. The owners of ChatGPT have had serious problems, and so have many other companies, when figuring out how to scale it up. It is a bit concerning that this seems to be the best the government can come up with, but as above I don't really know enough.
> What kind of work do you do?

I'm a lowly data admin/processor for a local council benefits section.
> This x100 essentially. When I was at uni back in the early 90's one of our lecturers had a thing for expert systems. To my mind, what is currently being touted as AI is pretty much a jumped-up version of that with lots more processing power available. Imo we are a long way from genuine AI and by that I mean a true general-purpose AI.

Nah, LLMs are nothing like Expert Systems. That was a pre-neural-network approach where scientists thought that all of human knowledge could be encoded in a systematic way with logical rules, categories, relationships etc. (CAT is a subset of MAMMAL, CAT chases MOUSE, CAT is DOMESTICATED and so on). Big efforts were made and there are still some huge ES databases out there.
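The flavour of those old Expert Systems is easy to show: hand-coded facts, hand-written IF-THEN rules, and an engine that applies the rules until nothing new can be derived (forward chaining). A minimal Python sketch, with made-up facts along the lines of the CAT/MAMMAL example:

```python
# Tiny expert-system flavour: hand-entered (subject, relation, object)
# facts plus one inference rule, applied to a fixed point.
facts = {("cat", "isa", "mammal"), ("mammal", "isa", "animal"),
         ("cat", "chases", "mouse")}

def isa_transitive(facts):
    """Rule: X isa Y and Y isa Z  =>  X isa Z."""
    new = set()
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if r1 == r2 == "isa" and y == y2:
                new.add((x, "isa", z))
    return new - facts

def forward_chain(facts):
    # Keep applying the rule until no new facts appear.
    while True:
        derived = isa_transitive(facts)
        if not derived:
            return facts
        facts = facts | derived

kb = forward_chain(facts)
print(("cat", "isa", "animal") in kb)   # derived, never hand-entered
```

Everything the system "knows" had to be typed in by a human expert, which is exactly why the approach hit a wall, and exactly how it differs from training a network on raw text.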
> analyse large amounts of data etc

But LLMs are only any good at information that resembles the information they've been trained on. Get outside the big hump in the middle of the normal distribution and they get much more loosey-goosey, turning into bullshitters. So if your job is just moving boring, non-novel information around and making powerpoints about it, a) an LLM can probably do it and b) it's probably a Bullshit Job (in the Graeber sense).
A really good example of the sheer level of inefficiency is if you watch AI try to recreate Minecraft. And I do mean recreate, over and over and over again. Because every time you move in the game it's generating the next frame from scratch, guessing based on a sample. Every time.
The player's comment about "no object permanence" here is spot on, and absolutely destroys its use value in any number of fields.
> Or if the damn thing could look at everyone's calendars and tell me when I can get all the people I need in the same room, that would be quite handy.

You Are All Available On Friday At 4.45pm. I Have Scheduled Your Meeting And Made It Mandatory.
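For what it's worth, the "find a slot everyone has free" wish is one of the more tractable asks, and needs no AI at all: it's just merging busy intervals and reading off the gaps. A minimal Python sketch with invented calendars (times are hours in one working day):

```python
# Find times when everyone is free: pool and merge all busy intervals,
# then the gaps between merged intervals are the common free slots.
def free_slots(busy_calendars, day_start=9, day_end=17):
    busy = sorted(iv for cal in busy_calendars for iv in cal)
    merged = []
    for start, end in busy:
        if merged and start <= merged[-1][1]:
            # overlaps/touches the previous busy block: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    # gaps between merged busy blocks, clipped to the working day
    slots, cursor = [], day_start
    for start, end in merged:
        if start > cursor:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        slots.append((cursor, day_end))
    return slots

# Hypothetical calendars, hours as numbers:
alice = [(9, 10), (13, 15)]
bob   = [(9.5, 11), (14, 16)]
cara  = [(12, 13)]
print(free_slots([alice, bob, cara]))   # [(11, 12), (16, 17)]
```

Scheduling assistants that do this have existed for years; the joke above is about what happens when the tool gets to pick the slot for you.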
Reading this thread has eased my stress a little so thanks to the u75 boffins.

I also read something about how the huge amounts of power needed for AI means some areas will have to build better infrastructure that can generate and carry that power. Apparently some Musk AI computing site is threatening to cause blackouts in Memphis. The UK has a more robust power network compared to the creaky US ones, but do we have the capacity for the extra energy?

Well, that's why they're talking about building "mini nuclear reactors".
> Tbf they enshittified the top results so thoroughly I can see why having an AI summariser seemed like it'd be useful.

I don't trust it. So I scroll through all the ads to check the actual results anyway.
There's a quite useful breakdown on Adam Conover's show across two episodes which goes into depth about both the market situation driving Big Tech's AI push and the broader state of things for those with a bit of podcast time on their hands. Zitron is a bit over-bearish about predicting burst bubbles but the base analysis seems pretty sound.
(That'll be it from me on YouTube links)
This looks interesting, but together the two episodes are two and a half hours long. Anyone fancy fleshing out Rob Ray's summary so I don't have to watch it?
> This looks interesting, but together the two episodes are two and a half hours long. Anyone fancy fleshing out Rob Ray's summary so I don't have to watch it?

From the show notes:
> This looks interesting, but together the two episodes are two and a half hours long. Anyone fancy fleshing out Rob Ray's summary so I don't have to watch it?

For the Zitron interview, the first half is all about Enshittification in tech (chasing Number Go Up while actively making the actual products worse).
As well as Zitron, this blog is worth a look: Pivot to AI

> "AI agents mean Salesforce won’t hire software engineers in 2025, apparently"