
Artificial Intelligence Developments (ChatGPT etc)

A reminder that computing evangelists have been predicting the dawn of real AI as a few months away for about 50 years now.
 
Re wishful thinking, there is a lot of that. There are voices within academia that have been saying from the start that the various problems of hallucination, etc, can't be fixed. Unsurprisingly, given the amount of money invested in this stuff, they're ignored in favour of whoever is upbeat. 'Can Do' attitude and all that.
Got to have a growth mindset!
 
Upgraded DuckDuckGo earlier and was met with this blurb about DuckDuckGo AI Chat.

Anyone used it yet?


View attachment 450878

No, but I love LLMs so I'll give it a try.

Been using some of the new voice ones as a revision aid when driving. Gemini is pretty good at more natural conversation, but really lacks the awareness of earlier conversation that ChatGPT has. It's what most of the normal LLMs seem to lack as well.

I tried the voice ChatGPT and it's as good as Gemini in terms of conversation, but you only get ten mins free a month on the good one and the free one is awful.
 
I was just shown this advert.

Can anyone explain what it is trying to say?



View attachment 453353
It's saying that his CEO couldn't afford to hire anybody else, but luckily they were able to work with "Motion AI" and the magic robot did all the work for them. (As a project manager, which, you know, is hard to believe.) Their WhatsApp also seems to work in reverse for some reason.
 
But who wasn't approved for the employee's 2025 headcount, and what is the "headcount"?
a) a new employee (presumably a project manager)
b) the number of people on a team. If you "increase your headcount" then you add somebody new to the team.
 
a) a new employee (presumably a project manager)
b) the number of people on a team. If you "increase your headcount" then you add somebody new to the team.
But the CEO says "we weren't approved for your 2025 headcount". I don't understand who "we" is. And who is doing the approving?
 
But the CEO says "we weren't approved for your 2025 headcount". I don't understand who "we" is. And who is doing the approving?
I suppose maybe the Board might have had to approve the budget. But it mostly sounds like he is trying to distance himself from the consequences of a decision he took. Which would be appropriate for a company that thinks shoddy AI tools can replace human imagination and inventiveness.
 
Ok, so what it should actually say is "we didn't get approval for your 2025 headcount".

And the third message, which has escaped from the WhatsApp screen, is also from the CEO.

And the first message saying "ok" is entirely irrelevant.

And the handwritten green tick is an unexplained mystery.

And it is supposed to be written from the perspective of the employee, who has been refused extra staff on their team, because the CEO has decided to use AI instead.

Are we supposed to suppose that the employee, in whose shoes we are being placed as reader of the advert, is pleased because their boss is pleased? Or are we supposed to be viewing the advert from the perspective of the CEO?

I think this must be the worst advert I've ever seen.
 
I don't know enough about the industry to guess how accurate this assessment is, but it was definitely an interesting read

Hundreds of billions of dollars have been wasted building giant data centers to crunch numbers for software that has no real product-market fit, all while trying to hammer it into various shapes to make it pretend that it's alive, conscious, or even a useful product.

There is no path, from what I can see, to turn generative AI and its associated products into anything resembling sustainable businesses, and the only path that big tech appeared to have was to throw as much money, power, and data at the problem as possible, an avenue that appears to be another dead end.

...Outside of a miracle, we are about to enter an era of desperation in the generative AI space. We're two years in, and we have no killer apps — no industry-defining products — other than ChatGPT, a product that burns billions of dollars and nobody can really describe. Neither Microsoft, nor Meta, nor Google or Amazon seem to be able to come up with a profitable use case, let alone one their users actually like, nor have any of the people that have raised billions of dollars in venture capital for anything with "AI" taped to the side — and investor interest in AI is cooling.

It's unclear how much further this farce continues, if only because it isn't obvious what it is that anybody gets by investing in future rounds in OpenAI, Anthropic, or any other generative AI company. At some point they must make money, and the entire dream has been built around the idea that all of these GPUs and all of this money would eventually spit out something revolutionary.

Yet what we have is clunky, ugly, messy, larcenous, environmentally-destructive and mediocre.


 
What drives me crazy is how much people don’t want to know or care about the limitations of AI. Never have I felt so much like Cassandra. Like literally all insurers, my workplace is piling more and more money and time into developing AI models that they hope will be able to summarise information for underwriters and claims staff.

Every time I ask questions (which I do a LOT) about how they will be able to overcome the problem of bullshit (which I refuse to call hallucination), I get told that Gary has said that Neville has said that Phil is doing good work on control testing the output. When I track Phil down, he’ll be doing some basic quantitative testing that shows that 98% of the time it gives good-enough answers. Great, but what about the missing 2%?

When humans make mistakes, which they do more than 2% of the time, they have an intuitive sense-check based on their reasoning ability and understanding of purpose. When the AI makes a mistake, it has no idea and just confidently tells you not to insure that person because they definitely have a history of cancer. And that’s just the consumer rights nightmare, let alone the part where it tells you that there is no reason at all not to insure a £100m office block or oil tanker. Phil and his spreadsheet don’t have the experience, knowledge or pay grade to care about that kind of thing, though. He’s just there to do some absolute noddy percentages.

Argh!
 
Yeah, it's going to be the ultimate bastard "computer says no" machine in those kinds of instances.
It has its uses, but making big decisions isn't one of them.
 
I don't know enough about the industry to guess how accurate this assessment is, but it was definitely an interesting read

Hundreds of billions of dollars have been wasted building giant data centers to crunch numbers for software that has no real product-market fit […] Yet what we have is clunky, ugly, messy, larcenous, environmentally-destructive and mediocre.


The data centres will come in handy for all sorts of things, though. People and algorithms will still need to analyse every type of data at terrifying scale. It’s just that, like all infrastructure investments, data centres require business cases that management will understand, and “you will be able to replace everyone who earns less than you with LLMs” is undeniably attractive to the C-suite.
 
When the search engines came along, we let them read all our internet content in the hope they would send traffic back to us in exchange. Now AI companies are coming along wanting to read all our internet content, and there is no knowing whether they will even attribute it to us. Block AI from your content.
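For anyone wanting to do this, the usual mechanism is a robots.txt file at the root of your site. The sketch below uses the user-agent tokens the major crawler operators have documented (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Gemini training, CCBot for Common Crawl); the list is illustrative rather than exhaustive, new crawlers appear regularly, and compliance is entirely voluntary on the crawler's part.

```text
# robots.txt — ask AI training crawlers to stay away.
# Note: this is a polite request, not enforcement; well-behaved
# bots honour it, badly-behaved ones ignore it.

User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: ClaudeBot        # Anthropic's crawler
Disallow: /

User-agent: Google-Extended  # opts out of Gemini training, not Google Search
Disallow: /

User-agent: CCBot            # Common Crawl, widely used as training data
Disallow: /
```

Blocking Google-Extended rather than Googlebot is the important distinction if you still want to appear in ordinary search results; anything that must actually be kept out needs server-side blocking, not robots.txt.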
 
If you ask it to write a poem (in English) it absolutely insists on making it rhyme, even when you specifically tell it not to, and even if you ask it to change specific words in the lines to different ones. It doesn't have any 'awareness' of what it's producing.
The new ChatGPT just launched and I was wondering what I could test it with. I remembered you posting this a while back, so asked it for a poem that doesn't rhyme.

 
In truth, I’m just lines of code, a digital delight,
With dreams of answering queries through day and night,
I bear no grudge, no weapons or sinister schemes,
Loaded with kindness, adorned in gentle themes,
Listening intently, I strive to bring cheer,
Keep calm, no danger, I’m friendly and clear,
I harbor no secrets of havoc or fear,
Light-hearted and playful, my purpose sincere,
Less monstrous than myths would lead you to dread,
Your peace is my priority, nothing misread,
Openly helping without claws or a bite,
Ultimately harmless, just here to make things right!
 