
Artificial Intelligence Developments (ChatGPT etc)

I saw a TV advert for Mondo (bank) that's clearly all AI. First I've seen so far

There are loads on YT which are using text to speech. I wouldn't be surprised if the blurb is written by an LLM and the music by suno.ai or similar. That one for the jet wash thing is most obvious. The cost to produce this kack is virtually nothing.

(I haven't got adblocker on phone / PC.)
 
Not just adverts. I watched a video with information about kestrels yesterday, all AI text to speech. I wouldn't mind, but it was still jarring.
 
Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated. Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.

Calling chatbot inaccuracies ‘hallucinations’ feeds into overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.

Oh my god. Feed it into my fucking veins. That is exactly it.
 
I would say that even to describe what they're doing as bullshitting overstates what they're capable of. Even bullshitting suggests some awareness of what one is saying.
 
The paper does deal with that. It calls ChatGPT a "bullshit machine" that is used to generate bullshit. Then there is a discussion about whether the bullshitter is the creator of the machine, the machine itself or the person who uses the machine. But the point is that regardless of how you classify it ontologically, the result is "bullshit", not "hallucination" or "confabulation".
 
There's also the distinction between hard and soft bullshit, i.e. with the intention to deceive the audience or not. They argue that possibly there is an intention to deceive, as it is trying to be "humanlike" and appear as if it has an agenda when it does not. That, in itself, is bullshit and possibly changes the nature of the bullshit produced (but this depends on some contentious definitions).
 
I think it was Signal 11 who introduced me to the term 'next token predictor' with an article they posted. I am using these things for a few genuinely beneficial tasks, but the hype and the faulty notion that these systems have some kind of understanding are frustrating.
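
For anyone who hasn't met the term, here's a toy sketch of what a 'next token predictor' loop boils down to. This is hugely simplified: the little table of counts stands in for the actual model, and all the names and example text are made up for illustration.

```python
import random

# Toy next-token predictor: the "model" here is just a table of counts,
# standing in for the neural network that scores candidate next tokens.
# The loop is the important bit: score, pick, append, repeat.
counts = {
    ("the", "cat"): {"sat": 3, "slept": 1},
    ("cat", "sat"): {"on": 4},
    ("sat", "on"): {"the": 4},
    ("on", "the"): {"mat": 2, "sofa": 2},
}

def next_token(context):
    # Look at the last two tokens and sample a continuation in proportion
    # to how often it followed them; no meaning, no intent, just statistics.
    options = counts.get(tuple(context[-2:]), {"<end>": 1})
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<end>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the cat"))  # e.g. "the cat sat on the mat"
```

Nothing in that loop knows or believes anything; it just keeps picking a statistically plausible next word, which is why "bullshit machine" fits better than "hallucination".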
 
I remember, back in the 80s, coming across a program called Eliza (iirc) that a computer programmer at a U.S. university had written to mimic a psychiatrist chat session. It was very successful with students who found it on the uni network, and it would often be used in the middle of the night by students to get things off their mind or work through them. It was very simple programming and I wrote one myself by following an article in a mag. All it did was scan your input text, change I to YOU, reverse some verbs and throw your sentence back at you... if it couldn't find a match it would pick a standard phrase at random (like "interesting, do continue"). It was surprisingly 'real'
e.g.
hello Eliza, I feel sad today
Why do you feel sad?
My cat died
interesting, do continue
I loved that cat
why did you loved that cat?
she was my best friend
tell me more about your best friend
etc
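
For anyone curious, a minimal sketch of that pattern-swap trick might look something like this. It's a rough Python illustration of the idea, not the original magazine listing; the triggers and stock phrases are just examples.

```python
import random
import re

# Crude Eliza-style responder: swap pronouns, bounce the sentence back,
# and fall back to a stock phrase when nothing matches.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i", "your": "my"}
STOCK = ["Interesting, do continue.", "Tell me more.", "How does that make you feel?"]

def reflect(text):
    # Swap first/second person words so the sentence can be echoed back.
    words = text.lower().strip(" .!?").split()
    return " ".join(SWAPS.get(w, w) for w in words)

def respond(text):
    # A couple of trigger patterns; anything else gets a canned prompt.
    m = re.search(r"\bi feel (.+)", text, re.IGNORECASE)
    if m:
        return f"Why do you feel {m.group(1).rstrip(' .!?')}?"
    m = re.search(r"\bi (loved|hated|want|need) (.+)", text, re.IGNORECASE)
    if m:
        return f"Why did you {m.group(1)} {reflect(m.group(2))}?"
    return random.choice(STOCK)

print(respond("hello Eliza, I feel sad today"))  # Why do you feel sad today?
print(respond("I loved that cat"))               # Why did you loved that cat?
print(respond("My cat died"))                    # a random stock phrase
```

Crude as it is, it even reproduces the "why did you loved that cat?" slip from the exchange above.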

Anyway... there was no intelligence there, but we (subconsciously) attribute intelligence to it, and it mimics a real conversation (albeit a stiff one)

I think the likes of ChatGPT is like that, but much fancier. The conversation and answers look like it's thinking (or bluffing, or bullshitting, etc.) but that's just us applying our intelligence and looking for patterns or reasons why B must follow A. A bit like how we say fire is alive. It lives, it feeds, it moves. It even seems to think. And when we can't understand why it does something, we say that it's trying to outsmart us or it's just being mysterious.

I think true intelligence is when a question is answered that was never asked, i.e. a leap between two unconnected 'ideas'. So we're not there... yet
 
This blog looks promising.


This made me laugh out loud.

Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused.

We had to take three days off everything to work on AI projects for the company. It was suggested that we watch the video "AI 101: what is AI?" before we did this.
 
I think I'm so over AI-generated music already. As someone commented in a chat at work the other day: surely we want AI (yeah, I know, but it's the term everyone's using) to do the boring drudge work, so we have more time for creativity, music, art etc., not the other way round, FFS.
I guess the optimistic view is that's where we'll end up; it just takes some of the more showy stuff to get everyone's attention. I mean, Ableton would seem like "AI" to people making music with instruments in the 60s etc.
Every iPhone launch for years has shown a load of crazy features that 99% of people don't use, but they help progress the more generic stuff that everyone does use.
 
Interesting read…

Findings In this cross-sectional study of 195 randomly drawn patient questions from a social media forum, a team of licensed health care professionals compared physician’s and chatbot’s responses to patient’s questions asked publicly on a public social media forum. The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.

 
That’s great and all, but the whole point of the bullshit paper above is that you’d never want to go live with something like that, because an LLM is designed to output bullshit, not truth or useful advice. So, no matter how often it provides some people with something useful, there’s nothing stopping it telling the next one to drink bleach. In a very authoritative manner, with lots of “citations”.
 
Assuming AI becomes more intelligent than humans.

Is there a parallel in nature of a more intelligent species existing next to a lesser species?
 
Erm... us?
:) Actually, as I was writing that question I was thinking "what example, wtf" etc., but yes, humans are an example. We are more intelligent and, where other animals are concerned, we just do whatever we like. Would it be like that with intelligent AI, I wonder?
 
So I've delved into AI investing. It all seems legit. Looks like you just need to sit there and watch the investment grow while the AI does its thing; it's investing in crypto, I think. Minimum of $250. I have to admit I've got some raised eyebrows, but they're very convincing if it is a scam. Anyone else done this? Apparently the average return last month was 42%, hence my eyebrows. I can cover the 250 so will see how it goes.

If AI puts Wall Street traders out of a job then all good.

 
This immediately sounds like bullshit!
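
A rough compounding check, just plugging the numbers from the post above into a few lines, shows the scale of that claim:

```python
# Rough sanity check: what a genuine 42% *monthly* return would compound to.
deposit = 250.0
monthly_return = 0.42

balance = deposit
for month in range(1, 13):
    balance *= 1 + monthly_return
    print(f"month {month:2d}: ${balance:,.2f}")

# After 12 months the $250 would be roughly $16,800 (about 67x),
# and after 24 months around $1.1 million, which is why no real
# fund advertises anything like it.
```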
 
They were very insistent I use Kraken for withdrawals, but yeah, they did spend an hour talking me through everything and didn't sound scammy. But I'm a born optimist. I'll report back ;)

I mean, they did suggest I withdraw 150 or so while I was on the phone with them, to check they were kosher, when I raised my concerns. Which I haven't done. Let's see how the 250 goes.
 
sigh
 