
Bad AI in the wild thread

Cloo

Banana for scale
Thought we should have a thread for the worst, most disturbing or plain batshit examples of AI output we've seen in the physical or digital world. These posters in my local homewares shop get worse the more you look at them. Sadly I managed to cut off the mutant tortoise with a single eye on its neck in the second one.

[two attached photos: 20240823_131940.jpg, 20240823_132320.jpg]
 
I'd buy it, just to return it because it didn't contain any fruit.
 
I happened to see a short video about AI struggling with how many Rs there are in the word "strawberry", so I thought I would ask the Google AI on my phone...
It misheard what I said, but the result was interesting nonetheless...
To be fair, I had recently asked it a serious question and got a useful answer...

[image: 1725264461389.png] <- Facebook's own AI has added this for some random reason ...

[screenshot: Screenshot_20240902-074550.png]
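The daft thing is that the task is a one-liner in ordinary code - the usual explanation is that LLMs work on tokens rather than individual letters, so counting characters is genuinely alien to them:

[CODE]
# Counting letters is trivial for ordinary code, which sees
# individual characters; an LLM sees whole tokens instead.
word = "strawberry"
print(word.count("r"))  # -> 3, no ambiguity once you can see the letters
[/CODE]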
 
Good thread. Don't have them now, but I loved some of the shocking AI images that were shared around St George's Day and the Euros.
 
I think this is probably the thread for this... a few weeks ago, my friend told me about getting this video recommended to her on Instagram and I thought "fucking hell, that is completely batshit". She recently managed to actually find it, and it really is every bit as good as she described:
 
I think I might have seen a 'joke' written by AI on one of those random accounts FB recommends, as it was so unfunny it was offensive. It was a 'headline' reading 'Woman furious after she spends £120 on Oasis ticket and is sent a bottle of Oasis'. And then, to make clear it was a hilarious joke, it was tagged #joke #funny #fun :facepalm:
 
"Large Models of What? Mistaking Engineering Achievements for Human Linguistic Agency" is a recent peer-reviewed paper that aims to take a look at how LLMs work, and examine how they compare with a scientific understanding of human language.

Amid "hyperbolic claims" that LLMs are capable of "understanding language" and are approaching artificial general intelligence (AGI), the GenAI industry – forecast to be worth $1.3 trillion over the next ten years – is often prone to misusing terms that are naturally applied to human beings, according to the paper by Abeba Birhane, an assistant professor at University College Dublin's School of Computer Science, and Marek McGann, a lecturer in psychology at Mary Immaculate College, Limerick, Ireland. The danger is that these terms become recalibrated and the use of words like "language" and "understanding" shift towards interactions with and between machines.


"Mistaking the impressive engineering achievements of LLMs for the mastering of human language, language understanding, and linguistic acts has dire implications for various forms of social participation, human agency, justice and policies surrounding them," argues the paper published in the peer-reviewed journal Language Sciences.

Quite interesting.
 
Had a presentation from one of the Google AI people yesterday; some very impressive demos of using Gemini to identify and interact with objects, locations etc. through the camera.
In the Q&A he said that one of the biggest issues is that GenAI output quality drops massively if it starts to ingest AI-generated content during training - something akin to the Mad Cow Disease thing of feeding animals on animals.
To avoid that, the content needs to be sanitised to remove AI-generated material, of which there is an increasing quantity out there on the net. Although some is easy for a human to spot, as demonstrated on this thread, getting reliable detection by machine is difficult and almost counterproductive - part of the point of using AI is to simulate human activity/output as closely as possible.
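You can get a feel for the feedback problem with a toy simulation - nothing to do with Google's actual setup, just a Gaussian standing in for the "model": each generation is fitted only to samples drawn from the previous generation, and the spread drifts away and collapses.

[CODE]
# Toy illustration of "feeding AI on AI": each generation fits a
# Gaussian to samples drawn from the previous generation's fit.
import random
import statistics

random.seed(42)
mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
N = 20                 # small training sets make the effect visible

for gen in range(201):
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
    # "train" the next generation only on the current model's output
    sample = [random.gauss(mu, sigma) for _ in range(N)]
    mu = statistics.fmean(sample)
    sigma = statistics.pstdev(sample)
[/CODE]

With small per-generation samples the fitted sigma tends towards zero, i.e. the "model" loses the diversity of the original data - the same loss of diversity reported for models trained on their own output.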
 
It's a bit of a paradox, isn't it? If AI can spot that a person has six fingers and an extra arm, and dismiss it as AI-generated shite, why the fuck can't it do that while it's creating the image :D
 
Was the demo actually real?

 
Well, it was recorded, so who knows. Could have been AI generated :)

It was someone walking around the Google office with their phone camera, asking the AI to identify something that makes a noise. When it highlighted a speaker on the desk, they asked what the smaller round part is called; it identified it as a tweeter and went on to describe what it does.
Then they went to the window and asked what neighbourhood it thought they were in, from the view - which it identified as King's Cross - though obviously it could have used location services for that. They did some other stuff: identifying crayons, making an alliteration about them, then asking where their glasses were, which were correctly located on a desk next to a red apple.
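For the record, you can poke at the same identify-what-the-camera-sees trick through the public API. This is just a sketch assuming the google-generativeai Python SDK, with a placeholder model name, a placeholder API key, and a still photo standing in for the demo's live camera feed:

[CODE]
# Sketch of asking Gemini about an image via the public SDK.
# Assumptions: google-generativeai installed, "gemini-1.5-flash"
# available, and "desk.jpg" is a frame showing the speaker.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

photo = Image.open("desk.jpg")
response = model.generate_content(
    [photo, "What is the smaller round part of this speaker called?"]
)
print(response.text)  # expect something about the tweeter
[/CODE]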
 
Okay, but I can do all that without having to point my phone at everything.
 
I just took a picture of an ESP32 camera module, uploaded the picture to ChatGPT, and asked it to write some code to make a 1920x1080 stream that's viewable from a web browser. From taking the photo to getting the code took less than a minute. Granted, the code didn't work ( :D ), but that was only because it didn't know the board revision and the GPIO pins were different. I guess it assumed it was the version it had been fed, but it corrected that once I told it which revision the board was. :)
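For anyone wondering what "a stream viewable from a web browser" actually involves: the usual trick is an HTTP response of type multipart/x-mixed-replace that keeps pushing JPEG frames. This sketch is illustrative only - not the code ChatGPT produced - and fakes the frames with Pillow, since on the real board the camera driver supplies the JPEGs:

[CODE]
# Minimal MJPEG-over-HTTP server: a browser pointed at it renders
# an endless stream of JPEG frames. Frames here are synthetic.
import io
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

from PIL import Image, ImageDraw  # pip install Pillow

class StreamHandler(BaseHTTPRequestHandler):
    """Serve an endless MJPEG stream on any GET request."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        n = 0
        try:
            while True:
                jpeg = make_frame(n)
                self.wfile.write(b"--frame\r\n")
                self.wfile.write(b"Content-Type: image/jpeg\r\n")
                self.wfile.write(f"Content-Length: {len(jpeg)}\r\n\r\n".encode())
                self.wfile.write(jpeg + b"\r\n")
                n += 1
                time.sleep(1 / 15)  # roughly 15 fps
        except (BrokenPipeError, ConnectionResetError):
            pass  # viewer closed the tab

def make_frame(n: int) -> bytes:
    """Render a placeholder 1920x1080 JPEG with a frame counter."""
    img = Image.new("RGB", (1920, 1080), (20, 20, 40))
    ImageDraw.Draw(img).text((40, 40), f"frame {n}", fill="white")
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    return buf.getvalue()

if __name__ == "__main__":
    # open http://localhost:8080/ in a browser to watch the stream
    HTTPServer(("0.0.0.0", 8080), StreamHandler).serve_forever()
[/CODE]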
 
GenAI output quality drops massively if it starts to ingest AI-generated content during training - something akin to the Mad Cow Disease thing of feeding animals on animals.
That's how you get to Shrimp Jesus I guess
 