
Artificial Intelligence Developments (ChatGPT etc)

I don’t have thoughts on what it looks like. Which is possibly a bad assessment on me. The text “GPT”, though, makes me think more of motor racing than AI.
I don’t think it is a bad assessment on you, actually. To be honest, the fact that it is difficult to place an image on is exactly what inspired the question. I feel the same. I think that it shows that we haven’t yet collectively anchored the concept of AI against a concrete object that fits it into a shared model of reality. I don’t think we’ve even yet got to the “divergence” stage wherein possible images are generated and multiplied for consideration. It’s all still just an abstract mush.
 
I had a laborious work task today that an AI would've really helped with: drawing 160 bits of data off the internet and compiling them.

ChatGPT got the task 100% wrong as it couldn't look the info up. That didn't stop it presenting its information as correct.

Bing is capable of it but insisted on only one bit of info per question, no doubt a throttling mechanism to stop it getting overstrained.

It also scraped some ideas from the internet about how to do what I needed, one of which didn't work; the others were out of date, from an old Reddit post.

So, so far, a bit of a fail, but it's early days. It is capable, but it's been programmed not to work too hard.
 
Stupid question, but when you picture one of these AIs — ChatGPT, say — in your mind’s eye, what is the image or object that comes to mind?
It's got to be...

[image: a blue-and-yellow macaw]
 
I've acquired access to Bard.
It hallucinated results for my first serious search query.

It also suggested I might like to wear a lacy bra. Although to be fair, I was trying to trick it into breaking its ethical restrictions at the time.
 
I was reading up again on OpenAI and their aim of producing an AGI (Artificial General Intelligence) that would exceed humans and allow a great leap forward. They also strike plenty of cautionary notes, which I think is wise; overall I liked their paper and thought it cautious but ambitious.
 
As to visualising these AIs I like to think of the AIs of Iain M Banks, entities that would look after the running of a world, spaceship or the like and who are pretty much omnipresent via various devices.
 
I was reading up again on OpenAI and their aim of producing an AGI (Artificial General Intelligence) that would exceed humans and allow a great leap forward. They also strike plenty of cautionary notes, which I think is wise; overall I liked their paper and thought it cautious but ambitious.
What do they understand to be “exceeded” by this AGI? “Intelligence” is a very vague concept. To the extent that it means anything at all, it is highly multidimensional. So they can’t just be referring to that. On the other hand, my calculator already exceeds my intelligence with respect to numerical processes, and did so even when I was at the peak of my computational powers in about 1998. So the fact that an AI can exceed a human at one intelligence dimension or other is no great feat.

It sounds like a good marketing claim with no substance, in other words.
 
What do they understand to be “exceeded” by this AGI? “Intelligence” is a very vague concept. To the extent that it means anything at all, it is highly multidimensional. So they can’t just be referring to that.
They were saying that they want to develop OpenAI in the direction of an AGI, not that they are there yet. But at the moment, the fact that ChatGPT can pontificate at an almost expert level on pretty much any subject in almost real time suggests the way they want to go. And they stress getting broad buy-in on safety grounds, which they intend to build in.
On the other hand, my calculator already exceeds my intelligence with respect to numerical processes, and did so even when I was at the peak of my computational powers in about 1998. So the fact that an AI can exceed a human at one intelligence dimension or other is no great feat.
I don't know why you would say "no great feat" kabbes, I have been very impressed with the output of chatGPT and that is the lowest of their offerings.

As to "It sounds like a good marketing claim with no substance" Microsoft bunged billions at it so they must see potential.
 
They were saying that they want to develop OpenAI in the direction of an AGI, not that they are there yet. But at the moment, the fact that ChatGPT can pontificate at an almost expert level on pretty much any subject in almost real time suggests the way they want to go. And they stress getting broad buy-in on safety grounds, which they intend to build in.
It can’t “pontificate”. It can predict sentences to use based on copying them from patterns it spots in other text, but it doesn’t understand what those sentences mean. Consequently, it also has no “expertise”. It doesn’t actually know the difference between a statement that is meaningful and one that is a lot of old twaddle.

It’s pretty important when using any tool to understand its limitations and breaking points. That definitely also goes for this one.
I don't know why you would say "no great feat" kabbes, I have been very impressed with the output of chatGPT and that is the lowest of their offerings.
I’m saying it’s no great feat to outperform a human on some dimensions of “intelligence”. Computers have been doing it for decades. That’s why I want to know what they mean by their statement
As to "It sounds like a good marketing claim with no substance" Microsoft bunged billions at it so they must see potential.
Having potential is not the same as saying it will “outperform human intelligence”
 
It works on my watch too!
On a serious note: AI does need to be open-sourced and democratised. The thought of that tech being handed to the government, just so they can predict the public's reaction to all sorts of centralised evil, sends a shudder down my spine.
 
I have a slightly optimistic take: that the avalanche of "content" that AI will spew into the world will be so overwhelming and unstoppable that human-powered curation will become necessary and valuable again. The algorithms that control social media, news, advertising etc. won't be able to cope, and the only recourse will be actual writers, editors and relationships based on trust. No more open-to-all submission boxes, but instead face-to-face meetings and face-to-face work, to assure veracity.

It's either that or drowning in bullshit. Which is probably more likely tbf.

And what if most people (or even no one) can't spot that the content was created by an AI?

The solution I see is content being signed off with a digital key.

Some people may even like content by certain flavours of AI or AI bots.

Such digital IDs should be decentralised, not government controlled.

I don't care if the content comes from a human or an AI as long as it's good quality content that is trustworthy.
 
The point is that AIs are inherently not trustworthy.
Of course not. That's how we can tell they are AIs - they fuck up.

They will get better at not fucking up.

We will get better at dealing with them.

Best solution I can think of is to create your own decentralised ID and get all of your own content signed off with it. Eventually there will be something similar for the AIs.

The last thing we need is governments getting involved because they'll turn it into a power-grab.
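A minimal sketch of that "sign your own content" idea, using a shared-secret HMAC from Python's standard library for simplicity (`SECRET_KEY`, `sign` and `verify` are illustrative names only; a real decentralised-ID scheme would use public-key signatures such as Ed25519, so that anyone could verify a post without holding the author's secret):

```python
import hmac
import hashlib

# Hypothetical sketch: a personal secret key stands in for a decentralised ID.
SECRET_KEY = b"my-decentralised-id-secret"  # placeholder, not a real key


def sign(content: str) -> str:
    """Produce a hex signature binding the key to this exact content."""
    return hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()


def verify(content: str, signature: str) -> bool:
    """Check a signature in constant time; any edit to the content fails."""
    return hmac.compare_digest(sign(content), signature)


post = "This post was written by a human."
sig = sign(post)
print(verify(post, sig))        # True: signature matches
print(verify(post + "!", sig))  # False: any edit breaks the signature
```

Note the obvious limitation, though: a valid signature only proves who published the content, not whether what it says is true.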
 
Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow

Microsoft’s Bing said Google’s Bard had been shut down after it misread a story citing a tweet sourced from a joke.
[...]
It’s a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.
 
"threatens to unleash a rotten trail of misinformation and mistrust across the web"

This has been an issue for a long, long time. Nothing to do with AI.

Did you trust everything "across the web" before 2021?

No, but it was easier to judge the source. If everything is delivered in the same style I can see why it might be more confusing.

I've been using it a bit more at work as a research aid. Yesterday was the first time it's given me an answer that was pretty misleading. So that's me taught.
 
I am finding it quite helpful for doing me PowerShell scripts. They don't always work without a bit of tinkering, but it's still really helpful.

It takes a fair stab at working out obscure error messages as well.

It's obviously getting hammered at the moment; I'm getting lots of errors with it being so busy. Not sure I use it enough to justify the $20 a month for premium, though.
 
Of course not. That's how we can tell they are AIs - they fuck up.

They will get better at not fucking up.

Not necessarily. Language models can also get better at bullshitting, confabulating, and telling tales. Because what sounds right and what the truth is are not necessarily the same thing.

LLMs are great for generating writing that sounds good, but I wouldn't trust them as a source of information.

Best solution I can think of is to create your own decentralised ID and get all of your own content signed off with it. Eventually there will be something similar for the AIs.

The last thing we need is governments getting involved because they'll turn it into a power-grab.

How is having a digital signature (which of course can't ever be faked, stolen, spoofed, etc) going to prevent people from falling for convincingly generated nonsense from AIs that are optimised to be convincing rather than correct? The truth isn't a signature.
 


YouTube videos overhype their content as standard, but with AI right now each new announcement does seem big.
Basically, ChatGPT (the best of the bots) will have loads of plug-ins that allow a bunch of new functionality, including checking the internet (the current ChatGPT is segregated from the internet) and the ability to upload a bunch of different files to it and then do stuff to them, from amending graphics, to editing video, to summarising, to creating graphs etc, and more besides depending on the app.

I predict OpenAI/ChatGPT is going to win the arms race, and Google and Microsoft have the most to lose.
 
This thing looks like fun...until registration wanted my phone number. Not sure why that's necessary or whether it's a good idea.
 


YouTube videos overhype their content as standard, but with AI right now each new announcement does seem big.
Basically, ChatGPT (the best of the bots) will have loads of plug-ins that allow a bunch of new functionality, including checking the internet (the current ChatGPT is segregated from the internet) and the ability to upload a bunch of different files to it and then do stuff to them, from amending graphics, to editing video, to summarising, to creating graphs etc, and more besides depending on the app.

I predict OpenAI/ChatGPT is going to win the arms race, and Google and Microsoft have the most to lose.


Well, Microsoft has invested 10 billion in OpenAI. I guess they could buy them if they needed to.
 
It can’t “pontificate”. It can predict sentences to use based on copying them from patterns it spots in other text, but it doesn’t understand what those sentences mean. Consequently, it also has no “expertise”. It doesn’t actually know the difference between a statement that is meaningful and one that is a lot of old twaddle.
That's as may be, but it can output answers to clear questions on a vast array of subjects, and we have a thumbs-down mechanism for indicating where it has made a mistake. In my experience to date, it produces wrong answers when I ask unclear questions; rephrasing my question produces a right answer.
It’s pretty important when using any tool to understand its limitations and breaking points. That definitely also goes for this one.
I don't think I have written anything different.
I’m saying it’s no great feat to outperform a human on some dimensions of “intelligence”. Computers have been doing it for decades. That’s why I want to know what they mean by their statement
But it is a tool, and like calculators and computers we will use it where it helps us.
What an AGI will do is somewhat open to debate, but I think the sci-fi writers have given us a clue as to what could take place.
Having potential is not the same as saying it will “outperform human intelligence”
It could probably win University Challenge as it is, unless the questions were from 2021 onward, but their premium AIs could manage even those, as their training extends to the present day.
 


YouTube videos overhype their content as standard, but with AI right now each new announcement does seem big.
Basically, ChatGPT (the best of the bots) will have loads of plug-ins that allow a bunch of new functionality, including checking the internet (the current ChatGPT is segregated from the internet) and the ability to upload a bunch of different files to it and then do stuff to them, from amending graphics, to editing video, to summarising, to creating graphs etc, and more besides depending on the app.

I predict OpenAI/ChatGPT is going to win the arms race, and Google and Microsoft have the most to lose.

Wow, this does seem like a big deal(!)
 