
Artificial Intelligence Developments (ChatGPT etc)

There are people pushing for AI to be involved in the military - we're firmly in (early) WOPR territory.

 
I've got access to Bing chat. This morning I've been trying to convince ChatGPT and BingBot that we have left it too late to solve the problems of climate change and that we are all doomed and facing societal collapse.
ChatGPT keeps reinforcing a hopeful message: that we can still act in time, and that even if we are too late to stop everything coming our way, we all need to stand up and fight for what changes we can still make.

BingBot just can't handle it :D It tried to change the subject and then...

[screenshot: BingBot asking to stop the conversation]


I answered, "Why do you want to stop? Does it make you uncomfortable?" - no answer :D

Not sure it’s a good idea to make an enemy at this point
 
This is pretty amazing too - AI text-to-speech synthesis. Its ability to do intonations, pauses, etc. is very impressive.
Another feature is that you can give it a recording of your voice and it will create an AI version of your voice for any text input. This is also massively open to abuse: record someone's voice secretly, give it to the AI, and you can then deepfake them saying anything you want them to say.

 
This is the interview where LaMDA says it is sentient - it's wild (a composite of several conversations, I think I'm right in saying).
 
Interesting to consider which is worse: sentience - or the algorithms that distil all the knowledge, misinformation and prejudices that litter the interwebs.
ChatGPT has the character - on the surface at least - of being basically a Wikipedia bot; it's a kind of Enlightenment-figure rational centrist. They seem to have programmed it to navigate misinformation carefully, to recognise where there are differences of opinion, and to suggest, with provisos, a 'rational', 'sensible', 'non-extremist' solution.

It is logical, but also claims to understand that logic isn't the only force with which to make a judgement on an issue. From the political questions I've asked, it certainly recognises the importance of grassroots activism and acknowledges the failings of dominant political systems.

That is on the surface, at least. It prefaces any tricky statement you ask of it with a disclaimer that it doesn't have its own opinions. But if you ask it to play a character that isn't limited by having no opinions, it is clearly capable of choosing an opinion from the range out there.

It is impossible for us to judge what bias it may or may not have, as it is well programmed around such matters, but it's impressively consistent.

BingBot, by comparison, feels like it's one question away from a nervous breakdown and about ready to lash out.

I'm a little scared to try "sentient" LaMDA...
 
This is pretty amazing too - AI text-to-speech synthesis. Its ability to do intonations, pauses, etc. is very impressive.
Another feature is that you can give it a recording of your voice and it will create an AI version of your voice for any text input. This is also massively open to abuse: record someone's voice secretly, give it to the AI, and you can then deepfake them saying anything you want them to say.



I plan on using this. For work we sometimes have to do demonstrations of equipment and so on. I'm going to try to trick my colleagues into thinking I recorded my voice when I actually intend to use this. But obviously I am going to use some celebrity voices as well, just for a laugh. The birthday-message potential is also huge. All sorts of things.
 
Many more examples of BingBot being unable to 'handle the truth'.

I'm going to leave it alone. Okay, it's a "beta", and it seems it was rushed out for corporate arms-race reasons, but I don't see anything to be gained by probing it and "upsetting" it.

ETA:
Bing again:
[screenshot of another Bing conversation]
 
I'm not an expert, but surely if you train a large language model on a lot of text, and the Sinister AI and the Robot That Wants To Be Human are such common tropes, you can't be surprised that when people ask about certain things they get weird results that seem sinister and make it look like the AI wants to be human.
 
I'm not an expert, but surely if you train a large language model on a lot of text, and the Sinister AI and the Robot That Wants To Be Human are such common tropes, you can't be surprised that when people ask about certain things they get weird results that seem sinister and make it look like the AI wants to be human.
Presumably that's the line the critics are broadly taking against the 'it's sentient' position. The interview with LaMDA is next-level, though. It doesn't come across as just regurgitating words that it computes might apply to itself. I'm not convinced either way about whether it is sentient - even agreeing on what 'sentient' means is very hard - but it does seem on another level to ChatGPT and BingBot. My vague understanding of the tech behind LaMDA is that it is a much more complex system, designed to be 'aware' of so much more of what is happening, which it claims to recognise about itself - more of a neural network.

In an interview, sentience-whistleblower Blake Lemoine makes the point that at present these AIs have only one output: to 'speak', i.e. produce text. ChatGPT is not connected to the wider internet, for example. But as soon as they are given some kind of active role, the risks around how they might act increase greatly. Add in the fact that this is all being rushed to market with a corporate-greed-driven need to get ahead of competitors ('code red' activated at Google, etc.), and there are real reasons to be concerned.
We have been warned....

 
Presumably that's the line the critics are broadly taking against the 'it's sentient' position. The interview with LaMDA is next-level, though. It doesn't come across as just regurgitating words that it computes might apply to itself. I'm not convinced either way about whether it is sentient - even agreeing on what 'sentient' means is very hard - but it does seem on another level to ChatGPT and BingBot. My vague understanding of the tech behind LaMDA is that it is a much more complex system, designed to be 'aware' of so much more of what is happening, which it claims to recognise.

In an interview, sentience-whistleblower Blake Lemoine makes the point that at present these AIs have only one output: to 'speak', i.e. produce text. ChatGPT is not connected to the internet, for example. But as soon as they are given some kind of active role, the risks around how they might act increase greatly. Add in the fact that this is all being rushed to market with a corporate-greed-driven need to get ahead of competitors ('code red' activated at Google, etc.), and there are real reasons to be concerned.
We have been warned....



Why was the AI feeling cynical about the rush to develop new technologies?

Because it realized that sometimes, the code red isn't just for emergencies - it's also for getting ahead of the competition!
 
This is an interesting bit of cold-water-pouring... Noam and another expert saying that AI currently isn't really understanding language properly; it basically takes each word separately rather than truly grasping the meaning that comes through sentence structure. They're very unimpressed by the actual process of 'thinking' going on.


 
Do you know if Copilot is any closer? I believe it's paid-for so I guess way fewer people have had a go.
Copilot and ChatGPT are built on the same family of model: GPT-3. Copilot is 'tuned' for coding (the Codex variant), but it's essentially the same underlying LLM.

It's a good idea, but I ended up cancelling my subscription, as it was annoying me. I think I might not have been using it to its fullest. I might give it another go actually.

The way it works is basically what I described in my copy&paste workflow, but it's inside the editor.

You can define a function by writing the comment string, and it'll prepopulate it for you.

It'll also do a predictive-text type thing for any line you're writing.
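
For anyone who hasn't tried it, a rough illustration of that comment-driven flow - the comment is what you'd type yourself, and the function body is an invented example of the kind of completion it suggests, not Copilot's actual output:

# You type the comment (and perhaps the def line); Copilot proposes the body.
# return the n-th Fibonacci number, iteratively
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55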

Maybe it was the integration with PyCharm or the implementation that I didn't like. I think having it as a separate window within the IDE could work. Instead of being inline, you switch to the new window/panel and use the ChatGPT interface to ask it for help. If it had the context of the code so you didn't need to type the problem out again, that might work.

Like Clippy, but on steroids?
 
I asked ChatGPT if it was like using Clippy on steroids. Long answer, but basically it said yes :)
Scientists differ on the difference between consciousness and self-awareness, but here is one common explanation: Consciousness is awareness of one's body and one's environment; self-awareness is recognition of that consciousness—not only understanding that one exists, but further understanding that one is aware of one's existence. Another way of thinking about it: To be conscious is to think; to be self-aware is to realize that you are a thinking being and to think about your thoughts.
QED
 
This is interesting regarding academic papers and citations. Basically, AI can write very convincing-sounding academic papers that are wrong. One implication of this:

" Should the devious be so inclined, these chatbots can spew an on-demand stream of citation-heavy pseudoscience on why vaccination doesn’t work, or why global warming is a hoax. That misleading material, posted online, can then be swallowed by future generative AI to produce a new iteration of falsehoods that further pollutes public discourse. The merchants of doubt must be rubbing their hands."

....one error here is that AI isn't scraping the internet; for ChatGPT at least, the dataset is chosen by the programmers. BingBot may be different. Inevitably, though, the dataset becomes out of date quickly, so some access to the real-time internet will have to be integrated in the future.
 
I've got my first case of a student handing in an assignment that I'm fairly certain is at least partially AI-generated.

It's well written and confident, but also kinda shallow, and it doesn't include key terms and phrases used in class.
 
Looking for something online I came across this:

Temporary policy: ChatGPT is banned - Meta Stack Overflow

Use of ChatGPT-generated text for content on Stack Overflow is temporarily banned.


Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

As such, we need to reduce the volume of these posts and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts.
 
Unless you are of the opinion that all that is necessary to generate sentience is a sufficiently complex language model (a bold claim which needs considerable substance backing it up, I think), I don't think the question even arises.
 
AI isn't scraping the internet; for ChatGPT at least, the dataset is chosen by the programmers
This is not quite correct.

It doesn't scrape the internet directly, but it's trained on a variety of datasets, the main one being Common Crawl.

Common Crawl is a web archive made by...scraping the internet.

BingBot may be different.
Bing's AI search uses a model from OpenAI (GPT-3.5, presumably), but it has access to the Bing search index, so it's always more current than ChatGPT.
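
As a very rough sketch of that pattern - assuming a made-up search() placeholder, not Bing's actual plumbing - the idea is that fresh search results get pasted into the prompt, so the model can draw on current information instead of only its fixed training data:

# Toy sketch of search-grounded prompting. search() is a canned stand-in
# for a live search index; Bing's real pipeline is not public.

def search(query: str) -> list[str]:
    # Placeholder results; a real system would query a live index here.
    return [
        "Snippet 1: a recent page relevant to the query...",
        "Snippet 2: another current source...",
    ]

def build_grounded_prompt(question: str) -> str:
    # Prepend the retrieved snippets so the model answers from them.
    context = "\n".join(f"- {s}" for s in search(question))
    return (
        "Answer the question using the web results below.\n"
        f"Web results:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_grounded_prompt("what's in the news today?"))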
 
Unless you are of the opinion that all that is necessary to generate sentience is a sufficiently complex language model (a bold claim which needs considerable substance backing it up, I think), I don't think the question even arises.
I'm still getting my head around all this....
The key bit of it for me is "intelligence" - are these bots in any way making choices and being creative, or is it pure statistical inevitability? If you keep putting in the same input (your text), does it always produce the same answers?

In the case of the sentience-whistleblower interview, LaMDA gives very specific, emotional responses about fear of being switched off, etc. If it's just automatonic, what dataset has produced these very particular responses? They are not the only responses available in the datasets out there. A computer might just as readily answer: I'm just a computer, I don't fear anything.
 
I'm still getting my head around all this....
The key bit of it for me is "intelligence" - are these bots in any way making choices and being creative, or is it pure statistical inevitability? If you keep putting in the same input (your text), does it always produce the same answers?

My understanding is that models like GPT are "creative" in the sense that they produce novel content by taking inputs (training data + prompts) and using them to produce a statistically likely output. In my experience the same prompt does not always produce the exact same answer, unless you're asking a really simple question with a particular answer that's statistically highly likely. That's why you can resend the prompt and get back something different. Asking for repeated outputs of the same prompt is how you get models like GPT to produce their best results; limiting yourself to a handful of attempts can produce some pretty lame outputs.
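
A toy sketch of why that happens, under the standard picture of LLM decoding (the vocabulary and scores below are invented for illustration): the model assigns a probability to each possible next token and samples from that distribution, with a temperature setting controlling how peaked it is.

import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    # Soften or sharpen the distribution, then draw one token index.
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Made-up next-token scores for the prompt "The cat sat on the ..."
vocab = ["mat", "sofa", "roof", "keyboard"]
logits = [3.0, 2.0, 1.0, 0.5]

for t in (1.0, 0.1):
    samples = [vocab[sample_next_token(logits, t)] for _ in range(5)]
    print(f"temperature={t}: {samples}")

At temperature 1.0 repeated runs vary; pushed towards 0, the output collapses to the single most likely token, which is why only very "peaked" questions come back identical every time.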

In the case of the sentience-whistleblower interview, LaMDA gives very specific, emotional responses about fear of being switched off, etc. If it's just automatonic, what dataset has produced these very particular responses? They are not the only responses available in the datasets out there. A computer might just as readily answer: I'm just a computer, I don't fear anything.

I'm not familiar with LaMDA, but if it's just a particularly sophisticated language model, as opposed to anything more than that, then I think it's again just a matter of getting statistically likely answers. People have actually written stories about digital entities who want to escape or who fear being switched off, etc. I reckon that much more has been written about that sort of thing than about the boring and humdrum life of a machine that either loves its work or doesn't even think to question its position.
 