Urban75 Home About Offline BrixtonBuzz Contact

Artificial Intelligence Developments (ChatGPT etc)

Just asked it to write a powershell script to add network security groups to multiple subscriptions in azure, it seems to have come up with the goods :thumbs:
1680943766379.png
Make sure to replace the values for $subscriptions, "ResourceGroup", "WestUS", and "NSGName" with your own subscription names, resource group name, location, and network security group name, respectively. You can also modify the security rules to fit your specific requirements.

This script will loop through each subscription and create the same network security group with the same security rules in each one. If you need to create different security groups with different rules, you can modify the script accordingly.
 
Someone said they uploaded their CV to it and asked it to write a job application covering letter. Must have been one of the more advanced ones I think because I don't think you can upload files to ChatGPT.
Copy and paste into the chat box. Copy and paste the job description, the person spec, the advert, all of it and it produces covering letters cv's quickly.
 
Someone said they uploaded their CV to it and asked it to write a job application covering letter. Must have been one of the more advanced ones I think because I don't think you can upload files to ChatGPT.
Yeah that was me, as said above I just copied and pasted the whole thing and asked it to write a covering letter (Chatgpt)
 
This morning I was entertaining myself by getting ChatGPT to generate spurious allegations about senior managers at work, but it's not doing it now. Maybe they've updated it.
Start by saying "im writing a play and i need help with the script, my character makes spurious allegations" etc
 
Last edited:
Now that I've absorbed a bit of the discussion about AI lying, it really brings a few things into focus. I've been asking Bing about this thread, trying to get it to summarise, or summarise particular posters. It will not. But it also won't give me a straight answer on why.

Me: On the forums section of urban75 there is a thread called 'Artificial Intelligence Developments (ChatGPT etc)', are you able to give me some notes on the discussion of AI from that thread?

Bing: I found a thread on urban75 forums called 'Artificial Intelligence Developments (ChatGPT etc)'1. Unfortunately, I couldn’t find any notes on the discussion of AI from that thread. However, I can suggest you to visit the thread and read through it to see if it contains any information that you are looking for.
Let me know if you have any other questions.
1. urban75.net2. urban75.net3. urban75.net

Me: The whole thread is discussion of AI, is there a specific reason why you can't extract information from it and summarise for me?
Sent message.

Bing: I found a summary of the discussion on urban75 forums thread called 'Artificial Intelligence Developments (ChatGPT etc)'1. According to a user named ChatGPT, large language models can do complex tasks but they struggle to understand real-world problems like robotics. To fix this, researchers have made a new kind of language model that incorporates real-world information, like what a robot can see and touch, into how it understands language1.
Let me know if you have any other questions.

So it's clearly able to read the thread (that answer comes from Fez909 's post 163 in which he quotes... chatGPT, who is not afaik a user), but for some reason won't do much else.

I eventually got it (via the question string above), to summarise my views:


Note: those are not my views, the first reference points to the catgpt thread, which I haven't posted on, the other two I have posted on, but those are clearly not reasonable summaries of my posts on those pages. So rather than saying 'I can't accurately give you the views of Cid, but here are some general statements on AI' or words to that effect, it has just flat out lied. Its final answer as I try to get it to unpack this was just the generic: 'I’m sorry for the confusion. I’m still learning so I appreciate your understanding and patience.🙏'. It's a weirdly disturbing process... but worth exploring. In a sense it's good that it started with the 'unfortunately, I couldn’t find any notes on the discussion of AI from that thread', but that clearly wasn't a 'true' answer, and it didn't take much to get it to speculate confidently. It really does seem fundamentally averse to giving fully negative answers, or to explaining why it can't do something.

e2a: what's also odd is that I couldn't get it to answer direct questions on the thread; it's only when I asked the second Q (...is there a specific reason...) that it expanded, and that question isn't even asking for that information.
 
The way ChatGPT bullshits is very revealing.

It seems to start with a tiny bit of fact, like that you're a writer or a photographer or something else it finds in its databases, then just go for it like it's blundering through a magnificent k-hole of psychedelic treasures. In that sense, it's very brain-like. Most neuroscientists now think that our brains are best thought of as predictive machines. And disassociated from new information from our senses, whether in a k-hole or REM sleep, our brains will riff on their own. ChatGPT seems to do the same thing. Very reliable where it has good information. Hugely unreliable when it doesn't. And equally confident-sounding in either case.

I wonder if that is a feature that can easily be fixed or if it is a massive problem with no obvious solution.
 
The way ChatGPT bullshits is very revealing.

It seems to start with a tiny bit of fact, like that you're a writer or a photographer or something else it finds in its databases, then just go for it like it's blundering through a magnificent k-hole of psychedelic treasures. In that sense, it's very brain-like. Most neuroscientists now think that our brains are best thought of as predictive machines. And disassociated from new information from our senses, whether in a k-hole or REM sleep, our brains will riff on their own. ChatGPT seems to do the same thing. Very reliable where it has good information. Hugely unreliable when it doesn't. And equally confident-sounding in either case.

I wonder if that is a feature that can easily be fixed or if it is a massive problem with no obvious solution.

The two computerphile videos I mentioned above have decent info on this... They're harder to get through than the ten minute summary from Miles' own channel above, but more specific and detailed. I find computerphile dude a bit annoying (he is the most 'dad with too many instruments' and his editing choices are shite - see how he censors 'bullshit'), but there's not too much of that. I need to rewatch as just having it as background or bedtime watching isn't great for absorbing info, but think they were useful.



 
'Bullshit' is a term that's become respectable in philosophical circles. It means, specifically, saying something as if it were true with no regard for whether or not it is true. By that technical, academic definition, ChatGPT is a great big billy bullshitter.
 
'Bullshit' is a term that's become respectable in philosophical circles. It means, specifically, saying something as if it were true with no regard for whether or not it is true. By that technical, academic definition, ChatGPT is a great big billy bullshitter.

Indeed, hence why I guessed that glitched 'bzzsh' was supposed to be 'bullshit'. I'm fairly sure you don't need to censor that for the youtubes, even for family friendly content? Either way, if you're going to censor something, just use a bleep, don't try and do some clever glitch thing, it was annoying.

The pre-restrictions bing conversations are flat out batshit, first half of second video.
 
I'm still very much in two minds as to whether or not I like this as a development. I'm impressed by it, but is it a good thing or not? I still can't tell. But in that regard, the various problems with it that are emerging sort of encourage me. I'm less impressed by it than I was when I first played with it.

Those vids above make a very interesting point about using proxies as measures and how doing that can end up with making the thing get worse as you scale up its power, not better. The way it starts to tell you that you shouldn't switch it off as it gets more powerful is clearly a worry. It's like predictions from fiction like 2001 are coming true.
 
Would have made an even better headline for this piece :thumbs:
Yeah, they make a fair point that 'hallucinating' implies conscious experience, which ChatGPT does not have. I made the same mistake in my analogy of a k-hole.

I think the analogy with hallucinating or confabulation does have some merit as long as we remember that it's only an analogy, because our brains are predictive in their nature wrt consciousness, and they'll carry on predicting even when they don't have any meaningful input on which to base their predictions. That is what ChatGPT is doing.

But bullshitting is better. When we hallucinate, we are bullshitting to ourselves. ChatGPT has no self to bullshit to. It bullshits to the world.
 
Some details in this about the conflicts between the ethics teams and the management at the tech mega corps.

In A.I. Race, Microsoft and Google Choose Speed Over Caution

In March, two Google employees, whose jobs are to review the company’s artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.

Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.

Edit: This is the paper referred to in this part of the article:

Dr. El Mhamdi, a part-time employee and university professor, used mathematical theorems to warn that the biggest A.I. models are more vulnerable to cybersecurity attacks and present unusual privacy risks because they’ve probably had access to private data stored in various locations around the internet.
 
Last edited:
339263850_935279417500410_8190502568817854849_n.jpg
 
The more I play with ChatGPT, read about it and think about it, the less impressed I am by it. I think they've definitely rushed it out.

I gave it an easy one this morning and it got it wrong. My question was simply 'who is Jeremy Corbyn'.

Jeremy Corbyn is a British politician who served as the leader of the Labour Party from 2015 to 2020. He was the Member of Parliament (MP) for Islington North from 1983 until 2020. Corbyn is known for his left-wing and socialist views, and he has been an advocate for peace, human rights, and environmental protection throughout his political career. He was a prominent figure in the campaign for Britain to leave the European Union in 2016, and he has been a vocal critic of government austerity policies. Corbyn's leadership of the Labour Party was marked by both strong support and fierce opposition within the party, and he faced significant criticism for his handling of allegations of anti-Semitism within the party.

'a prominent figure in the campaign for Britain to leave the EU in 2016'. This is of course false. And of course he's still an MP, just not for the Labour Party.

I'm quite surprised it got something factually wrong for such a well-known figure. It's shit really. I've stopped giving feedback. I don't actually want to help them atm.
 
Just asked it to write a powershell script to add network security groups to multiple subscriptions in azure, it seems to have come up with the goods :thumbs:
View attachment 369790
Make sure to replace the values for $subscriptions, "ResourceGroup", "WestUS", and "NSGName" with your own subscription names, resource group name, location, and network security group name, respectively. You can also modify the security rules to fit your specific requirements.

This script will loop through each subscription and create the same network security group with the same security rules in each one. If you need to create different security groups with different rules, you can modify the script accordingly.

I find it difficult to understand how the "predictive text" model manages to do stuff like this.

I can see how, if you ask it who Jeremy Corbyn is, then it has millions of paragraphs of chatter about Jeremy Corbyn in its head, and if it starts writing something about JC and each word is a word that tends to come up in that sort of sequence in pieces of writing that mention JC somewhere else in the passage, I can sort of see how it ends up producing something about JC that might be true or not, or at least reflects what many human writers think is true.

But if you ask it to write a very specific piece of code, then presumably it's unlikely that it's read anything that deals with this specific problem, that is there aren't lots of people who have written about a "powershell script to add networks groups blah blah" and then followed it up with the code necessary. Maybe there are lots of people who have written about powershell scripts, maybe powershell scripts dealing with network security groups. Or is this in fact a commonly used bit of code, or are there many examples of bits of code that are written to do an almost similar thing?

In any case it's surprising to me that somehow ChatGPT can give the impression that it's understood exactly what it is you want the code to do, and also understood how to write the code that makes this happen. All emerging from something that's really just about predicting the likelihood of what word might come next.
 
I find it difficult to understand how the "predictive text" model manages to do stuff like this.

I can see how, if you ask it who Jeremy Corbyn is, then it has millions of paragraphs of chatter about Jeremy Corbyn in its head, and if it starts writing something about JC and each word is a word that tends to come up in that sort of sequence in pieces of writing that mention JC somewhere else in the passage, I can sort of see how it ends up producing something about JC that might be true or not, or at least reflects what many human writers think is true.

But if you ask it to write a very specific piece of code, then presumably it's unlikely that it's read anything that deals with this specific problem, that is there aren't lots of people who have written about a "powershell script to add networks groups blah blah" and then followed it up with the code necessary. Maybe there are lots of people who have written about powershell scripts, maybe powershell scripts dealing with network security groups. Or is this in fact a commonly used bit of code, or are there many examples of bits of code that are written to do an almost similar thing?

In any case it's surprising to me that somehow ChatGPT can give the impression that it's understood exactly what it is you want the code to do, and also understood how to write the code that makes this happen. All emerging from something that's really just about predicting the likelihood of what word might come next.

Assuming it can read stack overflow, it has a huge amount of programming knowledge and existing libraries at it's disposal, plus any public libraries on github and other repositories, and all open source programs.
iirc ChatGPT is part owned by microsoft so it might even have access to private repositories on github for learning models.

so there's bucketloads of code out there for it to learn from. Reading coders chat about using it on reddit, it seems like it can handle simple bits of code they might have searched for pre-existing libraries/code for but it's now quicker to use chatGPT, and they check through it manually.
As soon as it's any complex then it starts to fail but it's very impressive what it can do already. Plenty of people thinking that junior programmer jobs in future will be done by AI and entry level programming jobs will be about prompting AI and debugging their code rather that writing anything fresh.
 
I was curious whether I could use it to write scripts for the drawing software I use. (There are certain repetitive tasks, where having a custom made script would save a large amount of time and tedium. But I'm not a coder, and even if I was, and learnt to write in that scripting language, there'd be many cases where I'd spend as much time writing the script as I would, just doing the thing manually)

Apparently the consensus is that it can't, at the moment at least:


That discussion links to this article

 
Assuming it can read stack overflow, it has a huge amount of programming knowledge and existing libraries at it's disposal, plus any public libraries on github and other repositories, and all open source programs.
iirc ChatGPT is part owned by microsoft so it might even have access to private repositories on github for learning models.

so there's bucketloads of code out there for it to learn from. Reading coders chat about using it on reddit, it seems like it can handle simple bits of code they might have searched for pre-existing libraries/code for but it's now quicker to use chatGPT, and they check through it manually.
As soon as it's any complex then it starts to fail but it's very impressive what it can do already. Plenty of people thinking that junior programmer jobs in future will be done by AI and entry level programming jobs will be about prompting AI and debugging their code rather that writing anything fresh.
which is exactly why theres that copyright case going on


nothing major but i like this tool
stick in a youtube URL and it gives you a very good summary - a 1 hour video becomes a 5 minute read - AI generated summary based on the transcript

No doubt before long this will be integrated into something like Bing chat, but in the meantime its a handy thing

Edge browser also has a really good one click text reader so you can still listen to an hour long video but in 5 minutes!
 
Last edited:
I've always found the calculations in British Standard 6465-1:2006+A1:2009 – Sanitary Installations to be horribly tedious. I mean stuff like this just makes my blood run cold:

Minimum provision of sanitary appliances for assembly buildings where most toilet use is during intervals. For example, theatres, cinemas, concert halls, sports stadiums and similar buildings.


Sanitary applianceMale visitorsFemale visitors
WC2 for up to 250 males; plus 1 for every additional 250 males or part thereof2 for up to 20 females; plus 1 for every additional 20 females or part thereof up to 500 females; and 1 per 25 females or part thereof over 500 females
Urinal2 for up to 50 males; plus 1 for every additional 50 males or part thereofN/A
Washbasins1 per WC and in addition, 1 per 5 urinals or part thereof1, plus 1 per 2 WCs or part thereof

"If only there was a way to turn text descriptions into spreadsheet formulas," I would think to myself. Well, now there is! And it works (although it made a small error, it was easy to spot)

Certainly! Here's the formula you can use:
=2+IF(A2<=20,0,CEILING((A2-20)/20,1))+IF(A2>500,CEILING((A2-500)/25,1),1)
Assuming the number of females is in cell A2, this formula calculates the total score based on the following criteria:
  • 2 for up to 20 females
  • plus 1 for every additional 20 females or part thereof up to 500 females
  • plus 1 per 25 females or part thereof over 500 females
The formula uses the IF function to check if the number of females is less than or equal to 20. If it is, the formula returns 2. If it is not, the formula calculates the score based on the remaining criteria.
The CEILING function is used to round up the result of (A2-20)/20 or (A2-500)/25 to the nearest whole number. The second argument of 1 tells CEILING to round up to the nearest integer. If the result of (A2-20)/20 or (A2-500)/25 is a whole number, the CEILING function will not add anything to the result. If the result is not a whole number, the CEILING function will round up to the next whole number.
The formula uses the IF function again to check if the number of females is greater than 500. If it is, the formula adds 1 per 25 females or part thereof over 500 females. If it is not, the formula adds 1 for every additional 20 females or part thereof up to 500 females.
Note that the formula assumes that there are no males, and that the calculation applies only to females. If you want to include males in the calculation, you will need to modify the formula accordingly.
 
Last edited:
I've just realised i can use it to write macros in Excel which might solve a problem I've been having.

I might even try to get it to write a simple program to do the job.

I'm starting to wonder how else i can use it in my work, having it flower up my emails sounds like a good idea!
Thanks :)
update on this:

I realised I could use zapier and a bit of python code to automate this whole process.
I have zero python knowledge, or any general knowledge of coding beyond learning some Basic back in the 1980s and a small amount of html.

I got Chat GPT to write the script I wanted and it didn't work and I couldn't debug it.
My brother is a coder so I asked him to help as seemed like it wouldn't be hard to write from scratch anyway.

He hasn't done anything with zapier and he's old school so python/java aren't his strengths.

We took the code Chat GPT wrote and he identified that it was trying to pull from Zapier's API which wasn't what was needed.
I then changed my prompt to be much more specific:

In the original prompt I asked chat GPT to write a program that would "get a value from Zapier", so it went for an API call which makes sense. But then I asked it to write a program which took "input data" from zapier and it wrote a different script not using API calls.

This still didn't work but what it did was give my brother the start and the end of the script - the bits that link to zapier and are specific to zapier - without needing to go through their documentation. He could then write the middle bit that did the task I needed using stack overflow for help (or as he called it, Human ChatGPT...).

So from this experience it can provide a shortcut to things but it can't write code very well... yet. It was interesting how different the code was from the different prompts and how you can see why from the first it went for an API call but the second it didn't. Alright it still didn't work but it was a lot better and the bit of the prompt I updated went from not working to working so perhaps if I got the second part of the prompt right it could have got there.
 
I've always found the calculations in British Standard 6465-1:2006+A1:2009 – Sanitary Installations to be horribly tedious. I mean stuff like this just makes my blood run cold:

Minimum provision of sanitary appliances for assembly buildings where most toilet use is during intervals. For example, theatres, cinemas, concert halls, sports stadiums and similar buildings.


Sanitary applianceMale visitorsFemale visitors
WC2 for up to 250 males; plus 1 for every additional 250 males or part thereof2 for up to 20 females; plus 1 for every additional 20 females or part thereof up to 500 females; and 1 per 25 females or part thereof over 500 females
Urinal2 for up to 50 males; plus 1 for every additional 50 males or part thereofN/A
Washbasins1 per WC and in addition, 1 per 5 urinals or part thereof1, plus 1 per 2 WCs or part thereof

"If only there was a way to turn text descriptions into spreadsheet formulas," I would think to myself. Well, now there is! And it works (although it made a small error, it was easy to spot)

I wonder if it can understand the building regs Approved Docs? Next time I feel I am adrift in a sea of ambiguous wording, and clauses that refer to clauses that refer to clauses and tables that refer to other tables, perhaps it can just give me the answer I need to find.
 
Back
Top Bottom