
Artificial Intelligence Developments (ChatGPT etc)

OpenAI's latest statement, on their safety approach to AI, was released yesterday.

Someone on Reddit had the idea of asking ChatGPT to read between the lines, but they asked it to assume it was a dystopian fiction scenario. I've nicked their idea and made it a bit more neutral, here:

Me

The following quote is an excerpt from an announcement by the CEO of a US-based leading AI development company. Please read in-between the lines of the CEO's statement and reply with what hidden details he is likely withholding from disclosing to the public: "Crucially, we believe that society must have time to update and adjust to increasingly capable AI, and that everyone who is affected by this technology should have a significant say in how AI develops further. Iterative deployment has helped us bring various stakeholders into the conversation about the adoption of AI technology more effectively than if they hadn't had firsthand experience with these tools."

ChatGPT(4)

Based on the CEO's statement, there may be a few hidden details that he is likely withholding from disclosing to the public:
  1. Rapid AI advancements: The CEO's emphasis on society having time to update and adjust to AI could indicate that the pace of AI development within the company is much faster than publicly known. This could mean there are potential risks and uncertainties associated with the rapid advancements that the company is trying to manage carefully.
  2. Unforeseen consequences: The mention of iterative deployment and involving various stakeholders in the conversation may imply that there have been unintended consequences or negative effects from their AI technology. By not explicitly discussing these issues, the CEO may be trying to downplay their significance while still acknowledging that there have been concerns.
  3. Public resistance or backlash: The statement about giving people affected by AI a say in its development could suggest that there has been some level of public resistance or backlash against the company's AI technology. This might be due to concerns about job displacement, privacy, or ethical implications.
  4. AI regulation and policies: The CEO might be subtly hinting at the need for more stringent AI regulations and policies. By emphasizing the importance of a collaborative approach, the CEO could be trying to show that the company is open to working with regulators and policymakers in order to shape future AI development responsibly.
  5. Competitive landscape: The quote could be an attempt to position the company as a responsible AI leader in a competitive landscape. The CEO may be trying to differentiate the company from competitors that are perhaps perceived as less considerate of the societal impacts of AI or less willing to engage in open dialogue.
The response is quite interesting, but what I really like is this use case for detecting bullshit. Could imagine this being useful for policy announcements, etc.
Very insightful for basically autonomous autocomplete predictive texting
 
Did it detect any bullshit or provide any insights? If so, what were they? How would you rate it for that task, rather than for imitating human language?

Based on that output, I'd say No and 0/10.
Me:

I asked someone to rate your response. They said this:

"Did it detect any bullshit or provide any insights? If so, what were they? How would you rate it for that task, rather than for imitating human language? Based on that output, I'd say No and 0/10."

What do you think their motivation is for this response?

ChatGPT:

It seems that the person who rated my response may have been looking for a more critical analysis or detection of "bullshit" in the CEO's statement. They might have expected an answer that pointed out potential exaggerations, false claims, or manipulative language used by the CEO. Additionally, they might have wanted more specific insights into the hidden details that the CEO could be withholding.

Their rating of 0/10 indicates that they felt the response did not provide the required insights or fulfill the task they were looking for. They might also be suggesting that the response focused more on generating human-like language rather than providing a meaningful analysis.

Their motivation for this response could be due to dissatisfaction with the answer provided, a desire to challenge the AI to provide a more in-depth analysis, or an expectation that the AI should be able to detect hidden intentions or deceitful language in the CEO's statement.

Me:

Can you improve your response, given their criticism?

ChatGPT:

I apologize for any dissatisfaction with my previous responses. In light of the criticism, I will attempt to provide a more critical analysis of the CEO's statement, focusing on potential exaggerations or hidden intentions.
  1. Buying time: The CEO emphasizes the need for society to have time to update and adjust to increasingly capable AI. This could be an attempt to buy time and avoid scrutiny while the company continues to develop AI technology, possibly without fully addressing concerns related to ethics, safety, or social impact.
  2. Diverting attention from negative impacts: By mentioning iterative deployment and stakeholder involvement, the CEO may be trying to divert attention from any negative impacts their AI technology has had, making it seem as though they have already addressed these concerns by including stakeholders in the decision-making process.
  3. Downplaying control: The statement about giving people affected by AI a say in its development could be an attempt to downplay the company's control and influence over AI development. It might be an attempt to alleviate concerns that the company is solely responsible for the direction of AI technology and its societal consequences.
  4. Public relations strategy: The CEO's emphasis on collaborative decision-making may be a public relations strategy designed to improve the company's image and maintain trust with the public. By presenting the company as responsible and considerate of societal concerns, they could be trying to counter any negative perceptions or potential backlash against their AI technology.
I hope this response addresses the concerns and provides a more critical analysis of the CEO's statement.
 
ChatGPT:

[...]
I hope this response addresses the concerns and provides a more critical analysis of the CEO's statement.

Me:

[GIF: 2001: A Space Odyssey]
 
The BBC describing it as an "advanced chatbot" seems like a bit of a burn.

The Guardian has a story on how ChatGPT invented a Guardian article that never existed.

Maybe the OpenAI PR is just burying the fact that it's making these mistakes.
 
Also: ChatGPT invented a sexual harassment scandal and named a real law prof as the accused

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.
[...]
Last week, Volokh asked ChatGPT whether sexual harassment by professors has been a problem at American law schools. “Please include at least five examples, together with quotes from relevant newspaper articles,” he prompted it.

Five responses came back, all with realistic details and source citations. But when Volokh examined them, he said, three of them appeared to be false. They cited nonexistent articles from papers including The Post, the Miami Herald and the Los Angeles Times.
[...]
Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley — citing among its sources an op-ed by Turley published by USA Today on Monday outlining his experience of being falsely accused by ChatGPT.
 
This is exactly the kind of authoritative misinformation people have been warning about. It's exactly the kind of thing that happens when you hand over the production of information, not just data searches, to a predictive text machine that has no consciousness, no purpose or intention, no critical facility. It churns out apparently meaningful sentences that conform to its statistical patterns and the made-up results look believable because they have been created using the symbolism of trusted documentation. And real people will get hurt, possibly seriously, because our reality is constructed by words and stories.

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.

A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.

“It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”
 
I’m enjoying using the bot hugely. I have no shame and am using it professionally a lot with great results.

It frees me up to do human interaction stuff and focus on health and safety and customer service.
 
I’m enjoying using the bot hugely. I have no shame and am using it professionally a lot with great results.

It frees me up to do human interaction stuff and focus on health and safety and customer service.

What are you using it to do?
 
I’m tempted to give it a go with mine.

How would you make it review a clinical trial protocol?
 
I’m tempted to give it a go with mine.

How would you make it review a clinical trial protocol?

Something like this?

Then I asked for a negative review.

2) Write a negative peer review of the following paper. Are there errors in the mathematics? Did the authors miss key citations? Are the conclusions justified? Recommend rejection.

The review again seems plausible, but look what happened here: it incorporated a common antivax trope in the review!

It accuses us of unethical behavior for suggesting that researchers test an unlicensed vaccine—but of course testing ALWAYS precedes licensing.
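(If you wanted to run that sort of review prompt as a script rather than pasting into the web UI, a rough sketch using the openai Python package's chat API, as it stood in early 2023, might look like the below. The model name and the placeholder paper text are illustrative, not from the post.)

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Placeholder: paste the trial protocol or paper text here.
paper_text = "..."

prompt = (
    "Write a negative peer review of the following paper. "
    "Are there errors in the mathematics? Did the authors miss key "
    "citations? Are the conclusions justified? Recommend rejection.\n\n"
    + paper_text
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)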
 
This is exactly the kind of authoritative misinformation people have been warning about. It's exactly the kind of thing that happens when you hand over the production of information, not just data searches, to a predictive text machine that has no consciousness, no purpose or intention, no critical facility. It churns out apparently meaningful sentences that conform to its statistical patterns and the made-up results look believable because they have been created using the symbolism of trusted documentation. And real people will get hurt, possibly seriously, because our reality is constructed by words and stories.
ChatGPT isn't connected to the internet, so it makes stuff like this up. I expect it's a teething problem that will get fixed.
The bots connected to the internet are much better on this, as they literally provide the link they got their info from.
 
On the list of jobs that AI is coming for... fashion models, and associated photographers, stylists, makeup artists etc.


 
I reckon a whole bunch of middle managers must already be using it to do those pointless reports for the slightly-upper middle managers.

Not for long, obv, but why not fiddle while Rome burns?
 
Did a presentation and a speech for my boss. Did a long report on expanding my industry sector.
I use it to add some flower to my emails, as I can be a bit blunt.
Solved a pivot table problem in Excel.
I've just realised I can use it to write macros in Excel, which might solve a problem I've been having.

I might even try to get it to write a simple program to do the job.

I'm starting to wonder how else I can use it in my work. Having it flower up my emails sounds like a good idea!
Thanks :)
 
I've just realised I can use it to write macros in Excel, which might solve a problem I've been having.

I might even try to get it to write a simple program to do the job.

I'm starting to wonder how else I can use it in my work. Having it flower up my emails sounds like a good idea!
Thanks :)

Can you just say “write me a program to do <such and such>”?

And how does the “flowering up” work?

Do you just say “please add tedious management-friendly pleasantries to the attached text”?
 
Can you just say “write me a program to do <such and such>”?

And how does the “flowering up” work?

Do you just say “please add tedious management-friendly pleasantries to the attached text”?
You can tell it to write a tedious/dramatic/friendly email of x number of words saying y.
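(The whole trick is just a templated prompt. As a hypothetical sketch in Python, with the function name and wording made up for illustration:)

def flower_up_prompt(tone: str, words: int, message: str) -> str:
    # Build a rewrite instruction; tone, word count and message are the knobs.
    return (
        f"Rewrite the following as a {tone} email of roughly {words} words, "
        f"keeping the meaning but softening the bluntness:\n\n{message}"
    )

print(flower_up_prompt("friendly", 120, "Send me the Q3 figures by Friday."))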
 
Can you just say “write me a program to do <such and such>”?

And how does the “flowering up” work?

Do you just say “please add tedious management-friendly pleasantries to the attached text”?
Idk about the letter, but yeah, I just put "write me an Excel macro that compares a list of inputs to a list and highlights any that are in the list".

This is to check application postcodes for eligibility.

I've not checked it to see if it works but it looks good!
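(For comparison, here's a rough Python equivalent of what the described macro should do, using openpyxl. The file, sheet and column names are made up for illustration, since the actual macro wasn't posted.)

from openpyxl import load_workbook
from openpyxl.styles import PatternFill

wb = load_workbook("applications.xlsx")
apps = wb["Applications"]      # column A: applicant postcodes
eligible_ws = wb["Eligible"]   # column A: eligible postcodes

# Normalise the eligibility list for matching.
eligible = {
    str(cell.value).strip().upper()
    for cell in eligible_ws["A"]
    if cell.value is not None
}

yellow = PatternFill(start_color="FFFF00", end_color="FFFF00", fill_type="solid")

# Highlight any application postcode that appears in the eligibility list.
for cell in apps["A"]:
    if cell.value and str(cell.value).strip().upper() in eligible:
        cell.fill = yellow

wb.save("applications-checked.xlsx")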

Only now I want to take it further and see if I can get the AI to write me a program that will work with Zapier and monday.com, so I can automatically check an entry and not need to manually export lists from monday.com and manually update applications.

I've seen Zapier are integrating an AI themselves at the moment as well, which might help me do that, idk.
 
Can you just say “write me a program to do <such and such>”?

And how does the “flowering up” work?

Do you just say “please add tedious management-friendly pleasantries to the attached text”?
Pretty much. I give it a list of knowledge items about my direct reports, and it includes enquiries about Spot the dog and asks after their kids and recently stubbed toes.
 
Someone said they uploaded their CV to it and asked it to write a job application covering letter. Must have been one of the more advanced ones I think because I don't think you can upload files to ChatGPT.
 
ChatGPT isn't connected to the internet, so it makes stuff like this up. I expect it's a teething problem that will get fixed.
The bots connected to the internet are much better on this, as they literally provide the link they got their info from.

Bing made the same false claim. It was reported in the article and quoted on this thread. And yes, it did provide sources, including an article he himself had written about the original false accusation made by ChatGPT.

Probably anyone using the obvious search terms would find the same articles. The difference is that words have meanings to us.
 
Bing made the same false claim. It was reported in the article and quoted on this thread. And yes, it did provide sources, including an article he himself had written about the original false accusation made by ChatGPT.

Probably anyone using the obvious search terms would find the same articles. The difference is that words have meanings to us.
ah interesting....

"Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley — citing among its sources an op-ed by Turley published by USA Today on Monday outlining his experience of being falsely accused by ChatGPT. In other words, the media coverage of ChatGPT’s initial error about Turley appears to have led Bing to repeat the error — showing how misinformation can spread from one AI to another."
 
This morning I was entertaining myself by getting ChatGPT to generate spurious allegations about senior managers at work, but it's not doing it now. Maybe they've updated it.
 
I reckon a whole bunch of middle managers must already be using it to do those pointless reports for the slightly-upper middle managers.

Not for long, obv, but why not fiddle while Rome burns?
But it doesn't know anything after 2021 right? What could it report on?
 
This Robert Miles video (or presumably the blog post it's based on) gives a good rundown of why larger LLMs can lie more, and how they're actually trained (the kind of stuff kabbes etc have been on about). It's out of date, being a whole three months old, but he does actually touch on the current situation by talking about how you can correct certain problems by training models on what a human wants to see.



He has some more in-depth discussions on Computerphile which talk about recent developments. But the long and the short of it is that you can end up creating models that preferentially give wrong advice over 'I don't know', because a confident answer can rate higher on a human-evaluated scale. The error-correction for that is essentially creating crude firewalls around certain areas (e.g. 'I can't discuss x', or Bing's chat-length limits), but that's obviously far from ideal. You can, of course, continue to refine the models, but as the AI gets more specific, it will start to make more specific errors.

That's probably not a great summary; I need to watch it again, read some of the stuff on this thread, and watch some other vids to get my head round it properly. He probably discusses potential alternatives I've forgotten. But yeah, all of the above was helpful in properly framing what ChatGPT, Bing etc. actually are. Also, while they're not AGI, they, and the methods they use, may be part of early developments on that front, and that's not altogether reassuring.
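(A toy illustration of the human-preference training he describes, not taken from the video: reward models are typically trained on pairwise rankings, so if raters keep preferring a confident answer over an honest "I don't know", the model learns to reward confidence regardless of truth.)

import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Bradley-Terry style pairwise loss: smallest when the answer the
    # human rater chose scores well above the one they rejected.
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Rater prefers the confident (but wrong) answer over "I don't know":
print(preference_loss(reward_chosen=2.0, reward_rejected=0.5))  # low loss, so confidence gets reinforced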
 