
Artificial Intelligence Developments (ChatGPT etc)

Quite. I had a conversation last week with a PhD student who is studying exactly this for his PhD. He’s excited by, but very cautious about, what can be done if you do start allowing the machine to gain experience, but that’s definitely not what’s happening at the moment. What we’re seeing at the moment is the interesting way humans project their own perspective onto non-human things. It’s kind of like seeing faces in clouds.
I feel like a lot of what we're also seeing is people who have no idea about coding or how computer programs work seeing some of this stuff in the wild. Tweets are going viral purporting to show something amazing but in reality showing fairly bog-standard tech.
 
I feel like a lot of what we're also seeing is people who have no idea about coding or how computer programs work seeing some of this stuff in the wild. Tweets are going viral purporting to show something amazing but in reality showing fairly bog-standard tech.
I did wonder what it would be like if you’d never seen any kind of calculating device and then I showed you a Casio calculating a logarithm or even the square root of 58. Would you also assign a level of purpose and thought to this ability?
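For what it's worth, the "thought" behind the square root of 58 is just a fixed iterative rule. A minimal Python sketch of Newton's method, the sort of routine a calculator might run (whether a Casio actually uses this exact method is an assumption on my part):

```python
# Newton's method for sqrt(n): repeatedly average the guess with
# n/guess until the square of the guess is close enough to n.
# No purpose or thought involved -- just one rule, applied blindly.
def sqrt_newton(n: float, tolerance: float = 1e-12) -> float:
    guess = n / 2.0
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2.0
    return guess

print(sqrt_newton(58))  # 7.6157731058...
```

It looks uncannily purposeful from the outside, which is rather the point.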
 
Quite. I had a conversation last week with a PhD student who is studying exactly this for his PhD. He’s excited by, but very cautious about, what can be done if you do start allowing the machine to gain experience, but that’s definitely not what’s happening at the moment. What we’re seeing at the moment is the interesting way humans project their own perspective onto non-human things. It’s kind of like seeing faces in clouds.
The model is learning and improving. How is that different to your "experience"?
 
I feel like a lot of what we're also seeing is people who have no idea about coding or how computer programs work seeing some of this stuff in the wild. Tweets are going viral purporting to show something amazing but in reality showing fairly bog-standard tech.
Harder to dismiss people who have worked in this for years, plus academic papers. The last few pages have only referenced those.
 
The model is learning and improving. How is that different to your "experience"?
It’s learning and improving how to pick the next word, not learning and improving how to make sense of the words it has given you. What it’s improving at is picking a word that means something to the user. It’s like a calculator that started out guessing at 8+7 and, over time, is getting closer to giving you 15 as the answer. But it has no memory of its previous failures to give you 15, and it doesn’t have any kind of aim to give you the right answer. There is an important difference between the two ideas. I know you say you don’t want to talk philosophy, but it’s hard to understand the difference without engaging with the idea that ‘experience’ involves meaning-making.
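To make that calculator analogy concrete, here's a toy Python sketch of my own (not how GPT is actually trained, just the shape of the idea): a single number is nudged toward 15 by an error signal. It gets better every round, yet nothing in the code remembers past failures or "wants" the right answer.

```python
# Toy version of the analogy: one parameter, nudged by feedback.
# The guess improves, but there is no memory of past mistakes and
# no goal anywhere -- just an update rule applied over and over.
target = 15.0    # the answer to 8 + 7
guess = 0.0      # the "model"
rate = 0.3

for step in range(20):
    error = target - guess      # feedback signal
    guess += rate * error       # blind update rule
    print(f"step {step:2d}: guess = {guess:.4f}")
```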
 
It’s learning and improving how to pick the next word, not learning and improving how to make sense of the words it has given you. What it’s improving at is picking a word that means something to the user. It’s like a calculator that started out guessing at 8+7 and, over time, is getting closer to giving you 15 as the answer. But it has no memory of its previous failures to give you 15, and it doesn’t have any kind of aim to give you the right answer. There is an important difference between the two ideas. I know you say you don’t want to talk philosophy, but it’s hard to understand the difference without engaging with the idea that ‘experience’ involves meaning-making.
@101

QQ we pl
 
It’s learning and improving how to pick the next word, not learning and improving how to make sense of the words it has given you. What it’s improving at is picking a word that means something to the user. It’s like a calculator that started out guessing at 8+7 and, over time, is getting closer to giving you 15 as the answer. But it has no memory of its previous failures to give you 15, and it doesn’t have any kind of aim to give you the right answer. There is an important difference between the two ideas. I know you say you don’t want to talk philosophy, but it’s hard to understand the difference without engaging with the idea that ‘experience’ involves meaning-making.
@101

QQ we pl
 
see above
33:12 in the video

He says the same as the authors do - that it uses the neural net to predict the next word.

What's in the neural net is the result of feedback on a complex system over an unimaginably long training process. It's very hard for us to know what's actually in there. You could think of it as a magic robot, or you could think of it as a very effective algorithm for predictive text like you'd expect to get from that process.
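For anyone curious what "uses the neural net to predict the next word" looks like mechanically, here's a minimal Python sketch of the generic loop (the `model` and `tokenizer` objects are stand-ins, not OpenAI's actual code):

```python
# Generic autoregressive generation: score every token in the
# vocabulary, turn scores into probabilities, sample one, append
# it, repeat. This loop is all "generation" is -- everything else
# lives in the trained weights inside `model`.
import math
import random

def generate(model, tokenizer, prompt, max_new_tokens=50):
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        logits = model(tokens)                 # one score per vocabulary entry
        peak = max(logits)                     # subtract max for numerical stability
        exps = [math.exp(x - peak) for x in logits]
        total = sum(exps)
        probs = [e / total for e in exps]      # softmax: scores -> probabilities
        next_token = random.choices(range(len(probs)), weights=probs)[0]
        tokens.append(next_token)              # feed the choice back in
    return tokenizer.decode(tokens)
```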
 
He says the same as the authors do - that it uses the neural net to predict the next word.

What's in the neural net is the result of feedback on a complex system over an unimaginably long training process. It's very hard for us to know what's actually in there. You could think of it as a magic robot, or you could think of it as a very effective algorithm for predictive text like you'd expect to get from that process.
Yes, I'm not arguing against that, but he also adds a BUT about additional understanding of context. He went out of his way to bring this up; it wasn't a question put to him, he wanted to highlight it as a significant step.
 
It wrote me an article advocating replacing my directors with a virtual version. It’s very good. I want to get it seen without being held responsible.
 
This man, Geoffrey Hinton (Wikipedia), has been working on AI for 40+ years, and it's his and his colleagues' neural network model, dismissed for years, that has resulted in the current breakthrough.



Interesting video for lots of reasons, but the highlights for me:

- shows how these machines are modelled directly on the human brain and how they are able to learn and store the lessons they learn by making new "neural pathways" (see the toy sketch after this list)

-explains how they recognise images, again based on how animals (including us) do it

-on the question of sentience thinks people are being too quick to dismiss it outright, and we need to consider different definitions of sentience. Describes it as an idiot savant

- says that the "It's Predicting The Next Word" simplification is fundamentally true, but misses out that the bot understands the sentences and discerns meaning before responding - he gives an example of that in action. There is some degree of comprehension/context going on with the words being inputted

-also talks about some worries, particularly the military implications

- big issue about who decides what the truth is and adds that bias to the bot. We've already had one "Left Wing Bias" headline in the UK press
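On the "neural pathways" point: in software the pathways are just numbers (weights) on connections, strengthened or weakened by feedback, so the lesson is stored in the numbers themselves. A toy single-neuron sketch of that idea (my illustration, with a deliberately simplified update rule rather than real backpropagation):

```python
# One artificial "neuron": a weighted sum of inputs squashed through
# a sigmoid. "Learning" nudges the weights so the output moves toward
# the desired answer; the stored weights are the software analogue of
# a strengthened neural pathway.
import math

weights = [0.1, -0.2]
bias = 0.0
rate = 0.5

def neuron(inputs):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Teach it one tiny "lesson": inputs (1, 1) should give output 1.
for _ in range(100):
    error = 1.0 - neuron([1.0, 1.0])     # feedback
    for i in range(len(weights)):
        weights[i] += rate * error       # strengthen the connections
    bias += rate * error

print(neuron([1.0, 1.0]))  # now close to 1
```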



see above
33:12 in the video

That's a great video.

He's disarmingly straightforward and not at all reassuring. :D He's not even sure if it's a good thing these things work so well. It is good that people like him are involved, though.
 
It won’t talk dirty to me in the voice of Joanna Lumley. Bah.
google sex chatbot
here's some
 
That's a great video.

He's disarmingly straightforward and not at all reassuring. :D He's not even sure if it's a good thing these things work so well. It is good that people like him are involved, though.
It's a video for a general TV audience, so maybe broad strokes, but he's got a convincing manner, presumably knows what he's talking about, and is realistic about the present and future.
 
There are also issues with things like Creative Commons attribution, and with GPL'd code, which it will probably spit out without any indication of the license, so it will get included in proprietary applications.

Hadn't seen this before but there's a lawsuit about that too:

Copilot, which was unveiled by Microsoft-owned GitHub in June 2021, is trained on public repositories of code scraped from the web, many of which are published with licenses that require anyone reusing the code to credit its creators. Copilot has been found to regurgitate long sections of licensed code without providing credit — prompting this lawsuit that accuses the companies of violating copyright law on a massive scale.
 
Hadn't seen this before but there's a lawsuit about that too:
Interesting interview. It can't be that hard to "train their AI in a manner which respects the licenses and provides attribution". I'd expect their case is watertight and that's what will have to happen. With Bing there's a link to where it got its info; similar thing here. Or the AI writes its own code and never scrapes.
 
google sex chatbot
here's some
I don't necessarily want dirty talk. I was just, er, testing.
 
Interesting interview. It can't be that hard to "train their AI in a manner which respects the licenses and provides attribution". I'd expect their case is watertight and that's what will have to happen. With Bing there's a link to where it got its info; similar thing here. Or the AI writes its own code and never scrapes.
Not read the link, but tbh, I hope this whole AI thing brings about some new thinking on copyright. It'll have to, I guess. But it's been overdue for a long time.

There are only so many ways to make a Python function do a simple task, especially if you stick to coding conventions, best practices, etc. Give 100 human programmers a simple coding problem and, unless they were trying to be clever, they'll submit very similar programs.
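For example (my own illustration): ask those 100 programmers for a function that sums the even numbers in a list, and nearly all of them will land on something like this:

```python
# The conventional answer almost everyone converges on: a generator
# expression filtered on n % 2 == 0, fed to the built-in sum().
def sum_evens(numbers: list[int]) -> int:
    return sum(n for n in numbers if n % 2 == 0)
```

Idiomatic Python pretty much dictates the shape, which is what makes copyright over snippets like this so awkward.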

Same with music, art, etc. Authors who've long been dead having their work under copyright, owned and profited from by some faceless org, seems wrong to me.

I've noticed people assigning licenses to their models now, which I find bizarre.

"Here's a model I've train on anime cat girls. Feel free to use it but only non-commercially."

I guess it's similar to Unity (or Unreal Engine, I forget which), which allows you to use their game engine for free as long as you earn less than $100,000. After that, you need a new license. Though, that's much easier to prove.

Another interesting thing I've seen on AI blogs is people using ChatGPT to generate training data for their own models.

1. This is genius, IMO.

It cost OpenAI $2.5m to train GPT4. But now anyone with a subscription can ask it to generate questions and answers and use these to develop their own competitor/alternative, at a much reduced cost.

2. It's in OpenAI's T&Cs that you cannot use their model in this way.

Again, how would they know? Especially if you don't release the model. Say you're a private company that wants a niche model for internal use and doesn't need 13Bn parameters about poetry and whatever. You just want to know about engines. So you get ChatGPT to generate the training data, knowing it's a billion-dollar piece of technology, then use that to create your highly focused AI, knowing you have grade A training data.

3. Is this piracy?

Are you 'stealing' their IP by doing this, especially when it's specifically against the license conditions? Maybe. But they'd be brave to start suing people for doing this, given their own 'stolen' training data. Currently they just ban you, if they find out.

There are going to be some interesting and difficult legal issues in the near future surrounding all of this.
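On point 1 above, mechanically it really is that simple. A rough sketch of the pipeline, assuming the openai Python client as it was at the time (the model name, prompt, and file layout are all illustrative):

```python
# Sketch: use a big model to manufacture Q&A training data for a
# small niche one. Each generated pair becomes one JSON line, the
# usual input format for fine-tuning jobs.
import json
import openai  # assumes the 2023-era openai client

openai.api_key = "sk-..."  # your key here

PROMPT = ("Write one detailed question about internal combustion engines, "
          "then answer it. Label the parts 'Q:' and 'A:'.")

with open("engine_dataset.jsonl", "w") as f:
    for _ in range(1000):
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": PROMPT}],
        )
        text = resp.choices[0].message.content
        q, _, a = text.partition("A:")
        f.write(json.dumps({"question": q.removeprefix("Q:").strip(),
                            "answer": a.strip()}) + "\n")
```

Which is exactly why the T&Cs clause in point 2 is so hard to police.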

This is the most interesting video I've yet seen. Most videos are about new developments; this is more on history and analysis.

This man, Geoffrey Hinton (Wikipedia), has been working on AI for 40+ years, and it's his and his colleagues' neural network model, dismissed for years, that has resulted in the current breakthrough.



Interesting video for lots of reasons, but the highlights for me:

- shows how these machines are modelled directly on the human brain and how they are able to learn and store the lessons they learn by making new "neural pathways"

-explains how they recognise images, again based on how animals (including us) do it

-on the question of sentience thinks people are being too quick to dismiss it outright, and we need to consider different definitions of sentience. Describes it as an idiot savant

- says that the "It's Predicting The Next Word" simplification is fundamentally true, but misses out that the bot understands the sentences and discerns meaning before responding - he gives an example of that in action. There is some degree of comprehension/context going on with the words being inputted

-also talks about some worries, particularly the military implications

- big issue about who decides what the truth is and adds that bias to the bot. We've already had one "Left Wing Bias" headline in the UK press



see above
33:12 in the video

This was really good, ta!
 
What all this means is that any sense that the machine is having an actual meaningful conversation with you is something generated by you — by your projection of humanity onto it — not by the chat bot itself. The sense of personhood comes from your representation of it in your own mind, not any consciousness in the machine. A "machine in the mind" rather than a "mind in the machine".
It's interesting that if you just replace "machine" with "urban75 poster" then what you have written remains true.
 
Not read the link, but tbh, I hope this whole AI thing brings about some new thinking on copyright. It'll have to, I guess. But it's been overdue for a long time.

There are only so many ways to make a Python function do a simple task, especially if you stick to coding conventions, best practices, etc. Give 100 human programmers a simple coding problem and, unless they were trying to be clever, they'll submit very similar programs.

Same with music, art, etc. Authors who've long been dead having their work under copyright, owned and profited from by some faceless org, seems wrong to me.

I've noticed people assigning licenses to their models now, which I find bizarre.

"Here's a model I've train on anime cat girls. Feel free to use it but only non-commercially."

I guess it's similar to Unity (or Unreal Engine, I forget which), which allows you to use their game engine for free as long as you earn less than $100,000. After that, you need a new license. Though, that's much easier to prove.

Another interesting thing I've seen on AI blogs is people using ChatGPT to generate training data for their own models.

1. This is genius, IMO.

It cost OpenAI $2.5m to train GPT4. But now anyone with a subscription can ask it to generate questions and answers and use these to develop their own competitor/alternative, at a much reduced cost.

2. It's in OpenAI's T&Cs that you cannot use their model in this way.

Again, how would they know? Especially if you don't release the model. Say you're a private company that wants a niche model for internal use and doesn't need 13Bn parameters about poetry and whatever. You just want to know about engines. So you get ChatGPT to generate the training data, knowing it's a billion-dollar piece of technology, then use that to create your highly focused AI, knowing you have grade A training data.

3. Is this piracy?

Are you 'stealing' their IP by doing this, especially when it's specifically against the license conditions? Maybe. But they'd be brave to start suing people for doing this, given their own 'stolen' training data. Currently they just ban you, if they find out.

There are going to be some interesting and difficult legal issues in the near future surrounding all of this.


This was really good, ta!
Remember that copyright is a very narrow form of IP, protecting the expression of an idea, not the underlying idea in general.

BTW there is a case waiting to be decided at the UK Supreme Court on whether an AI machine can be named as an inventor on a patent. It’s accepted already that it was the sole deviser of the invention. The case is about the technicalities of what patent law does about that.
 
Another scraping issue: Clearview AI used nearly 1m times by US police

Facial recognition firm Clearview has run nearly a million searches for US police, its founder has told the BBC.

[...] Clearview now has 30bn images scraped from platforms such as Facebook, taken without users' permissions.

The company has been repeatedly fined millions of dollars in Europe and Australia for breaches of privacy.
 
A decent read on the AI hype train, and on the faux fear of how powerful it is as a marketing campaign for OpenAI. No doubt they did well out of being a non-profit before shifting to a for-profit and taking the open-source code private. Fucking tech bros.

 
A decent read on the AI hype train, and on the faux fear of how powerful it is as a marketing campaign for OpenAI. No doubt they did well out of being a non-profit before shifting to a for-profit and taking the open-source code private. Fucking tech bros.

The article kind of likes to have it both ways:
don't be scared, the scare is hype, but do be scared of the things there are to be scared of.
And it doesn't take much imagination to think of near-future outcomes that carry risk.


If GPT5 solidly outperforms humans on numerous benchmark tests, then that is 'human-competitive intelligence' of some kind. Intelligence is a loaded word, and human intelligence is different to this artificial intelligence, but clearly there will be a growing number of intellectual areas where humans can and will be regularly outcompeted.
 
OpenAI's latest statement was released yesterday, on their safety approach to AI.

Someone on Reddit had the idea of asking ChatGPT to read between the lines, but they asked it to assume it was a dystopian fiction scenario. I've nicked their idea and made it a bit more neutral, here:

Me

The following quote is an excerpt from an announcement by the CEO of a US-based leading AI development company. Please read in-between the lines of the CEO's statement and reply with what hidden details he is likely withholding from disclosing to the public: "Crucially, we believe that society must have time to update and adjust to increasingly capable AI, and that everyone who is affected by this technology should have a significant say in how AI develops further. Iterative deployment has helped us bring various stakeholders into the conversation about the adoption of AI technology more effectively than if they hadn't had firsthand experience with these tools."

ChatGPT(4)

Based on the CEO's statement, there may be a few hidden details that he is likely withholding from disclosing to the public:
  1. Rapid AI advancements: The CEO's emphasis on society having time to update and adjust to AI could indicate that the pace of AI development within the company is much faster than publicly known. This could mean there are potential risks and uncertainties associated with the rapid advancements that the company is trying to manage carefully.
  2. Unforeseen consequences: The mention of iterative deployment and involving various stakeholders in the conversation may imply that there have been unintended consequences or negative effects from their AI technology. By not explicitly discussing these issues, the CEO may be trying to downplay their significance while still acknowledging that there have been concerns.
  3. Public resistance or backlash: The statement about giving people affected by AI a say in its development could suggest that there has been some level of public resistance or backlash against the company's AI technology. This might be due to concerns about job displacement, privacy, or ethical implications.
  4. AI regulation and policies: The CEO might be subtly hinting at the need for more stringent AI regulations and policies. By emphasizing the importance of a collaborative approach, the CEO could be trying to show that the company is open to working with regulators and policymakers in order to shape future AI development responsibly.
  5. Competitive landscape: The quote could be an attempt to position the company as a responsible AI leader in a competitive landscape. The CEO may be trying to differentiate the company from competitors that are perhaps perceived as less considerate of the societal impacts of AI or less willing to engage in open dialogue.
The response is quite interesting, but I really like this use case for detecting bullshit. Could imagine this being useful for policy announcements, etc.
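If you wanted to apply the same trick routinely, it's just a wrapper prompt around the API. A throwaway sketch (again assuming the 2023-era openai client; the prompt wording here is mine, not the Reddit original):

```python
# Wrap any announcement in a "read between the lines" instruction
# and hand it to the model.
import openai

openai.api_key = "sk-..."  # your key here

def read_between_the_lines(statement: str) -> str:
    prompt = ("Read between the lines of this announcement and list what "
              "the author is likely withholding or downplaying:\n\n" + statement)
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(read_between_the_lines("Crucially, we believe that society must have "
                             "time to update and adjust to increasingly capable AI..."))
```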
 
The response is quite interesting, but I really like this use case for detecting bullshit. Could imagine this being useful for policy announcements, etc.

Did it detect any bullshit or provide any insights? If so, what were they? How would you rate it for that task, rather than for imitating human language?

Based on that output, I'd say No and 0/10.
 
Ah yes, a press release about how they might possibly contain the incredibly powerful tool that will definitely shape the future of humanity, and that they happen to be selling.
 