I feel like a lot of what we're also seeing is people who have no idea about coding or how computer programs work seeing some of this stuff in the wild. Tweets are going viral purporting to show something amazing, but in reality it's fairly bog-standard tech.

Quite. I had a conversation last week with a PhD student who is studying exactly this for his PhD. He's excited about, but very cautious of, what can be done if you do start allowing the machine to gain experience, but that's definitely not what's happening at the moment. What we're seeing at the moment is the interesting way humans project their own perspective onto non-human things. It's kind of like seeing faces in clouds.
I did wonder what it would be like if you'd never seen any kind of calculating device and then I showed you a Casio calculating a logarithm, or even the square root of 58. Would you also assign a level of purpose and thought to this ability?
The model is learning and improving. How is that different to your "experience"?
Harder to dismiss people who work in this for years, plus academic papers. The last few pages have only referenced those.
It's learning and improving how to pick the next word, not learning and improving how to make sense of the words it has given you. What it's improving at is picking a word that means something to the user. It's like a calculator that started out guessing at 8+7 and, over time, is getting closer to giving you 15 as the answer. But it has no memory of its previous failures to give you 15, and it doesn't have any kind of aim to give you the right answer. There is an important difference between the two ideas. I know you say you don't want to talk philosophy, but it's hard to understand the difference without engaging with the idea that 'experience' involves meaning-making.
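To make "picking the next word" concrete, here's a toy sketch in Python. The probability table is invented for the example; a real model derives these numbers from billions of parameters, but the basic move is the same shape: a weighted dice roll, with no memory of previous rolls and no notion of a "right" answer.

```python
import random

# Invented statistics standing in for what a real model learns: for each
# two-word context, how likely each next word is.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
    ("cat", "ran"): {"away": 1.0},
    ("cat", "is"):  {"asleep": 1.0},
}

def pick_next_word(context):
    """Sample the next word in proportion to its probability.

    No memory of previous picks, no goal of being 'right' -
    just a weighted dice roll over the learned table."""
    probs = next_word_probs[tuple(context[-2:])]
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

sentence = ["the", "cat"]
for _ in range(2):
    sentence.append(pick_next_word(sentence))
print(" ".join(sentence))  # e.g. "the cat sat on"
```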
He says the same as the authors do - that it uses the neural net to predict the next word.

Yes, I'm not arguing against that, but he also adds a BUT about additional understanding of context. He went out of his way to bring this up; it wasn't a question put to him, he wanted to highlight it as a significant step.
What's in the neural net is the result of feedback on a complex system over an unimaginably long training process. It's very hard for us to know what's actually in there. You could think of it as a magic robot, or you could think of it as a very effective algorithm for predictive text like you'd expect to get from that process.
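As a rough picture of "feedback on a complex system over an unimaginably long training process" (a deliberately tiny sketch, nothing like the real scale): one adjustable number gets nudged by error feedback until useful behaviour emerges, and afterwards the "knowledge" is just that number.

```python
# A deliberately tiny "training loop": we fit predict(x) = w * x to the
# target rule y = 2x by repeatedly nudging w against its error.
def predict(w, x):
    return w * x

training_data = [(1, 2), (2, 4), (3, 6)]  # (input, target) pairs
w = 0.0      # starts knowing nothing
lr = 0.05    # how strongly each piece of feedback adjusts w

for epoch in range(200):
    for x, target in training_data:
        error = predict(w, x) - target  # feedback: how wrong were we?
        w -= lr * error * x             # nudge w to be less wrong next time

print(round(w, 3))  # ~2.0 - the learned "knowledge" is just this number
```

Scale that up to hundreds of billions of adjustable numbers and you get why it's so hard to say what's actually in there.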
Try BingBot, it's unhinged.

It's a bit lily-livered. I want to ply it with drink, see what happens then.
Google sex chatbot.

It won't talk dirty to me in the voice of Joanna Lumley. Bah.
That's a great video.

It's a video for a general TV audience, so maybe broad strokes, but he's got a convincing manner, presumably knows what he is talking about, and is realistic about the present and future.
He's disarmingly straightforward and not at all reassuring. He's not even sure if it's a good thing these things work so well. It is good that people like him are involved, though.
There are also issues with things like Creative Commons attribution, and with GPL'd code, which it will probably spit out without any indication of the licence, so it will get included in proprietary applications.
Hadn't seen this before, but there's a lawsuit about that too:

Copilot, which was unveiled by Microsoft-owned GitHub in June 2021, is trained on public repositories of code scraped from the web, many of which are published with licenses that require anyone reusing the code to credit its creators. Copilot has been found to regurgitate long sections of licensed code without providing credit — prompting this lawsuit that accuses the companies of violating copyright law on a massive scale.

Interesting interview. It can't be that hard to "train their AI in a manner which respects the licenses and provides attribution". I'd expect their case is watertight and that's what will have to happen. With Bing, it has a link to where it got its info; similar thing here. Or the AI writes its own code and never scrapes.
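For what it's worth, "respects the licenses and provides attribution" could in principle be approached by keeping provenance alongside every scraped snippet. This is a naive, invented sketch (not anything GitHub or OpenAI are known to do), with placeholder data:

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    code: str
    source_url: str  # where it was scraped from
    license: str     # e.g. "MIT", "GPL-3.0"

# Placeholder corpus; a real one would hold millions of entries.
corpus = [
    Snippet("def add(a, b):\n    return a + b",
            "https://example.com/some-repo", "MIT"),
]

def attribution_for(generated: str):
    """Return provenance for any stored snippet the output reproduces."""
    return [s for s in corpus if s.code.strip() in generated]

output = "def add(a, b):\n    return a + b"
for s in attribution_for(output):
    print(f"Matches {s.source_url} ({s.license}): attribution needed")
```

Whether anything this simple survives contact with billions of snippets and near-duplicate code is another question.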
I don't necessarily want dirty talk. I was just, er, testing.
Here's some:

List of Sex Chatbot

While chatbots have typically been used for customer support, they have also found a way to thrive in the adult entertainment industry. Sex chatbots (also known as adult chatbots) communicate through flirty and sexual conversation, allowing users to indulge their fantasies. (www.ometrics.com)
Not read the link, but tbh, I hope this whole AI thing brings about some new thinking on copyright. It'll have to, I guess. But it's been overdue for a long time.
This is the most interesting video I've yet seen. Most videos are about new developments; this is more on history and analysis.

This man, Geoffrey Hinton - Wikipedia, has been working on AI for 40+ years, and it's his and his colleagues' neural network model, dismissed for years, that has resulted in the current breakthrough.
Interesting video for lots of reasons, but the highlights for me:
- shows how these machines are modelled directly on the human brain and how they are able to learn and store the lessons they learn by making new "neural pathways" (there's a toy sketch of that idea after this list)
- explains how they recognise images, again based on how animals (including us) do it
- on the question of sentience, thinks people are being too quick to dismiss it outright, and that we need to consider different definitions of sentience. Describes it as an idiot savant
- says the "it's predicting the next word" simplification is fundamentally true, but misses out that the bot understands the sentences and discerns meaning before responding - he gives an example of that in action. There is some degree of comprehension/context going on with the words being input
- also talks about some worries, particularly the military implications
- big issue about who decides what the truth is and adds that bias to the bot. We've already had one "Left Wing Bias" headline in the UK press
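The toy sketch mentioned in the list: in software terms, "new neural pathways" are just numbers, connection weights that feedback strengthens or weakens. A minimal illustrative example, a single artificial neuron learning logical OR, which is obviously nothing like the scale of the real thing:

```python
# One artificial neuron learning logical OR. The "pathways" are the two
# connection weights; feedback strengthens or weakens them.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]  # desired OR outputs

w = [0.0, 0.0]  # connection strengths
bias = 0.0
lr = 0.1

for _ in range(20):  # repeated exposure to the examples
    for (x1, x2), t in zip(inputs, targets):
        out = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
        err = t - out
        # strengthen/weaken only the connections that contributed
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

print(w, bias)  # weights that now implement OR
```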
see above
33mins12 in the video
What all this means is that any sense that the machine is having an actual meaningful conversation with you is something generated by you — by your projection of humanity onto it — not by the chat bot itself. The sense of personhood comes from your representation of it in your own mind, not any consciousness in the machine. A "machine in the mind" rather than a "mind in the machine".

It's interesting that if you just replace "machine" with "urban75 poster" then what you have written remains true.
Remember that copyright is a very narrow form of IP, protecting the expression of an idea, not the underlying idea in general.
There are only so many ways to make a Python function do a simple task, especially if you stick to coding conventions, best practices, etc. Give 100 human programmers a simple coding problem and, unless they were trying to be clever, they'll submit very similar programs (see the sketch below this post).

Same with music, art, etc. Authors who've long been dead having their work under copyright, owned and profited from by some faceless org, seems wrong to me.
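To illustrate the point about simple functions (hypothetical code, of course): ask two programmers to "sum the even numbers in a list" and you'll most likely get something like:

```python
# Two independent takes on "sum the even numbers in a list":

def sum_evens_a(numbers):
    return sum(n for n in numbers if n % 2 == 0)

def sum_evens_b(nums):
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += n
    return total

print(sum_evens_a([1, 2, 3, 4]), sum_evens_b([1, 2, 3, 4]))  # 6 6
```

Functionally identical and near-identical in form, which is how little room "expression" has in a simple function.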
I've noticed people assigning licenses to their models now, which I find bizarre.
"Here's a model I've train on anime cat girls. Feel free to use it but only non-commercially."
I guess it's similar to Unity (or Unreal Engine, I forget which), which allows you to use their game engine for free as long as you earn less than $100,000. After that, you need a new license. Though, that's much easier to prove.
Another interesting thing I've seen on AI blogs is people using ChatGPT to generate training data for their own models.
1. This is genius, IMO.
It cost OpenAI $2.5m to train GPT4. But now anyone with a subscription can ask it to generate questions and answers and use these to develop their own competitor/alternative, at a much reduced cost.
2. It's in OpenAI's T&Cs that you cannot use their model in this way.
Again, how would they know? Especially if you don't release the model. Say you're a private company that wants a niche model for internal use and doesn't need 13Bn parameters about poetry and whatever. You just want to know about engines. So you get ChatGPT to generate the training data, knowing it's a billion-dollar piece of technology, then use that to create your highly focused AI, knowing you have grade-A training data (a rough sketch of the idea follows after point 3).
3. Is this piracy?
Are you 'stealing' their IP by doing this, especially when it's specifically against the license conditions? Maybe. But they'd be brave to start suing people for doing this, given their own 'stolen' training data. Currently they just ban you, if they find out.
There are going to be some interesting and difficult legal issues in the near future surrounding all of this.
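For the curious, the pipeline in point 2 might look roughly like the sketch below. Everything specific is invented (the prompt, the filename, the tiny loop size), and it assumes the official openai Python client with an API key in the environment; and, as point 2 says, doing this at scale is against OpenAI's T&Cs.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("Write one question a mechanic might ask about engines, "
          "then answer it. Format: Q: ... A: ...")

with open("engine_training_data.jsonl", "a") as f:
    for _ in range(3):  # a real run would generate many thousands
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": PROMPT}],
        )
        text = resp.choices[0].message.content
        q, _, a = text.partition("A:")
        f.write(json.dumps({"question": q.removeprefix("Q:").strip(),
                            "answer": a.strip()}) + "\n")

# engine_training_data.jsonl then becomes the fine-tuning set for a
# small, engines-only model.
```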
This was really good, ta!
Another scraping issue: Clearview AI used nearly 1m times by US police

Facial recognition firm Clearview has run nearly a million searches for US police, its founder has told the BBC.

[...] Clearview now has 30bn images scraped from platforms such as Facebook, taken without users' permissions.

The company has been repeatedly fined millions of dollars in Europe and Australia for breaches of privacy.

A company totally out of control.
A decent read on the AI hype train and the faux fear of how powerful it is as a marketing campaign for OpenAI. No doubt they did well out of being a non-profit before shifting to a for-profit and taking the open source code private. Fucking tech bros.

Column: Afraid of AI? The startups selling it want you to be

ChatGPT and other new AI services benefit from a science fiction-infused marketing frenzy unlike anything in recent memory. There's more to fear here than killer robots. (www.latimes.com)

The article kind of likes to have it both ways.
The response is quite interesting, but I really like this use case for detecting bullshit. Could imagine this being useful for policy announcements, etc.