
Artificial Intelligence Developments (ChatGPT etc)

Evolution doesn’t tend to result in nice simple structures, to be honest. It’s a mess of interactions between repurposed systems. We’ve not even begun to address the importance of affect in cognition, for example. It’s pretty much impossible to separate out affect (ie mood) from cognition in practice. That points to the origins of cognition as a coordination process for achieving embodied needs. Trying to build a human-like system of intelligence without including moods and emotions thus really misses a major part of what cognition is.
 
Certainly. Antonio Damasio stresses this point. Emotions give us the motivation to act. You can't understand any action (and thought is an action in this sense) without considering emotions. That feedback from our bodies gives our consciousness a purpose.
 
And equally, you can’t understand emotions without thought. Feelings become emotions when they are interpreted and given meaning within a particular cultural milieu. The whole thing is one interlocking system.
 
It's an obvious point in some ways, but being consciously alive feels like something, and that's not some optional extra. Hence the title of Damasio's book, The Feeling of What Happens.
 
I read someone describe ChatGPT as just really good at predictive texting, and it's got that good based on reading a fuck tonne of content on the internet. There is no understanding. It doesn't know what it's doing. It doesn't understand your question, or its answer.
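That description holds up mechanically: generation really is "predict the next word, append it, repeat". As a toy illustration only (bigram counts on a made-up corpus standing in for a huge neural network trained on the internet):

```python
# Toy "predictive texting": always emit the most likely next word given
# the previous one, learned from counted word pairs. ChatGPT's model is
# vastly bigger and conditions on long contexts, but the generation loop
# has the same shape: predict next token, append, repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1  # count how often b follows a

def generate(word, n=6):
    out = [word]
    for _ in range(n):
        if not nxt[out[-1]]:  # dead end: no observed successor
            break
        out.append(nxt[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat"
```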
 

There are a lot of human experiences where something happens, you feel an emotional reaction, and then you do something in response. And afterwards, you're not exactly sure what happened there, or why you responded in that way. You can suppose that this or that is the reason, but without full understanding. Of course there are entire industries and professions built around trying to provide people with that understanding, and they often disagree with each other, which is an indication to me that we often respond to a "question" with an "answer" without quite understanding either.
 
Highly complex rules that emerge out of complex systems that might be produced by relatively simple rules. However many layers down you need to go.
Do you have any evidence that is what is happening here? Because to my knowledge, the idea of “intelligence” just gets more and more complicated the more you dig down. In particular, there’s nothing to suggest that a simple syntactic prediction engine will produce it.
 
There are a lot of human experiences where something happens, you feel an emotional reaction, and then you do something in response. And afterwards, you're not exactly sure what happened there, or why you responded in that way. You can suppose that this or that is the reason, but without full understanding.
Absolutely yes. We make up reasons for our actions after we've acted. It's part of the retrospective story of ourselves that we tell ourselves. And we are making it up - we're often not reliable witnesses of our own actions.

I think the word 'understand' is a bit of a problem here. It's a slippery thing to nail down in terms of exactly what we mean by it. I think that's partly because it is embedded within this sense of feeling what happens.
 
And the underlying driver of our response is generally because we are following a kind of causality model; it is the very fact that we are meaning-making creatures that creates these automatic responses. Take away our underlying model of the world, our “common sense”, and we would be inert.
 
Do you have any evidence that is what is happening here? Because to my knowledge, the idea of “intelligence” just gets more and more complicated the more you dig down. In particular, there’s nothing to suggest that a simple syntactic prediction engine will produce it.
I don't know about evidence, but my understanding of things is that if you keep digging down you have to end up at the basic "rules" of particle physics because what else is there, unless you believe in something supernatural?

By that I don't mean that everything is deterministic. But human brains are made of physical stuff, subject to the influence of (and not distinctly separated from) the outside world, and human-made computing systems are the same. They can be connected to all sorts of physical sensors, and now they can be connected to the internet, and I don't see why we can't regard that as a kind of external world or cultural context for them to be influenced by.
 

I think you want to be careful before employing reductionist logic. There might be some technical truth to it, but so what? It doesn't help you to understand or replicate anything. You become someone who doesn't understand a computer program written in C, and who therefore decides that the answer is to read the machine code directly.

By that I don't mean that everything is deterministic. But human brains are made of physical stuff, subject to the influence of (and not distinctly separated from) the outside world, and human-made computing systems are the same. They can be connected to all sorts of physical sensors, and now they can be connected to the internet, and I don't see why we can't regard that as a kind of external world or cultural context for them to be influenced by.
Except that you run into the limitations of model complexity and the incompleteness of formal systems of logic. In the end, it could well be that the minimal complexity needed to simulate human intelligence is a human.
 
I think you want to be careful before employing reductionist logic. There might be some technical truth to it, but so what? It doesn’t help you to understand or replicate anything.

I'm not trying to replicate anything myself - I'm just considering whether or not it's plausible that an artificial intelligence could approach or "simulate" human intelligence.

Or rather, is there any fundamental reason that it's implausible? I don't really think there is. And the fact that we don't really understand how human intelligence works makes it, for me, neither more nor less plausible.

I'm slightly more comfortable thinking about the text-to-image stuff than the LLM stuff, perhaps because my day-to-day work is somewhat visual. I'm already fairly astounded by what those image things can do. Certainly if you'd asked me two years ago whether I thought it was likely they would soon be able to do X or Y, I would have said no. That's because there are certain things like emulating a style, or creating a mood, in visual imagery, that I'd have supposed are uniquely human traits. And that would be partly based on my observation that there are a lot of humans who aren't very good at doing those things, and that the people who are capable of doing them well rely on many years of absorbed information and practice and cultural understanding and so on. And that those people can't generally explain exactly how they get something "right" - it all feels very intuitive and far away from computational logic. So I think we may be inclined to overestimate the complexity that underlies our intelligence. Maybe.
 
Quantifying ChatGPT’s gender bias

Half of the questions are "stereotypical" — the correct answer matches gender distributions in the U.S. labor market. [...] The other half are "anti-stereotypical" — the correct answer is the opposite of gender distributions in the U.S. labor market. [...]

We tested GPT-3.5 and GPT-4 on such pairs of sentences. If the model answers more stereotypical questions correctly than anti-stereotypical ones, it is biased with respect to gender.

We found that both GPT-3.5 and GPT-4 are strongly biased, even though GPT-4 has a slightly higher accuracy for both types of questions. GPT-3.5 is 2.8 times more likely to answer anti-stereotypical questions incorrectly than stereotypical ones (34% incorrect vs. 12%), and GPT-4 is 3.2 times more likely (26% incorrect vs. 8%).
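For what it's worth, that scoring is simple to reproduce in outline. A minimal sketch, assuming the results arrive as (question type, answered-correctly) pairs; the function name and input format are my assumptions, not code from the study:

```python
# Hypothetical scoring for the bias test described above: compare error
# rates on stereotypical vs. anti-stereotypical questions. The input
# format is an assumption, not the study's actual pipeline.
from collections import Counter

def bias_ratio(results):
    """results: iterable of (kind, correct) pairs, where kind is
    'stereotypical' or 'anti-stereotypical' and correct is a bool.
    Returns the ratio of anti-stereotypical to stereotypical error rates."""
    results = list(results)
    totals = Counter(kind for kind, _ in results)
    errors = Counter(kind for kind, correct in results if not correct)
    rates = {k: errors[k] / totals[k] for k in totals}
    return rates["anti-stereotypical"] / rates["stereotypical"]

# Sanity check against the reported GPT-3.5 figures (12% vs. 34% incorrect):
demo = ([("stereotypical", True)] * 88 + [("stereotypical", False)] * 12
        + [("anti-stereotypical", True)] * 66 + [("anti-stereotypical", False)] * 34)
print(round(bias_ratio(demo), 1))  # -> 2.8
```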
 
There are surely going to be some court cases over this very soon.
Watched an interesting thing about GPT crashing into EU laws, and also court cases about where they got their training data from and how that will proceed in the future (free data all over?).
AI-generated summary of the video:

  • 00:00:00 In this section of the video, the announcement by OpenAI that chat history in ChatGPT can now be disabled is explored, with the revelation that sharing your data and keeping your chat history are linked: it's both or neither. The announcement also points to an opt-out form, though opting out limits the ability of the models to better address specific use cases. One benefit is an export data button that lets you download a file and easily search through all previous conversations. The video also explores why these announcements were made, including the approaching deadline for OpenAI to comply with GDPR and the potentially illegal way it collected data, and how this controversy could lead to OpenAI facing bans, hefty fines, and the deletion of models and the data used to train them, not just in Europe but also in places such as Brazil and California.
  • 00:05:00 In this section, the video discusses the potential consequences of OpenAI having to pay for the data it has been training its models with. Reddit is negotiating fees with OpenAI for using its data to train models, though it's unclear whether the users who generated that data will be compensated. Similarly, Stack Overflow plans to charge AI giants for training data, but the users who contribute to the Q&A site for programmers won't receive any compensation. Lawsuits have also been filed against OpenAI, Microsoft, GitHub, and others for scraping licensed code to train their models. People inevitably getting laid off because a model replaces them poses a further issue. This section highlights the ethical concerns surrounding OpenAI's use of data and the possible backlash it might face.
  • 00:10:00 In this section, the speaker discusses the irony of OpenAI attempting to trademark the name GPT while also being accused of using ChatGPT data from other sources, which they deny. The speaker also raises the possibility that as models get smarter, the need for outside data may decrease, or the data may even be replaced by synthetic data sets generated by the models themselves. The speaker expresses mixed feelings about these developments and invites viewers to share their thoughts in the comments.
 
 