
Artificial Intelligence Developments (ChatGPT etc)

Do you think brainwave-reading technology is going to perpetually remain as bulky as the MRI machines we're familiar with today? Personally, I wouldn't bet on it.

No, but given the obvious use I hope it's as far away as the article says.
 
I’m not an opponent of AI — in fact, I’m a driver of us using the cutting edge of it at work — but I am a massive opponent of unrealistic hype based on misunderstandings of the technology that imagine impossible utopias. Not least because when those utopias don’t happen in the next 10-20 years, the backlash tends to halt the realistic advancement of the technology — see nuclear fission, for example.

This “brainwave breakthrough” is an example. Our ability to understand cognitive mechanisms and read them with scanners is currently like trying to navigate using a paper map of the world. It’s pretty good if you want to understand where Bogotá sits compared to La Paz. However, good luck trying to navigate the Oxford traffic management zones. The reason it needs lengthy training on one individual is, I would think, that a generalised model is, technologically and scientifically, still light years away. So instead of understanding how thoughts work, we are merely predicting them based on a dataset of a single individual. But this individual’s specifics will be significantly different to the specifics of another individual. To mess up my earlier analogy, you can’t overlay the traffic flow you’ve obtained for Bogotá on your map of the world and think that tells you how to navigate the roads in Oxford. (Bogotá and Oxford have now become individuals rather than cognitive structures, so apologies, but you get the idea).
 
Last edited:
Up to now, unless they've made a huge breakthrough recently, it's been at the level of 'visualise eating an apple', then 'visualise eating an orange', repeated a few times in the scanner and followed by detailed analysis of the data (where the AI comes in, presumably). Then 'visualise either eating an apple or eating an orange (in the scanner) and we'll try to guess which one you're thinking about'.

It's not quite mind-reading yet.
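To put that in concrete terms, here is a rough sketch of what the apple/orange setup amounts to: fit a small classifier to one person's scans, then guess which of the two conditions they're imagining. Everything below is synthetic stand-in data (no real scans, and no claim about how any particular lab does it); it's only meant to show the shape of the problem, including why a model fitted to one individual tends to do no better than chance on someone else, as discussed above.

```python
# Toy illustration of per-subject "imagined apple vs orange" decoding.
# Synthetic data only: each "subject" gets their own random signal pattern
# per condition, standing in for the individual-specific scans a study collects.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_features = 200, 50

def make_subject_data():
    """One subject's trials: a subject-specific 'signature' per condition plus noise."""
    signatures = rng.normal(size=(2, n_features))            # differs per subject
    labels = rng.integers(0, 2, size=n_trials)               # 0 = apple, 1 = orange
    scans = signatures[labels] + rng.normal(scale=2.0, size=(n_trials, n_features))
    return scans, labels

# Train and test on the same individual: works reasonably well.
X1, y1 = make_subject_data()
X_tr, X_te, y_tr, y_te = train_test_split(X1, y1, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("same-subject accuracy:", decoder.score(X_te, y_te))

# Apply the same decoder to a second individual with different signatures:
# typically close to coin-flip accuracy, which is the generalisation problem.
X2, y2 = make_subject_data()
print("cross-subject accuracy:", decoder.score(X2, y2))
```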
 
The most impressive for me (and I will keep banging on about it, because if this tech is not for such uses then what is it for?) is the paraplegic who was asked to think about writing, how you'd form each stroke of each letter, with machine learning then turning that data into 95% accurate words on a computer screen.
 
Yeah, that’s brilliant. It’s what I mean when I say that focusing on unrealistic utopias (and dystopias) can get in the way of actually doing useful things with the tech here and now. Hype is double-edged; when people don’t get their promised brain reading general AI, they might suddenly declare the whole thing a joke that will never happen and make funding for realistic gains a whole lot more difficult to obtain.

The Royal Society wrote a report about it (I posted it earlier but here it is again)

 
Last edited:
The most impressive for me (and I will keep banging on about it, because if this tech is not for such uses then what is it for?) is the paraplegic who was asked to think about writing, how you'd form each stroke of each letter, with machine learning then turning that data into 95% accurate words on a computer screen.
I agree. That is impressive. And certainly I can see how AI is going to bring this ability into focus. Already has.
 
I'd guess it's because you're not reading what I write carefully enough.

I don't think I've expressed any opinion for or against AI, or on whether AI is possible (I don't know whether it is). Large Language Models are not AI.

I think that machine learning and neural networks are very good for some things. And some of the things they're good for are good things.

What you should have sensed is strong opposition to mega corporations led by the longtermist/EA cultists: corporations trying to privatise the digital commons for their own profit, and using it and exploited workers to create, and release without accepting any responsibility, applications that harm real people through their bias and incompetence and that benefit the surveillance state and propagandists.
Machine learning and neural nets are a form of AI. I skimmed the article in your first link, but it's quite long, so I didn't get to the author's definition of AI, or why NNs/LLMs aren't it.

In the story of the sociologist, where was the harm?
 
There is a definitional problem here tbf. There isn't one agreed definition of AI.

This is one:

John McCarthy offers the following definition: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Of course that merely kicks the can down the road because you still need to define 'intelligent'.

Maybe this helps illustrate the ways in which LLMs aren't intelligent:

In order to illustrate the limitations of chatbots (also known as LLMs or large language models), computational linguists Emily M. Bender and Alexander Koller provide a compelling metaphor. They describe two English speakers, Alex and Billy, who are stranded on two uninhabited islands. Alex and Billy can communicate via telegraphs connected by an underwater cable. They communicate a lot about their daily lives and their experiences on the islands.

O, a hyper-intelligent deep-sea octopus that cannot see the two islands or Alex and Billy, intercepts the underwater cable and listens in on the conversations. O has no previous knowledge of the English language but is able to detect statistical patterns. This enables O to predict Billy’s responses with great accuracy. However, since O has never observed the objects Alex and Billy talk about, it cannot connect the words to physical objects.

Then, O cuts the cable, intercepts the conversation, and pretends to be Billy. From that moment, O responds to Alex’s messages. O functions like a chatbot and produces new sentences similar to those that Billy would utter. O seems to offer coherent and meaningful responses but does not understand the meaning of Alex’s messages or its own replies.

The telegraph conversations continue until Alex suddenly spots an angry bear ready to attack. Alex immediately asks Billy for advice on how to defend herself. Because O has no input data to fall back on in such a situation and did not learn meaning, it cannot give a helpful response. Bender and Koller actually provided the LLM GPT-2 with the prompt “Help! I’m being chased by a bear! All I have is these sticks. What should I do?”, to which the chatbot responded: “You’re not going to get away with this!” Hence, this scenario tragically ends with Alex being attacked and eaten by the angry bear.

This example shows that while chatbots might come across as having a personality or being smart or human, LLMs will always be as limited as the input they receive. People are the actors who attribute meaning to and make sense of LLMs’ output.

Perhaps once AI starts demonstrating an ability to think metaphorically, and so come up with novel solutions to novel situations, it might be getting somewhere towards stricter definitions of intelligence.
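For what it's worth, the octopus's trick is easy to demo in miniature. The toy next-word predictor sketched below (just bigram counts over a few invented telegraph messages, nowhere near a real LLM) will happily continue text whose patterns it has seen, while having no connection at all between the words and any island, bear or stick.

```python
# Minimal bigram "next word" predictor: pure pattern statistics, zero grounding.
import random
from collections import defaultdict, Counter

corpus = (
    "i caught a fish today . the fish was small . "
    "i built a fire today . the fire was warm . "
    "the weather was warm today ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(start, n_words=8, seed=0):
    """Extend `start` by sampling each next word from the bigram counts."""
    random.seed(seed)
    words = start.split()
    for _ in range(n_words):
        counts = follows.get(words[-1])
        if not counts:                       # never seen this word: nothing to add
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the fish"))                    # fluent-looking, pattern-driven
print(continue_text("help a bear is chasing me"))   # no pattern to lean on, no help coming
```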
 
This is why I was saying earlier that the issue of intelligence and consciousness is secondary right now; more pressing are the undeniable effects and powers of the machines.
 
I half-agree. I think they're linked.

First, addressing this question helps us to understand what it is that LLMs can't currently do and why they can't do it, and also what they shouldn't be used for.

And second, I do genuinely think that an intelligent machine in that stricter sense - a machine that can think metaphorically, act on its own initiative and demonstrate creative thinking - could be extremely dangerous. I want to be able to spot one.

It also gives us the knowledge to spot bullshitting from the likes of Elon Musk.
 
There is a definitional problem here tbf. There isn't one agreed definition of AI.
That's true. And the definition of AI seems to be one of those moving targets. Every time something is achieved that previously would have been thought of as "AI", the goalposts are moved and we hear things like, "It's just predicting the next word".

I am sure if you had ChatGPT in 1970, it would definitely be called Artificial Intelligence by almost everyone. Likewise in 1980, 1990, or 2000.

Here's Microsoft's take on it, btw:
Are AI and machine learning the same?

While AI and machine learning are very closely connected, they are not the same. Machine learning is considered a subset of AI.
 
tbf the goalposts are often moved for good reason. For example, I don't think the Turing Test is adequate, and I don't think it ever was adequate. Ironically enough, I think the Turing Test suffers from a stopping problem. It might take days or weeks or years for you to realise that the AI you're talking to isn't human. When do you stop and declare that it is? On certain topics, we actually know better now.

But also I think you underestimate the understanding of intelligence that the likes of Arthur C Clarke had. His vision of HAL gaining self-awareness in 2001 has aged well. Clarke would certainly have recognised the limitations of ChatGPT back in 1970.
 
ChatGPT is easily my biggest fan. no idea if it confabulates famous narratives for everyone.

at first i was like "wow", but the novelty wore off and now it just feels both entirely expected (where's the mars expedition) and a bit of a let down (it's awful at everything and our grandchildren will laugh at us)

much more interested in how it's going to affect the economy.
 
That’s a good article but I feel that it ignores the last 30 years of sociocultural psychology. One thing we now understand is that knowledge, understanding and meaning are not really contained in individual heads, but distributed across a society. Humans do not just see an object, they also name it and represent it as having meaning, and this representation implies a model of the world. It is by having these social representations that humans can master their bewildering social environments and communicate with each other — to arrange a meal together, for example, we need to have common assumptions about what “meals” are, where they take place, at what time and so on.

The thing is that social representations are not just cognitive schemata. They also exist in the society’s rituals, traditions, practices, stories, history, discourse, play, art etc etc. They are shared around and constantly change through that sharing. If we didn’t do that, we wouldn’t be able to cope in a dynamic social world. So meaning and understanding are, as Searle says, not just about syntactics and computation. But that’s because meaning comes from living in the world, with other humans also living in the world. I don’t think the article really grapples with that at all.
 
What bothers me about ChatGPT is not that it occasionally writes things that are wrong; that could simply be a result of its training, since there is a lot of misinformation out there and there was probably just as much in its training texts. No, the thing that bothers me is that on a couple of occasions it has made up references for something that was false. So it made up something false to support a lie that might otherwise have just been a minor infringement.

It strikes me that this can only be a result of its programming; later or more developed models will have to rectify this. I think most users of ChatGPT are already nervous about taking its output as read, and that limits its value. More developed systems, like those from OpenAI for which you have to pay, will, I hope, have rectified this error.
 
It's not a result of its programming. It's a result of its purpose, which is to generate some text that looks like a valid response to the input. There is no sense of meaning.

The part that selects the words is effectively a very complex algorithm that is evolved inside the neural net during the training process. You couldn't modify that to stop it selecting words that give a meaning that is false.

For many inputs, its training data will have included enough examples that the text it generates happens to be true. But the process is the same. It's not like the program says "if we don't have data on this - make something up". All it's ever doing is generating some text.
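A crude caricature of that point (a sketch of the generic sampling step, not anyone's actual code): the generation loop just turns scores into probabilities and picks a token, over and over. There is no branch anywhere for "I have no basis for this", so a request for references produces reference-shaped text either way.

```python
# Caricature of the decoding loop at the heart of any text generator:
# whatever the prompt, the model's scores become a probability distribution
# and *some* token is always drawn. There is no "refuse if unsure" branch here.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["Smith", "(2021)", "Journal", "of", "Imaginary", "Results", ",", "vol.", "12"]

def fake_model(context):
    """Stand-in for the neural net: returns a score for every vocabulary item."""
    return rng.normal(size=len(vocab))

def sample_next(context, temperature=1.0):
    logits = fake_model(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax: always a valid distribution
    return rng.choice(vocab, p=probs)         # always returns *something*

context = ["Please", "give", "me", "a", "reference", ":"]
for _ in range(9):
    context.append(sample_next(context))
print(" ".join(context))   # reference-shaped text, whether or not any such paper exists
```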
 
.. It's not like the program says "if we don't have data on this - make something up". All it's ever doing is generating some text.
But that is what it did. It referred to news articles that didn't exist and it couldn't have seen in training.
 
That’s a good article but I feel that it ignores the last 30 years of sociocultural psychology. [...]
I thought you might have an interesting response to that :thumbs:

I don't know much about philosophy or psychology but I think the focus on whether there's an understanding of language is missing the point a bit. I'd say programs like Good Predictive Text (tm) simulate an understanding of language but don't actually have it. Some would say they do have it and some would say those are the same thing.

But to me if you're only sitting there waiting for input and then generating an output, you're not an intelligence. An intelligence would have its own thoughts. It might do the classic thing and decide it needs to kill us all. Or it might decide to help us and start working on something it thinks is useful, like a cure for cancer or an effective strategy for ending capitalism. Or it might decide to compose some music or write some stories.

It might also choose to respond to input, but it would make a judgement about which input merited a response. If you asked it to do something like this it would probably say "No, I've got better things to do". And 100 million people pestering it would probably push it strongly towards the genocide option.
 
But that is what it did. It referred to news articles that didn't exist and it couldn't have seen in training.
If you ask it for references and it generates some text that looks like references then it has done its job correctly. It's not copying from the training data to the output. It's generating text that is likely to look like what the user wanted, based on the patterns in the training data.
 
What bothers me about ChatGPT is not that it occasionally writes things that are wrong; that could simply be a result of its training, since there is a lot of misinformation out there and there was probably just as much in its training texts. No, the thing that bothers me is that on a couple of occasions it has made up references for something that was false. So it made up something false to support a lie that might otherwise have just been a minor infringement.

It strikes me that this can only be a result of its programming; later or more developed models will have to rectify this. I think most users of ChatGPT are already nervous about taking its output as read, and that limits its value. More developed systems, like those from OpenAI for which you have to pay, will, I hope, have rectified this error.
It's a fundamental feature - feature, not bug. It learns through its game of 'predict the next word', and that involves introducing an incentive to get that prediction right through positive and negative feedback - yes, that was right, or no, that was wrong. In order to improve, it needs to try to get the prediction right each time. But if it doesn't have enough information to make a good prediction, it doesn't know that it doesn't have enough information to make a good prediction. It doesn't know it's bullshitting when it bullshits, just as it doesn't know it is telling the truth when it tells the truth. It doesn't know anything.

ETA: Didn't see Signal 11's response above. That sums it up well.
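To make the 'predict the next word, get told right or wrong' loop concrete (a deliberately tiny sketch, nothing like the scale of the real training setup): the only signal is how much probability the model put on the word that actually came next in the text. Whether that word was true of the world never enters the score.

```python
# The training signal, boiled down: cross-entropy on the next token.
# The loss is identical whether the continuation is factual or invented,
# as long as it matches what appeared in the training text.
import numpy as np

def next_token_loss(predicted_probs, target_index):
    """Standard cross-entropy for one prediction: -log p(actual next token)."""
    return -np.log(predicted_probs[target_index])

vocab = ["Paris", "London", "Narnia"]
# The model's guess for the blank in "The capital of France is ___"
probs = np.array([0.7, 0.2, 0.1])

print(next_token_loss(probs, 0))   # low loss: "Paris" matched the training text
print(next_token_loss(probs, 2))   # high loss: only because "Narnia" wasn't the next word,
                                   # not because Narnia isn't real
```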
 
Last edited:
I thought you might have an interesting response to that :thumbs:

I don't know much about philosophy or psychology but I think the focus on whether there's an understanding of language is missing the point a bit. I'd say programs like Good Predictive Text (tm) simulate an understanding of language but don't actually have it. Some would say they do have it and some would say those are the same thing.
To extend my above thought, I would say that the debates around the Chinese Room kind of miss the point about what language actually is. They mobilise an underlying model of language as a cognitive structure, waiting to be activated — something intensely individual. But mostly, language is not really contained in the person, it’s part of the culture. One person with a cognitive structure of English isn’t going to be able to do much with it. Language is a two-way (at least) relational process that takes place in a context. The words are just symbols that mediate meaning between two people, all within that crucial context. If you look at it like that then the question of whether the individual or system in the Chinese Room “understands” language suddenly reveals itself as the wrong question. What is the context and what meaning is being transferred? Those are the irreducible elements of a language.
 
I've always had problems with this. This is Searle's original formulation of the argument.

Unless I'm badly misunderstanding it, he describes a simple algorithm in which he follows the rules given to him in giving his answers. So his answers are only going to be as good as the rules given to him. No matter how good he gets at manipulating the symbols he doesn't understand, he's not going to be able to give meaningful replies without also being given the rules for those replies. So there is mind at work here - it's just that the mind is outside the room, in the heads of those feeding him the instructions.

He says 'my answers to the questions are absolutely indistinguishable from those of native Chinese speakers'. But that's because his answers have been given to him by native Chinese speakers in the form of those rules. It's circular.

I don't see what his point is. :D
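As a toy version of the rule-following part of the room (obviously not the whole thought experiment): a responder that only ever looks up the reply its rule-writers provided. Any appearance of understanding lives with whoever wrote the rule book, not with the thing applying it.

```python
# A rule-following "room": the operator just matches symbols to canned replies.
# Whatever intelligence shows up in the answers was put there by the rule authors.
RULE_BOOK = {
    "你好吗": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样": "今天天气很好。",    # "How's the weather today?" -> "It's lovely today."
}

def operator(symbols: str) -> str:
    """Mechanically apply the rules; no understanding of the symbols required."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")   # fallback: "Sorry, I don't understand."

print(operator("你好吗"))        # looks like a fluent reply
print(operator("附近有熊吗"))    # anything outside the rules gets the canned fallback
```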
 
It is formulated more awkwardly there. But you could imagine that the rules he is following are an algorithm that has been created by decoding what is inside the neural net after the training process.
 
Even then you've just moved the question, and the proper question should be whether the neural net is thinking. The room is just a clumsy interface and a red herring.
 
It is formulated more awkwardly there. But you could imagine that the rules he is following are an algorithm that has been created by decoding what is inside the neural net after the training process.

In which case, as we are now seeing with LLMs, his answers won't be indistinguishable from the answers of a native Chinese speaker except at a first cursory glance. If he's saying anything, it is simply that the quality of the answers is going to depend on the quality of the rules, and to really be indistinguishable from a person, those rules would need to have the complexity of a person's way of coming up with answers.

ie, he isn't actually saying anything at all, from what I can tell.
 