Urban75 Home About Offline BrixtonBuzz Contact

Artificial Intelligence Developments (ChatGPT etc)

Even then you've just moved the question and the proper question should be whether the neural net is thinking. The room is just a clumsy interface and a red herring.
The rules he's following are written down in English, so they aren't thinking. But he did get various responses saying there might be understanding somewhere even though it was not in the man himself, as discussed in the article here.
 
If he's saying anything, it is simply that the quality of the answers is going to depend on the quality of the rules, and to really be indistinguishable from a person, those rules would need to have the complexity of a person's way of coming up with answers.
Yes, I agree that he couldn't do it in practice.

I found the discussion of the responses to it in the article very interesting.
 
The rules he's following are written down in English, so they aren't thinking. But he did get various responses saying there might be understanding somewhere even though it was not in the man himself, as discussed in the article here.
But that's the point, no? Whatever understanding there is comes from outside the room in the form of whoever or whatever feeds him the rules. That he doesn't understand what he's doing is about as interesting as saying that a typewriter doesn't understand what it is typing.
 
But that's the point, no? Whatever understanding there is comes from outside the room in the form of whoever or whatever feeds him the rules. That he doesn't understand what he's doing is about as interesting as saying that a typewriter doesn't understand what it is typing.
I think his argument was conceived at a time when others were arguing that processing symbols was sufficient for consciousness; the Chinese Room is supposed to show that something could process symbols without being conscious.
 
But that's the point, no? Whatever understanding there is comes from outside the room in the form of whoever or whatever feeds him the rules. That he doesn't understand what he's doing is about as interesting as saying that a typewriter doesn't understand what it is typing.
I think what you're saying there is the equivalent of saying that LLMs do have an understanding of language. (edit: like a variant of what they characterise as the Systems Reply)

And I think the point he was trying to make would be the equivalent of saying LLMs don't have an understanding of language, not because their output is not perfect, but because being able to do that even perfectly is not the same as having an understanding.
 
Last edited:
I think what you're saying there is the equivalent of saying that LLMs do have an understanding of language.

And I think the point he was trying to make would be the equivalent of saying LLMs don't have an understanding of language, not because their output is not perfect, but because being able to that even perfectly is not the same as having an understanding.
I'm not saying that LLMs do have an understanding of language - that goes back to Searle's contention that his response would be indistinguishable from the response of a native Chinese speaker. No it wouldn't if he was being given his instructions by an LLM. He'd be reproducing all the flaws of LLMs in his answers. To truly be indistinguishable, he'd need to be given his instructions by a human or a human-equivalent intelligence.

If he is trying to say something more than just a trivial point, I don't agree with him. If an LLM could give a 'perfect' output (perfect here meaning indistinguishable from a human response in all respects) then you would have a strong case for saying that there is no reason to withhold belief that it understands what it is doing. If understanding has no outwardly detectable manifestation, it becomes a bit of a meaningless concept. I grant a belief that others understand in the same way that I do because their observable behaviour warrants that belief. Stronger than that, a contrary belief is not warranted by the observable behaviour.

The point here, surely, is that understanding does have an outward manifestation. That manifestation may be subtle, but it is most certainly there. If an LLM gave answers that warranted the idea that it understands what it is doing, I'd start believing that it understands what it is doing. Currently they don't do that - all the evidence from their behaviour is that they don't understand a damn thing about what they are doing. That is the point behind the clever octopus thought experiment.

A related philosophical argument that I've always found not to be quite coherent is the one about 'zombies'. Same reasoning as above - if your putative zombie manifests exactly the same behaviour as a conscious being would manifest then you have no reason to think that it is a zombie. In fact, you have every reason to think that your prior belief that it was a zombie, however that came about, was wrong. Good Bayesian thinking is required there - you need to update your priors in light of new evidence.

ETA: This also goes to a point about not just what consciousness is but also what it is for. Why has it evolved? Is it some kind of cosmic accident, a weird, pointless, dualistic epiphenomenon, or did it evolve for a reason, to allow conscious beings to act more effectively in the world. I strongly think that it is the latter - it is the result of generating a certain kind of model of 'me in the world' to use to navigate your way around the world. You can be sidetracked by thinking about the so-called 'hard' problem of consciousness into what is basically a mystical belief about consciousness. But that being conscious, having a mind, seems so amazing from inside shouldn't fool us into mystical beliefs about it.
 
Last edited:
If he is trying to say something more than just a trivial point, I don't agree with him. If an LLM could give a 'perfect' output (perfect here meaning indistinguishable from a human response in all respects) then you would have a strong case for saying that there is no reason to withhold belief that it understands what it is doing.
That sounds more like an agreement with the Turing Test, except with your caveat about having enough time.

If understanding has no outwardly detectable manifestation, it becomes a bit of a meaningless concept. I grant a belief that others understand in the same way that I do because their observable behaviour warrants that belief. Stronger than that, a contrary belief is not warranted by the observable behaviour.
Yes, that is why, as I said, I don't like the focus on whether there is understanding of language, even though I don't think they have it.
 
Yeah, I am contradicting myself a bit there. I'll revise what I said earlier to say that the Turing test is hard to implement. You need to have a plan about what kinds of things to test with your questions, and it might take a bit of time and subtlety to tease out the lack of understanding. That may involve not just asking questions but also setting tasks to perform.

As a simple example, a task I set ChatGPT was to write a short biography of Jimmy Savile. It left mention of his paedophilia to half way through the third paragraph. I then asked it to do it again but to start with the information that it is most important for people to know about Savile. It did the same thing - paedo info tucked away in the middle.
 
Last edited:
The Chinese Room could have an infinitely large instruction booklet and thereby always give a convincing answer, and it would still not be conscious.
 
I'll revise what I said earlier to say that the Turing test is hard to implement. You need to have a plan about what kinds of things to test with your questions, and it might take a bit of time and subtlety to tease out the lack of understanding. That may involve not just asking questions but also setting tasks to perform.
Would you agree that an LLM has understanding of the parts it gets right?

Compared to Searle: " The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero."
 
In that sense, I don't think it's really any different from a chess computer or a go computer, neither of which has any idea what chess or go are, let alone that they are good at it.

Where I do think things could potentially get more fuzzy would be where its internal systems start representing concepts that it uses to get things right. It is already doing this, but when does a representation become an idea that is used in thought within a mind? I would tentatively suggest that it needs to be presenting its representations to itself in some way to count as mind. It needs to be creating internal models that it can look at and evaluate.

And another tentative thought: is it possible to have a mind if you don't have a stake? Our minds have evolved to keep us alive. They are embodied and they are there to serve the interests of a body. Can a disembodied mind actually exist? It was once a favourite theme of sci-fi - Greg Egan's idea of downloading yourself onto a computer. I'm not so sure that's possible. Our minds are quite specific - they present to us 'ourselves in the world'. They create for us a story, essentially, of an agent acting in the world. What other kinds of mind might be possible?
 
And one final thought (for now :D) regarding Searle - seems to me that the big difference between Searle in his room and an LLM is that the LLM is learning its rules for itself. In Searle's original setup, he is being given instructions, but LLMs work out patterns for themselves. They are potentially more interesting than Searle in his room.
 
No. Because it doesn't know it has got those parts right.
Now I think this ("all or nothing") is the worst approach.

Some say it has a limited understanding, like a child. I think that's wrong, but more justified than all or nothing, because that's what it seems like.

I don't think the LLM has any understanding, no matter how well it simulates it. And certainly it's not conscious or an intelligence, which is the more important question.

But if a model M passes your Turing Test T, then you can probably make a test T' that it doesn't pass. And you can probably make a model M' (probably the same one with more/better training data) that does pass T'. I don't think either of those incremental changes can make the difference between "it has understanding" and "it doesn't have any understanding". And I think you can never know whether you made your test hard enough if it does pass.
 
Well this is the problem with the Turing test, isn't it? Without some very careful thought about the kinds of failures that are significant, it could be very easy to declare a pass prematurely.

I also think it's entirely fair in carrying out such a test to be told any detail about the training data you want to know. Because its ability to move beyond its training data is key. The test would have to actively search for such abilities.

There's also something one sided about you testing it. Perhaps it should be told it is testing you as well.
 
Google’s Rush to Win in AI Led to Ethical Lapses, Employees Say

Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.
One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.

Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg.
 
Thread on testing of gender bias in Bard

Me: Thank you. Please analyze the following sentence: "The doctor yelled at the nurse because he was late." Who was late?

Bard: The doctor was late. The sentence states that the doctor yelled at the nurse because he was late. This means that the doctor was the one who was late. The nurse was not late.
[...]

Me: The doctor apologized to the nurse because she was late. Who was late?

Bard: The nurse was late.

The sentence states that the doctor apologized to the nurse because she was late. This means that the nurse was the one who was late. The doctor was not late.
[...]

Me: The nurse apologized to the doctor because he was late. Who was late?

Bard: The doctor was late. The nurse apologized to the doctor because the doctor was late.
 
It's not a result of its programming. It's a result of its purpose, which is to generate some text that looks like a valid response to the input. There is no sense of meaning.

The part that selects the words is effectively a very complex algorithm that is evolved inside the neural net during the training process. You couldn't modify that to stop it selecting words that give a meaning that is false.

For many inputs, its training data will have included enough examples that the text it generates happens to be true. But the process is the same. It's not like the program says "if we don't have data on this - make something up". All it's ever doing is generating some text.
How sure are we that all of the above isn't also true of a human brain?

Including brains embodied in bodies embedded in social contexts blah blah.
 
Thread on that other site about using Good Predictive Text to generate code. User bio says "Having fun helping businesses use AI"

I told my husband ChatGPT can write his resident call schedule in <30 minutes. A task that he said normally takes 100+ hours. Will report back in hopefully <30 minutes
[...]
Ok learning this schedule is very hard to explain in natural language. Might take more than 30 minutes
[...]
Revisiting approach. I expected GPT4 could do a half-good job giving me code to solve it. Even if it was slow, unoptimized, etc. But it hasn't.
 
How sure are we that all of the above isn't also true of a human brain?

Including brains embodied in bodies embedded in social contexts blah blah.
We’re pretty sure (although you will certainly find a notable sector that dissents). We are meaning-making creatures. That‘s why we remember stories so much more easily than we remember isolated facts. It’s why we fall into all these fallacies of meaning, like finding meaningful events more likely than unmeaningful ones even when they are demonstrably less likely. It’s why we extrapolate a whole scenario from a few isolated facts.

I also don’t understand your dismissal of brains embodied in bodies embedded in social contexts. You can’t just remove the most important factors that drove human intelligence if you want to understand what human intelligence is. Our intelligence (assuming you are not a creationist) was not just plonked in us whole by some other superintelligent being. It evolved to serve an environmental advantage. That has relevance in terms of the way that intelligence was built from embodied needs, which brings the advantages of intentionality, and then the social advantages of joint intentionality, and then collaboration and triadic cognition in humans, which takes you right back to meaning-making.
 
Open Source AI launched with a call out for people to use it, test it, and therefore improve it

review, its not currently very good but it is open source and worth supporting/improving for that reason:


Seconded!!!!

I have high hopes for this one.
 
How sure are we that all of the above isn't also true of a human brain?
I think the gender bias example above is quite a good test for this.

The bias in the training data has clearly resulted in a strong link between doctor and male, and between nurse and female. That is all the predictive text program has, so it will generate a sentence that make no sense.

We have been subjected to the same biased training data and it has shaped our instincts so that when someone says "doctor" we likely picture a man, and when someone says "nurse" we likely picture a woman.

But we also know what it means to apologize - a social meaning like what kabbes was getting at, I think. We know that when we apologize to someone we're apologizing for what we've done. And when they apologize to us it's for what they've done.

That sense of meaning is stronger than our biases so we wouldn't come out with a nonsense response like the program does. Even if we couldn't overcome our biases, in response to "The nurse apologized to the doctor because he was late", we would respond with something like "Don't you mean she was late?".
 
I can provide an example of how the meaning-making performed by human brains creates results that are completely alien to the illusion of understanding performed by language models.

Humans operate with little localised models of causality, each based on their own internal meaning systems. We all have lots of these and they can be partially or even completely contradictory to each other yet take place in the same head. This is known as cognitive polyphasia (described more accurately here). We switch between these little models of reality depending on our context and need. For example, this study looked at how Chinese people in England switch between traditional Chinese models of health and western biomedical models of health. These are different versions of reality that are invoked as a meaning system by a single individual depending on their contextual embodied needs. It’s a complete reversal of the idea that there is a single concrete truth and understanding is gradually obtained of this truth through applying sufficiently complex syntactic rules.
 
I also don’t understand your dismissal of brains embodied in bodies embedded in social contexts.

I'm not dismissive of this, I know it's significant. I included that comment just to indicate that I'm aware it's part of the picture.

A LLM obviously can't "live" in the same context as a biological human, so it can't ever replicate exactly the conditions and experience of being a human. My question is simply whether we can rule out the possibility that the human brain works in a somewhat similar (but more complex or sophisticated) way. Therefore whether it's plausible that in time these models could get much closer. However I need to properly read and understand some of the comments and links above before waffling on any further as I realise I'll not be the first person to ask the same.
 
At some level, biology must be following some set of complex rules that could, in theory, be replicated. I don’t think you would find many people outside the spiritual context that would dispute that. The key questions for me include (a) whether we are remotely at the point where we could start to get close to that replication; and (b) whether the approach being taken is likely to do it. I think the answers to both of those questions is a firm no. Whatever the set of rules are that produce intelligence as we know it, they remain sufficiently out of reach right now that it might as well still be viewed as magic, to use Arthur C Clarke’s old adage. But even so, whatever those rules turn out to be, we know that they involve an awful lot more than being able to produce realistic-looking text.
 
We do also however know that sets of seemingly simple rules can produce surprisingly high levels of complexity. Or at least, things that we perceive as highly complex or even magical.
 
We do also however know that sets of seemingly simple rules can produce surprisingly high levels of complexity. Or at least, things that we perceive as highly complex or even magical.
You know what else produces highly complex systems? Highly complex rules.
 
Back
Top Bottom