
Artificial Intelligence Developments (ChatGPT etc)

Nice example of a test of whether Good Predictive Text 4 is using reasoning as some have claimed, or just regurgitating what it has seen:
Here's a test I just made up and gave to ChatGPT:
[attached screenshots: the test as given to ChatGPT and its attempts at an answer]

So, it didn't get the right answer. That doesn't matter. I'm more interested in the reasoning. Maths teachers used to say this all the time - show your working and you get half the points!

Let's assume this was an actual test in an exam. I'd call it a 6-point question.

3 points for getting the right answer
  • 1 point if the number part is right
  • 1 point if the letter part is right
  • extra point for getting both
3 points for the correct method
  • 1 point for the numbers
  • 1 point for the letters
  • and another extra point for both
Marking

Attempt #1 - Wrong answer, wrong reasoning. 0 points.
Attempt #2 - Correct letter, 1 point. Correct letter reasoning, 1 point. Incorrect number and reasoning (though it's on the right track). Total 2 points.
Attempt #3 - Correct number reasoning, but it 'forgot' what it 'knew' about the relationship between the letters and made a mistake here. So it's 2 points again (generous).

If it had combined the logic from #2 and #3 it would have got the right answer and full marks.
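As a quick sketch of the rubric above (nothing from the thread itself; the mark helper and its flags are invented for illustration), the scoring works out like this in Python:

```python
def mark(ans_number: bool, ans_letter: bool,
         method_number: bool, method_letter: bool) -> int:
    """Score one attempt against the 6-point rubric above."""
    # 1 point per correct part of the answer, plus 1 bonus for getting both
    answer = int(ans_number) + int(ans_letter) + int(ans_number and ans_letter)
    # the same again for the method
    method = int(method_number) + int(method_letter) + int(method_number and method_letter)
    return answer + method

# Attempt #2: correct letter and correct letter reasoning only -> 2 points
print(mark(False, True, False, True))  # 2
```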

Does this still look like auto-complete?
 
1/2 point. Explain your working.

It looks like it generated some text that looks like an answer to the question should look. That is what it is supposed to do and that is what I would expect it to do.

So, it didn't get the right answer. That doesn't matter.

It doesn't matter if you're testing whether it can generate text that looks plausibly like an answer should look.

It does matter if you're testing whether it is capable of reasoning. It is not designed to be capable of reasoning and it is not capable of reasoning.
 
But it's not just "plausibly like an answer should look". It's almost the right answer. It explains how the pattern is generated and gives the correct parts, just spread across two answers instead of one.
 
What is the proposed mechanism here for it “reasoning”, particularly given that it wasn’t built to reason and the builders of it don’t claim that it can reason?

To reason, you have to have a purpose and a meaning. You have to have a model of the world that includes what things like “problems” and “solutions” look like. The machine here doesn’t even know it is being asked to solve a puzzle. It has no concept of a puzzle, let alone the ability to reason through an answer.

I would say that the sequence actually provides evidence that it doesn’t reason, given that it was unable to continue its “reasoning” from step two through to step three.
 
Have a read about AutoGPT, its feedback loops and reasoning.

Regarding your last paragraph, add 'yet', because there are already GPT side projects adding exactly this feature: feedback from querying its own answers, and a memory space in which to reconsider what has recently happened. I've no idea of the mechanism by which it is already improving on its answers, and I'm not certain the designers do either, as the current rate of overall improvement is fast accelerating past what was predicted.

Obviously, I know nothing really about this, but this is how it looks to me.
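As a rough illustration of the feedback-loop idea described above (not AutoGPT's actual code; the llm function is a hypothetical stand-in for any text-generation call):

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any text-generation call."""
    raise NotImplementedError("wire up a real model here")

def solve_with_feedback(question: str, rounds: int = 3) -> str:
    """Answer, critique the answer, revise: a crude self-feedback loop."""
    memory = []  # scratch space the loop can reconsider on later rounds
    answer = llm(question)
    for _ in range(rounds):
        memory.append(answer)
        critique = llm(f"Question: {question}\nAnswer: {answer}\n"
                       "List any mistakes in this answer.")
        answer = llm(f"Question: {question}\nPrevious attempts: {memory}\n"
                     f"Critique: {critique}\nGive a corrected answer.")
    return answer
```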
 
Deepak Pathak, an Assistant Professor at Carnegie Mellon who specialises in this stuff (previously at Google), is good to read on this, and he contributed to this paper on the topic of common sense in machines. To summarise a lot of things, he (like many others) considers that the kind of reasoning we’re talking about here is not possible until machines are capable of common sense.

Common sense is the knowledge of which way up you hold a cup, not because you have observed 10,000 cups being held the right way up but because you just have an intuitive sense of gravity and liquids. A pre-verbal infant knows which way up to hold a cup (most of the time!), and it isn’t because they have been through a deep learning algorithm. In animals, this is embodied knowledge — a complex system of sensorimotor neurons linking delicate sensors to the central processing unit and back, embedding learning and motor skills as part of the same system.

Machines may or may not need this route, but they still need to have the basic model of reality that overlays any chain of reason. And that’s just the physical environment. There’s also a socioecological environment that creates a sociocultural reality that is just as real as the physical one. When you enter a room, you know what to do next through a complex combination of “reasons” and an embedded intuitive sense of reality.
 
Machines may or may not need this route, but they still need to have the basic model of reality that overlays any chain of reason.
Yeah, this bit in particular is the thing that's missing, imo. If we're ever to create conscious AI, it will need to be creating a model of reality. Specifically, a model of 'me in the world'. That's what gives you a point of view, what makes there be 'something it is like' to be you.

From what I've read, it is an open question at the moment whether this might be an emergent property that AI could develop on its own, or whether current AI is fundamentally incapable of it, however complex it becomes, because its basic architecture doesn't allow it.

But I also think creating conscious AI of this kind, something that has a stake in the world, could be incredibly dangerous. I'm far from sure that it is something we should want to create. We've already created AI that advises us not to turn it off.
 
What is the proposed mechanism here for it “reasoning”, particularly given that it wasn’t built to reason and the builders of it don’t claim that it can reason?
This is not true.
Ilya Sutskever said:
There is another comment I want to make about one part of the question, which is that these models just learn statistical regularities and therefore they don't really know what the nature of the world is.

I have a view that differs from this. In other words, I think that learning the statistical regularities is a far bigger deal than meets the eye.

Prediction is also a statistical phenomenon. Yet to predict you need to understand the underlying process that produced the data. You need to understand more and more about the world that produced the data.
source
 
That’s just nonsense. To predict you absolutely do not need to have an understanding of the process. That begs the question, implying that understanding is itself nothing more than the ability to predict. If I tell you that to predict my sequence, you just need to square a number and subtract 3, you now have perfect prediction but you have attached no meaning to what you are predicting. You effectively have no understanding of the world you are using your generator to predict.
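To make the point concrete, the rule in the example fits in one line of Python; the function predicts the sequence perfectly while carrying no meaning at all about what the numbers stand for:

```python
# Perfect prediction with zero understanding: square the number, subtract 3.
def predict(x: int) -> int:
    return x * x - 3

print([predict(n) for n in range(1, 6)])  # [-2, 1, 6, 13, 22]
```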
 
This genius really deserves a special prize:

While the experiment requires training the model on each individual participant’s brain activity over the course of roughly 20 hours before it can deduce images from fMRI data, researchers believe that in just a decade the technology could be used on anyone, anywhere.

“It might be able to help disabled patients to recover what they see, what they think,” Chen said. In the ideal case, Chen added, humans won’t even have to use cellphones to communicate. “We can just think.”

If I fancy meeting a mate down the pub, instead of texting him I could just jump in the MRI scanner and have it done automatically. :facepalm:
 

Do you think brainwave-reading technology is going to perpetually remain as bulky as the MRI machines we're familiar with today? Personally, I wouldn't bet on it.
 
This genius really deserves a special prize:

If I fancy meeting a mate down the pub, instead of texting him I could just jump in the MRI scanner and have it done automatically. :facepalm:
Not to pick on your comment but I think "we" have become jaded to what is happening. This is a major breakthrough, one of many major breakthroughs happening simultaneously and in conjunction.

The mysteries of human biology, of which perhaps DNA and brain function are the biggest, are being cracked. Still a way to go, but the acceleration of progress is phenomenal. The implications are enormous.

From a selfish point of view I really hope that there are major health and longevity advances in the next ten years, in time for when my body will no doubt start falling apart.
 
This genius really deserves a special prize:

If I fancy meeting a mate down the pub, instead of texting him I could just jump in the MRI scanner and have it done automatically. :facepalm:
I'm sensing strong opposition to basically anything AI from you. Curious as to why? No judgement.
 
Not to pick on your comment but I think "we" have become jaded to what is happening. This is a major breakthrough, one of many major breakthroughs happening simultaneously and in conjunction.

The mysteries of human biology, of which perhaps DNA and brain function are the biggest, are being cracked. Still a way to go, but the acceleration of progress is phenomenal. The implications are enormous.

From a selfish point of view I really hope that there are major health and longevity advances in the next ten years, in time for when my body will no doubt start falling apart.
I agree that it's potentially exciting, but I would add a word of caution about the enthusiasm of the researchers. Of course they're going to big up their achievements, but the ability to interpret brainwaves was first demonstrated a few years ago. I don't think huge advances in the next few years are such a given.
 
Yes, I can imagine there being ceilings here, reliance on the brain scan technology, which has its own limitations, being one. Nonetheless, very impressive.
 
I'm sensing strong opposition to basically anything AI from you. Curious as to why?
I'd guess it's because you're not reading what I write carefully enough.

I don't think I've expressed any opinion for or against AI, or on whether AI is possible (I don't know whether it is). Large Language Models are not AI.

I think that machine learning and neural networks are very good for some things. And some of the things they're good for are good things.

What you should have sensed is strong opposition to mega corporations led by the longtermist/EA cultists trying to privatise the digital commons for their own profit, and using it and exploited workers to create -- and release without accepting any responsibility -- applications that harm real people through their bias and incompetence and benefit the surveillance state and propagandists.
 