krtek a houby
Merry Xmas!
Fail.
Nah, Tosh was relatively easy going. It was Burnside you had to watch out for.
Fail.
> Nice example of a test of whether Good Predictive Text 4 is using reasoning as some have claimed, or just regurgitating what it has seen:

Here's a test I just made up and gave to ChatGPT:
> Does this still look like auto-complete?

Yes.
> Yes.

1/2 point. Explain your working.
So, it didn't get the right answer. That doesn't matter.
> It looks like it generated some text that looks like an answer to the question should look. That is what it is supposed to do and that is what I would expect it to do.

But it's not just "plausibly like an answer should look". It's almost the right answer. It explains how the pattern is generated and gives the correct answer. It just did it over two answers, instead of one.
It doesn't matter if you're testing whether it can generate text that looks plausibly like an answer should look.
It does matter if you're testing whether it is capable of reasoning. It is not designed to be capable of reasoning and it is not capable of reasoning.
> A n00b, no. ChatGPT is very good at sounding natural. We'd soon get bored with them, though. Beige.

ChatGPT has beige coded in for mass market consumption. Beige is not innate to all AI.
> ChatGPT has beige coded in for mass market consumption. Beige is not innate to all AI.

Yes, true.
> What is the proposed mechanism here for it “reasoning”, particularly given that it wasn’t built to reason and the builders of it don’t claim that it can reason?

Have a read about AutoGPT, its feedback loops and reasoning.
To reason, you have to have a purpose and a meaning. You have to have a model of the world that includes what things like “problems” and “solutions” look like. The machine here doesn’t even know it is being asked to solve a puzzle. It has no concept of a puzzle, let alone the ability to reason through an answer.
I would say that the sequence actually provides evidence that it doesn’t reason, given that it was unable to continue its “reasoning” from step two through to step three.
> Have a read about AutoGPT, its feedback loops and reasoning.

Deepak Pathak, an Assistant Professor at Carnegie Mellon who specialises in this stuff (previously at Google), is good to read on this, and he contributed to this paper on the topic of common sense in machines. To summarise a lot of things, he (and many others) consider that the kind of reasoning we’re talking about here is not possible until machines are capable of common sense. Common sense is knowing which way up to hold a cup, not because you have observed 10,000 cups being held the right way up but because you just have the intuitive sense of gravity and liquids. A pre-verbal infant knows which way up to hold a cup (most of the time!), and it isn’t because they have been through a deep learning algorithm. In animals, this is embodied knowledge — a complex system of sensorimotor neurons linking delicate sensors to the central processing unit and back, embedding learning and motor skills as part of the same system. Machines may or may not need this route, but they still need to have the basic model of reality that overlays any chain of reason. And that’s just the physical environment. There’s also a socioecological environment that creates a sociocultural reality that is just as real as the physical one. When you enter a room, you know what to do next through a complex combination of “reasons” and an embedded intuitive sense of reality.
Regarding your last paragraph, add "yet", because there are already GPT side projects adding exactly this feature: feedback loops of querying their own answers, and a memory space in which to reconsider what has recently happened. I've no idea of the mechanism by which it is already improving on its answers, and I'm not certain the designers do either, as the current rate of overall improvement is fast accelerating past what was predicted.
Obviously, I know nothing really about this, but this is how it looks to me.
> Machines may or may not need this route, but they still need to have the basic model of reality that overlays any chain of reason.

Yeah, this bit in particular is the thing that's missing, imo. If we're ever to create conscious AI, it will need to be creating a model of reality. Specifically a model of 'me in the world'. That's what gives you a point of view, what makes it 'something like' to be you.
> What is the proposed mechanism here for it “reasoning”, particularly given that it wasn’t built to reason and the builders of it don’t claim that it can reason?

This is not true.
Ilya Sutskever said (source):

> There is another comment I want to make about one part of the question, which is that these models just learn statistical regularities and therefore they don't really know what the nature of the world is.
>
> I have a view that differs from this. In other words, I think that learning the statistical regularities is a far bigger deal than meets the eye.
>
> Prediction is also a statistical phenomenon. Yet to predict you need to understand the underlying process that produced the data. You need to understand more and more about the world that produced the data.
Watching now... this video is full of utter madness... jaw-dropping.
This genius really deserves a special prize:

From brain waves, this AI can sketch what you're picturing
Researchers around the world are training AI to re-create images seen by humans using only their brain waves. Experts say the technology is still in its infancy, but it heralds a new brain-analysis industry.
www.nbcnews.com
While the experiment requires training the model on each individual participant’s brain activity over the course of roughly 20 hours before it can deduce images from fMRI data, researchers believe that in just a decade the technology could be used on anyone, anywhere.
“It might be able to help disabled patients to recover what they see, what they think,” Chen said. In the ideal case, Chen added, humans won’t even have to use cellphones to communicate. “We can just think.”
> This genius really deserves a special prize:

So they can scan my brain and see my evil thoughts? Oh dear.
If I fancy meeting a mate down the pub, instead of texting him I could just jump in the MRI scanner and have it done automatically.
> This genius really deserves a special prize:
>
> If I fancy meeting a mate down the pub, instead of texting him I could just jump in the MRI scanner and have it done automatically.

Not to pick on your comment, but I think "we" have become jaded to what is happening. This is a major breakthrough, one of many major breakthroughs happening simultaneously and in conjunction.
> This genius really deserves a special prize:
>
> If I fancy meeting a mate down the pub, instead of texting him I could just jump in the MRI scanner and have it done automatically.

I'm sensing strong opposition to basically anything AI from you. Curious as to why? No judgement.
> Not to pick on your comment, but I think "we" have become jaded to what is happening. This is a major breakthrough, one of many major breakthroughs happening simultaneously and in conjunction.

I agree that it's potentially exciting, but I would add a word of caution about the enthusiasm of the researchers. Of course they're going to big up their achievements, but the ability to interpret brainwaves was first demonstrated a few years ago. I don't think huge advances in the next few years are such a given.
The mysteries of human biology, of which perhaps DNA and brain function are the biggest, are being cracked. Still a way to go, but the acceleration of progress is phenomenal. The implications are enormous.
From a selfish point of view, I really hope that there are major health and longevity advances in the next ten years, in time for when my body will no doubt start falling apart.
> I agree that it's potentially exciting, but I would add a word of caution about the enthusiasm of the researchers. Of course they're going to big up their achievements, but the ability to interpret brainwaves was first demonstrated a few years ago. I don't think huge advances in the next few years are such a given.

Yes, I can imagine there being ceilings here, reliance on the brain-scan technology, which has its own limitations, being one. Nonetheless, very impressive.
> I'm sensing strong opposition to basically anything AI from you. Curious as to why?

I'd guess it's because you're not reading what I write carefully enough.