
Artificial Intelligence Developments (ChatGPT etc)

I'm not familiar with LaMDA, but if it's just a particularly sophisticated language model as opposed to anything more than that, then I think it's again just a matter of getting statistically likely answers. People have actually written stories about digital entities who want to escape or who fear being switched off, etc. I reckon that much more has been written about that sort of thing than about the boring and humdrum life of a machine that either loves its work or doesn't even think to question its position.
totally follow that... surely it would be easy to look into the back end, see what processes it's running, and resolve any doubt about this... the guy who made the sentience claim was no fool, I'd expect this shouldn't leave any room for confusion??
 

It's my understanding that modern AI is often a kind of "black box", due to its code being generated by stochastic processes, as opposed to every line of code being hand-written by a person. So teasing out the whys and wherefores of a particular prompt+output isn't just a case of scrolling through the code and reading the commentary to try and work out exactly what happened. I'm not sure if such models can be altered after the fact, or whether you have to start over and re-bake the entire cake. Either way, I doubt that troubleshooting such models is easy.
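To make the "black box" point concrete, here is a minimal sketch (a hypothetical toy example, not how any real system is built): a one-parameter model fitted by gradient descent. After training, the model's "code" is just a learned number, with no hand-written logic or commentary to scroll through.

```python
# Toy illustration: a trained model's behaviour lives in learned
# numeric weights, not in readable, hand-written code.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0                # the model's single "parameter"
lr = 0.05              # learning rate
for _ in range(200):   # training loop
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # derivative of squared error w.r.t. w
        w -= lr * grad

# The trained "model" is just this number (about 2, learned from the
# data). Nothing in it explains a particular output beyond arithmetic.
print(round(w, 2))
```

A real model has billions of such numbers, which is why "reading the back end" doesn't resolve much.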

You don't have to be a fool in order to be fooled.
 
the guy who made the sentience claim was no fool

Yes he is.

the Google programmer at the center of the Washington Post story, Blake Lemoine, is full of shit. Sorry. The chat transcript he released really is impressive, but it’s the heavily edited collection of out-of-context and rearranged questions and answers from nine different conversations with two different people, as noted in the PDF Lemoine released. As far as I can tell, he hasn’t publicly released the raw dialog.
[...]
“I know a person when I talk to it,” said Lemoine. “…I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist. (He’s an ordained priest who has previously accused his fellow Google employees of harassing and discriminating against him for his “sincerely held religious beliefs in God, Jesus and the Holy Spirit” as a Christian mystic.)

When the Post reporter asked LaMDA if it was a person, it said “no” at which point Lemoine said the reporter wasn’t playing with it right. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.” It’s quotes like that that make me wonder if Lemoine is sentient. [...]
 
This thread on testing whether the Bing one could write a paper for a student is interesting too. Again it's clear there's no sense that things can be either true or false, it just strings words and search results together that are statistically likely. That can look impressive until you test it properly.
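The "stringing statistically likely words together" point can be shown with a toy next-word model (a bigram Markov chain). This is a heavily simplified stand-in for a real LLM, but the principle is the same: pick the next word by learned frequency, with no notion of whether the result is true or false.

```python
# Toy bigram model: generate text by sampling statistically likely
# next words, with no model of truth at all.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])   # sample a likely successor
    out.append(word)

print(" ".join(out))   # grammatical-looking, but meaning-free
```

The output always looks locally plausible, because every transition was seen in real text; whether the sentence is true never enters into it.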
 
If you ask it to write a poem (in English) it absolutely insists on making it rhyme, even when you specifically tell it not to, and even if you ask it to change specific words in the lines to different ones. It doesn't have any 'awareness' of what it's producing.
 
On the recent Prince Harry thread, I was extremely unimpressed with its attempt to answer an undergraduate psychology exam question. It failed to pick up on the content words of the question, instead giving a very high-level generic answer to the subject matter as a whole. It also didn’t do the thing that the question asked for, namely provide a critical analysis. As I said on that thread, I’m not sure I’d even give it a bare pass.
 
Not chatGPT but this seems to be turning into the AI general thread so sharing this here:



An AI talk show with celebrities answering questions from the chat. The questions themselves are selected by an AI and read out automatically, so it runs 24/7

quite amazing - and genuinely funny
 
I discovered the Athene AI Show when Asmongold on YouTube talked about the AI that Athene had made of him.

Athene is also currently doing daily streams in which AI versions of celebrities and streamers play the roles of Lord of the (cock)Rings characters. You can now enjoy George Carlin as Gandalf, Alex Jones as Saruman, Snoop Dogg as Legolas, and Donald Trump as Gimli among others. I don't really know any of the streamers but I still think they put on an entertaining if low-brow performance. There are five chapters of LOTCR as I write this, and you can watch either the full streams or just the highlights, shared below:





 
I don't understand enough about this to fully appreciate the importance, but the ML bloggers etc seem to think this is a big deal for AGI:


From what I think I understand, it says that language models perform better at language tasks when they also have image recognition. And vice versa. In other words, we don't need to keep increasing the parameter count to get better performance, if we feed it a variety of data types instead.

ChatGPT gave me this layman's summary:

Large language models can do complex tasks, but they struggle to understand real-world problems like robotics. To fix this, researchers have made a new kind of language model that incorporates real-world information, like what a robot can see and touch, into how it understands language. This new model can do things like plan robot movements and answer questions about what it sees in pictures. It can also learn from different kinds of tasks and information at once. The researchers made a really big version of this model, and it works well for lots of different tasks, including ones that need both language and visual information.
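The core trick described above can be sketched in a few lines: encode image patches and text tokens into vectors of the same width, so a single sequence model can consume both. All names and sizes below are made up for illustration; real systems use learned encoders, not random matrices.

```python
# Hedged sketch of multimodal input: project image patches and word
# tokens into one shared embedding space, then feed them as a single
# sequence to the same model.
import numpy as np

D = 8                                         # shared embedding width
rng = np.random.default_rng(0)

vocab = {"what": 0, "colour": 1, "is": 2, "the": 3, "ball": 4}
text_embed = rng.normal(size=(len(vocab), D)) # one vector per word
patch_proj = rng.normal(size=(16, D))         # 4x4-pixel patch -> D dims

def embed_text(words):
    return np.stack([text_embed[vocab[w]] for w in words])

def embed_image(image):
    # Split an 8x8 "image" into four 4x4 patches, flatten, project to D.
    patches = [image[r:r+4, c:c+4].ravel()
               for r in (0, 4) for c in (0, 4)]
    return np.stack(patches) @ patch_proj

image = rng.random((8, 8))
sequence = np.concatenate([
    embed_image(image),
    embed_text(["what", "colour", "is", "the", "ball"]),
])
print(sequence.shape)   # (9, 8): 4 image patches + 5 words, one stream
```

Once everything is a vector in the same space, "more data types" just means more kinds of tokens in the sequence, rather than more parameters.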
 
From a neuropsychological point of view, I can tell you that most researchers now think that language uses a multi-modal cognitive system regulated centrally. What that means is that “action” words are processed in the action-relevant part of the brain — the part that runs simulations of the actions in question. Language piggybacks on that processing system to speed up the linguistic process too. (That part is totally uncontroversial; it's the central-processing part that isn't completely settled.)

You can tell this for many reasons, including that if I hear the phrase “Kabbes kicks the ball”, I process it faster than “Kabbes wants to kick the ball”, and the former lights up the part of the brain associated with kicking whereas the latter does not.

So this system of including images is just a small step towards emulating how the brain actually handles language, which is that it is part of a fully embodied physical being that interacts with its environment.
 
For a while this morning it looked like I had lost all of my conversations with ChatGPT. Thankfully they're accessible again now.

Would be really great if you could merge conversations and/or allow ChatGPT to reference other conversations.
 

A mix of entirely AI generated music. Much better than I was expecting, tbh.

No idea on how the music was actually made, though.

edit: So, "Riffusion" is the tech that makes this work. It's pretty cool. Here's my Mongolian Hip Hop I just prompted:
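For anyone curious how "an image model makes music": Riffusion generates spectrogram images (frequency on one axis, time on the other) and then converts them to sound. Below is a heavily simplified toy version of that last step, using basic additive synthesis rather than Riffusion's actual inverse-spectrogram pipeline; all sizes and values are invented for illustration.

```python
# Toy "image to audio" sketch: treat a 2D array as a spectrogram
# (rows = frequencies, columns = time frames) and synthesise a
# waveform by summing sinusoids weighted by the pixel magnitudes.
import numpy as np

SR = 8000          # sample rate (Hz)
FRAME = 400        # samples per time frame (50 ms)
N_FREQS = 32       # number of frequency rows in the "image"

rng = np.random.default_rng(0)
image = rng.random((N_FREQS, 20))        # a random "spectrogram image"

freqs = np.linspace(100, 2000, N_FREQS)  # Hz assigned to each row
t = np.arange(FRAME) / SR                # time axis of one frame

frames = []
for col in image.T:                      # one column = one time frame
    frame = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    frames.append(frame)

audio = np.concatenate(frames)
audio /= np.abs(audio).max()             # normalise to [-1, 1]
print(audio.shape)                       # (8000,): one second at 8 kHz
```

The real system does this far more carefully (proper phase reconstruction, overlapping frames), but the idea that a picture can encode a sound is the whole trick.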

 

A mix of entirely AI generated music. Much better than I was expecting, tbh.
this is how I think most IDM glitch sounds tbh :D i.e. not that impressed

i joke but fuck give it 5 years...................

that said, I've put "modal jazz" into Riffusion and it's music I've never heard before, and not in a bad way... pretty fresh tbh and I'm 10 minutes in

I can imagine AI jazz is going to be more interesting than AI DnB/house/techno etc. Core dance music is about mastering the basics, and I think AI will struggle with that; coming up with untried jazz shit seems more like AI territory right now.
 
All the things I used to think that about have been proven wrong. Art, writing and music(ish) - all possible now.

I wouldn't write anything off.
very true
I'm sure to be wrong
but to repeat my point, the genius of dance music is nailing uplifting basics hard, a much harder trick than people realise
being abstract is actually easier, easier still is being abstract and moody
i expect AI to nail abstract-moody first - sweet-spot electronic dance music last :D
 

The ultimate cheeseburger?
 
this comes down to how i think about music in general
Brian Eno style Music For Airports is the kind of music AI can make easily and fool people i reckon

What makes the best music great is a secret groove ingredient that I think AI will struggle to identify. It would need a very limited dataset (of very good music) and some intelligent instruction as to what makes it good. I don't trust anyone to define what that is, and nor do I trust pattern recognition to differentiate between genius and the logical average. There is perhaps still room for soul-genius yet.

Music is a language and it's about communicating truths. At this point AI is averaging inputs and regurgitating them. If it has something to say for itself, or can be tricked into having something to say, then that will be interesting.
 
Another test, this one with chatgpt.

I [...] wanted to test the capabilities it will give my students to fake their next exams.

1st obvious answer: it can handle any simple question you might set in a written exam, so the only solution is to make sure students don't have a mobile.

2nd answer many found already: it produces confident but fake answers to more complex questions. It generates non-existent references.
[...]
It also cannot discriminate between established facts, theories and already-observed phenomena on the one hand, and the hypothetical (and probably non-existent) versions that scientists have merely described in their work on the other.
[...]
It also gave me several different lists of 100% non-existent references for an author I know well (myself).
[...]
For students who think it's a good idea to use it to speed up literature research or thesis writing, this will be a disaster.
[...]
But honestly, it will be a disaster for all of us, because once this starts generating web pages with seemingly accurate facts (where true-or-false matters) it will create a flood of plausible yet fake information, which will take an enormous amount of energy to debunk.
 
If it has something to say for itself,

There's no self. It's a computer program.

They’re great at mimicry and bad at facts. Why? [They] have no access to real-world, embodied referents. This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter.
[...]
We’ve learned to make “machines that can mindlessly generate text. But we haven’t learned how to stop imagining the mind behind it.”

 
I am just having a play with it and I think it is phenomenal. I just typed in the headlines of a couple of technical web pages I wrote last week, pages that I laboured over for hours, and it spat out good text immediately!

Is there any restriction in how we are allowed to use the output? Because I could use it right now to create web pages?
 
I don’t think so. You’re free to use it. Obviously you could edit and put your own spin on it to make it more yours if you wanted to.
 
When the handbrake on your Vauxhall Corsa pulls up without any resistance, it likely means that there is a problem with the cable that connects the handbrake lever to the rear brakes. There are several potential causes for this issue, including:

  1. Broken cable: The cable that connects the handbrake lever to the rear brakes may have broken or become disconnected, preventing the handbrake from engaging.
  2. Stretched cable: Over time, the cable that connects the handbrake lever to the rear brakes may become stretched, causing the handbrake to lose its tension and fail to engage properly.
  3. Faulty handbrake mechanism: The handbrake mechanism itself may be faulty, preventing the cable from engaging the rear brakes.
  4. Damaged brake pads or calipers: The rear brake pads or calipers may be damaged or worn, preventing the handbrake from engaging properly.
If your handbrake is not engaging properly, it is important to have the problem diagnosed and repaired as soon as possible. Driving with a faulty handbrake can be dangerous and may lead to accidents or damage to your vehicle.
 
What is the end game of this?

What will happen with it in 5 years?

The use of the internet for discussion and community forming might become much more difficult due to the volume of machine generated garbage. Anyone with sufficient resources might be able to make any inconvenient facts much harder to verify.

When the handbrake on your Vauxhall Corsa pulls up without any resistance,

And yes it's very likely to be usable in cases where you could safely google and click "I'm feeling lucky".
 
And yes it's very likely to be usable in cases where you could safely google and click "I'm feeling lucky".
TBF the example about the handbrake was just something I played with. The two articles that really impressed me were on very niche technology comparisons, which it hardly thought about before immediately spitting out very credible text.
 