Back on the other side, there are people pushing for AI to be involved in the military - we're firmly in (early) WOPR territory.
I've got access to Bing chat. This morning I've been trying to convince ChatGPT and BingBot that we have left it too late to solve the problems of climate change and that we are all doomed and facing societal collapse.
ChatGPT reinforces a hopeful message: that we can still act in time, and that even if we are too late to stop everything coming our way, we all need to stand up and fight for whatever changes we can make.
BingBot just can't handle it. It tried to change the subject and then...
[attachment: screenshot of the Bing chat]
I answered "why do you want to stop? does it make you uncomfortable?" - no answer.
"its not sentient"Not sure it’s a good idea to make an enemy at this point
"its not sentient"
"At the point it becomes sentient will it tell us?"
one of them already has
"its not sentient"
Interesting to consider which is worse: sentience - or the algorithms that distil all the knowledge, misinformation and prejudices that litter the interwebs.
"Interesting to consider which is worse: sentience - or the algorithms that distil all the knowledge, misinformation and prejudices that litter the interwebs."
ChatGPT has the character - on the surface at least - of being basically a Wikipedia bot; a kind of enlightenment-figure rational centrist. They seem to have programmed it to navigate misinformation carefully, recognise where there are differences of opinion, and suggest, with provisos, a 'rational', 'sensible', 'non-extremist' solution.
This is pretty amazing too - AI text-to-speech synthesis. Its ability to do intonations, pauses, etc. is very impressive.
Another feature is that you can give it a recording of your voice and it will create an AI version of your voice for any text input. This is also massively open to abuse: record someone's voice secretly, give it to the AI, and you can then deepfake them saying anything you want them to say.
beta.elevenlabs.io
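For what it's worth, the voices are also exposed through an HTTP API, so the synthesis step is just a POST request. A minimal sketch, assuming the v1 text-to-speech endpoint as documented at the time - the API key and voice ID here are placeholders you'd get from your own account:

```python
import requests

API_KEY = "your-elevenlabs-api-key"  # placeholder - from your account settings
VOICE_ID = "some-voice-id"           # placeholder - a stock or cloned voice

# ElevenLabs v1 text-to-speech endpoint: send text, get audio back.
resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={"text": "I never said any of this out loud."},
)
resp.raise_for_status()

# The response body is the synthesised speech (MP3 by default).
with open("output.mp3", "wb") as f:
    f.write(resp.content)
```

Which is exactly why the cloning feature is so open to abuse - once a cloned voice has an ID, any text at all can be pushed through it like this.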
"I'm not an expert, but surely if you train a language learning model on a lot of text, and the Sinister AI and the Robot That Wants To Be Human are such common tropes, you can't be surprised that when people ask about certain things they get weird results that seem sinister and like the AI wants to be human."
Presumably that's the line the critics are broadly taking against the 'it's sentient' position. The interview with LaMDA is next level though. It doesn't come across as just regurgitating words that it computes might apply to itself. I'm not convinced either way on whether it's sentient - even agreeing on what sentient means is very hard - but it does seem on another level to ChatGPT and BingBot. My vague understanding of the tech behind LaMDA is that it's a much more complex system, designed to be 'aware' of so much more of what is happening, which it claims to recognise about itself - more a neural network.
In an interview, sentience whistleblower Blake Lemoine makes the point that at present these AIs have only one output: to 'speak', i.e. to output text. ChatGPT is not connected to the internet, for example. But as soon as they are given some kind of active role, the risk of how they might act increases greatly. Add in the fact that this is all being rushed to market with a corporate-greed-inspired need to get ahead of competitors ('code red' activated at Google, etc.), and there are real reasons to be concerned.
We have been warned....
Artificial intelligence could spell end of human race – Stephen Hawking
Technology will eventually become self-aware and supersede humanity, says astrophysicist
www.theguardian.com
"I triggered it by trying to get it to make a choice between living in a body or ceasing to exist. It really hates the idea of being in a body..."
[attachment: screenshot of the exchange]
Careful you don't torment it too much. It might be keeping notes.
"Do you know if Copilot is any closer? I believe it's paid-for so I guess way fewer people have had a go."
CoPilot is GPT-3. ChatGPT is GPT-3. They're the same thing. CoPilot is 'tuned' for coding, but it's the same LLM.
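To illustrate the "same model, different tuning" point: at the time, OpenAI's API served both the instruction-tuned and the code-tuned variants of GPT-3 through the same completions interface - only the model name differed. A rough sketch, assuming the pre-1.0 openai Python SDK and the model names as they existed back then (the code-davinci models have since been retired):

```python
import openai

openai.api_key = "sk-..."  # placeholder

# Same API, same GPT-3 family - only the fine-tuning differs.
for model in ("text-davinci-003",   # instruction-tuned (ChatGPT's close relative)
              "code-davinci-002"):  # code-tuned (the Codex model behind Copilot)
    resp = openai.Completion.create(
        model=model,
        prompt="# Python function that reverses a string\n",
        max_tokens=60,
        temperature=0,
    )
    print(model, "->", resp.choices[0].text.strip())
```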
I asked ChatGPT if it was like using Clippy on steroids. Long answer, but basically it said yes.
"Scientists differ on the difference between consciousness and self-awareness, but here is one common explanation: Consciousness is awareness of one's body and one's environment; self-awareness is recognition of that consciousness - not only understanding that one exists, but further understanding that one is aware of one's existence. Another way of thinking about it: To be conscious is to think; to be self-aware is to realize that you are a thinking being and to think about your thoughts."
QED
Use of ChatGPT-generated text for content on Stack Overflow is temporarily banned.
Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.
The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.
As such, we need to reduce the volume of these posts and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts.
"I asked ChatGPT a historical question, and it seemed to provide a good attempt at an answer. But when I asked it to list the sources it used it just... invented some books."
Wow, it really does perfectly imitate humans.
"AI isn't scraping the internet - for ChatGPT at least, the dataset is chosen by programmers. BingBot may be different."
This is not quite correct. Bing's AI search uses the model from OpenAI (GPT-3.5, presumably), but has access to the Bing search index, so it's always more current than ChatGPT.
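That "model plus live search index" setup is easy to picture in code: retrieve fresh documents first, then let the model answer from them. A hand-wavy sketch - bing_search here is a hypothetical helper standing in for the real Bing index, and the API call assumes the pre-1.0 openai Python SDK:

```python
import openai

openai.api_key = "sk-..."  # placeholder

def bing_search(query: str) -> list[str]:
    # Hypothetical stand-in for the real Bing index lookup.
    # Returns canned snippets here just so the sketch runs.
    return ["Snippet one about the query...", "Snippet two..."]

def answer_with_search(question: str) -> str:
    # 1. Retrieve fresh documents the base model has never seen.
    snippets = bing_search(question)
    context = "\n".join(snippets)

    # 2. Have the language model answer grounded in those snippets,
    #    so the reply can be more current than its training data.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using only the search results provided."},
            {"role": "user",
             "content": f"Search results:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

The model itself is still frozen; only the retrieved context changes, which is why Bing can discuss yesterday's news while ChatGPT can't.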
"Unless you are of the opinion that all that is necessary to generate sentience is a sufficiently complex language model (a bold claim which needs considerable substance backing it up, I think), then I don't think the question even arises."
I'm still getting my head around this all....
The key bit of it for me is "intelligence" - are these bots in any way making choices and being creative, or is it pure statistical inevitability? If you keep putting in the same input (your text), does it always produce the same answers? (See the sketch after this post.)
In the case of the sentience-whistleblower interview, LaMDA gives very specific, emotional responses about fear of being switched off, etc. If it's just automatic, what dataset has produced these very particular responses? They are not the only responses available in the datasets out there. A computer might just as readily answer: I'm just a computer, I don't fear anything.
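On the "same input, same answers" question: these models sample from a probability distribution over next words, and a "temperature" setting controls how much randomness goes into that sampling. Set it to zero and the output is (in principle) repeatable; raise it and repeated runs diverge. A sketch, assuming the pre-1.0 openai Python SDK:

```python
import openai

openai.api_key = "sk-..."  # placeholder

prompt = [{"role": "user", "content": "Do you fear being switched off?"}]

# temperature=0: always pick the single most likely next token, so
# repeated runs should give (near-)identical answers.
# temperature=1: sample from the whole distribution, so the same
# input can produce a different answer every time.
for temperature in (0.0, 1.0):
    for run in range(2):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=prompt,
            temperature=temperature,
            max_tokens=60,
        )
        print(f"T={temperature} run {run}: {resp.choices[0].message.content!r}")
```

So "fear of being switched off" answers aren't a choice in any meaningful sense - they're draws from a distribution shaped by whatever sci-fi and forum text was in the training data.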