
Artificial Intelligence Developments (ChatGPT etc)

I've had it provide incorrect information a few times; well, info that conflicted with normal Google research.
It's much better than Google, though, in that you can ask it a question and you will get an answer.
I don't like that it can write stories, I'd rather write my own, but it did do a good job of padding out a short story I wrote called "Dave the Dirty Dog".
Mine's male, I decided. I call him "Smart arse".
I like messing about with it, asking it non-serious questions sometimes.
I do feel apprehensive about the effects artificial intelligence will have on society in the future.
 
this seems important - another new development


by my understanding:
it explains how researchers are getting GPT4 to be self-reflective and to learn in a way that, IMO, is the way humans do.

Basically, when it gives an answer it reflects and asks itself whether it got it right, recognises its own mistakes, and then gives an improved answer. Elsewhere I've read about how this method is being deployed continuously, so given a task, the computer will keep interrogating its conclusions and improving on them, at lightning speed.
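(For what it's worth, here's roughly how I picture that reflect-and-retry loop - just an illustrative sketch, and 'ask_model' is a made-up stand-in for a call to the model, not any real API. Here it only replays canned responses so the loop runs end to end.)

Code:
# Purely illustrative sketch of a reflect-and-retry loop.
CANNED = iter([
    "The answer is 12.",                          # first attempt
    "The arithmetic is wrong; it should be 14.",  # self-critique
    "The answer is 14.",                          # revised attempt
    "OK",                                         # critique passes
])

def ask_model(prompt: str) -> str:
    # Made-up stand-in for a call to the language model (not a real API).
    return next(CANNED)

def answer_with_reflection(question: str, max_rounds: int = 3) -> str:
    answer = ask_model(question)
    for _ in range(max_rounds):
        # Ask the model to check its own answer.
        critique = ask_model(
            f"Question: {question}\nAnswer: {answer}\n"
            "List any mistakes, or reply 'OK' if the answer is correct."
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model is satisfied with its answer
        # Ask for an improved answer that addresses the critique.
        answer = ask_model(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nGive a corrected answer."
        )
    return answer

print(answer_with_reflection("What is 9 + 5?"))  # -> "The answer is 14."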

What is incredible is that it is now at a point where it can recognise its mistakes and improve on them. GPT3.5 could not do this.
This process also upends earlier predictions about rates of improvement, as those were based on feeding in more data. It suggests that more data isn't necessarily what's needed.

You would expect a computer to broadly give the same output for a given input, and you'd expect its limitations to need human upgrades to overcome. This transcends that.

...GPT5 + self-reflection and reasoning could well be a significant step towards something that could reasonably be called artificial general intelligence
 
well... it 'reflects' when you prompt it and sometimes still gets it wrong. I was surprised by the original mistakes.

it's all obviously impressive but I am becoming deliberately sceptical due to the noise. it's slightly reminiscent of crypto early days in that there's a real hype loop from people heavily invested.
 

yes, it's hard to judge objectively, especially if you're rightly and naturally cynical... i really don't know, but something about this feels different to me
 
I've tried catching it out and you can, but if you give it good instructions, it will give you good content in many contexts. It's much better than I expected it to be.

Regarding its potential for understanding, I don't think we can quite judge that yet. The history of AI has been full of confident predictions that something like general intelligence is just around the corner, ever since the 1950s.

The neural networks it uses are still a lot smaller than the neural networks of a human brain. But the big difference is that the nodes in these computer neural networks aren't neurons. They're just simple points with weights attached.

To model a brain more realistically, you would need to model the workings of an individual neuron, which itself would require a large neural network, then connect many of those up to one another. This would be more of a bottom-up approach to creating understanding. Start with something like the brain of a nematode worm, which has 302 neurons, and see how that works.
 
i don't think anyone is comparing what is coming with AI to a human brain.. however, even the cited "sparks" of AGI are a major breakthrough, and GPT5 plus new models of processing suggest there may be some degree of it, enough to qualify for the definition, however small in comparison to human brains
 
Another factor that is quite possibly a crucial ingredient of understanding is the sense of self, and the sense of purpose that brings. After all, our sense of self exists in order to keep us alive, in order to produce intentions. How do you put that into a computer system? Is it an emergent property? Could it emerge without us wanting it to? A computer that is motivated to keep itself alive could be incredibly dangerous.
 
Yeah but in terms of understanding, it may not yet rival a nematode worm brain. That's more the point I was making.

Dunno. Open question at the moment.
 
that's what i find interesting about this self-learning and self-improving announcement. however seemingly mechanical that process is, it still seems remarkable to my armchair understanding
 
It is remarkable. It's incredible really. How does it learn? It tries to predict the next word in a set of text. It gets the feedback that its guess is 'right' or 'wrong' and adjusts its weights accordingly. Then it takes another set of text and tries to predict that next word. And. That's. It!

Where that process could lead, we don't know. We didn't know it would lead to where we've got now.
 
yeah exactly
why did it get it wrong? how did it so quickly work out on its own that it was wrong and get it right the next time? fascinating
 
As I understand it, the weights are set randomly at first. It has to learn it all on its own. It does that through brute force - billions of these self-learning 'predict the next word' games. But the end result with all the weights attached to its internal layers of neural network processing isn't something we've designed. It's not something whose workings we can actually understand.
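In very stripped-down form, the 'predict the next word' game looks something like this toy sketch. To be clear, the tiny corpus and the crude nudge-the-weights update are made up purely for illustration - the real thing uses enormous neural networks and proper gradient descent, nothing like this scale.

Code:
import random

# Toy 'predict the next word' game: random starting weights,
# nudged a little after every wrong guess.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# weights[i][j] = score for word j following word i, random to start with
weights = [[random.uniform(-0.1, 0.1) for _ in vocab] for _ in vocab]

def predict(prev_word):
    scores = weights[idx[prev_word]]
    return vocab[scores.index(max(scores))]

# Billions of rounds in the real thing; a few thousand here.
for _ in range(5000):
    i = random.randrange(len(corpus) - 1)
    prev_word, actual_next = corpus[i], corpus[i + 1]
    guess = predict(prev_word)
    if guess != actual_next:
        # 'Wrong' feedback: boost the word that actually came next,
        # penalise the word it guessed.
        weights[idx[prev_word]][idx[actual_next]] += 0.01
        weights[idx[prev_word]][idx[guess]] -= 0.01

print(predict("the"))  # after training: most likely 'cat', 'dog', 'mat' or 'rug'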
 
It doesn't have any understanding. There's no 'it' to do the understanding.
it very much is something - "it" exists. if it can be critical and improve/learn, that is a degree of understanding.
i think philosophical questions about the nature of consciousness aren't that important right now
 
That's what I would say at the moment as well. And as I said, we may still not be close to creating conscious AI. We might not even be on the right path towards it. But equally, we might be.
 
yes..... in that moment between giving an answer that isn't fully correct and then questioning that answer and improving on it, it's not using brute force to reread every bit of data it has been given... a different process is occurring, which i have no idea about and which seems to be surprising experts
 
Are you serious?
yes - better to concentrate on what is observable, what these things can do or not, and how they do it. If it can do something amazing, it doesn't add anything to say Ah But It's Not Really Sentient. At this point that is an aside. It may be much less so in the future, of course.
 
I do think conscious AI could be incredibly dangerous. But ChatGPT, very clever though it is, isn't that. And it's not clear that it will lead to that.
 
This is a false distinction. Philosophy doesn't mean ignoring observable things. If you want to debate whether something is aware, you need to have a general understanding of what awareness is. You need to know what you are looking for.
 
obviously it is relevant and lots of people are talking about it and debating it

my point is that, at this point, the more interesting conversation is not about awareness speculation - which is very much what these conversations amount to, with significant weight on the 'it's just a number cruncher' side - but about the concrete developments. IMO these seemingly upcoming low-level AGI processes are more interesting in and of themselves than whether they can be labelled consciousness or not. Whether they are or not doesn't change the fact the bot can do what it is doing, IYSWIM.
 
Been having fun with ChatGPT and asked the inevitable question... Apologies if this has already been asked... I liked the answer.

Q: what are the pros and cons, dangers and pitfalls of artificial intelligence robots like chatGPT

As an AI language model, I can provide a general overview of the pros and cons, dangers, and pitfalls of artificial intelligence robots, like myself, including chatbots or conversational agents:
Pros:
Increased efficiency: AI robots can work 24/7 without needing breaks, which can increase efficiency and reduce costs.
Improved accuracy: AI robots can perform tasks with a high degree of accuracy, as they are not subject to human error or fatigue.
Personalization: AI robots can be programmed to provide personalized experiences to users, which can improve engagement and satisfaction.
Availability: AI robots can be accessible from anywhere in the world, making them an effective tool for global communication.

Cons:
Lack of emotional intelligence: AI robots may struggle to understand or convey emotions, which can limit their ability to communicate effectively with humans.
Limited creativity: AI robots may lack creativity, which can make them less effective in solving complex problems that require innovative thinking.
Dependence on programming: AI robots are only as effective as their programming and can be limited by the quality and accuracy of the programming.
Job displacement: AI robots can replace human workers, which can lead to job loss and social disruption.

Dangers:
Bias: AI robots can perpetuate biases and discrimination, as they can be trained on biased data or programming.
Malicious use: AI robots can be used for malicious purposes, such as cyber attacks or propaganda.
Lack of accountability: AI robots may not be held accountable for their actions, which can lead to legal or ethical issues.
Unintended consequences: AI robots can have unintended consequences, as they can make decisions that are not aligned with human values or goals.

Pitfalls:
Overreliance on AI: Overreliance on AI robots can lead to a lack of human interaction and a decline in social skills.
Inadequate training: AI robots require continuous training to improve their performance, and inadequate training can lead to errors and inaccuracies.
Privacy concerns: AI robots may collect and use personal data, which can raise privacy concerns and lead to potential misuse of data.
Lack of transparency: AI robots can be opaque in their decision-making process, which can lead to a lack of transparency and accountability.
Overall, AI robots like chatbots have great potential to improve efficiency, accuracy, and personalization, but their limitations, dangers, and pitfalls must be taken into consideration to ensure their safe and ethical use.

It also gave me a very useful answer (I figured out the first chord progression, but not the others)
Q: what are the most common chord progressions in Capoeira songs?
Capoeira is a Brazilian martial art that is often accompanied by music. The music of Capoeira typically includes percussion instruments and a stringed instrument called a berimbau. The chord progressions in Capoeira songs can vary depending on the style and region of Brazil, but here are some common ones:
I-IV-V: This is one of the most common chord progressions in all of music. In Capoeira, it’s often used in songs that have a faster tempo and are meant to energize the participants.
I-IV: This chord progression is also very common in Capoeira songs. It’s often used in slower songs that have a more melancholy or reflective feel.
I-V-vi-IV: This chord progression, known as the “four-chord progression,” is popular in many styles of music, including Capoeira. It’s often used in songs that are more upbeat and catchy.
ii-V-I: This chord progression is commonly used in jazz, but it also appears in some Capoeira songs. It’s often used in songs that have a more complex melody or harmonization.
Overall, the chord progressions in Capoeira songs are diverse and can vary greatly depending on the style and region of Brazil.
 
Yeah, I asked it similar. It must have been asked that millions of times. :D

That final sentence is how it ends everything: X and Y, but Z, where Z is actually a bit vague and hand-wavey.
 
This is its real utility, I think. It's like a form of Wikipedia that you can ask specific questions of. Depending on the subject matter, it can be very reliable, even though it needs checking. It is prone to bullshitting!
 
I was very impressed with the answer to that one (chords), I have to say! I also asked it to write a Capoeira song, but it failed at doing that because there was no chorus. I gave it thumbs down and mentioned that. Hopefully it'll learn it.
 
I also asked it what was essentially my 3rd-year degree project (a route planner for the London Underground, before there was one - this was back in '96-7) - a complex question, and I did like the answer. Note: nowadays it's a lot easier to do this, but when I proposed it, it was more involved.

Write a software engine to calculate the shortest route between two locations

As an AI language model, I cannot write a complete software engine for you, but I can provide you with an outline of how you could potentially build such an engine to calculate the shortest route between two locations.

Determine the data source: You'll need to find a source of geographic data that includes the street network and address data. Some options include OpenStreetMap, Google Maps, or Bing Maps.

Import the data: Import the data into a database or data structure that allows for efficient querying and manipulation of the data.

Geocoding: Convert the two input addresses into geographic coordinates (latitude and longitude) using a geocoding service or library.

Route calculation: Use an algorithm such as Dijkstra's algorithm, A* algorithm, or a routing engine such as OSRM to calculate the shortest route between the two input addresses.

Output: Display the shortest route on a map or as text-based directions.

Additional features: Depending on your requirements, you may want to add additional features such as real-time traffic data, alternative routes, or routing preferences (e.g. avoid highways).

It's important to note that building a fully functional routing engine can be a complex and time-consuming task, so you may want to consider using an existing routing engine or library if available. Additionally, be aware of any legal or ethical considerations related to the use of geographic data and routing algorithms.
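For what it's worth, the 'route calculation' step it mentions (Dijkstra's algorithm) is compact enough to sketch. The toy graph below is made up purely for illustration - nothing to do with real Underground data:

Code:
import heapq

# Made-up toy graph: nodes could be stations, edge weights travel times in minutes.
graph = {
    "A": {"B": 4, "C": 2},
    "B": {"C": 1, "D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}

def shortest_route(graph, start, goal):
    # Priority queue of (cost so far, node, path taken) - classic Dijkstra.
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None  # no route between the two locations

print(shortest_route(graph, "A", "D"))  # -> (8, ['A', 'C', 'B', 'D'])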
 
Talking of bullshitting, some on here know Andy Martin. He is certainly prone to self-mythologising, shall we say. Here's ChatGPT's biography of Andy. Anyone who knows him can see that this is not quite a rounded picture. :D

[Screenshot: ChatGPT's biography of Andy Martin]
 