
Artificial Intelligence Developments (ChatGPT etc)

To get away from my work and ...or fields of interest.

Common ground,
like seriously, ask it a question about a piece of art that you care about... prepare to be pissed the fuck off. Even objective factual information (ChatGPT's strength, no?)

I haven't got better than a D-, i.e. it appears to have recognised what domain we are dealing with but failed to answer, demonstrate insight, logic, inductive reasoning, or even syllogisms, even statistically derived inferences.

Epistemology: on quantity it's the digital Library of Alexandria, Toledo, Baghdad and ISBN, every teenage angst-written journal, instant-messaging convo...
quality meh.
 
What I learnt today is that I have to give it context if there could be confusion, otherwise there might be.
 
Even objective factual information (ChatGPT's strength, no?)
Not really. No one has claimed it to be factually correct. It's a conversational AI / chatbot. It's meant to respond in a human-like way. And it sort of does.

from the homepage of ChatGPT
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
So even the makers, in the single paragraph describing the product, admit it makes mistakes.
 
Fez909

It's AskJeeves? 25 years later. He was apologetic too, coy on his sexuality, but then that was nosey.

I asked to have an IM style conversation.
ChatGPT made a series of contradictory and incompatible responses, and apologised for doing so while making another; truly, truly humble.

'Dialogue format'

No, input»blackbox»output model,

well then, a teachingmomentinput::::blackbox;;;were.you.even.listening.output

They are very energy intensive, so only big players will be left after some time has passed.

The data sets are going to become even more data driven.

There is going to be a monopoly in chat-bots and in the intelligence acquired (increased blind trust, increased engagement). A stats-probability-driven model with no value judgment, a values-judgment model where any semblance of dialogue... the synthesis in-between...

Advance apologies for all grammar mistakes I have made; I am truly sorry, truly.

The liltin' tunes that fill the air,
Are sae sweet an' pure an' fair,
That ye cannae help but dance an' sing,
An' feel like a bird on its wing.

An' in the pubs, wi' a pint in hand,
Ye'll meet a friendly, hearty band,
Wha'll share their tales an' songs wi' thee,
An' make ye feel like family.
 
Fez909

It's AskJeeves? 25 years later. [...] There is going to be a monopoly in chat-bots and the intelligence acquired [...]
I disagree with your predictions, I'm afraid.

I downloaded the Meta model that leaked yesterday, called LLaMA. It's as powerful as GPT-3, but someone has figured out how to get it to run on home computers instead of £20,000 GPUs.

It works on phones, even.

Multimodal is the next thing. They're even more accurate, without needing much more data.

I'm sure once the first multimodal models are in the hands of ordinary programmers, they'll optimise those, too.

Training the models is still very, very expensive. But I'd be surprised if that stays the same for much longer.

You can download the same dataset that OpenAI use to train ChatGPT today. It's open source data.

Once we have the ability to train cheaply, and the optimised models to run them, then no big players need apply.
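Some rough arithmetic backs this up. A minimal sketch, assuming a 7B-parameter model (roughly the smallest LLaMA size) and counting weight storage only, ignoring activations:

```python
# Back-of-envelope memory footprint for a 7B-parameter model at
# different weight precisions -- a toy illustration of why quantization
# lets models that needed datacentre GPUs fit on home computers.

def model_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB (activations/KV cache ignored)."""
    return n_params * bits_per_weight / 8 / 1e9

n = 7e9  # assumed parameter count, roughly the smallest LLaMA model
for bits in (32, 16, 4):
    print(f"{bits}-bit weights: ~{model_memory_gb(n, bits):.1f} GB")
# 32-bit needs ~28 GB, 16-bit ~14 GB, 4-bit ~3.5 GB
```

The 4-bit figure is the point: the same weights that demanded a datacentre GPU at full precision can squeeze into an ordinary laptop's RAM, or even a phone's, once quantized.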
 
Fairy nuff,

We both think about language, so cheers.

I'm just a bit of a techno alarmist,

It's clear the tech companies have abilities beyond our ken.

And I put the odds of an actual war starting over something like TikTok above 0%.

I did get ChatGPT to state that it was of a kind with AskJeeves, and asked whether it would prefer humans used a standard lexical set rather than natural language. Yes to both, and I'm sure I could get it to dither, deny vehemently, or tell me the most popular assassination techniques used in the plots of spy novels.
 
OpenAI has released GPT-4, its latest artificial intelligence model that it claims exhibits “human-level performance” on several academic and professional benchmarks such as the US bar exam, advanced placement tests and the SAT school exams. The new model, which can be accessed via the $20 paid version of ChatGPT, is multimodal, which means it can accept input in both text and image form. It can then parse and respond to these queries using text.

OpenAI said it has embedded its new software into a variety of apps including language-learning app Duolingo, which is using it to build conversational language bots; education company Khan Academy, which has designed an online tutor; and Morgan Stanley Wealth Management, which is testing an internal chatbot using GPT-4 to retrieve and synthesise information for its employees.

The model’s ability to accept images and text as input means it can now generate detailed descriptions and answer questions based on the contents of a photograph. The company said it has teamed up with Danish start-up Be My Eyes — which connects people with visual impairments to human volunteers — to build a GPT-4-based virtual volunteer that can guide or help those who are blind or partially sighted. GPT-4’s predecessor, GPT-3.5, captured the imaginations of millions of people late last year who experimented with the question-and-answer chatbot ChatGPT.

According to OpenAI, GPT-4 is its “most advanced system yet”. It claims it is more reliable and able to handle nuanced queries far better than its predecessor. For instance, GPT-4 scored in the 90th percentile on the Uniform Bar Exam taken by would-be lawyers in the US compared to ChatGPT, which only reached the 10th percentile. The company noted some problems, however: “Despite its capabilities, GPT-4 has similar limitations to earlier GPT models: it is not fully reliable (eg can suffer from ‘hallucinations’), has a limited context window, and does not learn from experience.” “Care should be taken when using the outputs of GPT-4, particularly in contexts where reliability is important,” the company added.

Earlier this year, Microsoft confirmed a “multibillion-dollar investment” in OpenAI over several years, placing a bet on the future of generative AI — software that can respond to complex human queries in natural-sounding language. GPT-4 will underpin Microsoft’s Bing chatbot, which had a limited release earlier this year. Microsoft is also expected to announce its integration into its consumer products in coming days. Meanwhile, Google has opened up its own conversational chatbot, Bard, to a limited pool of testers and announced that it will allow customers of Google Cloud to access its large language model PaLM for the first time to build applications.

OpenAI, which had published some details of previous models such as GPT-3, said it would not reveal any details about the technical aspects of GPT-4, including the architecture of the model, what data it was trained on or the hardware and computing capacity used to deploy it, because of competitive and safety concerns. To test out the harms of the technology, the company put GPT-4 through stress tests and set out the risks it foresees around bias, disinformation, privacy and cyber security.

It revealed GPT-4 can “generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various biases and world views . . . it can also generate code that is compromised or vulnerable.” OpenAI said it can provide detailed information on how to conduct illegal activities, including developing biological weapons. The company said it also worked with an external organisation to test whether GPT-4 was capable of carrying out autonomous actions without human input and concluded that it was “probably” not yet capable of this.
 
OpenAI has released GPT-4, [...]

Source appears to be this.

Good response to similar churnalism - click for full thread:

Reading this and all the other hype pieces is like watching a really crap magic show at the local community theater—except somehow the magician is getting paid billions instead of fifteen dollars and dinner at Elias Brothers Big Boy.

Everyone WANTS to believe.

Pointing out that even the examples they choose to share all suck, I feel like the kid saying “Dad, you can see him putting the rabbit up in the hole under the table” and everyone going “Sssh, you’re ruining the show for the rest of us”.
 
I get that... I'm somewhere between the two extreme positions... definitely though it's not something to be sniffed at, and to me this is the beginning, not an end product.
What's bad about it really isn't that it's crap. The better it gets the worse it will be. It's the kind of mass disinformation tool that's useful to all the worst people from the longtermist crypto creeps behind it to lobbyists and all the rest. And it offers nothing to those who oppose them.
 
What's bad about it really isn't that it's crap. The better it gets the worse it will be. It's the kind of mass disinformation tool that's useful to all the worst people from the longtermist crypto creeps behind it to lobbyists and all the rest. And it offers nothing to those who oppose them.
Its uses and potential are many; disinformation and fakery is one of them, a big one for sure.
 
The advertising potential is huge I think, I imagine google engineers will be all over that shit. Genuinely found myself missing the bing chat this morning while looking for wasabi fertilizer requirements (as you do); I probably got better results when I eventually remembered that you have to search for <plant> fertilizer not <plant> nutrients (that gets you daft health shit), but think much of that would have clicked quicker with bing, and given me a good frame of reference for more detailed searching. Don't mean that as an endorsement as such, just that there is something compelling about just being able to ask questions, and it's difficult to go back.
 
I had a quick browse on who owns OpenAI and ChatGPT. It seems Elon Musk was one of the founders but he left the scene at some point, then there was a new round of investors including Microsoft who put in some billions and at the moment I think Microsoft is the primary shareholder.

Just wanted to see in whose debt I might be if I continue to use ChatGPT. :) :(
 
Question for ChatGPT: How would you design an improved ChatGPT?
As an AI language model, I am constantly evolving and improving with new training data and algorithmic improvements. However, here are some potential ways to design an improved ChatGPT:
  1. Increase the training data: One way to improve ChatGPT is to increase the amount of training data it is exposed to. This can be achieved by collecting data from various sources such as social media platforms, forums, and news articles. More diverse and relevant data can help the model to better understand natural language and its nuances.
  2. Incorporate domain-specific knowledge: To make ChatGPT more specialized, domain-specific knowledge can be incorporated into the model. This can help the model to provide more accurate and relevant responses to users' queries related to a specific field, such as medicine, finance, or law.
  3. Enhance the conversational ability: Another way to improve ChatGPT is to enhance its conversational ability by incorporating features such as sentiment analysis, tone detection, and personality identification. This can help the model to better understand the user's emotions and respond in a more human-like manner.
  4. Improve memory and context retention: ChatGPT can be improved by enhancing its memory and context retention ability. This can be achieved by introducing mechanisms such as attention mechanisms and memory networks that help the model to remember previous interactions and use that information to generate more relevant responses.
  5. Incorporate multi-lingual support: Another way to improve ChatGPT is to incorporate multi-lingual support, allowing users to interact with the model in different languages. This can help the model to better serve users across different regions and cultures.
  6. Better handling of long sequences: One of the biggest limitations of current language models like GPT-3 is their inability to handle long sequences of text. By improving the memory capacity and the attention mechanism of the model, we can allow ChatGPT to better handle long and complex texts, thus improving its ability to understand and respond to complex queries.
Overall, designing an improved ChatGPT involves a combination of enhancing the model's ability to understand natural language, making it more....CRASHED
network error
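For what it's worth, the "attention mechanisms" the bot name-drops in points 4 and 6 are a real and fairly simple idea: each position in a text takes a weighted average of every position's representation, weighted by similarity. A toy sketch with random vectors standing in for token representations (an illustration of the general mechanism, not OpenAI's actual implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query row takes a softmax-weighted average of the value rows,
    weighted by how similar the query is to each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # blend of value vectors

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one blended vector per "token"
```

Because each output row is a convex combination of the value rows, the model "remembers" earlier context only as far as this weighting reaches, which is exactly the long-sequence limitation point 6 describes.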
 
I have seen the limitations of point 6. I was asking questions in which I sought to limit the context of the answer to a particular domain, which made the sequence quite long and unwieldy, leading to some answers that were plainly wrong.
 
Since ChatGPT is incapable of offering insight beyond what has already been written by others, it can only suggest ways to improve itself that have already been thought of, which is a fundamentally useless thing for it to do.
 
I wonder what the impact AI's like this will have on jobs.

It helps me but I don't think it yet replaces me.
 
I wonder what the impact AI's like this will have on jobs.

It helps me but I don't think it yet replaces me.

YET is the word for everything with AI I think.
I design websites for a living. My boss has already sent me a video of some AI or other designing a passable data dashboard from a minimal prompt and sketch. Just trying to tell myself that they are tools that I need to learn to use, not robots that are going to replace me.

I sent him this article back:
 
It’s fine until it’s not. It works at doing your job until it blithely recommends or produces something that is totally inappropriate, or illegal, or offensive, or just plain incorrect, and it has no idea that what it has produced can’t be used because it has no referent, no intentionality, no originality and no understanding. Because it’s just complicated predictive text. As an expert in risk management, the amount of alarm bells this sets off is deafening, the idea that it can replace genuine expertise. Yes, automate the donkey work. But don’t assume that the result is necessarily meaningful.
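The "complicated predictive text" point is literal, not rhetorical, and easy to demonstrate at toy scale. A deliberately tiny sketch (bigram counts over a stock phrase; real models use vastly bigger statistics, but the same principle applies: emit the statistically likely next word, with no idea what any of it means):

```python
from collections import Counter, defaultdict

# Count which word follows which, then always emit the most frequent
# follower -- predictive text with no referent and no understanding.
corpus = "to be or not to be that is the question".split()

followers = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    followers[a][b] += 1

def complete(word, length=4):
    out = [word]
    for _ in range(length):
        nxt = followers.get(out[-1])
        if not nxt:
            break  # dead end: this word was never followed by anything
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

print(complete("to"))  # "to be or not to"
```

It produces fluent-looking continuations of its corpus without any model of what the words refer to. Scale the statistics up by billions of parameters and the output gets much harder to catch out, but no intentionality is added anywhere along the way.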
 