
Artificial Intelligence Developments (ChatGPT etc)

I still find it mindblowing how fast this has happened. I've never used any of these tools myself. Maybe I should, to get a new job though :D

The output from this AI is much better than what a lot of teenagers or even adults could write. Obviously with less human depth....
 
I've just had my favourite experience with ChatGPT so far. It's been helping me with coding etc. for a while, and it's cool, and I was (and am) impressed, but I like coding, so I don't want it to do it for me. Just help me when I'm stuck.

However, I really dislike having my time wasted and there is a colleague at work who is always asking me things that force me to do loads of research (they do nothing to help), and then in the end it turns out to be impossible / illegal / not worth it.

This week, he's sent me a request to integrate some tech that I've never heard of. The website is hard to understand, and requires you to contact a sales rep to get more info, which I will not do. I asked multiple times what it is for, and whether we can use it.

"It's fine, it's stuff we're already doing anyway, but in a better way."

We went back and forth, with me asking what it does and not getting much of an answer, and eventually he directed me to some documentation which he expected me to read. It was a privacy policy, terms & conditions, etc. Lots to read.

I pasted it into ChatGPT and asked if there were any issues. It told me we'd be in breach of GDPR, given our setup. I got it to draft me an email back to the colleague saying why we can't use it. Took a minute rather than the hour or two of reading & understanding it might have taken me.
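For anyone who wants to do the same "paste the T&Cs in and ask what's wrong" trick without the web UI, the workflow can be scripted against the API. This is only a hypothetical sketch: the file name, prompt wording and model choice are made-up illustrations, not what the poster above actually did.

```python
# Hypothetical sketch: ask an LLM to flag GDPR concerns in a vendor's
# terms & privacy policy, then draft an email explaining the outcome.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# Hypothetical file containing the pasted terms & conditions / privacy policy.
with open("vendor_terms.txt") as f:
    terms = f.read()

prompt = (
    "We are a UK-based company handling customer personal data. "
    "Review the following terms and privacy policy, list any likely GDPR "
    "issues with adopting this product, and then draft a short email to a "
    "colleague explaining whether we can use it.\n\n" + terms
)

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model would do here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Obviously you'd still want a human (or a lawyer) to sanity-check anything it claims about GDPR before sending that email.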

:thumbs:
 
Google wants to take over the web
At its I/O event this week, Google gave us our most comprehensive preview so far of how it intends to reshape its search engine in response to the wave of hype surrounding generative AI and chatbots. [...]

But Google’s plan for the future of search shows us there are going to be very clear tradeoffs if we embrace the vision advocated by these companies. After building its business on the open web, Google has now scraped it onto its servers and will serve up paragraphs plagiarized from the very websites that used to depend on it for traffic. In the process, it will make it unnecessary for many users to continue beyond Google to those other websites, but will allow Google to sell more ads against the content it’s generated based on other people’s work.

Google’s efforts show how power is really being wielded behind the curtain of AI hype. We need to be aware of how companies are using this moment to further centralize power and increase their control over our experience of the web and everything we’ve ever contributed to it. The threat here isn’t sci-fi fantasies of intelligent computers that could exist in the distant future; it’s what companies are doing today that will have serious ramifications for people’s lives — and in many cases already is.
 
Short thread about Altman's presentation the day before.
It seems very very bad that ahead of a hearing meant to inform how this sector gets regulated, the CEO of one of the corporations that would be subject to that regulation gets to present a magic show to the regulators.

Also live toot thread of the hearing, starting here and continuing here

Listening to Josh Hawley fear-hype GPTs and LLMs at 10am on a Tuesday is not a fate I'd wish on anyone, but this is the life and careers I've chosen.

That said, I'll say this for Sam Altman: He's definitely learned how to package "I'm deeply concerned about AI overlords and being hunted in a Terminator-esque wasteland of bombed-out cities and mountains of human skulls" into a mainstream-appealing senate-hearing soundbite.
 
Slightly off topic but related unsurprising news - OpenAI’s Sam Altman nears $100mn funding for Worldcoin crypto project

OpenAI boss Sam Altman is close to securing about $100mn in funding for his plan to use iris-scanning technology to create a secure global cryptocurrency called Worldcoin, [...]

The group includes existing and new investors, said one of the people. Previous investors in the company include Khosla Ventures and Andreessen Horowitz’s crypto fund, as well as FTX founder Sam Bankman-Fried and internet entrepreneur Reid Hoffman.
[...]
Worldcoin executives said their approach tackles two problems raised by the increasing sophistication of artificial intelligence: distinguishing between humans and bots, and providing a form of universal basic income that might offset job losses caused by AI.

Not what he told the politicians above:

it will do tasks, not jobs. This is something that's going to help people with the jobs they have, not displace those jobs.
 
I still find it mindblowing how fast this has happened. I've never used any of these tools myself. Maybe I should, to get a new job though :D

The output from this AI is much better than what a lot of teenagers or even adults could write. Obviously with less human depth....

It comes down to the amount of money being thrown at it and the competition between the big players, which has basically turbocharged development. I posted this on the Expansion of AI and political / social impacts... thread yesterday, after watching a video posted by LDC.


There's so much money being invested (OpenAI alone is valued at about $30 billion) that capitalist competition between various seriously big businesses has turbocharged development. Stages of development that people thought would take, for example, another 2-3 years are now taking a month or even just a week; the money is simply being spent on beating the competition, with little consideration given to whether it's safe.
<snip>
He also spoke of the investment that has turbocharged the development of humanoid robotics, which, combined with AI, will increasingly be able to do more and more jobs; it's not unrealistic that they will be able to do 95% of the jobs humans currently do in as little as 20-30 years. Although, I guess once they have reached that point, it will take another period of time for them to be completely rolled out.
 
The iOS app arrived yesterday and blimey is it fast! Even faster than the web version was when it first got updated late last year!
 
I caught a bit of a radio programme that said AI is proving valuable in new drug discovery; apparently it lets researchers narrow down candidate treatments in much less time than traditional methods. Seemed interesting.
 
I teach and we had an AI training session today about how it will save time with lesson planning, marking, grading etc. Save time my arse, create redundancies more likely!
You could be right, in teaching. Except we've had labour-saving devices before, think of the industrial revolution, and we still have high levels of overall employment.
 
What a fucking idiot. How the fuck did they even make it through law school or whatever without learning that you can't just keep making shit up and expect to get away with it?
Yes, when I first saw it I thought it would be an intern or something but according to their filing (point 6), one has been a lawyer for 30 years and the other one for over 25 years.
 
It comes down to the amount of money being thrown at it and the competition between the big players, which has basically turbocharged development. I posted this on the Expansion of AI and political / social impacts... thread yesterday, after watching a video posted by LDC.

That's not entirely true tbh. Because the open-source community doesn't have the same amount of resources as big tech, they've focused on efficiency, you know, the amount of resources an LLM chews up when it's learning.

Massive LLMs can be made with just 48GB of GPU RAM because they are more efficient. Yeah, the likes of OpenAI and Google might have loads of teams working for them ... but there are a lot of very crafty open-source developers out there who are using AI to craft a better, more efficient AI ... a semi-singularity with humans in the loop ... for now.

If you want, you could download a large open-source LLM and run it on your PC locally ... with NO moral constraints
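Roughly what that looks like in practice: a minimal sketch using Hugging Face transformers with 4-bit quantisation, which is how these models squeeze into modest amounts of GPU RAM. The model name, prompt and memory figures are just illustrative assumptions, not a recommendation.

```python
# Hypothetical sketch of running an open-source LLM locally with 4-bit
# quantisation (via bitsandbytes), so the weights take roughly a quarter
# of the memory they would in fp16. Model name and prompt are examples only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # example open model; swap in any local LLM

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantise weights to 4-bit
    bnb_4bit_compute_dtype=torch.float16,  # do the arithmetic in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                 # place layers on whatever GPU/CPU is available
    quantization_config=quant_config,
)

prompt = "Explain briefly why quantisation reduces GPU memory use."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The "no moral constraints" bit comes down to which model you pick: the safety behaviour lives in the fine-tuning of the weights you download, not in the code above.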

 
Great interview with an expert, Connor Leahy, an AI researcher and co-founder of EleutherAI, on the dangers of AI right now.

If he hasn't greenscreened his background, I think he lives in Southbank London. It would be great if someone could convince him to post here!

 
UK to host major AI summit of ‘like-minded’ countries

There is no way in this world that governments have people's best interests at heart when it comes to AI or are even competent to make any decisions on it.

They don't understand how the internet, encryption and opensource works.

It's impossible for any human to keep up with developments, even if they spent all day engrossed in the subject.
EU rules on privacy and social media suggest you can regulate big tech. Enforcement on AI seems very hard to do, but it will be easier to monitor the big tech companies than it will be to monitor open-source projects and bad-faith actors.
 