At its I/O event this week, Google gave us our most comprehensive preview so far of how it intends to reshape its search engine in response to the wave of hype surrounding generative AI and chatbots. [...]
But Google’s plan for the future of search shows us there are going to be very clear tradeoffs if we embrace the vision advocated by these companies. After building its business on the open web, Google has now scraped it onto its servers and will serve up paragraphs plagiarized from the very websites that used to depend on it for traffic. In the process, it will make it unnecessary for many users to continue beyond Google to those other websites, but will allow Google to sell more ads against the content it’s generated based on other people’s work.
Google’s efforts show how power is really being wielded behind the curtain of AI hype. We need to be aware of how companies are using this moment to further centralize power and increase their control over our experience of the web and everything we’ve ever contributed to it. The threat here isn’t sci-fi fantasies of intelligent computers that could exist in the distant future; it’s what companies are doing today that will have serious ramifications for people’s lives — and in many cases already is.
It seems very very bad that ahead of a hearing meant to inform how this sector gets regulated, the CEO of one of the corporations that would be subject to that regulation gets to present a magic show to the regulators.
Listening to Josh Hawley fear-hype GPTs and LLMs at 10am on a Tuesday is not a fate I'd wish on anyone, but this is the life and career I've chosen.
That said, I'll say this for Sam Altman: He's definitely learned how to package "I'm deeply concerned about AI overlords and being hunted in a Terminator-esque wasteland of bombed-out cities and mountains of human skulls" into a mainstream-friendly senate-hearing soundbite.
OpenAI boss Sam Altman is close to securing about $100mn in funding for his plan to use iris-scanning technology to create a secure global cryptocurrency called Worldcoin, [...]
The group includes existing and new investors, said one of the people. Previous investors in the company include Khosla Ventures and Andreessen Horowitz’s crypto fund, as well as FTX founder Sam Bankman-Fried and internet entrepreneur Reid Hoffman.
[...]
Worldcoin executives said their approach tackles two problems raised by the increasing sophistication of artificial intelligence: distinguishing between humans and bots, and providing a form of universal basic income that might offset job losses caused by AI.
it will do tasks, not jobs. This is something that's going to help people with the jobs they have, not displace those jobs.
I still find it mindblowing how fast this has happened. I've never used any of these tools myself. Maybe I should, to get a new job though.
The output from this AI is much better than what a lot of teenagers or even adults could write. Obviously with less human depth....
There's so much money being invested (OpenAI alone is valued at about $30 billion) that capitalist competition between various seriously big businesses has turbocharged development. Stages of development people thought would take another 2-3 years are now taking a month or even just a week; the money is simply being spent on beating the competition, with little consideration for whether any of it is safe.
<snip>
He also spoke of the investment that has turbocharged the development of humanoid robotics, which, combined with AI, will be able to do more and more jobs; it's not unrealistic that they will be able to do 95% of the jobs humans currently do in as little as 20-30 years. Although, I guess once they have reached that point, it will take another period of time for them to be completely rolled out.
You could be right, in teaching. Except overall we have had situations of labour-saving devices in the past (think industrial revolution) and still we have high levels of overall employment.

I teach and we had an AI training session today about how it will save time with lesson planning, marking, grading etc. Save time my arse, create redundancies more likely!
This is completely nuts:
Yes, when I first saw it I thought it would be an intern or something, but according to their filing (point 6), one has been a lawyer for 30 years and the other one for over 25 years.

What a fucking idiot. How the fuck did they even make it through law school or whatever without learning that you can't just keep making shit up and expect to get away with it?
It comes down to the amount of money being thrown at it and the competition between the big players, which has basically turbocharged development. I posted this on the Expansion of AI and political / social impacts... thread yesterday, after watching a video posted by LDC.
Skynet says 'Hold my beer':
Anyone come across The Lost Tapes of The 27th Club collection based on AI?

Hadn't heard of it before. Just listened to a few and they're not bad.

Aside from the opening spoken word bit, The Doors one is great imo.
EU rules on privacy and social media suggest you can regulate big tech. AI enforcement seems very hard to do, but it will be easier to monitor the big tech companies than open source and bad-faith actors.

UK to host major AI summit of ‘like-minded’ countries
There is no way in this world that governments have people's best interests at heart when it comes to AI or are even competent to make any decisions on it.
They don't understand how the internet, encryption and open source work.
It's impossible for any human to keep up with developments, even if they spent all day engrossed in the subject.