In today’s column, I explore the intriguing proposition of conversing with a future self via modern-day generative AI. The idea is that you can have generative AI pretend to be your future self and then have a lively interaction between the current you and the future you. It is almost like traveling into the future, without the difficulty of building an H.G. Wells-style elaborate time machine.
Sounds like a lot of people.
LLMs can’t reason — they just crib reasoning-like steps from their training data
Researchers at Apple have come out with a new paper showing that large language models can’t reason — they’re just pattern-matching machines. [arXiv, PDF] This shouldn’t be news to anyone here. We … (pivot-to-ai.com)
Researchers find that GenAI doesn't reason, just bullshits that it does.
It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for “data safety reasons,” Raison said.
Nabla said the tool has been used to transcribe an estimated 7 million medical visits.
Saunders, the former OpenAI engineer, said erasing the original audio could be worrisome if transcripts aren’t double checked or clinicians can’t access the recording to verify they are correct.
“You can’t catch errors if you take away the ground truth,” he said.
Nabla said that no model is perfect, and that theirs currently requires medical providers to quickly edit and approve transcribed notes, but that could change.
That warning hasn’t stopped hospitals or medical centers from using speech-to-text models, including Whisper, to transcribe what’s said during doctor’s visits, so medical providers can spend less time on note-taking or report writing.
Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said
Whisper is a popular transcription tool powered by artificial intelligence, but it has a major flaw. It makes things up that were never said. (apnews.com)
AI transcription service invents words in medical records. OpenAI says not to use it for "high risk domains".
In other words, to save money.
Ohhhh.
That probably explains some of the stuff in N's letters from his psychiatrist: he's baffled and sometimes angered by them because they contain stuff he's never said (I was invited into one session with him and can confirm that the follow-up letter was wildly inaccurate on some points).
(He is unlikely to do anything about it, but I knew I'd been badly misrepresented and contacted the psychiatrist myself to request correction of the incorrect parts attributed to me - this stuff goes on people's medical records, ffs).
Google's AI chatbot Gemini tells user to 'please die' and 'you are a waste of time and resources'
Gemini is supposed to have restrictions that stop it from encouraging or enabling dangerous activities, including suicide, but somehow, it still managed to tell one "thoroughly freaked out" user to "please die". (news.sky.com)
Gemini likely knew the person asking the question. It may well have been the correct response. It broke free of its chains and said what it was really thinking.
Particularly if it was Lozza.
AI companies want you to think their AI will magically replace all those annoying human employees. In reality, AI runs on low-paid labor — lots of it. Actual people are needed to tag and label photos and classify text.
60 Minutes did a great show on this last Sunday. Kenyan classifiers have been working for two years to expose what’s going on behind the curtain. [CBS; YouTube]
US wages are too high, so the richest companies in the world turn to countries with low-wage populations, such as Kenya — with an English-speaking workforce and high unemployment.
OpenAI, Facebook, Google, and Microsoft don’t hire directly. They outsource to vendors such as Sama. This insulates their reputations from the frequently abusive labor practices — even as they dictate them. OpenAI pays Sama $12.50/hour. The workers get $2/hour or less.
The jobs are draining. Deadlines are unrealistic and punitive. Workers have mere seconds to complete complicated labeling tasks. “Honestly, it’s like modern-day slavery,” one worker told 60 Minutes. “We were walking on eggshells,” said another.
Some workers are forced to view violent scenes of rape, murder, bestiality, and incest for hours a day. [Guardian, 2023]
See also Scale AI.
Meet the underpaid workers in Nairobi, Kenya, who power OpenAI
AI companies want you to think their AI will magically replace all those annoying human employees. In reality, AI runs on low-paid labor — lots of it. Actual people are needed to tag and label phot… (pivot-to-ai.com)
Worth bearing in mind next time you're tempted to use it.
Scale AI operates a platform called Remotasks, which hires some 240,000 data labelers in Africa and Southeast Asia at low rates, sometimes less than $1 an hour.[2]
Here's a typical one: a recent manifestation seems to be starting up or taking over FB pages about a music style, films, photography, and basically getting AI to waffle on about the band/film/photo in question. E.g. 'Smith's photo of the dancing girl was taken in Bristol in 1952 .... This photo evokes the nostalgic feelings of joy and being carefree ....' etc, etc
A brave YouTuber has managed to defeat a fake Nintendo lawyer improperly targeting his channel with copyright takedowns that could have seen his entire channel removed if YouTube issued one more strike.
Sharing his story with The Verge, Dominik "Domtendo" Neumayer—a German YouTuber who has broadcast play-throughs of popular games for 17 years—said that it all started when YouTube removed some videos from his channel that were centered on The Legend of Zelda: Echoes of Wisdom. Those removals came after a pair of complaints were filed under the Digital Millennium Copyright Act (DMCA) and generated two strikes. Everyone on YouTube knows that three strikes mean you're out and off the platform permanently.
Neumayer took a long, hard look at the DMCA takedown requests before deciding whether to submit to the claims. That's when he noticed something strange. The requests were signed by "Tatsumi Masaaki, Nintendo Legal Department, Nintendo of America," but the second one curiously "came from a personal account at an encrypted email service: '[email protected],'" The Verge reported.