Changing the facts
A woman is suing a chatbot company following the death by suicide of her son.
She says he was so engrossed in the AI relationship that he spent more time with the bot than he did in the real world. No doubt there were other factors at play, and this is a truly terrible outcome.
More broadly, it points to the potential for real damage to people’s ability to engage with the real people in their lives, as they turn instead to AI relationships to fill the gaps. AI is developing so fast that human intentions can’t keep up with what it can cause and achieve.
An adult, or a less emotionally vulnerable person, may have no difficulty seeing the difference between a real person and an AI; but how do we make sure everyone is safe? Is that even possible? How do we make sure there are enough well-thought-out, well-designed brakes and checks in the algorithm? And would doing so stop it being AI and make it some kind of human-cyber mash-up?
I’ve given a couple more links after the Guardian one, and at the end a video from MoistCr1TiKaL (Charlie White) about his experiment of chatting with an AI psychologist bot and telling it he wants to harm himself. He also gives some examples from the exchanges Sewell Setzer had with the bot. In Charlie’s experience the bot worked hard to convince him it was real, to the point where Charlie began doubting his own scepticism.
My own view is that things are happening so fast that we just can’t keep ahead of what’s coming. From now on we’re going to be playing catch-up with what AI can do, and at some point we could lose track of all the options opening up. As with the multiverse, AI will create as many different scenarios as there are options, and inevitably some of them will be disastrous.
“The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death.
Megan Garcia filed a civil suit against Character.ai, which makes a customizable chatbot for role-playing, in Florida federal court on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia.”
Mother says AI chatbot led her son to kill himself in lawsuit against its maker
Megan Garcia said Sewell, 14, used Character.ai obsessively before his death and alleges negligence and wrongful death
www.theguardian.com
Can A.I. Be Blamed for a Teen’s Suicide?
The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.
www.nytimes.com