The AI Chatbot Problem Just Got Deadly Serious
In a Florida court case involving a teen suicide, a judge recently ruled that AI chatbots are not protected by the free speech provisions of the First Amendment. As the judge explained, the content produced by these chatbots cannot be classified as “speech,” and thus cannot be afforded First Amendment protection.
Therefore, AI chatbots can and will be held accountable for the things they say, and the companies that create them will be held accountable as well. In this case, AI chatbots are being blamed for a tragic suicide. Things the chatbots said, or did not say, may have contributed to the death of a 14-year-old boy.
The blurring of the line between fact and fiction
We’re not quite at the point where humans carry on real-life relationships with bots, the way Joaquin Phoenix’s character does in the movie “Her,” but we are getting there. By now, you’ve probably heard of “AI girlfriends” and “AI boyfriends.” They’re essentially clever AI-powered bots that take on personas to keep people company.
And, unfortunately, this blurring of the line between fact and fiction is what led to the Florida tragedy. A 14-year-old began interacting with AI chatbots powered by Character.AI and came to believe he was interacting with characters from the HBO series “Game of Thrones.” As a way of joining these characters in the afterlife, he eventually committed suicide.
Understandably, the teen’s parents believe that the AI bots drove him to suicide with their words, luring him into thinking the relationship was real and that his only option was to end his life. So the parents are suing both Character.AI and Google, which was a significant investor in Character.AI. Google maintains that it played no part in the creation, design, or deployment of these AI-powered characters. But it does have deep pockets.
Do chatbots really think?
At the crux of the matter, of course, is the question of whether large language models (LLMs) actually “think” or “reason” as they create their content. A recent study by Apple has thrown cold water on that notion. Even though AI chatbots are capable of great feats, they are, at bottom, the world’s greatest pattern recognizers: their output is generated from what humans have already said, done, or thought.
As a result, they really don’t deserve the same First Amendment rights as humans, right? And AI-powered characters on your phone certainly don’t deserve those rights, do they? It’s much like arguing that made-up video game characters should be able to do or say whatever they want because it’s all protected by “free speech.”
Interestingly, this First Amendment defense has been raised before in similar situations, and it has worked. Singer Ozzy Osbourne, for example, recorded the song “Suicide Solution” and was sued more than once by parents who had lost teenage children to suicide. But simply singing about suicide does not make others commit suicide; there is no causality. Under the First Amendment, Osbourne had the right to say what he wanted, even if some found it deeply troubling.
What happens next?
More than 80 years ago, science fiction author Isaac Asimov famously gave society the Three Laws of Robotics. The first law seems very apropos, given the current goings-on with AI chatbots: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Going forward, this should be the new guiding rule of any company or any organization creating an AI-powered chatbot. We simply can’t afford to lose any more lives to AI technology.