A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment – at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging that the company's chatbots pushed a teenage boy to kill himself.
The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The lawsuit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son, Sewell Setzer III, fell victim to a Character.AI chatbot.
Meetali Jain of the Tech Justice Law Project, one of Garcia's attorneys, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."
The lawsuit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI observers in the US and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships, despite what experts warn are potentially existential risks.
"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida who focuses on the First Amendment and artificial intelligence.
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was modeled after a fictional character from the TV show "Game of Thrones." Moments after receiving the bot's final message, Setzer shot himself, according to legal filings.
In a statement, a Character.AI spokesperson pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.
"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.
Attorneys for the developers want the case dismissed because, they argue, chatbots deserve First Amendment protections, and a ruling to the contrary could have a "chilling effect" on the AI industry.
In her order Wednesday, US District Judge Anne Conway rejected some of the defendants' free speech claims, saying she is "not prepared" to hold that the chatbots' output constitutes speech "at this stage."
Conway did find that Character Technologies can assert the First Amendment rights of its users. She also determined that Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the platform's founders had previously worked on AI at Google, and the lawsuit says the tech giant was "aware of the risks" of the technology.
"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character.AI are entirely separate, and Google did not create, design, or manage Character.AI's app or any component part of it."
However the lawsuit plays out, Lidsky says the case is a warning about the risks of entrusting our emotional and mental health to AI.
"It's a warning to parents that social media and generative AI devices are not always harmless," she said.