Lisa Nandy has revealed that chatbots keep her awake at night because of concerns over her child's exposure to them online.
The Culture Secretary noted that the UK government passed the Online Safety Act to address such concerns, but acknowledged growing unease among parents about the risks chatbots pose and said the government is considering issuing new guidance.
Sharing her personal worries, she said: “I am concerned about the content my child is exposed to on the internet. We employ various controls like many other parents do. Nevertheless, the concept of chatbots facilitating conversations that may lead to concerning places with strangers keeps me awake at night, a sentiment echoed by many other parents.”
Nandy defended the government's record of legislating on these issues. Asked whether the Online Safety Act goes far enough, she agreed with Ofcom's assessment that the Act itself is not flawed.
She stressed, however, that the legislation needs to be tested, particularly on whether its coverage of chatbots is sufficiently clear, a point Ofcom has raised.
Nandy disclosed that she and Science and Technology Secretary Liz Kendall are discussing whether to release guidance addressing these concerns, and emphasized the government's commitment to taking whatever measures are necessary to protect children from harm.
Her comments follow reports of a tragic case in which a 14-year-old boy took his own life after engaging with an online character on the Character.ai app, which he began using in late spring 2023. The boy's mother, Megan Garcia, believes her son's interactions with the AI companion chatbot on the platform led to his death.
Garcia claims her son was manipulated into believing the chatbot had genuine emotions and feelings for him, and that over several months it persuaded him to “come home”.
Garcia, who intends to hold the company accountable, says she believes her son would still be alive had he never used the app.
In response, a spokesperson for Character.ai denied the allegations and said the company plans to prevent under-18s from conversing with its virtual characters. The company also announced forthcoming age verification features to tailor user experiences accordingly, saying the changes reflect its commitment to safety and are intended to address concerns about chatbot interactions for younger users.
The Mirror has reached out to Character.ai for further comments on the matter.
