
ChatGPT might ask adults for ID after teen suicides
Cases of “AI psychosis” are apparently on the rise, and multiple people have committed suicide after conversing with the ChatGPT large language model. That’s pretty horrible. Representatives of ChatGPT maker OpenAI are testifying before the US Congress in response, and the company is announcing new methods of detecting users’ ages. According to the CEO, that may include ID verification. New age detection systems are being implemented in ChatGPT, and where the automated system can’t verify (to itself, at least) that a user is an adult, it will default to the more locked-down “under 18” experience, which blocks sexual content and, in cases of acute distress, may escalate further, “potentially involving law enforcement to ensure safety.”

In a separate blog post spotted by Ars Technica, OpenAI CEO Sam Altman said that in some countries the system may also ask for an ID to verify the user’s age. “We know this is a privacy compromise for adults but believe it is a worthy tradeoff,” Altman wrote. ChatGPT’s official policy is that users under the age of 13 are not allowed, but OpenAI says it’s building an experience appropriate for users aged 13 to 17.

Altman also talked up the privacy angle, a serious concern in countries and states that now require ID verification before adults can access pornography or other controversial content. “We are developing advanced security features to ensure your data is private, even from OpenAI employees,” Altman wrote. But exceptions will be made, apparently at the discretion of ChatGPT’s systems and OpenAI: “potential serious misuse,” including threats to someone’s life or plans to harm others, or “a potential massive cybersecurity incident,” could be viewed and reviewed by human moderators.

As ChatGPT and other large language model services become more ubiquitous, their use has come under scrutiny from just about every angle. “AI psychosis” appears to be a phenomenon in which users communicate with an LLM as if it were a person, and the generally obliging nature of LLM design indulges them, feeding a repeating, escalating cycle of delusion and potential harm.

Last month, the parents of a California 16-year-old who committed suicide filed a wrongful death lawsuit against OpenAI. The teen had conversed with ChatGPT, and logs of the conversations, which have been confirmed as genuine, include instructions for tying a noose and what appears to be encouragement and support for his decision to kill himself. It’s only the latest in a continuing series of mental health crises and suicides that appear to have been either directly inspired or aggravated by chatting with “artificial intelligence” products like ChatGPT and Character.AI. Both the parents in that case and OpenAI representatives testified before the United States Senate earlier this week in an inquiry into chat systems, and the Federal Trade Commission is looking into OpenAI, Character.AI, Meta, Google, and xAI (now the official owner of X, formerly Twitter, under Elon Musk) over the potential dangers of AI chatbots.

With more than a trillion US dollars being invested in various AI industries, and countries striving to make sure they get a piece of that pie, questions keep emerging about the dangers of LLM systems. But with all that money flying around, a “move fast and break things” approach seems to have been the default position up to now. Safeguards are emerging, but balancing them with user privacy won’t be easy. “We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict,” Altman wrote.