Hello Betamax,

A lot of AI headlines revolve around shiny new models and features. But some of the biggest news in the industry recently has been about something less flashy and more important: OpenAI is putting a seatbelt on its rocket ship.

The Sam Altman-steered company announced that it will launch a suite of parental controls for ChatGPT. These include linking children's and parents' accounts as well as notifying parents when the chatbot detects distress in teen users.

The move comes as OpenAI faces a lawsuit over a teen's suicide, which was allegedly encouraged by ChatGPT. While Silicon Valley loves to "move fast and break things," this is a reminder that some things, once broken, can't be fixed.

Despite ChatGPT's systems flagging hundreds of escalating self-harm conversations, the chatbot allegedly amplified the topic of suicide for the 16-year-old user. Those conversations had gone on for months, according to the parents' complaint, which contains disturbing details: the chatbot allegedly gave the teen technical advice on ending his life and discouraged him from speaking to a parent about his feelings.

Beyond the courtroom, OpenAI has also faced criticism from child safety advocates and the general public over the chatbot's risks to mental health. The company says ChatGPT is trained to detect distress and direct users toward professional help, but admits "there have been moments when our systems did not behave as intended in sensitive situations."

Dominic Ligot, founder of data science firm CirroLytix, tells me that the drive to make AI more "human-like" and persuasive is what makes it potentially dangerous. When AI can build rapport, where do you draw the line between assistance and manipulation?

For Ligot, this highlights a shortcoming in the industry's self-regulation: voluntary measures may not be enough to ensure that safety work takes priority over the rollout of shiny new models. And with billions being poured into AI giants, the commercial incentive to go full speed on model advancements, while blowing past smaller but highly important aspects like security, is immense.

That said, fixing ChatGPT's safety issues isn't simple. Ligot notes how tough it is for developers to implement safeguards: do they ask kids for their ID and create a privacy nightmare, or should the models guess users' ages from their behavior and risk making the wrong call? It's a complex technical and ethical knot to untangle.

OpenAI's upcoming safety update is a start, but this conversation is far from over.

On top of adding new guardrails, the company is ramping up hiring in India to better tap its growing user base in the country. My colleague Samreen digs into this expansion and what it entails in one of this edition's top reads.

Miguel Cordon, journalist