Hello Betamax,

One of the early promises of AI was simple: automate the routines of daily life and, in doing so, return time to the people living it. That ambition now underpins projects like India's planned US$650 million AI city in Bengaluru, which my colleague Samreen dives into in this edition's top AI story from our desk.

But as that vision edges closer to reality, the trade-offs are becoming harder to ignore. Handing over more of the physical world to machines carries risks, ones that grow more pronounced as AI agents become more capable and less predictable.

Take Mythos, for example. Anthropic's new model can operate like an elite cybersecurity researcher, matching the combined capabilities of multiple human experts. Mythos uncovered more than 2,000 previously unknown software vulnerabilities in seven weeks. It's so adept at finding exploits that Anthropic deemed it too dangerous for public release, restricting access to trusted partners.

But even those guardrails failed. This week, unauthorized third-party users hijacked access to Mythos. If the world's most advanced security AI can't be kept safe behind closed doors, what hope do we have for consumer-grade systems?

And if malicious actors can breach a top-tier model, the implications extend further still. The prospect of an entire city's AI backbone being compromised no longer feels remote, particularly as modern cities are already strained by climate pressures, congestion, and population growth.

Risks aside, the AI race is only getting wilder by the day. DeepSeek launched its V4 foundation model, and OpenAI dropped GPT-5.5 and a new ChatGPT image model, all within the past week. Naturally, our in-house designer had to play with OpenAI's ChatGPT Images 2.0 to get a feel for the hype; you can see the result in this week's featured illustration. Picture a control room where humans and machines work side by side to keep an AI city from spiraling under attack.

Want in?

Miguel Cordon, journalist