WHAT WE THINK

1️⃣ Communities take action on data centers in their backyards

By 2030, data centers designed for hyperscalers will double their current power capacity. To fill this gap, more companies are looking to build facilities in emerging markets. But not all countries agree with those plans. The Environmental Reporting Collective published a series of reports written by journalists across the globe, from Brazil to Thailand, on how communities are pushing back against data centers planned for their backyards. The reports reveal that most countries still lack proper policies to regulate these facilities, especially regarding their environmental impact. Some neighborhoods may also lack enough clean water to supply them. Still, that hasn't stopped governments the world over from inviting more investment into their constituencies. Do we really need chatbots more than water?

2️⃣ American AI's existential crisis

Anthropic's dramatic breakup with the Pentagon has put two polarizing views in the spotlight: on one end, the pursuit of AI's unrestrained acceleration; on the other, a push for more cautious development. After Anthropic CEO Dario Amodei refused to allow Claude to be used for mass surveillance of US civilians and for autonomous weapon systems, the Pentagon cut ties with his firm. Defense Secretary Pete Hegseth labeled the company a "supply chain risk to national security," and President Trump subsequently banned government agencies from partnering with the AI platform. So far, Amodei's stand on principle has won the company favor with the public. Claude reached the top of App Store downloads in the US and ranked among the top three in the UK last week. Meanwhile, Sensor Tower reports that daily uninstalls of ChatGPT in the US rose 295% over the past week. This followed OpenAI signing a deal with the US government within hours of Anthropic's walking away.
A movement urging users to cancel their ChatGPT subscriptions has already gained ground. QuitGPT, a US-based consumer boycott movement, says its efforts have led to about 2.5 million subscription cancellations so far. This suggests the public may want AI for personal entertainment or daily optimizations, but perhaps not in situations where life-or-death decisions can be made. Even Anthropic's peers are unsettled by the US government's heavy-handedness. The Information Technology Industry Council, which counts the likes of Amazon and Nvidia as members, recently expressed formal concern over the "supply-chain risk" labeling. While the council didn't name Anthropic directly in its official letter to Hegseth, the subtext was clear: the industry is pushing back against federal strong-arming. I'm not optimistic that these developments will change how the US treats government contracts with AI companies. Yet there is something undeniably reassuring about the lines being drawn.