Trump has an AI data center problem ahead of the midterms. CNBC and others reported that President Trump is facing a growing political dilemma ahead of the 2026 midterms as the U.S. races to build energy-hungry AI data centers. The infrastructure needed to power the AI boom is driving concerns about rising electricity prices and strain on the grid, prompting backlash from voters and local communities. In response, major tech companies—including OpenAI, Microsoft, Google, Amazon, Meta, and Oracle—have pledged to cover the energy and infrastructure costs associated with their AI data centers so that consumers don’t see higher utility bills. The voluntary agreement, promoted by the White House as a way to ease voter concerns, reflects a broader tension: policymakers want the economic and geopolitical advantages of rapid AI expansion, but the enormous electricity demands of the technology are creating political and environmental pressures that are becoming harder to ignore.
Don't trust AI to file your taxes. In results that should surprise no one, a test by The New York Times found that AI is no match for the U.S. tax code, highlighting an important limitation of today’s AI chatbots: they still struggle with tasks that require precise, multi-step reasoning. To assess the technology’s ability to file a federal income tax return, the paper tested four AI chatbots—Google’s Gemini, OpenAI’s ChatGPT, Anthropic’s Claude, and xAI’s Grok—to see how well they fared with eight fictional tax situations. They struggled, hard, miscalculating the refund or amount owed to the Internal Revenue Service by an average of more than $2,000. Even when provided with all the necessary materials, including the very forms they needed to fill out, the chatbots whiffed on some calculations. The problem reflects a fundamental limitation of large language models: they are designed to predict likely words rather than precisely track complex, interconnected information, making them strong at writing and summarization but weaker at procedural tasks like tax filing. Experts say the systems may improve with additional reasoning tools and verification layers, but for now they work best as assistants rather than replacements—another reminder that even as AI reshapes industries from coding to medicine, some seemingly simpler tasks remain surprisingly difficult.
Anthropic’s AI tool Claude is central to U.S. campaign in Iran, amid a bitter feud. A new report from The Washington Post highlights how quickly AI has moved from experimentation to the battlefield. According to the paper, the U.S. military used an AI-enabled targeting system called Maven Smart System—built by Palantir and incorporating Anthropic’s Claude model—to help identify and prioritize targets during recent U.S. operations in Iran, accelerating what once took weeks of military planning into near-real-time decision making. Yet the deployment comes amid a bitter dispute between Anthropic and the Pentagon over limits on how its technology can be used in warfare, including concerns about autonomous weapons and mass surveillance. The episode underscores both the growing strategic importance of frontier AI systems and the tension between government demand for rapid deployment and companies’ attempts to set safety boundaries.