Tuesday, December 23, 2025
Silicon Valley’s tone-deaf take on the AI backlash will matter in 2026


Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…why Silicon Valley needs to read the room on AI skepticism…how Christian leaders are challenging the AI boom…Instacart ends AI-driven pricing tests that pushed up costs for some shoppers…and what will your life look like in 2035?

I’ve noticed a familiar frustration in Silicon Valley with public skepticism toward AI. The complaint goes like this: People outside the industry don’t appreciate the rapid, visible—and, to insiders, near-miraculous—advances that AI systems are making. Instead, critics and everyday users believe either that AI progress has stalled, or that the technology is just a hungry, plagiarizing machine spewing useless slop.

To AI optimists from San Francisco to San Jose, that skepticism is deeply misguided. AI progress is not stopping anytime soon, they argue, and the technology is already helping humanity—by contributing to cutting-edge research and boosting productivity, particularly in areas like coding, math, and science.

Take this excerpt from a recent post by Roon, a popular pseudonymous account on X written by an OpenAI researcher:

“Every time I use Codex to solve some issue late at night or GPT helps me figure out a difficult strategic problem, I feel: what a relief. There are so few minds on Earth that are both intelligent and persistent enough to generate new insights and keep the torch of scientific civilization alive. Now you have potentially infinite minds to throw at infinite potential problems. Your computer friend that never takes the day off, never gets bored, never checks out and stops trying.”

I understand Roon’s excitement—and his impatience with people who seem eager to declare AI a bubble every time it hits a setback. Who wouldn’t want, as he puts it, a “computer friend that never takes the day off, never gets bored, never checks out and stops trying”?

Thrilling to one may sound threatening to another
The answer, in fact, is: many. What sounds like thrilling abundance to people building AI often sounds unsettling—or even threatening—to everyone else. Even among the hundreds of millions now using tools like ChatGPT, Gemini, and Claude, there is plenty of anxiety. Maybe it’s concern about jobs. Maybe it’s a data center coming to their backyard. Maybe it’s the fear that the benefits of the AI boom will accrue only to a narrow set of companies and communities. Or maybe it’s the fact that many people are already preoccupied with non-AI problems—making rent, saving for a home, raising a family, dealing with health issues, keeping the lights on.

In that context, the promise of a tireless, 24/7 digital mind can feel distant from daily life—or worse, like a threat to livelihoods and self-worth. And for many (even me, in my freaked-out moments), it simply feels creepy.

The disconnect will only grow harder to ignore in 2026
As we head into 2026, Silicon Valley needs to read the room. The disconnect between how AI is framed by its builders and how it is experienced by the public isn't being properly addressed, and it will only grow harder to ignore as societal and political backlash mounts.

On X yesterday, Sebastian Caliri, a partner at venture capital firm 8VC, argued that “folks in tech do not appreciate that the entire country is polarized against tech.” Silicon Valley needs a better story, he said: one that people can really buy into.

“People do not care about competition with China when they can’t afford a house and healthcare is bankrupting them,” he wrote. “If you want our industry to flourish, and you earnestly believe we will be better off in 5 years by embracing AI, you need to start showing ordinary people a reason to believe you and quickly.” 

My take is that AI companies spend an enormous amount of time trying to impress: Look at what my AI can do! And yes, as someone who uses generative AI every single day, I agree it is incredibly impressive—regardless of what the critics say, and regardless of whether you believe Big Tech ever had the right to scrape the entire internet to make it so.

But ordinary people don’t need to be impressed. They need answers: about jobs, costs, and who actually benefits; about societal impact and what their own futures look like in an AI-driven economy; about what billionaires are really discussing behind closed doors. Without that, all the AI bells and whistles in the world won’t bring people on board. What you’ll get instead is skepticism—and not because people don’t understand AI, but because, given what’s at stake, it’s a rational response.

On that note, have a very happy holiday and new year! Eye on AI is going on a holiday break and will be back on January 6.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

AI IN THE NEWS

How Christian leaders are challenging the AI boom. This interesting article from Time reports that Christian leaders across denominations and affiliations—including Catholics, evangelicals, and Baptists—are increasingly vocal in pushing back against the rapid acceleration of AI and urging caution in both public discourse and policy. Rather than rejecting technology outright, many faith figures are concerned about AI’s impact on family life, human relationships, labor, children, and organized religion itself. They are raising these issues in sermons, open letters, and conversations with lawmakers. At the top of the Catholic hierarchy, Pope Leo XIV has used his platform to warn about AI’s potential harms, even as he acknowledges possible benefits like spreading the Gospel. Other leaders have criticized AI companions for isolating users, especially young people, and expressed discomfort with Silicon Valley’s use of religious language to promote technology. 

Instacart ends AI-driven pricing tests that pushed up costs for some shoppers. According to CNBC, Instacart said it will stop allowing retailers to run AI-driven pricing experiments on its grocery delivery platform after consumer groups and lawmakers raised alarms that shoppers were paying different prices for identical items at the same store. The company said retailers will no longer be able to use its Eversight technology—acquired for $59 million in 2022—to test price increases or decreases on Instacart, after acknowledging that the experiments “missed the mark” and undermined trust at a time when families are struggling with food costs. A Consumer Reports–led study found that identical baskets of goods could vary in price by about 7%, potentially costing customers more than $1,000 extra per year. While Instacart said the tests were not based on personal data and rejected claims of “surveillance pricing,” the move comes amid growing regulatory scrutiny, including an FTC inquiry into its pricing practices and a recent $60 million settlement over deceptive subscription tactics.

What will your life look like in 2035? I want to shout out this really cool interactive piece from the Guardian, which explores how everyday life might look in 2035 as a future artificial general intelligence (AGI) becomes deeply embedded in society, transforming work, health care, farming, law, and daily routines. For example, by then AI could act as the first point of contact in medicine—handling pre-diagnosis and personalized treatment suggestions—while human doctors focus on oversight and wearable AI devices manage information and anticipate needs. In professions like law and agriculture, advanced AI could handle research, argument preparation, and real-time monitoring of crops and livestock, potentially increasing efficiency but raising questions about fairness, bias, and transparency. Work itself may shift dramatically: AI augmentation could boost productivity, enabling shorter workweeks and more leisure for some, even as others get laid off or struggle with purpose and mental health in a world where routine tasks are automated. 

EYE ON AI RESEARCH

Can LLMs actually discover science and function as “AI scientists”? The answer is no, according to this interesting new paper from Harvard and MIT, which found that today’s most sophisticated LLMs may talk and write like scientists, but they don’t think like scientists.

When the 50+ co-authors from around the world evaluated state-of-the-art LLMs on a new framework, they found that performance on scientific discovery tasks lagged behind results on standard science benchmarks; scaling up models and enhancing reasoning yielded diminishing returns for discovery-oriented tasks; and there were systematic weaknesses shared across different top models, suggesting that current architectures aren’t yet well suited for real scientific workflows.

The paper noted that LLMs do show promise on parts of the discovery process, especially when guided exploration and serendipity are involved, and the authors argue that the framework they used provides a practical path for future progress toward AI that can truly assist scientific discovery.

AI CALENDAR