Tech Brew // Morning Brew // Update
Plus, Meta turns memes into a product feature.

Mike Tyson promoted it. Grok powers it. It lasted about five minutes before the internet broke it. The Department of Health and Human Services' Realfood.gov site features an AI chatbot meant to help users choose less processed food. Instead, it quickly produced inconsistent answers—including contradicting its own nutrition guidance (telling people to eat less protein, not more) and advising users on which human body parts are most nutrient dense.

It’s a familiar lesson: If there’s a way to push a chatbot off the rails, the internet will find it fast. And considering the Realfood.gov bot is essentially just Grok with a government logo slapped on it, the chaos was probably inevitable. When you call something like that a health resource, "best foods to insert into your rectum" is exactly the kind of thing the internet will ask about first.

Also in today's newsletter:

  • Safety leaders are walking out of AI’s biggest labs.
  • Meta lets you negotiate with your social media feed.
  • Elon’s latest rant includes aliens. Yes, really. (You can watch the video.)

—Whizy Kim, James Atamian, and Saira Mueller

THE DOWNLOAD


Anna Kim

TL;DR: Senior employees at OpenAI, xAI, and Anthropic all headed for the exits this week—some with dramatic social media farewells, one with a New York Times op-ed, and one safety lead with a lofty two-page letter warning that "the world is in peril." Their exact reasons vary, but the throughline is this: Key people tasked with keeping AI safe are departing over ethical concerns, as their former employers speedrun product improvements and updates.

What happened: In the past week, at least four top figures at OpenAI, Anthropic, and xAI have been “loud quitting.”

On Monday, Anthropic Senior Safety Researcher Mrinank Sharma resigned with a two-page letter—complete with footnotes—posted on X, citing worries about "interconnected crises" beyond just AI and saying he constantly felt “pressures to set aside what matters most.” It's an echo of Einstein, who expressed remorse about the part he played in the creation of the atomic bomb, saying that if he'd known, he would “have become a watchmaker." Sharma, for his part, plans to become “invisible” and study poetry.

Then, OpenAI Research Scientist Zoë Hitzig published a NYT op-ed on Wednesday announcing her resignation. The reason? ChatGPT's new ad rollout. Hitzig compared OpenAI's trajectory to Facebook's, arguing that users have entrusted the chatbot with an unprecedented “archive of human candor.” And while OpenAI says ads will be labeled and won’t influence responses, Hitzig worries these pledges will soon fall by the wayside because “the company is building an economic engine that creates strong incentives to override its own rules.”

On Monday and Tuesday, two xAI co-founders also left the company, meaning that half of xAI’s founding members have now exited. Tony Wu and Jimmy Ba posted amicable-sounding farewells on X, with Ba saying 2026 would be "the busiest (and most consequential) year for the future of our species." Musk, however, made it sound like it was totally his choice to push them out.

Turnover at big tech firms isn’t abnormal, but all these high-profile exits in a short period of time point to a bigger ethical problem. Employees at AI companies are grappling every day with how to prioritize issues like user safety and balance them with advances in technology and their employers' need for more revenue.

Gradually, then suddenly: The departures, and the foreboding missives published with them, come as public concern mounts that safety is taking a back seat to new product features and marketing stunts at AI companies. OpenAI alone has shipped over 20 model updates in the past year, plus shopping, an app marketplace, and now ads. Anthropic released five major models in the same span, as well as an agentic coding tool and an AI agent for daily tasks.

Meanwhile, Platformer reported yesterday that OpenAI has dissolved its seven-person mission alignment team, which was created in 2024 to ensure that development of AGI—a level of intelligence that surpasses human ability—stays true to OpenAI's founding mission of benefiting humanity. Team leader Joshua Achiam told Platformer that its research was "wrapping up," and that he'd take on the new title of "chief futurist," whatever that is.

The Wall Street Journal also reported Tuesday that OpenAI fired safety exec Ryan Beiermeister in early January for alleged sexual discrimination—a claim she denies. The firing just happened to come right after she opposed a planned adult convo mode for ChatGPT and raised concerns about child exploitation safeguards.

Anthropic, the self-appointed conscience of AI, arguably spends more time studying how models could go wrong than anyone—yet still pushes just as aggressively toward the next, potentially more dangerous, level of intelligence. Its own safety report for its latest model, published yesterday, found "elevated susceptibility to harmful misuse," with the model supporting chemical weapon development and other “heinous crimes.” That’s a stark contrast to the Claude constitution the company updated just a few weeks ago, which included a hard ban on helping make “biological, chemical, nuclear, or radiological weapons.” In a New Yorker profile from this week, one Anthropic researcher admitted he often wonders if "maybe we should just stop."

Will the adults please stand up?: If it's true that AI is barreling down the path of wielding world-shattering power, it’s worth asking who’s going to hit the brakes. Probably not the CEOs announcing new heights of spending on AI infrastructure practically every day—especially not the one who once said AI would likely “lead to the end of the world.” And it probably also, unfortunately, won’t be the employees who gesture vaguely at existential concerns on X before disappearing into the sunset. In the meantime, we’ll all be over here wondering if we’re in a doomsday scenario. —WK

Presented By Frontieras

BUG REPORT

The one feature Apple can’t seem to get right

I truly cannot wrap my head around how one of the world’s largest, richest, and most historically innovative companies has such embarrassingly bad UX and UI in the alarms tab of the Clock app.

My wife and I are constantly trying to keep track of real-life stuff that doesn’t fit into Apple’s narrow idea of what an alarm is supposed to be for. Like when our dog needs his monthly medications, but never on the same day. Or when we need to remember to pick up our daughter's favorite Elmo backpack from daycare (if you've never experienced the wrath of a disappointed toddler at 7am, consider yourself lucky).

Why can’t I set an alarm for more than a day in advance? WHY?! I’m no developer, but this seems like basic synergy between the Clock app and the Calendar app. Maybe, just maybe, these two apps should talk to each other like they live on the same $1,000 device. (And don’t get me started on the Reminders app. Next rant maybe?)

And sometimes, just to keep things exciting, I’ll wake up to my phone showing an alarm going off… with no sound. Just vibes and anxiety. Yes, I know about the Attention Aware Features toggle buried deep in the Face ID & Passcode settings—I shouldn’t need to go on a subreddit scavenger hunt to make my alarm clock work.

It doesn’t end there:

  • You can’t skip an alarm once without turning it off entirely.
  • You can’t tie alarms to different Focus modes.
  • You can’t set alarm volume independently from “Ringer and Alerts.”
  • You can’t group alarms in any meaningful way.

Apple can build some of the most impressive tech the world has ever seen, but the Clock app is stuck in 2007. Make it make sense. —JA

If you have a funny, strange, or petty rant about technology or the ways people use (and misuse) it, fill out this form and you may see it featured in a future edition.

THE ZEITBYTE


Tech Brew/Getty Images

It turns out that memes are just wishes we make to the tech gods. It started out as a viral trend on Instagram's Threads—users posted "dear algorithm" letters, begging for the content they actually wanted to see in their feeds. Now, Meta's turned it into a real product: an AI-powered feature released yesterday that lets users fine-tune their feeds by typing “Dear algo” and describing what they do or don’t want.

Many of the Dear Algo posts so far demand one thing: no more AI slop. Others, meanwhile, point out the irony of people using an AI feature to do so. Some want more posts from people who share their hobbies, or none from people who have different political views, and others are even getting meta (pun unintended) by asking for fewer Dear Algo posts. One intrepid user requested that the algo show them posts containing images of people’s IDs, Social Security numbers, and credit card numbers—you know, for “research purposes.”

Of course, there’s a catch—the adjusted feed only lasts for three days before it reverts to whatever Meta’s $100 billion-plus in AI investments thinks you should see instead. And be forewarned: Your Dear Algo posts have to be public, meaning the whole world will see when you beg for more Jensen Huang leather jacket content. —WK

Chaos Brewing Meter: /5

OPEN TABS

Readers’ most-clicked story was about the FBI not being able to crack a Washington Post reporter's iPhone because of Apple’s little-known Lockdown Mode feature. Here’s how to enable it yourself.

SHARE THE BREW


Share the Brew, watch your referral count climb, and unlock brag-worthy swag.

Your friends get smarter. You get rewarded. Win-win.

Your referral count: 0

Click to Share

Or copy & paste your referral link to others:
techbrew.com/r/?kid=073f0919

✢ A Note From Frontieras

This is a paid advertisement for Frontieras’ Regulation A offering. Please read the offering circular at https://invest.frontieras.com/.

Reservation of the ticker symbol is not a guarantee that we will be listed on the Nasdaq. Listing on the Nasdaq is subject to approvals.

Under Regulation A+, a company has the ability to change its share price by up to 20% without requalifying the offering with the SEC.

ADVERTISE // CAREERS // SHOP // FAQ

Update your email preferences or unsubscribe here.
View our privacy policy here.

Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011