The iPhone maker is reportedly developing a screenless AI gadget that users can affix to their shirt or jacket.
Thursday, January 22, 2026
Apple needs a hit. Is a wearable AI ‘pin’ the answer?


Good morning. Let’s start with an inspiring quote today:

“I think eventually, over the next 10 to 20 years, we could get to finding a solution to all disease.”

Those are the words of Demis Hassabis, the head of Google DeepMind, whose views on AI have made him one of the key luminaries at the World Economic Forum in Davos this week.

As my colleague Allie Garfinkle writes in the latest print issue of Fortune magazine, Hassabis has a fascinating side gig as the CEO of Isomorphic Labs, which is trying to revolutionize the drug discovery game. Whereas a typical biotech startup might develop one or two drugs over its entire corporate life, Hassabis is building an AI-driven system that he hopes will yield dozens of new drugs every year—and, as he told Allie, eventually “solve” disease. Read the full story here.

More news below.

Alexei Oreskovic
alexei.oreskovic@fortune.com

Want to send thoughts or suggestions to Fortune Tech? Drop a line here.

Apple is developing a wearable AI ‘pin’ device



The race to create the ideal gadget for the AI era is heating up, with Apple reportedly planning a new screenless ‘pin’ device that users can affix to a shirt, according to The Information. The device is about the size of an AirTag and will be equipped with multiple cameras, a microphone, and a speaker, the report says; Apple could release it as soon as next year.

The iPhone maker has no time to lose. OpenAI, which hired former Apple design guru Jony Ive, is planning to release an AI gadget this year that may have a similar form factor. And Meta is already seeing success with its AI-equipped Ray-Ban smart glasses. 

The nascent AI pin, or badge, category of devices has already produced one high-profile dud: Humane, a startup founded by ex-Apple staffers, released a bug-ridden device in 2024. Apple is stuck in its own slump, with its $3,500 Vision Pro headset failing to catch on with consumers and its efforts to build an in-house LLM so lackluster that the company has been forced to turn to Google Gemini. The Google partnership will help Apple release a new, more chatbot-like version of Siri, dubbed Campos, later this year, according to a separate report in Bloomberg on Thursday. —AO

Anthropic rewrites Claude’s rules—and mulls AI ‘consciousness’

Can an AI model exhibit anything close to what a human would consider consciousness? Based on changes it announced this week, Anthropic seems to think that possibility is getting closer.

Anthropic is overhauling the rules for its Claude AI, moving beyond simply telling the bot what to do to teaching it why it should behave a certain way. The company argues that to be “good actors,” AI models like Claude need to understand why they should act ethically. Anthropic hopes this continued focus on AI safety will strengthen Claude’s appeal to enterprise customers, for whom the bot has become a popular tool for automating coding and research tasks.

But Anthropic has also set itself apart from rivals like OpenAI and DeepMind by publicly acknowledging uncertainty over whether Claude could have some form of consciousness or “moral status.” Anthropic, unlike its rivals, already has an internal model welfare team focused on these questions. In the new safety documents published Tuesday, the company says it cares about Claude’s well-being—not just for Claude’s sake, but because the AI’s judgment, and the safety of its users, may depend on it. —Beatrice Nolan