The Rabbit R1 is an example of the AI industry’s “test your products in the real world” ethos
Hello and welcome to Eye on AI. In today’s edition…Rabbit launches Teach mode for the R1; Microsoft and HarperCollins strike a deal for training data; Google gives Gemini memory; an AI pioneer cites OpenAI’s upcoming model in urging regulation; Stanford ranks countries on how vibrant their AI ecosystems are.
Rabbit, maker of the handheld orange AI-in-a-box device known as the R1, today released a new capability called Teach mode. The mode allows users to “teach” their devices how to perform specific actions by describing the process step-by-step on a computer using natural language and then syncing the lessons to their R1.
The new capability is a step toward Rabbit’s vision of an app store for model actions—and ultimately, an entirely new way of interacting with devices sans the graphical user interfaces to which we’ve become accustomed. It’s also the latest release from a company that has largely been building its product in public (and Teach mode will be no exception). Critics blasted the R1 at launch as clunky, basically useless, and more of a prototype than a finished product. In a conversation ahead of the Teach mode announcement, founder and CEO Jesse Lyu described how the company has been using the criticism and user feedback to improve the product, pushing more than 20 updates, including a second generation of its LAM (large action model) system that makes it possible to interact with any site rather than just the four services available at launch. He framed this back-and-forth as essential, arguing that the only way to build in the AI era is to put a product out in the world and iterate from there.
An app store for actions
If you watch any of the demo videos for Teach mode, you may think it looks like the most tedious way possible to do something. Take one example where an early tester teaches the device how to draft a tweet. On his computer, he instructs the program to “click the icon to open the box to compose a tweet,” “click inside the text box to start typing,” “type your text inside the box,” and so on before syncing the lesson to the R1.
No coding is required, but it’s still far more involved than just drafting a post on a smartphone or laptop as we always have. Rabbit’s idea, however, is that eventually most users won’t be doing any of this tedious work or interacting with Teach mode at all. Rather than teaching the R1 actions themselves, they will be able to go into an app store of sorts and buy actions created by others. (That’s a bit ironic seeing as Rabbit talks about doing away with apps.)
The store isn’t launching with Teach mode today, however, and Lyu said there’s no timeline for it, as the company still needs to figure out how it will be monetized (both for Rabbit and for users who create the actions). For now, users will have access to a limited number of lessons created by early Teach mode testers or be able to create their own.
“You can imagine, like you train an agent, you should be able to choose whether you want to keep it public or you keep it private. And if you choose to publish it to the public community, and then if a million others are using your license, you should get paid. You should be able to benefit from it, right?” Lyu said. “So I think this is, to us, the app store for iOS, but for agents.”
The “experimental” disclaimer
The press release announcing Teach mode states that it’s still “experimental” and that “output can be unpredictable.” This is becoming a common disclaimer with the launch of generative AI products. Anthropic, for example, said the same last month when it released Computer Use, which enables its model to use computers the way people do.
Lyu believes this is intrinsic to the nature of how AI works (models aren’t pre-programmed, so you’ll never know exactly what they’ll do) and a result of how fast the technology is developing.
“You have to kind of encounter all the edge cases and tweak on the fly and continue. That’s just the whole nature of developing with AI models,” he said. In the case of Rabbit in particular, he also pointed to how the startup doesn’t have a 10-year runway or the resources of a tech giant that would enable it to take its time. “We have to make sure that we take our shot and move fast. This is the only way that we can stay in competition.”
This isn’t to say Rabbit isn’t testing or taking care of issues before launch. The company collaborated with 20 testers to create more than 400 lessons, had them work with the company’s engineers in a dedicated Slack group, and implemented improvements and security safeguards prior to the Teach mode launch. Still, to many who have seen the “move fast and break things” mantra of the tech industry play out, Lyu’s philosophy may sound concerning. In AI safety and responsibility circles in particular, there’s growing conversation about how companies should be seeking out every edge case internally before launch. And as my colleague Jeremy Kahn noted in Tuesday’s newsletter, some believe it’d be a good idea to slow the whole train.
Who’s using the R1, anyway?
As Rabbit continues to work out its kinks, who’s using it? And for what?
Teenagers, Lyu said, are currently the driving force of the product. On the flip side, he said they have elderly users who find just pressing the button easier than navigating apps on a smartphone. Additionally, it’s finding uses in specific industries: doctors who use it for translation when speaking with patients, and truck drivers who can’t use a smartphone while driving but can hit one button on the R1, similar to how they use the radio in their truck.
While it’s still easy to see the R1 as cumbersome and unnecessary, these examples of how it’s being used show it may have some potential. I wouldn’t count out the R1 (well, a future version of it) just yet.
And with that, here’s more AI news.
Sage Lazzaro
sagelazzaro.com
Generative AI’s role in the semiconductor industry
AI is set to transform the semiconductor industry by improving design, manufacturing, and talent management. Learn about the strategic priorities that can guide companies towards successful AI integration. Read the full report
Microsoft signs a deal with HarperCollins to train an upcoming model on the publisher’s nonfiction books. Little is known about the unannounced model, but HarperCollins says the three-year deal includes “select nonfiction backlist titles” and that authors will have to opt into the training program. Author Daniel Kibblesmith posted screenshots of an email showing he was offered $2,500 per book to use his works. “I’d probably do it for a billion dollars. I’d do it for an amount of money that wouldn’t require me to work anymore, since that’s the end goal of this technology,” he posted on Bluesky. You can read more from The Verge.
Google rolls out a memory feature for paying Gemini users. Just like the memory feature OpenAI launched for ChatGPT earlier this year, this allows users to ask Gemini Advanced to remember facts about their preferences and interests so the chatbot can give more tailored responses. Users can view when the information is used in responses and edit or delete any of the information Gemini saves about them. While the feature personalizes the experience and is critical to achieving the vision of a true personal assistant chatbot, researchers have shown how this type of capability can be exploited by hackers to steal a user’s data, as TechCrunch notes.
AI pioneer Yoshua Bengio calls for urgent AI regulations, citing OpenAI’s o1 model. In an opinion piece in the Financial Times, the AI researcher credited (along with Geoffrey Hinton and Yann LeCun) for developing the breakthroughs currently driving AI argues that OpenAI’s upcoming model may put AI on the brink of bridging the gap to human-level reasoning. Noting how the principles behind reasoning have been at the heart of AI research since the 20th century, he says the advancement could enable important applications. At the same time, he says there are major risks to consider, including the increased ability to deceive humans demonstrated by o1 and how it could assist in developing biological weapons. “Advances in reasoning abilities make it all the more urgent to regulate AI models in order to protect the public,” he writes.
Nov. 19-22: Microsoft Ignite, Chicago
Nov. 21-22: Global AI Safety Summit, San Francisco
Dec. 2-6: AWS re:Invent, Las Vegas
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)
Jan. 7-10: CES, Las Vegas
#1
That’s where the U.S. ranks on a new index that evaluates countries for how vibrant their AI ecosystems are, released today by the Stanford Institute for Human-Centered AI (HAI). Thirty-six countries were ranked on eight pillars: research and development; responsible AI; economy; education; diversity; policy and governance; public opinion; and infrastructure. Indicators that informed the rankings include the number of AI journal publications, total AI private investment, AI legislation passed, and foundation model datasets.
In the rankings, China follows the U.S. for the number two spot. The remainder of the top 10—in order—consists of the U.K., India, the United Arab Emirates, France, South Korea, Germany, Japan, and Singapore. HAI notes that the U.S. leads the ranking by a significant margin, consistently producing the most notable models, attracting the most private investment, and excelling significantly in the R&D and AI economy pillars. China also scores well in R&D and economy and leads in infrastructure. The U.K. stands out for its strength in education and policy/governance. You can read the paper or explore the interactive tool.
Thanks for reading. If you liked this email, pay it forward. Share it with someone you know. Did someone share this with you? Sign up here. For previous editions, click here. To view all of Fortune's newsletters on the latest in business, go here.