Here's this week's free edition of Platformer: the first episode of our new mini-series on what's really going on with AI and jobs. Our first guest is Box CEO Aaron Levie, and he made the most compelling case I've heard yet for why AI likely won't take your job any time soon. We'll soon post an audio version of this column: Just search for Platformer wherever you get your podcasts, including Spotify and Apple. You can also watch it on our new YouTube channel. Want to support more independent reporting like this? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent piece about the potential end of the Meta Oversight Board. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice. You'll also get access to Platformer+: a custom podcast feed in which you can get every column read to you in my voice. Sound good?
This edition of Platformer is about AI. My fiancé works at Anthropic. See my full ethics disclosure here.

One obvious reason for the public's rapid turn against AI is the fear that it will someday take their jobs. It's a fear the AI industry has encouraged them to have: tech CEOs issue regular warnings about AI-related job loss — and it's already starting to show up in Silicon Valley. In March alone, tech companies announced nearly 46,000 layoffs — the worst single-month total in more than a year — with a growing number of executives citing AI as a factor in their thinning headcounts. Anthropic's Economic Index shows the share of work-related AI conversations climbing in nearly every white-collar profession. And a steady drumbeat of research suggests that entry-level work — the rung of the ladder most exposed to LLMs — is showing the earliest signs of disruption. In one sign of how seriously the tech industry is taking this, Google DeepMind workers in the United Kingdom just voted to unionize.

At the same time, AI has been notoriously difficult to find in the productivity statistics. Amazon says it will hire about the same number of software engineering interns in 2026 as it has in recent years. Openings for software engineering roles are at their highest level in three years.

So what gives? Are jobs disappearing, or just transforming? Are workers becoming less essential to their bosses, or more? Are we witnessing the beginning of a massive disruption, or just another hype cycle?

These are the questions we're setting out to answer in a new mini-series on Platformer. Over the next seven weeks, I'll be talking with CEOs, operators, and academics watching this transition up close. In each episode, we'll consider the AI and jobs story in the kind of depth that often isn't possible in a single news story.
And we'll also bring data: my colleague Ella Markianos will join me at the top of each podcast to review the latest surveys, research, and news stories that speak to the intersection of tech and labor.

For our first episode, I wanted to talk to someone I've known for about as long as I've known anyone in Silicon Valley: Aaron Levie, the CEO of Box. Aaron was the first person who explained software-as-a-service to me when I moved here in 2010, drawing diagrams on a whiteboard in the Box office with the kind of patience usually seen in a teacher showing kindergartners how to spell. Sixteen years later, he remains an enthusiastic explainer of the SaaS world.

It helps that Box has a good story to tell — the company's stock has held up materially better than most of its SaaS peers over the past year, even as a chorus of investors, founders, and posters have warned that traditional enterprise software is about to be eaten by AI agents.

As you'll hear, Levie is not in that camp. In our conversation, he makes a careful — and at times provocative — case for why he thinks both the "SaaSpocalypse" and the broader narrative of mass AI-driven job loss are wrong. He argues that agents will multiply the number of workers using business software rather than eliminate them; that the "last mile" of human work is far more durable than people assume; and that the engineer of the future is more likely to work at a pharma company than at Meta.

"If you or I go and vibe-code something, we think we've replaced the engineer, replaced the accountant, replaced the lawyer," Levie told me. "But then you actually look — that was the first 80% of the job. The extra 20%, it turns out, is all the value creation of that profession. All the expertise and domain knowledge is in that last 20%, not the text that got generated."

Highlights of our conversation are below, edited for clarity and length.
We also hope you'll listen to the entire conversation wherever you get your podcasts — just search for Platformer — or watch it on YouTube at youtube.com/caseynewton. And let us know what you think — we're new to podcast production, and welcome your feedback at casey@platformer.news.

Casey Newton: Aaron Levie, welcome to Platformer.

Levie: Hey, good to be here.

Newton: Aaron, you and I first sat down in 2010 —

Levie: We were so young.

Newton: We were so young. Back then you were sort of early gray, but now you're just like — normal gray. I think running a public company will probably do that to you.

Levie: The problem is, I've been like this for 13 of those years. It would be one thing if this only happened in the past six months, but it's actually been like this since I was 24.

Newton: Maybe there are more reasons to be gray today, or maybe not — we'll get into it. That first time I met you, I have this core memory — because I had truly been in Silicon Valley for what felt like weeks when I came down to the Box office.

Levie: Didn't you come from, like, the Arizona Star Tribune or something?

Newton: The Arizona Republic. I'd been covering local government. And then one day I said, "What's going on with computers? That seems interesting." And now here I am. But I needed people to explain it to me, and that's where you came in. As I recall, you explained the software-as-a-service business model to me on a whiteboard. So my first question: if you were explaining your business to a reporter like that today, how much of that whiteboard would look the same and how much would be totally different?

Levie: If my recollection serves, a lot of it was trying to compare the on-prem days to cloud, and why cloud was such a big deal. My predictive capabilities were pretty locked in, maybe short of AGI.
The whole idea was that software was going to move from your data centers to the internet, and in the process, the real power is that it becomes available to way more companies — businesses of all sizes, lines of business that never could have used software before, end users. This was the phase of consumerization of IT. So that played out.

Now we're in the next frontier of what software is going to look like. A lot of the core architectural components hold. If you're running a global supply chain at a Fortune 500 company, you want deterministic systems and software that power your ERP. If you're at a large B2B company like Salesforce, you want a clear set of business logic around how your CRM works, and how your internal workflows around sales automation work. If you're managing documents for a government agency or a pharma company or a law firm or a large bank, you want to make sure you can secure that data, protect it, govern it, and ensure it's in a safe place and available to the right people. All of that is staying the same.

What has completely changed is the interaction patterns on those systems — where the interaction is coming from, and what you can now do with all that data. The big idea is that if today maybe 90% of activity on this software is humans interacting with the interfaces, probably three years from now it'll be 90/10 in the other direction. Agents will be interacting with these systems, talking to the data, pulling up data from these tools. And maybe 10% will be you going and browsing and looking through the software yourself.

The interesting thing — and this is going to be the open debate for the industry — is in that 90/10, did the human side go down by 90%, or did we just have a 10x increase, where agents are now leveraging these tools? My argument is more the latter: agents are this explosion of new workers all using these systems, which makes the technologies even more valuable.
You have all these new workers on these digital platforms that need data, that need to be secure, that aren't leaking information in the wrong way. So you still need those guardrails — but now you've got a massive multiplier of what people can do with their data, because you have agents that can run in parallel.

Newton: Right, that makes sense to me. There's this really interesting challenge —

Levie: By the way — podcast over?

Newton: Yeah. That's all the time we have. I really want to thank you for joining us. I think we all learned a lot. No — let's throw in a few more questions for the super fans, because you just introduced what seems like a possibly profound change in the business model for what you all do. SaaS companies have gotten used to selling by the seat: you have 10 employees, you want 10 of them to be able to use Box, you pay a monthly fee. And it seems like that business model is under a lot of pressure in a world where maybe I don't have 10 people in those jobs anymore. What I need is a business outcome. So how are you navigating that? Do you think the seat-based business model survives in SaaS?

Levie: You posited the scenario that's most open for debate, which is: did the people go away? In the math I laid out, the people stayed the same number, but the agents multiplied on top of the platforms. There will be some software categories where the literal seats are not as relevant, because you don't have as many people doing the work. But I would actually argue that for a large portion of software categories, that won't be the case. You'll have the same number of people or more, but you'll also have 10 times as many agents as people. So it's a multiplicative effect of more people — or the same number of people, or maybe a minor reduction — and then vastly more agents. The part that's not being priced in by the market is: is that scenario playing out?
If I look at our software consumption internally at Box, there aren't many cases I can make for reducing the number of seats on our software products. But there are a lot of cases for a lot more agentic use cases on that software.

To take an example that's not Box: if I look at Salesforce, we're actually going to have more sales reps at the end of this year than we had at the start. That's more seats within the Salesforce universe. At the same time, I can imagine 10 to 100 times more agent use cases on the Salesforce platform than I could have two years ago. Those agents might not be roaming around the interface of Salesforce — they'll show up inside Claude Cowork or Codex or ChatGPT. The agent will be interacting via a different interface, but the underlying seat that says "Aaron is a user in this platform, with this level of access to this type of data" doesn't necessarily go away.

We're already seeing this with our customers: you want a seat for the person because you want some kind of stateful representation of what data that person has and what their entitlements are. But then an agent might do an unbounded level of consumption on the software — where I, as a person, can only click so many things per day, an agent can do that at 100x the scale. So the seat gives me the ability to use my information across these other agents, but at some scale, there's so much data being used that there's a consumption model on top. This is why I think you're going to have a stacking business model in software: humans still have seats, but agents will be a consumption pattern on top of that.

Newton: As a CEO, I imagine you're looking at all the SaaS you guys buy to run your business, and I imagine you might be happy if you didn't have to pay for all those seats and could just have agents do the work. So when you look at your own spending on SaaS, your feeling is truly, "I'm happy to keep spending for all the seats"?
Levie: There's a difference between happy and practical. I'd always like our IT spend to be less, but I'm extremely practical about how technology works. The bear case for software is a confusing amalgamation of multiple issues people have — it's a Rorschach test of "what do you hate about software?" Some people say, "What we're going to do is vibe-code CRM systems." Others say, "We're just not going to have employees; it'll just be agents." Others say, "We just don't need all the features of these SaaS systems; agents will do those." Some I'm sympathetic to, some I'm not.

The one I'm extremely not sympathetic to: we have no projects internally that I've approved to vibe-code a replacement for an existing SaaS service. If I look at the stack of our ERP system, HR system, CRM system, and document management system, it would do us no good to spend our time and IT resources trying to replicate functionality that's already serving its purpose — especially at a moment when I'm about to get 10 times the value from those systems with agents using that data. If we had to both transition to a homegrown system and figure out the next set of use cases, we'd just halt our ability to innovate.

And a minor aside: if you did a word cloud of the past two to three weeks in AI, one of the biggest words would be cybersecurity. Not the Mythos part — the "we leaked customer data, the credentials, the secrets of our system got leaked, we downloaded a package that was exploited" part. Think about if the entire economy was trying to rebuild its own version of Salesforce or Workday or an ERP system, and any one of those events happened. Now the entire economy has to halt and do upgrades, or handle the maintenance and ongoing improvement of these technologies. That's just not very logical economically.

The part I am sympathetic to: there's some software where, as you use agents more, some of the value proposition moves into the agentic layer rather than the software layer.
In those cases, you'd compress the value proposition of that software, and at the next renewal you might not spend as much. But conversely, for every scenario where that happens, there's another scenario where agents add more value on the system you're using. So the net effect is that the vendor actually has more leverage in the future. You might save on one part of the stack but end up re-spending it on a different part because of all the upside.

Newton: What we're really getting at is the skepticism the enterprise software market is facing right now. The reason I wanted to talk to you first is that Box has been facing this kind of skepticism in various ways its whole existence. You had to survive a very early pivot from being a consumer company to an enterprise-focused one. You had to convince people the cloud would be a safe and profitable place to be. And you faced a lot of skepticism about whether Box might just be a feature rather than a company. Now AI has come along and introduced a fresh wave. Maybe the most accelerationist version of the argument is that every company is now a feature, and the only thing that matters is the frontier labs. So to what extent is this SaaSpocalypse story just the latest incarnation of a story that's never been true, and to what extent is this AI moment truly something different?

Levie: The market is somewhat parsing the different outcomes — not perfectly, but there's some discerning behavior. If you took Wall Street as one metric and looked at our stock, it's held up better than most. One of the reasons is that the thing not really under debate is that your most agentic, vibe-coded enterprise future still has to store the data somewhere. You still have to secure and govern the important information of whatever the workflow is. You can vibe-code the creation of the contract, but the contract still has to get stored somewhere and governed somewhere; it still has to have a retention policy and access controls.
The part I'm excited by is that all of this becomes meaningfully more important in a world of agents. When I think about the use of data in the enterprise, what all these agents really want to do is access data. They want to read data, write data, and know context about your organization — your best practices, your policies, your customer relationships, your research. All of that sits inside your enterprise data, and most of it sits inside unstructured data in the form of business content. So we're firmly on the side of: bring on all the AI humanly possible, because those agents are all going to be working with enterprise content that still needs to get stored somewhere.

A customer comes to us and says, "We want to automate our entire insurance claim process." A tremendous amount of enterprise content goes into an insurance claim. When they do that automation — maybe they build the agent on Anthropic, maybe on OpenAI — that agent still needs to talk to all the data in their enterprise. So they have to upgrade their infrastructure.

There'll be winners and losers in software and SaaS, as has been true of ever