Almost Timely News: 🗞️ Improving AI With Command Line Tools (2026-04-12) :: View in Browser

The Big Plugs

So many new things! 3️⃣ A free 25 minute webinar Katie and I did on GEO - even though it says the date is past, it still works and takes you to the recording.

Content Authenticity Statement

100% of this week’s newsletter content was originated by me, the human. You’ll see me working with Claude Code in the video version. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.

Watch This Newsletter On YouTube 📺

Click here for the video 📺 version of this newsletter on YouTube »

Click here for an MP3 audio 🎧 only version »

What’s On My Mind: Improving AI With Command Line Tools

Last week I talked about the command line interface and how useful it is with today’s agentic AI tools, and I gave three examples. Many of you wrote back to say “we’d like to see more examples”. So this issue is your wishes fulfilled.

Part 1: Why the Command Line Matters

For years now, we’ve talked about how things like MCPs - model context protocol servers - were the way to connect AI to external systems. That’s true to a great degree, but it turns out MCP implementations are horrendously inefficient. When you use an MCP to connect to a database or a web service, you’re telling your favorite AI tool how to talk to it and interact with it - pulling and pushing data. If this sounds familiar, it should. MCPs are to AI what APIs - application programming interfaces - are to regular software.

The difference is that in many, many measured tests, AI tools and agents use MCPs incredibly inefficiently. They do a lot of reinventing the wheel every time they interact with an MCP. For casual use where you’re typing into a tool like Claude or ChatGPT, that doesn’t matter a ton.
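To make the MCP-versus-CLI contrast concrete, here’s a sketch of the same request both ways. This is illustrative only: the database (`shop.db`), the tool name (`query_database`), and the JSON payload are made up, and a real MCP session adds initialization, capability negotiation, and schema discovery on top of the request shown. The CLI half uses sqlite3, which ships with macOS and most Linux distributions.

```shell
# MCP-style: the agent assembles a JSON-RPC "tools/call" request and sends
# it to a server process (heavily simplified; shown here with cat just to
# display the payload an agent would have to construct).
cat <<'EOF'
{"jsonrpc": "2.0", "id": 1, "method": "tools/call",
 "params": {"name": "query_database",
            "arguments": {"sql": "SELECT COUNT(*) FROM orders;"}}}
EOF

# CLI equivalent: one deterministic command the agent can type directly.
# (Build a throwaway database first so the command actually runs.)
sqlite3 shop.db "CREATE TABLE orders (id INTEGER); INSERT INTO orders VALUES (1),(2),(3);"
sqlite3 shop.db "SELECT COUNT(*) FROM orders;"
```

The second form is shorter for the agent to emit, has no protocol framing to get wrong, and returns exactly the answer with no wrapper to parse.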
But once you start using AI agents - tools like Claude Code, Claude Cowork, OpenWork, OpenCode, Qwen Code, OpenClaw, etc. - efficiency becomes very important. The reason is that AI agents produce far more inputs and outputs than humans do. To give you a sense of scale, if you spend half an hour chatting with a tool like ChatGPT, you’ll probably generate somewhere in the range of 15-20 thousand words, about the length of a short business book. An agent like Google Antigravity or Claude Cowork, in that same half hour, will generate 15-20 MILLION words, three orders of magnitude greater.

Why? Partly because AI tools can read far faster than any human can, and partly because agent tools aren’t sequential. When you use ChatGPT, it’s you and one conversation thread. When you’re using something like Claude Cowork, behind the scenes there can be 50 different agents working in 50 different conversations at the same time. My record was having 120 agents in flight simultaneously. (It was writing and editing a book; each chapter had 4 agents working together - a writer, an editor, a reviewer, and a proofer.)

If an MCP is involved, AI agents are reinventing the wheel every time they interact with it. That makes things very slow, introduces the potential for lots of randomness and lots of mistakes, and when you’re using something like an API where you pay by the word, agents using MCPs can run up massive bills very quickly. In the early days of OpenClaw, early adopters weren’t watching the meter, and more than a few got ten-thousand-dollar bills from companies like Anthropic. Oops.

The alternative to the MCP is the command line interface, or CLI. If you’re old like me, you used to use these when you interacted with computers. Back in the days of the Apple IIe and VisiCalc, AppleWorks, Lotus 1-2-3, and MS-DOS, there was no graphical user interface (GUI). There was no mouse.
There was you and the text command line, and every program you interacted with had text-based commands. Perhaps you’ve heard of Zork or similar text adventure games, where you’d type things like direction names or instruct your character to open a treasure chest, and the game would tell you in words what was happening.
These text-based apps were all the rage until the Macintosh arrived in 1984 (well, technically Xerox PARC’s Alto had the mouse and GUI first, but Apple commercialized them) and we got the mouse and GUI. But the command line interface never went away. It’s been present in every computer since the dawn of computing. On a Windows computer, it’s the Command Prompt, CMD. On Mac and Linux, it’s the Terminal.

And operating systems themselves ship with many, many little command line applications, simple pieces of software that do one thing, but do it well. This was the original architecture of computer systems like UNIX, because memory was so scarce that it was better to have a tool that did one thing very well than a tool that tried to do many things and did them all poorly. For example, on Mac and Linux systems, there’s an app called wc. wc counts things - lines, words, and bytes. Give it a text document like this:
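A minimal sketch of what that looks like in practice (the filename and its contents are made up for the example):

```shell
# Create a small text file to count (hypothetical filename)
printf 'The quick brown fox jumps over the lazy dog\n' > sample.txt

# Plain wc reports lines, words, and bytes; the -w flag limits it
# to just the word count
wc -w sample.txt
# prints: 9 sample.txt
```

That one line of output is the tool’s entire job done.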
And it will tell you how many words are in that document. That’s all it does. That’s all it needs to do.

Hopefully you see where this is going. Who loves to type and is great at typing at ridiculous speeds? AI. Instead of a slow, cumbersome, error-prone MCP, a command line application (also known as a CLI) provides a deterministic (non-random) way to access the same services MCPs expose. But here’s the twist: in testing, many folks have found that AI agents burn through about two-thirds less usage with a CLI than with an MCP. If you’re on Anthropic’s Claude subscription plans, you’ve no doubt hit usage limits. If you’re using APIs, you’ve no doubt gotten bills from AI providers. What if you could cut those bills down significantly? With CLIs instead of MCPs, you can do exactly that.

Part 2: Switching Over

There’s a gotcha with CLIs: you have to tell AI agents how to use them. Depending on the system you’re working with, your AI agent may need to install a CLI itself (this is often the case in container-based systems like Claude Cowork, which run in a sandbox on your computer, isolated from the rest of your machine), or your AI agent may be able to access it directly (Claude Code, OpenClaw), which is riskier because the agent is working directly on your computer. That’s the first hurdle to overcome. The easiest way to overcome it is to ask your AI tool of choice how it can access CLIs and what it needs to do. I should point out here that NONE of the web-based tools (like ChatGPT, for example) can use CLIs at all, so MCPs ar