From Patch Boards to Punched Cards

You see those old photographs of programmers hunched over patch boards, physically rewiring circuits with cables and plugs? That was programming in the 1940s. If you wanted to compute a trajectory or process some data, you literally rewired the machine. The hardware was the software.

Then we borrowed an idea from the business world. Companies had been using punched cards for decades to sort and tabulate data: payrolls, census records, inventory. Herman Hollerith figured out in the 1890s that you could encode information as patterns of holes, and machines could read those patterns automatically. So we thought: why not use this for code? Instead of manually setting hundreds of switches and plugging cables, we could punch our program into cards and feed them to the machine. One gesture, dropping a deck of cards into the reader, and boom, you're at a known state. The machine knows exactly what you want it to do.

This was brilliant reuse of an existing technology, not some grand invention. We already had the infrastructure: card punches, card readers, the whole ecosystem for handling and storing these cardboard rectangles. We just pointed it at a different problem.

But here's what we didn't fully grasp at the time: we'd created a fundamental separation. The tools we used to create programs lived in a different world from the programs themselves. You wrote code on paper, punched it onto cards with a mechanical keypunch machine, then carried boxes of cards to the computer room. If there was an error, you walked back to the keypunch, made new cards, walked back to the computer. The feedback loop stretched across physical space and human walking speed.

The Reinforcing Loop

Some computing pioneers realized that a reinforcing loop was there for the taking. Run the tools for instructing the computer on the computer itself. Then the better the computer got, the better the tools ran, which made it possible to further improve the computer. And the tools. And around and around.

Smalltalk was an early exemplar of this loop: the environment was built in itself, so every improvement to the environment made it easier to improve the environment further. The same machine that ran your programs ran your editor, your debugger, your version control. When someone improved the garbage collector, everyone's tools got snappier. When someone built a better code browser, everyone could navigate their programs more fluently. The cycle started feeding on itself.

The IDE was the natural conclusion of putting programming tools on the same machine as the programs. If your editor, compiler, and debugger all live together, why not make them talk to each other? Click on an error message, jump straight to the problematic line. Set a breakpoint in your editor, watch variables change in your debugger. Rename a method, and all the references update automatically.

IDEs gave us tight integration between tools that had previously been separate. You could stay in flow, moving seamlessly between writing code and understanding what it did. The feedback loops got even tighter. Change something, see the effect immediately. The machine wasn't just helping you write programs; it was helping you think about programs.

The Peak of Mechanical Efficiency

And for decades, this was progress (snarky me is oh so tempted to tell you how much better Smalltalk was 25 years ago, but I won't). Each generation of IDE got more sophisticated: better auto-completion, smarter refactoring, deeper integration with version control and testing frameworks.
We kept adding features, making the tools more powerful, more comprehensive. But somewhere along the way, we started to notice the limits. The IDE knows about your code, but it doesn't really understand what you're trying to accomplish. It can rename variables, but it can't suggest better abstractions. It can find syntax errors, but it can't tell you when your logic doesn't match your intentions.

The IDE represents the peak of what you can achieve by making tools faster and more integrated. But speed and integration alone don't address the fundamental challenge: translating human intention into working software. We've made the mechanical parts of programming incredibly efficient, but we're still doing most of the thinking ourselves.

Coding with Genies

The genie changes everything about how we spend our time programming. IDEs are built around the idea that the workflow is "intention for feature or structure → find the place(s) to change → spend a long time making the change (including getting feedback from tests)". The IDE optimizes for that middle step: helping you navigate code, providing auto-completion, catching syntax errors as you type. It assumes you'll be doing lots of careful, incremental typing.

But with the genie, the flow becomes "intention for feature or structure → genie makes changes → review changes". The genie is prone to making mistakes, so the critical skill shifts from making changes carefully to reviewing changes effectively. Suddenly we're spending most of our time on a task the IDE barely acknowledges exists.

This new distribution of time means we need new tools. The old workflow had us editing for hours and reviewing for minutes. The new workflow has us reviewing for hours and editing for minutes.

We've been improvising. First came the separate chat window with copy-paste, a stopgap that felt clunky but worked. Then chat embedded in the IDE seemed like progress, integrating the genie into our familiar environment. But here's the thing: the IDE isn't optimized for the "review changes" task. It's still showing you syntax highlighting and auto-completion suggestions when what you really need are better diff views, easier ways to understand what changed and why, and tools for quickly testing whether the changes actually work.

Notice how command-line genie interfaces have been gaining traction? That's not nostalgia for the teletype. It's recognition that when your primary task is reviewing generated code, the IDE's elaborate features become more hindrance than help. The feedback the IDE gives us is so useless for this new workflow that we can dispense with it without losing much.

We're in another transition moment. The tools that served us well in the IDE era aren't the tools we need in the genie era.

A New Architecture

Ona represents an important step beyond the status quo. Here, the genie becomes the primary entry point: you start with intention, not with files. You can have several genies active at once, each working on a different aspect of your system. The IDE is subordinate to the genie, but right there in the browser when you want to make manual changes.

Most exciting to me is that there's now a natural space for the next generation of "review the genie's changes" tools. Instead of trying to retrofit review capabilities into an IDE built for typing, we can design tools from scratch around understanding and validating generated code.
Tools that help you quickly grasp what the genie did, why it made those choices, and whether the result matches your intention.

The reinforcing loop is starting to spin again. Better review tools will help us work more effectively with genies. More effective genie collaboration will justify building even better review tools. And around and around.