Almost Timely News: 🗞️ Cultivating an AI Mindset, Part 2 (2025-11-16) :: View in Browser

The Big Plug
🚨 Download the Unofficial LinkedIn Algorithm Guide now with completely new findings as of October 2025! Free, form fill required.

Content Authenticity Statement
95% of this week’s newsletter was generated by me, the human. You will see Gemini’s output about pizza and activation sequences in the first section. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.

Watch This Newsletter On YouTube 📺
Click here for the video 📺 version of this newsletter on YouTube »
Click here for an MP3 audio 🎧 only version »

What’s On My Mind: Cultivating an AI Mindset, Part 2

This week, we continue our series on the AI mindset, the mindset I use when I’m working with AI to get the most out of it. Last week, we started with the idea that working with AI is like making a salad: the recipe (prompt) is important, but so are good ingredients, the right tools, a knowledgeable chef, a clear menu and purpose, and so on. Let’s move on to the next topic, which is task decomposition.

Part 6 (Parts 1-5 were last week): Task Decomposition

AI models are smart, unquestionably. They score, as a rule, about 20 points higher than human PhDs in their areas of expertise on tests like GPQA Diamond. But for all their latent intelligence, they’re still prediction engines that predict the next item in a sequence. That is their fundamental nature, and it means they’re not necessarily good at taking a big-picture item and breaking it down into steps. Certainly, they’re much better at it than they used to be; almost all AI coding tools can now run a plan mode as well as an act mode, letting them effectively do a rough draft of changes before making those changes live. This is basically what reasoning models do, models like GPT-5-Thinking, Gemini 2.5 Pro, or Claude Sonnet 4.5 with Extended Thinking turned on. They do a rough draft behind the scenes first, and by doing so, they can catch and fix many of their own mistakes when they produce the second draft, which is what we see. In general, AI models do best when they can make multiple passes over a task and check their own work.

However, that still leaves a LOT for them to tackle if we give them a giant task like “build my Q1 marketing strategy”. That is a big task. That is a big ask. Asking that of your intern would be unreasonable. So how do we help them help us? By doing task decomposition.

Task decomposition really just means taking a big task and breaking it down into subtasks. There are two basic ways to do this: linear and top-down.

Linear task decomposition means figuring out how to take a big task and arrange it as a sequence of smaller tasks. For example, instead of “build my Q1 marketing strategy”, we ask an AI model to build our strategy just for January, or just for the first week of January, and then have it take each week in sequence, in much smaller chunks. This lets it think more, expand more, and reflect more, leading to better outcomes. If we give it an overall goal and have it write the small chunks in a linear fashion, it does a better job because it has more time to think and more time to review its work.
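Here’s what linear decomposition can look like if you script it instead of doing it by hand in a chat window. This is a minimal sketch, assuming the official OpenAI Python SDK; the model name, the overall goal, and the weekly prompts are placeholders for illustration, and the same pattern works with any model or chat interface.

```python
# Minimal sketch of linear task decomposition, assuming the OpenAI Python SDK.
# The model name, goal, and week list are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

overall_goal = (
    "You are helping build a Q1 marketing strategy for a B2B software company. "
    "Keep each piece consistent with what has already been planned."
)

weeks = [f"Week {n} of January" for n in range(1, 5)]
plan_so_far = []

for week in weeks:
    prompt = (
        f"{overall_goal}\n\n"
        f"Plan written so far:\n{''.join(plan_so_far) or '(nothing yet)'}\n\n"
        f"Now write the marketing plan for {week} only. "
        "Review the existing plan first, then add this week's piece."
    )
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    plan_so_far.append(f"\n--- {week} ---\n{response.choices[0].message.content}")

print("".join(plan_so_far))
```

Each call sees the plan written so far, so every week builds on the previous ones instead of being generated in isolation. That is the whole point of the linear approach: smaller pieces, more review, better output.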
The second way is top-down. Again, instead of “build my Q1 marketing strategy”, I might start by asking the model to build me the outline for my Q1 marketing strategy, or a high-level summary of it. Then, on the next pass, I’d ask it to take my Q1 outline and turn it into a monthly outline. On the third pass, I’d ask it to turn my monthly outline into a weekly outline. Each pass digs deeper, from the top down. Again, we are leveraging AI’s nature and its ability to do better quality work with multiple iterations on the same thing.

What both task decomposition strategies have in common is that they let the AI model approach a big task as a collection of smaller tasks, taking advantage of its predictive nature to predict smaller pieces. In doing so, and in making multiple passes over the same material, it improves the result. Think about it almost like carving something out of wood. If you tried to carve it in one shot, from top to bottom, that would be a super difficult task. If you carve away the gross material on the first pass and refine it with successive passes, finer tools, and more focus, you get to a better result faster.

If you want to make the most of AI, learn how to do task decomposition for the machines. You can ask them for help, certainly, but it’s a fundamental, necessary skill. The more you can break down tasks for the machines before they start working, the more successful they will be at handling all of those little tasks.

Part 7: Tooling

Lots of people try to have AI do everything. They want it to write and edit and create and polish and distribute, all from within one place or space. The worst examples are people trying to have AI do something really big with just a single prompt. This is the source of tremendous frustration with AI, and a big part of why people don’t get value out of it. They are simply asking it to do things it is not designed to do.

One of AI’s greatest superpowers is its ability to write code. When it does that, it can overcome many of its own limitations - like its inability to do any kind of math reliably.

There are two fundamental kinds of outcomes we generally want: deterministic and probabilistic. Probabilistic outcomes are those that involve some degree of random chance - when we ask for a weather forecast, we get probabilities. It’s probably going to be sunny tomorrow, or rainy, or meteors. It’s never definite. Other outcomes are deterministic, which means no randomness. 2 + 2 in a base 10 system always equals 4, no exceptions. When we do tasks like building slide decks, our brand standards require specific fonts and colors, no exceptions. Those are deterministic outcomes.

Where AI shines, if we know to ask for it, is in building its own tools. Here’s why this matters: if AI builds its own tools to do non-AI tasks, it can handle deterministic tasks by calling those tools and succeed where its own probabilistic nature would fail.

Here’s a simple example. Say you want to calculate something, maybe do a statistical analysis of some kind. You know ChatGPT is just going to hallucinate an answer in all likelihood. How do you get a correct, deterministic answer? Instead of asking it to think very carefully or using any number of prompting tricks, simply tell it to build the calculation in the canvas, its built-in web page designer and document area. You could give it an instruction like “construct this Bollinger band analysis in the canvas using HTML, CSS, Tailwind, JavaScript, and CDNJS. Choose the appropriate statistical library from CDNJS.”

For those who don’t know, CDNJS is Cloudflare’s free, publicly accessible content delivery network for JavaScript libraries. You can reference it from any of your code, and it hosts thousands of pre-built packages that AI tools know how to work with.
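The prompt above has ChatGPT build that tool in its canvas with JavaScript and a statistics library from CDNJS. To make the “deterministic” part concrete, here is the same Bollinger band arithmetic sketched in plain Python with made-up prices; the settings and data are illustrative, but given the same inputs, a tool like this returns the same answer every time, which is exactly what the model’s probabilistic guessing cannot guarantee.

```python
# Sketch of the deterministic math behind a Bollinger band analysis.
# Prices are made up for illustration; the common settings are a 20-period
# simple moving average with bands at +/- 2 standard deviations.
from statistics import mean, pstdev

def bollinger_bands(prices, period=20, num_std=2.0):
    """Return (middle, upper, lower) band values for each complete window."""
    bands = []
    for i in range(period, len(prices) + 1):
        window = prices[i - period:i]
        middle = mean(window)
        spread = num_std * pstdev(window)
        bands.append((middle, middle + spread, middle - spread))
    return bands

if __name__ == "__main__":
    prices = [100 + (i % 7) * 0.8 for i in range(40)]  # stand-in price series
    for middle, upper, lower in bollinger_bands(prices)[-3:]:
        print(f"mid={middle:.2f}  upper={upper:.2f}  lower={lower:.2f}")
```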
What that does is lift the burden of trying to do math off the AI model. Instead, we’re telling it to functionally build its own calculator, its own tooling. It will do this marvelously well and deliver correct, deterministic answers rather than guessing or approximating, which is what the model natively tries to do. That is its nature as a probability engine.

If you’re vibe coding, you can kick this up several notches. Instead of having AI try to test and run its own code - which never ends well - have it leverage built-in testing tools or build its own testing tools in code that it can then run.

That capability - to build its own tools - is how experienced AI practitioners get AI to do things in repeatable, reliable ways. It’s a core part of the AI mindset.

Part 8: APIs

Speaking of repeatable and reliable, let’s talk about data for a moment. As you know from the basics (and from the chapter in Almost Timeless), the more data you bring to AI, the better it tends to perform. Asking AI to do tasks without data is like trying to cook dinner with no ingredients. No amount of clever recipes makes up for an empty pantry.

Here’s the good news: you have an incredible amount of data available to you. You have literal libraries full of data, just waiting for you to come get it. But a lot of it is trapped inside systems that don’t lend themselves well to manual export. That’s where APIs, or application programming interfaces, come in. These are connectors, spigots that you can attach a data hose to and pipe data from where it lives into your AI.

There’s an entire cottage industry in the AI world that sells access to pre-built spigots, often for a hefty monthly fee. Well, if you’ve thought through the tooling section above, it’s probably occurred to you that you can have AI build those spigots for you.
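A minimal sketch of what such a spigot looks like in code, assuming a hypothetical REST endpoint and the OpenAI Python SDK; the URL, authentication, parameters, and model name are all placeholders, since every system’s API is different, but the shape is always the same: pull the data through the spigot, then hand it to the model as context.

```python
# Sketch of an API "spigot": pull data out of a system, then feed it to AI.
# The endpoint, auth token, parameters, and model name are hypothetical
# placeholders; swap in whatever system actually holds your data.
import os
import requests
from openai import OpenAI

API_URL = "https://example.com/api/v1/web-analytics"  # placeholder endpoint

def fetch_records():
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['DATA_API_TOKEN']}"},
        params={"start": "2025-10-01", "end": "2025-10-31"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def summarize(records):
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Here is a month of analytics data pulled from our system:\n"
        f"{records}\n\n"
        "Identify the three most important trends and what we should do about them."
    )
    result = client.chat.completions.create(
        model="gpt-5",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return result.choices[0].message.content

if __name__ == "__main__":
    print(summarize(fetch_records()))
```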