<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Outcode Thinking]]></title><description><![CDATA[The developer playbook is broken. Tutorials, roadmaps, and syntax drills won't prepare you for what's coming. Outcode Thinking is a weekly newsletter that teaches developers how to think strategically, build with AI, and navigate a career that's changing ]]></description><link>https://www.outcodethinking.com</link><image><url>https://substackcdn.com/image/fetch/$s_!YnpI!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00eb01fb-0263-453d-a970-ead3c3b32940_256x256.png</url><title>Outcode Thinking</title><link>https://www.outcodethinking.com</link></image><generator>Substack</generator><lastBuildDate>Fri, 15 May 2026 10:15:39 GMT</lastBuildDate><atom:link href="https://www.outcodethinking.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Thiago Valentim]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[outcodethinking@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[outcodethinking@substack.com]]></itunes:email><itunes:name><![CDATA[Thiago Valentim]]></itunes:name></itunes:owner><itunes:author><![CDATA[Thiago Valentim]]></itunes:author><googleplay:owner><![CDATA[outcodethinking@substack.com]]></googleplay:owner><googleplay:email><![CDATA[outcodethinking@substack.com]]></googleplay:email><googleplay:author><![CDATA[Thiago Valentim]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The output isn't the answer. 
It's the diagnosis]]></title><description><![CDATA[Every time AI gets it wrong, it's showing you exactly where your thinking is incomplete. Most developers fix the prompt and miss the lesson.]]></description><link>https://www.outcodethinking.com/p/the-output-isnt-the-answer-its-the</link><guid isPermaLink="false">https://www.outcodethinking.com/p/the-output-isnt-the-answer-its-the</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Tue, 21 Apr 2026 11:54:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/268712e0-2c9d-4cd8-8dd5-478e396fca76_1200x630.png" length="0" type="image/png"/><content:encoded><![CDATA[<blockquote><p>&#129302; Building with AI &#183; For developers rethinking how they build</p><p><strong>This week&#8217;s challenge:</strong> take a recent AI output that disappointed you and refine the thinking behind it instead of the prompt.</p></blockquote><p>My Slack was a context-switching tax I couldn&#8217;t stop paying. Direct messages from people waiting on answers. Public channels I should have been following. Threads I&#8217;d dropped into and lost the plot of. Every time I opened the app to reply to one thing, I had to reconstruct what that thing was about, who was involved, what had already been said. By the time I had the context back, half an hour was gone and the reply was two sentences. The cost was real: decisions delayed, things slipping through, the slow weight of knowing I wasn&#8217;t keeping up.</p><p>So I spent one week building a tool to fix it. A context retrieval agent that would scan my channels and surface what mattered. The spec was clean: related messages, link summaries, timeline. No synthesis. Synthesis was a separate concern, deliberately kept out of scope.</p><p>The AI executed exactly to spec. I ran the first real test on my actual inbox. The output was completely useless.</p><p>Reading it took about as long as reading the originals would have. 
The structure was clean, the format was clear, and I sat there looking at days of work, feeling the exact same weight as before. Same backlog. Same triage waiting. The tool had solved the problem on paper. The actual problem hadn&#8217;t moved.</p><p>The reflex, in that moment, is to fix the prompt. Add structure. Constrain the response. Most developers I watch do exactly this:</p><ol><li><p>Output misses.</p></li><li><p>Prompt gets tweaked.</p></li><li><p>Retry.</p></li></ol><p>The loop runs until the output looks acceptable, and they move on. They&#8217;ve built nothing except a slightly better prompt.</p><p>I almost did the same thing.</p><p>Then I made myself stop and read what the output was actually telling me. The spec was built on a principle I&#8217;d never tested: that separating retrieval from synthesis protected clarity. The output had just demonstrated the opposite. Synthesis was the work that made retrieval useful, and I had specified it out of scope. I rewrote the spec to fold the two together, ran it again, and the output became something I could act on.</p><p>The AI hadn&#8217;t failed. It had executed perfectly against thinking that was incomplete in a way I couldn&#8217;t see until I read the result.</p><p>That&#8217;s what AI does. 
It compiles your thinking and shows you what you actually thought.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>The diagnostic loop</h2><p>Before the diagnostic loop makes sense, you need to recognize the anti-pattern it replaces. Call it the prompt chase: tweaking the request until the AI produces something acceptable, then moving on. Alter, execute, evaluate. Alter, execute, evaluate. The chase ends when the output looks good enough, not when the thinking behind the request is right.</p><p>Each lap of the chase costs almost nothing. That&#8217;s the trap. Two hundred small alterations later, the developer has something usable and has learned nothing about how they think. The diagnosis was available in every output and went unread.</p><p>The diagnostic loop replaces the chase with a different movement. The trigger is the same, an output that missed, but the next five steps run one layer up from the prompt:</p><ol><li><p><strong>Stop.</strong> Don&#8217;t touch the prompt. Don&#8217;t try again. The pause is where the loop happens.</p></li><li><p><strong>Name the gap.</strong> Write one sentence describing what the output delivered. Write another describing what you needed. 
The difference between the two is the diagnosis.</p></li><li><p><strong>Locate the layer.</strong> The gap lives in one of three places: the <em>spec</em> (you asked for the wrong thing), the <em>category</em> (you used a word that carried an untested assumption), or the <em>context</em> (you knew something you never articulated).</p></li><li><p><strong>Refine that layer.</strong> Write down what was implicit and is now explicit. The tested assumption. The corrected category. The missing context. This is the work. It takes longer than rewriting a prompt and produces something a prompt never will.</p></li><li><p><strong>Rebuild the brief.</strong> Now the prompt comes back to the table, but informed by refined thinking instead of by guessing what phrasing might work better.</p></li></ol><p>The opening was the easy version of this loop, with the gap sitting on the surface of a spec I could just reread. Most gaps don&#8217;t sit on the surface. They hide inside the words you used or in the things you knew so well you never said them out loud.</p><h2>Categories that hide in plain sight</h2><p>The retrieval example was a specification gap. The next gap I hit was deeper, and the diagnostic was harder to read.</p><p>After the tool was working, I started inventorying its reusable parts, with one eye on something bigger. The Outcoders program needed building blocks that other developers could pick up and run, not just code I wrote for myself. Six components inside the tool were generic enough to live independently of the Slack-specific orchestration: a context retriever, a voice processor, a deduplication pattern, a few others. The intuitive move, the one any developer would reach for, was to extract them into shared libraries. Reusable code, single source of truth, standard pattern.</p><p>I asked AI to help me think through the extraction. The output came back: clean library structure, clear interfaces, sensible naming. 
Exactly what I&#8217;d asked for.</p><p>I almost moved on. The output was good. It matched the request. Then something started bothering me: the intuition that this could go wrong. The kind of low-grade discomfort you get when you&#8217;re about to ship something that&#8217;s correct on the surface but going to hurt you in six months. I made myself sit with it long enough to find out what.</p><p>The diagnostic question that surfaced was unexpectedly mundane: if I extracted these as libraries, what would the next person who wanted to use one actually do? The answer was uncomfortable. They&#8217;d import the library. They&#8217;d inherit its dependencies. Any change would force a coordinated release across every project that imported it. I&#8217;d be handing every Outcoder a tight coupling I was supposed to be helping them avoid. The reuse I was optimizing for would create exactly the kind of drag that kills momentum in a small program.</p><p>The output was correct against the wrong category.</p><p>What I had treated as utilities were capabilities. A library shares code at build time; a capability is something you call at runtime. A library couples its consumers together; a capability stays independent. The word &#8220;library&#8221; had been carrying assumptions I hadn&#8217;t examined, and the AI had executed against those assumptions perfectly.</p><p>Refining the prompt would have produced a better-organized library, and I would have shipped the wrong foundation to a program I was just starting. Refining the category produced an entirely different architecture: independent agents, composed at runtime, no shared imports. The implementation work that followed took a week. The decision itself took an afternoon, once the category was correct.</p><p>This is the part of the diagnostic loop that&#8217;s easiest to miss. The gap isn&#8217;t always in what you specified. Sometimes it&#8217;s in the words you used to specify it. 
The output reveals the assumption inside the word, but only if you&#8217;re listening for it.</p><h2>The context you didn&#8217;t articulate</h2><p>The third gap is the one I see most often in other developers&#8217; work, and it took me longest to recognize in my own.</p><p>Someone in Slack asked a question. I drafted a reply with AI. The response came back: technically accurate, well-structured, complete. I read it and almost sent it. Then I noticed it sounded condescending. Not by much. Just enough that the person reading it would feel slightly talked down to. The information was right. The tone was wrong in a way that would matter.</p><p>The first instinct, again, was to adjust the prompt. Tell the AI to be warmer, less formal, more collaborative. That instinct produces an output that&#8217;s better in a generic way and still wrong for the specific situation.</p><p>The diagnostic question that worked was different: what does the AI not know that&#8217;s making this miss? The answer became obvious as soon as I asked. The AI didn&#8217;t know the person was non-technical. It didn&#8217;t know we were in a public channel where other people would read the exchange. It didn&#8217;t know that the question, which sounded innocent, came from someone who lacked the vocabulary to ask for what they actually needed. None of that was in the brief, because none of that was articulated in my own thinking. I knew it implicitly. I hadn&#8217;t externalized it.</p><p>What needed refining was the model of what makes a reply work, not the prompt. Audience metadata. Channel context. The gap between the literal question and the underlying need. Once those were explicit pieces of the brief, the output stopped missing.</p><p>The AI made visible something I&#8217;d been holding implicitly for years: the work of replying well lives in modeling the situation correctly, long before any words get written. 
I&#8217;d been doing the modeling unconsciously, which meant I couldn&#8217;t teach it, couldn&#8217;t delegate it, and couldn&#8217;t improve it. The diagnostic loop forced the model out into the open, where it could be refined.</p><h2>What the loop actually builds</h2><p>The previous editions were about making thinking visible: first to other people, then to yourself. This edition extends that practice to the third audience most developers interact with every day but never think of as an audience. AI consumes the same externalized thinking your colleagues do, and it consumes it more literally. When your thinking is implicit, a human colleague fills in the gaps from shared context. AI doesn&#8217;t. It executes what&#8217;s there. That&#8217;s what makes it a diagnostic.</p><p>The diagnostic loop is the simplest version of that practice. It&#8217;s available every time you work with AI, which for most developers now means several times a day. Each output is a chance to read the diagnosis. Most developers skip it. The ones who don&#8217;t skip it are building a different kind of capability with every interaction, while everyone else is running the prompt chase.</p><p>Every cycle through the diagnostic loop leaves a residue: a tested assumption, a refined category, a piece of context you&#8217;d been carrying implicitly that&#8217;s now explicit. The residue accumulates. Over months, that accumulation is what people call structured thinking. It isn&#8217;t a thing you have before you start. It&#8217;s the by-product of a loop you ran enough times.</p><p>A better prompt has a short shelf life, because the ground underneath it keeps moving: models get retrained, interfaces shift, and the exact phrasing that worked yesterday needs adjustment tomorrow. Refined thinking sits on more stable ground. 
A category you understood correctly last month will still be correct next month, an assumption you tested last quarter doesn&#8217;t need testing again, and a model of what makes a reply actually work isn&#8217;t going to expire when the next version of the AI ships.</p><p>The developers running the diagnostic loop are quietly building an asset that never shows up in any output. The ones running the prompt chase are stuck producing deliverables and nothing else.</p><h2>Try This</h2><p>Pick one AI output from the last week that disappointed you. Not a catastrophic failure. A mild miss. Something that was technically fine but didn&#8217;t actually solve your problem.</p><p>Run the diagnostic loop on it. The five steps are above. No shortcuts, no skipping to step five. The value lives in step one, the pause, and in step three, naming which layer the gap lives in. Most people rush through those two and end up refining the wrong thing.</p><p>When you reach step five and rebuild the brief, use the new brief with the AI and compare the outputs. The interesting part isn&#8217;t whether the second output is better. The interesting part is what you had to articulate to get there. That articulation is the residue. Save it. Over enough loops, those pieces of articulated thinking are the structured thinking everyone says you should have.</p><p>The point of the exercise isn&#8217;t to fix one output. It&#8217;s to run the loop once with full attention, so the shape of it becomes yours. After that, you&#8217;ll start catching yourself running the prompt chase and have a choice you didn&#8217;t have before.</p><h2>The Deeper Cut</h2><p>The hardest part of the diagnostic loop is the pause itself. The chase runs at the speed of reflex, and the loop runs at the speed of attention. Bridging that gap reliably, every time, has been the real practice. So I&#8217;m building something to help.</p><p>Most AI tools aimed at developers do the opposite of what this edition argues for. 
They optimize the prompt. They suggest better phrasing, restructure the request, add missing constraints. They make the chase faster. The agent I&#8217;m building does the inverse: when an output disappoints, it walks me through the diagnostic loop instead of helping me write a better prompt. It asks where the gap lives. It pushes back when I try to skip step one. It makes the pause structural instead of optional.</p><p>It&#8217;s early. The first version is rough, useful enough for me, not yet ready for anyone else. But it&#8217;s the next building block in the program, and like every other artifact, paid subscribers will get access as it matures. The next few editions will follow the build: what the agent does, what I had to refine in my own thinking to make it work, where it surprised me, what it still gets wrong. The same diagnostic loop the edition describes, applied to building the thing that helps run it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[HackerRank agreed with us. Coding skills are no longer the test]]></title><description><![CDATA[Your best thinking is invisible. 
The platform that built an empire grading developers on code just started grading something else.]]></description><link>https://www.outcodethinking.com/p/hackerrank-agreed-with-us-coding</link><guid isPermaLink="false">https://www.outcodethinking.com/p/hackerrank-agreed-with-us-coding</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Mon, 13 Apr 2026 11:45:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/59fe717a-7781-48ff-b104-29599c7a2c4a_1200x630.png" length="0" type="image/png"/><content:encoded><![CDATA[<blockquote><p>&#129504; Mindset Shifts &#183; For developers rethinking how they grow</p><p><strong>This week&#8217;s challenge:</strong> take a recent technical decision and write one page explaining not what you built, but why.</p></blockquote><p>Last month I spent two hours on a Slack conversation that changed the direction of a client&#8217;s product. The CTO wanted to rebuild their authentication system from scratch. The engineering team had already started estimating the work. They&#8217;d even diagnosed the technical problems: session handling was fragile, the token refresh had edge cases on mobile, the original implementation had accumulated years of patches. All of that was true. None of it was the point.</p><p>The team had asked &#8220;what&#8217;s the problem?&#8221; multiple times. But every time they asked it, they asked it inside the frame of the auth system itself. The diagnosis was technical, the solution was technical, and the scope grew from there. 
Nobody had stepped outside the frame to ask a different question: what&#8217;s the user actually experiencing, and does fixing that require rebuilding the whole system?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Forty minutes of conversation later, it was clear. The actual user problem was a session management bug that made mobile users log in twice. A fix that would take days, not months. The rebuild was a technically correct solution to a correctly diagnosed set of problems, and it was still the wrong decision, because the problems it solved weren&#8217;t the ones that mattered to the business.</p><p>I didn&#8217;t write a single line of code that day. The most valuable thing I did was a reframe: stepping outside the level where the team was thinking and asking the question from one level up. There&#8217;s no commit for that. No PR. No ticket that says &#8220;prevented three months of wasted work.&#8221; The output was a Slack thread that took two minutes to read. The thinking behind it took two hours, plus twenty years of watching teams solve the wrong problem because nobody stepped outside the frame long enough to check.</p><p>That thinking is invisible. 
And if you&#8217;re a developer, most of your best work probably is too.</p><h2>The gap between what you do and what people see</h2><p>Think about your last week of work. You shipped features, closed tickets, reviewed pull requests. Those are the visible artifacts: the things your manager sees, your performance review counts, your team tracks in standups.</p><p>Now think about everything else you did. The three approaches you evaluated in your head before picking one for that API design. The other two vanished the moment you started coding, along with the reasoning that eliminated them. The pattern you noticed across two unrelated bugs that pointed to a deeper problem in how the service handles retries, but you fixed the bugs individually, and the systemic insight never left your head. The moment in a planning meeting where you asked a question that shifted the entire scope of a feature, and everyone moved on as if the new direction had been obvious from the start.</p><p>Every developer I&#8217;ve worked with does this kind of thinking constantly. Evaluating trade-offs, noticing patterns, making judgment calls that shape the quality of everything they produce. It happens so naturally that most don&#8217;t even register it as work. But it is work. It&#8217;s the hardest part of the work, and it&#8217;s the part that differentiates a developer who delivers from a developer who delivers the right thing.</p><p>The problem is that none of it leaves a trace.</p><p>Developer culture has a deep bias toward visible output. Commits, contributions, shipped features, green GitHub profiles. Career advice reinforces it: build more projects, write more posts, contribute to open source. The message is clear: if people can&#8217;t see it, it didn&#8217;t happen.</p><p>So developers get evaluated, promoted, hired, and trusted based on the fraction of their work that&#8217;s visible. 
The reasoning, the judgment, the decisions behind the decisions, the part that actually determines whether the visible output was worth building in the first place, stays inside their heads. They give it away for free, every day, because nobody taught them to make it seen.</p><h2>Why it stays hidden</h2><p>Part of this is structural. The tools and rituals of software development are designed to capture outputs, not reasoning. Pull requests show diffs, not the decision process that shaped them. Standups ask what you did yesterday and what you&#8217;ll do today, not why you chose that approach over the alternatives. Sprint retrospectives look at what went wrong, not what was prevented by good judgment that nobody noticed.</p><p>There&#8217;s also a cultural dimension. Making your thinking visible can feel like showing off, like you&#8217;re claiming credit for something that should be implicit. Developers tend to let the code speak for itself, which sounds humble but is actually a form of self-sabotage. The code doesn&#8217;t speak for itself. The code shows what you built. It says nothing about what you considered and rejected, what risks you anticipated and mitigated, or why you chose the simpler approach when a more complex one was technically superior. All of that context evaporates the moment the PR is merged.</p><p>The result is a career built on a fraction of your actual contribution. Two developers who ship the same feature look identical from the outside: same PR, same outcome, same velocity metric. But one of them spent an hour thinking through edge cases, evaluating approaches, and choosing the design that would be easiest to extend in three months. The other copied a pattern from somewhere else and hoped it would hold. 
The difference between them is entirely invisible until something breaks, and by then, the developer who thought carefully has already moved on to the next project, with no record of the judgment that made the first one durable.</p><p>This invisibility compounds over time. Every year, you accumulate more judgment, more pattern recognition, more ability to see what others miss. And every year, that growing capability remains hidden behind the same visible output: code, tickets, deploys. Your most valuable asset appreciates in the dark, and the people making decisions about your career can only see the surface.</p><h2>What changes when the thinking becomes visible</h2><p>A developer once asked me a question that changed how I evaluated everyone after him. We were talking about coding practices, best approaches for structuring modules, the usual technical conversation. Then he said: &#8220;What should I do to make my colleagues&#8217; job better?&#8221;</p><p>That question wasn&#8217;t about code quality. It was about how he thought. In one sentence, he told me that he didn&#8217;t see his code as a standalone product of his individual work. He saw it as something other people would have to live with, extend, debug at 2am, and build on top of for months after he&#8217;d moved to something else. That perspective is rare, and it&#8217;s invisible in a pull request. The code might look the same whether the developer was thinking about their colleagues or not. But the question gave me access to his mental model, and that access was worth more than any code review.</p><p>After that conversation, I could work with him differently. I could give him architectural responsibility, because I knew he was already thinking about the team&#8217;s experience, not just the feature. I could trust his judgment on trade-offs, because his frame extended beyond the immediate task. 
One question, one sentence of visible thinking, and my entire assessment of his trajectory shifted.</p><p>When I started managing teams, I carried that lesson forward. I looked for the developers who made their thinking legible, not just their output. And I started sharing that perspective with others: the developers who show you how they think unlock a kind of trust that no amount of shipped code can build.</p><p>This doesn&#8217;t require grand gestures. It means small, consistent acts of externalization. A PR description that includes one paragraph explaining what you considered and why you chose this approach. A Slack message that starts with the problem you&#8217;re solving before jumping to the solution. A standup answer that names the decision you&#8217;re facing, not just the task you&#8217;re working on. A brief comment before a code review that frames what the reviewer should pay attention to and why.</p><p>Each of these takes thirty seconds. Each one transforms how people perceive your work.</p><p>When a developer makes their reasoning visible, three things happen. First, other people start trusting their judgment, because judgment that&#8217;s visible can be evaluated and calibrated. The developer who asked about his colleagues&#8217; experience earned architectural ownership from a single question. Second, they get pulled into higher-level conversations, because the people making decisions recognize someone who thinks about problems the way they do. Third, their own thinking sharpens, because the act of articulating a decision forces clarity that staying in your head never does.</p><p>That third effect is the one most developers underestimate. Writing down why you made a choice often reveals that your reasoning has gaps you didn&#8217;t notice. The decision that felt obvious when it lived in your head turns out to rest on an assumption you haven&#8217;t tested. 
The act of externalizing doesn&#8217;t just communicate your thinking: it improves it.</p><p>Edition #8 was about learning to read systems: seeing the business behind the code, the dynamics behind the team, the shifts behind the market. That skill is powerful, but only if someone else can see that you have it. Reading is perception. This is its complement: making your perception legible. Together they form the complete skill: the ability to see clearly and to be seen clearly.</p><h2>The interview that proved the point</h2><p>I&#8217;ve conducted hundreds of technical interviews over my career, including live coding sessions. The conventional understanding is that live coding tests whether the candidate can solve the problem. That&#8217;s wrong, or at least incomplete. An experienced technical manager already knows that most competent developers can solve most coding problems given enough time and a calm environment, neither of which exists in an interview.</p><p>What live coding actually tests is whether you can see how someone thinks. How they decompose a problem before writing anything. What questions they ask. How they respond when they&#8217;re stuck. Whether they can name what they don&#8217;t know and reason toward it out loud.</p><p>I&#8217;ve hired developers who didn&#8217;t solve the problem. They got hired because the way they thought about it told me more than the solution would have. They asked questions that revealed they understood the domain. They explained their approach before coding, so I could see the reasoning, not just the syntax. They said &#8220;I think this might fail here because...&#8221; and that sentence was worth more than a working function, because it showed judgment in real time.</p><p>I&#8217;ve also passed on developers who solved the problem quickly and silently. They produced the right output, but I learned nothing about how they got there. In a thirty-minute conversation, they gave me a diff and kept their thinking invisible. 
I had no way to evaluate the one thing I cared about most.</p><p>A company that only advances candidates who produce a correct solution in live coding doesn&#8217;t understand what the exercise is for. What the exercise actually reveals is the thinking, and a correct solution delivered in silence hides exactly what the interviewer needs to see.</p><p>The industry&#8217;s evaluation infrastructure is starting to formalize this. HackerRank, the platform that defined coding challenges for a generation, now evaluates what they call &#8220;AI fluency&#8221;: how developers collaborate with AI tools, the quality of their judgment when reviewing AI-generated code, the reasoning they apply when choosing between approaches. Their assessments increasingly measure thought process and problem-solving signals, because as AI handles more of the raw coding, the developer&#8217;s thinking becomes the primary differentiator, and evaluation is finally catching up to that reality.</p><h2>The artifact library and what it&#8217;s been doing</h2><p>Looking back at the exercises across these editions, I notice a pattern I didn&#8217;t plan. Every artifact I asked you to create was an exercise in making invisible thinking visible. Separating judgment from execution. Writing down how you make decisions. Capturing the reasoning behind a build before it fades into &#8220;that&#8217;s just how we do things.&#8221; Describing your vision precisely enough that someone (or something) else could act on it. Reading a system and writing down what you see.</p><p>None of those exercises asked you to write code. Every one of them asked you to externalize something that normally stays in your head. That&#8217;s what making thinking visible actually looks like in practice: not a single grand gesture, but a growing habit of giving your reasoning a shape it can survive in.</p><p>This week&#8217;s artifact makes that habit explicit.</p><h2>Try This</h2><p>Take a technical decision you made recently. 
An architecture choice, a tool selection, an approach to a bug, a feature you decided not to build, a refactoring you prioritized over new work. Something where your judgment shaped the outcome.</p><p>Write one page: not about what you built, but about why.</p><p>What problem were you solving? What was the context that made it a problem worth solving now? What approaches did you consider? What did you reject, and what was your reasoning? What trade-offs did you accept? What would change your mind about the choice you made?</p><p>When you&#8217;re done, show it to someone on your team. Not for approval, not for feedback on the decision itself. Show it for the experience of making your reasoning visible. Watch what happens. Notice what questions they ask: those questions tell you exactly what was invisible before. Notice whether the conversation that follows is different from the conversations you usually have about your work.</p><p>That difference is the gap this edition is about. The gap between what you do and what people see.</p><p>One page. One decision. The habit starts there.</p><h2>The Deeper Cut</h2><p>One of the hardest habits to build in developers is the practice of externalizing reasoning. In mentoring conversations, I see it constantly: developers who can evaluate code, design systems, and spot problems with real precision, and none of it reaches anyone else because they never developed the reflex to say it out loud. The thinking is sharp. The output is silent. The gap between those two is where careers stall without anyone, including the developer themselves, understanding why.</p><p>Paid subscribers get the thinking memo template: a structured format for documenting the reasoning behind technical decisions, designed to make invisible thinking legible without turning every choice into a formal document. 
It&#8217;s the same format I use in client work when a codebase assessment needs to convey not just what I found, but why it matters and what it means for the business.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[What you see is what you're worth]]></title><description><![CDATA[Every career leap I've made came from the same place: seeing something in a system that the people inside it couldn't see.]]></description><link>https://www.outcodethinking.com/p/what-you-see-is-what-youre-worth</link><guid isPermaLink="false">https://www.outcodethinking.com/p/what-you-see-is-what-youre-worth</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sun, 05 Apr 2026 11:31:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4d9f5ba6-b2b3-4ce2-b3b2-efe21ced7607_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#129517; Career Navigation &#183; For developers rethinking how they grow</p><p><strong>This week&#8217;s challenge: </strong>pick one system you work with daily (a codebase, a team, a product) and write a one-page assessment of what it tells you about the business. Not the technical state. 
The business implications.</p></blockquote><p>I&#8217;ve been evaluating other companies&#8217; codebases recently. A client sends me a repository, and I produce an assessment: what&#8217;s the real state of this technology, what are the risks, and what does it mean for their business.</p><p>The interesting part isn&#8217;t the technical analysis. Any experienced developer can look at a codebase and spot missing tests, hardcoded credentials, or a monolith that grew beyond control. The part that creates value is translating those findings into language a business owner can act on. &#8220;Your deploy pipeline has no automated tests&#8221; means nothing to a CEO. &#8220;Every time your team ships a feature, they&#8217;re gambling that it won&#8217;t break something in production, and you have no way of knowing until a customer calls&#8221; means everything.</p><p>I didn&#8217;t build that skill by studying assessment methodologies. I built it by spending twenty years inside systems where the technical problems were always symptoms of business problems that nobody had named yet.</p><p>That ability to look at a technical system and see the business underneath it is the single most valuable skill I&#8217;ve developed in my career. 
And almost no developer I&#8217;ve met, at any level, deliberately practices it.</p><h2>The invisible career ladder</h2><p>Developer careers have a standard progression. Junior, mid, senior, staff, principal. Each rung is defined, implicitly or explicitly, by technical depth. You get promoted because you can handle harder technical problems, mentor other developers, make larger architectural decisions. Above senior, the expectation does shift: soft skills, stakeholder management, and business understanding gradually become part of the job description. The progression isn&#8217;t purely technical at every level, and anyone who&#8217;s reached staff or principal knows that.</p><p>But even where business understanding is expected, it&#8217;s treated as a complement to technical depth, not a skill in its own right. The career ladder still assumes technical execution as the foundation, and everything else as something you layer on top once you&#8217;ve proven yourself technically. That made sense when technical execution was the bottleneck. When writing the code was the hard part, the people who could write harder code were worth more. 
The career ladder tracked the scarcity.</p><p>That scarcity is dissolving. AI writes increasingly complex code. The technical execution gap between a mid-level developer and a senior developer shrinks every month. If your career growth depends entirely on being better at the thing AI is getting better at faster than you are, the math doesn&#8217;t work.</p><p>But there&#8217;s a parallel ladder that most developers never notice, because nobody names it and no job description lists it. I think of it as the perception ladder: what can you see that others can&#8217;t?</p><p>A junior developer looks at a codebase and sees files, functions, classes. Syntax. They&#8217;re reading the words.</p><p>A mid-level developer looks at the same codebase and sees patterns, architectural decisions, technical debt. They&#8217;re reading the sentences.</p><p>A senior developer looks at it and sees the team&#8217;s history: where they rushed, where they were careful, where they changed their minds. They&#8217;re reading the story.</p><p>The developer who builds a career that lasts looks at the same codebase and sees the business: where it&#8217;s headed, where it&#8217;s stuck, what the technology is enabling and what it&#8217;s preventing. They&#8217;re reading the implications.</p><p>Each level of perception changes what you can do, which changes what you&#8217;re worth. And the jump between them has almost nothing to do with writing better code.</p><h2>Three systems you should learn to read</h2><p>The codebase assessment work made me articulate something I&#8217;d been doing instinctively for years. Every career opportunity I&#8217;ve had came from reading a system (technical, human, or market) and seeing something that the people inside it had stopped noticing.</p><h3>Reading codebases</h3><p>When I evaluate a repository, I&#8217;m not looking for bugs. I&#8217;m looking for patterns of decision-making embedded in the code. 
A codebase with three different HTTP clients isn&#8217;t a technical problem. It&#8217;s a signal that the team has no technical leadership, or that leadership changed hands multiple times, or that nobody owns the standards. The real remediation starts with understanding why three coexist, which is a people question dressed up as a technical one.</p><p>Hardcoded credentials in a tracked file aren&#8217;t just a security issue. They tell you that the team is moving faster than their process can support, which means other corners are being cut too, which means the real risk isn&#8217;t the credentials; it&#8217;s everything else you haven&#8217;t found yet.</p><p>I look at test coverage and see hiring strategy. A codebase with zero tests tells me the team either has no one senior enough to enforce testing discipline, or they&#8217;re under pressure that makes anything beyond shipping feel like a luxury. Both of those are business problems that will compound.</p><p>The technical findings are inputs. The career skill is the interpretation.</p><h3>Reading teams</h3><p>The mentoring work I described in Edition #6 taught me the same lesson from the human side. Miguel wanted coding exercises. Alexandre wanted shortcuts. Both were telling me, through what they asked for, something important about where they were and what they needed. But what they asked for and what they needed were different things.</p><p>This happens in every team I&#8217;ve worked with. A developer who constantly asks for code reviews on trivial changes is usually looking for reassurance about their position on the team, not about the code itself. A team lead who micromanages pull requests has often been burned by a production incident they didn&#8217;t catch, and the control is their way of processing it.</p><p>Reading a team means looking past what people do and understanding why they do it. 
The developer who can do this becomes the person others trust, the person who gets called into rooms where decisions happen, the person whose influence extends beyond their technical scope. And none of that shows up on a GitHub profile.</p><h3>Reading markets</h3><p>In 2005, I could see that web development was shifting from server-rendered pages to richer client-side experiences. That wasn&#8217;t a prediction. The signs were visible to anyone paying attention: broadband adoption, browser capabilities improving, user expectations rising. I invested in learning JavaScript deeply at a time when most backend developers dismissed it as a toy language.</p><p>That bet shaped the next decade of my career. Reading the present clearly enough to see where the pressure was building turned out to be more useful than any prediction could have been.</p><p>The AI shift follows the same pattern. The developers who are positioning themselves well right now aren&#8217;t the ones making bold predictions about what AI will or won&#8217;t do. They&#8217;re the ones watching closely: which tasks are AI handling reliably today? Which ones require human judgment every time? Where is the boundary moving, and how fast? The answers to those questions change how you invest your learning time, which changes where you end up in two years.</p><h2>Why this skill is invisible</h2><p>Developer culture celebrates the visible: commits, contributions, conference talks, blog posts, open-source projects. Career advice for developers focuses almost entirely on building visible artifacts: ship more projects, write more posts, contribute to open source, make your GitHub green.</p><p>These things aren&#8217;t useless. But they optimize for a signal that&#8217;s becoming less scarce (the ability to produce output) while ignoring the one that&#8217;s becoming more valuable: the ability to understand context. 
The gap between those two things is where the real career leverage lives.</p><p>The developers I&#8217;ve seen grow fastest, the ones who jumped from mid-level to staff in three years instead of eight, had one thing in common: they could walk into a situation and see what was actually going on. The client who says they need a rewrite actually needs better deployment. The product manager who keeps changing requirements hasn&#8217;t figured out the business model. The team that&#8217;s &#8220;slow&#8221; isn&#8217;t lazy; their architecture forces them to coordinate on every change.</p><p>Seeing these things didn&#8217;t require exceptional technical talent. It required paying attention to the systems around the code, not just the code itself.</p><p>The reason this skill stays invisible is that it doesn&#8217;t look like a skill. It looks like intuition, or experience, or &#8220;just knowing.&#8221; But it&#8217;s a practice, and you can start building it deliberately.</p><h2>How to practice</h2><p>The perception ladder isn&#8217;t something you climb by accumulating years. You climb it by changing what you pay attention to.</p><p>Start with the system you know best: your own codebase. Tomorrow, before you write any code, spend fifteen minutes reading it as if you were an outsider. Not debugging, not implementing. Reading. What does the structure of this project tell you about the team that built it? What decisions were made under pressure? What patterns reveal a change in direction? Where is the complexity, and why is it there?</p><p>Then extend the same practice to the people around you. In your next meeting, instead of focusing on the content being discussed, watch the dynamics. Who speaks? Who defers? Who gets interrupted? Who gets silence after they talk, the kind of silence that means people are actually thinking about what was said? 
The answers to these questions tell you more about how decisions actually get made in your organization than any org chart.</p><p>Finally, apply it to the market. Every week, read one thing that isn&#8217;t about your technology stack. A company&#8217;s earnings call summary. A product teardown by someone outside engineering. An industry analysis. You&#8217;re not trying to become a business analyst. You&#8217;re training the muscle that connects technical reality to business context, which is the muscle that makes your technical judgment valuable beyond the codebase.</p><p>None of this will show up on your CV. All of it will show up in every conversation, every decision, every moment where someone needs a developer who understands the bigger picture. And those moments are the ones that move careers.</p><h2>Try This</h2><p>The previous editions built a growing library of artifacts: a delegation map, a judgment breakdown, a problem statement, a decision analysis, an evaluation document, a mentoring reflection, a creative project brief. Each one externalized a different kind of thinking.</p><p>This week&#8217;s artifact is a system reading. Pick one of the three systems from this edition (your codebase, your team, or your market) and write a one-page assessment.</p><p>If you choose the codebase: don&#8217;t list technical problems. Write down what the code tells you about the business. What does the test coverage suggest about the team&#8217;s priorities? What does the architecture reveal about the product&#8217;s growth trajectory? What would you tell the CEO if they asked you, honestly, what shape their technology is in?</p><p>If you choose the team: write down the three unspoken rules that govern how decisions actually get made. Not the official process. The real one. Who has influence? Where does information flow, and where does it get stuck? 
What does the team avoid talking about?</p><p>If you choose the market: write down what you see changing in the demand for your type of work. Not what the headlines say. What you&#8217;re observing firsthand: in job postings, in client conversations, in the projects you&#8217;re being asked to work on. Where is the pressure shifting?</p><p>One page. Observations, not solutions. The point isn&#8217;t to fix anything this week. The point is to practice seeing what&#8217;s there, clearly, before jumping to action. That practice, repeated consistently, is what builds the perception that separates developers who grow from developers who stall.</p><h2>The Deeper Cut</h2><p>The codebase assessments I&#8217;ve been doing revealed something I didn&#8217;t expect about my own career. When I look back at every significant move (the health tech company that grew to 80,000 users, the search engine that unlocked thousands of users overnight, the travel tech architecture that processed over a billion dollars) the pattern is the same. I walked into a technical environment and saw a business problem that hadn&#8217;t been named. The technical work followed from the naming, and the naming is what created the value.</p><p>I used to think this was just experience accumulating passively. What the assessment work showed me is that it can be practiced deliberately. Every codebase I evaluate now sharpens the same skill: translating between what the technology is doing and what the business needs it to do. That translation is the career skill with the highest ceiling and the lowest competition, because almost nobody thinks of it as a skill at all.</p><p>Paid subscribers get the system reading template: a structured format for producing the kind of assessment described in this edition, applicable to codebases, teams, or markets. 
It includes the specific questions I use in client assessments, adapted so you can practice the same analysis on your own work environment.</p>]]></content:encoded></item><item><title><![CDATA[Remove the execution. See what's left.]]></title><description><![CDATA[I went where I had no expertise to see which skills actually matter.]]></description><link>https://www.outcodethinking.com/p/remove-the-execution-see-whats-left</link><guid isPermaLink="false">https://www.outcodethinking.com/p/remove-the-execution-see-whats-left</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sun, 29 Mar 2026 11:45:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6d5ad5bd-553b-4841-9613-2012708940d7_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#129302; Building with AI &#183; For developers rethinking how they work.</p><p><strong>This week&#8217;s challenge: pick something you&#8217;ve always wanted to build outside your expertise. Spend 30 minutes describing it to AI. Not asking AI to build it, just describing what you want. Notice where your clarity runs out.</strong></p></blockquote><p>I wanted to prove something. 
Everything I&#8217;ve written in this newsletter argues that thinking matters more than coding, but I&#8217;d only ever tested that claim inside software, where I already know what I&#8217;m doing. Easy to believe you&#8217;re winning because of your thinking when you also happen to have the technical skill to execute.</p><p>So I removed the skill. Deliberately. I picked two domains where I have zero technical ability, music production and game development, and tried to ship real products using nothing but AI for execution and my own thinking for everything else.</p><p>There&#8217;s a musician on Spotify right now that I created. I also built a fully playable strategy game. I don&#8217;t play any instrument, I&#8217;ve never composed anything, I don&#8217;t know game engines, and I&#8217;d never built a game in my life. It worked anyway.</p><p>What made it work was what has always made software work: the thinking. I already believed this. Every edition of this newsletter argues it. The experiment wasn&#8217;t to discover it. 
It was to prove it, in domains where I couldn&#8217;t cheat.</p><div><hr></div><h2>Building music without knowing music</h2><p>A friend asked me not long ago what music I listen to during focused work. Every developer has an answer to this. Lo-fi beats, ambient soundscapes, classical, electronic. I told him I don&#8217;t listen to music, and haven&#8217;t in decades. He didn&#8217;t believe me. But it was true. I like heavy metal, I enjoy a few bands, and somewhere along the way I stopped engaging with music entirely. I&#8217;m a quiet person who thinks constantly, processes ideas through reflection rather than noise. Music was never part of that process.</p><p>That gap made me curious. And curiosity plus AI turned out to be enough to build something I never could have built alone.</p><p>I created an artist. His name is Magnus Jow. He&#8217;s on Spotify, and every one of his songs explores the cosmos, the universe, and scientific concepts expressed through poetry. 
He&#8217;s the artist I would have listened to if I&#8217;d ever found music that matched the way I think.</p><p>Here&#8217;s what I don&#8217;t know how to do: play an instrument, compose a melody, write lyrics that scan properly, arrange instruments into a coherent track, mix audio, produce a finished song. Every technical step in the music production process was beyond my ability.</p><p>Here&#8217;s what I did know: what I wanted to express. The themes. Cosmic, scientific, reflective. The feeling I wanted a track to carry. The difference between lyrics that captured an idea precisely and lyrics that sounded nice but said nothing. I knew the <em>what</em> and the <em>why</em>. I had no access to the <em>how</em>.</p><p>AI handled the how. It generated lyrics, melodies, instrumentation, vocals. Every technical layer of music production that I couldn&#8217;t perform, AI could. In theory, this should have been simple: describe what you want, get music back.</p><p>In practice, it demanded more from me than most software projects I&#8217;ve built.</p><h3>The refinement was the real work</h3><p>The first outputs weren&#8217;t right. They were competent, technically sound, sometimes even pleasant, but they didn&#8217;t match what I had in my head. The problem was never that AI couldn&#8217;t produce music. The problem was that I needed to figure out how to communicate what I wanted with enough precision for AI to get close, and then refine the gap between close and right.</p><p>Four dimensions needed to be solid before anything good came out:</p><p>The <strong>lyrics</strong> required cycles of generation, reading, filtering, injecting my own ideas, regenerating, reading again, cutting what felt wrong, sharpening what felt close. AI could write lyrics. I couldn&#8217;t. But I could read a verse and know immediately whether it carried the weight of the idea I was trying to express. 
That judgment, the ability to evaluate creative output against an internal standard I couldn&#8217;t articulate technically, was the entire contribution.</p><p>The <strong>music style</strong> needed definition that went beyond genre labels. Saying &#8220;ambient electronic&#8221; produced generic results. Describing the specific texture I wanted, the pace, the mood, how the sound should feel at different moments in the track, produced something closer. Every round of refinement taught me that the quality of the output depended entirely on the quality of my description.</p><p>The <strong>singer&#8217;s voice</strong> was a decision with consequences. The wrong voice made good lyrics feel hollow. The right voice made the same words land differently. This was a taste decision, not a technical one, and no amount of AI capability could substitute for knowing what I wanted to hear.</p><p>The <strong>album theme</strong> held everything together. Individual tracks needed to work on their own, but the album needed a coherent arc. That arc existed in my head before any music was produced. It was a product vision, the same kind of thinking I apply when designing a software system&#8217;s architecture, just expressed through sound instead of code.</p><p>Each of these dimensions went through multiple iterations before reaching a version I was willing to ship. The process looked exactly like the refinement loops I use in software development, except I couldn&#8217;t cheat by writing any of the implementation myself. Every version was produced by AI. My job was to evaluate, direct, and decide.</p><h3>The pattern I recognized</h3><p>Halfway through the album, I realized I&#8217;d seen this workflow before. The hardest software projects I&#8217;ve worked on were never the ones with the most complex code. 
They were the ones where the requirements were unclear, the stakeholder didn&#8217;t know what they wanted, and no one had taken the time to think through what the product actually needed to do.</p><p>When those pieces are solid, when the vision is clear, the requirements precise, the decisions made deliberately, the coding becomes the straightforward part. Difficult, sometimes. Creative, occasionally. But the execution follows from clarity. The thinking is where the real work lives.</p><p>I&#8217;d known this for years. But knowing it while you&#8217;re the one who can also write the code is different from knowing it when you <em>can&#8217;t</em>. Music stripped away the safety net. I couldn&#8217;t fix a bad track by jumping into the production layer, the way I might fix a bad feature by jumping into the code. All I had was my thinking: my vision, my taste, my ability to evaluate and direct.</p><p>And it was enough to ship an album.</p><div><hr></div><h2>Building a game without knowing game development</h2><p>When I was a teenager, I played a Korean-made strategy game called Vital Device. Think StarCraft: you build bases, manage resources, command units in real time. This one had a biological sci-fi theme. Units that felt alive, organic structures, a world that was strange and specific enough to stick with me for decades.</p><p>I played the demo version because I couldn&#8217;t afford the full game. I spent countless hours in those limited missions, imagining what the rest of the game would be like. When I eventually had the money to buy it, the game had disappeared. Too old, too obscure, too hard to find.</p><p>I know StarCraft exists. I could buy it right now and play essentially the same genre. But I didn&#8217;t want the genre. I wanted <em>that</em> game. The specific one from my memory, with its particular aesthetic and the feeling it gave me as a teenager who couldn&#8217;t afford the full version.</p><p>So I rebuilt it. 
Using Claude, Gemini AI Studio, and a game engine I&#8217;d never opened before, I built a fully playable version of the game I&#8217;d been carrying in my head for over two decades.</p><h3>Same pattern, different domain</h3><p>I don&#8217;t know game engines. I don&#8217;t know game architecture. I don&#8217;t know how to build a strategy game from scratch. What I know is how to think about systems: what components need to exist, how they interact, what the user experience should feel like, where the complexity lives. Software architecture gave me a mental model for decomposing a game into buildable pieces, even though I&#8217;d never built a game before.</p><p>The same refinement cycle from the music project played out here. And the result was the same: the game is fully playable. Built by someone who had never made a game, using tools he&#8217;d never used, in an engine he&#8217;d never opened.</p><div><hr></div><h2>What building outside your domain reveals about building inside it</h2><p>Here&#8217;s what I couldn&#8217;t see clearly until I left software: when you&#8217;re the person who can write the code, the execution and the thinking blur together. You move between them so fluidly that it&#8217;s hard to tell where the valuable work ends and the mechanical work begins. You might spend an hour on a problem and attribute the difficulty to the code, when the real difficulty was figuring out what the code needed to do.</p><p>The Portuguese word for this is <em>desapego</em>. Letting go. Letting go of the execution, of the identity tied to being the one who writes the code, of the comfort that comes from knowing you could always jump in and fix things at the implementation level.</p><p>When I removed that comfort, what remained was enough to ship. That&#8217;s the point.</p><p>Edition #4 covered the skills that don&#8217;t expire. This was the stress test. 
Every skill I used to ship music and a game was a skill I&#8217;d built through software, not through those domains. The domain-specific execution didn&#8217;t transfer, because it didn&#8217;t need to. AI handled that part.</p><div><hr></div><h2>The real lesson from Magnus Jow</h2><p>There&#8217;s a personal dimension to this that goes beyond the professional argument.</p><p>I spent decades as someone who doesn&#8217;t engage with music. That was never a problem to solve. But creating Magnus Jow gave me access to something I didn&#8217;t know I was missing: a way to express ideas that don&#8217;t fit into technical writing, into newsletter essays, into the formats I&#8217;m comfortable with. The cosmic themes, the poetic framing of scientific concepts, that was a part of my thinking that had never found an outlet.</p><p>AI didn&#8217;t give me creativity I didn&#8217;t have. It gave me execution capability I didn&#8217;t have, which unlocked creativity that was already there. The ideas existed. The taste existed. The vision existed. What was missing was the craft, and for the first time in history, the craft can be delegated.</p><p>This is what I mean when I say developers need to rethink what they are. You are not your ability to write code. You are your ability to think about problems, envision solutions, evaluate results, and direct execution toward an outcome. The code was always just one medium. Now that AI can handle the medium, what&#8217;s left is you.</p><div><hr></div><h2>Try This</h2><p>The exercises from previous editions built outward: a delegation map, a judgment breakdown, a problem statement, a decision analysis, an evaluation document, a mentoring reflection. This week, the exercise goes sideways.</p><p>Think of something you&#8217;ve always wanted to build that has nothing to do with your professional expertise. An illustrated book. A short film. A mobile app for a hobby you care about. A visual identity for a side project. 
Anything where you care about the outcome but you don&#8217;t have the technical skill to produce it.</p><p>Now spend 30 minutes describing that thing to an AI tool. Describe it in detail: what it should look like, how it should feel, what it needs to accomplish, who it&#8217;s for, what makes it different from existing versions. Write it as if you&#8217;re briefing someone who has every technical skill in the world but no understanding of your vision.</p><p>When you run out of things to say, look at what you wrote. The places where your description is precise are the areas where your thinking is clear. The places where you got vague, where you wrote something like &#8220;it should feel right&#8221; or &#8220;something cool,&#8221; are the areas where you haven&#8217;t done the thinking yet.</p><p>That gap between precision and vagueness is the same gap that exists in your software work. You&#8217;ve just never seen it this clearly because your technical ability fills in the blanks automatically. When you can&#8217;t fill them in, the gaps become visible.</p><p>Keep that description. If you want, use it. Start a project outside your expertise and see what transfers. Magnus Jow started as 30 minutes of description. He ended up on Spotify.</p><div><hr></div><h2>The Deeper Cut</h2><p>The most counterintuitive discovery from these projects was that my taste improved as I created. I started with a vague sense of what I wanted and ended with a precise internal standard I didn&#8217;t know I was developing. Each round of evaluation, each moment of recognizing that an output wasn&#8217;t right, each decision to push further. All of it built a judgment muscle I couldn&#8217;t have trained by consuming music or playing games passively.</p><p>That&#8217;s the compound effect of building: you don&#8217;t just produce an output, you develop the ability to produce better outputs next time. 
This applies to software, to music, to games, and to every domain where the thinking matters more than the execution. The builders who start now, in any domain, will have a judgment advantage that passive consumers will never catch up to.</p><p>Paid subscribers get the creative project briefing template: a structured format for describing a non-technical project to AI, with specific prompts for vision, style, evaluation criteria, and refinement loops. It&#8217;s the same framework I used for both Magnus Jow and the game, adapted so you can apply it to whatever you&#8217;ve always wanted to build but never had the craft for.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The month where nothing satisfying happened, and the lesson I almost missed]]></title><description><![CDATA[Teaching people how to think is the hardest thing I've done. Harder than the code. Harder than the architecture. 
Harder than anything technical.]]></description><link>https://www.outcodethinking.com/p/the-month-where-nothing-satisfying-happened-and-the-lesson</link><guid isPermaLink="false">https://www.outcodethinking.com/p/the-month-where-nothing-satisfying-happened-and-the-lesson</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sat, 21 Mar 2026 12:45:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c0803c20-0338-483e-acdd-7dac13e8801c_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#128295; Field Notes &#183; &#129504; Mindset Shifts &#183; From the field, no filters</p><p><strong>This week&#8217;s challenge: find one moment this week where you followed instructions without questioning whether the instructions were right. Write down what question you should have asked instead.</strong></p></blockquote><p>This edition was the hardest to write.</p><p>I sat down, opened the doc, and tried to write about what I got wrong this month. That was the plan: a Field Notes edition about mistakes, vulnerability, the things I learned by getting it wrong. Honest and raw.</p><p>The problem is that nothing broke this month. The month was slow and incremental, without a single clear failure I could point to or a story with a satisfying turning point. The kind of progress that doesn&#8217;t make for a good opening paragraph.</p><p>I nearly forced it. Manufactured drama. Turned small things into big lessons so the edition would have the shape readers expect: conflict, insight, resolution.</p><p>Then I realized the forced shape was itself the thing worth writing about. 
Because the pressure I felt, the urge to package a month of quiet, grinding work into something clean and teachable, is the same pressure I&#8217;ve been fighting in a different context all month.</p><p>The pressure to turn thinking into steps.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.outcodethinking.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>Two mentees, one pattern</h2><p>I mentor several developers. Two of them taught me something important this month, the same lesson from completely different angles.</p><p>Miguel is entering the software development market. He&#8217;s early in the journey, building foundational skills, and I&#8217;m preparing him to work as a developer in the AI era. Not the era that&#8217;s coming. The one we&#8217;re already in.</p><p>Alexandre is an entrepreneur. Already running things, already dealing with real business pressure, trying to use AI to navigate situations that don&#8217;t have clean answers.</p><p>Different profiles, different goals, different stages. And both, in the same month, showed me the same thing: <strong>they wanted me to tell them exactly what to do.</strong></p><p>Miguel&#8217;s version was subtle. I had a feeling he was expecting to practice the way every developer resource teaches: write code, solve coding challenges, follow a curriculum. He never said it directly, but the expectation was there, and it made sense, because that model is everywhere.</p><p>But that&#8217;s the model this whole newsletter argues against. So my job wasn&#8217;t to give him harder coding challenges. 
It was to create exercises where the code wasn&#8217;t the point, where he needed to think, question what the AI produced, identify what was missing, give context, connect documentation, push back. Exercises where the answer wasn&#8217;t in the syntax but in the judgment around it.</p><p>That took real energy to design. Any mentor can hand someone a list of coding challenges and move on. Building exercises that trained his mind instead of his muscle memory was a different kind of work.</p><p>I found a way. It took more from me than I expected, but the exercises landed. Miguel started questioning AI output instead of accepting it. Started noticing gaps. That shift is worth more than a hundred solved algorithms.</p><p>Alexandre&#8217;s version was louder.</p><p>We had several deep discussions because I noticed a pattern: he was jumping steps. Consistently. He&#8217;d encounter a problem and reach for AI immediately, without sitting with the problem first. Without understanding it. Without refining what he was actually trying to solve.</p><p>The loop I kept pushing was: understand, question, refine, learn, then use AI. Build clarity before you build speed. Every time he skipped the understanding phase, the AI output looked useful but missed something fundamental, because he hadn&#8217;t given it (or himself) the context it needed.</p><p>He resisted this, and it took real effort to understand why.</p><div><hr></div><h2>Why entrepreneurs skip the thinking</h2><p>Alexandre wasn&#8217;t being lazy. He wasn&#8217;t ignoring the process because he didn&#8217;t care. He was ignoring it because of something deeper in how entrepreneurs operate.</p><p>I&#8217;ve worked with entrepreneurs before, across different industries and stages. There&#8217;s a pattern that shows up consistently: the idea is more motivating than the path. Entrepreneurs are energized by possibility. 
The vision of what could be, the optimism that this time it will work, the momentum of a new direction. That&#8217;s the fuel.</p><p>When you ask them to slow down, define the problem precisely, question their assumptions, refine their understanding before acting, that process feels like it&#8217;s killing the idea. The clarity they gain comes at the cost of the excitement that was driving them forward. And for someone like Alexandre, who has real financial pressure to accelerate, every hour spent on clarity feels like an hour not spent on results.</p><p>I get it. The pressure is real. But the Outcode Thinking program isn&#8217;t built for speed. Learning to think is a skill that takes time to absorb and develop. There are no shortcuts, because the shortcuts produce the exact problem the program is designed to solve: people who can follow instructions but can&#8217;t figure out what to do when the instructions don&#8217;t exist.</p><p>The conversation with Alexandre pushed me to articulate something I&#8217;d been feeling but hadn&#8217;t said clearly: the hardest part of this work is holding the line when the person you&#8217;re helping is uncomfortable. His discomfort wasn&#8217;t a sign that the process was wrong. It was a sign that the process was working, because it was forcing him into the part of thinking that he&#8217;d been skipping.</p><div><hr></div><h2>The tension that runs through everything</h2><p>The thread connecting Miguel, Alexandre, and this edition is the same one that runs through the entire newsletter: the people who need to learn how to think are the same people who expect to be taught in steps.</p><p>This is the tension I underestimated when I started writing Outcode Thinking. Developer content has been tutorial-shaped for so long that the format itself carries expectations. A weekly newsletter from an experienced developer should give you frameworks, checklists, specific actions. Step one, step two, step three. 
Follow this and you&#8217;ll get there.</p><p>I feel that pull every week. The pressure to turn every edition into a tutorial. To give you five steps to evaluate AI output, three rules for better communication, a roadmap for career navigation. That format is comfortable. It&#8217;s familiar. It gets clicks and shares. And it completely misses the point, because the developers who follow those steps without understanding why they exist will be lost the moment they face a situation the steps don&#8217;t cover.</p><p>The irony is that this is exactly what I was fighting with Miguel and Alexandre. They wanted steps. I kept redirecting them toward thinking. And here I am, writing a newsletter, feeling the same pressure from the other side.</p><p>Every edition I&#8217;ve published has tried to balance this: enough structure that readers have something to apply, enough depth that the application requires their own judgment. Some editions landed that balance better than others. I&#8217;m still calibrating. This month made me realize that the calibration itself is the work, and there&#8217;s no final version where the tension disappears.</p><div><hr></div><h2>What the quiet month actually taught me</h2><p>The month where nothing broke taught me more than a month full of dramatic failures would have. The lesson wasn&#8217;t in any single event. It was in the accumulation of small, difficult moments where the easy path was to give someone what they wanted instead of what they needed.</p><p>Every time Miguel wanted a coding exercise and I gave him a thinking exercise instead, I was choosing the harder thing. Every time Alexandre wanted to jump to AI and I pulled him back to the problem itself, I was choosing discomfort over speed. Every time I sat down to write this newsletter and resisted the urge to turn it into a listicle, I was making the same choice.</p><p>None of these moments felt like wins at the time. They felt like friction. 
The satisfying version of this month, the one where I could write a clean story about a mistake I made and a lesson I learned, didn&#8217;t happen. What happened instead was a month of holding positions that are easy to argue for in an essay and hard to hold in real life.</p><p>The developers who grow the most are the ones who can sit with the discomfort of not knowing what to do next and use that discomfort to think harder. I&#8217;ve written that sentence in different forms across five previous editions. This month, I had to practice it myself, not as a writer but as a mentor, over and over, with real people who were counting on me to help them.</p><p>The work of learning how to think is slow and uncomfortable and rarely produces a good story. That, maybe, is the most honest thing I&#8217;ve written in this newsletter so far.</p><div><hr></div><h2>Try This</h2><p>The exercises from previous editions built external artifacts: a delegation map, a judgment breakdown, a learning target, a decision analysis, an evaluation document. This week, the exercise turns inward.</p><p>Think about the last time someone asked you for help. A colleague, a friend, someone you mentor. Think about what they asked for and what you gave them. Were those the same thing?</p><p>If you gave them exactly what they asked for, the answer, the solution, the step-by-step, ask yourself whether that was the right call. Sometimes it is. Sometimes people just need an answer. But sometimes the request for steps is a signal that the person hasn&#8217;t thought enough about the problem yet, and the most helpful thing you can do is redirect them toward the question they should be asking.</p><p>Write down one instance where you gave someone the answer when you should have helped them find it. And one instance where you held back the answer and helped them think instead. 
The comparison between those two moments will tell you something about your own defaults, whether you lean toward giving steps or building judgment.</p><p>If you&#8217;ve been collecting the artifacts from previous editions, this one connects directly to Edition #4 (the skills that don&#8217;t expire). The ability to teach thinking instead of giving answers is a skill that compounds, and it starts with noticing when you&#8217;re defaulting to the easier path.</p><div><hr></div><h2>The Deeper Cut</h2><p>The moment I understood Alexandre&#8217;s resistance, something clicked about a broader pattern. The people who push back hardest against learning to think are often the ones who have the most to gain from it, because their instinct to move fast is strong, and the discipline to slow down and understand first would give them a leverage most of their competitors will never develop.</p><p>The entrepreneurs who learn to think before they act don&#8217;t move slower. They move fewer times in the wrong direction. And in a world where AI can execute faster than anyone, the quality of the thinking before the execution is the only real advantage left.</p><p>Paid subscribers get the mentoring reflection template: a format for reviewing your own teaching and mentoring interactions, designed to surface moments where you gave steps instead of building thinking. 
It&#8217;s the same practice I&#8217;m building into my own mentoring process after this month.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[How to evaluate what AI gives you (because most devs don't)]]></title><description><![CDATA[AI writes code that compiles, runs, and passes tests. That's exactly why it's dangerous.]]></description><link>https://www.outcodethinking.com/p/how-to-evaluate-what-ai-gives-you</link><guid isPermaLink="false">https://www.outcodethinking.com/p/how-to-evaluate-what-ai-gives-you</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sat, 14 Mar 2026 12:45:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fb4c8d6a-c24d-40f4-be48-f5174a9bddeb_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#129302; Building with AI &#183; For developers rethinking how they work<br><strong>This week&#8217;s challenge: take one piece of AI-generated output you accepted this week and find the assumption it made that you didn&#8217;t check.</strong></p></blockquote><p>Last month, my Slack agent had 269 passing tests. Every function worked. 
Every edge case I&#8217;d written a test for behaved correctly. The architecture was clean: dependency injection, separation of concerns, proper error handling. By every standard measure, the code was solid.</p><p>Then I used it for a week.</p><p>On the third day, I almost missed a critical conversation. A team member posted a question in a channel. The agent evaluated it, decided it didn&#8217;t need my attention, and moved on. Four hours later, three people had replied in that thread, one of them explicitly asking for my input. The agent never saw it. It had already dismissed the parent message, and it had no mechanism to re-evaluate a thread that grew after the initial assessment.</p><p>The code was correct. The tests passed. The tool still failed at its actual job: telling me what mattered.</p><p>That gap, between technically correct and functionally reliable, is where most developers stop paying attention. And it&#8217;s the gap where AI-generated work is most dangerous, because AI is exceptionally good at producing output that looks right.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.outcodethinking.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>The confidence problem</h2><p>When a junior developer writes bad code, you can usually tell. The naming is off. The structure feels uncertain. There are signs: hesitation marks in the logic, inconsistent patterns, the kind of roughness that signals someone is still learning.</p><p>AI doesn&#8217;t do this. AI-generated code arrives with the confidence of a senior developer and the context awareness of someone who just walked into the room. 
It uses the right design patterns, follows consistent naming conventions, handles error cases, and produces something that reads like it was written by someone who knew exactly what they were doing.</p><p>This is what makes evaluation hard. The surface quality is high enough to bypass the instinct that tells you to look more closely. You read it, it makes sense, the tests pass, and you move on. The failure modes are hiding underneath, in assumptions that were never stated because the AI doesn&#8217;t know they exist.</p><p>I&#8217;ve seen this play out the same way multiple times now. I asked AI to review a piece of my code, and it came back with an analysis I couldn&#8217;t fault technically. Bottlenecks identified with precision, race conditions flagged with clear explanations of why they were dangerous, each one accompanied by a solution I would have been impressed to see in a senior engineer&#8217;s review.</p><p>Then I asked a simple question: under what conditions would these bottlenecks actually occur? The answer, once I pressed, was almost never. The race condition required a concurrency pattern my application doesn&#8217;t use. The bottleneck would surface under loads I&#8217;ll never see.</p><p>If I had accepted the review at face value, my code would have become more complex, harder to maintain, and more difficult for the next developer (or AI) to read, all to prevent problems that had almost no chance of happening.</p><p>That&#8217;s the pattern my Slack agent exposed too. The failure wasn&#8217;t in any function. It was in a model of reality the AI had no reason to doubt.</p><div><hr></div><h2>Three layers of evaluation</h2><p>The developers who use AI well have learned to evaluate at three levels, and most stop at the first one.</p><h3>Does it work?</h3><p>This is where almost everyone starts and stops. Run it, test it, check the output. 
If it compiles and produces the expected result, move on.</p><p>This layer catches syntax errors, logic bugs, and obvious failures. It&#8217;s necessary but insufficient. My agent passed this layer completely. Every function did what it was supposed to do. The problem wasn&#8217;t in any individual function. It was in the architecture&#8217;s model of reality.</p><h3>Does it fit?</h3><p>This is the layer most developers skip. The question isn&#8217;t whether the code works in isolation but whether it works in the system it&#8217;s joining. Does it match the existing patterns? Does it make the same assumptions the rest of the codebase makes? Does it introduce a dependency that creates problems elsewhere?</p><p>AI-generated code frequently passes the first test and fails the second. It solves the problem you described while quietly ignoring the constraints you didn&#8217;t mention. You asked for a caching layer, and it built one with an in-memory store, which works perfectly until your application runs across multiple servers. You asked for input validation, and it added thorough checks that happen to duplicate validation already handled by the middleware. Nothing broke. Everything is subtly wrong.</p><p>The fix isn&#8217;t to prompt better. The fix is to evaluate differently. Before accepting any substantial piece of AI-generated code, ask: what does this assume about the context it&#8217;s operating in? Then check whether those assumptions hold.</p><h3>Does it survive real use?</h3><p>This is the layer that only shows up over time, and it&#8217;s the one where the most consequential failures hide. Real use introduces conditions that no spec anticipated because the people writing the spec didn&#8217;t know those conditions existed.</p><p>My keyword-based search is a good example. When the agent gathered context for a trigger message, it searched other channels by extracting the longest words from the topic summary and running text matches. 
This worked for obvious connections. If someone mentioned &#8220;deploy pipeline&#8221; in two channels, the keyword search found both. But when someone in the engineering channel said &#8220;the deploy pipeline is stuck&#8221; and someone in the incidents channel reported &#8220;CI/CD timeout affecting production,&#8221; the search missed the connection entirely. Same issue, different words. The most valuable cross-references, the ones that actually saved me time, were exactly the ones it couldn&#8217;t find.</p><p>No amount of testing would have caught this. It required using the tool on real conversations over real days and paying attention to what it wasn&#8217;t surfacing.</p><div><hr></div><h2>The evaluation habit</h2><p>Developers I mentor complain about this constantly. Reading AI output is exhausting. Not the act of reading itself, but the mental effort of understanding what was generated, interpreting the decisions behind it, and thinking critically about whether it actually makes sense. It demands real cognitive energy, and they didn&#8217;t sign up for that. They wanted AI to make their work easier, not to add a new layer of intellectual labor on top of it.</p><p>I get it. But here&#8217;s the thing: in the past, we wrote the code ourselves. It took days, sometimes weeks. Now AI writes it in minutes and we just need to read and think about what we&#8217;re reading. The intellectual part is still ours. The choice is yours. Do you want to delegate everything to AI and do nothing, or do you want to be recognized for your intellectual work and be more productive than ever?</p><p>The first option sounds comfortable. Nobody will pay well for it. There&#8217;s no career path in being the person who clicks &#8220;accept&#8221; without reading. Your best option is to learn how to think and use AI. 
And that starts with how you evaluate.</p><p>So here&#8217;s what the practice actually looks like.</p><p>The first pass is immediate: does this output match what I asked for? Read it carefully, not to verify that it runs, but to understand what it&#8217;s actually doing. Look for the decisions the AI made that you didn&#8217;t ask for: the data structure it chose, the error handling strategy, the assumptions about inputs.</p><p>The second pass is contextual: does this belong here? You&#8217;re looking at a diff, in git, in a PR, in whatever tool you use to review what AI changed. The code difference is right there. But understanding whether that change makes sense requires context the diff doesn&#8217;t show: who is this feature for, what problem does it solve, what did the team decide three weeks ago that shaped how this part of the system works. The AI wrote code that addresses your prompt. Whether it addresses the actual need behind the prompt is a question only you can answer, and only if you understand the context well enough to ask it. This is the pass where experience matters most, and where junior developers need to be most deliberate.</p><p>The third pass is temporal: does this hold up in the real world? Most developers never do this one, because most developers stop thinking about their code the moment it&#8217;s merged. What happens in production is someone else&#8217;s problem. That mindset was already limiting before AI. Now it&#8217;s dangerous, because the volume of code you ship with AI is higher than ever, and each piece carries assumptions you may not have examined.</p><p>This pass requires a product mindset. You&#8217;re no longer asking whether the code works. You&#8217;re asking whether it solves the problem for the person using it, under the conditions they actually face. And that means doing something most developers never do: go back and check.</p><p>A week after you ship, look at how the feature is being used. Talk to the people using it. 
Read the support tickets. Check whether the assumptions you accepted during the code review actually held up in production. If you built a search feature, are people finding what they need? If you built a notification system, are the notifications reaching the right people at the right time, or are they creating noise?</p><p>Most developers do the first pass automatically. The second is where the real skill lives. The third is the one that turns a developer into someone who builds products, not just features.</p><div><hr></div><h2>What this means for your career</h2><p>The developers who will be most valuable in the coming years are the ones who can evaluate AI output at all three layers. Code generation is becoming a commodity. The ability to determine whether that code actually solves the right problem, in the right context, under real conditions is not. That&#8217;s a judgment skill, and it compounds with every project you apply it to.</p><p>This connects directly to what we covered in Edition #4: the skills that don&#8217;t expire are the ones AI can&#8217;t perform for you. Evaluation is one of them. AI can generate the code. AI can even generate tests for the code. But AI cannot determine whether the code&#8217;s model of reality matches your reality. That requires understanding the domain, the users, the constraints, and the history. In other words, it requires everything you accumulate by paying attention to your work over time.</p><p>The irony is that AI makes this skill both more important and harder to practice. When the output arrives looking polished and professional, the temptation to accept it without deep evaluation is strong. 
Every time you resist that temptation and look harder, you&#8217;re training the judgment that makes you irreplaceable.</p><div><hr></div><h2>Try This</h2><p>If you&#8217;ve been following the previous editions, you&#8217;ve built a set of artifacts: a delegation map (Edition #1), a judgment-versus-execution breakdown (Edition #2), a problem statement and learning target (Edition #3), and a decision analysis (Edition #4). Each one built on the last.</p><p>This week, the exercise is different. Instead of building a new artifact, go back to something you built recently: a piece of code, a tool, a workflow where AI contributed substantially to the output.</p><p>Run it through the three layers from this edition.</p><p><strong>Layer 1: Does it work?</strong> You probably already checked this. Confirm it anyway.</p><p><strong>Layer 2: Does it fit?</strong> Look at the assumptions the AI made about the context. What conventions does your codebase follow that the AI didn&#8217;t know about? What constraints exist in your system that the AI wasn&#8217;t told about? Write down at least two assumptions the AI made that you never specified.</p><p><strong>Layer 3: Does it survive real use?</strong> If the code has been running for a while, think about the edge cases you&#8217;ve encountered. If it hasn&#8217;t, imagine the real-world conditions it will face. What happens when the data isn&#8217;t clean? When the user behaves differently than expected? When the load changes? Write down one scenario the AI couldn&#8217;t have anticipated.</p><p>By the end, you should have a short document with two hidden assumptions and one untested scenario. That document is the beginning of an evaluation practice. Run this on every significant piece of AI output, and over time you&#8217;ll develop the instinct to spot these gaps before they become problems.</p><p>The exercises from previous editions gave you a map of your work, a target for automation, and a framework for decisions. 
This week adds the quality layer: the skill that determines whether what you build actually holds up when it matters.</p><div><hr></div><h2>The Deeper Cut</h2><p>There&#8217;s a pattern I notice in every developer I mentor who starts building with AI seriously. At first, they evaluate too little: they accept output at face value because it looks professional. Then they overcorrect and evaluate too much, spending more time reviewing AI output than it would have taken to write it themselves. Both extremes miss the point.</p><p>The calibration happens through volume. The more AI output you evaluate critically, the faster your instinct develops for where the failures tend to hide. After enough reps, you stop reading every line with equal suspicion and start knowing where to look, which is usually in the assumptions, not the syntax.</p><p>Paid subscribers get the evaluation checklist: a structured tool for running the three-layer evaluation on any AI output, with specific prompts for each layer and a format for capturing what you find. It&#8217;s the same process I run on my own tools. 
It turns the thinking from this edition into a repeatable practice.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The skills AI can't replace and how to build them]]></title><description><![CDATA[Communication, decisions, focus, questions, strategy. These don't expire.]]></description><link>https://www.outcodethinking.com/p/the-skills-ai-cant-replace-and-how</link><guid isPermaLink="false">https://www.outcodethinking.com/p/the-skills-ai-cant-replace-and-how</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sat, 07 Mar 2026 13:01:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/23741cad-4ece-478b-ab49-c98f3b068357_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#129504; Career Navigation &#183; For developers navigating the AI era.</p><p><strong>This week&#8217;s challenge: pick one skill from this edition and notice once today where you used it on autopilot. 
That&#8217;s where the practice starts.</strong></p></blockquote><p>A few months ago, a junior developer sent me a message I&#8217;ve received in different forms at least a dozen times since.</p><p>&#8220;I&#8217;m learning X and Y and studying Z. Am I wasting my time? Will any of this matter in two years?&#8221;</p><p>I didn&#8217;t answer right away. I wanted to give him the right one, not the fast one.</p><p>The truth is, I&#8217;ve interviewed hundreds of developers over the years. The ones I remember, the ones I hired and kept, were almost never the ones with the most tools in their stack. They were the ones who, when I gave them a problem with no clear answer, could slow down, ask the right question, and find a direction.</p><p>That&#8217;s a skill. It doesn&#8217;t expire. It doesn&#8217;t get deprecated. And most developers are so busy learning the next framework that they never build it.</p><p>So here&#8217;s my answer, for him and for you.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.outcodethinking.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>The skills that will matter in 2 years</h2><p>If you ask me what made the most difference in my career across twenty years of building software, the honest answer tends to surprise people: communication.</p><p>Not the technical stack I mastered early. Not the architecture patterns I learned the hard way. Communication: the ability to express ideas clearly, understand what others actually mean, and move information between people without losing its meaning in transit.</p><p>The reason this surprises people is the same reason most developers spend years underestimating it: we assume we already have it. 
We&#8217;ve been communicating since we were children. We can write an email. We can explain what we&#8217;re working on in a standup. How hard can it be?</p><p>That assumption is expensive, and I held it myself for longer than I&#8217;d like to admit.</p><p>Communication is a practice, not a threshold you cross once. It degrades when you stop paying attention to it and compounds slowly when you do. Reading a book about it changes nothing unless you apply it consciously: in every message, every question, every meeting, every pull request comment. And it covers far more ground than speaking: it&#8217;s how you write a Slack message at 11pm when context is missing and someone needs to act on it, how you structure a document that people actually read instead of skim, how you ask a question that opens a conversation instead of shutting one down.</p><p>Most developers operate on autopilot across all of this. That&#8217;s where the gap lives, and where the biggest opportunity is hiding.</p><div><hr></div><h3>Two categories, two different futures</h3><p>Everything a developer needs to grow falls into two categories with very different trajectories.</p><p>The first is technical skills: languages, frameworks, tools, platforms. These have always evolved, but AI is compressing the cycle dramatically. A specialization that would have stayed relevant for a decade now has a shorter shelf life. This doesn&#8217;t mean abandoning technical depth. It means being strategic about where you invest it. The durable bet in this category is foundations: system design, data structures, how distributed systems fail, how databases actually work under the hood. These don&#8217;t go out of fashion because every new tool is built on top of them. 
A developer who understands why a system degrades under load will adapt to any new infrastructure faster than someone who only knows how to configure the current one.</p><p>The second category is mental skills: the cognitive habits that make you better at any technical challenge, regardless of what the tools look like. Communication is one of them. The others are: how you make decisions, how you structure your thinking, how you protect your focus, how you ask questions, and how you think strategically. These don&#8217;t expire. They don&#8217;t get deprecated. Every year you invest in them, they compound. And with AI handling more of the execution layer, they become the primary differentiator between developers who thrive and those who plateau.</p><p>The rest of this edition is about those five.</p><div><hr></div><h3>How to make decisions</h3><p>Making decisions seems straightforward until you realize what they actually cost. Every decision you make today removes options you could have had tomorrow. That invisible cost is what I call the marginal cost of decisions. Most developers never account for it.</p><p>Start by separating decisions by stakes. Some decisions have no real downside if you get them wrong: you can reverse them cheaply, learn quickly, and move on. For those, speed is the right optimization. But high-stakes decisions, the ones where you will lose something if you choose poorly, deserve a different kind of thinking.</p><p>For those, I use a structured approach: map the pros and cons explicitly, then spend time specifically on the cons. For each one, ask whether it can be diminished or removed through a different implementation, a phased rollout, a fallback mechanism, or a different sequencing of the work. 
The goal isn&#8217;t to eliminate cons, which is rarely possible, but to understand whether the pros genuinely outweigh them once you&#8217;ve accounted for the side effects.</p><p>This matters especially in product development, where decisions are rarely isolated. A feature decision affects architecture. An architecture decision affects hiring. A hiring decision affects what you can build next quarter. One choice can quietly close off an entire direction before you realize it&#8217;s gone. The developers who build lasting products aren&#8217;t the fastest decision-makers. They&#8217;re the ones who understand the downstream consequences before committing.</p><p>Without this kind of thinking, you&#8217;re not really deciding. You&#8217;re reacting. And the compounding costs of reactive decisions show up six months later as technical debt, misaligned product direction, and options you no longer have.</p><div><hr></div><h3>How to structure your thinking</h3><p>A complex problem that stays in your head as a tangle of intuitions is still a complex problem. The act of structuring, deciding what&#8217;s the main point, what depends on what, what&#8217;s a cause and what&#8217;s a consequence, does something that pure mental effort can&#8217;t: it forces clarity.</p><p>The mechanism is simple but requires discipline. Before starting any non-trivial piece of work, write down three things: what problem this solves, what your approach is, and what could go wrong. Not as a formal document, but as a thinking exercise. The writing isn&#8217;t the deliverable. The clarity you arrive at before writing the first line of code is.</p><p>From there, practice hierarchy. Every idea has a level: there are top-level conclusions, the arguments that support them, and the evidence or examples that support those arguments. When your thinking feels tangled, ask yourself: am I confusing a conclusion with an argument? Am I treating an example as if it were a principle? 
Most mental confusion comes from mixing levels: treating a symptom as a cause, or a specific case as a general rule. Learning to separate them is a skill you build by doing it slowly and deliberately until it becomes instinctive.</p><p>This transfers everywhere: technical proposals, feature breakdowns, conversations where you need to bring someone else to your point of view. It also determines the quality of what you can build with AI. A developer who gives a well-structured brief will get dramatically better output, not because they&#8217;re better at prompting, but because they&#8217;ve already done the hard thinking before they typed a single word.</p><div><hr></div><h3>How to focus and go deep</h3><p>The industry we&#8217;ve built rewards responsiveness. Slack notifications, standups, context switches, urgent messages. The average developer barely reaches genuine depth of focus before something pulls them back to the surface.</p><p>Deep work, the kind where you hold a complex problem in your head long enough to actually understand it, is becoming rarer exactly as it becomes more valuable. The shallow problems are being handled by automation. The ones worth solving require sustained attention.</p><p>The how starts with protection, not willpower. Block time in your calendar before the day fills with other people&#8217;s priorities. Communicate clearly when you&#8217;re unavailable. Treat a broken focus block the same way you&#8217;d treat a broken deployment: as something that needs a fix, not an inevitability.</p><p>But protection is only the first step. The deeper practice is building tolerance for the discomfort that comes at the start of deep work: the resistance, the urge to check something, the feeling that you should be responding somewhere. That discomfort is not a signal that something is wrong. It&#8217;s the threshold between surface-level thinking and the depth where real understanding lives. 
The practice is learning to recognize it as a threshold rather than an obstacle, and crossing it anyway. Repeatedly. Until crossing it becomes easier than avoiding it.</p><div><hr></div><h3>How to ask the right questions</h3><p>A well-formed question is one of the most underrated skills in software development. But the goal isn&#8217;t to memorize a list of good questions, but to develop the capacity to generate the right question for any situation you haven&#8217;t seen before.</p><p>The mechanism behind a good question is always the same: you identify what&#8217;s being assumed and hasn&#8217;t been examined. Before asking anything, ask yourself: what does everyone in this conversation seem to take for granted? What would have to be false for this plan to fail? What are we not talking about that we should be? Those internal checks are the raw material. The question you ask out loud is just the output.</p><p>Years ago, I was in a meeting with an investor. He opened his notebook and showed me a single line he had written: &#8220;What haven&#8217;t we tried that we should try?&#8221;</p><p>That moment confused me at the time. I wasn&#8217;t prepared to understand what was happening. But what that investor was doing wasn&#8217;t asking a question. He was sharing a mental posture. A way of entering a situation with a deliberate blind spot detector. That question doesn&#8217;t ask for solutions. It asks for the shape of the unexplored space. It assumes that somewhere in the room there&#8217;s an untested path, and it creates the conditions for someone to name it.</p><p>That&#8217;s the quality a good question has. It doesn&#8217;t just request an answer. It shifts the frame of the conversation. And you can build that capacity by starting with one habit: before accepting any situation as fixed, ask what&#8217;s being treated as impossible that might not be.</p><div><hr></div><h3>How to think strategically</h3><p>Most developers think at the level of the task. 
The best ones also think at the level of the system, the larger context in which the task exists, and the direction that context is moving.</p><p>The how is a practice of deliberate elevation. When you receive a task or face a decision, take thirty seconds to ask: why does this exist, and what does it move toward? Then go one level higher: what does the team or product need to be true in three months, and does this task contribute to that, or just to completion?</p><p>The second habit is consequence mapping. Before committing to any significant piece of work, ask: who else is affected by this decision that hasn&#8217;t been considered yet? What does this close off that we might want later? Strategic mistakes rarely feel like mistakes at the time. They feel like the obvious next step. The discipline is to pause long enough to look for what you can&#8217;t see from inside the immediate problem.</p><p>Over time, these two habits, elevating the question and mapping the consequences, become a mental reflex. You stop seeing tasks as isolated units and start seeing them as moves in a longer game. That shift is what separates developers who build things that matter from developers who build things that ship.</p><div><hr></div><h3>Where to start</h3><p>These skills don&#8217;t come with completion certificates. There&#8217;s no course that teaches you to ask better questions. They develop through deliberate attention: noticing how you communicate, examining the decisions you make, protecting your focus, and treating every interaction as something worth doing consciously.</p><p>The reassuring part is that you already have the foundation. Every developer who has shipped real software, navigated team dynamics, and worked through real ambiguity has been exercising these skills, whether deliberately or not. 
The shift is simply to make the practice conscious.</p><p>That&#8217;s where the compounding begins.</p><div><hr></div><h2>Try This</h2><p>If you&#8217;ve been following the previous editions and doing the exercises, you already have two artifacts: a map of tasks you&#8217;d delegate to five versions of yourself (Edition #1), and a breakdown of one of those tasks into judgment versus execution (Edition #2). Edition #3 asked you to identify one real problem in your work that no one is solving and write a single sentence describing it.</p><p>This week, take that problem and run it through the decision framework from this edition. Map the pros and cons of the most obvious solution. Spend twice as long on the cons as on the pros. For each con, ask whether it can be diminished, removed, or sequenced differently. Then ask: what am I assuming here that I haven&#8217;t examined?</p><p>These aren&#8217;t isolated exercises. Every artifact you&#8217;ve built across these editions is a compounding asset: a map of how you work, where your leverage is, and what you&#8217;re building toward. The goal was never practice for its own sake. It&#8217;s a system you&#8217;re assembling, one piece at a time, that will keep working for you long after you build it.</p><p>If this is your first edition, I&#8217;d encourage you to go back and do the Try This from editions #1, #2, and #3 before continuing. The exercises build on each other deliberately. Each one produces something you&#8217;ll use in the next. Starting here without that foundation means starting without the map. And the map is the point.</p><p>We&#8217;re scaling ourselves. That only compounds if the work accumulates.</p><div><hr></div><h2>The Deeper Cut</h2><p>I&#8217;ve mentioned this before, but it&#8217;s worth saying clearly: I&#8217;m doing exactly what I&#8217;m asking you to do in the Try This sections. 
This is what walking the talk looks like.</p><p>From <a href="https://www.outcodethinking.com/p/you-are-not-a-coder-anymore-you-are-a-builder">Edition #1</a>, I analyzed my work week to understand where I could scale myself. The exercise produced a concrete list: over 3 hours a day reading and responding to messages from customers and colleagues, 2 hours on code review, 3 hours coding, 1 hour on architecture, and 2 hours creating content. Just writing it down made one thing obvious: the messaging block wasn&#8217;t just the biggest time sink. It was draining the energy I needed for everything else on the list.</p><p>From <a href="https://www.outcodethinking.com/p/the-builders-who-scale-themselves">Edition #2</a>, I picked that problem and broke it down into judgment versus execution. Most of the 3 hours was execution: retrieving context, piecing together what had already been said, figuring out who was asking and why. That breakdown made the target clear, and I built the Slack context retrieval agent directly from it.</p><p>From <a href="https://www.outcodethinking.com/p/stop-following-roadmaps-start-thinking">Edition #3</a>, while working on the agent, I realized that identifying the gaps required recovering every decision and discussion that had shaped the build. Without that context, I couldn&#8217;t give AI a clear enough brief to help me think through the problems. That need produced the process log: a structured way to capture what was built, what was decided, and what actually happened. The process log then made the gaps visible, which led to the architecture decision: breaking the monolith into independent, composable agents.</p><p>Each edition produced an asset. Each asset made the next decision clearer.</p><p><a href="https://github.com/Outcode-Thinking/Deletation-Map-Template">The delegation map template</a>, the format I used to produce the Edition #1 list, is available to all subscribers. Use it to map your own week. 
Run it regularly as your work changes.</p><p>Paid subscribers get the decision framework: the structured method for low and high-stakes decisions covered in this edition. The tool that turns the thinking in this edition into a practice you can apply to the next real decision you face.</p><p>The map shows you where your leverage is. The framework helps you move toward it without closing off the options you&#8217;ll need later.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Stop following roadmaps. Start thinking in opportunities. ]]></title><description><![CDATA[The developers who followed the checklist are now competing with AI. 
The ones who learned to think are using it.]]></description><link>https://www.outcodethinking.com/p/stop-following-roadmaps-start-thinking</link><guid isPermaLink="false">https://www.outcodethinking.com/p/stop-following-roadmaps-start-thinking</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sat, 28 Feb 2026 12:45:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8898e7ce-863c-4ae9-9721-8b112b2fa3f2_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#129504; Mindset Shifts &#183; For developers rethinking how they grow.</p><p><strong>This week&#8217;s challenge:</strong> separate what you learned from experience from what you learned from a checklist.</p></blockquote><p>In 2005, becoming a developer was hard in ways that no longer exist, and easy in ways nobody remembers.</p><p>Hard because the resources weren&#8217;t there. Documentation was incomplete or outdated. There were no bootcamps, no polished YouTube channels walking you through every framework. Stack Overflow didn&#8217;t exist yet. If you got stuck, you read source code, asked on a mailing list, and waited. Learning was slow because the infrastructure for learning didn&#8217;t exist.</p><p>But it was easy too. The cognitive load was smaller. You didn&#8217;t need to learn twelve tools to get a job. The systems were simpler. The ecosystem was smaller. And most importantly, you had <em>time</em>. Time to absorb concepts gradually, build things, break them, understand why they broke. Time to develop intuition alongside knowledge.</p><p>Nobody called it a roadmap. 
You just built things and figured it out along the way.</p><p>Then the profession got popular.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.outcodethinking.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>The hype wave changed who enters and how</h2><p>Software development became one of the most talked-about career paths in the world. High salaries, remote work, the promise of building the future. It attracted millions. And for good reason. The demand was real.</p><p>But decades of accumulated knowledge can&#8217;t be compressed into a twelve-week bootcamp. The field needed developers faster than the field could train them. So the market adapted. Bootcamps, tutorials, roadmaps, certifications. Structured paths that promised: follow this, and you&#8217;ll get there.</p><p>And people followed. They completed the React roadmap. Checked off the JavaScript fundamentals playlist. Finished the full-stack bootcamp. Got the certificate.</p><p>Then they showed up to work.</p><h3>What the roadmaps didn&#8217;t teach</h3><p>The last few teams I trained revealed a pattern I keep seeing. Developers who could build a feature from a tutorial but froze when the problem didn&#8217;t match one. Developers who couldn&#8217;t resolve a merge conflict in Git &#8212; not because it&#8217;s difficult, but because no tutorial had walked them through what to do when things go wrong.</p><p>The worst version of this: developers who introduced massive complexity into simple problems. Not because the problem demanded it, but because the only approach they knew was the one they&#8217;d learned from a tool or framework. They had the vocabulary but not the understanding. 
They could use the abstraction but had no idea what was happening underneath.</p><p>The system shaped them this way. It optimized for output over understanding. The roadmap said &#8220;learn Docker,&#8221; so they learned Docker. It didn&#8217;t say &#8220;understand why Docker exists and when you shouldn&#8217;t use it.&#8221; So they didn&#8217;t.</p><h3>The industry tolerated this</h3><p>For years, the gap didn&#8217;t matter much. Demand was so high that companies hired anyone who could ship code, regardless of how deep their understanding went. Junior developers with surface-level skills found jobs because the market couldn&#8217;t afford to be selective. The shortage was real, and the business needed bodies.</p><p>That era is ending.</p><p>Companies didn&#8217;t raise the bar. AI did.</p><h3>AI fills the gap and exposes it</h3><p>Here&#8217;s the uncomfortable truth: every skill gap that tutorials created, AI can now cover. Can&#8217;t write a Docker config from scratch? AI can. Don&#8217;t understand what a merge conflict actually means? AI resolves it. Need boilerplate code for a new feature? AI generates it in seconds.</p><p>The developers who were trained to follow instructions now compete directly with a tool that follows instructions better than any human.</p><p>But the developers who learned to <em>think</em> &#8212; to understand systems, to question decisions, to see the problem behind the ticket &#8212; those developers aren&#8217;t competing with AI at all. They&#8217;re using it.</p><p>That&#8217;s the divide. And it was always there. The roadmaps just masked it.</p><div><hr></div><h2>The gap between learning and knowing</h2><p>I&#8217;m not against structured learning. I&#8217;m against the illusion that structured learning is enough.</p><p>A developer who learned React from a roadmap knows how to create components, manage state, and call APIs. A developer who understands frontend development knows <em>when</em> React is the wrong choice. 
They can look at a project and say: &#8220;This doesn&#8217;t need a framework. A few pages of HTML and vanilla JavaScript will ship in half the time and be easier to maintain.&#8221;</p><p>That second developer earned it through building, mistakes, and judgment about what works.</p><p>A roadmap gives you vocabulary. Experience gives you fluency.</p><p>And fluency is what lets you see opportunities.</p><h3>Opportunities come from problems, not from plans</h3><p>Every meaningful career move I&#8217;ve made started the same way: I noticed a problem that nobody was solving, and I put myself in a position to solve it.</p><p>The health tech company from Edition #1 &#8212; the one that grew from 500 to 80,000 users and got acquired &#8212; didn&#8217;t happen because I followed a career roadmap. It happened because I looked at a struggling app and asked a question nobody else was asking: &#8220;Why isn&#8217;t this business growing?&#8221;</p><p>The Slack agent from Edition #2 didn&#8217;t come from a learning path about AI tools. It came from a daily frustration &#8212; wasting thirty minutes on context retrieval that should take thirty seconds &#8212; and the realization that I could build something to fix it.</p><p>Roadmaps tell you to learn Kubernetes. Opportunities show you that your team wastes four hours a week on manual deployment and maybe you should fix that. Kubernetes might or might not be the answer.</p><p>The difference is direction. Roadmaps point you toward skills. Opportunities point you toward outcomes. The best developers I&#8217;ve worked with always started with the outcome and worked backward to whatever skills they needed. They were always looking at the landscape. The developers who followed roadmaps were always looking at the checklist.</p><div><hr></div><h2>The skill that matters most now</h2><p>The developers who will thrive aren&#8217;t the ones with the longest list of completed courses. 
They&#8217;re the ones who can look at an unfamiliar situation and figure out what to do.</p><p>That&#8217;s not a skill you can put on a roadmap. It&#8217;s a skill you develop by building, failing, questioning, and paying attention. By treating every project as a chance to understand something deeper, not just to add a bullet point to your resume.</p><p>AI makes this more urgent, not less. When AI can follow any set of instructions, the person who decides <em>which</em> instructions to follow becomes the most valuable one in the room.</p><p>The roadmap told you to learn. The opportunity is asking you to think.</p><div><hr></div><h2>Try This</h2><p>If you didn&#8217;t do the exercises from the previous editions, go back and do them. Edition #1 asked you to list the tasks you&#8217;d delegate to five clones of yourself. Edition #2 asked you to pick one and break it into judgment versus execution. It sounds simple. Don&#8217;t underestimate it. Learning how to think is different from learning execution steps. This sequence is building that muscle.</p><p>If you did them, you have a list and a breakdown. This week, pick one execution task from that map &#8212; the kind where your time goes but your judgment doesn&#8217;t &#8212; and ask yourself:</p><p><strong>What would I need to learn to build a tool that handles this for me?</strong></p><p>Don&#8217;t look for a course. Don&#8217;t search for a roadmap. Look at the problem. What does it actually require? Maybe it&#8217;s an API you&#8217;ve never used. Maybe it&#8217;s a way to connect two tools you already have. Maybe it&#8217;s something simpler than you think.</p><p>Now notice: whatever you need to learn, it didn&#8217;t come from a syllabus. 
It came from a real problem that&#8217;s costing you real time.</p><p><strong>By the end of this exercise, you should have one sentence written down: &#8220;I&#8217;m going to build [tool/automation] that solves [problem], and I need to learn [specific thing] to do it.&#8221;</strong></p><p>And the solution isn&#8217;t just learning. It&#8217;s an asset. A tool, a workflow, an automation that keeps working after you build it.</p><p>This is how builders scale. They compound assets. Every solved problem becomes something that keeps working for them. Every problem you solve for yourself becomes something that works for you tomorrow, and the next day, and the next. The developers who figure this out early won&#8217;t just be faster. They&#8217;ll be operating at a different scale entirely.</p><p>Edition #1 gave you the map. Edition #2 gave you the target. This week, you start building toward it.</p><div><hr></div><h2>The Deeper Cut</h2><p>The irony of roadmaps is that the people who create them usually didn&#8217;t follow one. They built things, noticed patterns, and wrote down what they learned. Then they published it as a sequence. And the sequence became a shortcut that strips out the most important part: the messy process of figuring it out yourself.</p><p>I use a tool for this. I call it a process log: a structured way to capture what I&#8217;m building, what decisions I&#8217;m making, and what I&#8217;m learning along the way. Not a journal. Not a retrospective. A living document that turns every build into a thinking artifact.</p><p>It&#8217;s how I wrote this edition. It&#8217;s how I built the Slack agent from Edition #2. Every building block I create starts as a process log before it becomes anything else.</p><p>I&#8217;m sharing the process log template as an artifact for all subscribers. <a href="https://github.com/Outcode-Thinking/Process-Logs">Process Log Pattern</a>.</p><p>Use it this week when you start working on your Try This exercise. 
Write down what you&#8217;re building, what you decided, and what surprised you. That document becomes proof that you&#8217;re thinking, not just executing.</p><p>Paid subscribers get the artifacts that go deeper &#8212; the actual tools, the AI agents, the decision frameworks, and the building blocks behind every edition. The process log shows you <em>how</em> to think through a build. The paid artifacts give you a head start on <em>what</em> to build.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The builders who scale themselves will win. Here's how that starts]]></title><description><![CDATA[The code was the easy part. 
The decisions were the work.]]></description><link>https://www.outcodethinking.com/p/the-builders-who-scale-themselves</link><guid isPermaLink="false">https://www.outcodethinking.com/p/the-builders-who-scale-themselves</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sat, 21 Feb 2026 13:45:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0a82e1f3-e5ff-4e51-8fec-20173fd36944_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>&#129302; Field Notes &#183; For developers rethinking how they work.<br><strong>This week&#8217;s challenge: map one daily task into what&#8217;s judgment and what&#8217;s execution.</strong></p></blockquote><p>A few weeks ago, someone asked me a question on Slack. Simple situation: a team member reported that the admin panel was loading slowly. Another person confirmed the same issue. Then a non-technical colleague asked me in the channel: &#8220;What can I do to help you investigate the delays?&#8221;</p><p><strong>Innocent question. The kind that takes thirty seconds to read and thirty minutes to answer properly.</strong></p><p>The first problem wasn&#8217;t the answer itself. It was everything before it. To respond properly, I needed to reconstruct what had actually happened. Find the original report from earlier that morning. Find the second person&#8217;s confirmation. Check if there were shared recordings or screenshots. Piece together the timeline. That meant scrolling through messages, jumping between threads, hunting for context scattered across the channel. Most of the time went there. Not thinking, just searching.</p><p>The second problem was the answer. The question came from a non-technical person, in a public channel where others were reading, about something deeply technical. I needed to turn a diagnosis involving logs, traces, and time windows into something useful for someone who has no access to any debugging tools. 
And I needed to do it without being condescending.</p><p>So I opened an AI assistant and gave it everything I&#8217;d gathered. The raw messages, my diagnosis, who was asking and why, the fact that this was a public channel.</p><p>The AI gave me three options in different tones. I picked the shortest one, pasted it into Slack, and moved on.</p><p>That&#8217;s what building with AI actually looks like. Code generation is just one piece. The real gain is saving time across the entire process. Retrieving context, structuring communication, making decisions faster. AI handles the parts that used to eat your hours, so you can focus on the parts that need you.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.outcodethinking.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>The workflow nobody talks about</h2><p>Most content about &#8220;building with AI&#8221; shows you the glamorous version. Someone types a prompt, code appears, product ships. That&#8217;s like saying filmmaking is pointing a camera at something interesting.</p><p>Here&#8217;s what my actual day looks like.</p><p>I spend most of my time doing what I&#8217;ve always done: understanding problems, making decisions, communicating with people. The difference is that the execution layer, the part that used to take the most time, is increasingly handled by AI. It still needs supervision. It&#8217;s far from perfect. But it&#8217;s fast enough that the bottleneck has shifted.</p><p>The bottleneck used to be typing. Now it&#8217;s thinking.</p><p>If you did last week&#8217;s exercise, imagining five versions of yourself and mapping what you&#8217;d delegate, this is exactly what that looks like in practice.
I looked at my day and found the bottleneck: not the decisions, not the communication strategy, not the judgment calls. The context retrieval. The mechanical act of hunting for information I already knew existed somewhere.</p><p>That&#8217;s the power of thinking before building. Without that map, I would have tried to automate the wrong thing. Maybe the answer drafting, maybe the triage. Instead, I saw that retrieval was the stage eating the most time with the least judgment involved. The exercise didn&#8217;t just identify a problem. It pointed directly at what to build.</p><p>So I built a tool to fix it.</p><h3>What I actually built</h3><p>I built an agent that monitors my Slack channels, identifies when something needs my attention, and gathers the relevant context before I even open the conversation. Think of it as a secretary. It watches, filters, and reports back, telling me what matters and why, not just a list of messages.</p><p>The technical stack doesn&#8217;t matter. It was built with whatever solved the problem fastest. No web app, no dashboard, no over-engineered architecture. A command-line tool, a bot with read access to channels, and an AI layer that processes messages into briefings. The simplest thing that works.</p><p>The hard part wasn&#8217;t the code. AI wrote most of it. The hard part was the decisions. Which channels should it monitor? What counts as &#8220;needs my attention&#8221;? A direct mention is obvious. But what about my team lead posting &#8220;looking into the deploy issue, might need help later&#8221;? No question, no mention, no explicit request. I&#8217;d want to know about that immediately, not because of what was said, but because of <em>who</em> said it. The AI missed this completely. It treated every sender equally. 
I had to teach it that some people&#8217;s ambient messages matter more than other people&#8217;s direct questions.</p><p>Every one of these decisions shaped the tool more than any line of code did.</p><h3>The plan was the starting point. Reality shaped the rest.</h3><p>Before building, I mapped out the process I was trying to automate. Five stages: triage, context retrieval, answer formulation, audience adaptation, and codebase lookup. Clean, logical, sequential.</p><p>The plan was to build them one at a time. Start with context retrieval, pulling related messages when a question comes in. Then add voice and standards to draft answers in my communication style. Then audience adaptation. Then the advanced stuff.</p><p>That plan lasted about two days.</p><p>Pure context retrieval without any intelligence layer wasn&#8217;t useful. Dumping twenty related messages on my screen didn&#8217;t save time. It just moved the problem from &#8220;find the messages&#8221; to &#8220;read all these messages.&#8221; The tool needed to process the context and tell me what mattered, not just fetch it.</p><p>So I compressed two steps into one. Context retrieval merged with summarization. The agent doesn&#8217;t just find related messages. It reads them, identifies the key points, and presents a briefing I can act on.</p><p>This is something every developer learns eventually but rarely from a tutorial: the spec is a starting hypothesis. The build is the experiment. What you ship is whatever survived contact with reality.</p><h3>The challenges showed up during use, not during building</h3><p>The tool worked on the first day. But using it daily revealed gaps that building it never would.</p><p>The biggest one: conversations that escalate. Someone posts a message. The agent evaluates it, decides it doesn&#8217;t need my attention, moves on. Hours later, someone replies in that thread asking for my input. The agent never sees it. It already dismissed the conversation. 
The most important discussions are the ones that grow over time, and those were exactly the ones it missed.</p><p>Another one: it treated every person the same way. A message from my closest collaborator got the same evaluation as a random message in a large channel. But in real work, there are people whose messages I cannot afford to miss, regardless of content. Not because of what they said, but because of who they are.</p><p>Each gap became clear only because I was using the tool every day, on real conversations, and noticing where it fell short. Then I&#8217;d adjust, use it again the next day, and see if the fix held. The build didn&#8217;t end when the tool started working. That&#8217;s when it actually started.</p><h3>This is what building with AI actually looks like</h3><p>It&#8217;s not typing a prompt and shipping the result. It&#8217;s a loop:</p><p>Build something. Use it for real. Notice where it fails. Understand <em>why</em> it fails, not technically, but functionally. What does &#8220;reliable&#8221; mean for your specific context? Fix the right thing. Use it again.</p><p>The code was the easiest part of the entire process. AI generated most of it. What AI couldn&#8217;t do was decide what to build, identify which failures mattered, or determine what &#8220;good enough to rely on&#8221; means for my daily work.</p><p>That&#8217;s the new skill. Not writing code, not even prompting AI &#8212; but knowing what to build, evaluating whether it actually works, and iterating based on real use rather than theoretical specs.</p><p>The developers who figure this out will build at a speed that would have been impossible five years ago. The speed comes from something deeper: the entire cycle from idea to working tool compresses when you stop treating code as the bottleneck and start treating decisions as the work.</p><div><hr></div><h2>Try This</h2><p>If you did last week&#8217;s exercise, you have a list. 
Tasks you&#8217;d delegate to other versions of yourself. This week, pick one. The one that eats the most time.</p><p>Now zoom in. Break that task into stages, not technically, but cognitively. For each stage, ask one question:</p><p><strong>Is this judgment or execution?</strong></p><p>Judgment is when you&#8217;re making a decision that depends on experience, context, or taste. Understanding what someone is really asking. Deciding whether an approach is sound. Choosing what to prioritize.</p><p>Execution is when you already know what to do and you&#8217;re just doing it. Searching for information. Formatting a response. Looking up documentation.</p><p>Be specific. &#8220;Do research&#8221; is vague. &#8220;Ask ChatGPT how the payment API handles refunds&#8221; is a stage you can actually evaluate.</p><p>You&#8217;ll notice something. The execution stages are where your time goes. The judgment stages are where your value lives. That gap is your building map. Every execution stage is a candidate for a tool. Every judgment stage is a reason you&#8217;re still in the room.</p><p>Last week you saw the big picture. This week you have a target.</p><div><hr></div><h2>The Deeper Cut</h2><p>The biggest mistake I see developers make when they start building with AI isn&#8217;t technical. It&#8217;s scoping. They try to automate their entire workflow at once. Grand vision, massive spec, weeks of building before testing against reality.</p><p>My Slack tool started with one question: &#8220;Can it find the messages I would have found manually?&#8221; Just that. One specific, testable capability. When that worked, I expanded. When it didn&#8217;t, I knew exactly what to fix.</p><p>That tool, the one I described in this edition, is available to paid subscribers. The actual artifact, ready to use: the code, the instructions, the decisions behind it. You can use it, adapt it, learn from it.
Every building block I create on this journey becomes an artifact that paid subscribers can pick up and make their own.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[You're not a coder anymore. You're a builder.]]></title><description><![CDATA[Building is a fundamentally different skill than coding.]]></description><link>https://www.outcodethinking.com/p/you-are-not-a-coder-anymore-you-are-a-builder</link><guid isPermaLink="false">https://www.outcodethinking.com/p/you-are-not-a-coder-anymore-you-are-a-builder</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sat, 14 Feb 2026 11:45:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/374cc3e1-4c6c-4886-9c48-752d4b863917_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><blockquote><p>&#129504; Mindset Shifts &#183; For developers questioning their role in the AI era.<br> <strong>This week&#8217;s challenge: map out what five versions of you would do.</strong></p></blockquote></div><p>A few years ago, I was brought in to fix a search engine at a car-sharing startup. 
The problem was technical: performance issues that were costing the business real money. The kind of problem you&#8217;d find solutions for in any architecture book.</p><p>But books don&#8217;t ship software. The implementation demanded strategy, decisions, trade-offs. Two months of work, not writing clever code, but making hard choices about what to build, in what order, and what to leave alone.</p><p>Then the new version went live.</p><p>The next morning, the infrastructure team raised a red flag. Their monitoring dashboards showed a massive traffic spike, the kind that looks like an attack. They started incident response. Checked the logs. Checked the systems.</p><p>The system was fine. Responding faster than ever.</p><p>That traffic wasn&#8217;t an attack. It was thousands of real users who could finally use the search engine. They&#8217;d been there all along. The old system just couldn&#8217;t serve them.</p><p>No one on that team celebrated the architecture. No one mentioned the code. What mattered was that the thing <em>worked</em>, and a business could grow because of it.</p><p>That was the moment I stopped thinking of myself as someone who writes code. I was someone who builds things that make a difference. The code was just one tool in the process.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.outcodethinking.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>You're not a coder anymore. You're a builder.</h2><p>Ask a developer what they do and most of them will answer with a technology. 
&#8220;I write Python.&#8221; &#8220;I&#8217;m a React developer.&#8221; &#8220;I do backend in Go.&#8221;</p><p>Nobody outside of tech cares.</p><p>The business doesn&#8217;t care what language you write. The users don&#8217;t care about your architecture. The market doesn&#8217;t care how clean your code is. They care whether the thing works, whether it solves a real problem, and whether it can grow.</p><p>For decades, developers could ignore this. Code was a prerequisite to every business plan, good developers were hard to find, great ones even harder, and all of them were expensive. That scarcity gave developers leverage. The business needed you, even if you never understood what the business actually needed. You didn&#8217;t have to think about the product, the market, or the strategy. You just had to write the code. That was enough.</p><p>That era is ending. AI writes code now. It writes it fast, it writes it clean, and it doesn&#8217;t ask for a raise. If the only thing you bring to the table is the ability to translate requirements into functions, you&#8217;re competing with a tool that does it cheaper and faster every month.</p><p>So what&#8217;s left?</p><p>Building.</p><p>And building is a fundamentally different skill than coding.</p><h3>The coder vs. the builder</h3><p>A coder receives a ticket and writes the implementation. A builder asks why the ticket exists and whether it should exist at all.</p><p>A coder solves the technical problem in front of them. A builder solves the business problem behind it.</p><p>A coder measures success by whether the code works. A builder measures success by whether the outcome changed.</p><p>This isn&#8217;t about seniority or job titles. I&#8217;ve met junior developers who think like builders and staff engineers who&#8217;ve spent a decade thinking like coders. It&#8217;s a posture, not a promotion.</p><h3>Own the problem, not just the implementation</h3><p>Early in my career, I worked with a health tech company. 
Their mobile app helped people manage diabetes. Noble mission, genuine impact on people&#8217;s lives. But the business was struggling. Around 500 users. No investor wanted in.</p><p>The app had real problems. Technically, it was rough. The obvious move was to fix the bugs, improve performance, ship a better version of what already existed.</p><p>But that wasn&#8217;t the actual problem. The actual problem was that nobody had asked the right questions. What do these users really need? What does the market look like? Where does the money come from?</p><p>We spent a year rebuilding the application around those answers instead of around the technical debt. The company grew to 80,000 users, found a revenue model through the pharmaceutical industry, and three years later was acquired for $22 million.</p><p>The code mattered. But the decisions about <em>what</em> to build mattered more. A coder would have fixed the bugs. A builder asked why the business wasn&#8217;t growing.</p><h3>Make the hard decisions</h3><p>At a retail startup, I inherited a situation that looked familiar: the platform couldn&#8217;t scale, technical issues everywhere, business growing faster than the technology could support.</p><p>The previous CTO had attacked this the way most technical leaders do. He split the monolith into microservices. Some problems got solved. New ones appeared. Without a proper technical team, the complexity multiplied. The business kept expanding into new countries while the technology kept falling behind.</p><p>More code didn&#8217;t fix it. More architecture didn&#8217;t fix it. The fix started with something that had nothing to do with code: understanding every business need, building the right team under the right culture, creating a plan that aligned technology with where the company was actually going.</p><p>The hardest part of that job wasn&#8217;t any technical decision.
It was convincing smart people that the solution to a technology problem doesn&#8217;t always start with technology.</p><p>Coders add more code. Builders step back and ask what the system actually needs, even when the answer isn&#8217;t technical.</p><h3>Build for what the business actually needs</h3><p>I designed the architecture for a billing and booking system at a travel tech startup. The system handled payments, reservations, the operational backbone of the business.</p><p>The temptation was to build it right. Elegant abstractions, clean separation of concerns, a beautiful system.</p><p>But &#8220;right&#8221; didn&#8217;t matter. What mattered was speed. The business needed to replicate in new countries fast and then scale to millions of users. The architecture had to make it trivially easy to add payment options and booking flows for different markets, not technically impressive, but operationally fast.</p><p>Today that system processes over a billion dollars in transactions. Not because the code was brilliant, but because the decisions behind the code matched what the business needed to do.</p><p>Two years later, a car-sharing startup recruited me to do something similar with their search engine. They didn&#8217;t hire me for my code. They hired me because I&#8217;d solved that kind of problem before and understood the decisions involved.</p><p>That&#8217;s the builder&#8217;s career path. You don&#8217;t get hired for what you can type. You get hired for what you can figure out.</p><h3>Why this matters right now</h3><p>Every story I just told happened before AI could write production-ready code. In every case, the code was already the easy part. The hard part was understanding the problem, making the right decisions, and building something that actually moved a business forward.</p><p>Now remove the code entirely. AI handles it. 
What&#8217;s left?</p><p>Everything that mattered.</p><p>The developers who will thrive aren&#8217;t the ones who can write the best code. They&#8217;re the ones who can look at a messy situation with no clear answer and build something that works. Own the problem. Make hard decisions. Understand what the business actually needs.</p><p>That&#8217;s a builder. And that&#8217;s what I&#8217;d challenge you to become.</p><div><hr></div><h2>Try This</h2><blockquote><p>Here&#8217;s a thought exercise for this week. No code, no tools. Just thinking.</p></blockquote><p>Imagine you could clone yourself five times. Five versions of you, ready to work. What would you delegate? What tasks eat your time every day that another you could handle if you gave them the right instructions?</p><p>Write it down. Be specific. Not &#8220;help me code faster.&#8221; Think about it like you&#8217;re onboarding a real person: what would they need to know? What decisions can they make on their own? What needs your approval?</p><p>Now look at that list. That&#8217;s your map.</p><p>Every task you can describe clearly enough to delegate to a human, you can build a tool for. Every task that needs your judgment, your taste, your understanding of the problem: that&#8217;s where the builder lives.</p><p>You don&#8217;t need to build anything yet. Just the list. Just the thinking.</p><p>Next week, we start building.</p><div><hr></div><h2>The Deeper Cut</h2><p>I&#8217;m doing this exercise myself. Right now.</p><p>I&#8217;m mapping out the tools I need so that at least five versions of me are working at the same time. Tools that research, that draft, that evaluate, that organize, that handle the parts of my work that don&#8217;t need my judgment every time.</p><p>Outcode Thinking isn&#8217;t just a newsletter I&#8217;m writing for you. It&#8217;s my journey too. Every tool I build, every artifact I create, every workflow I figure out along the way, I&#8217;m sharing with paid subscribers. 
Not theory. The actual tools, ready to use, ready to help you scale yourself the same way I&#8217;m scaling myself.</p><p>We&#8217;re entering the age where a single developer with the right tools can do the work of a team. The builders who figure this out first will have an unfair advantage. I intend to be one of them. Paid subscribers get to build that advantage with me.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">For developers questioning their role in the AI era, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Why everything you learned about being a developer is about to change]]></title><description><![CDATA[The code barrier is gone. What's left is what nobody taught you.]]></description><link>https://www.outcodethinking.com/p/why-everything-you-learned-about</link><guid isPermaLink="false">https://www.outcodethinking.com/p/why-everything-you-learned-about</guid><dc:creator><![CDATA[Thiago Valentim]]></dc:creator><pubDate>Sun, 08 Feb 2026 11:45:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1744fb53-854d-4338-809d-072491214467_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've been through this before. Not once, many times.<br><br>I was a webmaster before "web developer" was even a job title. 
I watched the industry split into frontend and backend, then reunite, then split again. I learned new frameworks every few years, not because I wanted to, but because the old ones died. I migrated architectures from 3-tier monoliths to SOA, then watched SOA collapse under its own weight and microservices rise from its ashes. Then watched the industry mature enough to realize you don't start with microservices, and rediscover the value of a well-structured monolith.<br><br>Every cycle felt like the same pattern: something new arrives, something old becomes irrelevant, and developers who don't adapt get left behind. Frameworks and libraries kept making development faster and easier, and every time they did, more people flooded in. The market got more competitive. Interviews got harder. The bar kept rising.<br><br>But here's the part most people don't talk about: through all of this, employers were trying to get rid of us. Not out of malice, out of necessity. Developers are expensive, hard to find, and slow down product timelines. No-code tools, low-code platforms, automated frameworks, composability. Every few years, a new promise that you wouldn't need developers anymore. Every single one failed.<br><br>Until now.<br><br>I recently watched non-technical people deliver working software faster than experienced developers. I'm building things faster than at any point in my 20+ year career, and most of the time, I'm not writing code. I'm coordinating, evaluating, deciding. World-class developers I respect are saying the same thing: AI can do the job.<br><br>So the question is no longer <strong>will this change everything</strong>. It already has. The question is: what makes a developer valuable when the code writes itself?</p><h2>The old model is dead. Here's what replaces it.</h2><p>For decades, the developer's value was tied to a simple equation: you know how to write code, the business doesn't, so they need you. The scarcer the skill, the higher the pay. 
That's why companies invested millions trying to remove developers from the process, and why every attempt failed. The gap between "I have an idea" and "I have working software" was too wide for anyone but a developer to cross.<br><br>AI closed that gap overnight.<br><br>This isn't speculation. It's already happening. People with no programming background are shipping products. The code barrier, the thing that protected our careers for decades, is dissolving. And if your entire identity as a developer is "I write code," you have a problem.<br><br>But here's what I've learned after 20+ years of surviving industry shifts: every time a technical skill gets commoditized, the people who thrive are the ones who were never defined by that skill alone. They were defined by how they <strong>think</strong>.</p><h2>What thinking actually means</h2><p>I'm not talking about "computational thinking" or whatever CS courses sell you. I'm talking about the ability to look at a messy, ambiguous situation and make good decisions. To ask the right questions before writing a single line. To understand <em>why</em> something should be built before figuring out <em>how</em>.<br><br>Most developers never develop this muscle. They follow tutorials, complete roadmaps, memorize syntax, and wait for someone to hand them a ticket. That worked when writing code was the hard part. It doesn't work when AI can write the code for you.<br><br>The developers who will matter are the ones who can:<br><br><strong>See the product, not just the feature.</strong> Understanding what you're building and <strong>why it matters to someone</strong> is no longer optional. When AI handles implementation, the person who understands the problem becomes more valuable than the person who understands the language.<br><br><strong>Use AI as a power tool, not a magic trick.</strong> AI doesn't replace thinking. It amplifies it. The developers getting the best results aren't the ones writing the cleverest prompts.
They're the ones who know what good output looks like, can spot what's wrong, and iterate with intention. That requires experience, taste, and judgment. Things you can't copy from a tutorial.<br><br><strong>Navigate uncertainty instead of following roadmaps.</strong> The industry changes too fast for any roadmap to survive. The developers who stay relevant are the ones who read the landscape, spot opportunities, and adapt. Not the ones who follow a checklist someone else made three years ago.<br><br><strong>Learn from what actually happens, not from what should happen.</strong> Theory is everywhere. What's rare is someone willing to share what they tried, what broke, and what they actually learned. The best developers I've worked with all share this trait: they treat every project as a feedback loop, not a finished product.</p><h2>This is what Outcode Thinking is about.</h2><p>It's a weekly newsletter built around one belief: the developers who will thrive in the AI era are the ones who learn to think, not just code.<br><br>Every week, I'll share one deep-dive across four pillars:<br><br><strong>Mindset Shifts.</strong> How to think strategically, develop product vision, and make better decisions. The mental operating system of the modern developer.</p><p><strong>Building with AI.</strong> How to actually build with AI as a core tool. Not prompt engineering tricks. Real workflows, real evaluation, real iteration.<br><br><strong>Career Navigation.</strong> What's changing, what remains valuable, and how to position yourself for what's coming.<br><br><strong>Field Notes.</strong> Real experiences from my own work. What I'm doing, what I'm learning, and what I'm getting wrong. No filters.</p><h2>This is Edition #0.</h2><p>If anything here resonated, you felt it for a reason.
The shift is already happening, and the developers who move first will define what comes next.<br></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.outcodethinking.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe. Next week, we start building.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>