<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>MartinLabuschin.com</title>
  <subtitle>Writing on product management, knowledge work, and personal productivity. First-person, opinionated, written for mid-to-senior PMs navigating real environments.</subtitle>
  <link href="https://martinlabuschin.com/" rel="alternate"/>
  <link href="https://martinlabuschin.com/feed" rel="self"/>
  <id>https://martinlabuschin.com/</id>
  <updated>2026-04-14T08:06:00+02:00</updated>
  <author>
    <name>Martin Labuschin</name>
    <email>labuschin@hey.com</email>
  </author>
  <entry>
    <title>The Invisible Lane That Eats Your Quarter</title>
    <link href="https://martinlabuschin.com/journal/2026/april/the-invisible-lane-that-eats-your-quarter" rel="alternate"/>
    <id>https://martinlabuschin.com/RDF</id>
    <published>2026-04-14T08:06:00+02:00</published>
    <updated>2026-04-14T08:06:00+02:00</updated>
    <content type="html">
&lt;p&gt;I often walk into a planning session knowing that a significant portion of my team’s capacity is already spoken for. Not by features. Not by tech debt. By compliance. And when I present a roadmap that reflects this reality, colleagues look at it and ask: “Why is Q2 so thin?” It isn’t thin. It’s honest. But most roadmaps in regulated industries aren’t.&lt;/p&gt;

&lt;h2&gt;The Third Stream&lt;/h2&gt;

&lt;p&gt;Most product managers carry two dimensions on their roadmap: features and technical foundations. That model works for an unregulated SaaS team. In product management for an &lt;abbr title="Electronic IDentification, Authentication, and Trust Services"&gt;eIDAS&lt;/abbr&gt; qualified trust service provider, it’s a planning fiction. Regulation doesn’t sit inside either category. It has its own resource demands, its own deadlines, and its own consequences for failure. Treating it as a first-class planning dimension changes how you resource, communicate, and defend your roadmap. Treating it as anything less is how you build a plan that won’t survive contact with Q2.&lt;/p&gt;

&lt;p&gt;If you’ve read my earlier pieces on making invisible work visible, whether that’s tech debt or blockers, this is the same thesis applied to a higher-stakes domain. The pattern is consistent: what you don’t name on the roadmap controls the roadmap anyway. In regulated environments, the cost of that silence isn’t just a missed metric. It’s a certification issue. Certification issues end products.&lt;/p&gt;

&lt;h2&gt;Why Compliance Doesn’t Fit Inside the Other Two&lt;/h2&gt;

&lt;p&gt;The instinct most teams have is to absorb compliance work into existing categories. Audit preparation becomes an engineering task. Certification requirements get folded into infrastructure sprints. Regulatory documentation gets assigned to whoever has bandwidth.&lt;/p&gt;

&lt;p&gt;This is how compliance becomes invisible. And invisible compliance still consumes resources, still blocks engineers, still delays releases, without anyone outside the team understanding why. It competes with technical priorities and loses. It competes with feature priorities and loses harder. In a qualified trust service, the result of losing those fights isn’t a delayed dashboard. It’s a finding in your next conformity assessment.&lt;/p&gt;

&lt;h2&gt;The Audit Reality&lt;/h2&gt;

&lt;p&gt;It’s not only the extensive yearly audits. What most people outside regulated product teams don’t see is everything between them. Every new feature has a compliance surface: a document trail, a control to validate, an evidence requirement tied to your certification scope. Every architecture decision carries regulatory implications that your auditor will ask about twelve months later. Every release has documentation requirements that aren’t optional and don’t wait for a convenient sprint.&lt;/p&gt;

&lt;p&gt;Compliance isn’t an annual disruption. It’s a permanent operating condition. The PMs who treat it like a calendar event spend the other eleven months explaining delays they can’t name.&lt;/p&gt;

&lt;h2&gt;What Happens When You Hide It&lt;/h2&gt;

&lt;p&gt;There’s a predictable failure pattern in regulated environments. A team builds a roadmap that looks like one from an unregulated company. Leadership approves it. Commitments get made externally. Then midway through the quarter, engineers get pulled into documentation reviews, security assessments, and evidence collection.&lt;/p&gt;

&lt;p&gt;The PM absorbs this silently. Adjusting scope. Renegotiating timelines. Apologizing for delays they didn’t cause and can’t fully explain without sounding like they’re making excuses.&lt;/p&gt;

&lt;p&gt;Over time, it’s not just the quarter that erodes. It’s the PM’s standing. Each unexplained slip makes the next commitment harder to defend. Stakeholders stop asking what happened and start assuming the answer. The next planning cycle opens with pressure to catch up, which compresses compliance further, which produces the same slippage, which produces the same apology. It’s not a delivery problem. It’s a roadmap honesty problem.&lt;/p&gt;

&lt;h2&gt;Visibility as a Political Act&lt;/h2&gt;

&lt;p&gt;Putting compliance on the roadmap isn’t just organizational hygiene. It’s a political act.&lt;/p&gt;

&lt;p&gt;You’re telling leadership something they often don’t want to hear: this team cannot move as fast as an unregulated team, and pretending otherwise produces worse outcomes, not better ones. That’s uncomfortable to say out loud. But the roadmap is exactly where that discomfort belongs. Not in a side document. Not in a footnote. Not in a mid-quarter apology. The artifact leadership uses to ask “what are we delivering” needs to show compliance as a named, resourced, time-bound stream of work. Anything else is a negotiation you’ll lose after the commitments are already made.&lt;/p&gt;

&lt;p&gt;When compliance is visible on the roadmap, two things change:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expectations become realistic.&lt;/strong&gt; Leadership sees that available feature capacity is a portion of total capacity, not all of it. The gap between what they want and what’s possible stops being a surprise and starts being a planning input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trade-off conversations happen before commitments, not after.&lt;/strong&gt; If leadership wants to accelerate a feature, they can see what it would displace. You negotiate scope in planning, where the PM has leverage, not mid-sprint, where they don’t.&lt;/p&gt;

&lt;h2&gt;What Three Streams Actually Look Like&lt;/h2&gt;

&lt;p&gt;In practice, my roadmap has three explicit streams, each with a named capacity allocation, agreed before the quarter starts. Not assembled from whatever capacity is left over after features and tech debt take theirs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt; User-facing capabilities, integrations, experience improvements. What gets demoed, marketed, and sold.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical foundations:&lt;/strong&gt; Infrastructure, performance, scalability, security hardening, debt reduction. What keeps the product viable over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance:&lt;/strong&gt; Audit preparation, regulatory documentation, certification maintenance, policy implementation, evidence collection, control validation. What keeps us allowed to operate.&lt;/p&gt;

&lt;p&gt;The split isn’t fixed. A quarter anchored to certification renewal might allocate 35 to 40 percent of engineering capacity to compliance work alone. A steady-state quarter with no major audit milestones might run closer to 15 percent. But compliance never drops to zero, and it never competes silently with the other two streams. The negotiation happens in planning, not mid-sprint.&lt;/p&gt;

&lt;h2&gt;The Conversation You Need to Have&lt;/h2&gt;

&lt;p&gt;This is where most PMs stop: they agree with the framework but never say the words out loud. So here’s what the conversation actually sounds like.&lt;/p&gt;

&lt;p&gt;Before the quarter starts, you sit down with your engineering lead, align on a capacity split across the three streams, and bring those numbers to leadership. You say something like: “Next quarter, roughly 30 percent of our engineering capacity goes to compliance. Here’s what’s driving it: [certification renewal, control validation for the new feature scope, evidence collection for the upcoming audit]. That leaves this much capacity for features. Here’s what fits. Here’s what doesn’t. If you want to change the feature list, we can talk about what moves, but the compliance allocation isn’t optional.”&lt;/p&gt;

&lt;p&gt;The key is specificity. Not “compliance takes time,” but “compliance takes 30 percent next quarter because of these three obligations.” Not “we might slip,” but “here’s the capacity, here’s the plan, here’s what happens if we compress it.” You’re not asking permission. You’re presenting the operating reality and offering trade-offs within it.&lt;/p&gt;

&lt;p&gt;The cost of not having this conversation is specific. Mid-quarter scope cuts. Timelines renegotiated without a credible explanation. Credibility that erodes sprint by sprint. Regulated PMs who absorb compliance silently don’t protect their teams. They absorb consequences that were never theirs to own.&lt;/p&gt;

&lt;p&gt;Constraints that are named and planned for are just parameters. Constraints that are hidden and absorbed are the ones that break teams.&lt;/p&gt;

&lt;h2&gt;The Honest Roadmap&lt;/h2&gt;

&lt;p&gt;Being honest about where capacity goes is the most senior thing a regulated PM can do. It doesn’t feel strategic. It feels like admitting a limitation. But there’s only one version of that limitation that’s dangerous: the hidden one. A slow-moving crisis with your name on it.&lt;/p&gt;

&lt;p&gt;If your roadmap only has two streams, you’re lying to someone. Maybe to leadership. Maybe to yourself. Name the third stream, resource it, defend it. Anything less is a description of a product organization that doesn’t exist.&lt;/p&gt;
    </content>
  </entry>
  <entry>
    <title>Define Your Failure Signal Before You Ship</title>
    <link href="https://martinlabuschin.com/journal/2026/march/define-your-failure-signal-before-you-ship" rel="alternate"/>
    <id>https://martinlabuschin.com/F5P</id>
    <published>2026-03-31T08:52:00+02:00</published>
    <updated>2026-04-07T17:04:30+02:00</updated>
    <content type="html">
&lt;p&gt;In December 2022, my team at Trusted Shops killed the biggest feature of the year. Not because it stopped working. Because it worked too well on the wrong axis. We had pushed review questionnaire conversion up by a huge percentage. Internally, the win of the year. Three months later, we rolled it back. We learned that the volume of negative reviews mattered more to our customers than the volume of reviews overall.&lt;/p&gt;

&lt;h2&gt;The metric that promised everything&lt;/h2&gt;

&lt;p&gt;The feature was called Autosave. Simple concept: when a consumer clicked the star rating in an invitation email, we saved the review immediately, even if they never completed the full questionnaire. From a pure conversion standpoint, this was a dream. A huge improvement, overnight.&lt;/p&gt;

&lt;p&gt;We tracked completion, volume, funnel conversion. Everything moved in the right direction. What we never instrumented was the composition of what we were collecting. Whether these were reviews our customers, the businesses paying us, would actually want published.&lt;/p&gt;

&lt;p&gt;We optimized the funnel. We ignored what was flowing through it.&lt;/p&gt;

&lt;h2&gt;What customers actually pay for&lt;/h2&gt;

&lt;p&gt;The blind spot was structural, and it should have been obvious.&lt;/p&gt;

&lt;p&gt;Our customers are businesses that use reviews to build trust. They don’t pay for review volume as an abstract number. They pay for reviews that reflect genuine consumer experiences, reviews that help them earn trust with future buyers. Volume is a means. Trust is the product.&lt;/p&gt;

&lt;p&gt;Autosave broke that contract. When a consumer tapped a star rating in an email, sometimes accidentally, sometimes mid-scroll, we saved it. Many of those half-formed ratings were negative. Not because the consumer had a bad experience, but because a single star tap with no context defaults toward low ratings. The consumer didn’t mean to leave a review. We recorded one anyway.&lt;/p&gt;

&lt;p&gt;We had optimized our conversion metric while actively degrading the thing our customers were paying for. That’s the sentence I should have been able to write in January 2023. I couldn’t, because we weren’t measuring it.&lt;/p&gt;

&lt;p&gt;Between January and March 2023, the negative review ratio shifted measurably. Customer satisfaction scores appeared to drop, not because service had declined, but because our measurement method was distorting reality. We found out through complaints, not dashboards.&lt;/p&gt;

&lt;h2&gt;The rollback and what it actually cost&lt;/h2&gt;

&lt;p&gt;In March 2023, my team paused Autosave. “Paused” is the word we used internally. The reality was closer to a kill.&lt;/p&gt;

&lt;p&gt;The engineering cost was the easy part to absorb. A few sprints of work, reversible. The harder costs were the ones that don’t show up in Jira.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust with customers.&lt;/strong&gt; The businesses that had seen their review profiles shift had to be addressed individually. Some had already started asking whether our platform was reliable. When your product sits in the digital trust space, that question is existential. You can’t answer “we shipped a feature that inflated your negative reviews by accident” and expect the conversation to end well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal credibility.&lt;/strong&gt; We had presented Autosave as a success in leadership reviews. The conversion number had made it into planning discussions. Rolling it back meant explaining to the same stakeholders why a metric we had built roadmap commitments around had been wrong from the start. The feature was gone. The commitments it justified were not. Every subsequent pitch carried a heavier burden of proof.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team morale.&lt;/strong&gt; The engineers and designers who built Autosave did good work. The implementation was clean. The problem wasn’t execution. It was goal definition. Telling a team “the feature worked exactly as designed, but the design was wrong” is a different kind of difficult than telling them “there was a bug.” &lt;strong&gt;Bugs are fixable. Goal definition errors force you to question the decision-making process itself.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;The second attempt, and why it’s different&lt;/h2&gt;

&lt;p&gt;We didn’t abandon the idea. The hypothesis behind Autosave was still sound: reducing friction in the review process should increase completed reviews. The problem was that we had removed too much friction, including the friction that separated intentional reviews from accidental ones.&lt;/p&gt;

&lt;p&gt;In 2024, we started testing what we called “Confirmed Autosave.” The difference is one interaction step: the review is only saved when the consumer actively clicks “Next” after selecting their star rating. Not on the star tap alone. That single additional confirmation separates signal from noise.&lt;/p&gt;

&lt;p&gt;But the bigger change isn’t the feature design. It’s the test infrastructure around it.&lt;/p&gt;

&lt;p&gt;Before the relaunch, we built a dashboard tracking the ratio of negative reviews relative to total volume, updated daily, broken down by test cohort versus control. We defined abort criteria before the test went live: if the negative ratio moves beyond a specific threshold within the test window, we stop. No debate, no “let’s see if it stabilizes.” The stop condition was agreed on before a single user saw the feature.&lt;/p&gt;

&lt;p&gt;We set a 7-day test limit for the initial cohort. Seven days because our historical data showed that review distribution patterns stabilize within the first five to six days of any feature change. A 7-day window gave us enough signal to evaluate the ratio shift without exposing a large user base to a potentially broken mechanic. Predefined metrics, a clear exit rule, written down before launch day.&lt;/p&gt;
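&lt;p&gt;To make the abort rule concrete, here is a minimal sketch of the kind of daily check it implies. This is illustrative only: the function names and the five-point threshold are stand-ins, not our production values.&lt;/p&gt;

```python
# Hypothetical daily abort check for a guarded feature test.
# Names and the max_delta threshold are illustrative, not production values.

def negative_ratio(negative: int, total: int) -> float:
    """Share of negative reviews in a cohort; 0.0 for an empty cohort."""
    return negative / total if total else 0.0

def should_abort(test_neg: int, test_total: int,
                 control_neg: int, control_total: int,
                 max_delta: float = 0.05) -> bool:
    """Stop the test when the test cohort's negative-review ratio exceeds
    the control cohort's by more than max_delta (absolute)."""
    gap = negative_ratio(test_neg, test_total) - negative_ratio(control_neg, control_total)
    return gap > max_delta
```

&lt;p&gt;With 1,000 reviews per cohort, 150 negative in test against 80 in control is a seven-point gap and trips the rule; a four-point gap does not. The arithmetic is trivial on purpose. The value is that the stop condition is executable and agreed before launch, so nobody can relitigate it mid-test.&lt;/p&gt;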

&lt;p&gt;The difference between December 2022 and the 2024 relaunch isn’t a smarter feature. It’s that we defined what failure looks like before we shipped, not after complaints told us.&lt;/p&gt;

&lt;h2&gt;The actual lesson&lt;/h2&gt;

&lt;p&gt;Conversion rate is a proxy metric. Every PM knows this intellectually. Very few PMs instrument their launches as if they believe it.&lt;/p&gt;

&lt;p&gt;When conversion goes up, the narrative is immediate: the feature works. When the downstream effects surface weeks later (complaints, ratio shifts, trust erosion) the narrative has already hardened. You’re no longer evaluating a hypothesis. You’re defending a success story. And organizations are much better at defending success stories than they are at killing them.&lt;/p&gt;

&lt;p&gt;The thing I got wrong in December 2022 wasn’t shipping Autosave. It was defining success as a single metric without a corresponding failure signal. We had a target for conversion. We had no target for “review quality didn’t degrade.” We measured the accelerator but not the guardrail.&lt;/p&gt;

&lt;p&gt;Every primary metric needs a paired counter-metric that tells you when you’re winning the number but losing the customer. Conversion rate paired with complaint rate. Activation rate paired with churn within 30 days. Volume paired with composition.&lt;/p&gt;

&lt;p&gt;If you’re about to ship something and you can only articulate why it will succeed, you haven’t finished the instrumentation work. The failure signal is the part most teams skip. It’s also the part that would have saved us three months of damage.&lt;/p&gt;

&lt;p&gt;The conversion rate can go up while the product value goes down. If your instrumentation can’t detect that, you aren’t measuring success. You’re measuring activity. And you’ll celebrate all the way to the rollback.&lt;/p&gt;

&lt;p&gt;We shipped the most successful feature of 2022. Then we had to kill it. Then we tried again, with better questions. That’s the part most lessons-learned posts leave out: the second attempt is where the learning actually lives. In our case, it lives in a pre-launch checklist that now has one non-negotiable line item: “What signal would tell us this feature is hurting the customer, and are we measuring it before day one?”&lt;/p&gt;
    </content>
  </entry>
  <entry>
    <title>You Don’t Have a Strategy. You Have a Vibe.</title>
    <link href="https://martinlabuschin.com/journal/2026/march/you-dont-have-a-strategy-you-have-a-vibe" rel="alternate"/>
    <id>https://martinlabuschin.com/UXG</id>
    <published>2026-03-17T12:00:00+01:00</published>
    <updated>2026-03-23T15:17:26+01:00</updated>
    <content type="html">
&lt;p&gt;Most product teams I’ve worked with believe they have a strategy. They can talk about it in meetings, reference it in planning sessions, and nod along when leadership mentions it. But when I ask a simple question, “Can you show it to me?”, the room gets quiet. That silence tells me more about the state of a product organization than any roadmap ever could.&lt;/p&gt;

&lt;h2&gt;The Strategy That Doesn’t Exist&lt;/h2&gt;

&lt;p&gt;Here’s something I’ve learned after years of building products in the SaaS industry: if your strategy isn’t written down, you don’t have a strategy. You have a vibe.&lt;/p&gt;

&lt;p&gt;That sounds harsh, but it’s precise. An unwritten strategy is a shared hallucination. Everyone thinks they’re aligned until the first real trade-off arrives. Then you discover that the CEO’s version of the strategy, the VP of Product’s version, and the engineering lead’s version are three different stories that happen to share a few keywords.&lt;/p&gt;

&lt;p&gt;I’ve seen this pattern repeat across organizations of every size. The symptoms are always the same. Prioritization debates that never resolve. Stakeholders who keep reopening decisions that were supposedly settled. Teams building features that are technically competent but strategically incoherent. The root cause isn’t bad judgment or poor communication skills. It’s that there’s nothing to point to. No written artifact that says: this is what we’re doing, this is why, and this is what we’re choosing not to do.&lt;/p&gt;

&lt;h2&gt;Why PMs Avoid Writing It Down&lt;/h2&gt;

&lt;p&gt;The obvious explanation is that people are busy. Strategy documentation feels like overhead, one more artifact to maintain in a world already drowning in Confluence pages nobody reads.&lt;/p&gt;

&lt;p&gt;But that’s not the real reason.&lt;/p&gt;

&lt;p&gt;The real reason is that writing forces clarity, and clarity is uncomfortable. When you write a strategy down, you have to commit. You have to say what you actually believe about your market, your customer, and your competitive position. You have to name the bets you’re making and, more painfully, the bets you’re not making. You have to be specific enough that someone could disagree with you.&lt;/p&gt;

&lt;p&gt;Most product organizations aren’t allergic to documentation. They’re allergic to commitment.&lt;/p&gt;

&lt;p&gt;An unwritten strategy gives you room to shift, reinterpret, and claim alignment after the fact. A written strategy holds you accountable. That’s exactly why it’s valuable, and exactly why it meets resistance. I’d go further: the degree of resistance you feel when trying to write your strategy down is a reliable signal of how much unresolved disagreement your organization is carrying. If it’s easy to write, your team is more aligned than most. If it feels impossible, you’ve just learned something important.&lt;/p&gt;

&lt;h2&gt;What Written Strategy Actually Does&lt;/h2&gt;

&lt;p&gt;A written strategy isn’t a plan. It’s not a roadmap, a vision statement, or a list of OKRs. It’s a clear articulation of the choices you’ve made and the logic behind them. When it’s done well, it does three things that no verbal agreement can replicate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It creates a shared reference point.&lt;/strong&gt; When a new opportunity lands on the table, a written strategy gives the team something to evaluate it against. Not “what does the CEO think?” or “what did we say in that meeting last month?” but “does this fit the strategy we committed to?” That shifts conversations from opinion to analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It surfaces disagreement early.&lt;/strong&gt; When strategy lives in people’s heads, disagreements stay hidden until they become expensive. Someone reads the written strategy and says, “Wait, I thought we were going after enterprise customers, not mid-market.” That moment of friction is a gift. It’s far cheaper to resolve a strategic disagreement in a document review than in a failed product launch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It makes delegation real.&lt;/strong&gt; This is the benefit most product leaders underestimate. A senior PM or product leader can’t be in every room where decisions get made. A written strategy acts as a proxy for your judgment. It lets teams make good decisions without waiting for permission, because they can see the boundaries and the intent behind them. Without that written artifact, delegation breaks down in a predictable way: teams either wait for approval on everything (which kills speed) or they make their best guess (which produces strategically incoherent work). The teams I’ve seen operate with the most autonomy and the most coherence are always the ones that have a written strategy they can reference in real time. That document doesn’t replace judgment. It gives judgment a frame. A telling test came when I was on sick leave: the team made sound decisions without consulting me, because they had the context they needed. The strategy gave them a foundation to act on.&lt;/p&gt;

&lt;h2&gt;The “Show Me” Test&lt;/h2&gt;

&lt;p&gt;Here’s a simple diagnostic: Go ask three people on a product team, separately, to describe the product strategy. Not the vision, not the mission, not the goals for this quarter. The strategy. The choices, the trade-offs, the “why this and not that.”&lt;/p&gt;

&lt;p&gt;If all three give you roughly the same answer, and they can point you to a document that captures it, the team has a strategy. If they give you three different answers, or if they all say something vague like “we’re focused on growth” or “we’re building the best platform,” the team doesn’t have a strategy. They have a direction at best, and an assumption at worst.&lt;/p&gt;

&lt;p&gt;What makes this test revealing isn’t the question. It’s what happens in the silence before people answer. That pause, where someone searches for language they’ve never actually had to produce, tells you whether strategy is an operating tool or a comfortable fiction in your organization. I’ve never run this test and been wrong about the result.&lt;/p&gt;

&lt;p&gt;I encourage you to try it this week. Don’t warn people in advance. The uncoached version is the honest one.&lt;/p&gt;

&lt;h2&gt;What Good Looks Like&lt;/h2&gt;

&lt;p&gt;A written strategy doesn’t need to be long. Some of the best ones I’ve seen are two to three pages. What it needs is specificity. Here’s a structure I’ve pressure-tested across multiple product organizations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Context.&lt;/strong&gt; What’s true about your market, your customers, and your position right now? Not aspirations. Facts. If your context section reads like a pitch deck, you’ve already gone wrong.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Diagnosis.&lt;/strong&gt; What’s the core challenge or opportunity you’re choosing to address? This is where most strategies fall apart. They skip the diagnosis and jump straight to solutions. A team that can’t clearly name the problem it’s solving will build confidently in the wrong direction.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Guiding approach.&lt;/strong&gt; What’s your overall method for addressing that challenge? This is the heart of strategy: the set of choices that guide everything else. It should be specific enough to rule things out. If your guiding approach is compatible with every possible initiative, it isn’t guiding anything.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Coherent actions.&lt;/strong&gt; What specific moves follow from that approach? These should reinforce each other, not just be a list of unrelated initiatives.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You’ll notice this structure forces you to do something most strategy documents avoid: make a diagnosis before you prescribe solutions. In my experience, the diagnosis is the hardest part to write and the most valuable. It’s where you have to say, out loud, “this is the real problem.” Not the polite version. Not the version that makes every stakeholder comfortable. The real one.&lt;/p&gt;

&lt;p&gt;Don’t confuse this with vision docs, roadmaps, or goal frameworks. Those are outputs that a strategy should inform. They’re not the strategy itself.&lt;/p&gt;

&lt;h2&gt;The Hidden Tax&lt;/h2&gt;

&lt;p&gt;Every unwritten strategy carries a cost that doesn’t show up on any dashboard. It shows up in the time spent re-litigating priorities. In the features that get built because someone assumed they were strategic. In the talented PMs who leave because they can’t figure out what the organization actually cares about.&lt;/p&gt;

&lt;p&gt;That last one deserves emphasis. The best product people I’ve worked with can tolerate ambiguity, but they can’t tolerate the absence of intent. When a strong PM starts asking “what are we actually trying to do here?” and gets a different answer from every leader they talk to, they don’t file a complaint. They update their LinkedIn. The cost of an unwritten strategy isn’t just inefficiency. It’s the quiet departure of the people you can least afford to lose.&lt;/p&gt;

&lt;p&gt;You can’t measure this tax directly, which is part of why it persists. But if you’ve worked in product long enough, you’ve felt it. That nagging sense that the team is busy but not making progress. That the roadmap is full but the strategy is empty.&lt;/p&gt;

&lt;h2&gt;Start Here&lt;/h2&gt;

&lt;p&gt;If you don’t have a written strategy today, don’t try to produce a perfect document by Friday. Start with one page. Answer three questions: What are we choosing to focus on? What are we choosing not to do? Why?&lt;/p&gt;

&lt;p&gt;Share it with your team. See if they agree. See where they push back. That pushback isn’t a problem. It’s the whole point.&lt;/p&gt;

&lt;p&gt;A strategy that lives only in your head feels safe because nobody can challenge it. A strategy that lives on paper feels risky because everyone can. That’s not a weakness. That’s the mechanism that makes it real.&lt;/p&gt;
    </content>
  </entry>
  <entry>
    <title>The Sunshine Manager</title>
    <link href="https://martinlabuschin.com/journal/2026/march/the-sunshine-manager" rel="alternate"/>
    <id>https://martinlabuschin.com/E71</id>
    <published>2026-03-03T12:00:00+01:00</published>
    <updated>2026-04-07T17:05:30+02:00</updated>
    <content type="html">
&lt;p&gt;Your VP rewrote the project scope in April, championed it in the all-hands in June, and by September, when the integration failed and three teams were blocked, opened the incident review with “Help me understand what went wrong on the execution side.” You were the execution side.&lt;/p&gt;

&lt;p&gt;This isn’t a story about a bad manager. It’s a story about a system that rewards a specific kind of leadership failure: being visible in success and invisible in failure. If you’ve worked in product management long enough, you’ve seen it more than once.&lt;/p&gt;

&lt;h2&gt;Meet the Sunshine Manager&lt;/h2&gt;

&lt;p&gt;The sunshine manager is present, engaged, and visible when things go well. Product launch goes smoothly? They’re in the Slack thread celebrating. Stakeholder demo lands well? They’re in the room taking partial credit. Quarterly review looks positive? They’ll present the numbers themselves.&lt;/p&gt;

&lt;p&gt;When things go wrong (a critical escalation, a stakeholder conflict, a failed release, a team member burning out) they become unreachable. Not literally, usually. They respond to messages. They join calls. But their role shifts from leading to distancing. The warmth disappears. The ownership transfers. “I’m leading this” becomes “What’s your plan to fix this?”&lt;/p&gt;

&lt;p&gt;This isn’t cowardice in the traditional sense. Most sunshine managers don’t consciously decide to abandon their team. They simply never defined crisis support as part of their job. Success feels like leadership. Failure feels like someone else’s operations problem.&lt;/p&gt;

&lt;p&gt;For PMs, this creates a confusing dynamic. You get positive reinforcement when you don’t need it and silence when you do. Over time, you learn to stop escalating problems, because escalation just results in the problem being handed back to you with more pressure and less support. You become self-reliant not out of strength, but out of learned helplessness.&lt;/p&gt;

&lt;p&gt;The organizational damage follows the warmth-withdrawal cycle directly. When your team watches a leader celebrate wins publicly and then go quiet during a production incident, they learn that visibility is only safe in one direction. People stop surfacing risks early because they’ve seen what happens: the sunshine manager engages when you bring good news and deflects when you bring bad news. So the bad news stops flowing upward.&lt;/p&gt;

&lt;p&gt;Knowledge sharing drops because showing vulnerability in this dynamic gets you a meeting where you’re asked to explain your remediation plan, not one where your leader helps you build it. Risk-taking stops because failure has one-sided consequences. The team that once proposed ambitious integration approaches starts scoping everything to what they can survive alone. Innovation doesn’t stall because people lack ideas. It stalls because nobody trusts the safety net.&lt;/p&gt;

&lt;h2&gt;Why the Org Chart Makes This Worse&lt;/h2&gt;

&lt;p&gt;The sunshine manager doesn’t operate in isolation. They exist inside a structure that enables and often rewards their behavior. Understanding that structure is the first step toward protecting yourself inside it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role clarity decreases as you move up.&lt;/strong&gt; A junior PM typically has a well-defined domain, clear KPIs, and known stakeholders. A Head of Product often works with a role description that could mean almost anything. A CPO’s responsibilities are frequently impossible to tell apart from those of a CTO or COO, depending on the company’s mood that quarter.&lt;/p&gt;

&lt;p&gt;This vagueness isn’t accidental. Senior roles are left open on purpose to allow for “strategic flexibility.” But in practice, a role description broad enough to cover everything effectively covers nothing. Senior leaders can take credit for anything that succeeds and distance themselves from anything that fails, because their responsibilities were never clearly defined. The sunshine manager doesn’t even need to actively dodge accountability. The org chart does it for them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Territorial overlap creates invisible conflicts.&lt;/strong&gt; When two senior leaders both believe a domain belongs to them, the PM in the middle becomes the battlefield. You end up managing upward into conflicting expectations with no clear way to escalate. In these situations, the sunshine manager’s instinct to step back during conflict isn’t just a personal weakness. It’s a rational response to an ambiguous power structure where taking a clear position carries political risk.&lt;/p&gt;

&lt;h2&gt;The Accountability Split&lt;/h2&gt;

&lt;p&gt;The structural problem underneath all of this has a simple name: the split between accountability and responsibility.&lt;/p&gt;

&lt;p&gt;When your VP changes scope three times but the delivery failure ends up on your performance review, you’re experiencing this split in real time. Healthy organizations align these two things. The person with authority to make the call also owns the outcome when it goes wrong. Dysfunctional ones push authority upward and responsibility downward.&lt;/p&gt;

&lt;p&gt;This is especially harmful in regulated environments, where decisions have compliance consequences. If your VP overrules a risk assessment but the audit finding lands on your desk, you’re not just dealing with bad management. You’re absorbing institutional liability.&lt;/p&gt;

&lt;p&gt;The warning sign isn’t the failure itself. It’s who speaks first in the post-mortem. If your leadership opens with explanations about downstream execution problems, the accountability structure is broken.&lt;/p&gt;

&lt;h2&gt;What Actually Works&lt;/h2&gt;

&lt;p&gt;The sunshine manager is a structural problem, and structural problems need structural solutions. Here’s what moves the needle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agree on crisis roles while things are still going well.&lt;/strong&gt; The sunshine manager disappears during a crisis because crisis support was never part of the deal. Make it part of the deal while the sun is still shining. During planning: “If this integration fails, I’ll need you to handle the VP-level escalation on their side. Can we agree on that now?” Get the commitment documented while they’re in a generous mood. When the crisis hits, you’re not asking for help. You’re calling in a specific agreement.&lt;/p&gt;

&lt;p&gt;This works because it removes the decision from the moment of stress. The sunshine manager doesn’t have to choose between stepping up and stepping back. They already committed. Most people honor explicit commitments, especially documented ones, far more reliably than implicit expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep a decision log that names names.&lt;/strong&gt; Not a meeting summary. A running document that captures three things: what was decided, who made the call, and what we agreed would happen if it went wrong. Share it after every important decision meeting.&lt;/p&gt;

&lt;p&gt;Most people won’t object to accurate notes in the moment. But when things fall apart months later, the log is the difference between “we all decided this together” and a clear record of who actually made the call. In regulated industries, this isn’t just smart politics. It’s your compliance paper trail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make the accountability split visible without making it personal.&lt;/strong&gt; If you’re in a position to give feedback (skip-levels, 360 reviews, retrospectives), describe the structural problem rather than the individual. “We lack a clear escalation path when stakeholder priorities conflict” produces more change than “My manager avoids hard decisions.” One is a system problem that can be solved. The other is a character judgment that will be defended.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test the safety net before you need it.&lt;/strong&gt; Don’t wait for a real crisis to find out whether your manager will show up. Escalate something medium-sized early. A minor stakeholder conflict, a small scope disagreement, a resource question that needs senior input. Watch what happens. If they engage, you have a baseline to build on. If they deflect, you have the information you need to adjust your strategy before the stakes are high.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When the system won’t change, plan your exit on your own terms.&lt;/strong&gt; If the patterns are built into the organization, if sunshine management is rewarded, if accountability is permanently disconnected from authority, then you’re not failing to manage up. You’re in the wrong system. The smart move is to document what you’ve learned, build case studies from the dysfunction (without burning bridges), and leave before the system teaches you its habits.&lt;/p&gt;

&lt;p&gt;The most dangerous outcome isn’t that you can’t change your manager. It’s that you stay long enough to stop noticing the problem.&lt;/p&gt;

&lt;h2&gt;The Uncomfortable Truth&lt;/h2&gt;

&lt;p&gt;Product management frameworks assume functional leadership. Stakeholder management advice assumes stakeholders act rationally. Escalation models assume someone at the top will catch what falls.&lt;/p&gt;

&lt;p&gt;Real organizations are messier than that. The leaders above you are sometimes the constraint, not the enabler. And the hardest version of that constraint isn’t the openly incompetent manager. It’s the one who stands next to you in the sunshine and is nowhere to be found in the rain.&lt;/p&gt;

&lt;p&gt;Recognizing that isn’t cynicism. It’s a requirement for doing effective work in imperfect conditions. And it’s stakeholder management at its most honest.&lt;/p&gt;
    </content>
  </entry>
  <entry>
    <title>Make Blockers Impossible to Ignore</title>
    <link href="https://martinlabuschin.com/journal/2026/february/make-blockers-impossible-to-ignore" rel="alternate"/>
    <id>https://martinlabuschin.com/D0X</id>
    <published>2026-02-17T12:00:00+01:00</published>
    <updated>2026-03-23T15:17:20+01:00</updated>
    <content type="html">
&lt;p&gt;Even talented PMs get crushed by a fundamental misunderstanding: they think accountability flows upward. The real skill isn’t documenting your blockers — it’s weaponizing transparency to force organizational change. This isn’t another article about documentation trails. It’s about making organizational inaction as visible as your own delivery commitments, and what it tells you when that still isn’t enough.&lt;/p&gt;

&lt;h2&gt;The one-way accountability trap&lt;/h2&gt;

&lt;p&gt;Most PMs internalize accountability by default. Deadline slipping? Must be a planning problem. Stakeholder didn’t deliver input on time? Should have followed up harder. Dev team underwater with maintenance? Should have fought for more capacity earlier.&lt;/p&gt;

&lt;p&gt;This instinct isn’t wrong. Ownership matters. But taken too far, it creates a one-way accountability trap: you’re accountable for outcomes you don’t fully control, while the people who do control them face no consequences for inaction.&lt;/p&gt;

&lt;p&gt;The fix isn’t blame redistribution. It’s making constraints visible before they become failures.&lt;/p&gt;

&lt;p&gt;When you flag a resource bottleneck in week 3, and it goes unaddressed, and the release slips in week 12, the conversation changes. It’s no longer “why is the product late?” It’s “why was the reported blocker not resolved?”&lt;/p&gt;

&lt;p&gt;But the record only works if it’s built to compel action, not just inform.&lt;/p&gt;

&lt;p&gt;Most blocker updates get ignored because they read like status line items. Compare these two versions of the same problem communicated in a sprint review:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weak:&lt;/strong&gt; &lt;em&gt;“Backend capacity remains a constraint. We’re monitoring the situation.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is easy to nod at and move past. There’s no decision required, no consequence attached, no timeline. It documents the problem technically, but it gives leadership permission to do nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strong:&lt;/strong&gt; &lt;em&gt;“Backend capacity is insufficient to deliver Feature X by the Q3 target. If we don’t add one senior backend engineer by March 15, we will miss the launch window by 4 to 6 weeks. I need a decision on one of three options: (1) add headcount, (2) cut scope to Y and Z only, (3) accept the delayed timeline. I’ll follow up on this decision by Friday.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The difference isn’t tone or politics. It’s structure. The strong version names the specific consequence, attaches a deadline to the decision itself, and makes inaction a visible choice rather than a passive default.&lt;/p&gt;

&lt;p&gt;Now apply the same principle to dependencies. Instead of “legal review pending” sitting passively in your risk register, you write: &lt;em&gt;“Legal sign-off on data processing agreement required before Module B development. Requested Jan 12, no response. Without completion by Feb 1, Module B timeline shifts from Q2 to Q3. Owner: Sarah, Legal.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now Sarah’s inaction carries the same weight and visibility as your project updates. Both behaviors exist in the same reporting framework, under the same scrutiny. The blocker becomes as visible as the blocked.&lt;/p&gt;

&lt;p&gt;The template matters less than the underlying move: putting the blocker and the blocked on equal footing in the same document, with the same specificity.&lt;/p&gt;

&lt;h2&gt;Timing is everything&lt;/h2&gt;

&lt;p&gt;I once presented a detailed timeline of every missed stakeholder commitment in a post-mortem. Technically accurate, every line of it. But it came across as a legal brief. The room went cold. Nobody disputed the facts. Nobody addressed them either. I was right and it didn’t matter.&lt;/p&gt;

&lt;p&gt;The difference between accountability and ass-covering isn’t content. It’s timing. When you surface risks before they explode, it’s partnership. When you surface them after, it’s litigation.&lt;/p&gt;

&lt;p&gt;&lt;abbr title="Cover your ass"&gt;CYA&lt;/abbr&gt; is retrospective. Accountability reporting is prospective. Make it boring, routine, expected. Blockers in every sprint review. Dependency risks in every roadmap update. Decision requests in every stakeholder sync. When it’s part of the rhythm, nobody perceives it as political. When it only appears after a missed deadline, everyone does.&lt;/p&gt;

&lt;p&gt;This extends to how you escalate. Flagging problems upward is necessary. But in organizations with insecure leadership, it can backfire. Some managers interpret reported blockers as implicit criticism of their decisions, or worse, as a setup to shift blame onto them. The trick isn’t better data. It’s making them the hero of the solution rather than the villain of the problem.&lt;/p&gt;

&lt;p&gt;Consider how you escalate a staffing constraint:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reads as criticism:&lt;/strong&gt; &lt;em&gt;“The team you approved is too small to deliver this.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reads as a decision request:&lt;/strong&gt; &lt;em&gt;“Given the current team size, we can deliver A and B by Q3 or A, B, and C by Q4. Which outcome do you prefer?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The second version does three things: it removes the implicit accusation, it gives leadership agency over the outcome, and it creates a documented decision point. If they choose the Q3 scope and later ask why C wasn’t delivered, the record is clear.&lt;/p&gt;

&lt;p&gt;Pair every escalation with options. Never present a problem without a trade-off. And keep the tone factual and forward-looking. The moment your reporting carries emotional weight, it stops being a management tool and becomes a political act.&lt;/p&gt;

&lt;h2&gt;The diagnostic that matters more than the fix&lt;/h2&gt;

&lt;p&gt;Here’s where this stops being about reporting technique and becomes about organizational intelligence.&lt;/p&gt;

&lt;p&gt;You’ve done everything right. You’ve flagged the same blocker for three consecutive sprints with an owner, a date, and a consequence. The owner hasn’t acted. The date has passed. You’ve re-raised it in the agreed forum. Nothing has changed. Your reporting is precise, prospective, and structurally sound. And it’s accomplishing nothing.&lt;/p&gt;

&lt;p&gt;When this happens, stop re-raising the same item in the same way. Name the pattern directly in a 1:1 with your lead: &lt;em&gt;“I’ve flagged the legal sign-off dependency in four consecutive sprint reviews. It’s still unresolved. I don’t think our current escalation path is working. What do you need from me to get this unstuck, or is this a constraint we should plan around permanently?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This forces a different kind of decision. But more importantly, it tells you something about the organization you’re in.&lt;/p&gt;

&lt;p&gt;If your lead can act on it, you had an escalation problem that’s now solved. If they can’t, if they shrug or deflect, you’ve just learned that the blocker isn’t the legal team’s responsiveness. It’s your lead’s authority. Or the company’s decision-making culture. Or your own positioning within the power structure.&lt;/p&gt;

&lt;p&gt;That diagnostic is worth more than the fix. Because it tells you whether you’re operating in a system where better reporting can change outcomes, or one where the dysfunction is structural and your documentation is just making you a more informed passenger.&lt;/p&gt;

&lt;p&gt;Most PM advice stops at “communicate better” or “escalate effectively,” as if every organization is a rational system that responds to clear inputs with proportional action. The senior realization is that some organizations aren’t. And recognizing the difference early is a career skill, not a cynical observation. It changes what you optimize for. It changes how you spend your political capital. And sometimes, it changes where you work.&lt;/p&gt;

&lt;p&gt;The next time you’re tempted to write “backend capacity remains a constraint,” stop. Write instead: &lt;em&gt;“We need one senior backend engineer by March 15 or Feature X misses Q3 by six weeks. Decision needed by Friday.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Put a name on the decision. Put a date on the consequence. Make someone else’s inaction as visible as your scrambling.&lt;/p&gt;

&lt;p&gt;And then watch what happens. Not just to the blocker, but to the system around it. Because the response you get will tell you something more valuable than whether this particular feature ships on time. It will tell you whether the organization you’re in is capable of responding to clarity, or whether it’s designed to absorb it.&lt;/p&gt;

&lt;p&gt;That’s the real move: from absorbing organizational dysfunction to diagnosing it. Making blockers impossible to ignore is the technique. Knowing what it means when they’re ignored anyway is the skill.&lt;/p&gt;
    </content>
  </entry>
  <entry>
    <title>They Aren’t Bad Leaders. They Are Misplaced.</title>
    <link href="https://martinlabuschin.com/journal/2026/february/they-arent-bad-leaders-they-are-misplaced" rel="alternate"/>
    <id>https://martinlabuschin.com/CJX</id>
    <published>2026-02-03T12:00:00+01:00</published>
    <updated>2026-03-23T15:17:17+01:00</updated>
    <content type="html">
&lt;p&gt;There is a specific kind of frustration that comes from looking upward in an org chart and realizing that multiple layers of leadership are staffed by people who are clearly intelligent, clearly skilled, and clearly wrong for their roles. Your direct manager rewrites your proposals. Their manager cannot make a strategic decision without a committee. The VP above them treats every product conversation like a sales pitch because that is what got them promoted fifteen years ago.&lt;/p&gt;

&lt;p&gt;None of them are lazy. None of them are malicious. They are simply operating with toolkits built for jobs they no longer have. And the compounding effect across multiple levels is what turns a manageable annoyance into a systemic blocker.&lt;/p&gt;

&lt;h2&gt;The Promotion That Breaks Everything&lt;/h2&gt;

&lt;p&gt;The pattern is predictable at every level. Your excellent engineer becomes an engineering manager who schedules daily standups to review code. Your strong engineering manager becomes a Director who still thinks in sprints instead of quarters. Your sharp Director becomes a VP who solves organizational problems by adding process, because process is what they understood two roles ago.&lt;/p&gt;

&lt;p&gt;Laurence J. Peter described this as inevitable incompetence in &lt;em&gt;The Peter Principle&lt;/em&gt; in 1969, but he undersold the damage. The problem is not one misplaced leader. It is that promotion-based hierarchies tend to stack misplaced leaders on top of each other. Each level inherits the limitations of the level above it. A PM dealing with one Peter-Principled manager has a relationship problem. A PM dealing with three layers of it has a structural problem.&lt;/p&gt;

&lt;p&gt;The core issue is always the same: promotion rewards past performance instead of future capability. The skills that made someone successful as an individual contributor or a first-line manager (deep expertise, individual execution, technical problem-solving) are fundamentally different from what senior leadership demands: organizational thinking, comfort with ambiguity, and the ability to lead through others rather than doing the work yourself.&lt;/p&gt;

&lt;p&gt;And because these leaders were excellent at their previous jobs, they often do not see the gap. They believe they are maintaining quality, adding value, or keeping standards high. From below, it looks like they are blocking progress at every turn.&lt;/p&gt;

&lt;h2&gt;Three Patterns That Stack&lt;/h2&gt;

&lt;p&gt;The Peter Principle does not produce one type of misplaced leader. It produces at least three distinct patterns. In many organizations, you will find all three at different levels of the same reporting chain.&lt;/p&gt;

&lt;h3&gt;The Solver&lt;/h3&gt;

&lt;p&gt;The Solver built their reputation on being the person who could fix anything. Promoted into management, they never stopped fixing. Promoted again into senior management, they still cannot stop.&lt;/p&gt;

&lt;p&gt;At the direct manager level, the Solver rewrites your proposals. At the Director level, the Solver overrides their managers’ decisions on technical details. At the VP level, the Solver pulls entire teams into fire drills because they spotted a problem three levels below their pay grade and could not resist jumping in.&lt;/p&gt;

&lt;p&gt;The compounding effect: when Solvers exist at multiple levels, every decision gets solved and re-solved on the way up. A PM’s recommendation gets rewritten by their manager, then adjusted by the Director, then questioned by the VP who “just wants to take a quick look.” By the time the decision comes back down, it belongs to nobody and satisfies nobody.&lt;/p&gt;

&lt;h3&gt;The Expert&lt;/h3&gt;

&lt;p&gt;The Expert cannot delegate because they believe (often correctly) that they know more about the subject than anyone below them. At every level, this creates the same bottleneck: everything must pass through their review.&lt;/p&gt;

&lt;p&gt;At the manager level, this means slow document turnaround and excessive feedback loops. At the Director level, it means cross-team initiatives stall because one person insists on reviewing every workstream. At the VP level, it means strategic decisions get delayed for weeks while the Expert requests “just a bit more data” on details that should have been delegated three levels down.&lt;/p&gt;

&lt;p&gt;The compounding effect: when Experts stack, the organization develops a culture of over-documentation and under-decision. Teams learn that nothing moves without exhaustive preparation, so they spend more time preparing for reviews than doing actual work. Velocity drops across the board, and nobody can point to a single cause because the bottleneck is distributed across multiple levels.&lt;/p&gt;

&lt;h3&gt;The Avoider&lt;/h3&gt;

&lt;p&gt;The Avoider was good at execution because they could control the variables, but leadership at every level means making calls with incomplete information. They respond to conflict by scheduling more meetings, seeking broader consensus, or pushing decisions back down to people who do not have the authority to make them.&lt;/p&gt;

&lt;p&gt;At the manager level, the Avoider answers your questions with “What do you think?” At the Director level, the Avoider forms a working group. At the VP level, the Avoider commissions a strategy review. The decision never gets made. It just gets surrounded by more process.&lt;/p&gt;

&lt;p&gt;The compounding effect: when Avoiders stack, the entire organization becomes allergic to commitment. Deadlines turn into suggestions. Priorities turn into wishful thinking. Someone has to make the trade-offs, but at every level, the person with the authority to decide is looking for someone else to go first.&lt;/p&gt;

&lt;h2&gt;What Actually Works&lt;/h2&gt;

&lt;p&gt;Generic advice does not help here because each pattern responds to different triggers, and dealing with multiple layers requires a different approach than managing a single relationship.&lt;/p&gt;

&lt;h3&gt;Against the Solver: Control the Entry Point&lt;/h3&gt;

&lt;p&gt;The Solver rewrites work because a finished document triggers their “I would have done it differently” reflex. This is true whether they sit one level or three levels above you.&lt;/p&gt;

&lt;p&gt;For your direct manager: present work at 60% completion with two or three options and a clear recommendation. Frame it as “I would like your input before I finalize this.”&lt;/p&gt;

&lt;p&gt;For senior leaders above your manager: control what reaches them and in what form. Work with your manager to agree on what gets escalated and what does not. When something does go up, make sure the framing gives the senior Solver a constrained choice (“Option A or B, here is my recommendation”) rather than an open canvas to redesign.&lt;/p&gt;

&lt;p&gt;The principle is the same at every level: reduce the surface area for rewriting by involving them at the right moment in the right format.&lt;/p&gt;

&lt;h3&gt;Against the Expert: Ask for Priorities, Not Approval&lt;/h3&gt;

&lt;p&gt;The Expert bottleneck breaks when you change the question from “Is this right?” to “Which of these things matters most?”&lt;/p&gt;

&lt;p&gt;For your direct manager: ask “Which of these three things should I work on first?” Experts cannot resist ranking. You shift the relationship from gatekeeper to advisor.&lt;/p&gt;

&lt;p&gt;For senior Experts above your manager: package decisions so that their review is scoped to the part where their expertise genuinely adds value. Instead of sending a full strategy document for review, send a targeted question: “We are choosing between vendor A and vendor B. Based on your experience with integration complexity, which would you lean toward?” You give them what they need (acknowledgment of their knowledge) while limiting the review scope to something that takes five minutes instead of five days.&lt;/p&gt;

&lt;h3&gt;Against the Avoider: Create External Pressure&lt;/h3&gt;

&lt;p&gt;Internal deadlines do not work for Avoiders at any level because they will simply push them back. The fix is the same whether the Avoider is your manager or two levels above you: connect decisions to external events nobody can control.&lt;/p&gt;

&lt;p&gt;“Legal needs our position by Thursday for the filing.” “The API partner’s migration window closes March 15th.” “The auditor arrives on-site next week.” “The regulator’s comment period ends in ten days.”&lt;/p&gt;

&lt;p&gt;For senior Avoiders you do not interact with directly: work with your manager to frame escalations around external deadlines. Make the external consequence visible enough that the Avoider at the top does not need courage to decide. They just need the cost of inaction to be more visible than the cost of being wrong.&lt;/p&gt;

&lt;h3&gt;When Multiple Patterns Stack&lt;/h3&gt;

&lt;p&gt;The hardest scenario is when different patterns exist at different levels. Your manager is a Solver, their Director is an Avoider, and the VP is an Expert. Each layer requires a different approach, and the approaches can conflict (involving the Solver early while limiting what reaches the Expert while creating urgency for the Avoider).&lt;/p&gt;

&lt;p&gt;In these cases, focus on the level that creates the biggest bottleneck for your specific work. You cannot fix every layer simultaneously. Identify which misplaced leader is currently your primary blocker, apply the matching strategy, and accept that the other layers will create friction you manage around rather than resolve.&lt;/p&gt;

&lt;h3&gt;When None of This Works&lt;/h3&gt;

&lt;p&gt;Sometimes the gap between the roles and the people filling them is too large for relationship management to bridge. The Solver cannot stop solving. The Expert cannot stop gatekeeping. The Avoider cannot stop avoiding. And when these patterns are stacked three levels deep, no amount of tactical skill from a PM can compensate for a leadership structure that is fundamentally misaligned with the work.&lt;/p&gt;

&lt;p&gt;The honest question at that point is not “How do I manage this better?” but “How long should I stay?”&lt;/p&gt;

&lt;p&gt;A misplaced leadership chain is not a personal failure you need to fix. It is a structural reality you need to navigate, and sometimes navigating means leaving. Document what you have learned, build the case studies, and move on before you internalize the dysfunction as normal. The most dangerous version of the Peter Principle is not the leaders above you getting stuck. It is you getting stuck underneath them and not noticing that your own growth has stopped.&lt;/p&gt;

&lt;h2&gt;The Reframe&lt;/h2&gt;

&lt;p&gt;It is easy to resent the people above you who block your work. It is harder, and more useful, to see them clearly: skilled people doing their best with the wrong tools in the wrong roles, stacked in a hierarchy that nobody designed on purpose.&lt;/p&gt;

&lt;p&gt;That does not mean you accept the situation. It means you stop expecting them to change and start adapting your approach to the specific patterns in front of you. Constrain the Solver’s canvas. Scope the Expert’s review. Create external deadlines for the Avoider. And know when none of it is enough.&lt;/p&gt;

&lt;p&gt;The best PMs I have worked with do not just manage their product well. They manage the organization around their product with the same discipline, the same pattern recognition, and the same willingness to face what is actually in front of them instead of what the org chart says should be there.&lt;/p&gt;

&lt;p&gt;That is not a workaround. That is the job.&lt;/p&gt;
    </content>
  </entry>
  <entry>
    <title>The Tech Debt Sprint Will Not Save You</title>
    <link href="https://martinlabuschin.com/journal/2026/january/the-tech-debt-sprint-will-not-save-you" rel="alternate"/>
    <id>https://martinlabuschin.com/2DR</id>
    <published>2026-01-20T12:00:00+01:00</published>
    <updated>2026-03-23T15:17:15+01:00</updated>
    <content type="html">
&lt;p&gt;A debt sprint is a confession, not a strategy. It’s an admission that debt was never part of normal prioritization, dressed up as a responsible act. If you’re a PM who has ever scheduled one, you already know this. You felt the relief when it landed on the roadmap, and you watched the velocity drop resume two sprints later. Technical debt doesn’t accumulate despite your backlog decisions. It accumulates because of them.&lt;/p&gt;

&lt;h2&gt;The bi-directional ledger&lt;/h2&gt;

&lt;p&gt;Before going further, here’s the structural idea that holds everything else together: functional debt management requires a visible ledger where both the PM and engineering record their decisions openly.&lt;/p&gt;

&lt;p&gt;The PM side: no feature prioritization without acknowledging what it costs in maintenance capacity. If you’re pushing a feature that requires shortcuts, say so. Put it on the record. “We’re shipping this with known shortcuts in [area], and we’re accepting that as debt to be addressed within [timeframe].” That’s an informed decision. Shipping the same feature and pretending the shortcuts don’t exist is negligence.&lt;/p&gt;

&lt;p&gt;The engineering side: no invisible absorption of maintenance work, and no silent debt introduction. If a shortcut gets taken in implementation without flagging it, the PM is forecasting against a fiction.&lt;/p&gt;

&lt;p&gt;When both sides are legible, you can have an honest conversation about trade-offs. You can say: “We took on debt here, here, and here over the last three sprints. Our standing allocation will cover the first two within the next cycle. The third needs a deliberate investment, and here’s what I’m proposing to move to make room.” That’s a PM managing a portfolio, not managing symptoms.&lt;/p&gt;

&lt;p&gt;Until both sides of the ledger are visible, every conversation about technical debt is theater. You’re arguing about what to fix without an honest inventory of what’s broken, who broke it, and who decided it was acceptable at the time. The PM who makes this ledger real, who insists on it as an operating norm rather than a quarterly exercise, is the PM whose roadmap estimates actually hold. That’s not a coincidence. It’s the whole point.&lt;/p&gt;

&lt;h2&gt;When velocity drops, it’s a prioritization problem&lt;/h2&gt;

&lt;p&gt;The dynamic is predictable. Engineers flag debt. The PM negotiates it down or defers it. Features ship. Three months later, velocity drops and nobody connects the cause to the effect.&lt;/p&gt;

&lt;p&gt;Most PMs know the surface indicators: rising estimate variance, a growing bug tail that never clears, sprint failures with no clear root cause. The problem isn’t that PMs don’t see these signals. It’s that they consistently misattribute them.&lt;/p&gt;

&lt;p&gt;When estimates start drifting, the instinct is to treat it as a team performance issue or a planning accuracy problem. “We need to get better at estimation” is the most common response. It’s also the wrong one. These are lagging indicators of accumulated debt. By the time they show up in your sprint metrics, the underlying code has been degrading for weeks or months.&lt;/p&gt;

&lt;p&gt;When velocity becomes unpredictable, your first question shouldn’t be “why is the team slowing down?” It should be “what are they working around that we never prioritized fixing?” That reframing changes everything. You stop looking for a performance problem and start looking for a prioritization problem. And since prioritization is yours, the accountability loops back to you.&lt;/p&gt;

&lt;p&gt;The PM who owns prioritization owns the debt. Full stop.&lt;/p&gt;

&lt;h2&gt;Catching the spiral before it starts&lt;/h2&gt;

&lt;p&gt;By the time delivery symptoms are visible in sprint metrics, you’re already in a spiral. The better practice is to build signal detection into your regular operating rhythm. Not through dashboards, but through one specific question asked consistently.&lt;/p&gt;

&lt;p&gt;Ask your engineering lead: what is the team routing around every sprint that adds friction they’ve stopped mentioning because they’ve accepted it? Not what’s broken. Not what’s ugly. What has become invisible because the team has learned to live with it? That answer is where your highest-leverage debt lives. It surfaces the problems that won’t show up in any Jira filter because nobody files tickets for things they’ve given up complaining about.&lt;/p&gt;

&lt;p&gt;When the pattern is clear, stopping feature work is your call, not engineering’s. Engineers can advocate for it. They can escalate. But the delivery priority doesn’t change unless you move something on the roadmap. That’s your authority, and it’s also your responsibility.&lt;/p&gt;

&lt;p&gt;This is also where your stakeholder communication matters most. Don’t present a deliberate slowdown as “engineering needs time to fix things.” Present it as protecting delivery commitments: “We’re seeing early signs that our current velocity isn’t sustainable. I’m allocating capacity now so we don’t lose two weeks later.” That’s a PM protecting the roadmap, not an engineering team asking for a break. The difference is more than framing. It determines whether your VP sees the decision as a retreat or as risk management. One invites scrutiny. The other earns trust.&lt;/p&gt;

&lt;h2&gt;Why dedicated debt sprints fail&lt;/h2&gt;

&lt;p&gt;The industry has largely accepted debt sprints as responsible practice. They aren’t. A debt sprint is a coping mechanism that creates the appearance of resolution while the underlying cycle restarts immediately after.&lt;/p&gt;

&lt;p&gt;A dedicated debt sprint treats debt as a special category that gets its own time. That framing guarantees it will lose to feature work in every normal sprint, because it’s been structurally separated from normal prioritization. You’ve told the system that debt is an exception. The system behaves accordingly.&lt;/p&gt;

&lt;p&gt;What works instead: standing capacity allocation. A fixed, non-negotiable percentage of every sprint that goes to maintenance and debt reduction. It never competes with feature work on a case-by-case basis because it’s not a line item. It’s a constraint.&lt;/p&gt;

&lt;p&gt;The right percentage depends on the age and state of your codebase, but a reasonable starting range is 15 to 20 percent of total engineering capacity. A mature, well-maintained product might sustain 10 percent. A product that has been shipping features without maintenance investment for multiple quarters might need 25 percent or more before it stabilizes. If you don’t know which category you’re in, start at 20 percent and adjust based on what your engineering lead tells you after three to four sprints.&lt;/p&gt;
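&lt;p&gt;The allocation itself is simple arithmetic, and it’s worth making mechanical so it can’t be renegotiated sprint by sprint. A minimal sketch, assuming story points as the capacity unit; the function name and the 40-point sprint are illustrative, not a standard:&lt;/p&gt;

```python
# Toy arithmetic for a standing maintenance allocation, using the
# starting range from the text. All numbers here are illustrative.
def split_capacity(total_points, maintenance_share=0.20):
    """Reserve a fixed share of sprint capacity for maintenance
    before planning feature work against the remainder."""
    maintenance = round(total_points * maintenance_share)
    return maintenance, total_points - maintenance

maintenance, feature = split_capacity(40)  # a 40-point sprint at 20%
print(maintenance, feature)  # 8 32
```

&lt;p&gt;The point of computing the reserve first is that feature planning only ever sees the remainder.&lt;/p&gt;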

&lt;p&gt;This isn’t generosity toward engineering. It’s self-interest. Predictable velocity requires maintained code. If you want your roadmap commitments to hold, you need the codebase to support them. Standing allocation is how you protect your own estimates.&lt;/p&gt;

&lt;h2&gt;The PM’s actual leverage&lt;/h2&gt;

&lt;p&gt;Roadmap control is the only real lever in this system. Engineers can flag debt, raise it, escalate it. None of that changes a delivery priority unless the PM moves something.&lt;/p&gt;

&lt;p&gt;This means the PM who ignores debt isn’t neutral. They’re actively choosing to accumulate it, whether they frame it that way or not. Every sprint where maintenance capacity is zero is a decision, not a default.&lt;/p&gt;

&lt;p&gt;Reframe your own mental model: maintaining code health isn’t a favor to engineering. It’s how you protect your own commitments to stakeholders. Debt reduction work belongs on the roadmap with the same visibility as features. Not in a separate backlog. Not in a Jira tag nobody filters on. On the roadmap, with your name next to the prioritization decision.&lt;/p&gt;

&lt;p&gt;When a VP of Product asks why velocity is holding steady instead of accelerating, you want to be the PM who can point to sustained capacity allocation and its direct impact on estimate accuracy. That’s a more compelling answer than explaining why you need a recovery sprint because things fell apart.&lt;/p&gt;

&lt;h2&gt;Engineering accountability, enforced through PM levers&lt;/h2&gt;

&lt;p&gt;The PM owns prioritization. Engineering owns hygiene. These aren’t interchangeable responsibilities, and conflating them is how both sides avoid accountability.&lt;/p&gt;

&lt;p&gt;Start with a simple reporting norm. At the end of each sprint, engineering surfaces what maintenance was done, what debt was knowingly introduced, and why. This can be three bullet points in a Slack message or a standing row in your sprint review template. It shouldn’t take more than ten minutes to produce or five minutes to read. If the team is spending 15 percent of every sprint on undocumented cleanup, that’s capacity you can’t see and can’t plan around. Making it legible isn’t micromanagement. It’s how you forecast against reality instead of fiction.&lt;/p&gt;

&lt;p&gt;But accountability goes deeper than reporting. &lt;strong&gt;Engineering should own a clear, working standard for code health. Not aspirational documentation that lives in a wiki nobody reads, but a standard that shapes daily decisions.&lt;/strong&gt; And here’s where the PM role requires some care. It’s not your job to define that standard or to audit engineering practices directly. That crosses into engineering management territory, and most organizations have good reasons for keeping those responsibilities separate. What is your job is to make the outcomes of that standard (or its absence) visible and plannable. A simple way to pressure-test whether a working standard exists is to ask these questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Would you show this codebase to your next employer?&lt;/li&gt;
&lt;li&gt;Could a new engineering colleague create value in their first sprint?&lt;/li&gt;
&lt;li&gt;Is this codebase ready for a handover to a new team without a month of knowledge transfer?&lt;/li&gt;
&lt;li&gt;Do you use code reviews as learning opportunities, not just gatekeeping?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions belong in your sprint review, your quarterly planning, and your one-on-ones with engineering leads. Not as gotcha questions, but as standing expectations that connect code health to delivery outcomes. When the answer to any of them is “no,” the PM’s job isn’t to fix the standard. It’s to make the gap visible. Put it on the ledger. Tie it to the capacity allocation conversation. “If onboarding a new engineer takes three sprints instead of one, that’s a cost I need to plan around. What’s the investment required to change that, and where does it rank against the other debt we’re carrying?”&lt;/p&gt;

&lt;p&gt;That’s a PM using roadmap authority to enforce engineering accountability without pretending to own engineering decisions. You’re not telling them how to write code. You’re telling them that invisible debt is unplannable debt, and unplannable debt is a roadmap risk you won’t accept silently.&lt;/p&gt;

&lt;p&gt;The PM who makes this ledger real, on both sides, is the PM whose roadmap estimates actually hold. The one who schedules a debt sprint every quarter is just confessing the same sin on a regular schedule.&lt;/p&gt;
    </content>
  </entry>
  <entry>
    <title>Patch Your Product Spec</title>
    <link href="https://martinlabuschin.com/journal/2026/january/patch-your-product-spec" rel="alternate"/>
    <id>https://martinlabuschin.com/GAP</id>
    <published>2026-01-06T12:00:00+01:00</published>
    <updated>2026-03-23T15:17:12+01:00</updated>
    <content type="html">
&lt;p&gt;Your product spec was written once, then abandoned the moment the first sprint started. Now your backlog is the only record of intent. It’s not a plan. It’s a pile of corrections to a plan that was never revisited. But once you see User Stories as Product Spec patches, you can begin managing them accordingly. This shift changes everything.&lt;/p&gt;

&lt;h2&gt;Product specs are usually written once&lt;/h2&gt;

&lt;p&gt;Most product teams treat the specification as a phase, not a practice. You write the spec to get alignment, to get approval, to kick off development. Then the real work begins and nobody goes back. The spec sits frozen at the moment of highest ignorance: before the team started building.&lt;/p&gt;

&lt;p&gt;This isn’t laziness. It’s structural. And it’s worse than structural: it’s incentivized. Nothing in the typical PM workflow creates a reason to reopen the spec. Sprint planning pulls from the backlog, not the spec. Refinement sessions focus on upcoming stories, not on the model those stories are modifying. Stakeholder reviews look at demos and dashboards, not at whether the product’s documented behavior still matches its actual behavior. In orgs that reward speed, maintaining the spec is the kind of work that makes you look slow. So PMs stop doing it. And nobody notices, because the absence of a current spec is invisible until something breaks.&lt;/p&gt;

&lt;p&gt;So the spec quietly becomes fiction. And once it’s stale, User Stories become the only living description of how the product works. But stories describe changes. They don’t describe the system. Reading your backlog to understand your product is like reading a changelog to understand an application. You can reconstruct something, but it will contradict itself, and the contradictions will hide until they’re expensive.&lt;/p&gt;

&lt;h2&gt;Treat User Stories as patches&lt;/h2&gt;

&lt;p&gt;Think about what a software patch actually is. It’s a targeted change applied to something that already exists. Patches aren’t strategy. They’re increments. They assume the existence of a base system, and they modify it in tracked, reversible, reviewable ways.&lt;/p&gt;

&lt;p&gt;Now look at how most teams use User Stories. Stories rarely emerge from a well-defined product model that the team deeply understands. They emerge from gaps. Someone realizes during refinement that the checkout flow doesn’t account for a particular payment method. Someone else flags that the onboarding sequence never addressed what happens when identity verification fails. A third person writes a story because a customer complained about something nobody had thought through.&lt;/p&gt;

&lt;p&gt;Each of these stories is a patch. It’s filling a hole in a specification that either doesn’t exist or was never kept current. The problem isn’t that stories are patches. The problem is that most teams don’t manage them the way engineers manage patches: with version control, clear diffs, and a living source of truth that gets updated every time something changes.&lt;/p&gt;

&lt;p&gt;And if you follow the analogy one step further, refinement is a pull request that still needs work and can’t be merged yet. A story in refinement is a proposed change to the product’s behavior that the team hasn’t agreed to apply. It needs review, discussion, and approval before it touches the system. Treating refinement this way forces a question most teams skip: what exactly are we changing, and does the current spec even describe what’s there now?&lt;/p&gt;

&lt;h2&gt;The bug-or-feature test&lt;/h2&gt;

&lt;p&gt;This brings up something I’ve found resolves an enormous amount of confusion in product work. Deciding whether a change is a bug or a new feature comes down to one question: does the spec cover it?&lt;/p&gt;

&lt;p&gt;If the specification doesn’t include the behavior at all, it’s a new feature. If the specification describes behavior that differs from what the software actually does, it’s a bug. That’s it.&lt;/p&gt;

&lt;p&gt;Simple as this sounds, it’s surprisingly powerful. And it exposes a painful truth about teams running in patch mode: when your specification is thin or nonexistent, almost nothing qualifies as a bug. Everything becomes a “new feature” or an “enhancement,” even when it’s clearly something the product should have handled from day one. You end up with teams burning feature budgets on what is really gap-filling work, and nobody can see it because there’s no specification to measure against.&lt;/p&gt;
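&lt;p&gt;The test is mechanical enough to write down. A minimal sketch, assuming the spec can be queried as a mapping from a behavior to its documented outcome; the function and field names are mine, purely illustrative:&lt;/p&gt;

```python
# A sketch of the bug-or-feature test as a lookup against the spec.
# The dict-based "spec" and all names here are illustrative, not a real tool.
def classify(behavior, observed, spec):
    """Return 'feature' if the spec doesn't cover the behavior,
    'bug' if spec and product disagree, else 'as-specified'."""
    documented = spec.get(behavior)
    if documented is None:
        return "feature"      # spec is silent: this is new scope
    if documented != observed:
        return "bug"          # spec and product diverge
    return "as-specified"

spec = {"grace_period_days": 14}
print(classify("grace_period_days", 7, spec))       # bug
print(classify("downgrade_flow", "blocked", spec))  # feature
```

&lt;p&gt;Notice what the sketch makes obvious: with an empty spec, the first branch always wins, and everything classifies as a feature.&lt;/p&gt;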

&lt;p&gt;This test only works, of course, if the specification exists. Which brings us back to the core problem.&lt;/p&gt;

&lt;h2&gt;Your spec needs a commit history&lt;/h2&gt;

&lt;p&gt;When a User Story changes how your product behaves, the specification should be updated to reflect that change. Not eventually. Not when someone gets around to it. As part of closing that story.&lt;/p&gt;

&lt;p&gt;In SaaS products that handle recurring billing, a story that changes renewal logic, grace periods, or entitlement behavior should update the spec the same way a commit updates the codebase. Otherwise, nobody knows whether the current grace period logic was a deliberate decision or an accident. In digital trust and identity verification, if your team ships a story that changes what happens when a document scan is inconclusive, and the spec doesn’t reflect that change, you’re shipping compliance risk. You just can’t see it yet because the drift between spec and product is invisible.&lt;/p&gt;

&lt;p&gt;Specifications distributed across hundreds of stories in a backlog tool are specifications that contradict themselves. It’s just a matter of when you find out.&lt;/p&gt;

&lt;h2&gt;Spec debt is invisible until it isn’t&lt;/h2&gt;

&lt;p&gt;Picture two teams working from the same backlog tool but different subsets of the story history. One team shipped a change three sprints ago to entitlement logic after a failed renewal. The other team is building a self-service downgrade flow, assuming the old entitlement behavior is still in place.&lt;/p&gt;

&lt;p&gt;Then a customer downgrades, hits a failed renewal, and the system does something no one intended. The support ticket gets escalated. The engineering team traces the behavior through two conflicting code paths. Both are correct according to the stories that produced them. Neither is correct according to how the product should work.&lt;/p&gt;

&lt;p&gt;I’ve watched this exact post-mortem play out. The room always lands on the same uncomfortable realization: the product was working as specified. It’s just that the specification existed only in someone’s head, and that person wasn’t in the room. When the only coherent picture of your product’s intended behavior lives in one person’s memory, that person’s departure is a product incident waiting to happen.&lt;/p&gt;

&lt;p&gt;That’s specification debt. Unlike technical debt, you don’t find it in a code review or a monitoring dashboard. You find it when QA asks what to test against and the answer is “check the backlog.”&lt;/p&gt;

&lt;h2&gt;Make the spec part of the sprint, not a side artifact&lt;/h2&gt;

&lt;p&gt;The most actionable change a product team can make: hold a specification review alongside your sprint review.&lt;/p&gt;

&lt;p&gt;Sprint reviews celebrate what was built. Specification reviews ask: what changed in how our product works? What assumptions did we correct? What interactions did we discover that weren’t in our model?&lt;/p&gt;

&lt;p&gt;I’ll be honest about what happens in practice. The spec review is the first thing that gets dropped when a sprint runs long. People nod through it because the demo felt more interesting. The way to make it stick is to keep it to fifteen minutes and tie it directly to the stories that shipped. A concrete check: is the spec updated to reflect everything we shipped this sprint? If no, the story isn’t done.&lt;/p&gt;

&lt;p&gt;Earlier in this piece I described the political problem: spec maintenance is invisible work in orgs that optimize for velocity. The “story isn’t done” rule is what makes it visible. It turns an act of documentation into a definition-of-done criterion, which means it shows up in the workflow instead of competing with it.&lt;/p&gt;

&lt;p&gt;To make this sustainable, you need a product model worth maintaining. Not a fifty-page PRD that nobody reads. A single page per product domain. For identity verification, that page has a state diagram (pending, verified, inconclusive, expired, rejected, and what moves a verification between them), a decision table for boundary conditions, and a changelog showing which story last changed each element and when. That’s it. The minimum viable version that survives contact with a real sprint cadence isn’t comprehensive. It’s focused, current, and referenced weekly.&lt;/p&gt;
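&lt;p&gt;A one-page model at that level of detail is concrete enough to sketch in code, which is also a decent test of whether it’s actually complete. A minimal illustration using the verification states above; the transition set is an assumption for the example, not any real product’s rules:&lt;/p&gt;

```python
# Illustrative sketch of the one-page model: the verification states
# from the text plus the transitions the spec allows. The edge set is
# an assumption for the example, not a real product's rules.
ALLOWED = {
    ("pending", "verified"),
    ("pending", "inconclusive"),
    ("pending", "rejected"),
    ("inconclusive", "verified"),  # e.g. after manual review
    ("inconclusive", "rejected"),
    ("verified", "expired"),
    ("expired", "pending"),        # re-verification restarts the flow
}

def transition(current, new):
    """Apply a state change, failing loudly when the spec has no edge
    for it; an undocumented transition is exactly the drift to catch."""
    if (current, new) not in ALLOWED:
        raise ValueError(f"spec does not allow {current} to {new}")
    return new

state = transition("pending", "inconclusive")
state = transition(state, "verified")
print(state)  # verified
```

&lt;p&gt;If a shipped story needs a transition that isn’t in the set, that’s the spec update the story owes before it counts as done.&lt;/p&gt;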

&lt;h2&gt;Stop patching into a void&lt;/h2&gt;

&lt;p&gt;User Stories are a powerful tool when they’re treated as managed patches to a living specification. They’re an expensive coping mechanism when they’re written into a void. If your backlog feels like an endless stream of corrections with no source of truth underneath them, the fix isn’t better stories. It’s making the spec a living part of your work, updated with every story you ship, reviewed every sprint. Because right now, if you can’t point to a current specification, you can’t tell the difference between a feature and a bug fix. And neither can anyone else on your team.&lt;/p&gt;
    </content>
  </entry>
</feed>
