
Competitive Scenario Planning, Constrained by Design

  • Writer: bradluffy
  • 1 day ago
  • 5 min read

Competitive Scenario Planning with Sparse Inputs: What the Output Reveals—and Why

For this post, I intentionally ran the Competitive Scenario Planning artifact using minimal inputs. This was not an oversight—it was a deliberate test of how SolveBoard’s artifacts behave when information is incomplete, ambiguous, or selectively provided.


What follows is an analysis of:

  • What I entered

  • Why the AI produced the scenarios it did

  • What was not produced (and why)

  • How different inputs would have driven materially different outputs


This is less about “AI intelligence” and more about input discipline, governance, and control—the core premise behind SolveBoard artifacts.


What I Actually Gave the Artifact

I completed only three substantive areas of the input guide:


1. Market Context (High-Level Only)

I defined the market as the AI artifacts industry and stated a general goal of reaching professionals and educators. I did not:

  • Name specific competitors

  • Define pricing models

  • Describe substitute products

  • Segment the market beyond audience type


This immediately constrained the analysis to structural patterns, not company-specific moves. That matters.


2. Key Uncertainties and Drivers (Very Focused)

I explicitly identified:

  • AI model drift as a limiting factor for agents

  • Consistency and uniformity of outputs as the dominant value driver


These two statements did most of the analytical work. They effectively anchored the entire scenario set around governance vs. drift, not speed, cost, or innovation breadth.


3. No Explicit Strategic Options

I never listed competing strategies in the “Strategic Options” section. That omission forced the artifact to infer only one viable option from the uncertainty framing itself: governance-first artifacts.

Per the bridge rules, the AI could not invent alternatives.
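
To make the sparsity concrete, here is a minimal sketch of the first run's inputs expressed as plain data. The field names are my own shorthand for the input guide sections, not SolveBoard's actual schema, and are only meant to show how little the artifact had to work with.

```python
# Hypothetical representation of the first (sparse) run's inputs.
# Field names are illustrative shorthand, not SolveBoard's actual schema.
sparse_inputs = {
    "market_context": {
        "market": "AI artifacts industry",
        "goal": "reach professionals and educators",
        # Deliberately omitted: competitors, pricing, substitutes, finer segments.
    },
    "key_uncertainties": [
        "AI model drift limits agent reliability",
        "Output consistency and uniformity drive value",
    ],
    "strategic_options": [],   # left empty on purpose
}

# Under the bridge rules, an empty section stays empty: the artifact may only
# rotate the supplied uncertainties across futures, never invent new axes.
assert sparse_inputs["strategic_options"] == []
```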


Why the Output Looked the Way It Did

Given those inputs, the artifact produced three scenarios, all variations on the same axis:

  • Drift-constrained AI

  • Deterministic artifact demand

  • Hybrid fragmentation


This was not redundancy—it was boundary-respecting behavior.


Because I never introduced:

  • Regulatory pressure

  • Cost competition

  • Platform lock-in

  • Enterprise procurement dynamics

  • Open-source threats

…the AI was not allowed to explore those dimensions. Instead, it rotated the same uncertainty (drift vs. determinism) across plausible futures, exactly as the bridge instructions require.


The result was a tight scenario cluster, not a broad strategic map.


Why “Governance-First” Emerged as the Robust Strategy

This is the most important insight.

The AI did not decide that governance-first artifacts were best. That conclusion was already embedded in the inputs.


By asserting:

  • Drift degrades trust

  • Professionals and educators value consistency

…I implicitly eliminated strategies centered on:

  • Autonomous agents

  • Exploratory tooling

  • Open-ended generation


The artifact simply tested that logic across futures and found it robust in two scenarios and viable in the third. In other words: the output was not creative—it was obedient.
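
As a way to picture that obedience, here is a hedged sketch of the check the run amounted to: a single inferred option rated against each supplied scenario. The scenario labels and ratings mirror the run described above; the structure itself is my own illustration, not the artifact's internal format.

```python
# Hypothetical robustness check: one option, three scenarios.
# Labels follow the run above; the structure is illustrative only.
scenarios = [
    "drift-constrained AI",
    "deterministic artifact demand",
    "hybrid fragmentation",
]

# With no alternatives declared, only one option could be tested.
option = "governance-first artifacts"

# As reported by the run: robust in two scenarios, viable in the third.
rating = {
    "drift-constrained AI": "robust",
    "deterministic artifact demand": "robust",
    "hybrid fragmentation": "viable",
}

for s in scenarios:
    print(f"{option} under '{s}': {rating[s]}")
```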


What the Artifact Did Not Produce—and Why

Several things are notably absent:

  • No named competitors

  • No pricing pressure scenarios

  • No platform consolidation narrative

  • No regulatory shock

  • No education-specific procurement dynamics


This absence is not a weakness. It is proof that the bridge respected the Canonical Input Rule and refused to hallucinate missing context.


A traditional AI prompt would have filled these gaps. SolveBoard artifacts are explicitly designed not to.


How Different Inputs Would Have Changed the Output

Here are three concrete examples:


If I Had Named Competitors

Adding companies, platforms, or product categories would have forced:

  • Divergent rival behaviors per scenario

  • Asymmetric strengths and weaknesses

  • Strategy differentiation beyond “governance-first”


If I Had Added Strategic Options

Listing alternatives like:

  • Agent marketplaces

  • Hybrid artifact-agent models

  • Vertical-specific artifacts

…would have produced a matrix-style option comparison, not a single dominant strategy.


If I Had Introduced Constraints

Constraints such as:

  • Monetization requirements

  • Enterprise sales cycles

  • Regulatory compliance

…would have narrowed which strategies were feasible, not just attractive.
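
A small sketch of that distinction, under the assumption that constraints act as hard feasibility filters applied before any attractiveness scoring. The option names and constraint checks below are hypothetical, chosen only to show the filtering step.

```python
# Hypothetical illustration: constraints filter feasibility *before*
# attractiveness is ever scored. Names and checks are invented for this sketch.
options = {
    "agent marketplace":           {"needs_long_sales_cycle": True,  "monetizable_now": False},
    "governance-first artifacts":  {"needs_long_sales_cycle": False, "monetizable_now": True},
    "vertical-specific artifacts": {"needs_long_sales_cycle": False, "monetizable_now": True},
}

constraints = {
    "monetization required":          lambda o: o["monetizable_now"],
    "no enterprise sales capacity":   lambda o: not o["needs_long_sales_cycle"],
}

# Feasible = passes every constraint; only feasible options would then be
# ranked for attractiveness across scenarios.
feasible = [
    name for name, attrs in options.items()
    if all(check(attrs) for check in constraints.values())
]
print(feasible)  # ['governance-first artifacts', 'vertical-specific artifacts']
```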


The Real Lesson: Outputs Are a Mirror of Inputs

This run demonstrates something subtle but critical:


SolveBoard artifacts do not amplify intelligence. They preserve intent.


The AI did not “miss” anything. It executed exactly within the boundaries I set—no more, no less.


That is the difference between:

  • AI as an idea generator

  • AI as a governed analytical instrument


And that difference only becomes visible when you intentionally run the system with sparse inputs.


A Second Run: What Changed—and Why the Output Expanded

After publishing the first scenario run, I reran the same Competitive Scenario Planning artifact using a slightly richer—but still disciplined—set of inputs. The goal was not to “get better answers,” but to observe how specific additions reshape the analytical surface area of the output.

The difference between the two runs is instructive.


What Changed in the Inputs

Compared to the first run, the second input set introduced four meaningful changes:


1. Named Competitor Archetypes

Instead of leaving competitors abstract, I explicitly referenced:

  • Large consulting firms (e.g., McKinsey-, Bain-style organizations)

  • Smaller, automation-forward players


This single change gave the AI permission to differentiate behavior by organizational type, not just by strategy category.


2. A Clear Strategic Objective

I added a concrete objective:

Preserving the value of the artifact as a governed tool for AI use.


This shifted the output from descriptive scenarios to evaluative comparisons, because “value preservation” creates a judgment axis.


3. Explicit Strategic Options

For the first time, I allowed multiple strategies to be tested:

  • Governance-embedded artifacts

  • Broad AI agent enablement

  • Consulting-branded artifact libraries


This unlocked option-by-scenario performance scoring, which was impossible in the first run because no alternatives were declared.


4. Exploratory Mode Authorization

By explicitly allowing Exploratory Mode, I widened the hypothesis space without relaxing governance rules. The AI could expand laterally—but only using supplied context.
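
For comparison with the first sketch, here is the second run's input set in the same hypothetical shorthand. Again, the field names are mine, not SolveBoard's; the point is to show which sections went from empty to populated.

```python
# Hypothetical representation of the second (richer) run's inputs,
# using the same illustrative field names as the first sketch.
richer_inputs = {
    "market_context": {
        "market": "AI artifacts industry",
        "goal": "reach professionals and educators",
        "competitor_archetypes": [
            "large consulting firms (McKinsey-, Bain-style organizations)",
            "smaller, automation-forward players",
        ],
    },
    "key_uncertainties": [
        "AI model drift limits agent reliability",
        "Output consistency and uniformity drive value",
    ],
    "strategic_objective": "preserve the artifact's value as a governed tool for AI use",
    "strategic_options": [
        "governance-embedded artifacts",
        "broad AI agent enablement",
        "consulting-branded artifact libraries",
    ],
    "exploratory_mode": True,   # wider hypothesis space, same governance rules
}
```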


How Those Changes Showed Up in the Output

The resulting output is visibly more structured, but also more opinionated—and for a very specific reason.


Scenario Differentiation Became Behavioral, Not Just Structural

In the first run, scenarios varied primarily by market condition (drift vs. determinism). In the second run, scenarios also varied by who does what:

  • Large firms leaning on human-mediated frameworks

  • Smaller players exposing themselves to drift via automation

  • Segmented adoption patterns between professionals and educators


This happened because competitors were no longer abstract—they had identities.


Strategic Options Were Stress-Tested, Not Implied

Previously, “governance-first artifacts” emerged as the only viable path because no alternatives existed.

In the second run:

  • Governance-embedded artifacts

  • AI agents

  • Consulting-branded libraries

…were all explicitly evaluated across all three scenarios, revealing why governance holds up—and where it does not.


This distinction matters. The output moved from affirmation to comparison.
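
One way to picture the move from affirmation to comparison is as an option-by-scenario matrix. The sketch below is illustrative: the option and scenario labels come from the runs above, while the qualitative scores are placeholders rather than the artifact's actual ratings.

```python
# Hypothetical option-by-scenario comparison. Labels mirror the second run;
# the qualitative scores are placeholders, not the artifact's actual output.
scenarios = [
    "drift-constrained AI",
    "deterministic artifact demand",
    "hybrid fragmentation",
]
options = [
    "governance-embedded artifacts",
    "broad AI agent enablement",
    "consulting-branded artifact libraries",
]

scores = {   # scores[option] -> one placeholder rating per scenario, in order
    "governance-embedded artifacts":         ["robust",  "robust", "viable"],
    "broad AI agent enablement":             ["fragile", "weak",   "viable"],
    "consulting-branded artifact libraries": ["viable",  "viable", "robust"],
}

# Print a simple comparison table: one row per option, one column per scenario.
print(f"{'option':<40}" + "".join(f"{s:<32}" for s in scenarios))
for opt in options:
    print(f"{opt:<40}" + "".join(f"{r:<32}" for r in scores[opt]))
```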


Contingencies Became Operational

The contingency triggers in the second run are more actionable:

  • What to do if agent reliability improves

  • How to respond if education adoption accelerates independently


These triggers only emerged because:

  • Options existed

  • Segments were differentiated

  • Objectives were explicit


No extra “intelligence” was added—only decision hooks.
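
A minimal sketch of what decision hooks mean in practice, assuming each contingency is expressed as a watch-condition paired with a response. The conditions and responses paraphrase the run above; the structure and the review helper are my own illustration.

```python
# Hypothetical decision hooks: each contingency pairs a watch-condition with a
# response. Structure is illustrative; wording paraphrases the run above.
contingency_triggers = [
    {
        "watch": "agent reliability improves materially",
        "respond": "revisit broad AI agent enablement as a complement to governed artifacts",
    },
    {
        "watch": "education adoption accelerates independently",
        "respond": "prioritize education-specific artifact packaging and distribution",
    },
]

def review(signals: dict) -> list:
    """Return the responses whose watch-conditions are currently true."""
    return [t["respond"] for t in contingency_triggers if signals.get(t["watch"], False)]

print(review({"agent reliability improves materially": True}))
```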


The Core Takeaway: Inputs Don’t Add Intelligence—They Add Degrees of Freedom

Comparing the two runs makes something very clear: the artifact did not get smarter. It was allowed to think in more directions.


The first run demonstrated constraint discipline. The second demonstrated controlled expansion. Both outputs are correct. Both are useful. They answer different questions because they were given different permissions.


That is the point of SolveBoard artifacts.


They do not reward verbosity. They reward intentional input design, and they make the relationship between what you supply and what you get back explicit, auditable, and repeatable—exactly what a governed decision system should do.

