Opportunity Solution Tree (OST)
Teresa Torres' discovery framework: desired outcome → 3 opportunities → 3 solutions each → POC selection with feasibility/impact/market fit scoring and experiment design.
Your stakeholder asked for feature X, the exec team wants feature Y, and your backlog already has feature Z — but none of them connect to a business outcome you can actually move. The OST forces the conversation upstream: what outcome are we driving, which three customer problems matter most, and what's the cheapest experiment to learn before building anything?
Who it's for: PMs running discovery, teams adopting continuous discovery, product leads clarifying vague OKRs, founders moving from feature requests to outcomes, UX leads structuring problem-solution exploration
Example
"Build an OST for increasing trial-to-paid conversion from 15% to 25%" → 3 opportunities (no value in trial, pricing unclear, free plan good enough) + 3 solutions per opportunity (e.g., for the first: guided checklist, time-to-value triggers, concierge onboarding) + POC scoring table + A/B test plan
New here? Follow the 3-minute setup guide. Already set up? Copy the template below.
# Opportunity Solution Tree (OST)
Build a Teresa Torres-style Opportunity Solution Tree by extracting target outcomes from stakeholder requests, generating opportunity options (problems to solve), mapping potential solutions, and selecting the best proof-of-concept (POC) based on feasibility, impact, and market fit. Move from vague requests to structured discovery — avoiding feature-factory syndrome and premature solution convergence.
Not a roadmap generator — a structured discovery process that outputs validated opportunities with testable solution hypotheses.
## What is an OST?
Visual framework (from Teresa Torres, *Continuous Discovery Habits*) connecting:
```
Desired Outcome (1)
|
+-----------+-----------+
| | |
Opportunity Opportunity Opportunity (3)
| | |
+-+-+ +-+-+ +-+-+
| | | | | | | | |
S1 S2 S3 S1 S2 S3 S1 S2 S3 (9 solutions)
|
Experiments (tests)
```
1. **Desired Outcome** (business metric to move)
2. **Opportunities** (customer problems/needs driving the outcome)
3. **Solutions** (ways to address each opportunity)
4. **Experiments** (tests to validate solutions)
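For teams that track their trees in code or export them to a spreadsheet, the four levels map naturally onto a small data model. A hypothetical Python sketch (the field names are ours, not from the book):

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    kind: str            # e.g. "A/B test", "prototype", "concierge"
    success_metric: str  # what the test must move

@dataclass
class Solution:
    name: str
    hypothesis: str
    experiments: list[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    problem: str         # a customer problem, never a feature
    evidence: str        # where the problem shows up in your data
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OST:
    outcome: str         # one measurable business metric
    opportunities: list[Opportunity] = field(default_factory=list)

tree = OST(outcome="Trial-to-paid conversion 15% -> 25%")
tree.opportunities.append(
    Opportunity(problem="Users don't experience value during trial",
                evidence="onboarding analytics, exit surveys"))
```

Keeping `outcome` a single string (not a list) mirrors the framework's constraint: one tree, one metric.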
## Why This Works
- **Outcome-driven** — starts with business goal, not feature requests
- **Divergent before convergent** — explores multiple opportunities before picking solutions
- **Problem-focused** — opportunities are problems, not solutions in disguise
- **Testable** — each solution maps to experiments
- **POC selection** — evaluates feasibility, impact, market fit before committing
## Anti-Patterns
- Not a feature list (opportunities = customer problems, not "we need dark mode")
- Not solution-first ("customers struggle with Y," not "we should build X")
- Not a project plan (discovery tool, not delivery)
- Not one-time (evolves with experiments)
## When to Use
**Use:** Stakeholder requests a feature, starting discovery, vague OKRs, prioritizing problems, aligning team on outcomes.
**Don't use:** Problem already validated, tactical bug fixes, stakeholders demand specific solution.
## Application (Two-Phase)
### Phase 1: Generate OST
#### Step 0: Gather Context
- Stakeholder request / product initiative
- Existing materials: PRD drafts, OKRs, strategy memos, customer complaints, research
- Product context: positioning, competitor reviews, usage data, support tickets, churn reasons
#### Question 1: Desired Outcome
1. **Revenue growth** — ARR, expansion, new streams
2. **Customer retention** — churn, activation, engagement
3. **Customer acquisition** — sign-ups, trial conversion, growth
4. **Product efficiency** — support costs, time-to-value, operations
**Make it measurable:** "Increase trial-to-paid conversion from 15% to 25%" — not "improve conversion."
#### Question 2: Identify Opportunities (3)
For the outcome, list **3 customer problems** that could drive it. Each with evidence from context.
**Example (Outcome = Increase trial-to-paid):**
1. Users don't experience value during trial — evidence: onboarding analytics, exit surveys
2. Pricing unclear or misaligned — evidence: funnel drop-off at pricing, sales objections
3. Free plan is "good enough" — evidence: freemium retention, support tickets for workarounds
**Critical:** Opportunities must be **problems**, not solutions in disguise ("we need a mobile app" is a solution in disguise).
#### Question 3: Solutions (3 per opportunity, 9 total)
For each opportunity, generate **3 solutions**, each with a hypothesis and an experiment idea.
**Example (Opportunity 1):**
1. Guided onboarding checklist — structured guidance increases completion; A/B test activation
2. Time-to-value triggers — proactive nudges prevent drop-off; track engagement with prompts
3. Human-assisted onboarding — personal touch for high-intent; offer to 50 users, measure vs. control
### Phase 2: Select POC
#### Question 4: Evaluate Solutions
Score each on **Feasibility** (1=months, 5=days), **Impact** (1=minimal, 5=major shift), **Market Fit** (1=no care, 5=actively request).
| Solution | Feasibility | Impact | Market Fit | Total |
|----------|-------------|--------|------------|-------|
| Guided checklist | 4 | 4 | 5 | 13 |
| Time-to-value triggers | 3 | 3 | 4 | 10 |
| Human-assisted | 5 | 5 | 3 | 13 |
**Recommended POC:** Guided checklist — it ties with human-assisted on total (13), but no dimension scores below 4. Pick the solution that balances feasibility, impact, and market fit, not just the highest total.
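The 13–13 tie in the table above is exactly where "balance" matters. One hypothetical way to encode that: rank by total score, then break ties on the weakest dimension (this tie-break is our convention; the framework prescribes no formula):

```python
def rank_pocs(solutions):
    """Rank candidate POCs by total score; break ties by the weakest
    dimension, so a balanced 4/4/5 beats a lopsided 5/5/3."""
    def key(item):
        name, (feasibility, impact, market_fit) = item
        total = feasibility + impact + market_fit
        return (total, min(feasibility, impact, market_fit))
    return sorted(solutions.items(), key=key, reverse=True)

scores = {
    "Guided checklist":       (4, 4, 5),
    "Time-to-value triggers": (3, 3, 4),
    "Human-assisted":         (5, 5, 3),
}
ranked = rank_pocs(scores)
print(ranked[0][0])  # Guided checklist — same total as human-assisted, higher minimum
```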
#### Question 5: Define Experiment
1. **A/B test** — Build MVP, show 50%, compare conversion (quantitative, needs traffic)
2. **Prototype + usability test** — Clickable prototype, 10 users, qualitative (early-stage, low traffic)
3. **Manual concierge** — Run manually with 20 users, measure outcomes (learn fast, no dev)
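Option 1 only works if the trial has enough traffic. A quick per-arm sample-size estimate for the 15% → 25% example, using the standard normal-approximation power formula (95% confidence, 80% power — our sketch, not part of the framework):

```python
from math import sqrt, ceil

def ab_sample_size(p1, p2):
    """Per-arm sample size to detect a lift from baseline rate p1 to p2
    with a two-sided z-test at alpha=0.05 and 80% power
    (normal approximation for two proportions)."""
    z_alpha, z_beta = 1.96, 0.8416  # fixed for 95% confidence, 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

print(ab_sample_size(0.15, 0.25))  # 250 per arm, so ~500 trial users total
```

If your trial doesn't see that many users within the experiment window, prefer option 2 or 3.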
## Output Structure
```markdown
# OST + POC Plan
## Desired Outcome
Outcome: [From Q1]
Target metric: [Measurable goal]
## Opportunity Map
### Opportunity 1: [Name]
Problem: [Description] / Evidence: [Data]
Solutions: 1. [A] 2. [B] 3. [C]
[... Opportunity 2, 3 ...]
## Selected POC
Opportunity: [Selected] / Solution: [Selected]
Hypothesis: "If we [X], then [metric] will [change] because [rationale]"
Experiment: [Type, participants, duration, success criteria]
Scores: F=__ I=__ MF=__ Total=__
## Next Steps
1. Build experiment
2. Run
3. Measure
4. Decide
```
## Examples
```markdown
Outcome: Trial-to-paid 15% → 25%
Opportunity: Users don't reach "aha" moment
Solution: Guided onboarding checklist
Experiment: A/B test, 50% of trials, 2 weeks
```
## Common Pitfalls
1. **Opportunities disguised as solutions** — "We need a mobile app" → reframe as "Mobile-first users can't access product on the go"
2. **Jumping to one solution** — Force 3+ per opportunity
3. **Vague outcome** — "Improve UX" unmeasurable; use "NPS 30 → 50"
4. **No experiments** — Every solution maps to a test
5. **Analysis paralysis** — Cap at 3 opportunities, 3 solutions each; pick POC, run
## References
- `problem-statement` — Frames opportunities as customer problems
- `jobs-to-be-done` — Identifies opportunities from JTBD
- `epic-hypothesis` — Turns validated solutions into testable epics
- `discovery-interview-prep` — Validates opportunities
- Teresa Torres, *Continuous Discovery Habits* (2021)
- Jeff Patton, *User Story Mapping* (2014)
- Ash Maurya, *Running Lean* (2012)
## What This Does
Two-phase structured discovery: Phase 1 builds the OST (outcome → 3 opportunities → 3 solutions per opportunity → 9 total solutions), Phase 2 selects the POC using feasibility/impact/market fit scoring and picks an experiment (A/B, prototype, concierge).
Built on Teresa Torres' Continuous Discovery Habits. Pairs with problem-statement, jobs-to-be-done, epic-hypothesis, and discovery-interview-prep.
## Quick Start

```shell
mkdir -p ~/Documents/OST
mv ~/Downloads/CLAUDE.md ~/Documents/OST/
cd ~/Documents/OST
claude
```
Provide the stakeholder request, existing context (OKRs, customer research, usage data), and desired outcome. Claude walks through both phases and delivers a complete OST + POC plan.
## The Structure

```
Outcome (1)
    ↓
Opportunity × 3
    ↓
Solution × 3 per opportunity (9 total)
    ↓
POC selection (score feasibility + impact + market fit)
    ↓
Experiment (A/B, prototype, concierge)
```
## POC Scoring Rubric
| Dimension | 1 | 3 | 5 |
|---|---|---|---|
| Feasibility | Months of work | Weeks | Days |
| Impact | Minimal outcome movement | Moderate | Major shift |
| Market Fit | Customers don't care | Nice-to-have | Actively request |
Pick the solution balancing all three — not just the highest total.
## Tips & Best Practices
- Opportunities are problems, not solutions in disguise. "We need a mobile app" is not an opportunity.
- Force 3+ solutions per opportunity. One solution = stakeholders already decided.
- Make outcomes measurable. "Improve UX" is uncheckable; "NPS 30 → 50" is.
- Cap exploration at 3×3. Analysis paralysis kills the framework.
- Every solution maps to an experiment. No experiment = no OST, just a wish list.
## Common Pitfalls
- Opportunities disguised as solutions ("we need dark mode")
- Jumping to one solution without divergence
- Vague, unmeasurable outcomes
- Skipping experiment design ("let's just build it")
- Generating 20 opportunities and 50 solutions with no POC selection