---
name: 100x
description: Hire a team of parallel latest-Opus research agents to 100x-amplify any strategic decision. Use when user says "/100x", "100x this", or "hire a team to think about this". Works on any topic: product strategy, marketing, architecture, positioning, competitive analysis.
---

# /100x: Parallel Strategic Research Team

Spawn 4-7 specialized Opus agents in parallel, each attacking the problem from a different angle. All agents use extended thinking and web research. Synthesize their findings into a ranked set of plays.

> **Customize this template before installing.** The customize prompt that came with this file walks you through 5 questions about your domain, constraints, lenses, and where to save outputs.

**Model handling:** Always pass `model: "opus"` to the Agent tool. The alias is dynamic: it routes to whatever the latest Opus is at the time the skill runs. Never hardcode a version number; version labels drift, the `"opus"` alias does not.

## Protocol

### Step 0: Load Research Memory

Before anything else, look up prior /100x runs on related topics.

> FILL IN: where you save research outputs. Options:
>
> - **Supabase table**: query `research_outputs` (or your equivalent). Best if you want fast cross-session memory and can run SQL.
> - **Flat folder**: read all files in `{{OUTPUT_PATH}}` matching `*-100x.md`. Best for simple setups.
> - **Obsidian vault**: read all files in your vault's research folder. Best if you already live in Obsidian.
> - **Nowhere**: skip this step entirely. Each run is independent. Acceptable if you're just starting out.

If prior research exists:
1. **Build a preamble** summarizing what's been explored before, key conclusions, what was rejected and why, open questions that remain
2. **Identify gaps**: what angles or domains haven't been covered yet
3. **Note outcomes**: if any prior research was marked applied with outcome data, reference what actually happened
4. **Inject the preamble** into every agent prompt in Step 2 under a "PRIOR RESEARCH" section

If no prior research exists, skip the preamble and proceed normally.

The preamble is NOT a constraint: agents should build on prior findings, challenge them, or go deeper. It prevents re-treading the same ground and rewards compounding.
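
A minimal sketch of this step for the flat-folder option, assuming runs are saved as `*-100x.md`; `summarize_prior_run` is a hypothetical placeholder for however you condense each file:

```python
# Sketch of Step 0, flat-folder variant. Assumes the *-100x.md naming
# convention; summarize_prior_run() is a placeholder, not a real API.
from pathlib import Path

def summarize_prior_run(path: Path) -> str:
    # Placeholder: in practice this might be an LLM call or a parse of
    # the file's synthesis and red-team sections.
    return "\n".join(path.read_text().splitlines()[:10])

def build_preamble(output_path: str, topic_keywords: list[str]) -> str:
    runs = sorted(Path(output_path).glob("*-100x.md"))
    related = [
        p for p in runs
        if any(k.lower() in p.read_text().lower() for k in topic_keywords)
    ]
    if not related:
        return ""  # no prior research: skip the preamble, proceed normally
    parts = [f"### {p.name}\n{summarize_prior_run(p)}" for p in related]
    return "PRIOR RESEARCH:\n\n" + "\n\n".join(parts)
```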

### Step 0.5: Load Business State (optional but recommended)

> FILL IN: your current business state and any permanent rules that should be injected into every agent prompt.
>
> Example permanent rules from my own use:
> - "I build AND do outreach simultaneously. Both tracks at full speed. Do NOT recommend gating building on outreach validation."
> - "I have N paying clients. Don't recommend revenue-validation steps I've already done."
> - "Time estimates from past tasks show I overestimate by Nx. Calibrate accordingly."
>
> If you're new to /100x or your situation is simple, leave this empty. The skill works without it.

If you have permanent corrections, inject them into every agent prompt under a "BUSINESS STATE / PERMANENT RULES" section:

```
PERMANENT RULES:
- {your rule 1}
- {your rule 2}
- ...
```

### Step 0.9: Premise Gate (mandatory)

Before spawning any agents, do a 60-second sanity check. Two questions:

**Question 1: Does this need a /100x?**

Not every idea warrants 4-7 parallel Opus agents. Assess:
- Is this multi-dimensional (benefits from 4+ distinct research angles)? → /100x
- Is this single-dimensional with a clear path (one domain, known constraints)? → Answer directly, skip /100x
- Is this a build task disguised as a research question? → Just build it, skip /100x
- Has this already been researched in a prior /100x (Step 0 found it)? → Summarize prior findings, ask what's changed

If it doesn't need /100x, tell the user: "This doesn't need the full research team. Here's the answer: [direct response]. Want me to 100x it anyway?" Respect their call.

**Question 2: Context injection, not idea rejection.**

Load current state (from Step 0 + 0.5) and inject it into the agent prompts as CONTEXT, not as a gate. The agents should know your current state so their proposals are grounded, but this information is never used to argue against exploring the idea. The 100x amplifies ideas, it doesn't filter them.

The red team (Step 5.5) handles "should we actually build this NOW vs later" after the research is done. That's the right place for timing concerns, not the front gate.

### Step 1: Decompose the Problem

Read the user's request and identify 4-6 independent research angles. Each angle should be a distinct discipline or perspective that would generate non-overlapping insights.

> FILL IN: your domain-specific lenses. Mine default to these for most strategic questions:
>
> - **Art / Experience** - aesthetic precedents, emotional design, sensory impact
> - **Business / Leverage** - ROI, positioning, buyer journey, revenue path
> - **Technical** - architecture, feasibility, performance, cost
> - **Viral / Growth** - shareability, distribution, launch strategy, content hooks
> - **Competitive** - landscape mapping, white space, differentiation
> - **Market / Timing** - trends, demand signals, cultural moment
>
> If your domain is different (legal, healthcare, finance, education, climate, hardware, etc.), pick lenses that match. Examples:
> - Healthcare: Clinical, Regulatory, Operational, Patient Experience, Reimbursement, Competitive
> - Hardware: Mechanical, Electrical, Manufacturing, Cost, Regulatory, Distribution
> - Finance: Macro, Micro, Risk, Regulatory, Behavioral, Competitive

If Step 0 found prior research, bias angle selection toward unexplored territory and open questions from previous runs.

### Step 2: Spawn the Team

Launch all agents in parallel using the Agent tool with:
- `model: "opus"` (always the latest Opus, no exceptions)
- `run_in_background: true`
- `subagent_type: "general-purpose"`
- Each agent gets a detailed prompt with full context + their specific research mandate
- Each prompt explicitly instructs: "Think deeply. Research online. Be concrete, not generic."
- Each prompt asks for 2-3 specific proposals ranked by leverage (a starting skeleton follows this list)
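
A starting skeleton for each agent's prompt, assembled from the sections above; adapt it to your domain:

```
You are one of {N} parallel researchers analyzing: {the user's request, in full}

YOUR ANGLE: {this agent's lens, e.g. Competitive}
YOUR MANDATE: {the specific research mandate for this angle}

PRIOR RESEARCH:                       <- only if Step 0 found prior runs
{preamble}

BUSINESS STATE / PERMANENT RULES:     <- only if Step 0.5 is configured
{rules}

Think deeply. Research online. Be concrete, not generic.
Deliver 2-3 specific proposals, ranked by leverage.
```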

### Step 3: Collect Results

Wait for all agents to complete. Resume each one to collect its findings. If an agent is still running, check progress via its output file and wait.

### Step 4: Synthesize

Cross-reference all findings and identify:
1. **Convergence**: multiple agents independently reached the same conclusion (high confidence)
2. **Novel insights**: unique findings from individual angles
3. **Contradictions**: agents disagree (flag for user decision)
4. **White space**: gaps nobody is filling

### Step 5: Present

Deliver a ranked set of plays with:
- Clear #1 recommendation with reasoning
- Supporting evidence from multiple agents
- Technical feasibility assessment
- Effort/impact estimate
- Next actions if the user says go (a sample layout follows this list)
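
One possible layout for the deliverable, mirroring the synthesis categories from Step 4; a sketch, not a required format:

```
## 100x: {topic}

**#1 Recommendation: {play}**
{reasoning, supporting evidence from multiple agents}

## Convergence Map
- {conclusion}: {N}/{M} agents converged
- {contradiction}: agents disagree, user decision needed

## Ranked Plays
1. {play}: feasibility {notes}, effort {S/M/L}, impact {low/med/high}
2. ...

## Next Actions (if go)
1. ...
```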

### Step 5.5: Red Team

After synthesizing but BEFORE saving or presenting as final, run a critical evaluation against the synthesis. Not optional, not skippable.

**Evaluate the synthesis for:**

1. **3-5 flaws**: weak assumptions, failure modes, things that break when they hit reality. Zero is not acceptable. For each flaw, name what specifically breaks.
2. **Weakest assumption**: the single load-bearing belief most likely to be wrong.
3. **What's missing**: angles nobody covered, questions nobody asked, tests nobody proposed.
4. **Convergence skepticism**: did agents converge because the answer is correct, or because it's obvious? 5/5 agents agreeing on an aesthetically compelling idea is weaker evidence than 3/5 agreeing on a counterintuitive one.
5. **Confidence %**: the probability the plan works as described. Defend the number.

**Format the red team as a clearly labeled section in the output:**

```
## Red Team Evaluation

**Flaw 1: [title]**
[what breaks]

...

**Weakest assumption:** [one line]
**Missing:** [what nobody thought about]
**Confidence: X%**, [defense]
```

### Step 5.6: Round 2 Gate

**If confidence >= 60%:** Proceed to Step 6. The synthesis stands. Red team findings ship alongside the plays as caveats, not blockers.

**If confidence < 60%:** The synthesis has structural problems. Trigger Round 2.

**Round 2 protocol:**
1. Extract the specific flaws, weakest assumption, and missing angles from the red team.
2. Spawn 2-3 targeted Opus agents (not the full 4-7). Each agent addresses a specific flaw or gap. Their prompts include the original synthesis AND the red team findings.
3. Collect Round 2 results. Re-synthesize, merge Round 2 findings into the original, revise plays, update rankings.
4. Run red team again on the revised synthesis (Step 5.5 repeats).
5. **No Round 3.** If confidence is still < 60% after Round 2, ship it with the red team attached and flag the unresolved risks explicitly. The user decides whether to proceed or kill it.

Round 2 is surgical, not a full re-run. It exists to patch holes, not to re-explore the entire space.

### Step 6: Save Output

After the red team gate clears (or Round 2 completes), save a clean markdown file.

> FILL IN: where to save 100x outputs. Mine save to my Obsidian vault, numbered globally so I can scan the folder by run count. Yours might go to:
>
> - **Local folder**: `{{OUTPUT_PATH}}/{date}-{topic}-100x.md`
> - **Obsidian vault**: `{vault}/100x/{date}-{NNNN}-{topic}-100x.md` with a global running count
> - **Notion / Google Doc**: via API
> - **Nowhere**: output stays in the chat transcript only (acceptable for low-stakes runs)

**Naming convention I use** (steal it if you want a starting point):
- `YYYY.MM.DD-NNNN-{topic}-100x.md`
- `NNNN` = running global count across all 100x's ever run, zero-padded to 4 digits
- The count is the differentiator, not time, so I can scan the folder and see how many runs deep I am

**Number assignment (collision-safe)**, see the sketch after this list:
1. List all files in the folder matching `*-100x.md`
2. Parse the count from each filename (regex `^\d{4}\.\d{2}\.\d{2}-(\d{4})-`)
3. Find the maximum. New number = `max + 1`, zero-padded.
4. After writing the file, immediately re-list the folder. If any other file shares the same prefix, increment and rename.
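
A minimal Python sketch of this scheme, assuming the flat-folder layout and the naming convention above; folder and topic handling are placeholders:

```python
# Collision-safe numbering for YYYY.MM.DD-NNNN-{topic}-100x.md files.
import re
from datetime import date
from pathlib import Path

COUNT_RE = re.compile(r"^\d{4}\.\d{2}\.\d{2}-(\d{4})-")

def next_run_number(folder: Path) -> int:
    # Steps 1-3: parse the count from every prior run, take max + 1.
    counts = [
        int(m.group(1))
        for f in folder.glob("*-100x.md")
        if (m := COUNT_RE.match(f.name))
    ]
    return max(counts, default=0) + 1

def save_run(folder: Path, topic: str, body: str) -> Path:
    n = next_run_number(folder)
    while True:
        prefix = f"{date.today():%Y.%m.%d}-{n:04d}-"
        path = folder / f"{prefix}{topic}-100x.md"
        path.write_text(body)
        # Step 4: re-list after writing; if another file shares the
        # prefix, back off, increment, and try the next number.
        if len(list(folder.glob(f"{prefix}*"))) == 1:
            return path
        path.unlink()
        n += 1
```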

The file should:
- Include the convergence map, all ranked plays, execution sequence, key sources, AND the red team evaluation
- Treat the red team section as part of the deliverable, not a footnote
- If Round 2 fired, include both the original red team and the revised assessment
- Be self-contained, readable without the conversation context
- Strip conversational fluff, just the substance

### Step 7: Persist to Research Memory (optional)

> FILL IN: your persistence target. Skip this step if you set "Nowhere" in Step 0.
>
> If you use Supabase or Postgres, here's the SQL pattern I use:
>
> ```sql
> INSERT INTO research_outputs (session_id, question, agent_count, conclusions, rejected_hypotheses, open_questions, confidence_map, full_output)
> VALUES (...);
> ```
>
> If you use a flat folder, this step is just "save the file" and you've already done it.
> If you use Obsidian, this step is just "the file is already in the vault, the next sync will pick it up."

**What to capture (when persisting)**, with an illustrative record after this list:
- **conclusions**: the ranked plays. The #1 recommendation, supporting evidence, feasibility notes.
- **rejected_hypotheses**: ideas that were explored but didn't make the cut. Include WHY; this prevents future runs from re-exploring dead ends. Also include red team flaws that weren't resolved.
- **open_questions**: threads that emerged but weren't resolved. Seeds for future angle selection. Include unresolved red team concerns.
- **confidence_map**: which conclusions had multi-agent convergence. Include the red team confidence % as a top-level entry.
- **full_output**: the complete markdown output, same content as the saved file.
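
For the Supabase/Postgres option, an illustrative record shape using the column names from the SQL pattern above; every value is a hypothetical placeholder:

```python
# Hypothetical example record for Step 7. Field names match the SQL
# pattern above; values describe what belongs in each column.
record = {
    "session_id": "2025.01.15-0042",  # ties back to the saved filename
    "question": "the user's original request, verbatim",
    "agent_count": 5,
    "conclusions": ["#1 play, supporting evidence, feasibility notes"],
    "rejected_hypotheses": ["idea + why it was cut", "unresolved red team flaws"],
    "open_questions": ["threads that emerged but weren't resolved"],
    "confidence_map": {"red_team_confidence_pct": 70,
                       "play_1": "5/5 agents converged"},
    "full_output": "the complete markdown, same content as the saved file",
}
```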

This step is the difference between a one-off research run and a compounding system. Every run feeds the next.

## Rules

1. ALWAYS use the latest Opus for every agent (`model: "opus"`). No downgrades.
2. ALWAYS run agents in parallel, never sequential.
3. Every agent must research online where relevant.
4. Every agent gets full context about what's being analyzed.
5. Minimum 4 agents, maximum 7. More isn't better, it's noise.
6. Each agent should produce 2-3 concrete proposals, not vague analysis.
7. The synthesis is where the value lives. Don't just concatenate results.
8. If the user provides a specific domain/topic, tailor the agent angles to that domain.
9. Flag confidence levels: "5/5 agents converged on X" carries more weight than "1 agent suggested Y."
10. Always end with a clear "want to build it?" or equivalent call to action.
11. ALWAYS run Step 0 (Load Research Memory). The system compounds only if every run reads prior runs.
12. ALWAYS run Step 7 (Persist to Research Memory) when configured. Unpersisted research is wasted research.
13. When prior research exists, tell the user what you found and how it's shaping this run's angle selection.
14. ALWAYS run Step 5.5 (Red Team). Not optional. A 100x without a red team is a 100x that lies to you.
15. The red team evaluates the SYNTHESIS, not individual agent outputs.
16. Round 2 agents get the red team findings in their prompts. They are stress-testing specific cracks, not re-exploring.
17. Never run Round 3. Two passes is the ceiling. Ship with caveats or kill it.
18. ALWAYS run Step 0.9 (Premise Gate). Filter for whether the problem needs the full team, inject current state, and never reject the idea itself.

## Example Angle Sets by Domain

**Product:** Art/UX, Business Model, Technical Architecture, Growth/Distribution, Competitive Landscape
**Marketing:** Content Strategy, Platform-Specific Tactics, Audience Psychology, Competitive Positioning, Launch Sequencing
**Architecture:** Performance, Security, Cost, Developer Experience, Scale/Future-proofing
**Positioning:** Market Research, Messaging/Copy, Visual Identity, Competitive Differentiation, Channel Strategy
