04
Day 2 · Session 9

From Brief to Build — The Compound Way

75 min · Compound Engineering Plugin → Plan → Build → Review → Ship

Objectives

By the end of this lab, you will have:

  • Installed the Compound Engineering Plugin in Claude Code
  • Turned the product brief into a structured plan with /workflows:plan
  • Built the AI Trust Check MVP systematically with /workflows:work
  • Run a multi-agent review with /workflows:review
  • Captured learnings with /workflows:compound and deployed the result

This is The Proof — same tool, same brief, dramatically different results.

The Setup

Remember Lab 2? You built AI Trust Check with vague prompts and got unreliable results. Now you'll build the exact same scope — but with the Compound Engineering workflow. Same Claude, same brief, different approach.

MVP scope (same as Lab 2):

  • Homepage with search bar, tool type pills, and safety rating pills
  • Tool detail page: header, tier toggle, safety rating card, and data clearance grid
  • Static JSON data for 5 tools with realistic tier data
  • Navigation bar, footer, and responsive design

Part A: Install the Compound Engineering Plugin 5 min

In Claude Code, install the plugin

/plugin marketplace add https://github.com/EveryInc/compound-engineering-plugin
/plugin install compound-engineering

Verify it's installed

/help

You should see new commands: /workflows:plan, /workflows:work, /workflows:review, /workflows:compound.

Start fresh

# Create a new branch for the compound build
git checkout -b feature/compound-build

Part B: Plan with /workflows:plan 15 min

This is where the magic starts. Instead of vague prompts, you feed the plugin your product brief and technical spec.

Set up your CLAUDE.md first

Remember the technical spec from Lab 2, Part B? Now it matters.

You: "Create a CLAUDE.md for this project based on the technical spec in TECHNICAL-SPEC.md. Include the tech stack, conventions, file structure, and data model."

Run the planning workflow

Note

This is the Specification Prompt pattern (Pattern 1 from resources/prompt-patterns.md) — front-loading context, then stating the task with clear requirements. Compare this to the vague prompts from Lab 2.

/workflows:plan

When prompted, describe the feature:

You: "Build the AI Trust Check MVP — a safety rating directory for AI tools. Product brief: [paste the key sections from labs/assets/product-brief.md, or reference it] MVP scope: 1. Homepage with search bar, tool type pills, and safety rating pills 2. Tool detail page with: - Header (tool name, vendor, type, jurisdiction) - Tier toggle (one button per pricing tier) - Safety rating card (color-coded badge + name + risk summary) - Data clearance grid (Public/General/Confidential rows, Allowed/Not Allowed per tier) 3. Static JSON data for 5 tools (ChatGPT, Claude, Cursor, Midjourney, GitHub Copilot) with realistic tier data 4. Navigation bar and footer 5. Responsive design Use the data model and tech stack defined in CLAUDE.md."

Review the plan

The plugin produces a structured implementation plan with:

  • A task breakdown ordered by dependencies
  • Acceptance criteria for each task
  • An execution order you can review before any code is written

Read through it carefully. This is the 80/20 principle in action — spending time here saves time later.

Ask Claude to adjust anything that doesn't look right:

You: "The plan looks good, but I want to make sure the data model matches the product brief exactly. Each tool should have multiple tiers, and each tier has its own independent safety rating and data clearance grid. Can you verify the plan accounts for this?"

Part C: Build with /workflows:work 30 min

Execute the plan

/workflows:work

The plugin works through the plan task by task, using worktrees and structured execution.
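
The plugin manages worktrees for you, but if you are curious what that means, the underlying git commands look like this (the path is illustrative):

# List worktrees the plugin created
git worktree list

# Manual equivalent, for reference only
git worktree add ../trust-check-compound feature/compound-build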

Observe the difference

As Claude builds, notice:

  • Tasks are executed in order, following the plan's dependencies
  • The code follows the conventions and data model from CLAUDE.md
  • Each task is verified against its criteria instead of everything at the end

Hooks Check

Are your automation hooks from Lab 3 still active? After Claude edits a file, watch for the auto-test hook. Check .claude/failures.log — are failures being logged? If hooks aren't running, verify your .claude/settings.json is in the project root.
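
For reference, a minimal hooks configuration in .claude/settings.json has roughly this shape; the exact matcher and command depend on what you set up in Lab 3:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm test --silent || echo \"$(date): tests failed\" >> .claude/failures.log"
          }
        ]
      }
    ]
  }
}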

Check in periodically

After every few tasks, verify the output:

npm run dev

Open the browser and check:

  • The homepage renders with the search bar and both pill groups
  • Tool detail pages load for the seeded tools
  • The tier toggle updates the safety rating card and data clearance grid

Context Check

Run /cost to see how many tokens you've used so far. If you're above 100K tokens, try /compact to summarize and free context. This is especially important for long build sessions.

Course-correct if needed

If something is off, tell Claude:

You: "The tier toggle isn't updating the data clearance grid. The grid should change when I switch tiers — each tier has its own set of allowed/not-allowed data levels."

Part D: Review with /workflows:review 10 min

Run the review workflow

/workflows:review

This triggers a multi-agent code review that checks:

  • Whether the build matches the plan and its acceptance criteria
  • Code quality, structure, and adherence to CLAUDE.md conventions
  • Bugs and edge cases a single pass might miss

Address any findings

You: "Fix the issues identified in the review."

Run a final check

npm run dev
npm test  # if tests were generated
npm run build  # verify production build works

Part E: Compare & Compound 15 min

Side-by-side comparison

Switch between your two builds:

# Round 1 (unstructured) — check the main/previous branch
git stash && git checkout main  # or wherever Lab 2 code lives
npm run dev
# Look at it in the browser

# Round 2 (compound) — switch back
git checkout feature/compound-build && git stash pop
npm run dev
# Look at it in the browser
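
If stashing feels fragile, committing your work-in-progress first avoids it entirely:

# Alternative: commit instead of stashing
git add -A && git commit -m "wip: compound build"
git checkout main                     # Round 1
git checkout feature/compound-build   # back to Round 2, no stash needed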

Score both builds

Criteria                      | Round 1 (Unstructured) | Round 2 (Compound)
Data model matches brief?     | /5                     | /5
All MVP features present?     | /5                     | /5
Tier toggle works correctly?  | /5                     | /5
Data clearance grid accurate? | /5                     | /5
Design consistency?           | /5                     | /5
Code quality & structure?     | /5                     | /5
Could PM approve this?        | /5                     | /5
Total                         | /35                    | /35

Capture learnings

/workflows:compound

This documents what worked, what patterns emerged, and what to carry forward. It's the "compound" in compound engineering — each project makes the next one easier.

Deploy Round 2

# Option A: Vercel
npm i -g vercel
vercel

# Option B: Netlify
npm i -g netlify-cli
netlify deploy

Copy your preview URL — this is your shipped app.

You Did It!

  • Installed the Compound Engineering Plugin
  • Used /workflows:plan to create a structured plan from the product brief
  • Used /workflows:work to build the MVP systematically
  • Used /workflows:review for multi-agent code review
  • Compared Round 1 vs. Round 2 — saw the difference
  • Used /workflows:compound to capture learnings
  • Deployed a working AI Trust Check MVP

Share your URL with the group!

The Proof

What made the difference wasn't Claude — it was the workflow.

Factor            | Round 1 (Unstructured) | Round 2 (Compound)
Input quality     | Vague 1-liners         | Structured brief + spec
Planning          | None                   | /workflows:plan with tasks & criteria
Context           | No CLAUDE.md           | Full CLAUDE.md with conventions
Execution         | Ad-hoc, no order       | Sequential, dependency-aware
Verification      | None until the end     | Continuous + multi-agent review
Knowledge capture | Nothing saved          | /workflows:compound documents patterns

Same AI. Same developer. Same brief. Different process. Different outcome.

This is the compound engineering principle: 80% planning and review, 20% execution. The execution is the easy part — it's the system around it that determines quality.

Reflection Questions

  1. What was the single biggest factor that improved Round 2's output?
  2. How much time did planning take vs. how much time did it save?
  3. Could you apply this workflow to your real projects tomorrow?
  4. What would you add to your team's CLAUDE.md based on today?