Objectives
By the end of this lab, you will have:
- Installed and used the Compound Engineering Plugin
- Built the same AI Trust Check MVP with structured workflows
- Compared Round 1 (unstructured) vs. Round 2 (compound) output
- Deployed a working app to a preview URL
- Shipped something real
This is The Proof — same tool, same brief, dramatically different results.
The Setup
Remember Lab 2? You built AI Trust Check with vague prompts and got unreliable results. Now you'll build the exact same scope — but with the Compound Engineering workflow. Same Claude, same brief, different approach.
MVP scope (same as Lab 2):
- Homepage with search/browse
- Tool detail page with tier toggle + safety rating card + data clearance grid
- Static JSON data for ~5 sample tools
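Before building, it helps to pin down what "static JSON data" might look like. A sketch of one possible data model, assuming hypothetical field names and placeholder values (these are illustrative, not real vendor claims or the lab's prescribed schema):

```typescript
// Hypothetical shape for a tool record — field names are illustrative.
type Tier = "free" | "enterprise";

interface ToolRecord {
  slug: string;
  name: string;
  tiers: Record<Tier, {
    safetyRating: "safe" | "caution" | "avoid";
    dataClearance: {
      trainsOnYourData: boolean;
      retainsPrompts: boolean;
      soc2Certified: boolean;
    };
  }>;
}

// One sample entry, as it might appear in the static JSON file.
// Values are placeholders for the lab, not verified vendor facts.
const sampleTool: ToolRecord = {
  slug: "example-assistant",
  name: "Example Assistant",
  tiers: {
    free: {
      safetyRating: "caution",
      dataClearance: { trainsOnYourData: true, retainsPrompts: true, soc2Certified: false },
    },
    enterprise: {
      safetyRating: "safe",
      dataClearance: { trainsOnYourData: false, retainsPrompts: false, soc2Certified: true },
    },
  },
};
```

Having a single record type that carries both tiers is what makes the tier toggle on the detail page a pure lookup rather than a second data fetch.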
Part A: Install the Compound Engineering Plugin (5 min)
In Claude Code, install the plugin
```
/plugin marketplace add https://github.com/EveryInc/compound-engineering-plugin
/plugin install compound-engineering
```
Verify it's installed
```
/help
```
You should see new commands: /workflows:plan, /workflows:work, /workflows:review, /workflows:compound.
Start fresh
```
# Create a new branch for the compound build
git checkout -b feature/compound-build
```
Part B: Plan with /workflows:plan (15 min)
This is where the magic starts. Instead of vague prompts, you feed the plugin your product brief and technical spec.
Set up your CLAUDE.md first
Remember the technical spec from Lab 2, Part B? Now it matters.
Run the planning workflow
This is the Specification Prompt pattern (Pattern 1 from resources/prompt-patterns.md) — front-loading context, then stating the task with clear requirements. Compare this to the vague prompts from Lab 2.
```
/workflows:plan
```
When prompted, describe the feature:
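One way to fill in the feature description, following the Specification Prompt pattern (the wording below is illustrative; paste your actual brief and spec from Lab 2):

```
Build the AI Trust Check MVP as specified in the product brief and technical spec:
- Homepage with search/browse
- Tool detail page with tier toggle, safety rating card, and data clearance grid
- Static JSON data for ~5 sample tools
Follow the conventions in CLAUDE.md. Ask before deviating from the spec.
```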
Review the plan
The plugin produces a structured implementation plan with:
- Ordered tasks with dependencies
- Acceptance criteria per task
- File-by-file breakdown
Read through it carefully. This is the 80/20 principle in action — spending time here saves time later.
Ask Claude to adjust anything that doesn't look right before you move on; plans are cheap to change, code isn't.
Part C: Build with /workflows:work (30 min)
Execute the plan
```
/workflows:work
```
The plugin works through the plan task by task, using worktrees and structured execution.
Observe the difference
As Claude builds, notice:
- It follows the plan sequentially
- Each task builds on previous ones
- The data model is consistent everywhere
- Components reference shared types
- The design is cohesive
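The "shared types" point is worth seeing concretely. A minimal sketch, assuming a hypothetical shared module that every component imports from instead of redeclaring its own shape:

```typescript
// Shared type — a single source of truth (hypothetical module layout).
// Any component that needs a rating imports this instead of redefining it,
// so a change here propagates everywhere at compile time.
type SafetyRating = "safe" | "caution" | "avoid";

interface SafetyCardProps {
  toolName: string;
  rating: SafetyRating;
}

// A display helper that consumes the shared type. If a new rating value
// is added to SafetyRating, TypeScript flags this switch as non-exhaustive.
function ratingLabel(rating: SafetyRating): string {
  switch (rating) {
    case "safe": return "Safe to use";
    case "caution": return "Use with caution";
    case "avoid": return "Avoid";
  }
}
```

This is exactly the consistency the unstructured Round 1 build tended to lose: each component inventing its own slightly different rating strings.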
Hooks Check
Are your automation hooks from Lab 3 still active? After Claude edits a file, watch for the auto-test hook. Check .claude/failures.log — are failures being logged? If hooks aren't running, verify your .claude/settings.json is in the project root.
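If the hooks aren't firing and you need to recreate them, a minimal .claude/settings.json sketch is shown below. The matcher and command are illustrative assumptions, not Lab 3's exact config; adapt them to whatever you set up there.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm test --silent || echo \"$(date): test failure\" >> .claude/failures.log"
          }
        ]
      }
    ]
  }
}
```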
Check in periodically
After every few tasks, verify the output:
```
npm run dev
```
Open the browser and check:
- Does the homepage render with search and browse pills?
- Do tool cards show up?
- Does the tool detail page load with the tier toggle?
- Does switching tiers update the safety card and data grid?
Context Check
Run /cost to see how many tokens you've used so far. If you're above 100K tokens, try /compact to summarize and free context. This is especially important for long build sessions.
Course-correct if needed
If something is off, tell Claude exactly what you expected and what you're seeing instead, and let it correct course against the plan.
Part D: Review with /workflows:review (10 min)
Run the review workflow
```
/workflows:review
```
This triggers a multi-agent code review that checks:
- Code quality and consistency
- TypeScript strictness
- Accessibility
- Whether the implementation matches the plan
Address any findings
Run a final check
```
npm run dev
npm test       # if tests were generated
npm run build  # verify production build works
```
Part E: Compare & Compound (15 min)
Side-by-side comparison
Switch between your two builds:
```
# Round 1 (unstructured) — check the main/previous branch
git stash && git checkout main   # or wherever Lab 2 code lives
npm run dev                      # Look at it in the browser

# Round 2 (compound) — switch back
git checkout feature/compound-build && git stash pop
npm run dev                      # Look at it in the browser
```
Score both builds
| Criteria | Round 1 (Unstructured) | Round 2 (Compound) |
|---|---|---|
| Data model matches brief? | /5 | /5 |
| All MVP features present? | /5 | /5 |
| Tier toggle works correctly? | /5 | /5 |
| Data clearance grid accurate? | /5 | /5 |
| Design consistency? | /5 | /5 |
| Code quality & structure? | /5 | /5 |
| Could PM approve this? | /5 | /5 |
| Total | /35 | /35 |
Capture learnings
```
/workflows:compound
```
This documents what worked, what patterns emerged, and what to carry forward. It's the "compound" in compound engineering — each project makes the next one easier.
Deploy Round 2
```
# Option A: Vercel
npm i -g vercel
vercel

# Option B: Netlify
npm i -g netlify-cli
netlify deploy
```
Copy your preview URL — this is your shipped app.
You Did It!
- Installed the Compound Engineering Plugin
- Used /workflows:plan to create a structured plan from the product brief
- Used /workflows:work to build the MVP systematically
- Used /workflows:review for multi-agent code review
- Compared Round 1 vs. Round 2 — saw the difference
- Used /workflows:compound to capture learnings
- Deployed a working AI Trust Check MVP
Share your URL with the group!
The Proof
What made the difference wasn't Claude — it was the workflow.
| Factor | Round 1 (Unstructured) | Round 2 (Compound) |
|---|---|---|
| Input quality | Vague 1-liners | Structured brief + spec |
| Planning | None | /workflows:plan with tasks & criteria |
| Context | No CLAUDE.md | Full CLAUDE.md with conventions |
| Execution | Ad-hoc, no order | Sequential, dependency-aware |
| Verification | None until the end | Continuous + multi-agent review |
| Knowledge capture | Nothing saved | /workflows:compound documents patterns |
Same AI. Same developer. Same brief. Different process. Different outcome.
This is the compound engineering principle: 80% planning and review, 20% execution. The execution is the easy part — it's the system around it that determines quality.
Reflection Questions
- What was the single biggest factor that improved Round 2's output?
- How much time did planning take vs. how much time did it save?
- Could you apply this workflow to your real projects tomorrow?
- What would you add to your team's CLAUDE.md based on today?