Objectives
By the end of this lab, you will have:
- Received a product brief and analyzed it like an architect
- Defined a technical spec and tech stack with Claude’s help
- Attempted to build a Core MVP using vague, unstructured prompts
- Experienced firsthand how prompt quality affects output quality
The Scenario
A product manager just handed you this brief: AI Trust Check — a “nutrition label for AI tools” that lets users search any AI tool, see its safety rating, and know what data they can safely use with it.
Your job: go from brief to working app. But first, you’ll try it the fast-and-loose way.
Part A: The PM Handoff (10 min)
Read the Brief
Open the product brief your facilitator has provided:
labs/assets/product-brief.md
Skim it for 5 minutes. Don’t try to memorize everything — get a feel for:
- What problem does this solve?
- Who is it for?
- What are the key features?
- What data is involved?
Quick Discussion
With the group:
- What questions would you ask the PM before starting?
- What’s realistic to build in a workshop vs. what’s months of work?
- What would you build first?
Guide the group toward the Core MVP scope:
- Homepage with search/browse
- Tool detail page with tier toggle + safety rating card + data clearance grid
- Static JSON data for ~5 sample tools
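To make that scope concrete, here is a rough sketch of what one entry in the static JSON might look like. The field names are illustrative assumptions, not requirements from the brief; the spec you write in Part B may settle on a different shape.

```ts
// One hypothetical entry from the static tools JSON (field names are
// assumptions for illustration, not taken from the brief).
const sampleTool = {
  slug: "example-assistant",
  name: "Example Assistant",
  category: "chatbot",
  tiers: [
    { name: "Free",       safetyRating: "caution",  clearance: { public: true, general: false, confidential: false } },
    { name: "Enterprise", safetyRating: "approved", clearance: { public: true, general: true,  confidential: false } },
  ],
};
```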
Part B: The Architect Phase (20 min)
Now act as the architect. Use Claude Code to help define the technical foundation.
Start Claude Code in Your Project
```bash
cd claude-test   # or your workshop project directory
claude
```
Define the Tech Stack
Define the Data Model
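If it helps to have a reference point, the data-model conversation with Claude might converge on something like the sketch below. The type names and rating labels are assumptions for illustration only; follow whatever the brief and your discussion with Claude produce.

```ts
// A possible data model; the type names and rating labels are assumptions,
// not definitions from the brief.
type SafetyRating = "approved" | "caution" | "restricted" | "prohibited"; // 4 levels, color-coded in the UI
type DataClass = "public" | "general" | "confidential";

interface ToolTier {
  name: string;                           // e.g. "Free", "Enterprise"
  safetyRating: SafetyRating;             // drives the safety rating card
  clearance: Record<DataClass, boolean>;  // drives the data clearance grid
}

interface AiTool {
  slug: string;
  name: string;
  category: string;
  tiers: ToolTier[];                      // the tier toggle switches between these
}
```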
Define the Page Structure
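For orientation, the Core MVP only needs a couple of pages. One possible route map is sketched below; the paths are assumptions, and the actual framework is whatever your tech-stack step settles on.

```ts
// A minimal route sketch for the Core MVP; path names are assumptions.
export const routes = {
  home: "/",                   // search box plus browse filters over all tools
  toolDetail: "/tools/:slug",  // tier toggle, safety rating card, data clearance grid
} as const;
```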
Save Your Work
This spec is valuable. Ask Claude to save it as TECHNICAL-SPEC.md, then set it aside for now — you’ll need it in Lab 4, where it becomes the foundation for a structured build.
Part C: The Unstructured Build (35 min)
Now forget everything you just planned. You’re going to build this app the way many developers start with AI — vague prompts, no structure, no plan.
Rules for This Round
- No CLAUDE.md — don’t give Claude project context
- No technical spec — don’t reference what you just created
- Vague prompts only — use the intentionally weak prompts below
- Don’t course-correct — if Claude goes in a weird direction, let it
The Prompts (Use These in Order)
The prompts below are intentional anti-patterns (see “Anti-Patterns to Avoid” in resources/prompt-patterns.md). Notice how they violate Principles 1–2 from docs/04-principles.md: no goal definition, no context. You’ll see the difference structured prompts make in Lab 4.
Prompt 1: Start the project
Wait for Claude to finish. If it asks clarifying questions, don’t elaborate — just reply “sure, go ahead” or “yeah that works.”
Prompt 2: Add the detail page
Prompt 3: Add the tier toggle
Prompt 4: Make it look better
Prompt 5: Add more tools
Run It
```bash
npm run dev   # or whatever Claude set up
```
Take Stock
Look at what you have. Open the app in a browser. Click around.
Part D: The Autopsy (10 min)
Audit Your Output
Go through this checklist with your result:
| Question | Check |
|---|---|
| Does the data model match the product brief? | |
| Are safety ratings correct (4 levels, color-coded)? | |
| Does the tier toggle actually change the safety card? | |
| Is there a data clearance grid (Public/General/Confidential)? | |
| Does the homepage have search AND browse filters? | |
| Is the design consistent across pages? | |
| Could a PM look at this and say “yes, that’s what I asked for”? | |
Common Issues You’ll Likely See
- Inconsistent data structures — the tools JSON doesn’t match across pages (see the sketch after this list)
- Missing features — no data clearance grid, no browse pills, no tier toggle
- Hallucinated features — Claude added things not in the brief
- No design system — each page looks different
- Can’t iterate — no plan to reference, hard to know what’s missing
- Fragile code — changes to one thing break another
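As an illustration of the first failure mode, “inconsistent data structures” often looks like the hypothetical snippets below: each prompt produced its own idea of what a tool record is, so no single JSON file can satisfy both pages.

```ts
// Shape the homepage ended up expecting (hypothetical)
const homepageTool = { name: "Example Assistant", rating: "Safe", tags: ["chatbot"] };

// Shape the detail page ended up expecting (hypothetical)
const detailTool = {
  toolName: "Example Assistant",
  safetyScore: 2,
  plans: { free: "caution", enterprise: "ok" },
};
```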
Group Discussion
- What did Claude get right?
- What did it miss or get wrong?
- How much of the product brief made it into the output?
- What would you need to do to get this to production quality?
- Key question: Was the problem Claude, or was it the prompts?
The Takeaway
The issue isn’t that Claude can’t build this. The issue is that vague inputs produce vague outputs. The same tool with structured input produces dramatically better results — which you’ll prove in Lab 4.
Checkpoint
- You created a technical spec with Claude (Part B)
- You attempted a build with vague prompts (Part C)
- You identified gaps between the output and the product brief (Part D)
- You understand why unstructured prompting produces unreliable results
Keep Your Code
Don’t delete this attempt. You’ll compare it side-by-side with Lab 4’s output.
```bash
# Save this state
git add -A && git commit -m "lab2: unstructured build attempt"
```
Reflection Questions
- How much of the product brief did Claude “remember” from vague prompts?
- What’s the minimum context Claude needs to produce reliable output?
- If you were starting a real project, what would you do differently?
- How does this compare to giving a junior developer the same vague instructions?