Lab 2 · Day 1 · Session 6

From Brief to Build

75 min · PM handoff → Architect → Unstructured build → Autopsy

Objectives

By the end of this lab, you will have:

  • Turned a PM brief into a technical spec with Claude's help
  • Attempted a build using only vague, unstructured prompts
  • Audited the result against the product brief
  • Seen firsthand why unstructured prompting produces unreliable output

The Scenario

A product manager just handed you this brief: AI Trust Check — a “nutrition label for AI tools” that lets users search any AI tool, see its safety rating, and know what data they can safely use with it.

Your job: go from brief to working app. But first, you’ll try it the fast-and-loose way.

Part A: The PM Handoff 10 min

Read the Brief

Open the product brief your facilitator has provided:

labs/assets/product-brief.md

Skim it for 5 minutes. Don’t try to memorize everything — get a feel for:

  • What the product does and who it’s for
  • Which features the PM considers essential
  • Where the brief is ambiguous or underspecified

Quick Discussion

With the group:

  1. What questions would you ask the PM before starting?
  2. What’s realistic to build in a workshop vs. what’s months of work?
  3. What would you build first?
Facilitator Note

Guide the group toward the Core MVP scope:

  • Homepage with search/browse
  • Tool detail page with tier toggle + safety rating card + data clearance grid
  • Static JSON data for ~5 sample tools

Part B: The Architect Phase 20 min

Now act as the architect. Use Claude Code to help define the technical foundation.

Start Claude Code in Your Project

cd claude-test  # or your workshop project directory
claude

Define the Tech Stack

You: "I'm building a web app called AI Trust Check — a safety rating directory for AI tools. Help me choose a tech stack. Requirements:
- Static data (JSON), no backend needed
- Search and filter functionality
- Multiple pages (home, search results, tool detail)
- Needs to look professional
- Should be deployable to Vercel or Netlify
What tech stack do you recommend and why?"

Define the Data Model

You: "Now help me define the data model. Each AI tool has:
- Basic info (name, vendor, type, jurisdiction)
- Multiple pricing tiers (e.g., Free, Business, Enterprise)
- Each tier has its own safety rating (Enterprise Ready, Internal Safe, Public Data Only, or Unsafe)
- Each tier has a data clearance grid (Public/General/Confidential — each Allowed or Not Allowed)
- Optional: agentic capability warnings
Create a TypeScript type definition and a sample JSON file with 5 tools."
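For reference, the type definition Claude produces from this prompt might look something like the sketch below. The names (AITool, PricingTier, sampleTool) and the ExampleChat entry are illustrative assumptions, not the canonical answer — your session may land on a different shape.

```typescript
// Hypothetical sketch of the data model described in the prompt above.
type SafetyRating = "Enterprise Ready" | "Internal Safe" | "Public Data Only" | "Unsafe";
type Clearance = "Allowed" | "Not Allowed";

interface PricingTier {
  name: string; // e.g. "Free", "Business", "Enterprise"
  safetyRating: SafetyRating;
  // Data clearance grid: Public/General/Confidential, each Allowed or Not Allowed
  dataClearance: {
    public: Clearance;
    general: Clearance;
    confidential: Clearance;
  };
}

interface AITool {
  name: string;
  vendor: string;
  type: string; // e.g. "chatbot", "code assistant"
  jurisdiction: string;
  tiers: PricingTier[];
  agenticWarnings?: string[]; // optional agentic capability warnings
}

// One illustrative entry in the shape a tools.json file might use.
const sampleTool: AITool = {
  name: "ExampleChat", // hypothetical tool, not from the brief
  vendor: "Example Corp",
  type: "chatbot",
  jurisdiction: "US",
  tiers: [
    {
      name: "Free",
      safetyRating: "Public Data Only",
      dataClearance: { public: "Allowed", general: "Not Allowed", confidential: "Not Allowed" },
    },
    {
      name: "Enterprise",
      safetyRating: "Enterprise Ready",
      dataClearance: { public: "Allowed", general: "Allowed", confidential: "Allowed" },
    },
  ],
};
```

Note how the per-tier structure forces the safety rating and clearance grid to change when the tier toggle changes — exactly the behavior the audit in Part D checks for.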

Define the Page Structure

You: "Based on this data model, plan out the pages and components:
- Homepage with search bar and browse filters
- Tool detail page with tier toggle
- Navigation and layout
Just outline the structure — don't build yet."
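You won't build anything yet, but it helps to see the logic the homepage plan commits you to. Below is a minimal sketch of a search-plus-filter helper; filterTools and ToolSummary are hypothetical names, and the real build may structure this differently.

```typescript
// Hypothetical helper for the homepage's search bar + browse filters.
interface ToolSummary {
  name: string;
  vendor: string;
  type: string; // e.g. "chatbot", "code assistant"
}

function filterTools(tools: ToolSummary[], query: string, typeFilter?: string): ToolSummary[] {
  const q = query.trim().toLowerCase();
  return tools.filter((t) => {
    // Free-text search matches name or vendor; empty query matches everything.
    const matchesQuery =
      q === "" || t.name.toLowerCase().includes(q) || t.vendor.toLowerCase().includes(q);
    // Browse filter narrows by tool type when one is selected.
    const matchesType = !typeFilter || t.type === typeFilter;
    return matchesQuery && matchesType;
  });
}
```

With static JSON data, this can run entirely client-side — one reason the brief needs no backend.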

Save Your Work

You: "Save the technical spec, data model, and component plan to a file called TECHNICAL-SPEC.md"

This spec is valuable. Set aside TECHNICAL-SPEC.md for now — you’ll need it in Lab 4, where it becomes the foundation for a structured build.

Part C: The Unstructured Build 35 min

Now forget everything you just planned. You’re going to build this app the way many developers start with AI — vague prompts, no structure, no plan.

Rules for This Round

  • Use only the prompts below, in order. Add no extra context.
  • Don’t reference the product brief or your TECHNICAL-SPEC.md.
  • Accept Claude’s defaults; don’t clarify, correct, or redirect mid-build.

The Prompts (Use These in Order)

Note

The prompts below are intentional anti-patterns (see “Anti-Patterns to Avoid” in resources/prompt-patterns.md). Notice how they violate Principles 1–2 from docs/04-principles.md: no goal definition, no context. You’ll see the difference structured prompts make in Lab 4.

Prompt 1: Start the project

You: "Build me a safety rating app for AI tools"

Wait for Claude to finish. Don’t clarify anything it asks — just say “sure, go ahead” or “yeah that works.”

Prompt 2: Add the detail page

You: "Add a page where I can see tool details"

Prompt 3: Add the tier toggle

You: "I need to be able to switch between pricing tiers"

Prompt 4: Make it look better

You: "Make it look nice"

Prompt 5: Add more tools

You: "Add a few more AI tools to the data"

Run It

npm run dev  # or whatever Claude set up

Take Stock

Look at what you have. Open the app in a browser. Click around.

Part D: The Autopsy 10 min

Audit Your Output

Go through this checklist with your result:

  ☐ Does the data model match the product brief?
  ☐ Are safety ratings correct (4 levels, color-coded)?
  ☐ Does the tier toggle actually change the safety card?
  ☐ Is there a data clearance grid (Public/General/Confidential)?
  ☐ Does the homepage have search AND browse filters?
  ☐ Is the design consistent across pages?
  ☐ Could a PM look at this and say “yes, that’s what I asked for”?
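If you'd rather check the data mechanically, a quick audit script along these lines can flag violations of the brief in the generated JSON. The function name auditTool and the assumed field names (tiers, safetyRating, dataClearance) are hypothetical — the unstructured build may well have produced a different shape, which is itself a finding.

```typescript
// Hypothetical audit helper: flags tools.json entries that violate the brief.
const VALID_RATINGS = ["Enterprise Ready", "Internal Safe", "Public Data Only", "Unsafe"];
const CLEARANCE_KEYS = ["public", "general", "confidential"];

function auditTool(tool: any): string[] {
  const issues: string[] = [];
  if (!Array.isArray(tool.tiers) || tool.tiers.length === 0) {
    issues.push(`${tool.name}: no pricing tiers`);
    return issues;
  }
  for (const tier of tool.tiers) {
    // The brief defines exactly four safety rating levels.
    if (!VALID_RATINGS.includes(tier.safetyRating)) {
      issues.push(`${tool.name}/${tier.name}: unknown rating "${tier.safetyRating}"`);
    }
    // Each tier needs a complete Public/General/Confidential clearance grid.
    for (const key of CLEARANCE_KEYS) {
      const value = tier.dataClearance?.[key];
      if (value !== "Allowed" && value !== "Not Allowed") {
        issues.push(`${tool.name}/${tier.name}: missing or invalid clearance for "${key}"`);
      }
    }
  }
  return issues;
}
```

An empty result means the entry at least matches the brief's structure; it says nothing about whether the ratings themselves are accurate.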

Common Issues You’ll Likely See

  • A generic data model that ignores the brief’s per-tier structure
  • Invented safety rating levels instead of the four defined ones
  • A tier toggle that exists but doesn’t update the safety card
  • A missing or incomplete data clearance grid
  • Styling that drifts between pages as features were bolted on

Group Discussion

  1. What did Claude get right?
  2. What did it miss or get wrong?
  3. How much of the product brief made it into the output?
  4. What would you need to do to get this to production quality?
  5. Key question: Was the problem Claude, or was it the prompts?

The Takeaway

The issue isn’t that Claude can’t build this. The issue is that vague inputs produce vague outputs. The same tool with structured input produces dramatically better results — which you’ll prove in Lab 4.

Checkpoint

  • You created a technical spec with Claude (Part B)
  • You attempted a build with vague prompts (Part C)
  • You identified gaps between the output and the product brief (Part D)
  • You understand why unstructured prompting produces unreliable results

Keep Your Code

Don’t delete this attempt. You’ll compare it side-by-side with Lab 4’s output.

# Save this state
git add -A && git commit -m "lab2: unstructured build attempt"

Reflection Questions

  1. How much of the product brief did Claude “remember” from vague prompts?
  2. What’s the minimum context Claude needs to produce reliable output?
  3. If you were starting a real project, what would you do differently?
  4. How does this compare to giving a junior developer the same vague instructions?