03
Day 1 · Session 7

Power Features

60 min · Subagents, hooks, custom commands, and memory

Objectives

By the end of this lab, you will have:

  • Used subagents to run two tasks in parallel
  • Configured an automation hook that runs tests after every edit
  • Created and run a custom slash command
  • Set up a failure log that feeds a compound improvement loop
  • Checked what Claude remembers about your project

Part A: Subagents 15 min

The Scenario

You need to add tests AND documentation for the AI Trust Check app from Lab 2 simultaneously.

Try It

You: "I need two things done in parallel: 1. Write unit tests for the tool detail page — cover the tier toggle, safety rating card, and data clearance grid 2. Create a README with usage examples and screenshots of each safety rating level Use subagents to work on both simultaneously."

Observe

Watch how Claude dispatches a separate subagent for each task and reports their results back independently.

Reflection

Did running the tasks in parallel actually save time? Did either subagent's output conflict with the other's?

Part B: Automation Hooks 20 min

Step 1: Configure auto-test hook

Create .claude/settings.json:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "npm test -- --watchAll=false --passWithNoTests 2>/dev/null || true"
          }
        ]
      }
    ]
  }
}
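You can sanity-check the hook's command from your shell before wiring it up. The trailing `|| true` is what keeps a failing (or even missing) test runner from ever blocking the hook:

```shell
# Run the hook command by hand; "|| true" swallows any failure,
# so the overall exit status is always 0.
npm test -- --watchAll=false --passWithNoTests 2>/dev/null || true
echo "exit: $?"   # always prints "exit: 0"
```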

Step 2: Test it

Make a change and watch tests run automatically:

You: "Add a small change to the AI Trust Check tool detail page — add a 'last verified' date display below the data clearance grid."

Did the tests run after Claude edited the file?

Step 3: Add failure logging

Update .claude/settings.json so the test hook also records failures. Hooks receive a JSON payload on stdin rather than environment variables, so the simplest reliable failure signal is the test command's own exit status:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "npm test -- --watchAll=false --passWithNoTests 2>/dev/null || echo \"$(date -Iseconds) | test failure after edit\" >> .claude/failures.log"
          }
        ]
      }
    ]
  }
}
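To see whether anything has been recorded yet, peek at the log. It won't exist until the first failure, so guard for that:

```shell
# Show recent entries if the log exists; otherwise say so.
test -f .claude/failures.log && tail -5 .claude/failures.log || echo "no failures logged yet"
```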

Part C: Custom Commands 20 min

Step 1: Create commands directory

mkdir -p .claude/commands

Step 2: Create /review-security command

Create .claude/commands/review-security.md:

Review the specified code for security vulnerabilities:

1. Check for injection attacks (SQL, XSS, command injection)
2. Verify authentication and authorization patterns
3. Look for sensitive data exposure
4. Check for insecure dependencies
5. Verify input validation

For each issue found:
- Explain the vulnerability
- Show the problematic code
- Provide a fix

If no issues found, confirm the code passes security review.

Target: $ARGUMENTS

Step 3: Test your command

You: /review-security src/

Run it on your Lab 2 code. How many issues does it find in the unstructured build? Keep this in mind for Lab 4 — you'll see whether the compound build produces fewer security findings.

Step 4: Create /analyze-failures command

Create .claude/commands/analyze-failures.md:

Analyze the failure log at .claude/failures.log to identify patterns.

1. Read the failure log
2. Group failures by type (test, build, lint, etc.)
3. Identify recurring patterns
4. For each pattern, suggest:
   - Root cause
   - Prevention strategy
   - Updates to CLAUDE.md

Output a summary of learnings and recommended actions.
If the log is empty or doesn't exist, confirm no failures recorded.

Step 5: Test the compound loop

  1. Intentionally cause some failures
  2. Run /analyze-failures
  3. See what patterns emerge
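If you'd rather not break real code for step 1, you can seed the log by hand. The timestamp | tool | command line format below is only an assumption for illustration — match whatever your hook actually writes:

```shell
# Append two sample failure entries (hypothetical format), then count lines.
mkdir -p .claude
cat >> .claude/failures.log <<'EOF'
2025-01-01T10:00:00+00:00 | Bash | npm test
2025-01-01T10:05:00+00:00 | Bash | npm run lint
EOF
wc -l < .claude/failures.log
```

Now /analyze-failures has something to work with even on a fresh checkout.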

Quick: Create .claudeignore 2 min

printf '.env\n.env.*\n*.pem\nnode_modules/\n' > .claudeignore

This prevents Claude from reading sensitive files. See docs/05-safety.md for a full example.

Part D: Memory 5 min

Check your memory file

/memory

Add something manually

You: "Remember that this project uses Tailwind's 'prose' classes for text content."

Verify it persists

/memory

Checkpoint

Before moving on, confirm:

  • You've used subagents for parallel tasks
  • You have at least one working hook
  • You've created and used a custom command
  • You understand the failure logging compound loop

Your .claude folder should look like:

.claude/
├── commands/
│   ├── review-security.md
│   └── analyze-failures.md
├── hooks/
│   └── (configured in settings.json)
├── settings.json
├── memory.md
└── failures.log  (may be empty)

Reflection Questions

  1. Which hook would save you the most time in your daily work?
  2. What custom commands would your team benefit from?
  3. How could the failure logging loop improve over time?