Objectives
By the end of this lab, you will have:
- Used subagents for parallel work
- Configured at least one automation hook
- Created a custom slash command
- Set up the failure logging compound loop
Part A: Subagents (15 min)
The Scenario
You need to add tests AND documentation for the AI Trust Check app from Lab 2 simultaneously.
Try It
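Ask Claude to parallelize the two tasks explicitly. The exact wording is up to you; something along these lines works:

```
Use two subagents in parallel: one to write tests for the AI Trust Check app,
and one to write its documentation. Have them work independently and merge the
results at the end.
```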
Observe
- How does Claude split the work?
- Do the subagents stay in their lanes?
- How are results merged?
Reflection
- When would parallel work be useful in your real projects?
- When might it cause problems?
Part B: Automation Hooks (20 min)
Step 1: Configure auto-test hook
Create .claude/settings.json. Note that each matcher wraps a list of hook commands:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "npm test -- --watchAll=false --passWithNoTests 2>/dev/null || true"
          }
        ]
      }
    ]
  }
}
```
Step 2: Test it
Make a change and watch tests run automatically:
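For example, ask Claude to make a small edit (the file name is illustrative):

```
Add a brief comment to the top of src/App.js explaining what the component does.
```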
Did the tests run after Claude edited the file?
Step 3: Add failure logging
Update .claude/settings.json. Hook commands receive the tool call as JSON on stdin, so the sketch below parses it with jq; it assumes that a non-empty stderr in the tool response signals a failed command:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "npm test -- --watchAll=false --passWithNoTests 2>/dev/null || true"
          }
        ]
      },
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r 'select((.tool_response.stderr // \"\") != \"\") | \"\\(now | todate) | \\(.tool_name) | \\(.tool_input.command)\"' >> .claude/failures.log"
          }
        ]
      }
    ]
  }
}
```
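With this hook in place, each failed command appends a line like the following to .claude/failures.log (values illustrative):

```
2025-06-01T14:03:22Z | Bash | npm run nonexistent-script
```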
Part C: Custom Commands (20 min)
Step 1: Create commands directory
```bash
mkdir -p .claude/commands
```
Step 2: Create /review-security command
Create .claude/commands/review-security.md:
```markdown
Review the specified code for security vulnerabilities:

1. Check for injection attacks (SQL, XSS, command injection)
2. Verify authentication and authorization patterns
3. Look for sensitive data exposure
4. Check for insecure dependencies
5. Verify input validation

For each issue found:
- Explain the vulnerability
- Show the problematic code
- Provide a fix

If no issues are found, confirm the code passes security review.

Target: $ARGUMENTS
```
Step 3: Test your command
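Run it against your Lab 2 code; everything after the command name is passed to the prompt as $ARGUMENTS (the path is illustrative):

```
/review-security src/
```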
How many issues does it find in the unstructured build? Keep this in mind for Lab 4, where you'll see whether the compound build produces fewer security findings.
Step 4: Create /analyze-failures command
Create .claude/commands/analyze-failures.md:
```markdown
Analyze the failure log at .claude/failures.log to identify patterns.

1. Read the failure log
2. Group failures by type (test, build, lint, etc.)
3. Identify recurring patterns
4. For each pattern, suggest:
   - Root cause
   - Prevention strategy
   - Updates to CLAUDE.md

Output a summary of learnings and recommended actions. If the log is empty or
doesn't exist, confirm that no failures were recorded.
```
Step 5: Test the compound loop
- Intentionally cause some failures (one way is shown below)
- Run /analyze-failures
- See what patterns emerge
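One simple way to trigger a logged failure is to have Claude run a command that cannot succeed (script name illustrative):

```
Run npm run nonexistent-script and tell me what happens.
```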
Quick: Create .claudeignore (2 min)
```bash
echo -e ".env\n.env.*\n*.pem\nnode_modules/" > .claudeignore
```
This prevents Claude from reading sensitive files. See docs/05-safety.md for a full example.
Part D: Memory (5 min)
Check your memory file:

```
/memory
```
Add something manually
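For example, append a project convention to the memory file (content illustrative):

```markdown
- Run npm test with --watchAll=false in this repo
```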
Verify it persists:

```
/memory
```
Checkpoint
Before moving on, confirm:
- You've used subagents for parallel tasks
- You have at least one working hook
- You've created and used a custom command
- You understand the failure logging compound loop
Your .claude folder should look like:
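```
.claude/
├── settings.json
├── failures.log
└── commands/
    ├── analyze-failures.md
    └── review-security.md
```

(failures.log appears once the failure hook has fired at least once.)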
Reflection Questions
- Which hook would save you the most time in your daily work?
- What custom commands would your team benefit from?
- How could the failure logging loop improve over time?