Key Sources Reviewed
Claude Code Training Programs
- Anthropic's Official Course (Skilljar) -- Context management, MCP servers, GitHub integration
- Vanderbilt/Coursera -- "Big prompts," Best of N pattern, CLAUDE.md, parallel development
- DAIR.AI -- Build sessions, AI app deployment, 4 sessions over 2 weeks
- O'Reilly (Ken Kousen) -- Skills, Hooks, Subagents, Plugins, enterprise standardization
- Udemy courses -- Multiple beginner-to-pro courses focusing on practical projects
- Dan Shipper's Claude 101 -- "Learn Claude Code in one day. Leave with a shipped project."
- IndyDevDan's Tactical Agentic Coding -- Agent orchestration, parallel execution, production deployment
Industry Frameworks & Trends
- Anthropic's 2026 Agentic Coding Trends Report -- The definitive industry document
- Tweag's Agentic Coding Handbook -- Enterprise experiment: AI team delivered in half the time with 30% fewer engineers
- Armin Ronacher's Recommendations -- Practical MCP advice, language choices
The "Orchestrator Shift"
- "From Coder to Orchestrator" (Human Who Codes) -- Coder → Conductor → Orchestrator evolution
- TurinTech -- "From Code Writers to Agent Orchestrators"
- Anthropic Report -- "Orchestration Shift" is the defining trend of 2026
- DevOps.com -- 96% of developers excited about AI, focus shifting to "impact over output"
What the Best Workshops Cover
What We Cover Well
| Topic | Our Coverage | Industry Standard |
|---|---|---|
| CLAUDE.md | Detailed section with templates | Standard topic in all courses |
| Subagents | Covered in power features | Key differentiator for advanced courses |
| Memory/memory.md | Full explanation | Mentioned in most courses |
| MCP fundamentals | Good coverage | Universal topic |
| Mindset shift | Strong "AI Delivers / Human Decides" | Industry is moving here |
| Hands-on labs | 6 labs throughout | All good courses are project-based |
Gaps We Should Address
| Gap | What Industry Covers | Our Current State | Status |
|---|---|---|---|
| Hooks | O'Reilly, Udemy courses emphasize automation via hooks | Covered in docs/06-claude-code.md; practiced in Lab 3 Part B | Done |
| "Big Prompts" / Prompt Engineering | Vanderbilt course emphasizes writing prompts that build entire features | Full section in docs/04-principles.md + 10 patterns in resources/prompt-patterns.md | Done |
| Best of N Pattern | Vanderbilt: Generate 3-5 versions, cherry-pick best | Principle #6 in docs/04-principles.md; Pattern #6 in resources/prompt-patterns.md | Done |
| Plan-Act-Check Framework | CBT Nuggets uses this as its core methodology | We have Plan vs Execute mode | Consider adopting this clearer framework |
| Branch-First Workflow | O'Reilly emphasizes safe experimentation via git branches | Principle #7 in docs/04-principles.md | Done |
| Custom Slash Commands | Building your own commands | Lab 3 Part C creates two custom commands (/review-security, /analyze-failures) | Done |
| Multi-Agent Orchestration | Anthropic report: 57% of orgs use multi-step workflows | Subagents covered in docs/06-claude-code.md and Lab 3 Part A | Partial |
| Shipped Project | Dan Shipper: "Leave with a shipped project" | Lab 4 Part E deploys to Vercel/Netlify preview URL | Done |
| Cost/Token Management | Most courses cover this | Brief mention | Good as is for a 1.5-day workshop |
Things We Do Better Than Most
| Our Strength | Why It's Better |
|---|---|
| "AI Delivers / Human Decides" Model | Most courses don't have a clear framework like your diagram |
| Proving AI Can Complete Full Tasks | Few workshops explicitly demonstrate this as a goal |
| Productivity Gains by Phase | Our 60/60/30/50/70/40/50 breakdown is unique and compelling |
| Developer Role Shift Focus | Most courses focus on tools, not transformation |
| Compound Engineering + Compound AI Systems | Sophisticated framework thinking |
| Figma-to-Code Pipeline | Practical, real-world workflow most courses skip |
| Design-to-Code Handover Focus | Unique to your workshop |
Key Insights from Research
1. The Orchestrator Shift is Real
From Anthropic's 2026 Report:
"The human role has evolved from 'writer' to 'reviewer and validator.' Engineers are spending less time on syntax and more time coordinating teams of specialized agents."
Our alignment: Strong. Our "Plan → Define → Validate" model matches this exactly.
2. Productivity Numbers to Reference
| Source | Claim |
|---|---|
| Anthropic Report | TELUS saved 500,000 engineering hours |
| Anthropic Report | Rakuten: 99.9% accuracy on massive codebase migrations |
| Tweag Experiment | AI team delivered in half the time with 30% fewer engineers |
| Vanderbilt Course | "1000X Productivity" (marketing, but directionally true) |
Recommendation: Include 1-2 of these case studies in Session 1.
3. The "Shipped Project" Standard
Dan Shipper's Claude 101: "Learn Claude Code in one day. Leave with a shipped project."
Our current state: Lab 6 builds a prototype but doesn't necessarily "ship" it.
Recommendation: Make the Day 2 outcome more concrete -- "Leave with a deployed component."
4. Hooks Are Important
From O'Reilly course:
"Automate workflows with Hooks, customize output styles for different contexts"
From Udemy course:
"Hooks: Unlock the ultimate automation tool. Create shell commands that trigger at specific events in Claude's lifecycle."
Status: Now covered in docs/06-claude-code.md and practiced hands-on in Lab 3 Part B (auto-test hook + failure logging).
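For reference, a minimal sketch of the kind of hook Lab 3 Part B builds toward, placed in the `hooks` block of a project's .claude/settings.json. The PostToolUse event and matcher/command nesting follow Claude Code's documented hooks settings; the specific matcher and the `npm test` command are illustrative placeholders, not the lab's exact configuration.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm test" }
        ]
      }
    ]
  }
}
```

With this in place, every file edit Claude makes is followed by the test suite (the "auto-test" half of the lab); failure logging would hang off the same event with a command that appends results to a log file.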
5. The "Best of N" Pattern
From Vanderbilt/Coursera:
"Use the 'Best of N' pattern with Claude Code to generate 3-5 versions of every feature and cherry-pick the best parts"
Status: Added as Principle #6 in docs/04-principles.md and Pattern #6 in resources/prompt-patterns.md.
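As a concrete illustration, a Best of N prompt might look like the following (the feature and the three approaches are hypothetical, not from our lab material):

```text
Generate three independent implementations of the date-range filter:
A) local component state, B) a shared store, C) URL query params.
Label them A, B, and C, and end with a short comparison of trade-offs
so I can cherry-pick the best parts.
```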
6. Prompt Engineering Still Matters
From Tweag:
"Prompting is not about being clever. It's about being precise, scoped, and iterative."
Strategies they teach:
- Three Experts pattern
- Chain prompting
- Action-oriented instructions
Status: Added as a full resource at resources/prompt-patterns.md with 10 patterns, plus summary in docs/04-principles.md.
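For flavor, one hypothetical rendering of the Three Experts pattern as a prompt (the scenario is invented; resources/prompt-patterns.md has the canonical versions):

```text
Act as three experts reviewing this change: a security engineer, a
performance engineer, and an accessibility specialist. Each expert
lists their top two concerns about the new upload form, then the
three of you agree on a single prioritized fix list before any code
is written.
```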
Recommended Updates to Workshop
High Priority (Definitely Add)
- Hooks Section
  - What: Shell commands triggered at lifecycle events
  - Why: Key automation feature, covered by all advanced courses
  - Where: Add to Claude Code Power Features section
- "Best of N" Pattern
  - What: Generate multiple versions, cherry-pick best
  - Why: Vanderbilt emphasizes this as a key technique
  - Where: Add to Agentic Engineering Principles
- Prompt Patterns
  - What: Explicit patterns for effective prompting
  - Why: Industry standard content
  - Where: New subsection or expand existing principles
- Case Studies / Proof Points
  - What: TELUS (500K hours), Rakuten (99.9% accuracy), Tweag (2x speed)
  - Why: Makes "the proof" more credible
  - Where: Session 1 and Session 11
Medium Priority (Consider Adding)
- Branch-First Workflow -- Always create a branch before major changes. Safety net for experimentation. Fits naturally with checkpoints section.
- Custom Slash Commands -- Building team-specific commands. Examples: /review-security, /generate-tests. Extends the slash commands section (see the sketch after this list).
- Plan-Act-Check Framework -- Alternative framing to our current approach. Simple, memorable: Plan what to do → Act with AI → Check the output. Consider adopting or referencing.
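A minimal sketch of how a custom command like /review-security is defined: Claude Code picks up Markdown files in .claude/commands/, the filename becomes the command name, and $ARGUMENTS is substituted from whatever follows the command. The checklist items below are illustrative assumptions, not the Lab 3 version.

```markdown
<!-- .claude/commands/review-security.md -->
Review the changes in $ARGUMENTS for security issues.
Check specifically for:
- unvalidated user input reaching queries or shell commands
- secrets or tokens committed in code or config
- missing authorization checks on new endpoints
Report findings as a prioritized list with file and line references.
```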
Lower Priority (Nice to Have)
- Plugins (Enterprise) -- O'Reilly covers this for enterprise standardization. May be overkill for a 7-person team.
- Skills (Claude Code feature) -- Persistent domain expertise. Advanced feature, may be too much.
Competitive Comparison
| Aspect | Our Workshop | Coursera (Vanderbilt) | O'Reilly | DAIR.AI |
|---|---|---|---|---|
| Duration | 1.5 days | Self-paced course | 1-day live | 4 sessions / 2 weeks |
| Focus | Mindset + Skills | Productivity patterns | Enterprise features | Build & deploy |
| MCP Coverage | Figma + Playwright | Not emphasized | Deep MCP | Exa (web search) |
| Unique Value | "AI Delivers/Human Decides" | "Big Prompts" | Hooks, Skills, Plugins | Ship an AI app |
| Price | Custom | ~$50/month | ~$500 (live) | $249 |
| Hands-on | 6 labs | Project-based | Exercises | Build sessions |
Our differentiators:
- Mindset-first approach (not just tool training)
- Clear framework (AI Delivers / Human Decides)
- Design-to-code workflow
- In-person, team-focused
- Living framework output
Summary: Recommended Changes
Must Add
- Hooks (automation triggers) -- docs/06-claude-code.md + Lab 3 Part B
- Best of N pattern (generate multiple versions) -- Principle #6 + Pattern #6
- Prompt patterns (explicit techniques) -- resources/prompt-patterns.md (10 patterns)
- Industry case studies (TELUS, Rakuten, Tweag) -- referenced in workshop-plan.md
Should Add
- Branch-first workflow -- Principle #7 in docs/04-principles.md
- Custom slash commands examples -- Lab 3 Part C (/review-security, /analyze-failures)
- "Shipped project" as explicit Day 2 outcome -- Lab 4 Part E deploys to preview URL
Our Unique Strengths (Keep/Emphasize)
- "AI Delivers / Human Decides" framework
- Developer role shift focus
- Productivity gains by phase
- Compound Engineering + Compound AI Systems
- Design-to-code pipeline
- Living framework output
Validation: We're on the Right Track
The research strongly validates our core approach:
- The orchestrator shift is THE trend -- Every major source confirms developers are moving from "writing code" to "orchestrating agents"
- Mindset matters more than mechanics -- Courses that focus only on features underperform those that address the mental model shift
- "AI can complete full tasks" needs proving -- This is still controversial, and workshops that demonstrate it stand out
- Frameworks are memorable -- "Plan-Act-Check," "Best of N," and our "AI Delivers / Human Decides" give people mental models to take back
- Hands-on is essential -- Every successful program is project-based
Our workshop is well-positioned. With Hooks, Best of N, and prompt patterns now covered, the main gaps with industry standards are closed.