Workshop Research Analysis

Comparing RAB2B Workshop to industry best practices

Key Sources Reviewed

Claude Code Training Programs

  1. Anthropic's Official Course (Skilljar) -- Context management, MCP servers, GitHub integration
  2. Vanderbilt/Coursera -- "Big prompts," Best of N pattern, CLAUDE.md, parallel development
  3. DAIR.AI -- Build sessions, AI app deployment, 4 sessions over 2 weeks
  4. O'Reilly (Ken Kousen) -- Skills, Hooks, Subagents, Plugins, enterprise standardization
  5. Udemy courses -- Multiple beginner-to-pro courses focusing on practical projects
  6. Dan Shipper's Claude 101 -- "Learn Claude Code in one day. Leave with a shipped project."
  7. IndyDevDan's Tactical Agentic Coding -- Agent orchestration, parallel execution, production deployment

Industry Frameworks & Trends

  1. Anthropic's 2026 Agentic Coding Trends Report -- The definitive industry document
  2. Tweag's Agentic Coding Handbook -- Enterprise experiment: AI team delivered in half the time with 30% fewer engineers
  3. Armin Ronacher's Recommendations -- Practical MCP advice, language choices

The "Orchestrator Shift"

  1. "From Coder to Orchestrator" (Human Who Codes) -- Coder → Conductor → Orchestrator evolution
  2. TurinTech -- "From Code Writers to Agent Orchestrators"
  3. Anthropic Report -- "Orchestration Shift" is the defining trend of 2026
  4. DevOps.com -- 96% of developers excited about AI, focus shifting to "impact over output"

What the Best Workshops Cover

We Have This Well

| Topic | Our Coverage | Industry Standard |
|---|---|---|
| CLAUDE.md | Detailed section with templates | Standard topic in all courses |
| Subagents | Covered in power features | Key differentiator for advanced courses |
| Memory/memory.md | Full explanation | Mentioned in most courses |
| MCP fundamentals | Good coverage | Universal topic |
| Mindset shift | Strong "AI Delivers / Human Decides" | Industry is moving here |
| Hands-on labs | 6 labs throughout | All good courses are project-based |

Gaps We Should Address

| Gap | What Industry Covers | Our Current State | Status |
|---|---|---|---|
| Hooks | O'Reilly, Udemy courses emphasize automation via hooks | Covered in docs/06-claude-code.md; practiced in Lab 3 Part B | Done |
| "Big Prompts" / Prompt Engineering | Vanderbilt course emphasizes writing prompts that build entire features | Full section in docs/04-principles.md + 10 patterns in resources/prompt-patterns.md | Done |
| Best of N Pattern | Vanderbilt: Generate 3-5 versions, cherry-pick best | Principle #6 in docs/04-principles.md; Pattern #6 in resources/prompt-patterns.md | Done |
| Plan-Act-Check Framework | CBT Nuggets uses this as core methodology | We have Plan vs Execute mode | Consider adopting this clearer framework |
| Branch-First Workflow | O'Reilly emphasizes safe experimentation via git branches | Principle #7 in docs/04-principles.md | Done |
| Custom Slash Commands | Building your own commands | Lab 3 Part C creates two custom commands (/review-security, /analyze-failures) | Done |
| Multi-Agent Orchestration | Anthropic report: 57% of orgs use multi-step workflows | Subagents covered in docs/06-claude-code.md and Lab 3 Part A | Partial |
| Shipped Project | Dan Shipper: "Leave with a shipped project" | Lab 4 Part E deploys to Vercel/Netlify preview URL | Done |
| Cost/Token Management | Most courses cover this | Brief mention | Good as is for 1.5-day workshop |

Things We Do Better Than Most

| Our Strength | Why It's Better |
|---|---|
| "AI Delivers / Human Decides" Model | Most courses don't have a clear framework like our diagram |
| Proving AI Can Complete Full Tasks | Few workshops explicitly demonstrate this as a goal |
| Productivity Gains by Phase | Our 60/60/30/50/70/40/50 breakdown is unique and compelling |
| Developer Role Shift Focus | Most courses focus on tools, not transformation |
| Compound Engineering + Compound AI Systems | Sophisticated framework thinking |
| Figma-to-Code Pipeline | Practical, real-world workflow most courses skip |
| Design-to-Code Handover Focus | Unique to our workshop |

Key Insights from Research

1. The Orchestrator Shift is Real

From Anthropic's 2026 Report:

Key Quote

"The human role has evolved from 'writer' to 'reviewer and validator.' Engineers are spending less time on syntax and more time coordinating teams of specialized agents."

Our alignment: Strong. Our "Plan → Define → Validate" model matches this exactly.

2. Productivity Numbers to Reference

| Source | Claim |
|---|---|
| Anthropic Report | TELUS saved 500,000 engineering hours |
| Anthropic Report | Rakuten: 99.9% accuracy on massive codebase migrations |
| Tweag Experiment | AI team delivered in half the time with 30% fewer engineers |
| Vanderbilt Course | "1000X Productivity" (marketing, but directionally true) |

Recommendation: Include 1-2 of these case studies in Session 1.

3. The "Shipped Project" Standard

Dan Shipper's Claude 101: "Learn Claude Code in one day. Leave with a shipped project."

Our current state: Lab 6 builds a prototype but doesn't necessarily "ship" it.

Recommendation: Make the Day 2 outcome more concrete -- "Leave with a deployed component."

4. Hooks Are Important

From O'Reilly course:

Key Quote

"Automate workflows with Hooks, customize output styles for different contexts"

From Udemy course:

Key Quote

"Hooks: Unlock the ultimate automation tool. Create shell commands that trigger at specific events in Claude's lifecycle."

Status: Now covered in docs/06-claude-code.md and practiced hands-on in Lab 3 Part B (auto-test hook + failure logging).
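
The lifecycle-hook idea above can be made concrete with a minimal sketch. Claude Code reads hook configuration from `.claude/settings.json`; the event name (`PostToolUse`), matcher, and command below mirror the auto-test hook described for Lab 3 Part B, but treat the exact field names as an assumption to verify against the current Claude Code documentation.

```shell
# Sketch of an auto-test hook: after Claude edits or writes a file,
# run the test suite. The JSON shape (PostToolUse / matcher / command)
# follows Claude Code's hooks settings format; field names may differ
# across versions, so check the docs for your install.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "npm test --silent" }]
      }
    ]
  }
}
EOF
```

A failure-logging hook (the second half of Lab 3 Part B) would follow the same shape, with the command redirecting test output to a log file.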

5. The "Best of N" Pattern

From Vanderbilt/Coursera:

Key Quote

"Use the 'Best of N' pattern with Claude Code to generate 3-5 versions of every feature and cherry-pick the best parts"

Status: Added as Principle #6 in docs/04-principles.md and Pattern #6 in resources/prompt-patterns.md.
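
As a sketch, the pattern can even be scripted: run a generator N times with variant prompts and save each candidate for side-by-side review. The `best_of_n` helper and the `claude -p` wiring shown here are illustrative assumptions, not part of the Vanderbilt material.

```shell
# Best of N sketch: call a generator command N times, nudging each run
# toward a different approach, and save candidates for cherry-picking.
best_of_n() {
  gen="$1"; n="$2"; prompt="$3"
  i=1
  while [ "$i" -le "$n" ]; do
    # Intentionally unquoted so multi-word generators split into args.
    $gen "$prompt (variant $i: take a different approach)" > "candidate-$i.txt"
    i=$((i + 1))
  done
}

# Stand-in generator for illustration; in practice something like:
#   best_of_n "claude -p" 3 "Implement the search filter component"
best_of_n echo 3 "Implement the search filter component"
```

The cherry-picking step stays human: diff the candidates and merge the best parts by hand.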

6. Prompt Engineering Still Matters

From Tweag:

Key Quote

"Prompting is not about being clever. It's about being precise, scoped, and iterative."

Status: Added as a full resource at resources/prompt-patterns.md with 10 patterns, plus summary in docs/04-principles.md.

Recommended Updates to Workshop

High Priority (Definitely Add)

  1. Hooks Section
    What: Shell commands triggered at lifecycle events
    Why: Key automation feature, covered by all advanced courses
    Where: Add to Claude Code Power Features section
  2. "Best of N" Pattern
    What: Generate multiple versions, cherry-pick best
    Why: Vanderbilt emphasizes this as key technique
    Where: Add to Agentic Engineering Principles
  3. Prompt Patterns
    What: Explicit patterns for effective prompting
    Why: Industry standard content
    Where: New subsection or expand existing principles
  4. Case Studies / Proof Points
    What: TELUS (500K hours), Rakuten (99.9% accuracy), Tweag (2x speed)
    Why: Makes "the proof" more credible
    Where: Session 1 and Session 11

Medium Priority (Consider Adding)

  1. Branch-First Workflow -- Always create a branch before major changes. Safety net for experimentation. Fits naturally with checkpoints section.
  2. Custom Slash Commands -- Building team-specific commands. Example: /review-security, /generate-tests. Extends slash commands section.
  3. Plan-Act-Check Framework -- Alternative framing to our current approach. Simple, memorable: Plan what to do → Act with AI → Check the output. Consider adopting or referencing.
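
The custom-command item above takes only a few lines in practice: Claude Code picks up project slash commands from markdown files under `.claude/commands/`, with the file name becoming the command name. The prompt text below is a hypothetical stand-in for the `/review-security` command, not the Lab 3 version.

```shell
# Sketch: define a project-level /review-security slash command.
# Claude Code reads custom commands from .claude/commands/*.md;
# the file name (minus .md) becomes the slash command name.
mkdir -p .claude/commands
cat > .claude/commands/review-security.md <<'EOF'
Review the diff on the current branch for security issues:
- injection risks, unsafe deserialization, hard-coded secrets
- missing input validation on new endpoints
Report findings grouped by severity, with suggested fixes.
EOF
```

Committing the `.claude/commands/` directory shares the command with the whole team, which is what makes it a team-specific workflow rather than a personal shortcut.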

Lower Priority (Nice to Have)

  1. Plugins (Enterprise) -- O'Reilly covers this for enterprise standardization. May be overkill for 7-person team.
  2. Skills (Claude Code feature) -- Persistent domain expertise. Advanced feature, may be too much.

Competitive Comparison

| Aspect | Our Workshop | Coursera (Vanderbilt) | O'Reilly | DAIR.AI |
|---|---|---|---|---|
| Duration | 1.5 days | Self-paced course | 1-day live | 4 sessions / 2 weeks |
| Focus | Mindset + Skills | Productivity patterns | Enterprise features | Build & deploy |
| MCP Coverage | Figma + Playwright | Not emphasized | Deep MCP | Exa (web search) |
| Unique Value | "AI Delivers/Human Decides" | "Big Prompts" | Hooks, Skills, Plugins | Ship an AI app |
| Price | Custom | ~$50/month | ~$500 (live) | $249 |
| Hands-on | 6 labs | Project-based | Exercises | Build sessions |

Our differentiators:

  1. Mindset-first approach (not just tool training)
  2. Clear framework (AI Delivers / Human Decides)
  3. Design-to-code workflow
  4. In-person, team-focused
  5. Living framework output

Validation: We're on the Right Track

The research strongly validates our core approach:

  1. The orchestrator shift is THE trend -- Every major source confirms developers are moving from "writing code" to "orchestrating agents"
  2. Mindset matters more than mechanics -- Courses that focus only on features underperform those that address the mental model shift
  3. "AI can complete full tasks" needs proving -- This is still controversial, and workshops that demonstrate it stand out
  4. Frameworks are memorable -- "Plan-Act-Check," "Best of N," and our "AI Delivers / Human Decides" give people mental models to take back
  5. Hands-on is essential -- Every successful program is project-based

Bottom Line

Our workshop is well-positioned. Adding Hooks, Best of N, and prompt patterns would close the main gaps with industry standards.