## Model Comparison
| Model | Context Window | Approx. Lines of Code | Best For |
|---|---|---|---|
| Claude Sonnet 4 | 200K tokens | ~13,000-20,000 lines | Daily coding, fast responses |
| Claude Opus 4 | 200K tokens | ~13,000-20,000 lines | Complex reasoning, architecture |
| GPT-4o | 128K tokens | ~8,500-13,000 lines | General tasks, multimodal |
| GPT-4 Turbo | 128K tokens | ~8,500-13,000 lines | Long documents, analysis |
| Gemini 2.0 Pro | 2M tokens | ~130,000-200,000 lines | Entire codebases, massive context |
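Using the context windows from the table above and the 10-15 tokens/line rule from the next section, a minimal sketch of a "will this codebase fit?" check (the model keys are illustrative labels, and real limits may vary by API tier):

```python
# Context windows (in tokens) as listed in the table above.
# Keys are illustrative labels, not official API model IDs.
CONTEXT_WINDOWS = {
    "claude-sonnet-4": 200_000,
    "claude-opus-4": 200_000,
    "gpt-4o": 128_000,
    "gpt-4-turbo": 128_000,
    "gemini-2.0-pro": 2_000_000,
}

AVG_TOKENS_PER_LINE = 13  # midpoint of the ~10-15 tokens/line rule of thumb


def fits_in_context(model: str, lines_of_code: int, headroom: float = 0.8) -> bool:
    """Check whether a codebase likely fits, reserving headroom for the reply."""
    budget = CONTEXT_WINDOWS[model] * headroom
    return lines_of_code * AVG_TOKENS_PER_LINE <= budget


print(fits_in_context("claude-sonnet-4", 10_000))  # ~130K tokens vs. 160K budget
print(fits_in_context("gpt-4o", 10_000))           # ~130K tokens vs. ~102K budget
```

The `headroom` factor is a deliberate design choice: filling the window to the brim leaves no room for the model's response or for conversation history.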
## Token Estimation

### Rules of Thumb
- 1 token ≈ 4 characters (English text)
- 1 token ≈ 0.75 words
- 1 line of code ≈ 10-15 tokens (varies by language)
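The rules of thumb above translate directly into a quick back-of-the-envelope estimator. This is a rough sketch, not a real tokenizer; for exact counts use the provider's tokenizer library:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule."""
    return max(1, len(text) // 4)


def estimate_tokens_by_words(text: str) -> int:
    """Alternative estimate using the ~0.75 words per token rule
    (i.e. tokens ~= words / 0.75)."""
    return max(1, round(len(text.split()) / 0.75))


sample = "The quick brown fox jumps over the lazy dog."
print(estimate_tokens(sample))           # character-based estimate: 11
print(estimate_tokens_by_words(sample))  # word-based estimate: 12
```

The two estimates will rarely agree exactly; they bracket the true count well enough for deciding whether something fits in a context window.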
### By Language (approx. tokens per line)
| Language | Tokens/Line |
|---|---|
| Python | 8-12 |
| JavaScript | 10-14 |
| TypeScript | 12-16 |
| Java | 14-18 |
| HTML | 15-25 |
| CSS | 8-12 |
| JSON | 10-15 |
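A sketch of a per-language estimator built from the midpoints of the ranges in the table above (the midpoint values and function name are this example's own choices):

```python
# Midpoints of the tokens-per-line ranges from the table above.
TOKENS_PER_LINE = {
    "python": 10,
    "javascript": 12,
    "typescript": 14,
    "java": 16,
    "html": 20,
    "css": 10,
    "json": 12.5,
}


def estimate_file_tokens(line_count: int, language: str) -> int:
    """Estimate tokens for a file from its line count and language."""
    return round(line_count * TOKENS_PER_LINE[language.lower()])


print(estimate_file_tokens(500, "python"))      # ~5,000 tokens
print(estimate_file_tokens(500, "typescript"))  # ~7,000 tokens
```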
### Quick Calculations
| Content | Rough Token Count |
|---|---|
| 100 lines of code | ~1,000-1,500 tokens |
| 1,000 lines of code | ~10,000-15,000 tokens |
| 10,000 lines of code | ~100,000-150,000 tokens |
| Average React component | ~200-500 tokens |
| Typical test file | ~300-800 tokens |
## Context Management

### Check Your Usage

`/cost`

### Reduce Context Size

`/compact`

### Clear and Start Fresh

`/clear`
## Strategies for Large Codebases

### 1. Use CLAUDE.md

Provide project context upfront so it doesn't need to be repeated.

### 2. Selective File Loading

Instead of loading everything:

> You: "Read only the files in src/auth/ before we work on authentication."

### 3. Work in Focused Sessions

Complete one feature, commit, then start fresh for the next.

### 4. Summarize Instead of Include

> You: "I have a User model with fields: id, email, name, createdAt, updatedAt. Don't read the file, just use this information."

### 5. Use Subagents

Parallel tasks use separate context windows.
## When You're Running Out of Context

### Signs

- Claude forgets earlier instructions
- Responses become less coherent
- `/cost` shows high token usage
### Solutions

- Run `/compact` to summarize
- Commit current work and `/clear`
- Break large tasks into smaller sessions
- Update CLAUDE.md with learnings
## Model Selection Tips
| Scenario | Recommended Model |
|---|---|
| Quick fixes, small tasks | Sonnet (fast, cheap) |
| Complex architecture | Opus (better reasoning) |
| Massive codebase | Gemini 2.0 Pro (huge context) |
| Code + images | GPT-4o (multimodal) |
## Cost Awareness

Context isn't free: a larger context means more tokens, and more tokens mean higher cost.
### Optimize By

- Using CLAUDE.md for persistent context
- Running `/compact` regularly
- Being specific about which files to read
- Clearing context between unrelated tasks
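To see why a bloated context matters, a minimal cost sketch. The per-million-token rates below are purely illustrative assumptions for the example, not any provider's published pricing; check your provider's current rates:

```python
# Illustrative rates only -- NOT real pricing; check your provider.
INPUT_RATE_PER_M = 3.00    # assumed $ per 1M input tokens
OUTPUT_RATE_PER_M = 15.00  # assumed $ per 1M output tokens


def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the assumed rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000


# Re-sending a 50K-token context with every turn adds up quickly:
print(round(session_cost(50_000, 2_000), 2))  # 0.18 per turn at these rates
```

Note that the full context is re-sent as input on every turn, so trimming context with `/compact` or CLAUDE.md reduces cost on every subsequent request, not just once.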