
Context Windows Reference

Understanding model context limits to work more effectively with AI coding tools

Model Comparison

| Model | Context Window | Approx. Lines of Code | Best For |
|-------|----------------|-----------------------|----------|
| Claude Sonnet 4 | 200K tokens | ~15,000 lines | Daily coding, fast responses |
| Claude Opus 4 | 200K tokens | ~15,000 lines | Complex reasoning, architecture |
| GPT-4o | 128K tokens | ~10,000 lines | General tasks, multimodal |
| GPT-4 Turbo | 128K tokens | ~10,000 lines | Long documents, analysis |
| Gemini 2.0 Pro | 2M tokens | ~150,000 lines | Entire codebases, massive context |

(Line estimates assume the ~10-15 tokens per line of code derived in the next section.)
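The table above can be turned into a quick capacity check. This is a minimal sketch: the context sizes are the published limits from the table, while the model keys, function name, and the 20% response headroom are illustrative choices, not part of any official API.

```python
# Rough capacity check based on the model comparison table.
# Keys and the headroom default are illustrative assumptions.

CONTEXT_WINDOWS = {
    "claude-sonnet-4": 200_000,
    "claude-opus-4": 200_000,
    "gpt-4o": 128_000,
    "gpt-4-turbo": 128_000,
    "gemini-2.0-pro": 2_000_000,
}

def fits_in_context(model: str, estimated_tokens: int, headroom: float = 0.2) -> bool:
    """True if the input leaves `headroom` fraction of the window free for the response."""
    limit = CONTEXT_WINDOWS[model]
    return estimated_tokens <= limit * (1 - headroom)
```

For example, `fits_in_context("gpt-4o", 120_000)` returns `False`: the input would fit the raw 128K window but leave no room for a response.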

Token Estimation

Rules of Thumb

By Language (approx. tokens per line)

| Language | Tokens/Line |
|----------|-------------|
| Python | 8-12 |
| JavaScript | 10-14 |
| TypeScript | 12-16 |
| Java | 14-18 |
| HTML | 15-25 |
| CSS | 8-12 |
| JSON | 10-15 |
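The per-language ranges above support a back-of-the-envelope estimator. A minimal sketch, assuming the ranges in the table; the dictionary and function name are illustrative, and real token counts depend on the model's tokenizer.

```python
# Back-of-the-envelope token estimate from a line count, using the
# per-language (low, high) tokens-per-line ranges from the table above.

TOKENS_PER_LINE = {
    "python": (8, 12),
    "javascript": (10, 14),
    "typescript": (12, 16),
    "java": (14, 18),
    "html": (15, 25),
    "css": (8, 12),
    "json": (10, 15),
}

def estimate_tokens(language: str, lines: int) -> tuple[int, int]:
    """Return a (low, high) token estimate for `lines` lines of code."""
    low, high = TOKENS_PER_LINE[language.lower()]
    return lines * low, lines * high
```

So 1,000 lines of Python estimates to roughly 8,000-12,000 tokens, matching the quick-calculation table below.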

Quick Calculations

| Content | Rough Token Count |
|---------|-------------------|
| 100 lines of code | ~1,000-1,500 tokens |
| 1,000 lines of code | ~10,000-15,000 tokens |
| 10,000 lines of code | ~100,000-150,000 tokens |
| Average React component | ~200-500 tokens |
| Typical test file | ~300-800 tokens |

Context Management

Check Your Usage

/cost

Reports token usage and cost for the current session.

Reduce Context Size

/compact

Summarizes the conversation history in place, freeing context while keeping the important details.

Clear and Start Fresh

/clear

Wipes the conversation history and starts with an empty context.

Strategies for Large Codebases

1. Use CLAUDE.md

Provide project context upfront so it doesn't need to be repeated.
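As a sketch, a CLAUDE.md can capture the facts you would otherwise re-explain every session. Everything in this example (project name, paths, commands) is hypothetical:

```markdown
# MyApp (example project)

## Architecture
- src/api/: Express REST endpoints
- src/auth/: session handling
- src/ui/: React components

## Conventions
- Run `npm test` before committing
- Use the existing logger in src/lib/log.ts
```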

2. Selective File Loading

Instead of loading everything:

You: "Read only the files in src/auth/ before we work on authentication."

3. Work in Focused Sessions

Complete one feature, commit, then start fresh for the next.

4. Summarize Instead of Include

You: "I have a User model with fields: id, email, name, createdAt, updatedAt.
     Don't read the file, just use this information."

5. Use Subagents

Each subagent gets its own context window, so parallel tasks don't consume the main session's context.

When You're Running Out of Context

Signs

  - The model forgets earlier instructions or decisions
  - Responses become less coherent or repeat prior work
  - You find yourself re-explaining the same context

Solutions

  1. Run /compact to summarize
  2. Commit current work and /clear
  3. Break large tasks into smaller sessions
  4. Update CLAUDE.md with learnings

Model Selection Tips

| Scenario | Recommended Model |
|----------|-------------------|
| Quick fixes, small tasks | Sonnet (fast, cheap) |
| Complex architecture | Opus (better reasoning) |
| Massive codebase | Gemini 2.0 Pro (huge context) |
| Code + images | GPT-4o (multimodal) |
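The selection table above can be read as a simple decision procedure. A minimal sketch: the ordering of the checks and the model identifiers are illustrative, not an official routing API.

```python
# Illustrative model chooser mirroring the selection table above.
# Checks are ordered by hardest constraint first (context size),
# then capability requirements, then cost.

def recommend_model(needs_images: bool, complex_reasoning: bool,
                    estimated_tokens: int) -> str:
    if estimated_tokens > 200_000:
        return "gemini-2.0-pro"   # only option here with a 2M window
    if needs_images:
        return "gpt-4o"           # multimodal
    if complex_reasoning:
        return "claude-opus-4"    # better reasoning
    return "claude-sonnet-4"      # fast, cheap default
```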

Cost Awareness

Context isn't free -- larger context = more tokens = higher cost.
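The cost relationship is linear in tokens, which makes it easy to sketch. The per-1K-token price below is a made-up placeholder, not a real rate; check your provider's pricing page before relying on any number.

```python
# Cost sketch: tokens / 1000 * price-per-1K-tokens.
# PRICE_PER_1K_INPUT is a hypothetical placeholder rate.

PRICE_PER_1K_INPUT = 0.003  # placeholder $/1K input tokens

def estimate_cost(input_tokens: int, price_per_1k: float = PRICE_PER_1K_INPUT) -> float:
    """Estimated input cost in dollars for one request."""
    return input_tokens / 1000 * price_per_1k
```

At this placeholder rate a 100K-token context costs `estimate_cost(100_000)` = $0.30 per request, so halving the context with /compact halves that term on every subsequent turn.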

Optimize By

  1. Loading only the files a task actually needs
  2. Running /compact when the conversation grows long
  3. Choosing the cheapest model that handles the task
  4. Working in focused sessions instead of one long conversation