Your AI assistant is answering questions from your knowledge base.
Sometimes it gives detailed, accurate responses. Sometimes it cuts off mid-sentence or forgets key context.
You loaded 50 documents into the prompt. The AI ignored half of them and rambled through the other half.
The problem is not the AI. The problem is how you are spending your tokens.
Every AI prompt has a fixed budget. How you spend it determines the quality of what you get back.
Token budgeting lives in the intelligence layer: it happens during context assembly, before the prompt is ever sent to the AI.
Every AI model has a context window limit. GPT-4 Turbo allows 128k tokens. Claude allows 200k. But just because you CAN use all those tokens does not mean you should. More context is not always better context.
Token budgeting is deciding how to allocate your available tokens across four competing areas: system instructions (who the AI is and how it should behave), examples (demonstrations of good responses), retrieved context (knowledge from your database), and output (room for the AI to generate its response).
The goal is not maximum context. The goal is maximum signal per token. A well-budgeted 4k token prompt often outperforms a bloated 100k token prompt.
Token budgeting solves a universal problem: when resources are limited, you must prioritize. The highest-value items get allocated first; lower-value items get what remains.
Fixed budget. Competing demands. Allocate based on value, not on arrival order or equal distribution. What matters most gets funded first.
The budget splits across four areas:

System instructions: who the AI is and how it behaves. Your system prompt defines the AI persona, rules, and constraints. This is typically fixed for a given application. Budget 500-2000 tokens depending on complexity.

Examples: demonstrations of good responses. Examples show the AI what good output looks like. Include 1-3 high-quality examples rather than many mediocre ones. Budget 0-1500 tokens based on task complexity.

Retrieved context: knowledge from your database. Documents, facts, and data retrieved from your knowledge base are usually the largest allocation. Budget whatever remains after the other allocations.

Output: room for the AI to generate. Reserve tokens for the AI to produce its response; if you spend everything on input, the AI has no room to answer. Budget 500-4000 tokens depending on expected response length.
Token budgeting sits at the center of context engineering. It takes input from chunking (what pieces exist) and reranking (which are most relevant), then determines how much of each to include before assembly.
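In code, that hand-off might look like this sketch. It assumes reranking already produced (score, text) pairs and that count_tokens wraps whichever tokenizer your model uses; both names are hypothetical:

```python
from typing import Callable

def pack_chunks(ranked_chunks: list[tuple[float, str]],
                context_budget: int,
                count_tokens: Callable[[str], int]) -> list[str]:
    """Greedily pack the highest-scoring chunks until the budget is spent."""
    selected, spent = [], 0
    for score, text in sorted(ranked_chunks, key=lambda c: c[0], reverse=True):
        cost = count_tokens(text)
        if spent + cost <= context_budget:
            selected.append(text)
            spent += cost
    return selected
```

Greedy packing by rerank score is the simplest policy. The point is that the budget, not the retriever, decides where the cut-off falls.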
You have 128k tokens available, so you use 127k on context, leaving 1k for output. The AI starts answering, then cuts off mid-sentence. Your users see incomplete, useless responses.
Instead: Always reserve output space first. Work backwards from how long responses need to be, then allocate the rest to context.
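A small guard makes the working-backwards rule explicit. This is a sketch with illustrative defaults; check_fits is a hypothetical helper:

```python
def check_fits(prompt_tokens: int,
               window: int = 128_000,
               output_reserve: int = 4_000) -> int:
    """Return the input budget; fail loudly if the prompt overruns it."""
    input_budget = window - output_reserve
    if prompt_tokens > input_budget:
        raise ValueError(
            f"Prompt is {prompt_tokens} tokens but only {input_budget} fit "
            f"after reserving {output_reserve} for output"
        )
    return input_budget
```

Most chat APIs also let you cap output length directly (for example, via a max-tokens parameter), which enforces the reservation at generation time.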
You split tokens evenly: 25% system, 25% examples, 25% context, 25% output. Now your simple FAQ bot wastes tokens on 20 examples while your complex analysis tool gets too little context.
Instead: Allocate based on task needs. Simple tasks need minimal examples, complex tasks need more context. There is no universal split.
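One way to encode "no universal split" is a per-task profile table. The figures below are illustrative assumptions, not recommendations:

```python
# Hypothetical profiles: the point is that the ratios differ by task.
BUDGET_PROFILES = {
    # Simple lookup: tiny system prompt, one short example, short answers.
    "faq_bot":  {"system": 500,   "examples": 300,   "output": 500},
    # Deep analysis: detailed instructions, rich examples, long answers.
    "analysis": {"system": 2_000, "examples": 1_500, "output": 4_000},
}

def context_budget(task: str, window: int = 128_000) -> int:
    fixed = sum(BUDGET_PROFILES[task].values())
    return window - fixed  # whatever remains funds retrieved context
```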
You budget 4000 tokens for context and add JSON with lots of curly braces, colons, and whitespace. Actual content is only 2000 tokens. Half your budget went to syntax.
Instead: Use compact formats. Strip unnecessary fields. Count actual tokens, not characters. XML and JSON have significant overhead.
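Counting properly is cheap. This sketch uses the tiktoken library and its cl100k_base encoding to compare a pretty-printed JSON record against a compact plain-text rendering of the same facts; the record itself is made up:

```python
import json
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

record = {"product_name": "Widget", "price_usd": 9.99, "in_stock": True}

pretty = json.dumps(record, indent=2)  # verbose: braces, quotes, whitespace
compact = "Widget, $9.99, in stock"   # same facts, plain text

print(len(enc.encode(pretty)), "tokens as pretty JSON")
print(len(enc.encode(compact)), "tokens as compact text")
```

On records like this, the JSON version often costs a multiple of the compact one for the same information.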
You have learned how to allocate tokens across system prompts, examples, context, and output. The natural next step is learning how to fit more signal into fewer tokens.