Your inbox has 200 items. Some are password resets. Some require 3 hours of research and a team decision.
They all look the same. They all sit in the same queue. Your best people spend half their day on tasks anyone could handle.
The problem is not volume. It is that simple and complex work are treated identically until a human looks at them.
CLASSIFICATION PATTERN - The intelligence layer that separates work requiring expertise from work requiring execution.
Complexity scoring assigns a difficulty rating to incoming requests, documents, or tasks before any human sees them. It looks at factors like the number of entities involved, the ambiguity of the language, whether multiple systems are affected, and historical patterns of similar requests.
A password reset scores low. A complaint referencing three different orders, two payment methods, and a pending refund scores high. The score determines what happens next: automated handling, junior team member, senior specialist, or escalation.
Without complexity scoring, your most expensive people waste time on tasks your cheapest automation could handle.
Complexity scoring solves a universal problem: matching work difficulty to the appropriate resource level so nothing is over-handled or under-handled.
Analyze incoming work for complexity indicators. Assign a score or tier. Route to the appropriate handler based on that tier. Track outcomes to refine scoring over time. This pattern applies whether you are routing support requests, reviewing documents, or triaging any queue.
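Here is a minimal sketch of that loop in Python. The thresholds, tier names, and handle() helper are illustrative, not a fixed recipe; the scoring function can come from any of the approaches below.

```python
from typing import Callable

# Illustrative thresholds and tier names; tune them against outcome data.
def route(points: int) -> str:
    if points <= 1:
        return "automated"    # e.g. password resets
    if points <= 3:
        return "junior"
    if points <= 5:
        return "senior"
    return "escalation"

outcomes: list[tuple[str, float]] = []   # (tier, actual resolution minutes)

def handle(request: str, score: Callable[[str], int]) -> str:
    """Score, route, and remember the decision so scoring can be refined."""
    tier = route(score(request))
    # ...dispatch to the tier's queue; record actual effort when resolved...
    return tier
```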
Rule-based: count known complexity indicators
Define rules that add points for complexity signals: multiple entities mentioned (+2), references to past interactions (+1), multiple departments involved (+3), uncertain language (+1). Sum the points for a complexity score.
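A sketch of those rules in code. The regexes and department keywords are crude stand-ins for whatever entity and intent extraction you actually have.

```python
import re

def rule_based_score(text: str) -> int:
    """Sum points for the signals above; patterns are illustrative."""
    points = 0
    t = text.lower()
    # Multiple entities mentioned (+2): here, more than one order number.
    if len(re.findall(r"order\s*#?\d+", t)) > 1:
        points += 2
    # References to past interactions (+1).
    if re.search(r"last time|again|previous ticket", t):
        points += 1
    # Involves multiple departments (+3).
    if sum(d in t for d in ("billing", "shipping", "fraud", "refunds")) > 1:
        points += 3
    # Uncertain language (+1).
    if re.search(r"not sure|maybe|i think", t):
        points += 1
    return points

print(rule_based_score(
    "Same problem as last time: order #123 and order #456, billing says refunds pending"
))  # -> 6
```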
LLM-based: let the model assess complexity directly
Prompt an AI model to rate complexity on a scale and explain its reasoning. The model considers context, ambiguity, and domain-knowledge requirements that rules might miss. This is more nuanced than fixed rules, but it needs clear criteria in the prompt.
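A sketch of the model-based version. complete() is a placeholder for whatever model client you use, and the rubric is one example of the clear criteria the prompt needs.

```python
import json

RUBRIC = """Rate the complexity of this request from 1 to 10.
1-3: single entity, clear ask, resolvable from a runbook.
4-6: multiple entities or references to prior interactions.
7-10: cross-department, ambiguous, or requiring a judgment call.
Reply as JSON: {"score": <int>, "reasoning": "<one sentence>"}

Request:
"""

def complete(prompt: str) -> str:
    """Placeholder: call your model provider here and return its text reply."""
    raise NotImplementedError

def llm_score(text: str) -> int:
    reply = json.loads(complete(RUBRIC + text))
    return int(reply["score"])
```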
Historical: learn from past resolution data
Analyze historical data: how long did similar requests take? How many interactions? What expertise was needed? New requests matching patterns of historically complex work inherit that complexity score.
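A sketch of the historical version, using token overlap as a stand-in for a real similarity measure (embeddings, in practice). The score it returns is predicted effort in minutes, which you would then bucket into tiers.

```python
from statistics import median

# Illustrative past tickets: (request text, actual resolution minutes).
HISTORY = [
    ("please reset my password", 2),
    ("refund for order #88 and order #91, billing disagrees with shipping", 190),
    ("same issue as my previous ticket, still unresolved", 75),
]

def similarity(a: str, b: str) -> float:
    """Crude token overlap; a real system would use embeddings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def historical_score(text: str, top_k: int = 3) -> float:
    """New requests inherit the typical effort of their closest past matches."""
    ranked = sorted(HISTORY, key=lambda h: similarity(text, h[0]), reverse=True)
    return median(minutes for _, minutes in ranked[:top_k])
```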
Your team receives 150 incoming requests daily. Without complexity scoring, a senior team member might spend 15 minutes on a password reset while a fraud case sits untouched. With complexity scoring, simple requests auto-resolve while complex cases route directly to specialists with the right context.
A long message explaining a password reset is still simple. A short message saying "same problem as last time" referencing months of history is complex. If your scoring counts words, the wordy password reset lands with senior staff while the terse repeat issue auto-resolves.
Instead: Score on structural complexity indicators, not surface features. Entity count, cross-references, and ambiguity matter more than word count.
Your scoring system routes requests. Six months later, you discover simple-scored items actually required 3 hours of work, and complex-scored items were resolved in 5 minutes. Nobody checked.
Instead: Track resolution time and outcome for each complexity tier. Regularly compare predicted complexity to actual effort. Retrain scoring when mismatches appear.
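One way to run that check, with illustrative effort bands per tier:

```python
# Illustrative expected-effort bands per tier, in minutes.
EXPECTED = {"automated": (0, 5), "junior": (5, 30), "senior": (30, 180)}

def mismatch_rate(log: list[tuple[str, float]]) -> float:
    """Fraction of requests whose actual effort fell outside the band
    their tier predicted; a rising rate means the scoring needs retraining."""
    missed = sum(1 for tier, minutes in log
                 if not EXPECTED[tier][0] <= minutes <= EXPECTED[tier][1])
    return missed / len(log) if log else 0.0

print(mismatch_rate([("automated", 2), ("automated", 180), ("senior", 5)]))  # about 0.67
```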
You created 10 complexity levels because more precision feels better. Your routing rules become impossible to maintain. Nobody agrees what differentiates level 4 from level 5.
Instead: Start with 3 tiers: simple (automate), moderate (standard handler), complex (specialist). Add granularity only when you have clear routing differences for each level.
You have learned how to measure task difficulty before it reaches a human. The next step is using that score to route work to the right handler automatically.