Relevance Thresholds

Your team built a knowledge base search. Someone asks a question.

The system returns 10 results. All of them are vaguely related.

The AI uses all 10 to craft an answer. The answer is confidently wrong.

The system found stuff. It just found the wrong stuff.

The problem is not the search. The problem is that no one told the system when to stop.

8 min read · Intermediate
Relevant If You're
Building internal knowledge search
Using AI to answer questions from documents
Filtering results before they reach the AI

INTERMEDIATE - Requires search results. Controls what reaches your AI.

Where This Sits

Category 2.3: Retrieval Architecture

Layer 2: Intelligence Infrastructure

Chunking Strategies · Citation & Source Tracking · Embedding Model Selection · Hybrid Search · Query Transformation · Relevance Thresholds · Reranking

Explore all of Layer 2
What It Is

A cutoff point that decides what gets used and what gets ignored

Every search returns results with similarity scores. A score of 0.92 means 'very similar.' A score of 0.47 means 'sort of related.' Without a threshold, you're feeding everything to the AI, including the garbage.

A relevance threshold is your quality gate. You decide: only results above 0.75 make the cut. Everything else gets filtered out before the AI sees it. The AI works with three highly relevant passages instead of ten mediocre ones.

Set the threshold too low and the AI hallucinates from bad context. Set it too high and the AI says "I don't know" when the answer exists. The art is finding the right cutoff for your use case.
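To make the gate concrete, here is a minimal sketch in Python. The function name, the 0.75 cutoff, and the hard-coded results are illustrative stand-ins for whatever your vector store actually returns as (passage, score) pairs.

```python
# A minimal sketch of a fixed relevance threshold as a quality gate.
# `search_results` stands in for whatever your vector store returns:
# (passage, similarity_score) pairs, highest score first.

MIN_SCORE = 0.75  # the quality gate; tune this against your own data

def filter_by_relevance(results, min_score=MIN_SCORE):
    """Keep only passages the search scored as genuinely similar."""
    return [(text, score) for text, score in results if score >= min_score]

search_results = [
    ("Enterprise customers receive a 90-day refund window...", 0.91),
    ("Standard refund policy allows 30 days for all customers...", 0.84),
    ("Remote work policies for all employees...", 0.35),
]

context = filter_by_relevance(search_results)
# Only the two refund passages reach the AI; the office policy manual never does.
```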

The Lego Block Principle

Every decision system needs a quality gate. Without a clear threshold, you process everything equally and get overwhelmed by noise. The pattern is universal: define "good enough" before you act.

The core pattern:

Set a measurable cutoff. Anything above the line gets processed. Anything below gets filtered or escalated. This prevents low-quality inputs from polluting downstream decisions.
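A rough sketch of the pattern itself, with every name illustrative rather than tied to any particular system:

```python
# A generic quality gate: score the item, compare it to a cutoff you
# defined in advance, then either process it or filter/escalate it.

def quality_gate(item, score_fn, cutoff, process, escalate):
    score = score_fn(item)
    if score >= cutoff:
        return process(item)        # above the line: act on it
    return escalate(item, score)    # below the line: filter out or hand to a human
```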

Where else this applies:

Hiring - Resume screening scores. Below 60%? Auto-reject. Above? Human review.
Data quality - If a record has less than 40% of required fields, route to cleanup queue.
Support routing - Confidence below 70%? Escalate to human. Above? Let automation handle it.
Reporting - Data freshness threshold. If older than 24 hours, show warning before including.
Worked Example: Filtering at a 0.70 Threshold

Query: "What is our refund policy for enterprise customers?"

The search returns eight results. With the threshold set at 0.70, three pass the quality gate and five are blocked. Of the three that pass, two are actually relevant; no relevant result is blocked, and five noise results never reach the AI's context.

Passed (sent to the AI):

  • Enterprise Customer Terms & Conditions (0.91): "Enterprise customers receive 90-day refund window with full credit..."
  • Refund Policy - General (0.84): "Standard refund policy allows 30 days for all customers..."
  • Enterprise Pricing Tiers (0.72): "Enterprise pricing starts at $10,000/year with volume discounts..."

Filtered (blocked):

  • Customer Support Guidelines (0.65): "All support tickets should reference the customer ID..."
  • Policy Update - March 2024 (0.58): "Updated data retention policies for compliance..."
  • Enterprise Onboarding Checklist (0.51): "New enterprise customers should complete these steps..."
  • Customer Feedback Summary (0.43): "Q4 customer satisfaction survey results..."
  • Office Policy Manual (0.35): "Remote work policies for all employees..."

Raise the threshold and relevant results start getting blocked; lower it and noise leaks through. The sweet spot depends on your data.
How It Works

Three approaches to setting the right cutoff

Fixed Threshold

One number for everything

You pick a number (e.g., 0.75) and apply it universally. Simple to implement, easy to understand. Works well when your queries are consistent and your embedding model is stable.

Pro: Simple, predictable, easy to debug
Con: One size rarely fits all query types

Dynamic Threshold

Adjust based on context

Different query types get different thresholds. Technical questions might need 0.85 (high precision). General inquiries might accept 0.65 (broader recall). The system learns what works for each category.

Pro: Better fits diverse query patterns
Con: More complex to tune and maintain
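A sketch of a dynamic threshold in practice, with made-up categories and cutoffs; in a real system you would derive both from logged queries.

```python
# Dynamic thresholds: each query category gets its own cutoff.
# The categories and numbers below are illustrative, not recommendations.

THRESHOLDS = {
    "technical": 0.85,  # high precision: a wrong API answer is costly
    "policy":    0.75,
    "general":   0.65,  # broader recall: loosely related context still helps
}

def filter_results(results, query_category):
    cutoff = THRESHOLDS.get(query_category, 0.75)  # fall back to a middle value
    return [(text, score) for text, score in results if score >= cutoff]
```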

Top-K with Minimum

Take the best N, but only if good enough

Return the top 5 results, but only if they exceed 0.6. This guarantees you never get more than 5 (context limits) and never get junk (quality floor). Common in production RAG systems.

Pro: Balances quantity and quality constraints
Con: Two parameters to tune instead of one
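A sketch of Top-K with a minimum, using the same numbers as above (top 5, floor of 0.6); both are parameters you would tune.

```python
# Top-K with a minimum: never more than k passages, never anything
# below the floor. k=5 and floor=0.6 mirror the example above.

def top_k_with_minimum(results, k=5, floor=0.6):
    ranked = sorted(results, key=lambda pair: pair[1], reverse=True)
    return [(text, score) for text, score in ranked[:k] if score >= floor]
```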
Connection Explorer

"What is our refund policy for enterprise customers?"

An employee asks your knowledge base. Without relevance thresholds, the search returns everything vaguely related to 'refund' or 'enterprise' or 'policy.' The AI weaves them into a confidently wrong answer. With thresholds, only the three most relevant passages get through. The AI gives the actual policy.

The pipeline: Vector DB → Embeddings → Hybrid Search → Relevance Threshold (you are here) → Reranking → AI Generation → Accurate Answer. The chain runs from foundation and data infrastructure through intelligence and understanding to the final outcome.

Upstream (Requires)

Embedding Generation · Hybrid Search · Vector Databases

Downstream (Enables)

Reranking · AI Generation (Text) · Context Window Management
Common Mistakes

What breaks when thresholds are wrong

Don't set the threshold based on intuition alone

You guessed 0.8 because it "sounds right." But your embedding model returns 0.6-0.7 for genuinely relevant content. Now your AI says "I don't know" to questions that have answers in your knowledge base.

Instead: Run test queries. See what scores your good matches actually get. Set threshold based on real data, not gut feeling.
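One way to do that, sketched below: take a handful of questions whose correct passages you already know, run them through your own search, and look at the scores those passages actually receive. `search` here is a placeholder for your retrieval call.

```python
# Threshold calibration sketch: measure what scores your known-good
# matches actually get before picking a cutoff. `search(query)` is a
# placeholder returning (passage_id, score) pairs.

test_cases = [
    ("What is our refund policy for enterprise customers?", "enterprise-terms"),
    ("How long is the standard refund window?", "refund-policy-general"),
]

def calibrate(search, test_cases):
    relevant_scores = []
    for query, expected_id in test_cases:
        for passage_id, score in search(query):
            if passage_id == expected_id:
                relevant_scores.append(score)
    print("Scores for known-relevant matches:", sorted(relevant_scores))
    # Set the threshold just below the lowest of these, not at a guess.
```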

Don't use the same threshold for all query types

A technical question about your API needs precision. A general question about company culture can be broader. Using 0.85 for everything means the culture question returns nothing.

Instead: Categorize your queries. Set different thresholds by category. Or use Top-K with minimum for more flexibility.

Don't ignore the 'no results' case

Everything fell below threshold. Now what? The system returns an empty context. The AI hallucinates an answer anyway because you didn't handle the edge case.

Instead: Detect when all results are below threshold. Return 'I don't have information on this' rather than letting the AI guess.
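A sketch of that guard, assuming `generate` stands in for your call to the model:

```python
# Handling the 'no results' case: if nothing clears the threshold,
# answer honestly instead of handing the model an empty context.

FALLBACK = "I don't have information on this in the knowledge base."

def answer(question, results, generate, min_score=0.75):
    context = [text for text, score in results if score >= min_score]
    if not context:
        return FALLBACK                     # nothing relevant: do not let the AI guess
    return generate(question, context)      # `generate` is a placeholder for your LLM call
```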

What's Next

Now that you understand relevance thresholds

You've learned how to filter search results before they reach your AI. The natural next step is understanding how to reorder the results that make it through.

Recommended Next

Reranking

How to reorder filtered results so the best ones come first
