You built a search bar for your knowledge base. Users type 'refund policy' and get nothing because the doc is titled 'Returns and Exchanges.'
They search 'how to cancel' and miss the 'Subscription Management' guide entirely.
Keyword matching fails because people don't use the same words your docs use.
Search should understand meaning, not just match words.
FOUNDATIONAL FOR AI: Every RAG system, semantic search feature, and recommendation engine depends on embeddings.
An embedding is a list of numbers (a vector) that represents the meaning of text. Similar meanings produce similar numbers. 'How do I cancel my subscription?' and 'I want to stop my membership' become nearly identical vectors, even though they share almost no words.
You send text to an embedding model. It returns a vector, typically 768 to 1536 numbers. These numbers place your text in a high-dimensional space where distance tracks semantic similarity: the closer two vectors are, the more similar the meanings.
This is what makes AI search actually work. Instead of matching keywords, you're matching meaning. A user searching for 'refund' finds your 'Returns and Exchanges' doc because the concepts are close in vector space.
Embeddings solve a universal problem: how do you represent complex, fuzzy concepts (like meaning) as precise, comparable numbers that computers can work with?
Transform unstructured data into a fixed-size numerical representation that preserves similarity relationships. Similar inputs map to nearby points. Different inputs map to distant points. Now you can measure, compare, and search.
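To make "measure and compare" concrete, here's a minimal sketch using cosine similarity, the most common comparison metric for embeddings. The vectors are toy 4-dimensional values for illustration; real embeddings have hundreds of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 = same direction (same meaning), near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional vectors (real embeddings have 768+ dimensions).
cancel_subscription = [0.8, 0.1, 0.6, 0.2]
stop_membership     = [0.7, 0.2, 0.6, 0.1]  # similar meaning -> nearby vector
pizza_recipe        = [0.1, 0.9, 0.0, 0.7]  # unrelated -> distant vector

print(cosine_similarity(cancel_subscription, stop_membership))  # high, ~0.99
print(cosine_similarity(cancel_subscription, pizza_recipe))     # low, ~0.26
```

The two phrasings about ending a subscription score near 1.0 despite sharing no words; the unrelated text scores far lower. That gap is what search ranks on.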
Try searching for "refund" when the doc is titled "Returns and Exchanges." Watch keyword search fail while embeddings succeed.
Click a query to see how keyword matching compares to embedding-based search.
No exact word matches found
The word "refund" doesn't appear in any documents
Only finds docs containing the exact search words
Our return window is 30 days from purchase. Items must be unused with original tags...
We accept all major credit cards, PayPal, and Apple Pay. For billing questions...
To modify or end your subscription, navigate to Account Settings > Billing...
Finds docs with similar meaning, even with different words
Send text to OpenAI, Cohere, or similar providers
The simplest path. You call an API endpoint with your text, get back a vector. OpenAI's text-embedding-3-small, Cohere's embed-v3, and Voyage AI are popular choices. No infrastructure to manage.
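As a sketch of what that API call looks like, here's a minimal Python example against OpenAI's embeddings endpoint using only the standard library. The endpoint path, request body, and response shape follow OpenAI's published API, but treat the details as illustrative and check the provider's docs.

```python
import json
import urllib.request

def build_payload(text: str, model: str = "text-embedding-3-small") -> dict:
    """Request body for OpenAI's /v1/embeddings endpoint."""
    return {"model": model, "input": text}

def embed(text: str, api_key: str) -> list[float]:
    """Send text, get back one vector (1536 floats for text-embedding-3-small)."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/embeddings",
        data=json.dumps(build_payload(text)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["data"][0]["embedding"]

# vector = embed("How do I cancel my subscription?", api_key="sk-...")  # needs a real key
```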
Run models like BGE, E5, or GTE on your own hardware
Download an open-source model and run it locally. Models like BGE-large, E5-mistral, and GTE-large rival commercial options. You control the infrastructure and your data never leaves.
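A self-hosted version might look like the following sketch using the sentence-transformers library with the BGE model family mentioned above. The model name is one option among many, and the import is deferred so the snippet loads even without the package installed.

```python
def embed_locally(texts: list[str]) -> list[list[float]]:
    """Embed texts with a local open-source model; nothing leaves your machine."""
    # Deferred import: requires `pip install sentence-transformers`.
    from sentence_transformers import SentenceTransformer
    # BGE-small downloads once, then runs fully offline.
    model = SentenceTransformer("BAAI/bge-small-en-v1.5")
    # normalize_embeddings=True makes cosine similarity a simple dot product.
    return model.encode(texts, normalize_embeddings=True).tolist()

# vectors = embed_locally(["refund policy", "Returns and Exchanges"])  # downloads model on first run
```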
Train on your specific domain for better results
Start with a base model and fine-tune on your data. If your domain has specialized vocabulary (legal, medical, technical), fine-tuning teaches the model what 'material adverse change' means in your context.
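Fine-tuning data is typically contrastive: an anchor, text that should embed nearby (positive), and text that should embed far away (negative). A hypothetical legal-domain example follows; the field names are illustrative, not any library's required schema.

```python
# Hypothetical contrastive training triplets for a legal-domain fine-tune.
training_data = [
    {
        "anchor": "material adverse change",
        "positive": "a significant event that substantially reduces the company's value",
        "negative": "a routine extension of the filing deadline",
    },
    {
        "anchor": "indemnification obligations",
        "positive": "the seller's duty to cover the buyer's losses from breached warranties",
        "negative": "the schedule of quarterly board meetings",
    },
]

# During training, the model pulls each anchor/positive pair together
# in vector space and pushes each anchor/negative pair apart.
```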
Your support lead needs context before a call. They search 'integration issues' and find tickets mentioning 'API errors,' 'sync failures,' and 'connection problems' because embeddings understand these mean similar things. In 2 seconds, not 20 minutes of keyword guessing.
Hover or tap any component to see what it does and why it's needed.
Animated lines show direct connections · Hover or tap for details · Click to learn more
You start with OpenAI's text-embedding-ada-002, then switch to text-embedding-3-small for cost savings. But vectors from different models aren't comparable: each model defines its own vector space. Your search breaks because you're measuring distance between apples and oranges.
Instead: Pick one model and stick with it. If you must switch, re-embed everything.
You embed entire 10-page documents because 'more context is better.' But embedding models average meaning across the whole input. A doc about both refunds AND shipping becomes mediocre at matching either topic.
Instead: Chunk documents into focused pieces (200-500 tokens). Each chunk should be about one thing.
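A minimal chunker might look like this sketch. It approximates tokens as whitespace-separated words; production code would count real tokens with the embedding model's tokenizer.

```python
def chunk_text(text: str, max_tokens: int = 300, overlap: int = 30) -> list[str]:
    """Split text into overlapping chunks of at most max_tokens 'tokens'.

    'Token' is approximated here as a whitespace-separated word; use the
    model's actual tokenizer in production. The overlap keeps a sentence
    that straddles a boundary findable from either chunk.
    """
    words = text.split()
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks
```

Each chunk gets its own embedding, so a chunk about refunds stays sharply about refunds instead of being blurred together with shipping policy.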
Your search returns the top 5 results no matter what. User searches for 'quantum physics' in your HR knowledge base. They get 5 results because you asked for 5, even though none are relevant.
Instead: Set a minimum similarity score (e.g., 0.7). Return nothing rather than garbage.
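Putting that floor into code, a search helper can filter by score before ranking. The 0.7 floor matches the example above; the vectors here are toy values, and real scores depend on the embedding model.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def search(query_vec, doc_vecs, min_score=0.7, top_k=5):
    """Return up to top_k (score, doc_id) pairs above the similarity floor,
    best first. An empty list beats five irrelevant results."""
    scored = [(cosine_similarity(query_vec, vec), doc_id)
              for doc_id, vec in doc_vecs.items()]
    passing = [pair for pair in scored if pair[0] >= min_score]
    passing.sort(reverse=True)
    return passing[:top_k]

# An off-topic query returns nothing instead of garbage:
print(search([1.0, 0.0], {"hr-handbook": [0.0, 1.0]}))  # []
```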
You've learned how text becomes searchable vectors. The natural next step is understanding where those vectors live and how you retrieve them at scale.