
How to Add AI Search to Your Website (2026)

Traditional search fails when users don't use exact keywords. AI search understands meaning — "affordable laptops for students" finds results about "budget notebooks for college." Here's how to add it to your site.

What You'll Build

Before: User searches "fix deployment error" → no results (your docs say "troubleshoot build failure")
After:  User searches "fix deployment error" → finds "Troubleshooting Build Failures" ✅

The Architecture

1. Your content → split into chunks
2. Chunks → converted to embeddings (vectors)
3. Vectors → stored in a vector database
4. User query → converted to embedding
5. Query embedding → compared to stored vectors
6. Most similar chunks → returned as results
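In code, steps 4–6 boil down to ranking stored vectors by cosine similarity against the query vector. A minimal self-contained sketch (toy 3-dimensional vectors stand in for real embeddings, which have hundreds or thousands of dimensions):

```typescript
// Toy illustration of steps 4-6: rank stored chunks by cosine similarity.
type Chunk = { url: string; embedding: number[] }

// Cosine similarity: dot product of the vectors divided by the
// product of their magnitudes. 1 = identical direction, 0 = unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Return the k chunks most similar to the query embedding.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k)
}
```

A vector database does exactly this, but with approximate-nearest-neighbor indexes so it stays fast at millions of vectors.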

Option 1: Fastest Setup (Orama — 10 minutes)

Orama is a full-text + vector search engine that runs anywhere:

npm install @orama/orama @orama/plugin-embeddings

import { create, insert, search } from '@orama/orama'
import { pluginEmbeddings } from '@orama/plugin-embeddings'

// Create index with AI embeddings
const db = await create({
  schema: {
    title: 'string',
    content: 'string',
    url: 'string',
  },
  plugins: [pluginEmbeddings({
    embeddings: {
      model: 'orama/gte-small', // built-in model
      documentFields: ['title', 'content'],
    }
  })],
})

// Index your content
await insert(db, {
  title: 'Troubleshooting Build Failures',
  content: 'When your deployment fails, check the build logs first...',
  url: '/docs/troubleshooting',
})

// AI search — understands meaning, not just keywords
const results = await search(db, {
  term: 'fix deployment error',
  mode: 'vector', // semantic search
  limit: 5,
})

Pros: No external API needed, runs client-side or server-side, free. Cons: Smaller model, less accurate than OpenAI embeddings for complex queries.

Option 2: Best Quality (OpenAI Embeddings + Supabase)

For production-quality semantic search:

Step 1: Set Up Supabase Vector Store

-- Enable the vector extension
create extension if not exists vector;

-- Create the content table
create table documents (
  id bigserial primary key,
  title text,
  content text,
  url text,
  embedding vector(1536)  -- OpenAI embedding dimension
);

-- Create an index for fast similarity search
create index on documents using ivfflat (embedding vector_cosine_ops)
  with (lists = 100);

-- Search function
create or replace function search_documents(
  query_embedding vector(1536),
  match_count int default 5,
  match_threshold float default 0.7
)
returns table (
  id bigint,
  title text,
  content text,
  url text,
  similarity float
)
language plpgsql
as $$
begin
  return query
  select
    documents.id,
    documents.title,
    documents.content,
    documents.url,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where 1 - (documents.embedding <=> query_embedding) > match_threshold
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;

Step 2: Index Your Content

import OpenAI from 'openai'
import { createClient } from '@supabase/supabase-js'

const openai = new OpenAI()
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!)

async function indexContent(pages: Array<{ title: string; content: string; url: string }>) {
  for (const page of pages) {
    // Generate embedding
    const response = await openai.embeddings.create({
      model: 'text-embedding-3-small',
      input: `${page.title}\n\n${page.content}`,
    })
    
    const embedding = response.data[0].embedding

    // Store in Supabase (surface insert failures instead of silently dropping them)
    const { error } = await supabase.from('documents').insert({
      title: page.title,
      content: page.content,
      url: page.url,
      embedding,
    })
    if (error) throw error
  }
}
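indexContent above embeds each page as one blob; for long pages you would normally split the text into chunks first (step 1 of the architecture), so each embedding covers a focused passage. A minimal word-window chunker with overlap — a sketch, not a production text splitter:

```typescript
// Split text into overlapping word-window chunks. Overlap keeps context
// that would otherwise be cut at a chunk boundary.
function chunkText(text: string, chunkSize = 200, overlap = 40): string[] {
  if (overlap >= chunkSize) throw new RangeError('overlap must be smaller than chunkSize')
  const words = text.split(/\s+/).filter(Boolean)
  const chunks: string[] = []
  for (let start = 0; start < words.length; start += chunkSize - overlap) {
    chunks.push(words.slice(start, start + chunkSize).join(' '))
    if (start + chunkSize >= words.length) break
  }
  return chunks
}
```

Then index each chunk as its own row (reusing the page's title and url), and search returns the specific passage instead of the whole page.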

Step 3: Build the Search API

// app/api/search/route.ts (Next.js)
import { NextRequest, NextResponse } from 'next/server'
import OpenAI from 'openai'
import { createClient } from '@supabase/supabase-js'

const openai = new OpenAI()
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!)

export async function POST(req: NextRequest) {
  const { query } = await req.json()
  if (!query) {
    return NextResponse.json({ error: 'Missing query' }, { status: 400 })
  }

  // Convert query to embedding
  const embeddingResponse = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: query,
  })
  const queryEmbedding = embeddingResponse.data[0].embedding

  // Search Supabase
  const { data: results, error } = await supabase.rpc('search_documents', {
    query_embedding: queryEmbedding,
    match_count: 5,
    match_threshold: 0.7,
  })
  if (error) {
    return NextResponse.json({ error: error.message }, { status: 500 })
  }

  return NextResponse.json({ results })
}

Step 4: Build the Search UI

'use client'
import { useState } from 'react'

export function SearchBar() {
  const [query, setQuery] = useState('')
  const [results, setResults] = useState([])
  const [loading, setLoading] = useState(false)

  async function handleSearch(e: React.FormEvent) {
    e.preventDefault()
    setLoading(true)

    try {
      const res = await fetch('/api/search', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query }),
      })
      const data = await res.json()
      setResults(data.results ?? [])
    } finally {
      // Always clear the spinner, even if the request fails
      setLoading(false)
    }
  }

  return (
    <div>
      <form onSubmit={handleSearch}>
        <input
          type="text"
          value={query}
          onChange={(e) => setQuery(e.target.value)}
          placeholder="Search with natural language..."
          className="w-full px-4 py-2 border rounded-lg"
        />
      </form>

      {loading && <p>Searching...</p>}

      <ul className="mt-4 space-y-3">
        {results.map((result: any) => (
          <li key={result.id}>
            <a href={result.url} className="block p-3 rounded-lg hover:bg-gray-50">
              <h3 className="font-semibold">{result.title}</h3>
              <p className="text-sm text-gray-600 mt-1">
                {result.content.slice(0, 150)}...
              </p>
              <span className="text-xs text-gray-400">
                {(result.similarity * 100).toFixed(0)}% match
              </span>
            </a>
          </li>
        ))}
      </ul>
    </div>
  )
}

Option 3: Hybrid Search (Best Results)

Combine keyword search with vector search for best results:

// semanticSearch, fullTextSearch, mergeResults, and rerankWithLLM are
// your own helpers — this sketch shows the shape of the pipeline
async function hybridSearch(query: string) {
  // Run both searches in parallel
  const [vectorResults, keywordResults] = await Promise.all([
    semanticSearch(query),   // Vector similarity
    fullTextSearch(query),    // Traditional keyword search
  ])

  // Merge and re-rank
  const merged = mergeResults(vectorResults, keywordResults)
  
  // Optional: Re-rank with an LLM for best quality
  const reranked = await rerankWithLLM(query, merged)
  
  return reranked
}
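mergeResults is left abstract above; a common concrete choice is Reciprocal Rank Fusion (RRF), which scores each document by its rank in every result list — no score normalization between the keyword and vector sides needed. A sketch, assuming each result carries a stable numeric id:

```typescript
// Reciprocal Rank Fusion: score = sum over lists of 1 / (k + rank).
// k = 60 is the damping constant conventionally used with RRF.
type Ranked = { id: number }

function rrfMerge<T extends Ranked>(lists: T[][], k = 60): T[] {
  const scores = new Map<number, { item: T; score: number }>()
  for (const list of lists) {
    list.forEach((item, rank) => {
      const entry = scores.get(item.id) ?? { item, score: 0 }
      entry.score += 1 / (k + rank + 1) // rank is 0-based
      scores.set(item.id, entry)
    })
  }
  return [...scores.values()]
    .sort((a, b) => b.score - a.score)
    .map((e) => e.item)
}
```

A document that appears in both lists gets two score contributions, so agreement between keyword and semantic search naturally floats to the top.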

Keeping the Index Updated

// Re-index when content changes
// Option 1: Webhook on CMS publish
// Option 2: Cron job (daily re-index)
// Option 3: On-demand via admin API

async function reindexAll() {
  const pages = await getAllPages() // From your CMS/database
  
  // Clear old embeddings (note: search returns nothing until re-indexing
  // finishes — for zero downtime, index into a new table and swap)
  await supabase.from('documents').delete().neq('id', 0)
  
  // Batch index (respect rate limits)
  const BATCH_SIZE = 50
  for (let i = 0; i < pages.length; i += BATCH_SIZE) {
    const batch = pages.slice(i, i + BATCH_SIZE)
    await indexContent(batch)
    console.log(`Indexed ${i + batch.length}/${pages.length}`)
  }
}
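"Respect rate limits" in the batch loop usually means backing off when the embeddings API returns a 429. The standard schedule is exponential backoff with a cap; the delays are simple to compute (a sketch — the base and cap values are arbitrary choices, not OpenAI requirements):

```typescript
// Exponential backoff schedule: baseMs, 2*baseMs, 4*baseMs, ... capped at maxMs.
function backoffDelays(retries: number, baseMs = 500, maxMs = 8000): number[] {
  return Array.from({ length: retries }, (_, i) =>
    Math.min(baseMs * 2 ** i, maxMs)
  )
}
```

Wrap each embeddings call in a retry loop that sleeps for these delays in turn before giving up.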

Cost Estimation

Small site (100 pages):
  Indexing: ~$0.01 (one-time)
  Queries: ~$0.0001 per search
  Supabase: Free tier
  Monthly cost: ~$1-5

Medium site (1,000 pages):
  Indexing: ~$0.10 (one-time)
  Queries: ~$0.0001 per search × 10K searches = $1
  Supabase: Free tier
  Monthly cost: ~$5-15

Large site (10,000+ pages):
  Indexing: ~$1.00 (one-time)
  Supabase: $25/mo (Pro)
  Monthly cost: ~$30-50
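These indexing figures follow directly from per-token pricing. A back-of-the-envelope estimator — the price constant is text-embedding-3-small's published rate ($0.02 per 1M tokens) at the time of writing, and tokens-per-page is an assumption you should replace with your own average:

```typescript
// Rough one-time indexing cost: pages × tokens-per-page × price-per-token.
const PRICE_PER_MILLION_TOKENS = 0.02 // text-embedding-3-small, USD

function indexingCostUSD(pages: number, tokensPerPage = 1000): number {
  return (pages * tokensPerPage * PRICE_PER_MILLION_TOKENS) / 1_000_000
}
```

Query cost works the same way, but each query is only a few dozen tokens, which is why per-search cost rounds to fractions of a cent.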

FAQ

Is AI search better than Algolia?

For understanding natural language queries, yes. For exact keyword matching and typo tolerance, Algolia is still excellent. The best approach combines both (hybrid search).

Can I run this without OpenAI?

Yes. Use open-source embedding models (Hugging Face, Ollama) instead. Quality is slightly lower but there's no API cost.

How many documents can I search?

Supabase with pgvector handles millions of documents. For smaller sites, Orama works entirely in-memory.

Does this work with any CMS?

Yes. Index your content during build time (static sites) or via webhooks (headless CMS). The search layer is independent of your content source.

Bottom Line

For quick setup: Orama (10 minutes, no API key needed). For best quality: OpenAI embeddings + Supabase (30 minutes, pennies per month). For best results: hybrid search combining semantic and keyword matching.

AI search is no longer a luxury feature — it's expected. And it's surprisingly cheap and easy to implement.
