PostgreSQL ships a full-text search engine. For most content-heavy and CRUD applications, it is the right starting point: no extra service to run, no index to keep in sync, and search results are transactionally consistent with your writes. Start here, and graduate to a dedicated search engine only when you hit a specific limitation that PostgreSQL cannot address.
This section covers PostgreSQL full-text search and trigram matching, the SQLx patterns for using them from Rust, building a search UI with HTMX and Maud, and when and how to move to Meilisearch.
PostgreSQL full-text search
PostgreSQL full-text search works by converting text into tsvector (a sorted list of normalised lexemes) and matching it against a tsquery (a search predicate). The engine handles stemming, stop-word removal, and ranking.
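A quick psql session makes the normalisation concrete (the example strings are illustrative):

```sql
-- Stop words ("the") are dropped, words are stemmed, positions are kept:
SELECT to_tsvector('english', 'The quick brown foxes jumped');
-- 'brown':3 'fox':4 'jump':5 'quick':2

-- Queries are normalised the same way, so "jumping" matches "jumped":
SELECT to_tsvector('english', 'The quick brown foxes jumped')
       @@ to_tsquery('english', 'jumping & fox');
-- true
```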
Schema setup
Add a tsvector column to your table using GENERATED ALWAYS AS ... STORED. PostgreSQL maintains it automatically on every insert and update.
CREATE TABLE articles (
id BIGSERIAL PRIMARY KEY,
title TEXT NOT NULL,
body TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
search_vector tsvector
GENERATED ALWAYS AS (
setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
setweight(to_tsvector('english', coalesce(body, '')), 'B')
) STORED
);
setweight assigns a weight label (A, B, C, or D) to every lexeme it produces. When you use the ranking functions, title matches (weighted A) rank higher than body matches (weighted B).
Create a GIN index on the column:
CREATE INDEX idx_articles_search ON articles USING GIN (search_vector);
Without this index, every search query scans the full table and recomputes tsvectors. With it, PostgreSQL uses an inverted index to look up only rows containing the matching lexemes.
Building search queries
websearch_to_tsquery is the best choice for user-facing search. It accepts Google-like syntax (quoted phrases, - for exclusion, OR), and it never raises a syntax error on malformed input.
-- "rust web" becomes: 'rust' & 'web'
-- "rust framework" -django becomes: 'rust' & 'framework' & !'django'
-- "full text" OR search becomes: 'full' <-> 'text' | 'search'
SELECT websearch_to_tsquery('english', 'rust web framework');
The @@ operator matches a tsvector against a tsquery:
SELECT id, title
FROM articles
WHERE search_vector @@ websearch_to_tsquery('english', 'rust web framework')
ORDER BY ts_rank_cd(search_vector, websearch_to_tsquery('english', 'rust web framework')) DESC
LIMIT 20;
Other tsquery constructors exist for specific needs:
| Function | Behaviour |
|---|---|
| websearch_to_tsquery | Google-like syntax, never errors. Best for user input. |
| plainto_tsquery | Inserts & (AND) between all words. No special syntax. |
| phraseto_tsquery | Inserts <-> (adjacent) between words. For exact phrase matching. |
| to_tsquery | Requires explicit operators (&, \|, !, <->). For programmatic query building. |
Ranking results
ts_rank_cd uses cover density ranking, which rewards documents where matching terms appear close together. It generally produces better results than ts_rank for multi-term queries.
SELECT id, title,
ts_rank_cd(search_vector, query) AS rank
FROM articles, websearch_to_tsquery('english', 'rust web') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 20;
The weights array controls how much each label contributes to the rank. The default is {0.1, 0.2, 0.4, 1.0} for D, C, B, A respectively. Override it when you need different weighting:
ts_rank_cd('{0.1, 0.2, 0.4, 1.0}', search_vector, query)
Highlighting search results
ts_headline generates a text snippet with matching terms wrapped in markers:
SELECT id, title,
ts_headline('english', body, query,
'StartSel=<mark>, StopSel=</mark>, MaxWords=35, MinWords=15, MaxFragments=2')
AS snippet
FROM articles, websearch_to_tsquery('english', 'rust web') AS query
WHERE search_vector @@ query
ORDER BY ts_rank_cd(search_vector, query) DESC
LIMIT 20;
ts_headline is expensive. It re-parses the original text for every row. Always apply it only to rows that have already been filtered and limited.
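One way to guarantee this is to rank and limit in a subquery, then apply ts_headline only to the surviving rows. A sketch of the earlier query restructured this way:

```sql
SELECT id, title,
       ts_headline('english', body, query,
           'StartSel=<mark>, StopSel=</mark>, MaxWords=35, MinWords=15') AS snippet
FROM (
    SELECT id, title, body, query
    FROM articles, websearch_to_tsquery('english', 'rust web') AS query
    WHERE search_vector @@ query
    ORDER BY ts_rank_cd(search_vector, query) DESC
    LIMIT 20
) AS best_matches;
```

The inner query does the cheap index-backed filtering and ranking; ts_headline then re-parses at most 20 bodies instead of every match.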
Search queries with SQLx
SQLx does not have native Rust types for tsvector or tsquery. This is not a problem in practice: keep the FTS logic in SQL, bind the search term as a String, and return only types SQLx understands.
ts_rank and ts_rank_cd return float4 (maps to f32). ts_headline returns text (maps to String). The @@ operator returns bool. All work directly with SQLx’s compile-time checked macros.
struct SearchResult {
id: i64,
title: String,
snippet: String,
rank: f32,
}
pub async fn search_articles(
pool: &PgPool,
query: &str,
limit: i64,
) -> Result<Vec<SearchResult>, sqlx::Error> {
sqlx::query_as!(
SearchResult,
r#"
SELECT
id,
title,
ts_headline('english', body, websearch_to_tsquery('english', $1),
'StartSel=<mark>, StopSel=</mark>, MaxWords=35, MinWords=15')
as "snippet!",
ts_rank_cd(search_vector, websearch_to_tsquery('english', $1))
as "rank!"
FROM articles
WHERE search_vector @@ websearch_to_tsquery('english', $1)
ORDER BY "rank!" DESC
LIMIT $2
"#,
query,
limit
)
.fetch_all(pool)
.await
}
The "snippet!" and "rank!" column aliases force SQLx to treat these as non-nullable. Without the ! suffix, the macro infers Option<String> and Option<f32> for computed columns, even though these functions never return NULL for non-null inputs.
Do not use SELECT * on tables with tsvector columns. The query! and query_as! macros will fail at compile time because SQLx has no Rust type for tsvector. Always list your columns explicitly, omitting the tsvector column or casting it with ::text if you genuinely need its contents.
pg_trgm for fuzzy matching
PostgreSQL full-text search is lexeme-exact after normalisation. If a user types “postgre” instead of “postgresql”, FTS will not match. The pg_trgm extension fills this gap with trigram-based similarity matching, providing typo tolerance that FTS lacks.
Enable the extension
Add a migration:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
pg_trgm is a contrib extension shipped with PostgreSQL but not enabled by default. The compile-time query! macros connect to your development database, so the extension must be installed there too.
Similarity search
A trigram is a sequence of three consecutive characters. Two strings are similar if they share many trigrams. The similarity function returns a score between 0.0 and 1.0:
SELECT similarity('postgresql', 'postgre');
-- Result: ~0.58
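To build intuition for these scores, here is a stdlib-only Rust sketch of how pg_trgm computes similarity. The function names are mine, not part of any crate, and the sketch pads the whole string rather than each word, so it matches PostgreSQL's result only for single-word inputs:

```rust
use std::collections::HashSet;

/// pg_trgm-style trigram extraction: lowercase, pad with two leading
/// spaces and one trailing space, then take every 3-character window.
/// (Real pg_trgm pads each word separately; this simplified version
/// pads the whole string, so multi-word inputs will differ.)
fn trigrams(s: &str) -> HashSet<String> {
    let padded: Vec<char> = format!("  {} ", s.to_lowercase()).chars().collect();
    padded.windows(3).map(|w| w.iter().collect()).collect()
}

/// similarity() is the Jaccard coefficient over the trigram sets:
/// shared trigrams divided by the size of the union.
fn similarity(a: &str, b: &str) -> f64 {
    let (ta, tb) = (trigrams(a), trigrams(b));
    let shared = ta.intersection(&tb).count() as f64;
    let union = ta.union(&tb).count() as f64;
    if union == 0.0 { 0.0 } else { shared / union }
}

fn main() {
    // 'postgresql' (11 trigrams) and 'postgre' (8) share 7; union is 12.
    println!("{:.4}", similarity("postgresql", "postgre")); // 0.5833
}
```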
The % operator returns true when similarity exceeds a threshold (default 0.3, configurable with SET pg_trgm.similarity_threshold):
SELECT title, similarity(title, 'postgre') AS sml
FROM articles
WHERE title % 'postgre'
ORDER BY sml DESC
LIMIT 10;
word_similarity compares a search term against substrings of a longer text. It is better suited when searching for a word within a title or sentence:
SELECT title, word_similarity('serch', title) AS sml
FROM articles
WHERE 'serch' <% title
ORDER BY sml DESC
LIMIT 10;
Indexing for trigram queries
Create a GIN index with the gin_trgm_ops operator class:
CREATE INDEX idx_articles_title_trgm ON articles USING GIN (title gin_trgm_ops);
This index supports the % operator, LIKE, ILIKE, and regex patterns. Without it, every trigram query requires a sequential scan.
If you need KNN (K-Nearest Neighbour) ordering with the <-> distance operator, use a GiST index instead:
CREATE INDEX idx_articles_title_trgm_gist ON articles USING GiST (title gist_trgm_ops);
GiST supports ORDER BY title <-> 'search term' directly, which GIN does not.
Trigram queries with SQLx
similarity and word_similarity take text inputs and return real (maps to f32). No casting workarounds needed.
struct FuzzyResult {
id: i64,
title: String,
similarity: f32,
}
pub async fn fuzzy_search(
pool: &PgPool,
query: &str,
limit: i64,
) -> Result<Vec<FuzzyResult>, sqlx::Error> {
sqlx::query_as!(
FuzzyResult,
r#"
SELECT
id,
title,
similarity(title, $1) as "similarity!"
FROM articles
WHERE title % $1
ORDER BY "similarity!" DESC
LIMIT $2
"#,
query,
limit
)
.fetch_all(pool)
.await
}
Combining FTS and trigram search
A practical search function tries full-text search first for precise, ranked results, then falls back to trigram matching for typo tolerance:
pub async fn search(
pool: &PgPool,
query: &str,
limit: i64,
) -> Result<Vec<SearchResult>, sqlx::Error> {
let results = search_articles(pool, query, limit).await?;
if results.is_empty() {
// Fall back to fuzzy matching on title
return sqlx::query_as!(
SearchResult,
r#"
SELECT
id,
title,
'' as "snippet!",
similarity(title, $1) as "rank!"
FROM articles
WHERE title % $1
ORDER BY "rank!" DESC
LIMIT $2
"#,
query,
limit
)
.fetch_all(pool)
.await;
}
Ok(results)
}
You can also combine both in a single query with a weighted score, but the fallback pattern is simpler to reason about and avoids the cost of trigram comparison on every row when FTS already produces good results.
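For reference, one possible shape of that combined query, with an arbitrary 0.5 weighting on the trigram score that you would need to tune for your data:

```sql
SELECT id, title,
       ts_rank_cd(search_vector, query)
           + 0.5 * similarity(title, 'rust web') AS score
FROM articles, websearch_to_tsquery('english', 'rust web') AS query
WHERE search_vector @@ query OR title % 'rust web'
ORDER BY score DESC
LIMIT 20;
```

Note that the `OR title %` branch forces a trigram comparison for rows the FTS index alone would have skipped, which is exactly the cost the fallback pattern avoids.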
Search UI with HTMX
A search interface needs a text input that sends queries as the user types, a target element where results appear, and debouncing to avoid flooding the server with requests on every keystroke. HTMX handles all of this declaratively.
The search input
fn search_input(query: &str) -> Markup {
html! {
input type="search" name="q" value=(query)
placeholder="Search articles..."
hx-get="/search"
hx-trigger="input changed delay:300ms, keyup[key=='Enter'], search"
hx-target="#search-results"
hx-sync="this:replace"
hx-replace-url="true"
hx-indicator="#search-spinner";
span #search-spinner .htmx-indicator { "Searching..." }
}
}
The trigger configuration:
- input changed delay:300ms debounces: fires 300ms after the user stops typing, and only if the value actually changed.
- keyup[key=='Enter'] fires immediately on Enter.
- search fires when the user clicks the browser’s native clear button on <input type="search">.
hx-sync="this:replace" cancels any in-flight request and replaces it with the new one. Without this, a slow response for “ab” could arrive after a fast response for “abc” and overwrite the correct results with stale ones.
hx-replace-url="true" updates the browser URL bar to /search?q=... without creating a history entry for every keystroke. The user can copy, bookmark, or share the URL.
The results fragment
fn search_results(results: &[SearchResult]) -> Markup {
html! {
@if results.is_empty() {
p .no-results { "No articles found." }
} @else {
@for result in results {
article .search-result {
h3 {
a href={ "/articles/" (result.id) } { (result.title) }
}
p .snippet { (PreEscaped(&result.snippet)) }
}
}
}
}
}
Use PreEscaped for the snippet because ts_headline returns HTML with <mark> tags. Be aware that ts_headline does not escape HTML already present in the source text, so this is safe only because article bodies come from your own database, not from user input. If users can submit body content, strip or escape HTML before storing it.
The Axum handler
The handler serves both full page loads (direct navigation to /search?q=rust) and HTMX fragment requests (triggered by typing in the input). Detect the difference with the HX-Request header.
use axum::extract::{Query, State};
use axum::http::HeaderMap;
use axum::response::Html;
use maud::{html, Markup, PreEscaped};
#[derive(serde::Deserialize)]
pub struct SearchParams {
#[serde(default)]
q: String,
}
pub async fn search_handler(
headers: HeaderMap,
State(state): State<AppState>,
Query(params): Query<SearchParams>,
) -> Markup {
let results = if params.q.is_empty() {
vec![]
} else {
search(&state.db, &params.q, 20)
.await
.unwrap_or_default()
};
let fragment = search_results(&results);
if headers.get("HX-Request").is_some() {
fragment
} else {
search_page(&params.q, fragment)
}
}
fn search_page(query: &str, results: Markup) -> Markup {
html! {
h1 { "Search" }
(search_input(query))
div #search-results {
(results)
}
}
}
When HTMX sends a request, the handler returns only the results fragment. When the user navigates directly to /search?q=rust, it returns the full page with the search input pre-populated and results already rendered. This makes search URLs bookmarkable and shareable.
Route setup
use axum::{routing::get, Router};
let app = Router::new()
.route("/search", get(search_handler))
.with_state(state);
When PostgreSQL search is not enough
PostgreSQL FTS handles most search requirements for content-heavy and CRUD applications. Recognise these limits so you know when to reach for a dedicated engine:
- No built-in typo tolerance. pg_trgm helps, but it works on string similarity, not search-query-level fuzzy matching. A dedicated engine like Meilisearch handles typos automatically across all indexed fields.
- No faceted search. Counting results by category, tag, or date range alongside search results requires separate GROUP BY queries. Dedicated engines provide facets as a first-class feature.
- Limited relevance tuning. ts_rank and ts_rank_cd are basic. There is no equivalent to Elasticsearch’s function scoring, decay functions, or field-level boosting beyond four weight levels (A/B/C/D).
- Performance at scale. PostgreSQL FTS works well into the millions of rows for straightforward queries. Beyond that, GIN indexes become large and slow to update, and ts_headline is CPU-intensive.
- No instant prefix matching. FTS matches complete lexemes: searching for “rus” will not match “rust” unless you append the explicit prefix operator (rus:*), which websearch_to_tsquery does not generate. Dedicated engines handle prefix matching out of the box.
- No semantic matching. FTS matches words, not meaning. “How to fix a flat tire” will not find documents about “tire puncture repair”. For meaning-based retrieval, see Semantic Search.
If your application hits one or more of these limits and search is a primary user-facing feature, add Meilisearch or pgvector depending on what you need.
Meilisearch
Meilisearch is a search engine built in Rust with built-in typo tolerance, instant search, and faceted filtering. It runs as a separate service, providing a RESTful API that your application talks to via the Rust SDK.
Running Meilisearch in development
Add it to your Docker Compose file:
services:
meilisearch:
image: getmeili/meilisearch:v1.12
ports:
- "7700:7700"
environment:
MEILI_ENV: development
MEILI_MASTER_KEY: devMasterKey123
volumes:
- meili_data:/meili_data
volumes:
meili_data:
In development mode, Meilisearch exposes a web-based search preview UI at http://localhost:7700.
Rust SDK
Add the dependency:
[dependencies]
meilisearch-sdk = "0.28"
Index documents and search:
use meilisearch_sdk::client::Client;
#[derive(serde::Serialize, serde::Deserialize, Debug)]
struct Article {
id: i64,
title: String,
body: String,
}
// Create client
let client = Client::new("http://localhost:7700", Some("devMasterKey123"))?;
// Index documents
let articles: Vec<Article> = fetch_all_articles(&pool).await?;
client.index("articles")
.add_documents(&articles, Some("id"))
.await?;
// Search (typo-tolerant by default: "rrust" finds "rust")
let results = client.index("articles")
.search()
.with_query("rrust web framwork")
.with_limit(20)
.execute::<Article>()
.await?;
Keeping the index in sync
PostgreSQL remains the source of truth. Meilisearch is a derived, read-optimised search layer. The simplest sync strategy is application-level dual write with a periodic full resync as a safety net.
Dual write: when your application inserts or updates an article in PostgreSQL, also push the document to Meilisearch:
pub async fn create_article(
pool: &PgPool,
meili: &Client,
title: &str,
body: &str,
) -> Result<Article, AppError> {
let article = sqlx::query_as!(
Article,
"INSERT INTO articles (title, body) VALUES ($1, $2) RETURNING id, title, body",
title, body
)
.fetch_one(pool)
.await?;
meili.index("articles")
.add_documents(&[&article], Some("id"))
.await?;
Ok(article)
}
Periodic resync: a background task queries PostgreSQL for rows modified since the last sync (using an updated_at column) and pushes them to Meilisearch. Run this every 30-60 seconds. It catches any drift caused by failed dual writes.
If the Meilisearch write fails, the search index is temporarily stale but the database is correct. Design your application to tolerate this eventual consistency.
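A sketch of such a resync task, assuming an updated_at TIMESTAMPTZ column on articles, the Article struct from earlier (which must derive serde::Serialize), and the chrono and tracing crates. The 60-second interval and the in-memory watermark are illustrative choices, not requirements:

```rust
use std::time::Duration;

use chrono::{DateTime, Utc};
use meilisearch_sdk::client::Client;
use sqlx::PgPool;

/// Periodically push rows modified since the last successful pass to
/// Meilisearch. Spawn this with tokio::spawn at application startup.
pub async fn resync_loop(pool: PgPool, meili: Client) {
    let mut ticker = tokio::time::interval(Duration::from_secs(60));
    let mut last_sync: DateTime<Utc> = Utc::now();
    loop {
        ticker.tick().await;
        // Record the pass start before querying, so rows updated while
        // the pass runs are picked up next time rather than missed.
        let pass_started = Utc::now();
        let changed = sqlx::query_as!(
            Article,
            "SELECT id, title, body FROM articles WHERE updated_at > $1",
            last_sync
        )
        .fetch_all(&pool)
        .await;
        match changed {
            Ok(articles) if !articles.is_empty() => {
                // On failure the watermark is not advanced, so these
                // rows are retried on the next pass.
                if meili
                    .index("articles")
                    .add_documents(&articles, Some("id"))
                    .await
                    .is_ok()
                {
                    last_sync = pass_started;
                }
            }
            Ok(_) => last_sync = pass_started,
            Err(e) => tracing::warn!("resync query failed: {e}"),
        }
    }
}
```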
When to use Meilisearch
Add Meilisearch when search is a primary user-facing feature and you need:
- Automatic typo tolerance across all indexed fields
- Faceted search and filtering
- Instant prefix matching (results as the user types each character)
- Relevance ranking that works well out of the box without manual tuning
Accept the operational cost: a separate service to run, a sync strategy to maintain, and eventual consistency between your database and search index.
tantivy
tantivy is an embedded full-text search library for Rust. Think of it as Lucene for Rust: you link it into your application directly, with no separate process or HTTP API. It provides BM25 scoring, configurable tokenisers with stemming support for 17 languages, phrase queries, and faceted search.
tantivy is a good fit when you need more powerful search than PostgreSQL FTS but want to avoid adding infrastructure. The index lives in your application process, so there is no sync problem and no network hop. The trade-off is that you manage the index lifecycle yourself, and the index writer holds an exclusive lock, which limits it to a single-process deployment (or requires designating one process as the indexer).
tantivy does not provide built-in typo tolerance. If you need automatic fuzzy matching, Meilisearch is a better choice.
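A minimal sketch of the embedded workflow, written against the tantivy 0.22-era API. Method names and the writer’s document type parameter have changed between tantivy versions, so treat this as illustrative rather than definitive:

```rust
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::{Schema, STORED, TEXT};
use tantivy::{doc, Index, TantivyDocument};

fn main() -> tantivy::Result<()> {
    // Define the schema: which fields are tokenised, which are stored.
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT | STORED);
    let body = schema_builder.add_text_field("body", TEXT);
    let index = Index::create_in_ram(schema_builder.build());

    // The writer holds an exclusive lock; 50 MB is its memory budget.
    let mut writer = index.writer::<TantivyDocument>(50_000_000)?;
    writer.add_document(doc!(
        title => "Rust web frameworks",
        body => "Axum, Actix, and friends"
    ))?;
    writer.commit()?;

    // Search with BM25 scoring over both fields.
    let searcher = index.reader()?.searcher();
    let query = QueryParser::for_index(&index, vec![title, body])
        .parse_query("rust")?;
    for (_score, addr) in searcher.search(&query, &TopDocs::with_limit(10))? {
        let retrieved: TantivyDocument = searcher.doc(addr)?;
        println!("{}", retrieved.to_json(&index.schema()));
    }
    Ok(())
}
```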
Gotchas
websearch_to_tsquery never errors, to_tsquery does. Use websearch_to_tsquery or plainto_tsquery for user-facing search. to_tsquery requires valid operator syntax and will return a SQL error on malformed input like unbalanced parentheses.
Generated column expressions must be immutable. to_tsvector('english', title) with a string literal regconfig is immutable. If the language configuration comes from another column, you need a trigger instead of a generated column.
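If, for example, each row carried its own language column (hypothetical here), search_vector would have to be a plain tsvector column maintained by a trigger, along these lines:

```sql
CREATE FUNCTION articles_search_vector_refresh() RETURNS trigger AS $$
BEGIN
    NEW.search_vector :=
        setweight(to_tsvector(NEW.language::regconfig, coalesce(NEW.title, '')), 'A') ||
        setweight(to_tsvector(NEW.language::regconfig, coalesce(NEW.body, '')), 'B');
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER articles_search_vector_refresh
    BEFORE INSERT OR UPDATE OF title, body, language ON articles
    FOR EACH ROW EXECUTE FUNCTION articles_search_vector_refresh();
```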
The pg_trgm extension must be installed in your development database. The query! macros connect at compile time. If the extension is missing, any query using similarity(), %, or related operators will fail to compile.
ts_headline on large result sets is slow. Always filter and limit rows before applying ts_headline. Never call it on the full table.
sqlx prepare needs extensions too. If you use cargo sqlx prepare for offline compilation in CI, the database used for preparation must have pg_trgm installed and the schema fully migrated.