SQLx is an async database library for Rust that checks your SQL queries against a real PostgreSQL database at compile time. If a query references a column that does not exist, uses the wrong type, or has a syntax error, the compiler catches it before the application runs. This is the primary reason to choose SQLx over other database libraries.
SQLx is not an ORM. There is no query builder, no model macros, and no schema-to-struct code generation. Write SQL directly, and SQLx verifies it.
Setup
Add SQLx to your Cargo.toml:
[dependencies]
sqlx = { version = "0.8", features = [
"runtime-tokio",
"tls-rustls-ring-webpki",
"postgres",
"macros",
"migrate",
] }
Feature breakdown:
- runtime-tokio selects the Tokio async runtime.
- tls-rustls-ring-webpki enables TLS via rustls with WebPKI certificate roots. For local development without TLS, this still needs to be present, but the connection will negotiate plaintext if the server allows it.
- postgres enables the PostgreSQL driver.
- macros enables query!, query_as!, and the other compile-time checked query macros.
- migrate enables the migration runner and the migrate! macro.
Add type integration features as needed:
sqlx = { version = "0.8", features = [
"runtime-tokio",
"tls-rustls-ring-webpki",
"postgres",
"macros",
"migrate",
"uuid",
"time",
"json",
] }
These enable uuid::Uuid, time crate date/time types, and serde_json::Value / Json<T> for JSONB columns, respectively.
Install the CLI
The sqlx-cli tool manages databases and migrations:
cargo install sqlx-cli --no-default-features --features rustls,postgres
This installs only PostgreSQL support, which keeps the build faster than the full default install.
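Once installed, day-to-day use looks like the following sketch (the migration name is illustrative; the commands read DATABASE_URL from the environment or a .env file):

```shell
# Create the database named in DATABASE_URL
sqlx database create

# Generate a new timestamped migration file under migrations/
sqlx migrate add create_users

# Apply all pending migrations
sqlx migrate run
```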
Connecting to PostgreSQL
SQLx reads the database connection string from the DATABASE_URL environment variable. Set it in a .env file at the project root:
DATABASE_URL=postgres://myapp:password@localhost:5432/myapp_dev
The format is postgres://user:password@host:port/database. SQLx’s macros use dotenvy to read .env automatically at compile time.
PostgreSQL itself should be running as a Docker container managed by Docker Compose. See the Development Environment section for the container setup.
Connection pooling
Create a connection pool at application startup and share it through Axum’s application state. PgPool is internally reference-counted, so cloning it is cheap.
use sqlx::postgres::PgPoolOptions;
use sqlx::PgPool;
let pool = PgPoolOptions::new()
.max_connections(5)
.connect(&std::env::var("DATABASE_URL").expect("DATABASE_URL must be set"))
.await
.expect("failed to connect to database");
Pass the pool into your Axum AppState:
#[derive(Clone)]
struct AppState {
db: PgPool,
}
let app = Router::new()
.route("/", get(index))
.with_state(AppState { db: pool });
Handlers extract it with State:
async fn list_users(State(state): State<AppState>) -> impl IntoResponse {
let users = sqlx::query_as!(User, "SELECT id, name, email FROM users")
.fetch_all(&state.db)
.await
.unwrap();
// render users
}
The default pool configuration is reasonable for most applications:
| Option | Default | Purpose |
|---|---|---|
| max_connections | 10 | Maximum connections in the pool |
| min_connections | 0 | Minimum idle connections maintained |
| acquire_timeout | 30s | How long to wait for a free connection |
| idle_timeout | 10 min | Close idle connections after this duration |
| max_lifetime | 30 min | Close connections older than this |
Override them on PgPoolOptions if needed. For most web applications, setting max_connections to match your expected concurrency and leaving the rest at defaults works well.
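As a sketch, overriding a few of these on PgPoolOptions (the specific values are illustrative, not recommendations):

```rust
use std::time::Duration;
use sqlx::postgres::{PgPool, PgPoolOptions};

// Hypothetical tuning: a larger pool with a tighter acquire timeout.
async fn build_pool(database_url: &str) -> Result<PgPool, sqlx::Error> {
    PgPoolOptions::new()
        .max_connections(20)                      // match expected concurrency
        .min_connections(2)                       // keep two warm connections
        .acquire_timeout(Duration::from_secs(5))  // fail fast instead of waiting 30s
        .connect(database_url)
        .await
}
```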
For lazy connection establishment (useful in tests or CLIs where the database might not be needed):
let pool = PgPoolOptions::new()
.max_connections(5)
.connect_lazy(&database_url)?;
This returns immediately. Connections are established on first use.
Compile-time checked queries
The query! macro is the core of SQLx. At compile time, it connects to the database specified by DATABASE_URL, sends the query to PostgreSQL for parsing and type-checking, and generates Rust code that matches the result columns.
query!
query! returns an anonymous record type with fields matching the query’s output columns:
let row = sqlx::query!("SELECT id, name, email FROM users WHERE id = $1", user_id)
.fetch_one(&pool)
.await?;
// row.id: i32
// row.name: String
// row.email: String
Bind parameters use PostgreSQL’s $1, $2, … syntax. The macro checks that the number and types of bind arguments match what the query expects.
query_as!
query_as! maps results directly into a named struct:
struct User {
id: i32,
name: String,
email: String,
}
let user = sqlx::query_as!(User, "SELECT id, name, email FROM users WHERE id = $1", user_id)
.fetch_one(&pool)
.await?;
The macro generates a struct literal, matching column names to field names. It does not use the FromRow trait. The struct does not need any derive macros.
Fetch methods
Choose the fetch method based on how many rows you expect:
| Method | Returns | Use when |
|---|---|---|
| .execute(&pool) | PgQueryResult | INSERT, UPDATE, DELETE with no RETURNING |
| .fetch_one(&pool) | T | Exactly one row expected (errors if zero or multiple) |
| .fetch_optional(&pool) | Option<T> | Zero or one row |
| .fetch_all(&pool) | Vec<T> | Collect all rows into a Vec |
| .fetch(&pool) | impl Stream<Item = Result<T>> | Stream rows without buffering |
fetch_one returns an error if the query produces zero rows or more than one. Use fetch_optional when the row might not exist.
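The streaming variant is the one without an example elsewhere in this section; a sketch, assuming the futures crate as a direct dependency for TryStreamExt:

```rust
use futures::TryStreamExt; // provides try_next() on the row stream
use sqlx::PgPool;

// Process rows one at a time without buffering the full result set in memory.
async fn print_names(pool: &PgPool) -> Result<(), sqlx::Error> {
    let mut rows = sqlx::query!("SELECT name FROM users").fetch(pool);
    while let Some(row) = rows.try_next().await? {
        println!("{}", row.name);
    }
    Ok(())
}
```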
Nullable columns
The macro infers nullability from the database schema. A column with a NOT NULL constraint maps to T; a nullable column maps to Option<T>.
Override nullability in the column alias when the macro gets it wrong (common with expressions, COALESCE, or complex joins):
// Force non-null (decoding errors at runtime if the value is NULL)
sqlx::query!(r#"SELECT count(*) as "count!" FROM users"#)
// Force nullable
sqlx::query!(r#"SELECT name as "name?" FROM users"#)
// Override both nullability and type
sqlx::query!(r#"SELECT id as "id!: uuid::Uuid" FROM users"#)
The override syntax uses the column alias in double quotes:
"col!"forces non-null"col?"forces nullable"col: Type"overrides the Rust type"col!: Type"forces non-null with a type override
RETURNING clauses
PostgreSQL’s RETURNING clause turns INSERT, UPDATE, and DELETE into queries that produce rows. Use fetch_one with query_as! to get the created or modified record back:
let user = sqlx::query_as!(
User,
"INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email",
name,
email
)
.fetch_one(&pool)
.await?;
This avoids a separate SELECT after every insert.
Offline mode for CI
Compile-time query checking requires a running PostgreSQL database. In CI environments where a database is not available during compilation, SQLx provides offline mode.
1. With the database running locally, generate the query cache:

cargo sqlx prepare --workspace

This creates a .sqlx/ directory containing metadata for every compile-time checked query in the project.

2. Commit .sqlx/ to version control.

3. When DATABASE_URL is absent at compile time and .sqlx/ exists, the macros use the cached metadata instead of connecting to a database.

4. In CI, verify the cache is up to date:

cargo sqlx prepare --workspace --check

This fails if any query has changed without regenerating the cache, catching stale metadata before it causes runtime surprises.
To include queries from tests and other non-default targets:
cargo sqlx prepare --workspace -- --all-targets --all-features
Set SQLX_OFFLINE=true to force offline mode even when DATABASE_URL is present. This is useful for verifying that the offline cache works correctly.
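A minimal CI sketch tying these together (the build and test invocations are illustrative):

```shell
# Verify the committed query cache matches the current queries
cargo sqlx prepare --workspace --check

# Build and test without a database, using the .sqlx/ cache
SQLX_OFFLINE=true cargo build
SQLX_OFFLINE=true cargo test
```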
Writing and organising queries
Keep queries inline, next to the code that uses them. SQLx’s macros are designed for this: the query text and its bind parameters live together in the handler or module function, so the reader sees the full picture without jumping between files.
pub async fn find_user_by_email(pool: &PgPool, email: &str) -> Result<Option<User>, sqlx::Error> {
sqlx::query_as!(
User,
"SELECT id, name, email, created_at FROM users WHERE email = $1",
email
)
.fetch_optional(pool)
.await
}
pub async fn create_user(pool: &PgPool, name: &str, email: &str) -> Result<User, sqlx::Error> {
sqlx::query_as!(
User,
"INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email, created_at",
name,
email
)
.fetch_one(pool)
.await
}
For queries that are genuinely long (complex joins, CTEs), query_file_as! reads SQL from a separate file:
-- queries/users_with_posts.sql
SELECT u.id, u.name, u.email, count(p.id) as "post_count!"
FROM users u
LEFT JOIN posts p ON p.user_id = u.id
GROUP BY u.id, u.name, u.email
ORDER BY u.name

let users = sqlx::query_file_as!(UserWithPosts, "queries/users_with_posts.sql")
.fetch_all(&pool)
.await?;
File paths are relative to the crate’s Cargo.toml directory. The file is still checked at compile time against the database.
Mapping query results to Rust types
With macros (preferred)
query_as! maps columns to struct fields by name. The struct needs no special derives:
struct User {
id: i32,
name: String,
email: String,
bio: Option<String>, // nullable column
created_at: time::OffsetDateTime, // TIMESTAMPTZ with the `time` feature
}
let users = sqlx::query_as!(User, "SELECT id, name, email, bio, created_at FROM users")
.fetch_all(&pool)
.await?;
The macro matches column names to field names at compile time. If the types do not match (e.g., a NOT NULL TEXT column mapped to i32), compilation fails.
With FromRow (runtime)
For cases where compile-time checking is not available (dynamic queries, generic code), use sqlx::FromRow:
#[derive(Debug, sqlx::FromRow)]
struct User {
id: i32,
name: String,
email: String,
bio: Option<String>,
}
let users: Vec<User> = sqlx::query_as::<_, User>("SELECT id, name, email, bio FROM users")
.fetch_all(&pool)
.await?;
Note the distinction: query_as! (with !) is a macro that checks at compile time and does not use FromRow. query_as::<_, T>() (without !) is a runtime function that requires T: FromRow.
FromRow supports field-level attributes for column renaming, defaults, and type conversion:
#[derive(sqlx::FromRow)]
struct User {
id: i32,
#[sqlx(rename = "user_name")]
name: String,
#[sqlx(default)]
role: String,
}

PostgreSQL type mappings
SQLx maps PostgreSQL types to Rust types. The common mappings, using the feature flags from the setup above:
| PostgreSQL | Rust | Feature |
|---|---|---|
| BOOL | bool | |
| INT2 / SMALLINT | i16 | |
| INT4 / INT | i32 | |
| INT8 / BIGINT | i64 | |
| FLOAT4 / REAL | f32 | |
| FLOAT8 / DOUBLE PRECISION | f64 | |
| TEXT, VARCHAR | String | |
| BYTEA | Vec<u8> | |
| UUID | uuid::Uuid | uuid |
| TIMESTAMPTZ | time::OffsetDateTime | time |
| TIMESTAMP | time::PrimitiveDateTime | time |
| DATE | time::Date | time |
| TIME | time::Time | time |
| JSON, JSONB | serde_json::Value or Json<T> | json |
| INT4[], TEXT[], etc. | Vec<T> | |
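Array support also covers bind parameters: a Rust slice binds to a PostgreSQL array, which pairs naturally with = ANY for IN-style queries. A sketch against the users table used throughout this section:

```rust
use sqlx::PgPool;

// Fetch a batch of users by id; the slice binds as a single INT4[] parameter.
async fn names_by_ids(pool: &PgPool, ids: &[i32]) -> Result<Vec<String>, sqlx::Error> {
    let rows = sqlx::query!("SELECT name FROM users WHERE id = ANY($1)", ids)
        .fetch_all(pool)
        .await?;
    Ok(rows.into_iter().map(|r| r.name).collect())
}
```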
UUID
UUID primary keys are common in web applications. Enable the uuid feature and use uuid::Uuid directly:
use uuid::Uuid;
struct User {
id: Uuid,
name: String,
email: String,
}
let user = sqlx::query_as!(
User,
"INSERT INTO users (id, name, email) VALUES ($1, $2, $3) RETURNING id, name, email",
Uuid::new_v4(),
name,
email
)
.fetch_one(&pool)
.await?;
Add uuid to your direct dependencies too, since you will construct values from it:
uuid = { version = "1", features = ["v4"] }

Timestamps with the time crate
Enable the time feature for date and time support. TIMESTAMPTZ columns map to time::OffsetDateTime, which carries a UTC offset:
use time::OffsetDateTime;
struct AuditEntry {
id: i32,
action: String,
created_at: OffsetDateTime,
}
let entry = sqlx::query_as!(
AuditEntry,
"INSERT INTO audit_log (action) VALUES ($1) RETURNING id, action, created_at",
action
)
.fetch_one(&pool)
.await?;
PostgreSQL stores TIMESTAMPTZ in UTC internally. The OffsetDateTime you receive will always have a UTC offset.
For the time crate, add it as a direct dependency:
time = "0.3"JSONB
JSONB is useful for semi-structured data that does not warrant its own columns. Enable the json feature and use serde_json::Value for unstructured JSON or sqlx::types::Json<T> for typed deserialization:
use sqlx::types::Json;
#[derive(serde::Serialize, serde::Deserialize)]
struct Preferences {
theme: String,
notifications: bool,
}
// Insert typed JSON
sqlx::query!(
"UPDATE users SET preferences = $1 WHERE id = $2",
Json(&prefs) as _,
user_id
)
.execute(&pool)
.await?;
// Read typed JSON
let row = sqlx::query!(
r#"SELECT preferences as "preferences!: Json<Preferences>" FROM users WHERE id = $1"#,
user_id
)
.fetch_one(&pool)
.await?;
let prefs: Preferences = row.preferences.0;
The as _ cast on the insert side is required to help the macro infer the correct PostgreSQL type. On the read side, the type override in the column alias tells the macro to deserialise into Json<Preferences>.
Custom enum types
Map PostgreSQL enum types to Rust enums with sqlx::Type:
#[derive(Debug, sqlx::Type)]
#[sqlx(type_name = "user_role", rename_all = "lowercase")]
enum UserRole {
Admin,
Member,
Guest,
}
This corresponds to a PostgreSQL type created with:
CREATE TYPE user_role AS ENUM ('admin', 'member', 'guest');
Use the enum directly in queries:
sqlx::query!(
"INSERT INTO users (name, role) VALUES ($1, $2)",
name,
role as UserRole
)
.execute(&pool)
.await?;
The as UserRole cast tells the macro which Rust type to use for encoding.
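Reading the enum back needs a type override in the column alias, since the macro cannot infer custom types on its own. A sketch, assuming the UserRole enum defined above:

```rust
// Fetch a user's role, decoding the user_role column into the Rust enum.
async fn role_of(pool: &sqlx::PgPool, user_id: i32) -> Result<UserRole, sqlx::Error> {
    let row = sqlx::query!(
        r#"SELECT role as "role: UserRole" FROM users WHERE id = $1"#,
        user_id
    )
    .fetch_one(pool)
    .await?;
    Ok(row.role)
}
```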
Transactions
A transaction groups multiple queries into an atomic unit. Either all succeed and the changes are committed, or any failure rolls everything back.
Start a transaction with pool.begin():
let mut tx = pool.begin().await?;
let user = sqlx::query_as!(
User,
"INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email",
name,
email
)
.fetch_one(&mut *tx)
.await?;
sqlx::query!(
"INSERT INTO audit_log (user_id, action) VALUES ($1, $2)",
user.id,
"account_created"
)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Pass the transaction to queries with &mut *tx. This dereferences the Transaction to the underlying connection and reborrows it.
If commit() is never called, the transaction rolls back when it is dropped. This makes the ? operator transaction-safe: if any query fails and the function returns early, the transaction is dropped and automatically rolled back.
async fn transfer(
pool: &PgPool,
from_id: i32,
to_id: i32,
amount: i64,
) -> Result<(), sqlx::Error> {
let mut tx = pool.begin().await?;
sqlx::query!(
"UPDATE accounts SET balance = balance - $1 WHERE id = $2",
amount,
from_id
)
.execute(&mut *tx)
.await?; // rolls back on failure
sqlx::query!(
"UPDATE accounts SET balance = balance + $1 WHERE id = $2",
amount,
to_id
)
.execute(&mut *tx)
.await?; // rolls back on failure
tx.commit().await?;
Ok(())
}
For explicit rollback (useful when a business rule fails after the queries succeed):
if balance_too_low {
tx.rollback().await?;
return Err(/* ... */);
}

Gotchas
DATABASE_URL must be set at compile time. The query! macros connect to PostgreSQL during compilation. If the variable is missing and no .sqlx/ cache exists, compilation fails. Keep a .env file in your project root for local development.
&mut *tx syntax. Passing a transaction to a query requires &mut *tx, not &mut tx or &tx. The Transaction type implements DerefMut to the underlying connection; the dereference-reborrow is needed for the borrow checker.
Column name matching in query_as!. The column names in the SELECT must match the struct field names exactly. Use AS to rename columns if the database naming convention differs:
sqlx::query_as!(
User,
"SELECT id, user_name AS name FROM users"
)
Nullable inference in expressions. The macro sometimes cannot determine nullability for computed expressions (count(*), COALESCE, subqueries). Use the "col!" override to tell it the result is non-null:
sqlx::query!(r#"SELECT count(*) as "total!" FROM users"#)
Pool exhaustion. If all connections are in use and acquire_timeout is reached, the next query fails. This usually means the pool is too small for the application’s concurrency, or a handler is holding a connection too long (a common cause is doing non-database work while a transaction is open). Keep transactions short.