Migrations track every change to your database schema as versioned SQL files. SQLx includes a migration system that runs these files in order, records which have been applied, and validates that applied migrations have not been modified. The same sqlx-cli tool installed in the database section manages the full lifecycle.
Creating migrations
Generate a new migration with sqlx migrate add. Use the -r flag to create reversible migrations, which produce a .up.sql and .down.sql pair:
sqlx migrate add -r create_users
This creates two files in the migrations/ directory at the project root:
migrations/
20260226140000_create_users.up.sql
20260226140000_create_users.down.sql
The timestamp prefix is generated in UTC and determines execution order. Timestamp versioning is the default and prevents conflicts when multiple developers create migrations concurrently.
Once the first migration uses -r, subsequent calls to sqlx migrate add will produce reversible pairs automatically. The CLI infers the mode from existing files.
Writing the SQL
The .up.sql file contains the forward schema change:
-- migrations/20260226140000_create_users.up.sql
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL UNIQUE,
    name TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

The .down.sql file reverses it:
-- migrations/20260226140000_create_users.down.sql
DROP TABLE users;
Keep each migration focused on a single change. A migration that creates a table should not also modify a different table. This makes reverting predictable and keeps the history readable.
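As an illustration, a later change to the same table gets its own pair rather than being folded into the original migration. The filenames and column here are hypothetical:

```sql
-- migrations/20260227100000_add_last_login.up.sql (hypothetical follow-up)
ALTER TABLE users ADD COLUMN last_login TIMESTAMPTZ;

-- migrations/20260227100000_add_last_login.down.sql
ALTER TABLE users DROP COLUMN last_login;
```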
Running migrations
At application startup
The migrate! macro embeds migration files directly into the compiled binary. Call .run() on the pool at startup to apply any pending migrations before the application begins serving requests:
use sqlx::PgPool;

#[tokio::main]
async fn main() {
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    let pool = PgPool::connect(&database_url)
        .await
        .expect("failed to connect to database");

    sqlx::migrate!()
        .run(&pool)
        .await
        .expect("failed to run migrations");

    // build router, start server...
}
migrate!() reads from the migrations/ directory relative to Cargo.toml. The migration SQL is baked into the binary at compile time, so the deployed binary is self-contained: it does not need the migration files on disk.
This is the simplest deployment model. One binary, one process, and the schema is always in sync with the code.
With the CLI
For larger deployments where migrations should run as a separate step before the application starts, use the CLI directly:
sqlx migrate run
This reads DATABASE_URL from the environment or a .env file. The CLI approach gives you explicit control over when schema changes happen, which matters when you have multiple application instances starting simultaneously, need to run migrations from a CI pipeline before deployment, or want human review of what will be applied before it runs.
The two approaches are not mutually exclusive. migrate run is idempotent: it skips any migration already recorded in the database. You can run migrations from the CLI in your deployment pipeline and keep sqlx::migrate!().run(&pool) in your application code as a safety net.
Recompilation caveat
The migrate! macro runs at compile time, but Cargo does not automatically detect changes to non-Rust files. Adding a new .sql migration without modifying any .rs file will not trigger recompilation. The application will silently use the old set of migrations.
Fix this by generating a build.rs that watches the migrations directory:
sqlx migrate build-script
This creates a build.rs at the project root:
// generated by `sqlx migrate build-script`
fn main() {
    println!("cargo:rerun-if-changed=migrations");
}
Commit this file. With it in place, any change to the migrations/ directory triggers a rebuild.
Reverting migrations
Revert the most recently applied migration:
sqlx migrate revert
This runs the .down.sql file for the last applied migration. Run it multiple times to step back further, or target a specific version:
# revert everything after version 20260226140000
sqlx migrate revert --target-version 20260226140000
# revert all migrations
sqlx migrate revert --target-version 0
Reverting is primarily a development tool. In production, writing a new forward migration to undo a change is usually safer than reverting, because other parts of the system may already depend on the schema change.
Checking migration status
Inspect which migrations have been applied and whether any are out of sync:
sqlx migrate info
This prints each migration’s version, description, applied status, and whether its checksum matches the file on disk. Use this to diagnose problems before making changes, especially in shared environments.
How SQLx tracks migrations
SQLx creates a _sqlx_migrations table automatically on first run. It records each applied migration’s version, description, checksum (a SHA-384 hash of the SQL content), execution time, and success status.
Two behaviours follow from this:
Checksum validation. Every time migrations run, SQLx compares the stored checksum for each already-applied migration against the current file on disk. If a file has been edited after it was applied, SQLx raises an error. This catches accidental edits to applied migrations. If you need to correct a mistake, write a new migration rather than editing the old one.
Dirty state detection. If a migration fails partway through, its row may be recorded with success = false. SQLx refuses to run further migrations until the dirty state is resolved. In development, the simplest fix is to drop and recreate the database. In production, investigate the failure, fix it manually, and update the row.
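You can inspect this bookkeeping directly with any SQL client. A sketch, assuming the default table and the column names used by the PostgreSQL implementation:

```sql
-- Roughly the data that `sqlx migrate info` reports
SELECT version, description, installed_on, success
FROM _sqlx_migrations
ORDER BY version;
```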
Managing migrations across environments
Development
The typical workflow during development:
# create the database (if it doesn't exist)
sqlx database create
# apply all pending migrations
sqlx migrate run
# full reset when needed
sqlx database drop
sqlx database create
sqlx migrate run
CI
In CI, create a disposable database, apply migrations, and verify the offline query cache is up to date:
sqlx database create
sqlx migrate run
cargo sqlx prepare --workspace --check
The --check flag fails the build if any query! macro’s cached metadata in .sqlx/ is stale. This enforces that developers run cargo sqlx prepare after schema changes.
Production
For applications using the embedded migrate!() macro, no separate migration step is needed. The binary applies its own migrations on startup.
For CLI-based deployments, run sqlx migrate run as part of the deployment process, before starting the application. In Docker, this is typically an entrypoint script or an init container. The --dry-run flag shows what would be applied without executing, useful for pre-deployment review:
sqlx migrate run --dry-run
Concurrency safety
SQLx acquires a PostgreSQL advisory lock before running migrations. If multiple instances start simultaneously, only one will apply migrations while the others wait. This prevents race conditions during rolling deployments.
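The underlying mechanism is the standard advisory-lock pattern. A rough sketch in SQL; the literal key here is illustrative, since SQLx derives its actual lock key from a hash of the current database name:

```sql
-- Illustrative only: the real key is computed from the database name.
SELECT pg_advisory_lock(42);    -- blocks until no other instance holds the lock
-- ...apply any pending migrations...
SELECT pg_advisory_unlock(42);
```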
Gotchas
Never edit an applied migration. The checksum validation will reject it. Write a new corrective migration instead.
Don’t mix simple and reversible migrations. SQLx infers the migration type from existing files. Stick with one style (reversible, using -r) throughout the project.
Commit build.rs and .sqlx/. The build.rs file (from sqlx migrate build-script) ensures new migrations trigger recompilation. The .sqlx/ directory (from cargo sqlx prepare) enables compilation without a live database. Both belong in version control.
DATABASE_URL takes precedence over .sqlx/. In CI, if DATABASE_URL is set during compilation, the query! macros will try to connect to it rather than using the offline cache. Set SQLX_OFFLINE=true explicitly when you want to force offline mode.