Hypermedia-Driven Applications with Rust

A practical guide to building full-stack web applications in Rust using hypermedia-driven architecture
Generated on February 26, 2026

Table of Contents

Getting Started
  1. Development Environment
  2. Project Structure
Architecture
  1. Why Hypermedia-Driven Architecture
  2. The Web Platform Has Caught Up
  3. SPA vs HDA: A Side-by-Side Comparison
  4. When to Use HDA (and When Not To)
Core Stack
  1. Web Server with Axum
  2. HTML Templating with Maud
  3. Interactivity with HTMX
  4. CSS Without Frameworks
Data
  1. Database with PostgreSQL and SQLx
  2. Database Migrations
  3. Search
Auth & Security
  1. Authentication
  2. Authorization
  3. Web Application Security
Forms & Errors
  1. Form Handling and Validation
  2. Error Handling
Integrations
  1. Server-Sent Events and Real-Time Updates
  2. HTTP Client and External APIs
  3. Background Jobs and Durable Execution with Restate
  4. AI and LLM Integration
Infrastructure
  1. File Storage
  2. Email
  3. Configuration and Secrets
  4. Observability
Operations
  1. Testing
  2. Continuous Integration and Delivery
  3. Deployment
  4. Web Application Performance
Practices
  1. Rust Best Practices for Web Development
  2. Building with AI Coding Agents

Getting Started

Development Environment

Run Rust natively on your host machine. Run backing services (PostgreSQL, Valkey, Restate, RustFS, MailCrab) in Docker containers. This separation keeps your edit-compile-run cycle fast while giving you disposable, reproducible infrastructure.

Rust Toolchain

Install rustup, which manages your Rust compiler, standard library, and development tools.

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

The default installation profile includes rustc, cargo, clippy, and rustfmt. Add rust-analyzer (the language server) and rust-src (standard library source, needed for full rust-analyzer functionality) separately:

rustup component add rust-analyzer rust-src

Verify the installation:

rustc --version
cargo --version

Keep everything current with rustup update. Rust releases a new stable version every six weeks.

What each tool does

  • rustc compiles Rust source code. You rarely invoke it directly; cargo handles it.
  • cargo builds, tests, runs, and manages dependencies. It is the entry point for nearly every Rust workflow.
  • clippy is the official linter. Run cargo clippy to catch common mistakes and non-idiomatic patterns.
  • rustfmt formats code to a consistent style. Run cargo fmt to format, cargo fmt -- --check to verify without modifying files.
  • rust-analyzer provides IDE features (completions, diagnostics, go-to-definition, refactoring) via the Language Server Protocol. Any editor or AI coding agent with LSP support can use it.

Backing Services with Docker Compose

The application depends on five external services during development. Run them in containers so they are disposable and require no host-level installation.

Service      Image                                           Ports                       Purpose
PostgreSQL   postgres:18-alpine                              5432                        Primary database
Valkey       valkey/valkey:9-alpine                          6379                        Pub/sub and caching
Restate      docker.restate.dev/restatedev/restate:latest    8080, 9070, 9071            Durable execution engine
RustFS       rustfs/rustfs:latest                            9000, 9001                  S3-compatible object storage
MailCrab     marlonb/mailcrab:latest                         1025 (SMTP), 1080 (Web UI)  Email capture for testing

Create compose.yaml at the project root:

services:
  postgres:
    image: postgres:18-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app_dev
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app"]
      interval: 5s
      timeout: 5s
      retries: 5

  valkey:
    image: valkey/valkey:9-alpine
    ports:
      - "6379:6379"
    volumes:
      - valkeydata:/data

  restate:
    image: docker.restate.dev/restatedev/restate:latest
    ports:
      - "8080:8080"
      - "9070:9070"
      - "9071:9071"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - restatedata:/target

  rustfs:
    image: rustfs/rustfs:latest
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      RUSTFS_ROOT_USER: minioadmin
      RUSTFS_ROOT_PASSWORD: minioadmin
    volumes:
      - rustfsdata:/data

  mailcrab:
    image: marlonb/mailcrab:latest
    ports:
      - "1080:1080"
      - "1025:1025"

volumes:
  pgdata:
  valkeydata:
  restatedata:
  rustfsdata:

Start all services:

docker compose up -d

Stop containers (data persists in named volumes):

docker compose down

Stop and destroy everything, including data:

docker compose down -v

Service notes

Valkey is the BSD-licensed fork of Redis, maintained by the Linux Foundation. It is fully API-compatible with Redis, so any Redis client library works without changes. The guide uses Valkey because its licence is unambiguous.

Restate is a durable execution engine for reliable background work, workflows, and agentic AI. The extra_hosts entry allows Restate (running inside Docker) to reach your application (running on the host) via host.docker.internal. Use this hostname instead of localhost when registering service deployments with the Restate admin API on port 9070.

RustFS is an S3-compatible object storage server written in Rust, licensed under Apache 2.0. It replaces MinIO, which entered maintenance mode in December 2025. RustFS is still in alpha but functional for local development. Its web console is available at http://localhost:9001.

MailCrab captures all email sent to it. Configure your application’s SMTP to point at localhost:1025, then view captured messages at http://localhost:1080. No email leaves your machine.

Docker runtime

Any Docker-compatible runtime works: Docker Desktop, OrbStack (macOS), Colima (macOS/Linux), or Podman. The docker compose commands behave identically across all of them.

cargo xtask

cargo xtask is a convention for writing project automation as a Rust binary inside your workspace. Instead of shell scripts or Makefiles, your build tasks are Rust code: checked by the compiler, cross-platform, and requiring no external tooling beyond cargo.

The pattern works by defining a cargo alias that runs a dedicated crate.

Setup

Create the alias in .cargo/config.toml:

[alias]
xtask = "run --package xtask --"

Add an xtask crate to your workspace. In the root Cargo.toml:

[workspace]
resolver = "2"
members = ["app", "xtask"]
default-members = ["app"]

default-members prevents cargo build and cargo test from compiling the xtask crate unless explicitly requested.

Create xtask/Cargo.toml:

[package]
name = "xtask"
version = "0.1.0"
edition = "2024"
publish = false

[dependencies]
clap = { version = "4", features = ["derive"] }
xshell = "0.2"
anyhow = "1"

xshell provides shell-like command execution without invoking an actual shell. Variable interpolation is safe by construction, preventing injection.

Create xtask/src/main.rs:

use std::process::ExitCode;

use anyhow::Result;
use clap::{Parser, Subcommand};
use xshell::{cmd, Shell};

#[derive(Parser)]
#[command(name = "xtask")]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Start backing services and the dev server
    Dev,
    /// Run database migrations
    Migrate,
    /// Run all CI checks locally
    Ci,
}

fn main() -> ExitCode {
    let cli = Cli::parse();
    let result = match cli.command {
        Command::Dev => dev(),
        Command::Migrate => migrate(),
        Command::Ci => ci(),
    };
    match result {
        Ok(()) => ExitCode::SUCCESS,
        Err(e) => {
            eprintln!("error: {e:?}");
            ExitCode::FAILURE
        }
    }
}

fn dev() -> Result<()> {
    let sh = Shell::new()?;
    cmd!(sh, "docker compose up -d").run()?;
    cmd!(sh, "bacon run").run()?;
    Ok(())
}

fn migrate() -> Result<()> {
    let sh = Shell::new()?;
    cmd!(sh, "cargo sqlx migrate run").run()?;
    Ok(())
}

fn ci() -> Result<()> {
    let sh = Shell::new()?;
    cmd!(sh, "cargo fmt --all -- --check").run()?;
    cmd!(sh, "cargo clippy --all-targets -- -D warnings").run()?;
    cmd!(sh, "cargo nextest run").run()?;
    Ok(())
}

Run tasks with:

cargo xtask dev       # start services + dev server
cargo xtask migrate   # run database migrations
cargo xtask ci        # fmt check, clippy, tests

Add subcommands as your project grows. Common additions: seed (populate development data), reset (drop and recreate the database), build-css (run lightningcss processing).

Editor Configuration

Any editor with Language Server Protocol support works for Rust development. Install the rust-analyzer extension or plugin for your editor of choice.

The following rust-analyzer settings matter for this stack. Apply them through your editor’s LSP configuration.

{
  "rust-analyzer.check.command": "clippy",
  "rust-analyzer.procMacro.enable": true,
  "rust-analyzer.cargo.buildScripts.enable": true,
  "rust-analyzer.check.allTargets": true
}

check.command: "clippy" runs clippy instead of cargo check on save, giving you lint feedback inline. Slightly slower on large workspaces, but the additional warnings are worth it.

procMacro.enable: true is critical for this stack. Maud’s html! macro, serde’s derive macros, and SQLx’s query! macro are all procedural macros. Without this setting, rust-analyzer cannot expand them, resulting in false errors and missing completions inside macro invocations.

cargo.buildScripts.enable: true ensures build scripts run during analysis. SQLx’s compile-time query checking depends on this.

check.allTargets: true includes tests, examples, and benchmarks in diagnostic checking.

Fast Iteration

bacon

bacon watches your source files and runs cargo commands on every change. It replaces the older cargo-watch, which is no longer actively developed (its maintainer recommends bacon).

Install it:

cargo install --locked bacon

Run it:

bacon           # defaults to cargo check
bacon clippy    # run clippy on every change
bacon test      # run tests on every change
bacon run       # build and run on every change

bacon provides a TUI with sorted, filtered diagnostics. Press t to switch to tests, c to switch to clippy, r to run the application. The full set of keyboard shortcuts is shown in the interface.

For project-specific jobs, create a bacon.toml at the workspace root:

[jobs.run]
command = ["cargo", "run"]
watch = ["src"]

[jobs.test-integration]
command = ["cargo", "nextest", "run", "--test", "integration"]
watch = ["src", "tests"]

Linking

On Linux with Rust 1.90+, the compiler uses lld (the LLVM linker) by default. This is significantly faster than the traditional system linker and requires no configuration.

On macOS, Apple’s default linker is adequate. No special setup is needed.

Incremental compilation

Cargo enables incremental compilation by default for debug builds. After the initial compile, changing a single file typically triggers a rebuild of only the affected crate and its dependents.

Two practices keep incremental rebuilds fast:

  • Split your workspace into focused crates. A change in one crate does not recompile unrelated crates. The Project Structure section covers this in detail.
  • Keep macro-heavy code in leaf crates. Procedural macro expansion is one of the slower compilation phases. Isolating it limits the rebuild radius.

cargo-nextest

cargo-nextest is a test runner that executes tests in parallel across separate processes. It is noticeably faster than cargo test on projects with more than a handful of tests, and its output is easier to read.

cargo install --locked cargo-nextest
cargo nextest run

Doctests are not supported by nextest. Run them separately with cargo test --doc.

Project Structure

A Cargo workspace groups multiple crates under a single Cargo.lock and shared target/ directory. Each crate has its own Cargo.toml and its own dependency list, which means the compiler enforces boundaries between crates: if crates/domain/Cargo.toml does not list sqlx, no code in that crate can import it. This is not a convention. It is a compilation error.

Splitting a project into workspace crates gives you faster incremental builds (changing one crate does not recompile unrelated ones), enforced dependency boundaries, and a clear map of what depends on what.

Workspace layout

Use a virtual manifest, a root Cargo.toml that contains [workspace] but no [package]. All application crates live under crates/:

my-app/
  Cargo.toml              # virtual manifest (workspace root)
  Cargo.lock
  .cargo/
    config.toml            # cargo aliases (xtask)
  crates/
    server/                # binary: composition root
    web/                   # library: Axum handlers, routing, middleware
    db/                    # library: SQLx queries and database access
    domain/                # library: shared types, business logic
    config/                # library: environment variable parsing
    jobs/                  # library: Restate durable execution handlers
    xtask/                 # binary: build automation (dev, migrate, ci)
  migrations/              # SQLx migration files
  compose.yaml             # backing services (Postgres, Valkey, etc.)
  .env                     # local environment variables

The flat crates/* layout is the simplest approach. Cargo’s crate namespace is flat, so hierarchical folder structures (like crates/libs/ and crates/services/) add visual complexity that does not map to anything Cargo understands. Put everything under crates/ and use the crate names to communicate purpose.

What each crate does

server is the binary crate and the composition root. It depends on every other crate and wires them together at startup: builds the database pool, constructs the Axum router, starts the HTTP listener. This is the only crate that sees the full dependency graph.

web contains Axum handlers, route definitions, middleware configuration, and Maud templates. It depends on domain for shared types and on db for data access. All HTTP-facing code lives here.

db owns all database access. SQLx queries, connection pool management, and result-to-type mappings belong in this crate. It depends on domain for the types that queries return.

domain holds types and logic shared across the application: entity structs, error enums, validation rules, and any business logic that is not tied to a specific framework. This crate should have minimal dependencies. It does not depend on Axum, SQLx, or any infrastructure crate.

config parses environment variables into typed configuration structs at startup. It depends on serde and dotenvy, not on framework crates.

jobs contains Restate service and workflow handlers for durable background work. It depends on domain and db, but not on web. Jobs are triggered by HTTP handlers but execute independently.

xtask is the build automation crate. The Development Environment section covers its setup in detail.

Root Cargo.toml

The workspace root defines shared settings, dependency versions, and lint configuration for all members.

[workspace]
members = ["crates/*"]
resolver = "3"

[workspace.package]
edition = "2024"
version = "0.1.0"
rust-version = "1.85"

[workspace.dependencies]
# Async runtime
tokio = { version = "1", features = ["rt-multi-thread", "macros", "signal"] }

# Web framework
axum = "0.8"
tower = "0.5"
tower-http = { version = "0.6", features = ["trace", "compression-gzip"] }
tower-sessions = "0.14"

# HTML templating
maud = { version = "0.26", features = ["axum"] }

# Serialisation
serde = { version = "1", features = ["derive"] }
serde_json = "1"

# Database
sqlx = { version = "0.8", default-features = false, features = [
  "runtime-tokio", "postgres", "macros", "migrate",
] }

# Error handling
thiserror = "2"
anyhow = "1"

# Observability
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }

# Config
dotenvy = "0.15"

# HTTP client
reqwest = { version = "0.12", default-features = false, features = [
  "rustls-tls", "json",
] }

# Internal crates
app-server = { path = "crates/server" }
app-web = { path = "crates/web" }
app-db = { path = "crates/db" }
app-domain = { path = "crates/domain" }
app-config = { path = "crates/config" }
app-jobs = { path = "crates/jobs" }

[workspace.lints.rust]
unsafe_code = "forbid"
rust_2018_idioms = { level = "warn", priority = -1 }
unreachable_pub = "warn"

[workspace.lints.clippy]
enum_glob_use = "warn"
implicit_clone = "warn"
dbg_macro = "warn"

Workspace dependencies

The [workspace.dependencies] table defines dependency versions once. Member crates reference them with workspace = true:

# crates/web/Cargo.toml
[package]
name = "app-web"
edition.workspace = true
version.workspace = true

[lints]
workspace = true

[dependencies]
axum.workspace = true
maud.workspace = true
tower.workspace = true
tower-http.workspace = true
tower-sessions.workspace = true
serde.workspace = true
tracing.workspace = true
app-domain.workspace = true
app-db.workspace = true

Members can add features on top of the workspace definition. Features are additive: you can add but not remove them.

# crates/db/Cargo.toml
[dependencies]
sqlx = { workspace = true, features = ["uuid", "time"] }

Workspace lints

The [workspace.lints] table shares lint configuration across all crates. Each member opts in with [lints] workspace = true. The example above forbids unsafe code project-wide and enables several useful Clippy lints.

Workspace package metadata

[workspace.package] avoids repeating edition, version, and rust-version in every crate. Members inherit with edition.workspace = true, and so on. Only unpublished, internal crates should share a version this way. If you publish crates to crates.io, give them independent version numbers.

The dependency graph

The crate dependency graph for this layout looks like this:

server ──→ web ──→ domain
  │         │
  │         └────→ db ──→ domain
  │
  ├──────→ db
  ├──────→ config
  ├──────→ jobs ──→ domain
  │         │
  │         └────→ db
  └──────→ domain

domain sits at the bottom with no framework dependencies. db depends on domain and sqlx. web depends on domain, db, and axum. server depends on everything and wires it all together.

This graph is enforced by Cargo.toml files. If someone adds an axum import to the domain crate, the compiler rejects it. No linting rules or code review discipline required.

The domain crate

Keep domain lean. It holds types that multiple crates need: entity structs, ID types, error enums, validation logic. It depends on serde for serialisation and thiserror for error types. It does not depend on Axum, SQLx, Maud, or Tokio.

# crates/domain/Cargo.toml
[package]
name = "app-domain"
edition.workspace = true
version.workspace = true

[lints]
workspace = true

[dependencies]
serde.workspace = true
thiserror.workspace = true

A typical domain crate:

use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Contact {
    pub id: i64,
    pub name: String,
    pub email: String,
}

#[derive(Debug, Deserialize)]
pub struct CreateContact {
    pub name: String,
    pub email: String,
}

#[derive(Debug, thiserror::Error)]
pub enum ContactError {
    #[error("contact not found")]
    NotFound,
    #[error("email already exists")]
    DuplicateEmail,
}

Other crates import these types. The db crate maps SQL rows to Contact. The web crate uses CreateContact to deserialise form submissions. Neither crate defines these types itself, so there is a single source of truth.

The server crate

The binary crate has one job: connect everything and start listening.

# crates/server/Cargo.toml
[package]
name = "app-server"
edition.workspace = true
version.workspace = true

[lints]
workspace = true

[dependencies]
tokio.workspace = true
axum.workspace = true
anyhow.workspace = true
tracing.workspace = true
tracing-subscriber.workspace = true
app-web.workspace = true
app-db.workspace = true
app-config.workspace = true

And in crates/server/src/main.rs:

use anyhow::Result;
use tracing_subscriber::EnvFilter;

#[tokio::main]
async fn main() -> Result<()> {
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    let config = app_config::load()?;
    let db = app_db::connect(&config.database_url).await?;
    let app = app_web::router(db);

    let listener = tokio::net::TcpListener::bind(&config.listen_addr).await?;
    tracing::info!("listening on {}", config.listen_addr);
    axum::serve(listener, app).await?;

    Ok(())
}

This is deliberately thin. Route definitions, middleware, and handler logic live in the web crate. The server crate only constructs dependencies and passes them in.

default-members

Set default-members in the workspace root to control which crates cargo build and cargo run operate on by default:

[workspace]
members = ["crates/*"]
default-members = ["crates/server"]
resolver = "3"

With this setting, cargo run starts the server without needing -p app-server. The xtask crate and other library crates only compile when explicitly requested or pulled in as dependencies.

When to split into more crates

Start with fewer crates than you think you need. A single lib crate alongside server and xtask is a reasonable starting point. Split when you have a concrete reason:

  • Compile times. A change in one module triggers recompilation of unrelated code. Splitting into separate crates isolates the rebuild radius.
  • Dependency sprawl. A module pulls in heavy dependencies that most of the codebase does not need. Moving it to its own crate keeps those dependencies contained.
  • Independent deployment. A Restate worker or CLI tool needs to share domain types with the web server but should not pull in Axum.
  • Team boundaries. Different people or teams own different parts of the system and want clear interfaces between them.

Do not split pre-emptively. Each new crate adds a Cargo.toml to maintain and a boundary to design. Split when the cost of staying in one crate (slow builds, tangled dependencies) exceeds the cost of the boundary.

Feature unification

Cargo unifies features of shared dependencies across all workspace members. If the web crate enables sqlx/postgres and the jobs crate enables sqlx/uuid, both features are active everywhere. Features are additive, so this usually works fine. It becomes a problem only if two crates need genuinely incompatible configurations of the same dependency, which is rare in practice.
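As a concrete illustration (crate names as in the layout above, feature choices illustrative), suppose two members request different sqlx features on top of the workspace definition:

```toml
# crates/web/Cargo.toml
[dependencies]
sqlx = { workspace = true, features = ["postgres"] }

# crates/jobs/Cargo.toml
[dependencies]
sqlx = { workspace = true, features = ["uuid"] }
```

Cargo compiles sqlx once, with the union of everything requested (postgres and uuid here, plus whatever [workspace.dependencies] already enables), and both crates link against that single build.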

Resolver 3 (the default with edition = "2024") already avoids unifying features across different target platforms, which eliminates the most common source of unexpected feature activation.

Architecture

Why Hypermedia-Driven Architecture

This section makes the case for hypermedia-driven architecture (HDA) as the default approach to building web applications. The arguments here are opinionated but grounded in the original definition of REST, the economics of framework migration, and the structural properties of HTML as a transfer format.

The technical implementation follows in later sections. This one answers the prior question: why build this way at all?

REST was always about hypermedia

Roy Fielding’s 2000 doctoral dissertation, Architectural Styles and the Design of Network-based Software Architectures, defined REST as “an architectural style for distributed hypermedia systems.” The word hypermedia is not incidental. It is the subject of the entire architecture.

Chapter 5 of the dissertation specifies four interface constraints for REST. The fourth is HATEOAS: Hypermedia As The Engine of Application State. Server responses carry both data and navigational controls. The client does not hardcode knowledge of available actions. It discovers them through hypermedia links and forms embedded in the response. HTML is the canonical format that satisfies this constraint: an HTML page contains both content and the controls (links, forms, buttons) that drive state transitions.

JSON carries no native hypermedia controls. A JSON response like {"name": "Alice", "email": "alice@example.com"} contains data but no affordances. The client must know in advance what URLs to call, what HTTP methods to use, and what payloads to send. This requires out-of-band documentation and tight client-server coupling, which is precisely what REST’s uniform interface constraint was designed to prevent.

By 2008, the drift had become bad enough that Fielding wrote a blog post titled “REST APIs must be hypertext-driven”:

I am getting frustrated by the number of people calling any HTTP-based interface a REST API. […] If the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.

The industry ignored him. The Richardson Maturity Model, popularised by Martin Fowler, formalised REST into “levels.” Most developers stopped at Level 2 (HTTP verbs and resource URLs) and never implemented Level 3 (hypermedia controls). When JSON replaced XML as the dominant transfer format, the “REST” label stuck even though the defining constraint had been dropped. What the industry calls a “RESTful API” is, by Fielding’s definition, RPC with nice URLs.

This matters because the original REST architecture was designed to solve real problems: evolvability, loose coupling, and independent deployment of client and server. Those problems did not go away when the industry adopted JSON APIs. The solutions were simply abandoned.

The HDA architecture defined

A hypermedia-driven application (HDA) returns HTML from the server, not JSON. The term comes from Carson Gross, creator of htmx, and is defined in detail in the book Hypermedia Systems and on the htmx website.

The architecture has two constraints:

  1. Hypermedia communication. The server responds to HTTP requests with HTML. The client renders it. There is no JSON serialisation layer, no client-side data model, and no mapping between API responses and UI state. The HTML is the interface.

  2. Declarative interactivity. HTML-embedded attributes (such as htmx’s hx-get, hx-post, hx-swap) drive dynamic behaviour. The developer declares what should happen in the markup rather than writing imperative JavaScript to manage requests, state, and DOM updates.

The key mechanism is partial page replacement. When the user interacts with an element, the browser sends an HTTP request and receives an HTML fragment. That fragment replaces a targeted region of the DOM. The server controls what the user sees next, because the server produces the HTML. The client is a rendering engine, not an application runtime.

This eliminates an entire layer of software. In a typical SPA, the server serialises data to JSON, the client deserialises it, maps it into a state store, derives a virtual DOM from that state, and diffs it against the real DOM. In HDA, the server renders HTML and the browser displays it. The serialisation, deserialisation, state management, and virtual DOM diffing layers do not exist because they are not needed.

An HDA is not a traditional multi-page application with full page reloads on every click. The partial replacement model provides the same responsiveness that SPAs deliver, but the interactivity logic lives on the server rather than in client-side JavaScript.

The coupling advantage

Each endpoint in an HDA produces self-contained HTML. A handler for GET /contacts/42/edit returns an edit form. That form contains the data, the input fields, the validation rules (via HTML5 attributes), and the submit action (via the form’s action attribute or htmx attributes). Everything the client needs is in the response. There is no shared state to coordinate with.

SPA architectures centralise client-side state. React applications commonly use a global state store (Redux, Zustand, Jotai, or React Context) to hold data that multiple components need. This creates a coupling pattern: when you change the shape of data in the store, every component that reads or writes that data must be updated.

Redux’s single-store design has been criticised for exhibiting the God Object anti-pattern, where a single entity becomes tightly coupled to much of the codebase. Changes intended to benefit one feature create ripple effects in unrelated features. The React-Redux community documented this problem: hooks encourage tight coupling between Redux state shape and component internals, reducing testability and violating the single responsibility principle.

The single-spa project (a framework for combining multiple SPAs) explicitly warns against sharing Redux stores across micro-frontends: “if you find yourself needing constant sharing of UI state, your microfrontends are likely more coupled than they should be.” This is an acknowledgement from within the SPA ecosystem that centralised client state creates coupling problems.

In HDA, the coupling boundary is the HTTP response. Each response is stateless and self-contained. The server can change the HTML structure of one endpoint without affecting any other endpoint, because there is no shared client-side state that binds them together. Two developers can modify two different pages concurrently with zero coordination. This property is structural, not a matter of discipline. It falls out automatically from the architecture.

The framework migration tax

JavaScript framework churn imposes a recurring cost on every project built with a client-side framework.

AngularJS to Angular 2+. React class components to hooks to server components. Vue 2 to Vue 3. Each major transition changes fundamental patterns: how components are defined, how state is managed, how side effects are handled. Code written against the old patterns must be rewritten, not just updated.

A peer-reviewed study by Ferreira, Borges, and Valente (On the (Un-)Adoption of JavaScript Front-end Frameworks, published in Software: Practice and Experience, 2021) examined 12 open-source projects that performed framework migrations. The findings:

  • The time spent performing the migration was greater than or equal to the time spent using the old framework in all 12 projects.
  • In 5 of the 12 projects, the time spent migrating exceeded the time spent using both the old and new frameworks combined.
  • Migration durations ranged from 7 days to 966 days.

AngularJS reached end-of-life on 31 December 2021. More than three years later, BuiltWith reports over one million live websites still running AngularJS. WebTechSurvey puts the figure above 500,000. The exact count varies by measurement method, but the order of magnitude is clear: hundreds of thousands of applications remain on a deprecated, unpatched framework because migrating to Angular 2+ requires a near-complete rewrite of the client-side codebase.

This is not a one-time problem. React’s transition from class components to hooks changed every component pattern in the ecosystem. The ongoing shift toward React Server Components is changing the execution model itself, blurring the boundary between server and client in ways that require rethinking application architecture. Each transition resets knowledge, breaks libraries, and forces rewrites.

The migration tax is a structural property of the SPA model: when interactivity logic lives in client-side JavaScript tied to a specific framework’s component model, that logic must be rewritten whenever the framework’s model changes. HDA does not eliminate the need to stay current with server-side tools, but server-side framework transitions (switching from one Rust web framework to another, for example) affect route definitions and middleware, not the fundamental rendering model. The HTML your server produces is the same regardless of which framework generates it.

The backward-compatibility guarantee

No HTML element has ever been removed from the specification in a way that breaks rendering.

The WHATWG HTML Standard, which governs HTML as a living specification, lists obsolete elements including <marquee>, <center>, <font>, <frame>, and <acronym>. Authors are told not to use them. But the specification still mandates that browsers render them. <marquee> has a complete interface specification (HTMLMarqueeElement) with defined behaviour. <acronym> must be treated as equivalent to <abbr> for rendering purposes. These elements work in every modern browser because the spec requires it.

This is not accidental. It is policy. The W3C HTML Design Principles document establishes a priority of constituencies: “In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.” Backward compatibility flows directly from this principle: breaking existing content harms users, so the specification does not break existing content.

The WHATWG’s founding position reinforces this:

Technologies need to be backwards compatible, that specifications and implementations need to match even if this means changing the specification rather than the implementations.

An application built on HTML, CSS, and HTTP in 2026 can reasonably expect its platform foundation to remain stable for decades. The same HTML that rendered in Netscape Navigator still renders in Chrome today. No JavaScript framework has provided, or can provide, a comparable guarantee. React is 12 years old and has undergone three major paradigm shifts. The <form> element is 31 years old and works exactly as it did in 1995, with additional capabilities layered on top.

This is the core durability argument for HDA. Your investment in HTML templates, HTTP handlers, and declarative interactivity attributes is protected by the strongest backward-compatibility commitment in software: the web platform’s refusal to break existing content.

No separate API layer

In HDA, the HTML response is the API. There is no JSON layer to design, version, document, or maintain.

A traditional SPA architecture requires two applications: a client-side app that renders UI, and a server-side API that produces JSON. These are developed, tested, deployed, and versioned as separate artefacts with a contract between them. When the contract changes, both sides must change in coordination.

HDA collapses this into one application. An Axum handler receives a request, queries the database, renders HTML with Maud, and returns it. The browser displays the HTML. There is one codebase, one deployment, one thing to reason about.
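
A minimal sketch of that flow, using a hypothetical articles table (the type, route, and field names are illustrative, not from this book's running example):

```rust
use axum::{extract::State, routing::get, Router};
use maud::{html, Markup};
use sqlx::PgPool;

// Illustrative record type; names are assumptions for the sketch.
struct Article {
    title: String,
}

// The whole request cycle in one place: query the database, render with
// Maud, return HTML. The response itself is the API; no JSON in between.
async fn articles(State(pool): State<PgPool>) -> Markup {
    let articles = sqlx::query_as!(Article, "SELECT title FROM articles")
        .fetch_all(&pool)
        .await
        .expect("query failed");

    html! {
        ul {
            @for a in &articles {
                li { (a.title) }
            }
        }
    }
}

fn app(pool: PgPool) -> Router {
    Router::new().route("/articles", get(articles)).with_state(pool)
}
```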

This has practical consequences:

  • No API versioning. The server controls the HTML. If the data model changes, the server updates the template. There is no external consumer relying on a JSON schema.
  • No serialisation code. No serde annotations on response types, no JSON schema validation on the client, no mapping between API responses and component props.
  • No CORS configuration. The browser requests HTML from the same origin that served the page. Cross-origin issues do not arise.
  • Faster feature delivery. Adding a field to a page means adding it to the query and the template. In an SPA, it means updating the API response, the TypeScript types, the state store, and the component that renders it.

The reduction in moving parts is not incremental. It is categorical. An entire class of bugs (schema mismatches, stale client caches, API versioning conflicts) cannot occur because the architecture does not have the layers where those bugs live.

When you do need a separate API

HDA does not mean you never write JSON endpoints. It means JSON is not the default, and HTML handles the majority of your application’s interface.

There are legitimate cases where a JSON API is the right tool:

  • Third-party integrations. External services that call your application (payment webhooks, OAuth callbacks, partner integrations) communicate in JSON. These are not UI interactions; they are machine-to-machine interfaces.
  • Mobile applications. If you ship a native mobile app alongside your web application, the mobile client needs a data API. HDA applies to the web interface; the mobile interface has different constraints.
  • Public APIs. If your product offers an API as a feature (for customers to build integrations), that API will be JSON and needs the usual API design treatment: versioning, documentation, authentication, rate limiting.
  • Islands of rich interactivity. Some UI components genuinely need client-side state: a drag-and-drop kanban board, a collaborative text editor, a real-time data visualisation. These components can fetch JSON from dedicated endpoints while the rest of the application uses HDA. This is the islands pattern, covered in When to Use HDA.

The principle is straightforward: use HTML for the interface, JSON for integrations. Most web applications are overwhelmingly interface. The JSON endpoints, when needed, are a small surface area alongside the HDA core, not a parallel architecture that doubles the codebase.

The Web Platform Has Caught Up

Between 2022 and 2026, the web platform crossed a capability threshold. Native CSS and HTML features now provide the functionality that historically justified adopting a CSS preprocessor, a utility framework, a CSS-in-JS library, or a JavaScript UI component system. No single feature is transformative. The cumulative effect is that the problems requiring these tools in 2020 can be solved with the platform itself in 2026.

This section catalogues what changed and why it matters for the architectural choice described in Why Hypermedia-Driven Architecture. The HDA model depends on the platform being capable enough that server-rendered HTML, plain CSS, and minimal JavaScript can deliver a production-quality experience. That dependency is now met.

The Interop Project

Cross-browser inconsistency was a primary driver of framework and preprocessor adoption. Developers reached for jQuery, Sass, Autoprefixer, and eventually React because writing to the platform directly meant writing to four different platforms with different bugs. The Interop Project has largely eliminated this rationale.

Interop is a joint initiative of Apple, Google, Igalia, Microsoft, and Mozilla, running annually since 2021 (initially as “Compat 2021”). Each year, the participants agree on a set of web platform features, write shared test suites via the Web Platform Tests project, and publicly track each browser engine’s pass rate. The Interop dashboard reports a single “interop score”: the percentage of tests that pass in all browsers simultaneously.

The scores tell the story:

Year           Starting interop score   End-of-year (stable)   End-of-year (experimental)
Compat 2021    64-69%                   >90%                   –
Interop 2022   ~49%                     83%                    ~97%
Interop 2023   ~48%                     75%                    89%
Interop 2024   46%                      95%                    99%
Interop 2025   29%                      97%                    99%

The low starting scores each year reflect the selection of new focus areas, not regression. Each iteration targets harder, more recent features. That Interop 2025 started at 29% and finished at 97% in stable releases means the browser vendors are converging on new features within a single calendar year.

WebKit’s review of Interop 2025 described the result directly: “Every browser engine invested heavily, and the lines converge at the top. That convergence is what makes the Interop project so valuable, the shared progress that means you can write code once and trust that it works everywhere.”

Interop 2026 launched in February 2026 with 20 focus areas including cross-document view transitions, scroll-driven animation timelines, and continued anchor positioning alignment. The initiative is now in its fifth consecutive year with no signs of winding down.

The practical consequence: if you write CSS and HTML to the current specifications, it works in Chrome, Firefox, Safari, and Edge. The “works in my browser but not yours” problem that drove an entire generation of tooling adoption is, for the features that matter most, solved.

CSS features that replace frameworks

Eight CSS features, all shipping between 2022 and 2026, collectively address the problems that justified Sass, Less, PostCSS, Tailwind, CSS-in-JS, and JavaScript positioning libraries.

Cascade Layers (@layer)

Cascade Layers provide explicit control over cascade priority, independent of selector specificity or source order. All major browsers shipped support within five weeks of each other in early 2022. @layer reached Baseline Widely Available in September 2024.

@layer reset, base, components, utilities;

@layer reset {
  * { margin: 0; box-sizing: border-box; }
}

@layer components {
  .card { padding: 1rem; border: 1px solid #ddd; }
}

@layer utilities {
  .hidden { display: none; }
}

Styles in later-declared layers always win over earlier layers, regardless of specificity. This replaces the specificity arms race that led to !important abuse, strict BEM naming conventions, and CSS-in-JS libraries whose primary value proposition was specificity isolation. Styles outside any @layer have the highest priority, which allows third-party CSS to be layered below application styles without modification.
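
For example, a third-party stylesheet can be imported directly into a low-priority layer (file name illustrative), so application styles in later layers override it without specificity tricks:

```css
/* Establish layer order up front: anything in `thirdparty` loses to
   later layers and to unlayered styles. */
@layer thirdparty, base, components;

/* Import vendor CSS into the lowest layer, unmodified. */
@import url("vendor/datepicker.css") layer(thirdparty);

@layer components {
  /* Wins over any selector in datepicker.css, regardless of specificity. */
  .datepicker { font-family: inherit; }
}
```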

CSS Nesting

CSS Nesting reached Baseline Newly Available in December 2023, when Chrome 120 and Safari 17.2 shipped the relaxed syntax (Firefox 117 had shipped in August 2023).

.card {
  padding: 1rem;

  h2 {
    font-size: 1.25rem;
  }

  &:hover {
    box-shadow: 0 2px 8px rgb(0 0 0 / 0.1);
  }

  @media (width >= 768px) {
    padding: 2rem;
  }
}

This is the feature that eliminated the most common reason for using Sass or Less. The relaxed nesting syntax (no & required before element selectors) matches what preprocessor users expect. Media queries and other at-rules can nest directly inside selectors.

Container Queries

Container Queries reached Baseline Widely Available in August 2025; Firefox 110 had been the last engine to ship, completing cross-browser support in February 2023.

.card-container {
  container-type: inline-size;
}

@container (inline-size > 400px) {
  .card {
    display: grid;
    grid-template-columns: 200px 1fr;
  }
}

Media queries respond to the viewport. Container queries respond to the size of the containing element. This makes components genuinely reusable: a card component that switches from stacked to horizontal layout based on its container width, not the window width. Previously, achieving this required JavaScript ResizeObserver workarounds or abandoning the idea entirely.

Size container queries are the Baseline part. Style container queries (@container style(...)) remain Chromium-only as of early 2026.

The :has() selector

:has() reached Baseline Newly Available in December 2023, when Firefox 121 shipped (Safari had led in March 2022, Chrome followed in August 2022).

/* Style a card differently when it contains an image */
.card:has(img) {
  grid-template-rows: 200px 1fr;
}

/* Style a form group when its input is invalid */
.form-group:has(:invalid) {
  border-color: var(--color-error);
}

/* Style a section that contains exactly one child element */
section:has(> :only-child) {
  padding: 0;
}

:has() is the long-requested “parent selector,” though it is more general than that name implies. It selects an element based on its descendants, siblings, or any relational condition expressible as a selector. Before :has(), selecting a parent based on its children required JavaScript DOM traversal. Entire categories of conditional styling that needed classList.toggle() or framework-level reactivity can now be expressed in CSS alone.

@scope

@scope reached Baseline Newly Available in December 2025, when Firefox 146 shipped (Chrome 118 had led in October 2023, Safari 17.4 followed in March 2024).

@scope (.card) to (.card-footer) {
  p { margin-bottom: 0.5rem; }
  a { color: var(--card-link-color); }
}

@scope provides proximity-based style scoping with both an upper bound (the scope root) and an optional lower bound (the scope limit), creating a “donut scope” that prevents styles from leaking into nested sub-components. This addresses the problem that CSS Modules, BEM, and Shadow DOM each solved partially: keeping component styles from colliding. Unlike Shadow DOM, @scope does not create hard encapsulation boundaries, so styles remain inspectable and overridable when needed.

The cumulative effect

No single feature here replaces a framework. The replacement is structural.

In 2020, a developer building a component library needed a preprocessor for nesting and variables (Sass), a naming convention or tooling for specificity management (BEM or CSS Modules), JavaScript for responsive component behaviour (ResizeObserver hacks), JavaScript for parent-based conditional styling (:has() did not yet exist), and either strict discipline or a CSS-in-JS library to prevent style collisions.

In 2026, native CSS handles all of this. Nesting and custom properties replace the preprocessor. @layer replaces specificity management tooling. Container queries replace JavaScript resize detection. :has() replaces JavaScript conditional styling. @scope replaces CSS-in-JS scoping. The developer writes CSS, and it works across browsers.

HTML features that replace JavaScript UI primitives

The historical justification for React’s component model arose partly because HTML lacked native primitives for modals, tooltips, menus, and rich selects. Three of those gaps are now closed at Baseline. Two more are closing.

The <dialog> element

<dialog> reached Baseline Widely Available in approximately September 2024. Firefox 98 and Safari 15.4 completed cross-browser support in March 2022.

<dialog id="confirm-dialog">
  <h2>Delete this item?</h2>
  <p>This action cannot be undone.</p>
  <form method="dialog">
    <button value="cancel">Cancel</button>
    <button value="confirm">Delete</button>
  </form>
</dialog>

A modal <dialog> (opened via showModal()) provides focus trapping, top-layer rendering, backdrop styling via ::backdrop, the Escape key to close, and <form method="dialog"> for declarative close actions. These are the behaviours that every custom modal library (Bootstrap Modal, React Modal, a11y-dialog) reimplements in JavaScript. The native element provides them with correct accessibility semantics, including the dialog ARIA role and proper focus restoration on close, out of the box.

The Popover API

The Popover API reached Baseline Newly Available in January 2025 (Safari 18.3 resolved a light-dismiss bug on iOS that had delayed the designation).

<button popovertarget="menu">Options</button>
<div id="menu" popover>
  <a href="/settings">Settings</a>
  <a href="/profile">Profile</a>
  <a href="/logout">Log out</a>
</div>

The popover attribute gives any element top-layer rendering, light dismiss (click outside or press Escape to close), and automatic accessibility wiring. popover="auto" provides light dismiss; popover="manual" requires explicit close. This replaces Tippy.js, Bootstrap Popovers, and the custom JavaScript that every dropdown menu previously required.

The popover="hint" variant (for hover-triggered tooltips) is an Interop 2026 focus area and not yet Baseline.

Invoker Commands

Invoker Commands (command and commandfor attributes) reached Baseline Newly Available in early 2026, with Safari 26.2 completing cross-browser support after Chrome 135 (April 2025) and Firefox 144.

<button commandfor="my-dialog" command="show-modal">Open</button>
<dialog id="my-dialog">
  <p>Dialog content</p>
  <button commandfor="my-dialog" command="close">Close</button>
</dialog>

Invoker Commands connect a button to a target element declaratively: commandfor names the target, command specifies the action. Built-in commands include show-modal, close, and request-close for dialogs, and toggle-popover, show-popover, hide-popover for popovers. No JavaScript required for these interactions.

Combined with <dialog> and the Popover API, Invoker Commands eliminate the last bit of JavaScript glue that modals and popovers previously required. A dialog can be opened, populated, and closed entirely through HTML attributes and server-rendered content, which is exactly what HDA needs.

Gaps still closing

Two features listed in the outline remain Chromium-only as of February 2026:

Customizable <select> (appearance: base-select). Chrome 134+ and Edge 134+ ship full CSS styling of <select> elements, including custom option rendering via exposed pseudo-elements (::picker(select), <selectedcontent>). Firefox and Safari are implementing but have not shipped to stable. This feature replaces React Select, Select2, and the entire category of custom dropdown libraries that exist because the native <select> has historically resisted styling. The opt-in (appearance: base-select) means browsers without support simply show the default <select>, making it safe to adopt as progressive enhancement.

Speculation Rules API. Chrome 121+ supports declarative prefetch and prerender rules via <script type="speculationrules">. WordPress and Shopify have deployed it at scale. Firefox’s standards position is positive for prefetch but neutral on prerender; Safari has published no position. Non-supporting browsers ignore the <script> block entirely, so it can be deployed today without harm. For HDA applications, speculation rules offer the multi-page navigation speed that SPA prefetching provides, without any client-side routing framework.
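
A minimal speculation rules block looks like this (the URL pattern and selector are illustrative); browsers without support ignore the script entirely:

```html
<script type="speculationrules">
{
  "prefetch": [
    { "where": { "href_matches": "/contacts/*" }, "eagerness": "moderate" }
  ],
  "prerender": [
    { "where": { "selector_matches": ".prerender-link" }, "eagerness": "conservative" }
  ]
}
</script>
```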

Both features work as progressive enhancement: they improve the experience in supporting browsers without breaking others.

Progressive enhancement as the architectural default

The features above share a property: they degrade gracefully. A <dialog> without JavaScript still renders its content. A popover without support becomes a static element. A <select> without appearance: base-select falls back to the native control. This is not accidental. The web platform is designed around progressive enhancement.

Native HTML elements carry built-in ARIA semantics, focus management, and keyboard handling. A <dialog> opened with showModal() traps focus, responds to Escape, announces itself to screen readers, and restores focus to the triggering element on close. A <button> with commandfor and command attributes communicates its relationship to the target element through the accessibility tree. These behaviours are defined by the specification and implemented by the browser.

SPA component libraries must reimplement all of this. A React modal component needs explicit focus-trap logic, an Escape key handler, ARIA attributes, a portal to render in the correct DOM position, and focus restoration on unmount. Libraries like Radix UI and Headless UI exist specifically because implementing accessible interactive components in React is difficult. The native elements provide the same behaviours correctly by default.

In HDA, progressive enhancement is the structural default. The baseline is server-rendered HTML with standard links and forms. htmx attributes enhance but are not required; a form with hx-post and hx-swap still submits normally via the browser’s native form handling if htmx fails to load. In SPA frameworks, progressive enhancement is opt-in and, under deadline pressure, frequently abandoned.
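
A form written in this style carries both behaviours at once (endpoint and target names illustrative): htmx intercepts the submit when loaded, and the standard action/method pair handles it when not:

```html
<form action="/contacts" method="post"
      hx-post="/contacts" hx-target="#contact-list" hx-swap="beforeend">
  <input type="text" name="name" required>
  <button type="submit">Add contact</button>
</form>
```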

No-build JavaScript

ES Modules (<script type="module">) have been supported in all major browsers since 2018 and are Baseline Widely Available. Import Maps reached Baseline Widely Available in approximately September 2025, with Safari 16.4 completing cross-browser support in March 2023.

Together, they enable npm-style bare specifier imports in the browser without npm, Node.js, or a bundler:

<script type="importmap">
{
  "imports": {
    "htmx": "/static/js/htmx.min.js",
    "alpinejs": "/static/js/alpine.min.js"
  }
}
</script>
<script type="module">
  import 'htmx';
</script>

Import maps resolve bare specifiers (import 'htmx') to URLs, the same job that webpack, Rollup, and esbuild perform during a build step. With import maps, the browser does this resolution at runtime. No bundler needed.

The trade-offs are real. There is no tree-shaking: unused code in imported modules ships to the client. No TypeScript compilation: types are stripped only if a build step runs. No code splitting: the browser loads entire modules rather than optimised chunks. For applications with large client-side dependency graphs, these costs matter.

For HDA applications, they do not. The client-side dependency count is typically small: htmx (14 KB gzipped), perhaps a date formatting library, perhaps a small charting library for a dashboard page. The total client-side JavaScript in an HDA application is measured in tens of kilobytes, not megabytes. HTTP/2 and HTTP/3 multiplexing further reduce the cost of serving a handful of small modules individually.

Some practitioners retain a build step for minification, but this is an optional optimisation, not an architectural requirement. The htmx project itself argues explicitly against build steps, distributing as a single file that can be included with a <script> tag. The no-build approach is not a compromise for HDA. It is the natural fit.

The supply chain security argument

The architectural choice to avoid npm is not only a simplicity argument. It is a security argument, grounded in the structural properties of the npm dependency graph and the empirical record of supply chain attacks against it.

The dependency graph problem

Zimmermann, Staicu, Tenny, and Pradel (Small World with High Risks: A Study of Security Threats in the npm Ecosystem, USENIX Security 2019) analysed npm’s dependency graph as of April 2018 and found small-world network properties: just 20 maintainer accounts could reach more than half of the entire ecosystem through transitive dependencies. Installing an average npm package implicitly trusts approximately 80 other packages and 39 maintainers. 391 highly influential maintainers each affected more than 10,000 packages.

A comparative study by Decan, Mens, and Grosjean (An Empirical Comparison of Dependency Network Evolution in Seven Software Packaging Ecosystems, Empirical Software Engineering, 2019) found npm had the highest transitive dependency counts among seven ecosystems. A more recent study by Biernat et al. (How Deep Does Your Dependency Tree Go?, December 2025) across ten ecosystems found that Maven now shows the highest mean amplification ratio (24.70x transitive-to-direct), with npm at 4.32x. npm is not the worst offender across all ecosystems, but it remains structurally exposed: 12% of npm projects exceed a 10x amplification ratio, and the absolute number of affected projects is enormous given npm’s scale.

The empirical record

The structural risk is not theoretical. Supply chain attacks against npm are recurring and escalating in sophistication.

event-stream (November 2018). A new maintainer, given publish access through social engineering, added a dependency on flatmap-stream containing encrypted malicious code targeting the Copay Bitcoin wallet. The package had approximately 2 million weekly downloads. The malicious code was live for over two months before a computer science student noticed it.

polyfill.io (June 2024). The polyfill.io CDN domain was acquired by a new owner in February 2024. Four months later, the CDN began serving modified JavaScript that redirected mobile users to scam sites. Over 380,000 websites were embedding scripts from the compromised domain. Andrew Betts, the original creator, had warned users when the sale occurred. Most did not act.

chalk/debug (September 2025). A phishing attack compromised the npm credentials of a maintainer of chalk, debug, and 16 other packages. The malicious versions contained code to hijack cryptocurrency transactions in browsers. The 18 affected packages accounted for over 2.6 billion combined weekly downloads. The malicious versions were live for approximately two hours.

These incidents share a structural cause: the npm ecosystem’s deep transitive dependency graphs mean that compromising a single package or maintainer account can reach thousands or millions of downstream projects. The risk scales with the number of dependencies.

The HDA alternative

An HDA application with vendored htmx eliminates this entire attack surface. htmx is 14 KB minified and gzipped, has zero dependencies, and is distributed as a single JavaScript file. There is no npm install step, no node_modules directory, no transitive dependency graph, and no exposure to registry-level supply chain attacks.

This is not an incremental improvement. A typical React application created with Vite installs approximately 270 packages, and projects using Create React App (now deprecated) routinely exceeded 1,500. Each package is a node in the dependency graph that the Zimmermann findings describe. Reducing that graph from hundreds of nodes to zero is a categorical change in supply chain risk profile.

The comparison is worth stating plainly. One architecture requires you to trust hundreds of packages, maintained by strangers, with update cadences you do not control, delivered through a registry that is a recurring target of supply chain attacks. The other architecture requires you to trust one 14 KB file that you can vendor, audit, and pin.

What this means for HDA

The web platform’s capability expansion between 2022 and 2026 is the material condition that makes hypermedia-driven architecture practical for production applications. The HDA model depends on three platform properties:

  1. CSS is sufficient for production UI. Nesting, container queries, cascade layers, :has(), and @scope collectively provide the capabilities that previously required a preprocessor, a utility framework, or CSS-in-JS.

  2. HTML provides interactive primitives. <dialog>, the Popover API, and Invoker Commands cover modals, tooltips, dropdowns, and declarative element interaction without JavaScript component libraries.

  3. The browser is a capable module system. ES Modules and Import Maps enable dependency management without a build tool, and the small dependency footprint of HDA applications makes the trade-offs (no tree-shaking, no code splitting) irrelevant.

The Interop Project ensures these features work consistently across browsers. The backward-compatibility guarantee described in the previous section ensures they will continue to work. And the elimination of the npm dependency graph provides a supply chain security posture that no framework-dependent architecture can match.

The web platform was not always adequate for building rich applications without frameworks. It is now.

SPA vs HDA: A Side-by-Side Comparison

The previous sections argued for hypermedia-driven architecture on structural grounds: coupling, migration cost, backward compatibility. This section puts code next to code. What does the same feature actually look like when built both ways, and what do published migrations tell us about the difference at scale?

What real migrations show

The strongest published data comes from Contexte, a SaaS product for media professionals built with React. In 2022, developer David Guillot presented the results of porting the application from React to Django templates with htmx:

Metric                           React    Django + htmx   Change
Total lines of code              21,500   7,200           −67%
JavaScript dependencies          255      9               −96%
Web build time                   40s      5s              −88%
First load time-to-interactive   2–6s     1–2s            −50–60%
Memory usage                     ~75 MB   ~45 MB          −46%

The port took roughly two months to rewrite a codebase that had taken two years to build. The team eliminated the hard split between frontend and backend developers. User experience did not degrade.

Contexte is a media-oriented application, exactly the kind of content-driven, read-heavy workload that hypermedia was designed for. The htmx project acknowledges this: “These sorts of numbers would not be expected for every web application.” A separate Next.js to htmx port showed a 17% reduction in written application code and over 50% reduction in total shipped code when accounting for dependency weight.

The pattern across these migrations is consistent. The JSON serialisation layer disappears. Client-side state management disappears. The build toolchain disappears. The dependency graph collapses. What remains is server-side code that got somewhat larger (Contexte’s Python grew from 500 to 1,200 lines) and a total codebase that got dramatically smaller.

The same feature, two architectures

Consider a searchable contact list with inline editing and deletion. The specification is identical for both implementations:

  • Display contacts from a database
  • Live search with debounce (300ms)
  • Click a row to get an editable form
  • Delete with confirmation
  • All changes persist to the server

This is a bread-and-butter CRUD feature. Most web applications are made of features like this one.

SPA: React + Vite + REST API

The SPA approach requires two applications. A React client handles rendering and state. A server exposes JSON endpoints. They communicate through a serialisation boundary.

Search with debounce needs a custom hook or a library:

function useDebounce(value, delay) {
  const [debounced, setDebounced] = useState(value);
  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delay);
    return () => clearTimeout(timer);
  }, [value, delay]);
  return debounced;
}

function ContactList() {
  const [query, setQuery] = useState('');
  const [contacts, setContacts] = useState([]);
  const [editingId, setEditingId] = useState(null);
  const debouncedQuery = useDebounce(query, 300);

  useEffect(() => {
    fetch(`/api/contacts?q=${encodeURIComponent(debouncedQuery)}`)
      .then(res => res.json())
      .then(setContacts);
  }, [debouncedQuery]);

  // ... render logic, edit mode toggling, delete handlers
}

The component manages three pieces of state: the search query, the contact list, and which row is being edited. Each state change triggers a re-render. The search query flows through a debounce hook, which triggers a fetch, which deserialises JSON, which updates state, which triggers another re-render. The edit mode is a client-side toggle: clicking a row sets editingId, and the component conditionally renders either a display row or a form row based on that state.

Inline editing requires the client to manage form state, submit JSON to the API, handle the response, and update the local contact list to reflect the change:

async function handleSave(contact) {
  const res = await fetch(`/api/contacts/${contact.id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(contact),
  });
  const updated = await res.json();
  setContacts(prev =>
    prev.map(c => c.id === updated.id ? updated : c)
  );
  setEditingId(null);
}

The server side mirrors this with JSON endpoints:

app.get('/api/contacts', async (req, res) => {
  const contacts = await db.query(
    'SELECT * FROM contacts WHERE name ILIKE $1',
    [`%${req.query.q}%`]
  );
  res.json(contacts);
});

app.put('/api/contacts/:id', async (req, res) => {
  const { name, email } = req.body;
  const updated = await db.query(
    'UPDATE contacts SET name = $1, email = $2 WHERE id = $3 RETURNING *',
    [name, email, req.params.id]
  );
  res.json(updated[0]);
});

Every interaction crosses the serialisation boundary twice: the server serialises to JSON, the client deserialises, processes the data, and re-renders. The CORS configuration, the Content-Type headers, the JSON.stringify and res.json() calls are all infrastructure that exists solely because the client and server are separate applications communicating through a data format that carries no hypermedia controls.

The project also needs a build toolchain. A fresh React + Vite project requires Node.js and npm, and installs Vite (which bundles esbuild for development and Rollup for production), a JSX transformer (Babel or SWC), and ESLint. The node_modules directory contains hundreds of transitive packages, each a separate project with its own release cycle.

HDA: Rust/Axum/Maud + htmx

The HDA approach is one application. The server handles everything: routing, data access, rendering, and interactivity declarations.

Search with debounce is a single HTML attribute:

fn search_input(query: &str) -> Markup {
    html! {
        input type="text" name="q" value=(query)
            hx-get="/contacts"
            hx-trigger="input changed delay:300ms"
            hx-target="#contact-list"
            placeholder="Search contacts...";
    }
}

No hook. No state. No effect. The hx-trigger attribute declares the debounce behaviour inline. When the user types, htmx waits 300ms after the last keystroke, sends a GET request, and swaps the response into #contact-list. The server returns an HTML fragment containing the filtered rows.

The search handler queries the database and renders HTML directly:

async fn list_contacts(
    State(pool): State<PgPool>,
    Query(params): Query<SearchParams>,
) -> Markup {
    let contacts = sqlx::query_as!(
        Contact,
        "SELECT * FROM contacts WHERE name ILIKE '%' || $1 || '%'",
        params.q.unwrap_or_default()
    )
    .fetch_all(&pool)
    .await
    .unwrap();

    html! {
        tbody#contact-list {
            @for contact in &contacts {
                (contact_row(contact))
            }
        }
    }
}

There is no JSON serialisation. The handler returns Markup, which Axum sends as an HTML response. The database result flows directly into the template. The query is checked at compile time by SQLx.

Inline editing is a template swap, not a state toggle. Clicking the edit button asks the server for an edit form:

fn contact_row(contact: &Contact) -> Markup {
    html! {
        tr {
            td { (contact.name) }
            td { (contact.email) }
            td {
                button hx-get={"/contacts/" (contact.id) "/edit"}
                    hx-target="closest tr"
                    hx-swap="outerHTML" { "Edit" }
            }
        }
    }
}

fn contact_edit_row(contact: &Contact) -> Markup {
    html! {
        tr {
            td {
                input type="text" name="name" value=(contact.name);
            }
            td {
                input type="text" name="email" value=(contact.email);
            }
            td {
                button hx-put={"/contacts/" (contact.id)}
                    hx-target="closest tr"
                    hx-swap="outerHTML"
                    hx-include="closest tr" { "Save" }
            }
        }
    }
}

The edit handler returns contact_edit_row, which replaces the display row. The save handler updates the database and returns contact_row, which replaces the edit form. No client-side state tracks which row is being edited. The server controls the UI by returning the appropriate HTML fragment.

The entire client-side dependency is htmx: a single 14 KB file (minified and gzipped) with zero dependencies. No build step. No node_modules. No package manager. Vendor the file and serve it from your Rust application.
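Vendoring can be a one-time pinned download. A sketch of fetching a release into the assets directory; the version number and path below are examples, not recommendations:

```shell
# Download a pinned htmx release into the directory the app serves.
# Pin whichever release you have actually tested against.
curl -L -o assets/htmx.min.js \
  https://unpkg.com/htmx.org@2.0.4/dist/htmx.min.js
```

Pinning an exact version keeps upgrades deliberate: htmx only changes when you change the file.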

Key observations

The comparison reveals differences that are structural, not incremental.

The JSON serialisation layer is eliminated entirely. In the SPA, every interaction crosses a serialisation boundary: JSON.stringify on the client, res.json() on the server, res.json() then setContacts() on the way back. In HDA, the handler returns HTML. The serialisation layer does not exist because the architecture does not need it.

Client-side state management disappears. The React component manages query, contacts, and editingId as state. Changes to any of these trigger re-renders. The htmx version has no client-side state at all. The server is the single source of truth, and every user action asks the server what to show next.

The dependency asymmetry is categorical. One side installs hundreds of packages through a package manager, maintained by hundreds of independent maintainers, each a potential supply chain risk. The other vendors a single file. The React runtime alone (~55 KB gzipped for React 19 + ReactDOM) is roughly four times the size of htmx (~14 KB gzipped), and that comparison ignores the entire build toolchain and its transitive dependencies.

The build toolchain is a complexity tax. The SPA needs Node.js, npm, Vite, esbuild, Rollup, and a JSX transformer to convert source files into something a browser can execute. The HDA serves HTML from a compiled Rust binary. The browser needs no build artefact because the server already produced what the browser understands natively: HTML.

What the SPA provides that HDA does not

The comparison above is favourable to HDA because this is a CRUD feature, and CRUD features are what HDA handles best. The SPA architecture has genuine strengths that should not be dismissed as irrelevant.

Component-level encapsulation with typed props. React components accept typed props and manage their own state in a well-defined scope. This composability model is genuinely powerful for building complex UIs. A component can be tested in isolation, rendered in a storybook, and reused across pages with different data. Maud functions provide similar composition, but the pattern is less formalised and has no equivalent to React’s developer tooling for component inspection.

React DevTools and the debugging experience. React DevTools lets you inspect the component tree, view props and state, trace re-renders, and profile performance. The htmx debugging experience is the browser’s network tab and the DOM inspector. For complex UIs, React’s tooling gives developers significantly better visibility into what the application is doing and why.

Client-side rendering avoids some server round-trips. When edit mode is a client-side state toggle, the UI updates instantly. No network request is needed to show a form. In HDA, clicking “Edit” sends a request to the server and waits for the response. On a fast connection, this difference is imperceptible. On a slow connection or for highly interactive interfaces, it matters.

The component library ecosystem is unmatched. Libraries like shadcn/ui and Radix provide production-quality, accessible UI primitives: dialogs, dropdowns, date pickers, data tables, command palettes. These components handle keyboard navigation, screen reader announcements, focus trapping, and edge cases that take significant effort to implement correctly. The HDA ecosystem has no equivalent at comparable maturity. If your application needs a complex, accessible data table with column sorting, filtering, pagination, and row selection, a React component library gives you that out of the box.

TypeScript provides end-to-end type checking. TypeScript catches errors across the entire client-side codebase: props, state, API response shapes, event handlers. In the SPA model, a type error in a component is caught before the code runs. Rust provides this same safety on the server side (and Maud catches malformed HTML at compile time), but the client-side interactivity in HDA is untyped HTML attributes. A typo in hx-target is a runtime error, not a compile-time error.

Hiring and ecosystem momentum. React dominates job postings and developer mindshare. Finding developers who know React is straightforward. Finding developers who know Rust, Axum, Maud, and htmx is harder. This is not a technical argument, but it is a practical one that affects team building and hiring timelines.

For most CRUD and content-driven features, these trade-offs favour HDA. The component ecosystem advantage matters most when building interfaces that require complex, accessible widgets. The typing advantage is real but narrower than it appears, because the majority of interactivity in an HDA is handled by a small set of well-tested htmx attributes rather than arbitrary JavaScript. The hiring argument is genuine and may be the strongest practical objection for many teams.

Rust-specific advantages

The contact list comparison used generic server code for the SPA side. The HDA side is Rust, and Rust brings specific advantages beyond the architectural ones.

Maud checks HTML at compile time. Most server-side template engines (Jinja2, ERB, Handlebars) parse templates at runtime. A typo in a variable name, a missing closing tag, or a type mismatch surfaces as a runtime error, sometimes only when that specific template path is hit in production. Maud’s html! macro is evaluated during compilation. If the template contains a syntax error or references a variable that does not exist, the code does not compile. This is a meaningful safety guarantee that most server-side frameworks cannot offer.

SQLx checks queries at compile time. The sqlx::query_as! macro verifies SQL against a live database during compilation. If a column name is wrong, a type does not match, or a table does not exist, the compiler catches it. Combined with Maud’s compile-time HTML checking, the Rust HDA stack catches errors at two boundaries (database-to-code and code-to-HTML) where most stacks only discover problems at runtime.

The combination delivers type safety comparable to TypeScript + React, but without the client-side dependency graph. TypeScript checks component props and state. Rust + SQLx + Maud checks database queries, handler types, and HTML output. Both approaches catch a broad category of errors before the code runs. The difference is that the Rust approach achieves this with a single compiled binary, while the TypeScript approach requires a build toolchain, a runtime, and hundreds of dependencies to deliver the same guarantee.

When to Use HDA (and When Not To)

Core Stack

Web Server with Axum

Axum is the HTTP framework for this stack. Built on Tower and Hyper, it provides type-safe request handling through extractors and uses the same Tower middleware that the rest of the Rust async ecosystem uses.

This section covers routing, handlers, extractors, shared state, middleware, static assets, and graceful shutdown. A complete runnable server is assembled at the end.

A minimal server

Add Axum and Tokio to your Cargo.toml:

[dependencies]
axum = "0.8"
tokio = { version = "1", features = ["full"] }

A server that responds to GET /:

use axum::{Router, routing::get};

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/", get(|| async { "hello" }));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();

    axum::serve(listener, app).await.unwrap();
}

axum::serve binds the router to a TCP listener. There is no separate Server type.

Handlers

A handler is an async function that receives zero or more extractors and returns something that implements IntoResponse:

use axum::response::Html;

async fn index() -> Html<&'static str> {
    Html("<h1>Home</h1>")
}

Axum provides IntoResponse implementations for common types: String, &str, StatusCode, Html<T>, Json<T>, and tuples that combine a status code with a body.

use axum::{http::StatusCode, response::IntoResponse};

async fn not_found() -> impl IntoResponse {
    (StatusCode::NOT_FOUND, Html("<h1>404</h1>"))
}

In a hypermedia-driven application, most handlers return Html. The JSON response types exist but are rarely the primary format.

Debugging handler signatures

Enable the macros feature and annotate handlers with #[debug_handler] during development. It produces clearer compiler errors when an extractor or return type is wrong:

axum = { version = "0.8", features = ["macros"] }

use axum::debug_handler;

#[debug_handler]
async fn index() -> Html<&'static str> {
    Html("<h1>Home</h1>")
}

#[debug_handler] has no runtime cost, but it does slow compilation, so gate it behind debug builds or remove it before release.
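One way to avoid stripping the attribute by hand is to gate it on debug builds (this assumes the macros feature shown above is enabled):

```rust
use axum::response::Html;

// The attribute only applies in debug builds, so release builds
// compile without the extra diagnostic machinery.
#[cfg_attr(debug_assertions, axum::debug_handler)]
async fn index() -> Html<&'static str> {
    Html("<h1>Home</h1>")
}
```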

Extractors

Extractors pull data out of the incoming request. Axum calls FromRequestParts (for headers, path parameters, query strings) or FromRequest (for the body) on each handler argument. A body-consuming extractor must be the last argument.

Common extractors:

Extractor    Source                      Example
Path<T>      URL path parameters         Path(id): Path<u64>
Query<T>     Query string                Query(params): Query<SearchParams>
Form<T>      URL-encoded body            Form(data): Form<LoginForm>
State<T>     Shared application state    State(state): State<AppState>
HeaderMap    Request headers             headers: HeaderMap

use axum::extract::{Path, Query, State};
use axum::response::Html;
use serde::Deserialize;

#[derive(Deserialize)]
struct SearchParams {
    q: Option<String>,
    page: Option<u32>,
}

async fn search(
    State(state): State<AppState>,
    Query(params): Query<SearchParams>,
) -> Html<String> {
    // use state.db and params.q to query and render results
    Html(format!("<p>Searching for {:?}</p>", params.q))
}

Path parameters use curly-brace syntax in route definitions. This changed in Axum 0.8; the older colon syntax (:id) no longer works:

// /{id} not /:id
let app = Router::new().route("/users/{id}", get(show_user));

async fn show_user(Path(id): Path<u64>) -> impl IntoResponse {
    Html(format!("<h1>User {id}</h1>"))
}

Application state

Shared state is how handlers access the database pool, configuration, and other application-wide resources. Define a struct, derive Clone, and pass it to the router with with_state:

use sqlx::PgPool;

#[derive(Clone)]
struct AppState {
    db: PgPool,
    config: AppConfig,
}

#[derive(Clone)]
struct AppConfig {
    app_name: String,
    base_url: String,
}

Wire the state into the router:

let state = AppState {
    db: PgPool::connect(&database_url).await.unwrap(),
    config: AppConfig {
        app_name: "My App".into(),
        base_url: "http://localhost:3000".into(),
    },
};

let app = Router::new()
    .route("/", get(index))
    .with_state(state);

Handlers extract it with State<AppState>:

async fn index(State(state): State<AppState>) -> Html<String> {
    Html(format!("<h1>{}</h1>", state.config.app_name))
}

Router<S> means the router is missing state of type S. Calling .with_state(state) produces Router<()>, meaning all state has been provided. Only Router<()> can be passed to axum::serve.

PgPool is internally reference-counted, so cloning AppState is cheap. For fields that need interior mutability (counters, caches), wrap them in Arc<RwLock<T>>.
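A sketch of a state struct with a shared mutable counter; the field names are illustrative, and the database pool is omitted to keep the example self-contained:

```rust
use std::sync::{Arc, RwLock};

// Cloning AppState clones the Arc, not the counter: every handler
// sees the same underlying value.
#[derive(Clone)]
struct AppState {
    hits: Arc<RwLock<u64>>,
}

// Increment and return the shared counter. Note: std's RwLock guard
// must not be held across an .await point; use tokio::sync::RwLock
// when the lock spans async work.
fn record_hit(state: &AppState) -> u64 {
    let mut hits = state.hits.write().unwrap();
    *hits += 1;
    *hits
}
```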

Route organisation with nest

Router::nest mounts a sub-router under a path prefix. Use this to organise routes by feature or domain area:

fn user_routes() -> Router<AppState> {
    Router::new()
        .route("/", get(list_users).post(create_user))
        .route("/{id}", get(show_user))
        .route("/{id}/edit", get(edit_user_form).post(update_user))
}

fn admin_routes() -> Router<AppState> {
    Router::new()
        .route("/", get(admin_dashboard))
        .route("/users", get(admin_users))
}

let app = Router::new()
    .route("/", get(index))
    .nest("/users", user_routes())
    .nest("/admin", admin_routes())
    .with_state(state);

Requests to /users/42 reach show_user with the path /42. The prefix is stripped before the nested router sees the request. If a handler needs the full original URI, extract OriginalUri from axum::extract.
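A sketch of that extractor inside a nested handler (the handler name echoes the earlier example):

```rust
use axum::extract::OriginalUri;
use axum::response::Html;

// Inside a router nested under /users, path matching sees "/42",
// but OriginalUri still reports the full "/users/42".
async fn show_user(OriginalUri(uri): OriginalUri) -> Html<String> {
    Html(format!("<p>You requested {}</p>", uri.path()))
}
```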

In a workspace with multiple crates, define route functions in each crate and assemble them in the binary crate:

// in src/main.rs
use users::user_routes;
use admin::admin_routes;

let app = Router::new()
    .nest("/users", user_routes())
    .nest("/admin", admin_routes())
    .with_state(state);

All nested routers must share the same state type. If a sub-router has its own state, call .with_state() on it before nesting:

let inner = Router::new()
    .route("/bar", get(inner_handler))
    .with_state(InnerState {});  // becomes Router<()>

let app = Router::new()
    .nest("/foo", inner)  // Router<()> nests into any parent
    .with_state(OuterState {});

Middleware

Axum uses Tower layers for middleware. The tower-http crate provides HTTP-specific layers that cover most common needs.

[dependencies]
tower = "0.5"
tower-http = { version = "0.6", features = ["trace", "compression-gzip"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }

Request tracing

TraceLayer logs every request and response, integrating with the tracing crate:

use tower_http::trace::TraceLayer;
use tracing_subscriber::EnvFilter;

tracing_subscriber::fmt()
    .with_env_filter(EnvFilter::from_default_env())
    .init();

let app = Router::new()
    .route("/", get(index))
    .layer(TraceLayer::new_for_http());

Control log levels with the RUST_LOG environment variable: RUST_LOG=info for production, RUST_LOG=tower_http=trace during development.
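EnvFilter also accepts comma-separated per-module directives. A sketch (the binary name is a placeholder):

```shell
# Development: verbose request/response logs from tower-http only
RUST_LOG=tower_http=trace cargo run

# Production: info everywhere, with one noisy module turned down
RUST_LOG=info,sqlx=warn ./your-binary
```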

Response compression

CompressionLayer compresses response bodies. Enable additional algorithms by adding features like compression-br or compression-zstd:

use tower_http::compression::CompressionLayer;

let app = Router::new()
    .route("/", get(index))
    .layer(CompressionLayer::new())
    .layer(TraceLayer::new_for_http());

Combining layers

Apply multiple layers with ServiceBuilder. Layers are listed top-to-bottom, and the first layer listed is the outermost (runs first on the request, last on the response):

use tower::ServiceBuilder;

let app = Router::new()
    .route("/", get(index))
    .layer(
        ServiceBuilder::new()
            .layer(TraceLayer::new_for_http())
            .layer(CompressionLayer::new())
    )
    .with_state(state);

Here, tracing wraps compression: the trace span opens before compression touches the request and closes after the response is compressed, so logged timings include compression work.

Sessions and CSRF

Session management (tower-sessions) and CSRF protection follow the same .layer() pattern. They are covered in the Authentication section.

Custom middleware

For application-specific middleware, use axum::middleware::from_fn. Write a plain async function that receives the request and a Next handle:

use axum::{
    middleware::{self, Next},
    extract::Request,
    response::Response,
    http::StatusCode,
};

async fn require_auth(
    State(state): State<AppState>,
    request: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // check auth, return Err(StatusCode::UNAUTHORIZED) if invalid
    Ok(next.run(request).await)
}

let app = Router::new()
    .route("/dashboard", get(dashboard))
    .route_layer(middleware::from_fn_with_state(
        state.clone(),
        require_auth,
    ))
    .with_state(state);

.route_layer() applies middleware only to matched routes. Unmatched requests fall through to the fallback without hitting this middleware. .layer() applies to all requests, including fallbacks.

Serving static assets

An HDA application typically serves a small set of CSS and JavaScript files. The rust-embed crate embeds an entire directory into the binary at compile time, producing a single self-contained executable.

[dependencies]
rust-embed = "8"
mime_guess = "2"

Define an embedded asset struct pointing at your assets directory:

use rust_embed::RustEmbed;

#[derive(RustEmbed)]
#[folder = "assets/"]
struct Assets;

Write a handler that serves embedded files:

use axum::{
    extract::Path,
    http::{header, StatusCode},
    response::IntoResponse,
};

async fn static_handler(Path(path): Path<String>) -> impl IntoResponse {
    match Assets::get(&path) {
        Some(file) => {
            let mime = mime_guess::from_path(&path).first_or_octet_stream();
            (
                [(header::CONTENT_TYPE, mime.as_ref())],
                file.data.to_vec(),
            )
                .into_response()
        }
        None => StatusCode::NOT_FOUND.into_response(),
    }
}

Mount it on the router:

let app = Router::new()
    .route("/", get(index))
    .route("/assets/{*path}", get(static_handler));

In debug builds, rust-embed reads files from disk, so changes to CSS and JavaScript appear without recompilation. In release builds, everything is baked into the binary.

If your project grows to include many large assets (images, fonts), consider tower-http’s ServeDir to serve from the filesystem instead, or move large files to object storage.
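A minimal ServeDir wiring might look like this; it assumes tower-http is built with its fs feature:

```rust
use axum::{routing::get, Router};
use tower_http::services::ServeDir;

// Serves files from ./assets on disk instead of embedding them
// in the binary. Requires tower-http's "fs" feature.
let app: Router = Router::new()
    .route("/", get(|| async { "hello" }))
    .nest_service("/assets", ServeDir::new("assets"));
```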

Graceful shutdown

axum::serve accepts a shutdown signal via .with_graceful_shutdown(). When the signal fires, the server stops accepting new connections and waits for in-flight requests to complete.

use tokio::signal;

async fn shutdown_signal() {
    let ctrl_c = async {
        signal::ctrl_c()
            .await
            .expect("failed to install Ctrl+C handler");
    };

    #[cfg(unix)]
    let terminate = async {
        signal::unix::signal(signal::unix::SignalKind::terminate())
            .expect("failed to install SIGTERM handler")
            .recv()
            .await;
    };

    #[cfg(not(unix))]
    let terminate = std::future::pending::<()>();

    tokio::select! {
        _ = ctrl_c => {},
        _ = terminate => {},
    }
}

Pass the signal to the server:

axum::serve(listener, app)
    .with_graceful_shutdown(shutdown_signal())
    .await
    .unwrap();

This handles both Ctrl+C (SIGINT) and SIGTERM, which is what Docker and most process managers send when stopping a container. In production, consider adding a TimeoutLayer from tower-http so that slow in-flight requests cannot block shutdown indefinitely.
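A sketch of that timeout; it assumes tower-http is built with its timeout feature:

```rust
use std::time::Duration;
use tower_http::timeout::TimeoutLayer;

// Requests that exceed the deadline receive 408 Request Timeout,
// so a stuck handler cannot hold up graceful shutdown indefinitely.
let app = Router::new()
    .route("/", get(index))
    .layer(TimeoutLayer::new(Duration::from_secs(10)));
```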

Putting it together

A complete main.rs combining routing, state, middleware, static assets, and graceful shutdown:

use axum::{
    extract::{Path, State},
    http::{header, StatusCode},
    response::{Html, IntoResponse},
    routing::get,
    Router,
};
use rust_embed::RustEmbed;
use sqlx::PgPool;
use tokio::signal;
use tower::ServiceBuilder;
use tower_http::{compression::CompressionLayer, trace::TraceLayer};
use tracing_subscriber::EnvFilter;

// -- State --

#[derive(Clone)]
struct AppState {
    db: PgPool,
    config: AppConfig,
}

#[derive(Clone)]
struct AppConfig {
    app_name: String,
}

// -- Static assets --

#[derive(RustEmbed)]
#[folder = "assets/"]
struct Assets;

async fn static_handler(Path(path): Path<String>) -> impl IntoResponse {
    match Assets::get(&path) {
        Some(file) => {
            let mime = mime_guess::from_path(&path).first_or_octet_stream();
            ([(header::CONTENT_TYPE, mime.as_ref())], file.data.to_vec())
                .into_response()
        }
        None => StatusCode::NOT_FOUND.into_response(),
    }
}

// -- Handlers --

async fn index(State(state): State<AppState>) -> Html<String> {
    Html(format!(
        r#"<html>
  <head><link rel="stylesheet" href="/assets/style.css"></head>
  <body><h1>{}</h1></body>
</html>"#,
        state.config.app_name
    ))
}

// -- User routes --

fn user_routes() -> Router<AppState> {
    Router::new()
        .route("/", get(list_users))
        .route("/{id}", get(show_user))
}

async fn list_users() -> Html<&'static str> {
    Html("<h1>Users</h1>")
}

async fn show_user(Path(id): Path<u64>) -> Html<String> {
    Html(format!("<h1>User {id}</h1>"))
}

// -- Shutdown --

async fn shutdown_signal() {
    let ctrl_c = async {
        signal::ctrl_c()
            .await
            .expect("failed to install Ctrl+C handler");
    };

    #[cfg(unix)]
    let terminate = async {
        signal::unix::signal(signal::unix::SignalKind::terminate())
            .expect("failed to install SIGTERM handler")
            .recv()
            .await;
    };

    #[cfg(not(unix))]
    let terminate = std::future::pending::<()>();

    tokio::select! {
        _ = ctrl_c => {},
        _ = terminate => {},
    }
}

// -- Main --

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    let database_url =
        std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    let state = AppState {
        db: PgPool::connect(&database_url).await.unwrap(),
        config: AppConfig {
            app_name: "My App".into(),
        },
    };

    let app = Router::new()
        .route("/", get(index))
        .nest("/users", user_routes())
        .route("/assets/{*path}", get(static_handler))
        .layer(
            ServiceBuilder::new()
                .layer(TraceLayer::new_for_http())
                .layer(CompressionLayer::new()),
        )
        .with_state(state);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();

    tracing::info!("listening on {}", listener.local_addr().unwrap());

    axum::serve(listener, app)
        .with_graceful_shutdown(shutdown_signal())
        .await
        .unwrap();
}

The corresponding dependencies:

[dependencies]
axum = "0.8"
tokio = { version = "1", features = ["full"] }
tower = "0.5"
tower-http = { version = "0.6", features = ["trace", "compression-gzip"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
sqlx = { version = "0.8", features = ["runtime-tokio", "postgres"] }
rust-embed = "8"
mime_guess = "2"
serde = { version = "1", features = ["derive"] }

HTML Templating with Maud

Maud is a compile-time HTML templating library for Rust. Its html! macro checks your markup at compile time and expands it to efficient string-building code, so there is no runtime template parsing, no template files to deploy, and no possibility of a missing closing tag appearing in production.

The Web Server with Axum section used Html<String> with format! for responses. That works for trivial cases, but it gives you no structure, no escaping, and no compile-time checking. Maud replaces it entirely. Handlers return Markup instead of Html<String>, and the compiler catches template errors before the server starts.

Setup

Add Maud to your Cargo.toml with the axum feature:

[dependencies]
maud = { version = "0.27", features = ["axum"] }

The axum feature implements IntoResponse for Maud’s Markup type, so handlers can return markup directly. It targets axum-core 0.5, which corresponds to Axum 0.8.

The html! macro

The html! macro is the core of Maud. It takes a custom syntax that resembles HTML but follows Rust conventions, and returns a Markup value:

use maud::{html, Markup};

let greeting = "world";
let page: Markup = html! {
    h1 { "Hello, " (greeting) "!" }
};

Elements

Elements with content use curly braces. Void elements (those that cannot have children) use a semicolon:

html! {
    h1 { "Page title" }
    p {
        strong { "Bold text" }
        " followed by normal text."
    }
    br;
    input type="text" name="query";
}

Non-void elements that need no content still use braces:

html! {
    script src="/static/app.js" {}
    div.placeholder {}
}

Attributes

Attributes appear after the element name, before the braces or semicolon:

html! {
    input type="email" name="user_email" required placeholder="you@example.com";
    a href="/about" { "About" }
    article data-id="12345" { "Content" }
}

Classes and IDs have a shorthand syntax, chained directly onto the element:

html! {
    input #search-input .form-control type="text";
    div.card.shadow-sm { "Card content" }
}
// Produces:
// <input id="search-input" class="form-control" type="text">
// <div class="card shadow-sm">Card content</div>

A class or ID without an element name produces a div:

html! {
    #main { "Main content" }
    .sidebar { "Sidebar content" }
}
// Produces:
// <div id="main">Main content</div>
// <div class="sidebar">Sidebar content</div>

Quote class names that contain characters Maud’s parser would choke on:

html! {
    div."col-sm-6" { "Column" }
}

Dynamic values with splices

Parentheses insert a Rust expression into the output. Maud automatically escapes HTML special characters:

let username = "Alice <script>alert('xss')</script>";
html! {
    p { "Hello, " (username) "!" }
}
// Output: <p>Hello, Alice &lt;script&gt;alert('xss')&lt;/script&gt;!</p>

Any type implementing std::fmt::Display can be spliced: strings, numbers, and your own types all work without extra annotation.
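For example, a custom type with a Display implementation splices like any built-in (the Price type is invented for illustration):

```rust
use std::fmt;

// A price in cents that renders as dollars when spliced.
struct Price(u32);

impl fmt::Display for Price {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "${}.{:02}", self.0 / 100, self.0 % 100)
    }
}

// In a template: html! { span.price { (Price(1999)) } }
// renders <span class="price">$19.99</span>, escaped as usual.
```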

For dynamic attribute values, use parentheses for a single expression or braces for concatenation:

let user_id = 42;
let base = "/users";

html! {
    // Single expression
    span data-id=(user_id) { "User" }

    // Concatenation
    a href={ (base) "/" (user_id) } { "Profile" }
}

Boolean attributes and toggles

Square brackets conditionally toggle boolean attributes and classes:

let is_active = true;
let is_disabled = false;

html! {
    button disabled[is_disabled] { "Submit" }
    a.nav-link.active[is_active] href="/" { "Home" }
}
// Produces:
// <button>Submit</button>
// <a class="nav-link active" href="/">Home</a>

Optional attributes

Attributes that take an Option value render only when Some:

let tooltip: Option<&str> = Some("More info");
let label: Option<&str> = None;

html! {
    span title=[tooltip] { "Hover me" }
    span aria-label=[label] { "No aria-label rendered" }
}

Control flow

Prefix control structures with @:

let user: Option<&str> = Some("Alice");
let items = vec!["Bread", "Milk", "Eggs"];

html! {
    // if / else
    @if let Some(name) = user {
        p { "Welcome, " (name) }
    } @else {
        p { a href="/login" { "Log in" } }
    }

    // Loops
    ul {
        @for item in &items {
            li { (item) }
        }
    }

    // Let bindings
    @for (i, item) in items.iter().enumerate() {
        @let label = format!("{}. {}", i + 1, item);
        p { (label) }
    }

    // Match
    @match items.len() {
        0 => { p { "No items." } },
        1 => { p { "One item." } },
        n => { p { (n) " items." } },
    }
}

DOCTYPE

Maud provides a DOCTYPE constant:

use maud::DOCTYPE;

html! {
    (DOCTYPE)
    html lang="en" {
        head { title { "My App" } }
        body { h1 { "Hello" } }
    }
}
// Outputs: <!DOCTYPE html><html lang="en">...

Raw HTML with PreEscaped

Maud escapes all spliced content by default. When you have trusted HTML that should not be escaped, wrap it in PreEscaped:

use maud::PreEscaped;

let svg = r#"<svg viewBox="0 0 100 100"><circle cx="50" cy="50" r="40"/></svg>"#;

html! {
    div.icon { (PreEscaped(svg)) }
}

Use this for inline SVGs, pre-rendered markdown, or other HTML you control. Never pass user input to PreEscaped.

Components as functions

Maud has no built-in component system. Components are Rust functions that return Markup. This is simpler and more flexible than a template inheritance system, because you have the full language for composition, branching, and parameterisation.

A basic component:

use maud::{html, Markup};

fn nav_link(href: &str, text: &str, active: bool) -> Markup {
    html! {
        a.nav-link.active[active] href=(href) { (text) }
    }
}

Use it by calling the function inside a splice:

html! {
    nav {
        (nav_link("/", "Home", true))
        (nav_link("/about", "About", false))
        (nav_link("/contact", "Contact", false))
    }
}

Passing content blocks

The simplest approach is accepting Markup directly:

fn card(title: &str, body: Markup) -> Markup {
    html! {
        div.card {
            div.card-header { h3 { (title) } }
            div.card-body { (body) }
        }
    }
}

// Usage
let output = card("Settings", html! {
    p { "Adjust your preferences below." }
    form method="post" {
        // form fields
    }
});

A more flexible approach is accepting anything that implements Render. This lets callers pass Markup, strings, numbers, or any custom type with a Render implementation, without forcing them to wrap everything in html!:

use maud::Render;

fn card(title: &str, body: impl Render) -> Markup {
    html! {
        div.card {
            div.card-header { h3 { (title) } }
            div.card-body { (body) }
        }
    }
}

// All of these work:
card("Note", html! { p { "Rich content." } });
card("Note", "Plain text content");
card("Note", my_renderable_struct);

Prefer impl Render over Markup for component parameters. It is a small change that makes components more composable.

Components as structs with Render

When a component has several fields, or when you want it to compose via splice syntax rather than a function call, make it a struct that implements Render:

use maud::{html, Markup, Render};

enum AlertLevel {
    Info,
    Warning,
    Error,
}

struct Alert<'a, B: Render> {
    level: AlertLevel,
    title: &'a str,
    body: B,
    dismissible: bool,
}

impl<B: Render> Render for Alert<'_, B> {
    fn render(&self) -> Markup {
        let class = match self.level {
            AlertLevel::Info => "alert-info",
            AlertLevel::Warning => "alert-warning",
            AlertLevel::Error => "alert-error",
        };
        html! {
            div.alert.(class) role="alert" {
                strong { (self.title) }
                div { (self.body) }
                @if self.dismissible {
                    button.close type="button" { "×" }
                }
            }
        }
    }
}

Splice it directly, no wrapper function needed:

html! {
    (Alert {
        level: AlertLevel::Warning,
        title: "Disk space low",
        body: html! { p { "Less than 10% remaining." } },
        dismissible: true,
    })
}

Another example, a breadcrumb navigation:

struct Breadcrumb {
    segments: Vec<(String, String)>, // (label, href)
}

impl Render for Breadcrumb {
    fn render(&self) -> Markup {
        html! {
            nav aria-label="breadcrumb" {
                ol.breadcrumb {
                    @for (i, (label, href)) in self.segments.iter().enumerate() {
                        @let is_last = i == self.segments.len() - 1;
                        li.breadcrumb-item.active[is_last] {
                            @if is_last {
                                (label)
                            } @else {
                                a href=(href) { (label) }
                            }
                        }
                    }
                }
            }
        }
    }
}

Reach for Render when a component has enough fields that a function signature would get unwieldy, when it will be stored in collections and rendered in loops, or when other crates need to provide renderable types. For simple one- or two-parameter components, plain functions are shorter and sufficient.

Page layouts

A layout is a function that wraps content in a full HTML document. Since layouts in an HDA application typically need request context (the current user, flash messages, navigation state), build the layout as an Axum extractor.

First, a minimal layout function to show the shape:

use maud::{html, Markup, DOCTYPE};

fn base_layout(title: &str, content: Markup) -> Markup {
    html! {
        (DOCTYPE)
        html lang="en" {
            head {
                meta charset="utf-8";
                meta name="viewport" content="width=device-width, initial-scale=1";
                title { (title) }
                link rel="stylesheet" href="/assets/style.css";
                script src="/assets/htmx.min.js" defer {}
            }
            body {
                main { (content) }
            }
        }
    }
}

In practice, layouts need data from the request: the authenticated user for navigation, flash messages from the session, the current path for active link highlighting. Extract all of this in a layout struct that implements FromRequestParts:

use axum::extract::FromRequestParts;
use axum::http::request::Parts;
use maud::{html, Markup, DOCTYPE};

struct PageLayout {
    user: Option<User>,
    current_path: String,
}

impl<S: Send + Sync> FromRequestParts<S> for PageLayout {
    type Rejection = std::convert::Infallible;

    async fn from_request_parts(
        parts: &mut Parts,
        _state: &S,
    ) -> Result<Self, Self::Rejection> {
        let user = parts.extensions.get::<User>().cloned();
        let current_path = parts.uri.path().to_string();
        Ok(PageLayout { user, current_path })
    }
}

impl PageLayout {
    fn render(self, title: &str, content: Markup) -> Markup {
        html! {
            (DOCTYPE)
            html lang="en" {
                head {
                    meta charset="utf-8";
                    meta name="viewport" content="width=device-width, initial-scale=1";
                    title { (title) }
                    link rel="stylesheet" href="/assets/style.css";
                    script src="/assets/htmx.min.js" defer {}
                }
                body {
                    nav {
                        a.active[self.current_path == "/"] href="/" { "Home" }
                        a.active[self.current_path.starts_with("/users")]
                            href="/users" { "Users" }

                        div.nav-right {
                            @if let Some(user) = &self.user {
                                span { (user.name) }
                                a href="/logout" { "Log out" }
                            } @else {
                                a href="/login" { "Log in" }
                            }
                        }
                    }
                    main { (content) }
                    footer {
                        p { "© 2026" }
                    }
                }
            }
        }
    }
}

Handlers extract the layout alongside other parameters:

async fn user_list(
    layout: PageLayout,
    State(state): State<AppState>,
) -> Markup {
    let users = fetch_users(&state.db).await;

    layout.render("Users", html! {
        h1 { "Users" }
        ul {
            @for user in &users {
                li { (user.name) }
            }
        }
    })
}

The handler focuses on its content. The layout handles the document shell, navigation, and any request-scoped data. Add fields to PageLayout as the application grows (flash messages, CSRF tokens, feature flags) without changing handler signatures.

Full pages vs HTML fragments

In an HDA application, the same handler often needs to return a full HTML page for normal browser requests and a bare HTML fragment for htmx requests. A normal navigation loads the entire page. An htmx-boosted link or hx-get request only needs the content that will be swapped into the page.

The axum-htmx crate provides typed extractors for htmx request headers:

[dependencies]
axum-htmx = "0.6"

Use HxBoosted to detect boosted navigation (where htmx intercepts a normal link click and swaps just the body), or HxRequest to detect any htmx-initiated request:

use axum_htmx::HxBoosted;

async fn user_list(
    HxBoosted(boosted): HxBoosted,
    layout: PageLayout,
    State(state): State<AppState>,
) -> Markup {
    let users = fetch_users(&state.db).await;

    let content = html! {
        h1 { "Users" }
        ul {
            @for user in &users {
                li { (user.name) }
            }
        }
    };

    if boosted {
        content
    } else {
        layout.render("Users", content)
    }
}

For targeted fragment swaps (where htmx replaces a specific element on the page), handlers return only the fragment:

use axum_htmx::HxRequest;

async fn user_search(
    HxRequest(is_htmx): HxRequest,
    Query(params): Query<SearchParams>,
    layout: PageLayout,
    State(state): State<AppState>,
) -> Markup {
    let users = search_users(&state.db, &params.q).await;

    let results = html! {
        ul #search-results {
            @for user in &users {
                li { (user.name) }
            }
        }
    };

    if is_htmx {
        results
    } else {
        layout.render("Search", html! {
            h1 { "Search users" }
            input type="search" name="q" value=(params.q)
                hx-get="/users/search"
                hx-target="#search-results"
                hx-trigger="input changed delay:300ms";
            (results)
        })
    }
}

This pattern means every URL works as a full page when accessed directly (bookmarks, shared links, first page load) and as a fragment when accessed via htmx. No separate endpoint needed.
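The branch itself is simple enough to factor into a helper. A minimal sketch over plain Strings rather than Markup, to keep it self-contained; the name full_or_fragment is illustrative, not part of axum-htmx:

```rust
/// Illustrative helper: return the bare fragment for htmx requests,
/// otherwise wrap it in the full-page layout.
fn full_or_fragment(
    is_htmx: bool,
    fragment: String,
    layout: impl FnOnce(String) -> String,
) -> String {
    if is_htmx {
        fragment // htmx swaps this into the existing page
    } else {
        layout(fragment) // direct navigation gets the whole document
    }
}

fn main() {
    let layout = |content: String| format!("<html><body>{content}</body></html>");
    assert_eq!(
        full_or_fragment(true, "<ul></ul>".to_string(), layout),
        "<ul></ul>"
    );
    assert_eq!(
        full_or_fragment(false, "<ul></ul>".to_string(), layout),
        "<html><body><ul></ul></body></html>"
    );
}
```

In a real handler the layout closure would be `|c| layout.render(title, c)` and the flag would come from HxRequest or HxBoosted.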

htmx attributes in Maud

htmx attributes use the hx- prefix, which works naturally in Maud:

html! {
    // Click to load
    button hx-get="/api/data" hx-target="#results" hx-swap="innerHTML" {
        "Load data"
    }

    // Form submission
    form hx-post="/contacts" hx-target="#contact-list" hx-swap="beforeend" {
        input type="text" name="name" required;
        button type="submit" { "Add contact" }
    }

    // Inline editing
    tr hx-get={ "/users/" (user.id) "/edit" } hx-trigger="click"
       hx-target="this" hx-swap="outerHTML" {
        td { (user.name) }
        td { (user.email) }
    }

    // Delete with confirmation
    button hx-delete={ "/users/" (user.id) }
           hx-confirm="Delete this user?"
           hx-target="closest tr"
           hx-swap="outerHTML swap:500ms" {
        "Delete"
    }
}

The Interactivity with HTMX section covers htmx patterns in full.

Gotchas

Semicolons on void elements. Forgetting the semicolon on input, br, meta, link, or img causes a compile error. If the compiler complains about unexpected tokens after an element name, check for a missing semicolon.

// Wrong: braces make Maud treat input as a normal element with children
input type="text" { }

// Correct: semicolon terminates void elements
input type="text";

The @ prefix is mandatory for control flow. All if, for, let, and match inside html! must start with @. Without it, Maud tries to parse the keyword as an element name.

Brace vs parenthesis in attributes. Parentheses splice a single expression. Braces concatenate multiple parts. Using parentheses when you need concatenation silently drops everything after the first expression:

let id = 42;

// Wrong: href is only "/users/"; the (id) splice is not part of the attribute
a href=("/users/") (id) { "Profile" }

// Correct: braces concatenate
a href={ "/users/" (id) } { "Profile" }

Compile-time cost. Maud macros expand at compile time, which is good for runtime performance but can slow incremental builds on large templates. Breaking templates into smaller functions across modules helps, because Rust only recompiles the modules that changed.

Interactivity with HTMX

CSS Without Frameworks

CSS frameworks and preprocessors exist to solve problems that the web platform now handles natively. CSS nesting, container queries, :has(), @layer, and custom properties eliminate the need for Sass, Less, or utility-class frameworks. This section covers writing plain CSS for an HDA application, processing it with the lightningcss crate, and co-locating styles alongside Maud components using the inventory crate.

The result is a single processed stylesheet, built at startup from a base CSS file and component-scoped fragments, minified and vendor-prefixed, served from memory with cache-busting.

Plain CSS in 2026

Native CSS now provides the features that historically required preprocessors:

  • Nesting replaces Sass/Less nesting syntax. Write .card { .title { ... } } directly.
  • Custom properties (--color-primary: #1a1a2e;) replace preprocessor variables, with the advantage of being runtime-configurable and inheritable through the DOM.
  • @layer controls cascade priority without specificity hacks.
  • Container queries let components respond to their container’s size rather than the viewport.
  • :has() selects elements based on their children, replacing many patterns that previously required JavaScript.

The Web Platform Has Caught Up section covers these features in detail. This section focuses on the tooling pipeline: how to write, process, and serve CSS in a Rust HDA application.

CSS organisation with RSCSS

RSCSS (Reasonable System for CSS Stylesheet Structure) provides a lightweight naming convention that works well with component-based architectures. It imposes just enough structure to keep styles maintainable without the ceremony of BEM or the magic of CSS Modules.

The core rules:

  • Components are named with at least two words, separated by dashes: .search-form, .article-card, .user-profile.
  • Elements within a component use a single word: .title, .body, .avatar. Multi-word elements are concatenated: .firstname, .submitbutton. Use the child selector (>) to prevent styles bleeding into nested components.
  • Variants modify a component or element. RSCSS normally prefixes variants with a dash (.search-form.-compact), but dashes at the start of a class name are awkward in Maud templates. Use a double underscore prefix instead: .search-form.__compact. The double underscore distinguishes variants from helpers at a glance.
  • Helpers are global utility classes prefixed with a single underscore: ._hidden, ._center. Keep these minimal.

In practice:

.article-card {
    border: 1px solid var(--border);
    border-radius: 0.5rem;
    padding: 1rem;

    > .title {
        font-size: 1.25rem;
        font-weight: 600;
    }

    > .meta {
        color: var(--text-muted);
        font-size: 0.875rem;
    }

    &.__featured {
        border-color: var(--accent);
    }
}

And the corresponding Maud component:

fn article_card(article: &Article) -> Markup {
    html! {
        div.article-card.__featured[article.featured] {
            h2.title { (article.title) }
            p.meta { "By " (article.author) }
        }
    }
}

The two-word component rule means component classes never collide with single-word element classes. The double-underscore variant prefix is visually distinct from both element classes and helper utilities, and works cleanly in Maud’s class syntax.
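The naming rules are mechanical enough to check automatically. A hypothetical lint sketch (is_component_class is our own name, not part of RSCSS, and illustrates only the two-word rule):

```rust
/// Illustrative check for the RSCSS component rule: at least two
/// lowercase words separated by single dashes.
fn is_component_class(name: &str) -> bool {
    let words: Vec<&str> = name.split('-').collect();
    words.len() >= 2
        && words
            .iter()
            .all(|w| !w.is_empty() && w.chars().all(|c| c.is_ascii_lowercase()))
}

fn main() {
    assert!(is_component_class("search-form"));
    assert!(is_component_class("article-card"));
    assert!(!is_component_class("card"));      // one word: an element, not a component
    assert!(!is_component_class("__compact")); // variant prefix, not a component
}
```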

lightningcss

lightningcss is a CSS parser, transformer, and minifier written in Rust by the Parcel team. It processes over 2.7 million lines of CSS per second on a single thread. Use it to minify, vendor-prefix, and downlevel modern CSS syntax for older browsers.

Add it to Cargo.toml:

[dependencies]
lightningcss = { version = "1.0.0-alpha.70", default-features = false }

Disable default features to avoid pulling in Node.js binding dependencies. Enable bundler if you need @import resolution, or visitor if you need custom AST transforms.

A function to process a CSS string:

use lightningcss::stylesheet::{StyleSheet, ParserOptions, MinifyOptions};
use lightningcss::printer::PrinterOptions;
use lightningcss::targets::{Targets, Browsers};

pub fn process_css(raw: &str) -> Result<String, String> {
    let targets = Targets::from(Browsers {
        chrome: Some(95 << 16),
        firefox: Some(90 << 16),
        safari: Some(15 << 16),
        ..Browsers::default()
    });

    let mut stylesheet = StyleSheet::parse(raw, ParserOptions {
        filename: "styles.css".to_string(),
        ..ParserOptions::default()
    })
    .map_err(|e| format!("CSS parse error: {e}"))?;

    stylesheet
        .minify(MinifyOptions {
            targets,
            ..MinifyOptions::default()
        })
        .map_err(|e| format!("CSS minify error: {e}"))?;

    let result = stylesheet
        .to_css(PrinterOptions {
            minify: true,
            targets,
            ..PrinterOptions::default()
        })
        .map_err(|e| format!("CSS print error: {e}"))?;

    Ok(result.code)
}

Browser targets are encoded as major << 16 | minor << 8 | patch. Chrome 95 is 95 << 16.
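The encoding is plain bit-packing, so a small helper keeps the targets readable (browser_version is our own name, not a lightningcss API):

```rust
/// Pack a browser version the way lightningcss expects:
/// major << 16 | minor << 8 | patch.
fn browser_version(major: u32, minor: u32, patch: u32) -> u32 {
    (major << 16) | (minor << 8) | patch
}

fn main() {
    assert_eq!(browser_version(95, 0, 0), 95 << 16); // Chrome 95
    assert_eq!(browser_version(15, 4, 0), (15 << 16) | (4 << 8)); // Safari 15.4
}
```

With it, safari: Some(browser_version(15, 4, 0)) targets Safari 15.4 rather than 15.0.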

Pass targets to both MinifyOptions and PrinterOptions. The minify step transforms modern syntax (nesting, oklch() colours, logical properties) into forms the target browsers understand. The printer step serialises the result, applying minification when minify: true.

What lightningcss handles automatically:

  • Flattens CSS nesting for older browsers
  • Adds vendor prefixes (-webkit-, -moz-) where targets require them
  • Converts modern colour functions (oklch(), lab(), color-mix()) to rgb()/rgba() fallbacks
  • Transpiles logical properties (margin-inline-start) to physical equivalents
  • Converts media query range syntax (@media (width >= 768px)) to min-width form
  • Merges longhand properties into shorthands
  • Removes redundant vendor prefixes the targets don’t need

Locality of behaviour with inventory

The inventory crate provides a distributed registration pattern: declare values in any module, collect them all in one place at startup. This enables locality of behaviour for CSS, where each component’s styles live in the same file as its markup.

[dependencies]
inventory = "0.3"

Define a CSS fragment type

Create a type to hold a CSS fragment and register it with inventory::collect!:

// src/styles.rs

pub struct CssFragment(pub &'static str);

inventory::collect!(CssFragment);

Co-locate CSS with components

In each component file, declare the CSS alongside the markup using inventory::submit!:

// src/components/article_card.rs

use maud::{html, Markup};
use crate::styles::CssFragment;

inventory::submit! {
    CssFragment(r#"
        .article-card {
            border: 1px solid var(--border);
            border-radius: 0.5rem;
            padding: 1rem;

            > .title {
                font-size: 1.25rem;
                font-weight: 600;
            }

            > .meta {
                color: var(--text-muted);
                font-size: 0.875rem;
            }

            &.__featured {
                border-color: var(--accent);
            }
        }
    "#)
}

pub fn article_card(article: &Article) -> Markup {
    html! {
        div.article-card.__featured[article.featured] {
            h2.title { (article.title) }
            p.meta { "By " (article.author) }
        }
    }
}

Adding a new component with styles requires no changes to any other file. The CSS lives next to the markup that uses it.

Another component

// src/components/nav_bar.rs

use maud::{html, Markup};
use crate::styles::CssFragment;

inventory::submit! {
    CssFragment(r#"
        .nav-bar {
            display: flex;
            align-items: center;
            gap: 1rem;
            padding: 0.75rem 1.5rem;
            background: var(--nav-bg);

            > .link {
                color: var(--nav-link);
                text-decoration: none;
            }

            > .link.__active {
                font-weight: 600;
                color: var(--nav-link-active);
            }
        }
    "#)
}

pub fn nav_bar(current_path: &str) -> Markup {
    html! {
        nav.nav-bar {
            a.link.__active[current_path == "/"] href="/" { "Home" }
            a.link.__active[current_path.starts_with("/users")] href="/users" { "Users" }
        }
    }
}

The processing pipeline

At startup, collect all CSS fragments, concatenate them with a base stylesheet, process through lightningcss, and cache the result in memory. A content hash in the filename enables indefinite browser caching.

Base stylesheet

A base.css file contains resets, custom properties, and global styles that don’t belong to any component:

/* assets/base.css */

*,
*::before,
*::after {
    box-sizing: border-box;
}

:root {
    --text: #1a1a2e;
    --text-muted: #6b7280;
    --bg: #ffffff;
    --border: #e5e7eb;
    --accent: #2563eb;
    --nav-bg: #f9fafb;
    --nav-link: #374151;
    --nav-link-active: #1a1a2e;
}

body {
    font-family: system-ui, -apple-system, sans-serif;
    color: var(--text);
    background: var(--bg);
    margin: 0;
    line-height: 1.6;
}

Build and serve the stylesheet

// src/styles.rs

use lightningcss::stylesheet::{StyleSheet, ParserOptions, MinifyOptions};
use lightningcss::printer::PrinterOptions;
use lightningcss::targets::{Targets, Browsers};
use std::sync::LazyLock;

pub struct CssFragment(pub &'static str);

inventory::collect!(CssFragment);

static BASE_CSS: &str = include_str!("../assets/base.css");

pub struct ProcessedCss {
    pub body: String,
    pub filename: String,
    pub route: String,
}

static STYLESHEET: LazyLock<ProcessedCss> = LazyLock::new(build_stylesheet);

pub fn stylesheet() -> &'static ProcessedCss {
    &STYLESHEET
}

fn build_stylesheet() -> ProcessedCss {
    // Concatenate base CSS and all component fragments
    let mut raw = String::from(BASE_CSS);
    for fragment in inventory::iter::<CssFragment> {
        raw.push('\n');
        raw.push_str(fragment.0);
    }

    // Process with lightningcss
    let targets = Targets::from(Browsers {
        chrome: Some(95 << 16),
        firefox: Some(90 << 16),
        safari: Some(15 << 16),
        ..Browsers::default()
    });

    let mut sheet = StyleSheet::parse(&raw, ParserOptions {
        filename: "styles.css".to_string(),
        ..ParserOptions::default()
    })
    .expect("CSS parse error");

    sheet
        .minify(MinifyOptions {
            targets,
            ..MinifyOptions::default()
        })
        .expect("CSS minify error");

    let result = sheet
        .to_css(PrinterOptions {
            minify: true,
            targets,
            ..PrinterOptions::default()
        })
        .expect("CSS print error");

    // Hash the output for cache-busting
    let hash = {
        use std::hash::{Hash, Hasher};
        let mut hasher = std::collections::hash_map::DefaultHasher::new();
        result.code.hash(&mut hasher);
        format!("{:x}", hasher.finish())
    };

    let filename = format!("style.{hash}.css");
    let route = format!("/assets/{filename}");

    ProcessedCss {
        body: result.code,
        filename,
        route,
    }
}

The LazyLock ensures the CSS is built once on first access and cached for the lifetime of the process. include_str! embeds base.css into the binary at compile time, so the binary is self-contained.
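LazyLock's once-only initialisation can be observed in isolation. A self-contained sketch, unrelated to the CSS pipeline:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::LazyLock;

// Counts how many times the initialiser actually runs.
static BUILDS: AtomicU32 = AtomicU32::new(0);

static VALUE: LazyLock<String> = LazyLock::new(|| {
    BUILDS.fetch_add(1, Ordering::SeqCst);
    "built once".to_string()
});

fn main() {
    // Every access after the first returns the cached value.
    assert_eq!(VALUE.as_str(), "built once");
    assert_eq!(VALUE.as_str(), "built once");
    assert_eq!(BUILDS.load(Ordering::SeqCst), 1);
}
```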

Wire it into Axum

Expose the stylesheet as a route and make the filename available to the layout:

// src/main.rs

use axum::{
    http::header,
    response::IntoResponse,
    routing::get,
    Router,
};

mod styles;
mod components;

async fn css_handler() -> impl IntoResponse {
    let css = styles::stylesheet();
    (
        [
            (header::CONTENT_TYPE, "text/css"),
            (header::CACHE_CONTROL, "public, max-age=31536000, immutable"),
        ],
        css.body.clone(),
    )
}

fn app() -> Router {
    let css = styles::stylesheet();

    Router::new()
        .route(&css.route, get(css_handler))
        // ... other routes
}

The Cache-Control header tells browsers to cache the file for a year. Because the filename contains a content hash, deploying new CSS produces a new filename, and browsers fetch the new version automatically. Old cached versions expire naturally.
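The cache-busting scheme depends only on one property: the hash changes whenever the content does, and stays stable when it does not. That can be sketched with the same DefaultHasher used in build_stylesheet:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Mirror of the filename derivation in build_stylesheet.
fn hashed_filename(css: &str) -> String {
    let mut hasher = DefaultHasher::new();
    css.hash(&mut hasher);
    format!("style.{:x}.css", hasher.finish())
}

fn main() {
    // Same content, same filename: browsers keep their cached copy.
    assert_eq!(
        hashed_filename("body{margin:0}"),
        hashed_filename("body{margin:0}")
    );
    // Changed content, new filename: browsers fetch the new file.
    assert_ne!(
        hashed_filename("body{margin:0}"),
        hashed_filename("body{margin:1px}")
    );
}
```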

Reference the stylesheet in the layout

The layout component needs the hashed filename to build the <link> tag:

use maud::{html, Markup, DOCTYPE};
use crate::styles;

fn base_layout(title: &str, content: Markup) -> Markup {
    let css = styles::stylesheet();

    html! {
        (DOCTYPE)
        html lang="en" {
            head {
                meta charset="utf-8";
                meta name="viewport" content="width=device-width, initial-scale=1";
                title { (title) }
                link rel="stylesheet" href=(css.route);
                script src="/assets/htmx.min.js" defer {}
            }
            body {
                (content)
            }
        }
    }
}

Every page automatically references the current stylesheet version. When any component’s CSS changes, the hash changes, the filename changes, and browsers fetch the new file on the next page load.

How inventory works

inventory uses platform-specific linker constructor sections (the same mechanism as __attribute__((constructor)) in C). Each inventory::submit! call creates a static value and a constructor function that registers it in an atomic linked list. The OS loader runs all constructors before main() starts, so by the time your application code runs, every fragment is already registered and inventory::iter yields them all.

Three things to keep in mind:

  • No ordering guarantees. Fragments are yielded in whatever order the linker placed them. If CSS cascade order matters between components, switch to a struct with a weight field and sort after collecting. In practice, well-scoped component styles rarely depend on source order.
  • Same-crate usage is safe. The known linker dead-code-elimination issue (where submitted items in an unreferenced crate get stripped) does not apply when collect! and submit! are in the same crate. For a workspace with multiple crates, ensure each crate that submits fragments is referenced by at least one symbol in the binary crate.
  • submit! is module-level only. It cannot appear inside a function body. It is a static declaration, not a runtime statement.
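The weight-sorting fallback mentioned above is only a few lines. Sketched here over a plain Vec; in the real pipeline the fragments would come from inventory::iter and WeightedCss would replace CssFragment:

```rust
/// Hypothetical fragment type with an explicit cascade weight.
struct WeightedCss {
    weight: i32,
    css: &'static str,
}

/// Concatenate fragments in ascending weight order, so higher-weight
/// CSS lands later in the stylesheet and wins the cascade on ties.
fn concat_sorted(mut fragments: Vec<WeightedCss>) -> String {
    fragments.sort_by_key(|f| f.weight);
    fragments
        .iter()
        .map(|f| f.css)
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let css = concat_sorted(vec![
        WeightedCss { weight: 10, css: ".card{padding:1rem}" },
        WeightedCss { weight: 0, css: ":root{--pad:1rem}" },
    ]);
    assert_eq!(css, ":root{--pad:1rem}\n.card{padding:1rem}");
}
```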

Putting it together

The full flow:

  1. base.css contains resets, custom properties, and global styles. It is embedded with include_str!.
  2. Each component file uses inventory::submit! to register its CSS alongside its Maud markup.
  3. At startup, build_stylesheet() concatenates the base CSS with all registered fragments, processes the result through lightningcss, and hashes the output.
  4. The hashed filename is available to the layout via styles::stylesheet().route.
  5. A single Axum route serves the processed CSS from memory with long-lived cache headers.

No build step. No CSS preprocessor. No file watchers. The Rust compiler and lightningcss handle everything at compile time and startup.

Data

Database with PostgreSQL and SQLx

SQLx is an async database library for Rust that checks your SQL queries against a real PostgreSQL database at compile time. If a query references a column that does not exist, uses the wrong type, or has a syntax error, the compiler catches it before the application runs. This is the primary reason to choose SQLx over other database libraries.

SQLx is not an ORM. There is no query builder, no model macros, and no schema-to-struct code generation. Write SQL directly, and SQLx verifies it.

Setup

Add SQLx to your Cargo.toml:

[dependencies]
sqlx = { version = "0.8", features = [
    "runtime-tokio",
    "tls-rustls-ring-webpki",
    "postgres",
    "macros",
    "migrate",
] }

Feature breakdown:

  • runtime-tokio selects the Tokio async runtime.
  • tls-rustls-ring-webpki enables TLS via rustls with WebPKI certificate roots. It is harmless for local development without TLS: the connection negotiates plaintext when the server does not require encryption.
  • postgres enables the PostgreSQL driver.
  • macros enables query!, query_as!, and the other compile-time checked query macros.
  • migrate enables the migration runner and migrate! macro.

Add type integration features as needed:

sqlx = { version = "0.8", features = [
    "runtime-tokio",
    "tls-rustls-ring-webpki",
    "postgres",
    "macros",
    "migrate",
    "uuid",
    "time",
    "json",
] }

These enable uuid::Uuid, time crate date/time types, and serde_json::Value / Json<T> for JSONB columns, respectively.

Install the CLI

The sqlx-cli tool manages databases and migrations:

cargo install sqlx-cli --no-default-features --features rustls,postgres

This installs only PostgreSQL support, which keeps the build faster than the full default install.

Connecting to PostgreSQL

SQLx reads the database connection string from the DATABASE_URL environment variable. Set it in a .env file at the project root:

DATABASE_URL=postgres://myapp:password@localhost:5432/myapp_dev

The format is postgres://user:password@host:port/database. SQLx’s macros use dotenvy to read .env automatically at compile time.

PostgreSQL itself should be running as a Docker container managed by Docker Compose. See the Development Environment section for the container setup.

Connection pooling

Create a connection pool at application startup and share it through Axum’s application state. PgPool is internally reference-counted, so cloning it is cheap.

use sqlx::postgres::PgPoolOptions;
use sqlx::PgPool;

let pool = PgPoolOptions::new()
    .max_connections(5)
    .connect(&std::env::var("DATABASE_URL").expect("DATABASE_URL must be set"))
    .await
    .expect("failed to connect to database");

Pass the pool into your Axum AppState:

#[derive(Clone)]
struct AppState {
    db: PgPool,
}

let app = Router::new()
    .route("/", get(index))
    .with_state(AppState { db: pool });

Handlers extract it with State:

async fn list_users(State(state): State<AppState>) -> impl IntoResponse {
    let users = sqlx::query_as!(User, "SELECT id, name, email FROM users")
        .fetch_all(&state.db)
        .await
        .unwrap();
    // render users
}

The default pool configuration is reasonable for most applications:

Option            Default    Purpose
max_connections   10         Maximum connections in the pool
min_connections   0          Minimum idle connections maintained
acquire_timeout   30s        How long to wait for a connection
idle_timeout      10 min     Close idle connections after this duration
max_lifetime      30 min     Close connections older than this

Override them on PgPoolOptions if needed. For most web applications, setting max_connections to match your expected concurrency and leaving the rest at defaults works well.

For lazy connection establishment (useful in tests or CLIs where the database might not be needed):

let pool = PgPoolOptions::new()
    .max_connections(5)
    .connect_lazy(&database_url)?;

This returns immediately. Connections are established on first use.

Compile-time checked queries

The query! macro is the core of SQLx. At compile time, it connects to the database specified by DATABASE_URL, sends the query to PostgreSQL for parsing and type-checking, and generates Rust code that matches the result columns.

query!

query! returns an anonymous record type with fields matching the query’s output columns:

let row = sqlx::query!("SELECT id, name, email FROM users WHERE id = $1", user_id)
    .fetch_one(&pool)
    .await?;

// row.id: i32
// row.name: String
// row.email: String

Bind parameters use PostgreSQL’s $1, $2, … syntax. The macro checks that the number and types of bind arguments match what the query expects.

query_as!

query_as! maps results directly into a named struct:

struct User {
    id: i32,
    name: String,
    email: String,
}

let user = sqlx::query_as!(User, "SELECT id, name, email FROM users WHERE id = $1", user_id)
    .fetch_one(&pool)
    .await?;

The macro generates a struct literal, matching column names to field names. It does not use the FromRow trait. The struct does not need any derive macros.

Fetch methods

Choose the fetch method based on how many rows you expect:

Method                   Returns                            Use when
.execute(&pool)          PgQueryResult                      INSERT, UPDATE, DELETE with no RETURNING
.fetch_one(&pool)        T                                  Exactly one row expected (errors if none)
.fetch_optional(&pool)   Option<T>                          Zero or one row
.fetch_all(&pool)        Vec<T>                             Collect all rows into a Vec
.fetch(&pool)            impl Stream<Item = Result<T>>      Stream rows without buffering

fetch_one returns an error if the query produces zero rows; if it produces more than one, the first row is returned and the rest are ignored. Use fetch_optional when the row might not exist.

Nullable columns

The macro infers nullability from the database schema. A column with a NOT NULL constraint maps to T; a nullable column maps to Option<T>.

Override nullability in the column alias when the macro gets it wrong (common with expressions, COALESCE, or complex joins):

// Force non-null (decoding fails at runtime if the value is NULL)
sqlx::query!(r#"SELECT count(*) as "count!" FROM users"#)

// Force nullable
sqlx::query!(r#"SELECT name as "name?" FROM users"#)

// Override both nullability and type
sqlx::query!(r#"SELECT id as "id!: uuid::Uuid" FROM users"#)

The override syntax uses the column alias in double quotes:

  • "col!" forces non-null
  • "col?" forces nullable
  • "col: Type" overrides the Rust type
  • "col!: Type" forces non-null with a type override

RETURNING clauses

PostgreSQL’s RETURNING clause turns INSERT, UPDATE, and DELETE into queries that produce rows. Use fetch_one with query_as! to get the created or modified record back:

let user = sqlx::query_as!(
    User,
    "INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email",
    name,
    email
)
.fetch_one(&pool)
.await?;

This avoids a separate SELECT after every insert.

Offline mode for CI

Compile-time query checking requires a running PostgreSQL database. In CI environments where a database is not available during compilation, SQLx provides offline mode.

  1. With the database running locally, generate the query cache:
cargo sqlx prepare --workspace

This creates a .sqlx/ directory containing metadata for every compile-time checked query in the project.

  2. Commit .sqlx/ to version control.

  3. When DATABASE_URL is absent at compile time and .sqlx/ exists, the macros use the cached metadata instead of connecting to a database.

  4. In CI, verify the cache is up to date:

cargo sqlx prepare --workspace --check

This fails if any query has changed without regenerating the cache, catching stale metadata before it causes runtime surprises.

To include queries from tests and other non-default targets:

cargo sqlx prepare --workspace -- --all-targets --all-features

Set SQLX_OFFLINE=true to force offline mode even when DATABASE_URL is present. This is useful for verifying that the offline cache works correctly.

Writing and organising queries

Keep queries inline, next to the code that uses them. SQLx’s macros are designed for this: the query text and its bind parameters live together in the handler or module function, so the reader sees the full picture without jumping between files.

pub async fn find_user_by_email(pool: &PgPool, email: &str) -> Result<Option<User>, sqlx::Error> {
    sqlx::query_as!(
        User,
        "SELECT id, name, email, created_at FROM users WHERE email = $1",
        email
    )
    .fetch_optional(pool)
    .await
}

pub async fn create_user(pool: &PgPool, name: &str, email: &str) -> Result<User, sqlx::Error> {
    sqlx::query_as!(
        User,
        "INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email, created_at",
        name,
        email
    )
    .fetch_one(pool)
    .await
}

For queries that are genuinely long (complex joins, CTEs), query_file_as! reads SQL from a separate file:

-- queries/users_with_posts.sql
SELECT u.id, u.name, u.email, count(p.id) as "post_count!"
FROM users u
LEFT JOIN posts p ON p.user_id = u.id
GROUP BY u.id, u.name, u.email
ORDER BY u.name

let users = sqlx::query_file_as!(UserWithPosts, "queries/users_with_posts.sql")
    .fetch_all(&pool)
    .await?;

File paths are relative to the crate’s Cargo.toml directory. The file is still checked at compile time against the database.

Mapping query results to Rust types

With macros (preferred)

query_as! maps columns to struct fields by name. The struct needs no special derives:

struct User {
    id: i32,
    name: String,
    email: String,
    bio: Option<String>,       // nullable column
    created_at: time::OffsetDateTime, // TIMESTAMPTZ with the `time` feature
}

let users = sqlx::query_as!(User, "SELECT id, name, email, bio, created_at FROM users")
    .fetch_all(&pool)
    .await?;

The macro matches column names to field names at compile time. If the types do not match (e.g., a NOT NULL TEXT column mapped to i32), compilation fails.

With FromRow (runtime)

For cases where compile-time checking is not available (dynamic queries, generic code), use sqlx::FromRow:

#[derive(Debug, sqlx::FromRow)]
struct User {
    id: i32,
    name: String,
    email: String,
    bio: Option<String>,
}

let users: Vec<User> = sqlx::query_as::<_, User>("SELECT id, name, email, bio FROM users")
    .fetch_all(&pool)
    .await?;

Note the distinction: query_as! (with !) is a macro that checks at compile time and does not use FromRow. query_as::<_, T>() (without !) is a runtime function that requires T: FromRow.

FromRow supports field-level attributes for column renaming, defaults, and type conversion:

#[derive(sqlx::FromRow)]
struct User {
    id: i32,
    #[sqlx(rename = "user_name")]
    name: String,
    #[sqlx(default)]
    role: String,
}

PostgreSQL type mappings

SQLx maps PostgreSQL types to Rust types. The common mappings, using the feature flags from the setup above:

PostgreSQL                    Rust                            Feature
BOOL                          bool
INT2 / SMALLINT               i16
INT4 / INT                    i32
INT8 / BIGINT                 i64
FLOAT4 / REAL                 f32
FLOAT8 / DOUBLE PRECISION     f64
TEXT, VARCHAR                 String
BYTEA                         Vec<u8>
UUID                          uuid::Uuid                      uuid
TIMESTAMPTZ                   time::OffsetDateTime            time
TIMESTAMP                     time::PrimitiveDateTime         time
DATE                          time::Date                      time
TIME                          time::Time                      time
JSON, JSONB                   serde_json::Value or Json<T>    json
INT4[], TEXT[], etc.          Vec<T>

UUID

UUID primary keys are common in web applications. Enable the uuid feature and use uuid::Uuid directly:

use uuid::Uuid;

struct User {
    id: Uuid,
    name: String,
    email: String,
}

let user = sqlx::query_as!(
    User,
    "INSERT INTO users (id, name, email) VALUES ($1, $2, $3) RETURNING id, name, email",
    Uuid::new_v4(),
    name,
    email
)
.fetch_one(&pool)
.await?;

Add uuid to your direct dependencies too, since you will construct values from it:

uuid = { version = "1", features = ["v4"] }

Timestamps with the time crate

Enable the time feature for date and time support. TIMESTAMPTZ columns map to time::OffsetDateTime, which carries a UTC offset:

use time::OffsetDateTime;

struct AuditEntry {
    id: i32,
    action: String,
    created_at: OffsetDateTime,
}

let entry = sqlx::query_as!(
    AuditEntry,
    "INSERT INTO audit_log (action) VALUES ($1) RETURNING id, action, created_at",
    action
)
.fetch_one(&pool)
.await?;

PostgreSQL stores TIMESTAMPTZ in UTC internally. The OffsetDateTime you receive will always have a UTC offset.

For the time crate, add it as a direct dependency:

time = "0.3"

JSONB

JSONB is useful for semi-structured data that does not warrant its own columns. Enable the json feature and use serde_json::Value for unstructured JSON or sqlx::types::Json<T> for typed deserialization:

use sqlx::types::Json;

#[derive(serde::Serialize, serde::Deserialize)]
struct Preferences {
    theme: String,
    notifications: bool,
}

// Insert typed JSON
sqlx::query!(
    "UPDATE users SET preferences = $1 WHERE id = $2",
    Json(&prefs) as _,
    user_id
)
.execute(&pool)
.await?;

// Read typed JSON
let row = sqlx::query!(
    r#"SELECT preferences as "preferences!: Json<Preferences>" FROM users WHERE id = $1"#,
    user_id
)
.fetch_one(&pool)
.await?;

let prefs: Preferences = row.preferences.0;

The as _ cast on the insert side is required to help the macro infer the correct PostgreSQL type. On the read side, the type override in the column alias tells the macro to deserialise into Json<Preferences>.

Custom enum types

Map PostgreSQL enum types to Rust enums with sqlx::Type:

#[derive(Debug, sqlx::Type)]
#[sqlx(type_name = "user_role", rename_all = "lowercase")]
enum UserRole {
    Admin,
    Member,
    Guest,
}

This corresponds to a PostgreSQL type created with:

CREATE TYPE user_role AS ENUM ('admin', 'member', 'guest');

Use the enum directly in queries:

sqlx::query!(
    "INSERT INTO users (name, role) VALUES ($1, $2)",
    name,
    role as UserRole
)
.execute(&pool)
.await?;

The as UserRole cast tells the macro which Rust type to use for encoding.

Transactions

A transaction groups multiple queries into an atomic unit. Either all succeed and the changes are committed, or any failure rolls everything back.

Start a transaction with pool.begin():

let mut tx = pool.begin().await?;

let user = sqlx::query_as!(
    User,
    "INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email",
    name,
    email
)
.fetch_one(&mut *tx)
.await?;

sqlx::query!(
    "INSERT INTO audit_log (user_id, action) VALUES ($1, $2)",
    user.id,
    "account_created"
)
.execute(&mut *tx)
.await?;

tx.commit().await?;

Pass the transaction to queries with &mut *tx. This dereferences the Transaction to the underlying connection and reborrows it.

If commit() is never called, the transaction rolls back when it is dropped. This makes the ? operator transaction-safe: if any query fails and the function returns early, the transaction is dropped and automatically rolled back.

async fn transfer(
    pool: &PgPool,
    from_id: i32,
    to_id: i32,
    amount: i64,
) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;

    sqlx::query!(
        "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
        amount,
        from_id
    )
    .execute(&mut *tx)
    .await?;  // rolls back on failure

    sqlx::query!(
        "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
        amount,
        to_id
    )
    .execute(&mut *tx)
    .await?;  // rolls back on failure

    tx.commit().await?;
    Ok(())
}

For explicit rollback (useful when a business rule fails after the queries succeed):

if balance_too_low {
    tx.rollback().await?;
    return Err(/* ... */);
}

Gotchas

DATABASE_URL must be set at compile time. The query! macros connect to PostgreSQL during compilation. If the variable is missing and no .sqlx/ cache exists, compilation fails. Keep a .env file in your project root for local development.
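A minimal .env for local development might look like this (the database name and credentials are placeholders for your own setup):

```
# .env -- local development only; never commit production credentials
DATABASE_URL=postgres://postgres:password@localhost:5432/myapp
```

Both sqlx-cli and the query! macros read this file automatically from the project root.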

&mut *tx syntax. Passing a transaction to a query requires &mut *tx, not &mut tx or &tx. The Transaction type implements DerefMut to the underlying connection; the dereference-reborrow is needed for the borrow checker.

Column name matching in query_as!. The column names in the SELECT must match the struct field names exactly. Use AS to rename columns if the database naming convention differs:

sqlx::query_as!(
    User,
    "SELECT id, user_name AS name FROM users"
)

Nullable inference in expressions. The macro sometimes cannot determine nullability for computed expressions (count(*), COALESCE, subqueries). Use the "col!" override to tell it the result is non-null:

sqlx::query!(r#"SELECT count(*) as "total!" FROM users"#)

Pool exhaustion. If all connections are in use and acquire_timeout is reached, the next query fails. This usually means the pool is too small for the application’s concurrency, or a handler is holding a connection too long (a common cause is doing non-database work while a transaction is open). Keep transactions short.

Database Migrations

Migrations track every change to your database schema as versioned SQL files. SQLx includes a migration system that runs these files in order, records which have been applied, and validates that applied migrations have not been modified. The same sqlx-cli tool installed in the database section manages the full lifecycle.

Creating migrations

Generate a new migration with sqlx migrate add. Use the -r flag to create reversible migrations, which produce a .up.sql and .down.sql pair:

sqlx migrate add -r create_users

This creates two files in the migrations/ directory at the project root:

migrations/
  20260226140000_create_users.up.sql
  20260226140000_create_users.down.sql

The timestamp prefix is generated in UTC and determines execution order. Timestamp versioning is the default and prevents conflicts when multiple developers create migrations concurrently.

Once the first migration uses -r, subsequent calls to sqlx migrate add will produce reversible pairs automatically. The CLI infers the mode from existing files.

Writing the SQL

The .up.sql file contains the forward schema change:

-- migrations/20260226140000_create_users.up.sql
CREATE TABLE users (
    id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email      TEXT NOT NULL UNIQUE,
    name       TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

The .down.sql file reverses it:

-- migrations/20260226140000_create_users.down.sql
DROP TABLE users;

Keep each migration focused on a single change. A migration that creates a table should not also modify a different table. This makes reverting predictable and keeps the history readable.

Running migrations

At application startup

The migrate! macro embeds migration files directly into the compiled binary. Call .run() on the pool at startup to apply any pending migrations before the application begins serving requests:

use sqlx::PgPool;

#[tokio::main]
async fn main() {
    let pool = PgPool::connect(&std::env::var("DATABASE_URL").expect("DATABASE_URL must be set"))
        .await
        .expect("failed to connect to database");

    sqlx::migrate!()
        .run(&pool)
        .await
        .expect("failed to run migrations");

    // build router, start server...
}

migrate!() reads from the migrations/ directory relative to Cargo.toml. The migration SQL is baked into the binary at compile time, so the deployed binary is self-contained; it does not need the migration files on disk.

This is the simplest deployment model. One binary, one process, and the schema is always in sync with the code.

With the CLI

For larger deployments where migrations should run as a separate step before the application starts, use the CLI directly:

sqlx migrate run

This reads DATABASE_URL from the environment or a .env file. The CLI approach gives you explicit control over when schema changes happen, which matters when you have multiple application instances starting simultaneously, need to run migrations from a CI pipeline before deployment, or want human review of what will be applied before it runs.

The two approaches are not mutually exclusive. migrate run is idempotent: it skips any migration already recorded in the database. You can run migrations from the CLI in your deployment pipeline and keep sqlx::migrate!().run(&pool) in your application code as a safety net.

Recompilation caveat

The migrate! macro runs at compile time, but Cargo does not automatically detect changes to non-Rust files. Adding a new .sql migration without modifying any .rs file will not trigger recompilation. The application will silently use the old set of migrations.

Fix this by generating a build.rs that watches the migrations directory:

sqlx migrate build-script

This creates a build.rs at the project root:

// generated by `sqlx migrate build-script`
fn main() {
    println!("cargo:rerun-if-changed=migrations");
}

Commit this file. With it in place, any change to the migrations/ directory triggers a rebuild.

Reverting migrations

Revert the most recently applied migration:

sqlx migrate revert

This runs the .down.sql file for the last applied migration. Run it multiple times to step back further, or target a specific version:

# revert everything after version 20260226140000
sqlx migrate revert --target-version 20260226140000

# revert all migrations
sqlx migrate revert --target-version 0

Reverting is primarily a development tool. In production, writing a new forward migration to undo a change is usually safer than reverting, because other parts of the system may already depend on the schema change.
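As a sketch of the forward-only approach: suppose a column added in an earlier migration turns out to be unnecessary. Rather than reverting, create a new migration with sqlx migrate add -r and undo the change going forward (the table and column names here are illustrative):

```sql
-- migrations/20260301090000_drop_users_nickname.up.sql
ALTER TABLE users DROP COLUMN nickname;

-- migrations/20260301090000_drop_users_nickname.down.sql
ALTER TABLE users ADD COLUMN nickname TEXT;
```

The history stays append-only, and every environment converges on the same schema by running forward.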

Checking migration status

Inspect which migrations have been applied and whether any are out of sync:

sqlx migrate info

This prints each migration’s version, description, applied status, and whether its checksum matches the file on disk. Use this to diagnose problems before making changes, especially in shared environments.

How SQLx tracks migrations

SQLx creates a _sqlx_migrations table automatically on first run. It records each applied migration’s version, description, checksum (a hash of the SQL content), execution time, and success status.

Two behaviours follow from this:

Checksum validation. Every time migrations run, SQLx compares the stored checksum for each already-applied migration against the current file on disk. If a file has been edited after it was applied, SQLx raises an error. This catches accidental edits to applied migrations. If you need to correct a mistake, write a new migration rather than editing the old one.

Dirty state detection. If a migration fails partway through, its row may be recorded with success = false. SQLx refuses to run further migrations until the dirty state is resolved. In development, the simplest fix is to drop and recreate the database. In production, investigate the failure, fix it manually, and update the row.
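In production, resolving the dirty state typically means inspecting and correcting the tracking table by hand. A hedged sketch, assuming you have already verified and manually completed (or undone) the partial schema changes:

```sql
-- Inspect the failed entry
SELECT version, description, success
FROM _sqlx_migrations
WHERE success = false;

-- Once the partial changes are resolved, remove the failed row
-- so the migration can run again:
DELETE FROM _sqlx_migrations WHERE success = false;
```

Only delete the row after confirming the database matches the state the migration expects; otherwise re-running it will fail again.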

Managing migrations across environments

Development

The typical workflow during development:

# create the database (if it doesn't exist)
sqlx database create

# apply all pending migrations
sqlx migrate run

# full reset when needed
sqlx database drop
sqlx database create
sqlx migrate run

CI

In CI, create a disposable database, apply migrations, and verify the offline query cache is up to date:

sqlx database create
sqlx migrate run
cargo sqlx prepare --workspace --check

The --check flag fails the build if any query! macro’s cached metadata in .sqlx/ is stale. This enforces that developers run cargo sqlx prepare after schema changes.

Production

For applications using the embedded migrate!() macro, no separate migration step is needed. The binary applies its own migrations on startup.

For CLI-based deployments, run sqlx migrate run as part of the deployment process, before starting the application. In Docker, this is typically an entrypoint script or an init container. The --dry-run flag shows what would be applied without executing, useful for pre-deployment review:

sqlx migrate run --dry-run

Concurrency safety

SQLx acquires a PostgreSQL advisory lock before running migrations. If multiple instances start simultaneously, only one will apply migrations while the others wait. This prevents race conditions during rolling deployments.

Gotchas

Never edit an applied migration. The checksum validation will reject it. Write a new corrective migration instead.

Don’t mix simple and reversible migrations. SQLx infers the migration type from existing files. Stick with one style (reversible, using -r) throughout the project.

Commit build.rs and .sqlx/. The build.rs file (from sqlx migrate build-script) ensures new migrations trigger recompilation. The .sqlx/ directory (from cargo sqlx prepare) enables compilation without a live database. Both belong in version control.

DATABASE_URL takes precedence over .sqlx/. In CI, if DATABASE_URL is set during compilation, the query! macros will try to connect to it rather than using the offline cache. Set SQLX_OFFLINE=true explicitly when you want to force offline mode.

Search

Auth & Security

Authentication

Session-based authentication fits naturally into a hypermedia-driven architecture. The server manages all auth state. The browser sends a cookie. No client-side token management, no JWT parsing in JavaScript, no OAuth dance in the browser. The server decides who the user is, renders the appropriate HTML, and sends it.

This section builds authentication with tower-sessions for session management, argon2 for password hashing, and tower-csrf for cross-site request forgery protection. PostgreSQL stores both user records and session data via tower-sessions-sqlx-store.

Dependencies

[dependencies]
tower-sessions = "0.14"
tower-sessions-sqlx-store = { version = "0.15", features = ["postgres"] }
tower-csrf = "0.1"
argon2 = "0.5"
sqlx = { version = "0.8", features = ["runtime-tokio", "postgres", "time", "uuid"] }
time = "0.3"
uuid = { version = "1", features = ["v4", "serde"] }

tower-sessions provides the session middleware layer. tower-sessions-sqlx-store backs it with PostgreSQL so sessions survive server restarts. argon2 handles password hashing using the Argon2id algorithm, OWASP's primary recommendation. tower-csrf protects state-changing requests from cross-site forgery.

Note the version pairing: tower-sessions 0.14 and tower-sessions-sqlx-store 0.15 are compatible through their shared dependency on tower-sessions-core 0.14. Check both crates for newer matching releases.

Password hashing

Argon2id is memory-hard and CPU-hard, which makes brute-force attacks expensive even with GPUs. The argon2 crate provides a pure-Rust implementation.

Passwords are stored as PHC-format strings. The algorithm, version, and parameters are embedded alongside the hash, making the value self-describing:

$argon2id$v=19$m=65536,t=2,p=1$<salt>$<hash>

This means you can change hashing parameters over time without breaking verification of existing hashes. During verification, the argon2 crate reads parameters from the stored hash, not from the Argon2 instance.
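You can see the self-describing structure by splitting the string on '$'. This is purely illustrative; application code never needs to do this, because PasswordHash::new parses it for you:

```rust
fn main() {
    // A PHC-format string with placeholder salt and hash values
    let phc = "$argon2id$v=19$m=65536,t=2,p=1$c29tZXNhbHQ$aGFzaGJ5dGVz";
    let fields: Vec<&str> = phc.split('$').filter(|s| !s.is_empty()).collect();

    assert_eq!(fields[0], "argon2id");        // algorithm
    assert_eq!(fields[1], "v=19");            // version
    assert_eq!(fields[2], "m=65536,t=2,p=1"); // memory, iterations, parallelism
    // fields[3] is the salt, fields[4] the hash, both base64-encoded
}
```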

use argon2::{
    password_hash::{
        rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString,
    },
    Algorithm, Argon2, Params, Version,
};

fn build_hasher() -> Argon2<'static> {
    let params = Params::new(
        64 * 1024,  // 64 MiB memory cost
        2,          // 2 iterations
        1,          // 1 degree of parallelism
        None,       // default output length (32 bytes)
    )
    .expect("valid argon2 params");

    Argon2::new(Algorithm::Argon2id, Version::V0x13, params)
}

fn hash_password(password: &str) -> Result<String, argon2::password_hash::Error> {
    let salt = SaltString::generate(&mut OsRng);
    let hash = build_hasher().hash_password(password.as_bytes(), &salt)?;
    Ok(hash.to_string())
}

fn verify_password(
    password: &str,
    stored_hash: &str,
) -> Result<(), argon2::password_hash::Error> {
    let parsed = PasswordHash::new(stored_hash)?;
    Argon2::default().verify_password(password.as_bytes(), &parsed)
}

SaltString::generate(&mut OsRng) produces a cryptographically random salt using the OS random number generator. The build_hasher function configures Argon2id with 64 MiB of memory, which is a reasonable starting point. Argon2::default() uses 19 MiB (the OWASP floor), but the recommendation is 64 MiB or higher if your server can handle it. Tune the memory parameter upward until hashing takes roughly 200ms on your production hardware.

The verify_password function uses Argon2::default() because it reads parameters from the stored hash, not from the instance. This means old hashes created with different parameters continue to verify correctly.

Peppering

A pepper is a secret key stored only in the application server, never in the database. If the database leaks but the application server is not compromised, the pepper makes the stolen hashes unverifiable. Argon2 has a built-in secret parameter for this:

fn build_hasher_with_pepper(pepper: &[u8]) -> Argon2<'_> {
    let params = Params::new(64 * 1024, 2, 1, None).expect("valid argon2 params");
    Argon2::new_with_secret(pepper, Algorithm::Argon2id, Version::V0x13, params)
        .expect("valid argon2 secret")
}

Generate the pepper once (32 random bytes from a CSPRNG), store it as an environment variable or in a secrets manager, and load it at application startup. If the pepper is lost, all password hashes become unverifiable and every user must reset their password. Treat it with the same care as a database encryption key.
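One way to generate the pepper is with OpenSSL: 32 random bytes, hex-encoded to 64 characters for easy storage in an environment variable:

```shell
# one-time: generate 32 random bytes, hex-encoded
openssl rand -hex 32
```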

If you want a simpler API, the password-auth crate wraps argon2 with two functions (generate_hash, verify_password) and provides is_hash_obsolete() for detecting when stored hashes should be re-hashed with newer parameters. The lower-level API shown here gives more control when you need it.

Async context

Argon2 hashing is CPU-intensive. A single hash takes 50-200ms depending on hardware. Running it directly in an async handler blocks the tokio worker thread and starves other requests. Always offload to the blocking thread pool:

use tokio::task;

async fn hash_password_async(password: String) -> Result<String, anyhow::Error> {
    task::spawn_blocking(move || hash_password(&password))
        .await?
        .map_err(Into::into)
}

async fn verify_password_async(
    password: String,
    stored_hash: String,
) -> Result<(), anyhow::Error> {
    task::spawn_blocking(move || verify_password(&password, &stored_hash))
        .await?
        .map_err(Into::into)
}

The closure takes owned String values because spawn_blocking requires 'static. This moves the work to tokio’s dedicated blocking thread pool (separate from the async worker threads), keeping the async runtime responsive.

Password validation

Enforce constraints before hashing:

  • Minimum length: 10 characters. Shorter passwords are too easy to brute-force.
  • Maximum length: 128 characters. Without a maximum, an attacker can submit multi-megabyte passwords to exhaust server resources through expensive hashing.
  • Unicode normalisation: Apply NFKC normalisation before hashing. Different systems represent the same characters differently, which causes cross-platform login failures. The unicode-normalization crate handles this.

For password quality checking, the zxcvbn crate (a Rust port of Dropbox’s password strength estimator) catches common and weak passwords without maintaining a separate banned-password list.
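A minimal sketch of the length checks (NFKC normalisation and zxcvbn scoring omitted to keep it dependency-free; the function name is illustrative):

```rust
/// Basic length validation before hashing. Real code would also apply
/// NFKC normalisation and a strength check on top of this.
fn validate_password(password: &str) -> Result<(), &'static str> {
    // minimum counted in characters, so multi-byte input isn't over-counted
    if password.chars().count() < 10 {
        return Err("password must be at least 10 characters");
    }
    // maximum counted in bytes, to bound the cost of hashing
    if password.len() > 128 {
        return Err("password must be at most 128 bytes");
    }
    Ok(())
}

fn main() {
    assert!(validate_password("short").is_err());
    assert!(validate_password("correct-horse-battery").is_ok());
}
```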

Session layer

Set up a PostgreSQL-backed session store, run its migration to create the session table, and start a background task to clean up expired sessions.

use axum::Router;
use sqlx::PgPool;
use tower_sessions::{Expiry, SessionManagerLayer};
use tower_sessions_sqlx_store::PostgresStore;
use time::Duration;

async fn session_layer(pool: PgPool) -> SessionManagerLayer<PostgresStore> {
    let store = PostgresStore::new(pool);
    store.migrate().await.expect("session table migration failed");

    // Clean up expired sessions every 60 seconds
    tokio::task::spawn(
        store
            .clone()
            .continuously_delete_expired(tokio::time::Duration::from_secs(60)),
    );

    SessionManagerLayer::new(store)
        .with_secure(true)
        .with_expiry(Expiry::OnInactivity(Duration::hours(24)))
}

PostgresStore::migrate() creates a tower_sessions schema with a session table (columns: id TEXT, data BYTEA, expiry_date TIMESTAMPTZ). The continuously_delete_expired task runs in the background, removing sessions that have passed their expiry date.

Cookie configuration

SessionManagerLayer configures the session cookie through builder methods:

  • with_secure(true) sets the Secure flag so the cookie is only sent over HTTPS. Always enable this in production.
  • with_http_only(true) is the default. The cookie is inaccessible to JavaScript, protecting against XSS-based session theft.
  • with_same_site(SameSite::Lax) is the default. Cookies are sent on top-level navigations but not on cross-site subrequests. Combined with CSRF protection, this is sufficient for most applications. Use SameSite::Strict for high-security applications, with the trade-off that users clicking links to your site from email will appear logged out on first load.

Expiry options

tower-sessions supports three expiry strategies:

  • Expiry::OnInactivity(Duration) resets the expiration on each request. A sliding window. Good for most applications.
  • Expiry::AtDateTime(OffsetDateTime) sets a fixed expiration. The session expires at that time regardless of activity.
  • Expiry::OnSessionEnd creates a browser session cookie with no Max-Age. The cookie is deleted when the browser closes.

The default when no expiry is set is two weeks. For applications handling sensitive data, consider shorter windows (1-24 hours) and requiring re-authentication for high-risk actions.

Layer ordering

Apply the session layer as the outermost middleware so sessions are available to all inner layers and handlers:

let app = Router::new()
    .route("/register", get(show_register).post(handle_register))
    .route("/login", get(show_login).post(handle_login))
    .route("/logout", post(handle_logout))
    .layer(csrf_layer)
    .layer(session_layer(pool).await);

In Axum, the last .layer() call is the outermost layer and processes requests first. Here, the session layer processes first (loads the session from the cookie), then the CSRF layer checks the request origin, then the handler runs.

User table

Create a migration for the users table:

CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT UNIQUE NOT NULL,
    email_confirmed_at TIMESTAMPTZ,
    password_hash TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

The corresponding Rust struct:

use sqlx::types::time::OffsetDateTime;
use uuid::Uuid;

#[derive(Debug, Clone, sqlx::FromRow)]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub email_confirmed_at: Option<OffsetDateTime>,
    pub password_hash: String,
    pub created_at: OffsetDateTime,
    pub updated_at: OffsetDateTime,
}

Registration

The registration handler validates input, hashes the password, and creates the user. It does not reveal whether an email is already taken, to prevent account enumeration.

use axum::{extract::State, response::IntoResponse, Form};
use maud::{html, Markup};

#[derive(serde::Deserialize)]
struct RegisterForm {
    email: String,
    password: String,
    password_confirmation: String,
}

async fn show_register() -> Markup {
    html! {
        h1 { "Create an account" }
        form method="post" action="/register" {
            label for="email" { "Email" }
            input type="email" name="email" id="email" required;

            label for="password" { "Password" }
            input type="password" name="password" id="password"
                required minlength="10" maxlength="128"
                autocomplete="new-password";

            label for="password_confirmation" { "Confirm password" }
            input type="password" name="password_confirmation"
                id="password_confirmation" required
                autocomplete="new-password";

            button type="submit" { "Register" }
        }
    }
}

async fn handle_register(
    State(state): State<AppState>,
    Form(form): Form<RegisterForm>,
) -> impl IntoResponse {
    if form.password != form.password_confirmation {
        return show_error("Passwords do not match").into_response();
    }
    // minimum length in characters; maximum in bytes to bound hashing cost
    if form.password.chars().count() < 10 || form.password.len() > 128 {
        return show_error("Password must be 10 to 128 characters").into_response();
    }

    let password_hash = match hash_password_async(form.password).await {
        Ok(hash) => hash,
        Err(_) => return show_error("Registration failed").into_response(),
    };

    // ON CONFLICT DO NOTHING prevents errors on duplicate email
    // without revealing whether the email already exists
    let result = sqlx::query(
        "INSERT INTO users (email, password_hash) \
         VALUES ($1, $2) ON CONFLICT (email) DO NOTHING",
    )
    .bind(&form.email)
    .bind(&password_hash)
    .execute(&state.db)
    .await;

    // Always show the same message. In the background, send different emails:
    // - New user: send a confirmation link
    // - Existing email: send "someone tried to register with your email"
    // See the Email Confirmation section below for the token flow.
    html! {
        h1 { "Check your email" }
        p { "If this email can be used for an account, you will receive further instructions." }
    }
    .into_response()
}

The ON CONFLICT (email) DO NOTHING query combined with a uniform response prevents attackers from probing which emails have accounts. The autocomplete="new-password" attribute tells password managers this is a registration form.

Login

The login handler verifies the password against the stored hash, creates a session, and cycles the session ID to prevent fixation attacks.

use axum::response::Redirect;
use tower_sessions::Session;

#[derive(serde::Deserialize)]
struct LoginForm {
    email: String,
    password: String,
}

async fn handle_login(
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<LoginForm>,
) -> impl IntoResponse {
    let user: Option<User> = sqlx::query_as("SELECT * FROM users WHERE email = $1")
        .bind(&form.email)
        .fetch_optional(&state.db)
        .await
        .unwrap_or(None);

    let Some(user) = user else {
        // Run a dummy hash to prevent timing-based user enumeration
        let _ = hash_password_async("dummy-password".to_string()).await;
        return show_login_error("Invalid email or password").into_response();
    };

    if verify_password_async(form.password, user.password_hash.clone())
        .await
        .is_err()
    {
        return show_login_error("Invalid email or password").into_response();
    }

    // Prevent session fixation: generate a new session ID, preserving data
    session.cycle_id().await.expect("failed to cycle session ID");

    // Store user identity in the session
    session
        .insert("user_id", user.id)
        .await
        .expect("failed to insert session data");

    // Validate redirect target if using a ?next= parameter.
    // Only allow relative paths. Reject absolute URLs to prevent open redirects.
    Redirect::to("/").into_response()
}

Three security details matter here:

Timing attack prevention. When no user is found, a dummy hash_password_async call runs so the response time is similar regardless of whether the email exists. Without this, an attacker can distinguish “email not found” from “wrong password” by measuring response latency.

Session fixation prevention. session.cycle_id() generates a new session ID while preserving session data. Without this, an attacker who planted a known session ID (via a crafted link or subdomain cookie injection) could hijack the authenticated session.

Post-login redirect validation. If you add a ?next= parameter so users return to the page they were visiting before login, validate the target strictly. Allow only relative paths. Reject absolute URLs, URLs with different schemes or hosts, and URLs with embedded credentials. Without validation, an attacker can craft https://yoursite.com/login?next=https://evil.com, and the user sees a legitimate login page that redirects to a phishing site after authentication.
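A minimal sketch of such a check (the function name is illustrative):

```rust
/// Accept only site-relative paths for a post-login redirect.
/// Rejects absolute URLs ("https://evil.com"), scheme-relative URLs
/// ("//evil.com"), and backslashes, which some browsers normalise to "/".
fn safe_redirect_target(next: Option<&str>) -> &str {
    match next {
        Some(p) if p.starts_with('/') && !p.starts_with("//") && !p.contains('\\') => p,
        _ => "/",
    }
}

fn main() {
    assert_eq!(safe_redirect_target(Some("/settings")), "/settings");
    assert_eq!(safe_redirect_target(Some("https://evil.com")), "/");
    assert_eq!(safe_redirect_target(Some("//evil.com")), "/");
    assert_eq!(safe_redirect_target(None), "/");
}
```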

The error message is the same for both “user not found” and “wrong password”. Never reveal which one failed.
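The relative-path rule for a ?next= parameter can be sketched as a small predicate (is_safe_redirect is a hypothetical helper, not part of any crate used here):

```rust
/// Accept only same-site relative paths for a ?next= parameter.
/// Rejects absolute URLs ("https://evil.com"), scheme-relative URLs
/// ("//evil.com"), and backslash variants that some browsers normalise
/// to "//".
fn is_safe_redirect(next: &str) -> bool {
    next.starts_with('/') && !next.starts_with("//") && !next.contains('\\')
}
```

In handle_login you would run the submitted next value through this check and fall back to "/" when it fails.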

Rate limiting

Without rate limiting, the login endpoint is vulnerable to brute-force and credential stuffing attacks. Apply limits at two levels:

  • Per-account: Lock the account after a threshold of failed attempts (for example, 10). Unlock after a cooldown period (15 minutes) or via email. This stops targeted attacks against a single user.
  • Per-IP: Apply a sliding window limit (for example, 20 attempts per minute per IP). Return HTTP 429 with a Retry-After header. This slows distributed scanning.

Per-account limiting is the primary defence. Per-IP limiting alone is insufficient because botnets rotate IP addresses.

For Axum, tower_governor provides a Tower-compatible rate limiting layer based on the governor crate. Apply it to your auth routes:

use tower_governor::{GovernorConfig, GovernorLayer};

let governor_config = GovernorConfig::default(); // 1 request per 500ms per IP
let governor_layer = GovernorLayer {
    config: governor_config,
};

let auth_routes = Router::new()
    .route("/login", get(show_login).post(handle_login))
    .route("/register", get(show_register).post(handle_register))
    .layer(governor_layer);

This handles per-IP limiting. For per-account lockout, track failed attempts in a database column or a Redis counter keyed by email, and check it before verifying the password.
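The lockout decision itself can live in a pure function, which keeps it trivially testable. This sketch assumes hypothetical failed_attempts and last_failed_at values loaded from storage:

```rust
use std::time::{Duration, SystemTime};

const MAX_FAILED_ATTEMPTS: u32 = 10;
const LOCKOUT_COOLDOWN: Duration = Duration::from_secs(15 * 60);

/// An account is locked once the failure threshold is reached, until the
/// cooldown since the most recent failure elapses.
fn is_locked(
    failed_attempts: u32,
    last_failed_at: Option<SystemTime>,
    now: SystemTime,
) -> bool {
    if failed_attempts < MAX_FAILED_ATTEMPTS {
        return false;
    }
    match last_failed_at {
        // On clock anomalies (duration_since fails), err on the side of
        // staying locked.
        Some(t) => now
            .duration_since(t)
            .map(|elapsed| elapsed < LOCKOUT_COOLDOWN)
            .unwrap_or(true),
        None => false,
    }
}
```

Reset failed_attempts to zero on successful login, and increment it (setting last_failed_at to now) before returning the generic error on failure.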

Logout

Destroy the session and redirect. Protect logout with a POST request, not GET, so cross-site <img> tags or link prefetching cannot force a logout.

async fn handle_logout(session: Session) -> impl IntoResponse {
    session.flush().await.expect("failed to flush session");
    Redirect::to("/login")
}

session.flush() clears all session data, deletes the record from the database, and nullifies the session cookie.

Extracting the current user

Build an Axum extractor that loads the authenticated user from the session. Use this wherever a handler needs the current user.

use axum::{
    extract::FromRequestParts,
    http::{request::Parts, StatusCode},
};

pub struct AuthUser(pub User);

impl<S: Send + Sync> FromRequestParts<S> for AuthUser {
    type Rejection = StatusCode;

    async fn from_request_parts(
        parts: &mut Parts,
        state: &S,
    ) -> Result<Self, Self::Rejection> {
        let session = Session::from_request_parts(parts, state)
            .await
            .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

        let user_id: Uuid = session
            .get("user_id")
            .await
            .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
            .ok_or(StatusCode::UNAUTHORIZED)?;

        let pool = parts
            .extensions
            .get::<PgPool>()
            .ok_or(StatusCode::INTERNAL_SERVER_ERROR)?;

        let user: User = sqlx::query_as("SELECT * FROM users WHERE id = $1")
            .bind(user_id)
            .fetch_optional(pool)
            .await
            .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
            .ok_or(StatusCode::UNAUTHORIZED)?;

        Ok(AuthUser(user))
    }
}

Handlers that need authentication add AuthUser as a parameter. If no valid session exists, the request returns 401 before the handler body runs:

async fn dashboard(AuthUser(user): AuthUser) -> Markup {
    html! {
        h1 { "Welcome, " (user.email) }
    }
}

For the extractor to access the database pool, add it to request extensions via middleware, or make the extractor generic over your AppState. The approach depends on how you structure shared state; see Web Server with Axum.

CSRF protection

Cross-site request forgery tricks a logged-in user’s browser into making unintended requests to your application. Traditional defences embed hidden tokens in forms. A simpler approach validates the request origin using headers the browser sends automatically.

tower-csrf implements this origin-based approach, inspired by Filippo Valsorda’s analysis of CSRF and the defence built into Go 1.25’s net/http. Instead of managing tokens, it checks the Sec-Fetch-Site and Origin headers. Modern browsers (all major browsers since 2023) send Sec-Fetch-Site: same-origin for same-site requests. Cross-origin requests are blocked. Safe methods (GET, HEAD, OPTIONS) are allowed unconditionally.

use axum::{
    error_handling::HandleErrorLayer,
    http::StatusCode,
    response::IntoResponse,
};
use tower::ServiceBuilder;
use tower_csrf::{CrossOriginProtectionLayer, ProtectionError};

let csrf_layer = ServiceBuilder::new()
    .layer(HandleErrorLayer::new(
        |error: Box<dyn std::error::Error + Send + Sync>| async move {
            if error.downcast_ref::<ProtectionError>().is_some() {
                (StatusCode::FORBIDDEN, "Cross-origin request blocked").into_response()
            } else {
                StatusCode::INTERNAL_SERVER_ERROR.into_response()
            }
        },
    ))
    .layer(CrossOriginProtectionLayer::default());

No hidden form fields. No hx-headers configuration for htmx. Same-origin requests pass automatically because the browser attests to the origin. This is a clean fit for HDA applications where every form submission and htmx request originates from the same domain.
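The decision rule is simple enough to sketch as a plain function. This is a simplification: the real layer also falls back to comparing the Origin header against the request host when Sec-Fetch-Site is absent.

```rust
/// Simplified cross-origin check. Safe methods always pass; otherwise the
/// request must be same-origin or browser-initiated ("none", e.g. a typed
/// URL or bookmark). A missing header (pre-2023 browser) is rejected here;
/// a real implementation falls back to an Origin comparison instead.
fn cross_origin_allowed(method: &str, sec_fetch_site: Option<&str>) -> bool {
    matches!(method, "GET" | "HEAD" | "OPTIONS")
        || matches!(sec_fetch_site, Some("same-origin") | Some("none"))
}
```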

If you need to accept cross-origin requests from specific origins (SSO callbacks, webhooks), add them explicitly:

let csrf = CrossOriginProtectionLayer::default()
    .add_trusted_origin("https://sso.example.com")
    .expect("valid origin URL");

For the full argument behind origin-based CSRF validation and why token-based CSRF is unnecessary in modern browsers, read Filippo Valsorda’s analysis.

If you need to support browsers that do not send Sec-Fetch-Site headers (pre-2023), or you prefer a traditional token-based approach, axum_csrf provides a double-submit cookie pattern compatible with Axum 0.8.

Email confirmation

Confirm email addresses before activating accounts. Without confirmation, anyone can register with someone else’s email, and your application sends unwanted messages to non-users.

The flow uses a split token pattern: a 16-byte identifier for database lookup and a 16-byte verifier for constant-time comparison. Store the SHA-256 hash of the verifier, never the verifier itself. If the database leaks, attackers cannot reconstruct valid confirmation links.

Flow

  1. On registration, generate an identifier (16 random bytes) and a verifier (16 random bytes) using a CSPRNG (OsRng).
  2. Store in a confirmations table: identifier (indexed), SHA-256(verifier), user ID, expiration (24-48 hours), and action type (email_confirmation).
  3. Base64url-encode the concatenated identifier + verifier into a link: https://example.com/confirm?token=<encoded>.
  4. Send the link via email. See Email for sending with Lettre and testing with MailCrab.
  5. When the user clicks the link, require an active session (the user must be logged in). Split the token back into identifier and verifier. Look up by identifier. Check expiration. Constant-time compare SHA-256(received verifier) with the stored hash using the subtle crate.
  6. On success, set email_confirmed_at on the user record and delete the confirmation record.

Requiring an active session at step 5 prevents an attacker who intercepts the confirmation email (compromised mailbox, network interception) from confirming the account without knowing the password. The user must both possess the token and be authenticated.
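The split-and-compare in step 5 can be sketched with the standard library alone. split_token and ct_eq are illustrative helpers; production code hashes the verifier with a SHA-256 implementation and compares with the subtle crate, as described above.

```rust
/// Split a decoded 32-byte token into (identifier, verifier).
fn split_token(raw: &[u8]) -> Option<(&[u8], &[u8])> {
    (raw.len() == 32).then(|| raw.split_at(16))
}

/// Constant-time equality: XOR-accumulate every byte so timing does not
/// reveal the position of the first mismatch. Prefer subtle's
/// ConstantTimeEq in real code.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    a.len() == b.len()
        && a.iter().zip(b).fold(0u8, |acc, (x, y)| acc | (x ^ y)) == 0
}
```

Look up the confirmation row by the identifier half, then compare the hash of the verifier half against the stored hash with the constant-time check.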

Preventing enumeration

Never reveal whether an email is already registered. On registration:

  • Always display: “Check your email to complete registration.”
  • If the email is new, send a confirmation link.
  • If the email already exists, send a different message: “Someone attempted to register with your email. If this was you, you can log in or reset your password.”

Schedule the email step asynchronously so the response time is identical in both cases. A timing difference between “new account” and “existing account” is enough for an attacker to enumerate emails.

Confirmations table

A single table handles email confirmations, password resets, and email changes:

CREATE TABLE confirmations (
    identifier BYTEA PRIMARY KEY,
    verifier_hash BYTEA NOT NULL,
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    action_type TEXT NOT NULL,
    details JSONB,
    expires_at TIMESTAMPTZ NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX idx_confirmations_user_id ON confirmations(user_id);

The action_type column distinguishes confirmation purposes. The details column holds action-specific data as JSON (for example, the new email address during an email change).

Password reset

Password resets follow the same split token pattern as email confirmation. The key differences are a shorter expiration and the requirement to invalidate all existing sessions after a successful reset.

Flow

  1. User submits their email on the reset form.
  2. Display: “If this email belongs to an account, you will receive reset instructions.” Never reveal whether the email has an account.
  3. Look up the user. If found, generate a split token, store it with a 30-minute expiration and action type password_reset, and email the link. If not found, do nothing. Schedule the work asynchronously for consistent response timing.
  4. When the user clicks the link, verify the token (same split-and-compare as email confirmation). Show a new password form with the token as a hidden field.
  5. On form submission, re-verify the token, hash the new password, update the user record, delete all reset tokens for this user, invalidate all sessions for this user, create a new session, and log them in.

Security considerations

  • 30-minute expiration. Reset tokens are high-value targets. Keep the window short.
  • Invalidate all sessions after a successful reset. If the password was compromised, existing sessions may belong to an attacker.
  • Allow multiple outstanding tokens. Don’t delete old tokens when a new reset is requested. The user may request a reset, not receive the email, and request again.
  • Delete tokens on login. If the user remembers their password and logs in normally, delete their outstanding reset tokens.
  • Require the new password on the same form as the token. Don’t split this into two steps. Re-verify the token on submission to prevent replay.

When to delegate authentication

Session-based auth works well for a single application with its own user base and straightforward login requirements. Delegate to an external Identity Provider when the requirements outgrow what the application should manage directly:

  • Multiple applications need SSO. Users log in once and access several services. A shared identity layer is easier to maintain than per-application auth.
  • Enterprise customers expect SAML or OIDC. B2B SaaS products typically need to integrate with customers’ corporate identity systems.
  • Compliance frameworks require it. SOC 2, HIPAA, and PCI-DSS audits favour dedicated identity infrastructure with built-in audit logging, brute-force protection, and pre-certified MFA controls. An external IdP gives auditors a clear separation of concerns.
  • Existing infrastructure. Your organisation already runs Active Directory, LDAP, or a corporate IdP that users expect to log in with.

Self-hosted identity providers

Keycloak is a full-featured open-source IdP (CNCF incubation project) supporting OAuth2, OIDC, SAML, and LDAP federation. It handles SSO, MFA, identity brokering, and user management. The trade-off is operational weight: it is a Java application with significant resource requirements.

Authentik is a lighter alternative with a more modern developer experience, supporting OAuth2, OIDC, SAML, LDAP, and SCIM.

Both align with this guide’s preference for self-hosted infrastructure.

Integration with OAuth2 Proxy

OAuth2 Proxy sits between users and your Axum application as a reverse proxy. It handles the OAuth2/OIDC flow with your IdP and forwards authenticated requests with identity headers:

use axum::http::HeaderMap;

async fn handler(headers: HeaderMap) -> Markup {
    let email = headers
        .get("X-Forwarded-Email")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("anonymous");

    html! { p { "Logged in as " (email) } }
}

The application reads identity from trusted headers (X-Forwarded-User, X-Forwarded-Email) without implementing OAuth2 flows directly. The proxy strips any client-supplied identity headers before injecting authenticated values, preventing spoofing.

Your application must only be reachable through the proxy, never directly from the internet. Enforce this at the network level: firewall rules, container networking, or Tailscale ACLs. If a client can bypass the proxy, it can set X-Forwarded-User to any value.

Choosing an auth strategy

| Situation | Approach |
|---|---|
| Single app, simple login | Session auth (tower-sessions + argon2) |
| Single app, social login (GitHub, Google) | oauth2 / openidconnect crate in Axum |
| Multiple apps needing SSO | External IdP (Keycloak/Authentik) + OAuth2 Proxy |
| B2B SaaS, enterprise customers | External IdP or managed service (Auth0, WorkOS) |
| SOC 2 / HIPAA / PCI-DSS compliance | External IdP strongly recommended |
| Existing Active Directory / LDAP | Keycloak |

Start with session-based auth. Move to an external IdP when you hit one of the triggers above. The migration is additive: OAuth2 Proxy sits in front of your existing application, and the AuthUser extractor reads from proxy headers instead of session data.

Implementation resources

For AI coding agents implementing authentication: the secure-auth skill provides detailed security reference material covering cryptographic fundamentals, password hashing parameters, the split token pattern, session management, MFA (TOTP, WebAuthn, recovery codes), and security review checklists. Use it as context when building the patterns described in this section.

For access to the secure-auth skill and detailed implementation guidance, contact the author.

Gotchas

spawn_blocking for all password operations. Forgetting to offload argon2 to the blocking thread pool is the most common mistake. Under load, a single blocked tokio worker thread cascades into request timeouts across the application.

Session ID cycling must happen before inserting user data. Call session.cycle_id() before session.insert("user_id", ...). If you insert first and the cycle fails, the old (potentially attacker-controlled) session ID now has authenticated data.

tower-sessions version compatibility. The tower-sessions and tower-sessions-sqlx-store crates track tower-sessions-core versions independently. If Cargo reports a version conflict on tower-sessions-core, check that the published sqlx-store version matches your tower-sessions version. Pin both until they align.

Consistent error messages on auth forms. Every registration, login, and reset form must give the same response regardless of whether the email exists. This includes response timing. An async email-sending step after registration or reset prevents timing leaks.

CSRF on logout. Logout must be a POST request protected by CSRF, not a GET link. A GET-based logout allows any cross-site <img> tag to force a logout, which is a nuisance attack that can also be chained with session fixation.

Authorization

Web Application Security

Forms & Errors

Form Handling and Validation

HTML forms are the primary input mechanism in a hypermedia-driven application. The browser collects data, the server validates and processes it, and the response is HTML. There is no JSON serialisation layer, no client-side state management for form data, and no separate API to keep in sync.

This section covers extracting form data in Axum handlers, sanitising and validating it, building a custom ValidatedForm extractor that combines all three steps, displaying errors with Maud and htmx, and the Post/Redirect/Get pattern for safe form submissions.

Extracting form data

Use the Form<T> extractor from axum-extra (not the one in axum itself). The axum-extra version uses serde_html_form under the hood, which correctly handles multi-value fields: multiple <input> elements with the same name (checkboxes, for example) and <select> elements with the multiple attribute. The standard axum::extract::Form uses serde_urlencoded, which does not support these cases.

[dependencies]
axum-extra = { version = "0.10", features = ["form"] }

Define a struct with Deserialize:

use serde::Deserialize;

#[derive(Deserialize)]
struct CreateContact {
    name: String,
    email: String,
    phone: Option<String>,
}

Extract it in a handler:

use axum_extra::extract::Form;
use axum::response::Redirect;

async fn create_contact(
    Form(input): Form<CreateContact>,
) -> Redirect {
    // input.name, input.email, input.phone are ready to use
    Redirect::to("/contacts")
}

For multi-value fields, collect into a Vec:

#[derive(Deserialize)]
struct SurveyResponse {
    name: String,
    #[serde(rename = "interest")]
    interests: Vec<String>,
}

The matching checkbox group gives every input the same name:

html! {
    fieldset {
        legend { "Interests" }
        label {
            input type="checkbox" name="interest" value="rust";
            " Rust"
        }
        label {
            input type="checkbox" name="interest" value="web";
            " Web development"
        }
        label {
            input type="checkbox" name="interest" value="databases";
            " Databases"
        }
    }
}

Each checked box sends interest=rust&interest=web, and serde_html_form collects them into the Vec<String>.
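The accumulation serde_html_form performs can be illustrated with a hand-rolled parser (values_for is illustrative only; it skips the percent-decoding the real crate handles):

```rust
/// Collect every value submitted under a repeated key, in order.
fn values_for<'a>(body: &'a str, key: &str) -> Vec<&'a str> {
    body.split('&')
        .filter_map(|pair| pair.split_once('='))
        .filter(|(k, _)| *k == key)
        .map(|(_, v)| v)
        .collect()
}
```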

Wire the handler to a POST route:

use axum::{routing::{get, post}, Router};

let app = Router::new()
    .route("/contacts", get(list_contacts))
    .route("/contacts", post(create_contact));

The corresponding HTML form:

use maud::{html, Markup};

fn contact_form() -> Markup {
    html! {
        form method="post" action="/contacts" {
            label for="name" { "Name" }
            input #name type="text" name="name" required;

            label for="email" { "Email" }
            input #email type="email" name="email" required;

            label for="phone" { "Phone" }
            input #phone type="tel" name="phone";

            button type="submit" { "Save" }
        }
    }
}

The name attributes on the <input> elements must match the struct field names. serde handles the mapping. For fields with names that differ from Rust conventions, use #[serde(rename = "field-name")].

Option<String> fields map to inputs that may be left blank. If the field is absent from the form submission, serde deserialises it as None.
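One wrinkle: a blank text input is usually still submitted as phone= rather than omitted, and arrives as Some(""). If your logic must distinguish "not provided" from "provided but empty", normalise after sanitisation with a helper along these lines (none_if_blank is hypothetical):

```rust
/// Collapse empty or whitespace-only submissions to None.
fn none_if_blank(value: Option<String>) -> Option<String> {
    value.filter(|s| !s.trim().is_empty())
}
```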

Handling deserialisation failures

If the form body cannot be deserialised into the target struct (missing required fields, wrong types), Axum returns a 422 Unprocessable Entity by default. For a better user experience, accept a Result and handle the rejection:

use axum_extra::extract::FormRejection;

async fn create_contact(
    form: Result<Form<CreateContact>, FormRejection>,
) -> impl IntoResponse {
    match form {
        Ok(Form(input)) => {
            // process valid input
            Redirect::to("/contacts").into_response()
        }
        Err(_) => {
            // re-render the form with a general error
            (StatusCode::UNPROCESSABLE_ENTITY, contact_form()).into_response()
        }
    }
}

In practice, deserialisation failures are rare when the HTML form matches the struct. Validation errors (invalid email format, value out of range) are the common case.

Sanitising input

User input needs cleaning before validation. Leading and trailing whitespace, inconsistent casing, and stray non-alphanumeric characters cause validation failures that are not the user’s fault. The sanitizer crate provides a derive macro that declares sanitisation rules directly on struct fields, the same way validator declares validation rules.

[dependencies]
sanitizer = "1"

Add Sanitize alongside Deserialize:

use sanitizer::prelude::*;
use serde::Deserialize;

#[derive(Deserialize, Sanitize)]
struct CreateContact {
    #[sanitize(trim)]
    name: String,

    #[sanitize(trim, lower_case)]
    email: String,

    #[sanitize(trim)]
    phone: Option<String>,
}

Call .sanitize() to modify the struct in place:

let mut input = CreateContact {
    name: "  Alice   ".into(),
    email: " Alice@Example.COM ".into(),
    phone: None,
};
input.sanitize();
// input.name == "Alice"
// input.email == "alice@example.com"

Available sanitisers

| Sanitiser | Effect |
|---|---|
| trim | Remove leading and trailing whitespace |
| lower_case | Convert to lowercase |
| upper_case | Convert to UPPERCASE |
| camel_case | Convert to camelCase |
| snake_case | Convert to snake_case |
| screaming_snake_case | Convert to SCREAMING_SNAKE_CASE |
| numeric | Remove all non-numeric characters |
| alphanumeric | Remove all non-alphanumeric characters |
| e164 | Convert phone number to E.164 international format |
| clamp(min, max) | Clamp an integer to a range |
| clamp(max) | Truncate a string to a maximum length |
| custom(function_name) | Apply a custom sanitisation function |

Custom sanitisation functions

For rules beyond the built-ins, write a function that takes &str and returns String:

use sanitizer::StringSanitizer;

fn collapse_whitespace(input: &str) -> String {
    let mut s = StringSanitizer::from(input);
    s.trim();
    s.get()
        .split_whitespace()
        .collect::<Vec<_>>()
        .join(" ")
}

#[derive(Deserialize, Sanitize)]
struct CreatePost {
    #[sanitize(custom(collapse_whitespace))]
    title: String,
}

Sanitisation runs before validation. Trim whitespace, normalise casing, and clean up formatting first, then validate the cleaned values. This order matters: " " (a single space) fails a length(min = 1) check only if you trim it first.
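The ordering claim is easy to demonstrate with plain functions standing in for the derive macros (sanitize_name and name_is_valid are illustrative stand-ins for #[sanitize(trim)] and #[validate(length(min = 1, max = 255))]):

```rust
/// Stand-in for #[sanitize(trim)].
fn sanitize_name(s: &str) -> String {
    s.trim().to_string()
}

/// Stand-in for #[validate(length(min = 1, max = 255))].
fn name_is_valid(s: &str) -> bool {
    !s.is_empty() && s.chars().count() <= 255
}
```

A single space passes the length check raw, but fails it once trimmed, which is exactly the order you want.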

Server-side validation with validator

The validator crate provides a Validate derive macro that adds declarative validation rules to structs. Validation runs on the server after sanitisation and before any database or business logic.

[dependencies]
validator = { version = "0.20", features = ["derive"] }

Add Validate to the struct:

use sanitizer::prelude::*;
use serde::Deserialize;
use validator::Validate;

#[derive(Deserialize, Sanitize, Validate)]
struct CreateContact {
    #[sanitize(trim)]
    #[validate(length(min = 1, max = 255, message = "Name is required"))]
    name: String,

    #[sanitize(trim, lower_case)]
    #[validate(email(message = "Enter a valid email address"))]
    email: String,

    #[sanitize(trim)]
    #[validate(length(max = 20, message = "Phone number too long"))]
    phone: Option<String>,
}

Built-in validators

| Validator | Usage | Checks |
|---|---|---|
| email | #[validate(email)] | Valid email format per HTML5 spec |
| url | #[validate(url)] | Valid URL |
| length | #[validate(length(min = 1, max = 100))] | String or Vec length bounds |
| range | #[validate(range(min = 0, max = 150))] | Numeric value bounds |
| must_match | #[validate(must_match(other = "password_confirm"))] | Two fields have the same value |
| contains | #[validate(contains(pattern = "@"))] | String contains a substring |
| does_not_contain | #[validate(does_not_contain(pattern = "admin"))] | String does not contain a substring |
| regex | #[validate(regex(path = *RE_PHONE))] | Matches a compiled regex |
| custom | #[validate(custom(function = "check_slug"))] | Runs a custom function |

Every validator accepts an optional message parameter that provides the error text shown to users. Without it, the crate produces a default message keyed by the validation rule name.

Custom validation functions

For rules that don’t fit the built-in validators, write a function that returns Result<(), ValidationError>:

use validator::ValidationError;

fn validate_no_profanity(value: &str) -> Result<(), ValidationError> {
    let blocked = ["spam", "scam"];
    if blocked.iter().any(|w| value.to_lowercase().contains(w)) {
        return Err(ValidationError::new("profanity")
            .with_message("Contains blocked content".into()));
    }
    Ok(())
}

#[derive(Deserialize, Sanitize, Validate)]
struct CreatePost {
    #[sanitize(trim)]
    #[validate(length(min = 1, max = 200))]
    title: String,

    #[sanitize(trim)]
    #[validate(custom(function = "validate_no_profanity"))]
    body: String,
}

Nested validation

Structs containing other validatable structs use #[validate(nested)]:

#[derive(Deserialize, Sanitize, Validate)]
struct Address {
    #[sanitize(trim)]
    #[validate(length(min = 1))]
    street: String,
    #[sanitize(trim)]
    #[validate(length(min = 1))]
    city: String,
}

#[derive(Deserialize, Sanitize, Validate)]
struct CreateUser {
    #[sanitize(trim)]
    #[validate(length(min = 1))]
    name: String,
    #[sanitize]
    #[validate(nested)]
    address: Address,
}

The ValidatedForm extractor

Every form handler follows the same sequence: deserialise the body, sanitise the fields, validate, then branch on the result. A custom ValidatedForm<T> extractor wraps axum-extra’s Form and performs all three steps, so handlers never repeat the boilerplate.

The extractor uses FormRejection only for deserialisation failures (malformed request bodies). Validation failures are not rejections; they are a normal part of form handling. The extractor returns both the sanitised input and any validation errors, so the handler always has access to the user’s data for re-rendering the form.

use axum::extract::{FromRequest, Request};
use axum::http::StatusCode;
use axum::response::{IntoResponse, Response};
use axum_extra::extract::{Form, FormRejection};
use sanitizer::prelude::*;
use validator::{Validate, ValidationErrors};

pub struct ValidatedForm<T> {
    pub input: T,
    pub errors: Option<ValidationErrors>,
}

impl<S, T> FromRequest<S> for ValidatedForm<T>
where
    S: Send + Sync,
    T: serde::de::DeserializeOwned + Sanitize + Validate,
    Form<T>: FromRequest<S, Rejection = FormRejection>,
{
    type Rejection = FormRejection;

    async fn from_request(
        req: Request,
        state: &S,
    ) -> Result<Self, Self::Rejection> {
        let Form(mut input) = Form::<T>::from_request(req, state).await?;
        input.sanitize();
        let errors = input.validate().err();
        Ok(ValidatedForm { input, errors })
    }
}

The handler pattern becomes:

async fn create_contact(
    validated: ValidatedForm<CreateContact>,
) -> impl IntoResponse {
    if let Some(errors) = &validated.errors {
        return (
            StatusCode::UNPROCESSABLE_ENTITY,
            render_contact_form(&validated.input, errors),
        ).into_response();
    }

    save_contact(&validated.input).await;
    Redirect::to("/contacts").into_response()
}

The ValidatedForm extractor handles the mechanical work. The handler deals only with the business logic: render errors or save and redirect.

Place the ValidatedForm definition in a shared crate in your workspace (e.g., common or web). Every form handler across the application can use it.

HTML5 client-side validation

Use HTML5 validation attributes as the first line of defence. They provide instant feedback without a server round-trip and reduce unnecessary requests. The server always validates too, because client-side validation is trivially bypassed.

The relevant attributes:

| Attribute | Purpose | Example |
|---|---|---|
| required | Field must not be empty | input required; |
| type="email" | Must look like an email | input type="email"; |
| type="url" | Must look like a URL | input type="url"; |
| minlength / maxlength | Text length bounds | input minlength="1" maxlength="255"; |
| min / max | Numeric or date bounds | input type="number" min="0" max="150"; |
| pattern | Regex match | input pattern="[A-Za-z]+" title="Letters only"; |

Apply these in your Maud templates alongside the server-side validator rules. Keep the constraints consistent: if the server requires length(min = 1, max = 255), set required minlength="1" maxlength="255" on the input.

fn contact_form_fields(input: Option<&CreateContact>) -> Markup {
    let name_val = input.map(|i| i.name.as_str()).unwrap_or("");
    let email_val = input.map(|i| i.email.as_str()).unwrap_or("");

    html! {
        label for="name" { "Name" }
        input #name type="text" name="name" value=(name_val)
            required minlength="1" maxlength="255";

        label for="email" { "Email" }
        input #email type="email" name="email" value=(email_val)
            required;
    }
}

HTML5 validation is not a substitute for server-side validation. It is a UX optimisation that catches obvious mistakes before they hit the network.

Displaying validation errors with Maud

When validation fails, re-render the form with the user’s input preserved and error messages next to the relevant fields. The ValidationErrors struct from validator maps field names to a list of ValidationError values, each with a message field.

A helper to extract the first error message for a given field:

use validator::ValidationErrors;

fn field_error(errors: &ValidationErrors, field: &str) -> Option<String> {
    errors
        .field_errors()
        .get(field)
        .and_then(|errs| errs.first())
        .and_then(|e| e.message.as_ref())
        .map(|msg| msg.to_string())
}

An error message component:

fn field_error_message(errors: Option<&ValidationErrors>, field: &str) -> Markup {
    let msg = errors.and_then(|e| field_error(e, field));
    html! {
        @if let Some(msg) = msg {
            span.field-error role="alert" { (msg) }
        }
    }
}

Wire it into the form:

fn render_contact_form(
    input: &CreateContact,
    errors: &ValidationErrors,
) -> Markup {
    html! {
        form method="post" action="/contacts" {
            div.form-error role="alert" {
                p { "Please fix the errors below." }
            }

            div.field {
                label for="name" { "Name" }
                input #name type="text" name="name" value=(input.name)
                    required minlength="1" maxlength="255";
                (field_error_message(Some(errors), "name"))
            }

            div.field {
                label for="email" { "Email" }
                input #email type="email" name="email" value=(input.email)
                    required;
                (field_error_message(Some(errors), "email"))
            }

            button type="submit" { "Save" }
        }
    }
}

The handler using ValidatedForm:

async fn show_contact_form() -> Markup {
    contact_form()
}

async fn create_contact(
    validated: ValidatedForm<CreateContact>,
) -> impl IntoResponse {
    if let Some(errors) = &validated.errors {
        return (
            StatusCode::UNPROCESSABLE_ENTITY,
            render_contact_form(&validated.input, errors),
        ).into_response();
    }

    save_contact(&validated.input).await;
    Redirect::to("/contacts").into_response()
}

Inline field validation with htmx

The full-form pattern above works without JavaScript. For a more responsive experience, add inline validation that checks individual fields as the user fills them in, using htmx to swap error messages without a full page reload.

Create a validation endpoint that accepts a single field value and returns just the error markup:

#[derive(Deserialize)]
struct FieldValidation {
    name: Option<String>,
    email: Option<String>,
}

async fn validate_field(
    Form(input): Form<FieldValidation>,
) -> Markup {
    // Build a partial struct for validation
    let mut contact = CreateContact {
        name: input.name.clone().unwrap_or_default(),
        email: input.email.clone().unwrap_or_default(),
        phone: None,
    };
    contact.sanitize();

    let errors = contact.validate().err();
    // Determine which field was submitted and return its error
    if input.name.is_some() {
        return field_error_message(errors.as_ref(), "name");
    }
    if input.email.is_some() {
        return field_error_message(errors.as_ref(), "email");
    }
    html! {}
}

Add htmx attributes to the form inputs. Each field posts its value on blur and swaps the error message next to it:

fn contact_form_with_inline_validation(
    input: Option<&CreateContact>,
    errors: Option<&ValidationErrors>,
) -> Markup {
    let name_val = input.map(|i| i.name.as_str()).unwrap_or("");
    let email_val = input.map(|i| i.email.as_str()).unwrap_or("");

    html! {
        form method="post" action="/contacts" {
            div.field {
                label for="name" { "Name" }
                input #name type="text" name="name" value=(name_val)
                    required minlength="1" maxlength="255"
                    hx-post="/contacts/validate"
                    hx-trigger="blur"
                    hx-target="next .field-error-slot"
                    hx-swap="innerHTML";
                span.field-error-slot {
                    (field_error_message(errors, "name"))
                }
            }

            div.field {
                label for="email" { "Email" }
                input #email type="email" name="email" value=(email_val)
                    required
                    hx-post="/contacts/validate"
                    hx-trigger="blur"
                    hx-target="next .field-error-slot"
                    hx-swap="innerHTML";
                span.field-error-slot {
                    (field_error_message(errors, "email"))
                }
            }

            button type="submit" { "Save" }
        }
    }
}

Register the validation endpoint:

let app = Router::new()
    .route("/contacts/new", get(show_contact_form))
    .route("/contacts", post(create_contact))
    .route("/contacts/validate", post(validate_field));

This layered approach gives three levels of validation feedback:

  1. HTML5 attributes catch basic mistakes instantly in the browser.
  2. htmx inline validation checks fields against server rules on blur, before submission.
  3. Full-form server validation on POST is the final authority. It always runs, catching anything the first two layers missed.

The form works without JavaScript (levels 1 and 3). htmx enhances it progressively.

Post/Redirect/Get

The Post/Redirect/Get (PRG) pattern prevents duplicate form submissions when users refresh the page after a POST. Without it, refreshing re-submits the form, potentially creating duplicate records.

The pattern:

  1. The browser POSTs the form data.
  2. The server processes it and responds with a 303 See Other redirect.
  3. The browser follows the redirect with a GET request.
  4. Refreshing the page repeats only the GET, not the POST.

In Axum:

use axum::response::Redirect;

async fn create_contact(
    validated: ValidatedForm<CreateContact>,
) -> impl IntoResponse {
    if let Some(errors) = &validated.errors {
        return (
            StatusCode::UNPROCESSABLE_ENTITY,
            render_contact_form(&validated.input, errors),
        ).into_response();
    }

    save_contact(&validated.input).await;
    Redirect::to("/contacts").into_response()
}

Redirect::to() sends a 303 See Other, which is exactly what PRG needs: the browser follows a 303 with a GET regardless of the original method. Avoid Redirect::temporary() (307) and Redirect::permanent() (308) here; those status codes preserve the request method, so the browser would repeat the POST.

For success feedback after the redirect (a “Contact saved” flash message), store the message in the session before redirecting and display it on the next GET. Session management is covered in the Authentication section.
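The flash pattern itself is simple and independent of any particular session crate: store the message before the redirect, then remove it as you read it so it renders only once. A minimal sketch, with a hypothetical stand-in Session type (real session management is covered in the Authentication section):

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for a session store, just enough to
/// illustrate the flash pattern.
#[derive(Default)]
struct Session {
    values: HashMap<String, String>,
}

impl Session {
    fn insert(&mut self, key: &str, value: &str) {
        self.values.insert(key.to_string(), value.to_string());
    }
    /// Remove and return the value in one step.
    fn take(&mut self, key: &str) -> Option<String> {
        self.values.remove(key)
    }
}

/// Before redirecting: store the message.
fn set_flash(session: &mut Session, message: &str) {
    session.insert("flash", message);
}

/// On the next GET: pop the message, so refreshing the page
/// does not show it a second time.
fn take_flash(session: &mut Session) -> Option<String> {
    session.take("flash")
}
```

The take-on-read semantics are the important part: a flash message that survives in the session reappears on every subsequent page load.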

When the form is submitted via htmx (not a full page navigation), PRG is unnecessary. htmx replaces a targeted DOM fragment, and there is no browser history entry for the POST. The server can return an HTML fragment directly. Use the HxRequest extractor from axum-htmx to branch:

use axum_htmx::HxRequest;

async fn create_contact(
    HxRequest(is_htmx): HxRequest,
    validated: ValidatedForm<CreateContact>,
) -> impl IntoResponse {
    if let Some(errors) = &validated.errors {
        return (
            StatusCode::UNPROCESSABLE_ENTITY,
            render_contact_form(&validated.input, errors),
        ).into_response();
    }

    save_contact(&validated.input).await;

    if is_htmx {
        // Return updated content fragment
        render_contact_list().await.into_response()
    } else {
        // Standard PRG for non-htmx submissions
        Redirect::to("/contacts").into_response()
    }
}

CSRF protection

Every form that performs a state-changing action (POST, PUT, DELETE) needs protection against cross-site request forgery. Without it, a malicious page can submit a hidden form to your application using the victim’s authenticated session. CSRF protection is not specific to authentication forms; it applies to every form in the application, including the contact form examples above.

Apply the CSRF middleware layer to the router so it covers all routes with form handlers. The setup, configuration, and layer ordering are covered in the Authentication section.
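The check such middleware performs boils down to comparing a token embedded in the form with one bound to the session, in constant time so that the comparison itself does not leak information through timing. A minimal sketch of that core check (function names are our own, not any crate's API):

```rust
/// Constant-time byte comparison: examine every byte regardless of
/// where the first mismatch occurs, so attackers cannot use response
/// timing to guess the token prefix by prefix.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    a.iter().zip(b).fold(0u8, |acc, (&x, &y)| acc | (x ^ y)) == 0
}

/// Reject the request unless the token submitted with the form
/// matches the token stored in the session.
fn csrf_check(session_token: &str, form_token: Option<&str>) -> bool {
    match form_token {
        Some(t) => constant_time_eq(session_token.as_bytes(), t.as_bytes()),
        None => false,
    }
}
```

A missing token is treated the same as a wrong one: the request is rejected.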

File uploads

For forms that include file uploads, the browser sends multipart/form-data instead of URL-encoded data. The axum-typed-multipart crate provides a derive macro that handles multipart parsing with the same type-safe pattern as Form<T>.

[dependencies]
axum-typed-multipart = { version = "0.16", features = ["tempfile_3"] }
tempfile = "3"

The tempfile_3 feature streams uploads to temporary files instead of holding them in memory.

Define the upload struct:

use axum_typed_multipart::{FieldData, TryFromMultipart, TypedMultipart};
use tempfile::NamedTempFile;

#[derive(TryFromMultipart)]
struct CreateDocument {
    title: String,

    #[form_data(limit = "10MB")]
    file: FieldData<NamedTempFile>,
}

FieldData<NamedTempFile> streams the upload to a temporary file on disk. The FieldData wrapper provides metadata: file.metadata.file_name for the original filename and file.metadata.content_type for the MIME type.
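One caveat: file.metadata.file_name is attacker-controlled and may contain path separators ("../../etc/passwd"). Never use it directly as a filesystem path. A minimal sketch of keeping only the final path component (the helper name is our own):

```rust
use std::path::Path;

/// Keep only the final path component of a user-supplied filename,
/// discarding any directory traversal the client may have sent.
/// Falls back to "unnamed" when nothing usable remains.
fn safe_file_name(user_supplied: &str) -> String {
    Path::new(user_supplied)
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("unnamed")
        .to_string()
}
```

This is a floor, not a ceiling: many applications go further and generate their own storage names (a UUID plus a whitelisted extension), keeping the original filename only as display metadata.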

The handler:

async fn upload_document(
    TypedMultipart(input): TypedMultipart<CreateDocument>,
) -> impl IntoResponse {
    let file_name = input.file.metadata.file_name
        .unwrap_or_else(|| "unnamed".to_string());
    let content_type = input.file.metadata.content_type
        .unwrap_or_else(|| "application/octet-stream".to_string());

    // input.file.contents is the NamedTempFile
    // Move it to permanent storage or upload to S3
    let temp_path = input.file.contents.path();

    // ... process the file

    Redirect::to("/documents")
}

The form needs enctype="multipart/form-data":

html! {
    form method="post" action="/documents" enctype="multipart/form-data" {
        label for="title" { "Title" }
        input #title type="text" name="title" required;

        label for="file" { "File" }
        input #file type="file" name="file" required accept=".pdf,.doc,.docx";

        button type="submit" { "Upload" }
    }
}

For processing and storing uploaded files (S3-compatible storage, permanent paths, serving files back), see the File Storage section.

Alternatives

garde is an alternative validation crate with a different API style. Where validator uses string-based attribute arguments (#[validate(length(min = 1))]), garde uses Rust expressions (#[garde(length(min = 1))]) and supports context-dependent validation through a generic context parameter. Both crates are actively maintained. This guide uses validator because it is more widely adopted and its API is sufficient for typical web form validation.

Gotchas

Field names must match. The name attribute in the HTML form must match the struct field name exactly (or the #[serde(rename)] value). A mismatch causes deserialisation to silently use the default or fail entirely, depending on whether the field is Option.

Sanitise before validating. The ValidatedForm extractor handles this order automatically. If you call .validate() without sanitising first, a value like " " (whitespace) passes a length(min = 1) check even though it contains no meaningful content.
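The ordering matters because validation sees exactly the bytes it is given. A tiny sketch of the difference, with trimming standing in for the sanitise step and a plain non-empty check standing in for length(min = 1):

```rust
/// Stand-in for the sanitise step: trim surrounding whitespace.
fn sanitize(value: &str) -> String {
    value.trim().to_string()
}

/// Stand-in for a length(min = 1) validation check.
fn passes_min_length(value: &str) -> bool {
    !value.is_empty()
}
```

Run the check on "   " directly and it passes (three bytes); run it after sanitize and it correctly fails.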

Validation runs after deserialisation. If serde cannot parse the form body at all (e.g., a required field is completely missing), Axum rejects the request before sanitisation or validation ever runs. The ValidatedForm extractor surfaces this as a FormRejection.

validator checks values, not business rules. Format and range checks belong on the struct. Rules that require database access (uniqueness, referential integrity) belong in the handler or service layer, after validation passes.

Optional fields need special handling with validator. #[validate(email)] on an Option<String> only validates the inner value when it is Some; a missing field (None) passes untouched, which is usually what you want. Note that an empty form input typically deserialises as Some(""), not None, so the sanitise step should convert empty strings to None. If a field should be non-empty when present, add #[validate(length(min = 1))].

CSRF protection is not optional. Every POST form needs CSRF middleware, not just login and registration. A contacts form, a settings page, a comment box: if it changes state, it needs protection. See CSRF protection for setup.

multipart/form-data for file uploads. Standard Form<T> only handles URL-encoded bodies. If the form includes a file input, the enctype must be multipart/form-data and the handler must use TypedMultipart<T> instead of Form<T>. The ValidatedForm extractor does not apply to multipart forms.

Error Handling

Integrations

Server-Sent Events and Real-Time Updates

HTTP Client and External APIs

Background Jobs and Durable Execution with Restate

AI and LLM Integration

Infrastructure

File Storage

Email

Configuration and Secrets

Observability

Operations

Testing

Continuous Integration and Delivery

Deployment

Web Application Performance

Practices

Rust Best Practices for Web Development

Building with AI Coding Agents

© 2026 Hypermedia-Driven Applications with Rust