File Storage

File uploads and downloads are a core feature in most web applications: user avatars, document attachments, image galleries, CSV exports. The S3 API has become the standard interface for object storage, and every major provider implements it. Write your code against S3 once, then swap between a local development server and any production provider by changing environment variables.

This section covers setting up an S3-compatible storage backend, handling file uploads in Axum (both server-side and direct-to-storage), generating presigned URLs, and serving files back to users.

Dependencies

[dependencies]
rust-s3 = "0.37"
axum = { version = "0.8", features = ["multipart"] }
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
uuid = { version = "1", features = ["v4"] }

rust-s3 is a lightweight S3 client that works with any S3-compatible provider. It supports async operations via tokio out of the box, with a clean API centred on the Bucket type. The aws-sdk-s3 crate is the other option, but it pulls in a larger dependency tree and its API is more verbose. rust-s3 covers everything needed here.

The multipart feature on axum enables the Multipart extractor for handling file upload forms.

RustFS for local development

RustFS is an S3-compatible object storage server written in Rust. It serves as the local development replacement for production object storage, the same way PostgreSQL in Docker serves as the local database. RustFS is Apache 2.0 licensed, making it a good alternative to MinIO (AGPL).

Add RustFS to your Docker Compose file alongside your other backing services:

services:
  rustfs:
    image: rustfs/rustfs:latest
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      RUSTFS_ACCESS_KEY: rustfsadmin
      RUSTFS_SECRET_KEY: rustfsadmin
      RUSTFS_CONSOLE_ENABLE: "true"
    volumes:
      - rustfs-data:/data
      - rustfs-logs:/logs
    command: /data

volumes:
  rustfs-data:
  rustfs-logs:

Port 9000 exposes the S3 API. Port 9001 exposes a web console for browsing buckets and objects. Default credentials are rustfsadmin / rustfsadmin.

After starting the container, create a bucket for development. You can do this through the web console at http://localhost:9001, or with the MinIO client CLI (which works with any S3-compatible server):

mc alias set rustfs http://localhost:9000 rustfsadmin rustfsadmin
mc mb rustfs/uploads

Configuring the S3 client

Build the Bucket handle from environment variables so the same code works in development (RustFS) and production (Hetzner Object Storage or any other provider).

use s3::bucket::Bucket;
use s3::creds::Credentials;
use s3::Region;

pub fn create_bucket() -> Box<Bucket> {
    let region = Region::Custom {
        region: env_var("S3_REGION"),
        endpoint: env_var("S3_ENDPOINT"),
    };

    let credentials = Credentials::new(
        Some(&env_var("S3_ACCESS_KEY")),
        Some(&env_var("S3_SECRET_KEY")),
        None,
        None,
        None,
    )
    .expect("valid S3 credentials");

    let bucket_name = env_var("S3_BUCKET");
    Bucket::new(&bucket_name, region, credentials)
        .expect("valid S3 bucket configuration")
        // RustFS and other local servers expect path-style addressing
        // (http://localhost:9000/uploads/key) rather than bucket subdomains.
        .with_path_style()
}

fn env_var(name: &str) -> String {
    std::env::var(name).unwrap_or_else(|_| panic!("{name} must be set"))
}

Region::Custom accepts any endpoint URL, which is how you point the client at RustFS locally or Hetzner in production. The Bucket type is the main handle for all S3 operations: uploads, downloads, listing, deletion, and presigned URL generation.

Add the bucket to your application state:

#[derive(Clone)]
pub struct AppState {
    pub db: sqlx::PgPool,
    pub bucket: Box<Bucket>,
}

Environment variables

For local development with RustFS:

S3_ENDPOINT=http://localhost:9000
S3_REGION=us-east-1
S3_ACCESS_KEY=rustfsadmin
S3_SECRET_KEY=rustfsadmin
S3_BUCKET=uploads

For production with Hetzner Object Storage:

S3_ENDPOINT=https://fsn1.your-objectstorage.com
S3_REGION=fsn1
S3_ACCESS_KEY=<hetzner-access-key>
S3_SECRET_KEY=<hetzner-secret-key>
S3_BUCKET=prod-uploads

Hetzner Object Storage provides S3-compatible storage in European data centres (Falkenstein, Nuremberg, Helsinki). Generate access keys from the Hetzner Cloud Console. The endpoint must match the region where your bucket was created.

Upload handling in Axum

Server-side upload via multipart

The most straightforward approach: the browser sends the file to your Axum handler via a standard HTML multipart form, and the handler uploads it to S3. No client-side JavaScript required, which fits the HDA model well.

The HTML form:

use maud::{html, Markup};

fn upload_form() -> Markup {
    html! {
        form method="post" action="/files" enctype="multipart/form-data" {
            label {
                "Choose file"
                input type="file" name="file" required;
            }
            button type="submit" { "Upload" }
        }
    }
}

The handler:

use axum::{
    extract::{Multipart, State},
    response::{IntoResponse, Redirect},
};
use uuid::Uuid;

pub async fn upload_file(
    State(state): State<AppState>,
    mut multipart: Multipart,
) -> Result<impl IntoResponse, AppError> {
    let field = multipart
        .next_field()
        .await?
        .ok_or(AppError::BadRequest("no file provided".into()))?;

    let original_name = field
        .file_name()
        .unwrap_or("unnamed")
        .to_string();

    let content_type = field
        .content_type()
        .unwrap_or("application/octet-stream")
        .to_string();

    let data = field.bytes().await?;

    // Generate a unique key to avoid collisions
    let ext = original_name
        .rsplit('.')
        .next()
        .unwrap_or("bin");
    let key = format!("uploads/{}.{}", Uuid::new_v4(), ext);

    state
        .bucket
        .put_object_with_content_type(&key, &data, &content_type)
        .await
        .map_err(|e| AppError::Internal(format!("S3 upload failed: {e}")))?;

    // Store the key and original filename in your database
    // sqlx::query!("INSERT INTO files ...")

    Ok(Redirect::to("/files"))
}

field.bytes() reads the entire file into memory. This is fine for files up to a few megabytes (avatars, documents). For larger files, use the presigned URL approach described below.

The object key uses a UUID to avoid filename collisions and path traversal issues. Store the mapping between the generated key and the original filename in your database.
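One subtlety in the extension logic: rsplit('.') on a name with no dot yields the entire filename, so a file named README would get the key uploads/<uuid>.README. A stricter helper (a sketch; the name safe_extension and its limits are assumptions, not part of the handler above) validates the extension before it reaches the key:

```rust
/// Return a validated file extension, or "bin" when the name has no usable one.
/// A sketch: the limits (ASCII alphanumeric, at most 8 characters) are assumptions.
fn safe_extension(name: &str) -> &str {
    match name.rsplit_once('.') {
        Some((stem, ext))
            if !stem.is_empty()
                && !ext.is_empty()
                && ext.len() <= 8
                && ext.chars().all(|c| c.is_ascii_alphanumeric()) =>
        {
            ext
        }
        _ => "bin",
    }
}
```

The same consideration applies to the presigned-URL handler below, which derives the extension the same way.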

Adjusting the body size limit

Axum’s default body size limit is 2 MB. For file uploads, you’ll typically need to raise this on the upload route:

use axum::{extract::DefaultBodyLimit, routing::post, Router};

let app = Router::new()
    .route("/files", post(upload_file))
    .layer(DefaultBodyLimit::max(25 * 1024 * 1024)); // 25 MB

Apply the limit to specific routes rather than globally. A 25 MB limit on your file upload route is reasonable; the same limit on your login form is not.

Direct upload via presigned URL

For larger files, skip the server entirely. The server generates a presigned PUT URL, and the browser uploads directly to S3. This avoids buffering the file through your application server, reducing memory usage and latency.

The flow:

  1. The browser requests a presigned upload URL from your server.
  2. The server generates a presigned PUT URL with a short expiry.
  3. The browser uploads the file directly to S3 using that URL.
  4. The browser notifies the server that the upload is complete.

The handler that generates the presigned URL:

use axum::{extract::State, Json};
use serde::{Deserialize, Serialize};
use uuid::Uuid;

#[derive(Deserialize)]
pub struct PresignedUploadRequest {
    pub filename: String,
    pub content_type: String,
}

#[derive(Serialize)]
pub struct PresignedUploadResponse {
    pub upload_url: String,
    pub object_key: String,
}

pub async fn presigned_upload_url(
    State(state): State<AppState>,
    Json(req): Json<PresignedUploadRequest>,
) -> Result<Json<PresignedUploadResponse>, AppError> {
    let ext = req
        .filename
        .rsplit('.')
        .next()
        .unwrap_or("bin");
    let key = format!("uploads/{}.{}", Uuid::new_v4(), ext);

    let url = state
        .bucket
        .presign_put(&key, 3600, None, None)
        .await
        .map_err(|e| AppError::Internal(format!("presign failed: {e}")))?;

    Ok(Json(PresignedUploadResponse {
        upload_url: url,
        object_key: key,
    }))
}

The presigned URL is valid for 3600 seconds (one hour). The client uploads with a PUT request to that URL. No credentials are needed on the client side because the URL itself contains the authentication signature.
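The presigned PUT can also be exercised from the command line, which is handy for debugging (a sketch; the URL is a placeholder for the value returned by the handler):

```shell
# The Content-Type header sent here becomes the stored object's content type.
curl -X PUT \
  -H "Content-Type: application/pdf" \
  --upload-file report.pdf \
  "http://localhost:9000/uploads/<object-key>?X-Amz-Signature=..."
```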

On the client, a small amount of JavaScript handles the direct upload:

async function uploadFile(file) {
    // Step 1: Get a presigned URL from the server
    const res = await fetch("/files/presign", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            filename: file.name,
            content_type: file.type,
        }),
    });
    if (!res.ok) throw new Error("could not get presigned URL");
    const { upload_url, object_key } = await res.json();

    // Step 2: Upload directly to S3. fetch does not reject on HTTP error
    // statuses, so check the result before confirming.
    const putRes = await fetch(upload_url, {
        method: "PUT",
        headers: { "Content-Type": file.type },
        body: file,
    });
    if (!putRes.ok) throw new Error("upload to storage failed");

    // Step 3: Notify the server that the upload is complete
    await fetch("/files/confirm", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ object_key }),
    });
}

The confirmation step (step 3) is where the server records the file in the database. Without it, orphaned objects accumulate in S3 from abandoned or failed uploads. Note that a lifecycle rule can only expire objects by age and prefix, not by whether they were confirmed: either presign uploads into a staging prefix that a rule expires after 24 hours, copying objects to their permanent key on confirmation, or run a periodic cleanup job that deletes objects with no matching database record.
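With the MinIO client, an expiry rule of this kind might be created like so (a sketch: mc flag names have changed across versions, and the pending/ prefix is an assumption; confirmed objects must live outside whatever prefix the rule expires):

```shell
# Expire unconfirmed staging objects after one day.
mc ilm rule add rustfs/uploads --prefix "pending/" --expire-days 1
```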

Choosing between the two approaches

  • Simplicity: server-side multipart is simpler, using a standard HTML form with no JavaScript. Presigned URLs require JavaScript for the upload flow.
  • Server load: with multipart, the file passes through your server and memory use is proportional to file size. With presigned URLs the file goes directly to S3; the server only generates a URL.
  • File size: multipart is practical up to roughly 25 MB. Presigned URLs work for files of any size.
  • Progress tracking: multipart requires SSE or polling for progress on large files. Direct uploads can report progress client-side via XMLHttpRequest.
  • HDA fit: multipart works naturally with forms and htmx. Presigned URLs require a small JavaScript module.

Use server-side multipart as the default for typical uploads (documents, images, avatars). Switch to presigned URLs when files are large enough that buffering them through the server becomes a problem.

Serving files to users

Presigned GET URLs

The simplest way to serve a file: generate a presigned GET URL and redirect the browser to it.

pub async fn download_file(
    State(state): State<AppState>,
    Path(file_id): Path<Uuid>,
) -> Result<impl IntoResponse, AppError> {
    // Look up the file record in the database
    let file = sqlx::query_as!(
        FileRecord,
        "SELECT object_key, original_name FROM files WHERE id = $1",
        file_id
    )
    .fetch_optional(&state.db)
    .await?
    .ok_or(AppError::NotFound("file not found".into()))?;

    let url = state
        .bucket
        .presign_get(&file.object_key, 3600, None)
        .await
        .map_err(|e| AppError::Internal(format!("presign failed: {e}")))?;

    Ok(Redirect::temporary(&url))
}

The presigned URL expires after an hour. The browser follows the redirect and downloads directly from S3. This keeps file serving load off your application server.

Controlling Content-Disposition

By default, browsers display files inline if they can (images, PDFs). To force a download with the original filename, pass response-content-disposition as a query parameter on the presigned URL:

use std::collections::HashMap;

pub async fn download_file_as_attachment(
    State(state): State<AppState>,
    Path(file_id): Path<Uuid>,
) -> Result<impl IntoResponse, AppError> {
    let file = sqlx::query_as!(
        FileRecord,
        "SELECT object_key, original_name FROM files WHERE id = $1",
        file_id
    )
    .fetch_optional(&state.db)
    .await?
    .ok_or(AppError::NotFound("file not found".into()))?;

    let mut queries = HashMap::new();
    queries.insert(
        "response-content-disposition".to_string(),
        format!("attachment; filename=\"{}\"", file.original_name),
    );

    let url = state
        .bucket
        .presign_get(&file.object_key, 3600, Some(queries))
        .await
        .map_err(|e| AppError::Internal(format!("presign failed: {e}")))?;

    Ok(Redirect::temporary(&url))
}

Use attachment when the user explicitly clicks a download link. Use inline (or omit the header) when displaying an image or PDF in the browser.
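One wrinkle worth guarding: original_name is interpolated into a quoted header value, so quotes, backslashes, or control characters in a stored name can corrupt the header. A minimal sanitiser (a sketch; the name attachment_disposition is an assumption, and full RFC 6266 handling with the filename* parameter is needed for non-ASCII names):

```rust
/// Build a Content-Disposition value with a sanitised filename.
/// A sketch: strips characters that would break the quoted-string form.
fn attachment_disposition(name: &str) -> String {
    let safe: String = name
        .chars()
        .filter(|c| !c.is_control() && *c != '"' && *c != '\\')
        .collect();
    format!("attachment; filename=\"{safe}\"")
}
```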

Proxy handler for access-controlled files

Presigned URLs are convenient but have a limitation: once generated, anyone with the URL can access the file until it expires. For files that require per-request access control (private documents, paid content), proxy the file through your server instead.

use axum::{
    body::Body,
    http::{header, StatusCode},
    response::Response,
};

pub async fn serve_private_file(
    State(state): State<AppState>,
    Path(file_id): Path<Uuid>,
    user: AuthenticatedUser,
) -> Result<Response, AppError> {
    let file = sqlx::query_as!(
        FileRecord,
        "SELECT object_key, original_name, content_type, owner_id FROM files WHERE id = $1",
        file_id
    )
    .fetch_optional(&state.db)
    .await?
    .ok_or(AppError::NotFound("file not found".into()))?;

    // Check access
    if file.owner_id != user.id {
        return Err(AppError::Forbidden("not your file".into()));
    }

    let response = state
        .bucket
        .get_object(&file.object_key)
        .await
        .map_err(|e| AppError::Internal(format!("S3 get failed: {e}")))?;

    Ok(Response::builder()
        .status(StatusCode::OK)
        .header(header::CONTENT_TYPE, &file.content_type)
        .header(
            header::CONTENT_DISPOSITION,
            format!("inline; filename=\"{}\"", file.original_name),
        )
        .body(Body::from(response.to_vec()))
        .unwrap())
}

This approach loads the entire file into memory before sending it to the client. For large files behind access control, consider generating a short-lived presigned URL (60 seconds) after the access check passes, then redirecting. This gives you per-request authorisation without proxying the bytes.

Image thumbnails

For image-heavy applications (galleries, user avatars), serve resized thumbnails instead of full-size originals. Generate thumbnails at upload time and store them as separate S3 objects.

use image::imageops::FilterType;
use std::io::Cursor;

fn generate_thumbnail(data: &[u8], max_dimension: u32) -> Result<Vec<u8>, image::ImageError> {
    let img = image::load_from_memory(data)?;
    let thumb = img.resize(max_dimension, max_dimension, FilterType::Lanczos3);

    let mut buf = Vec::new();
    thumb.write_to(&mut Cursor::new(&mut buf), image::ImageFormat::WebP)?;
    Ok(buf)
}

Add the image crate to your dependencies:

[dependencies]
image = { version = "0.25", default-features = false, features = ["webp", "jpeg", "png"] }

Store the thumbnail alongside the original:

let thumb_data = generate_thumbnail(&data, 300)?;
let thumb_key = format!("uploads/thumb_{}.webp", file_id);

state
    .bucket
    .put_object_with_content_type(&thumb_key, &thumb_data, "image/webp")
    .await?;

WebP produces smaller files than JPEG at equivalent quality. Disable default features on the image crate and enable only the formats you need, as the full feature set pulls in decoders you won’t use.

For applications where thumbnail generation is slow or needs to handle many formats, move the processing to a background job via Restate and update the database record when the thumbnail is ready.

Deleting files

When a user deletes a record that has an associated file, delete the S3 object as well:

pub async fn delete_file(
    State(state): State<AppState>,
    Path(file_id): Path<Uuid>,
) -> Result<impl IntoResponse, AppError> {
    let file = sqlx::query_as!(
        FileRecord,
        "SELECT object_key FROM files WHERE id = $1",
        file_id
    )
    .fetch_optional(&state.db)
    .await?
    .ok_or(AppError::NotFound("file not found".into()))?;

    state
        .bucket
        .delete_object(&file.object_key)
        .await
        .map_err(|e| AppError::Internal(format!("S3 delete failed: {e}")))?;

    sqlx::query!("DELETE FROM files WHERE id = $1", file_id)
        .execute(&state.db)
        .await?;

    Ok(Redirect::to("/files"))
}

Delete the S3 object before the database record. If the S3 deletion fails, the database record remains and you can retry. If you delete the database record first and the S3 deletion fails, you have an orphaned object with no reference to it.

Database schema for file records

A minimal files table to track uploaded objects:

CREATE TABLE files (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    object_key TEXT NOT NULL,
    original_name TEXT NOT NULL,
    content_type TEXT NOT NULL,
    size_bytes BIGINT NOT NULL,
    owner_id UUID NOT NULL REFERENCES users(id),
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX idx_files_owner ON files(owner_id);

The object_key is the S3 path. The original_name is what the user uploaded. Keep both: the key for S3 operations, the name for display and download headers.

Production providers

The S3 API is a de facto standard. The Region::Custom configuration in rust-s3 means any compliant provider works without code changes.

Hetzner Object Storage is a good default for European deployments. EUR 4.99/month includes 1 TB of storage and 1 TB of egress. Three EU regions: Falkenstein (fsn1), Nuremberg (nbg1), and Helsinki (hel1). Endpoints follow the pattern https://{region}.your-objectstorage.com. Generate access keys from the Hetzner Cloud Console.

Other S3-compatible providers worth considering:

  • Cloudflare R2: No egress fees. Good for globally distributed read-heavy workloads.
  • Backblaze B2: Cheap storage at $6/TB/month. Free egress to Cloudflare via the Bandwidth Alliance.
  • AWS S3: The original. More expensive, but the widest feature set and the most mature tooling ecosystem.
  • DigitalOcean Spaces: Simple pricing, CDN included. $5/month for 250 GB.

Switching providers means changing four environment variables. No code changes required.

Gotchas

Set Content-Type on upload. If you upload without setting the content type, S3 defaults to application/octet-stream. Browsers then download the file instead of displaying it inline, even for images and PDFs. Always pass the content type from the upload form to put_object_with_content_type.

Generate unique object keys. Never use the original filename as the S3 key. Users upload files named document.pdf constantly. Use a UUID or similar unique identifier. This also prevents path traversal attacks where a crafted filename like ../../etc/passwd could cause problems.

Handle the 2 MB default body limit. Axum rejects request bodies larger than 2 MB by default. If your upload handler returns a 413 Payload Too Large error, you forgot to raise the limit with DefaultBodyLimit::max(). Apply the higher limit only to upload routes.

Clean up orphaned objects. Presigned URL uploads can be abandoned partway through. Failed server-side uploads might write to S3 but crash before recording the database entry. Set an S3 lifecycle rule that expires a staging prefix after 24 hours, moving confirmed objects out of that prefix, or run a periodic cleanup job that compares S3 contents against database records.

Delete S3 objects before database records. If the S3 delete fails, the database record survives and you can retry. The reverse order leaves orphaned objects you can’t find.

Watch for CORS with presigned URLs. When using direct browser uploads, the S3 endpoint must return appropriate CORS headers. Configure CORS on the bucket to allow PUT requests from your application’s origin. RustFS and most S3 providers support bucket-level CORS configuration.
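A bucket CORS policy for direct browser uploads might look like the following (a sketch in the JSON shape accepted by aws s3api put-bucket-cors; the origin is a placeholder, and the exact configuration mechanism varies by provider):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://app.example.com"],
      "AllowedMethods": ["PUT"],
      "AllowedHeaders": ["Content-Type"],
      "MaxAgeSeconds": 3600
    }
  ]
}
```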

Don’t store files in the database. It’s tempting to store small files as BYTEA columns in PostgreSQL. Resist this. It bloats your database, makes backups slower, and prevents you from using CDN or S3 features like presigned URLs. Object storage exists for a reason.