Server-Sent Events (SSE) push data from server to browser over a single HTTP connection. The browser opens a persistent connection, the server writes events to it as they occur, and the browser receives them in real time. No polling, no WebSocket handshake, no bidirectional protocol negotiation. For the class of problems that dominate hypermedia applications (notifications, progress bars, live feeds, status updates), SSE is the right tool.
This section covers implementing SSE endpoints in Axum, using Valkey pub/sub as the event distribution layer, consuming events in the browser with the htmx SSE extension, and common patterns for real-time features.
SSE fundamentals
SSE uses a simple text-based protocol over HTTP. The server responds with Content-Type: text/event-stream and writes events as plain text:
event: notification
data: <div class="alert">New message from Alice</div>

event: status
data: <span class="status">Processing complete</span>
Each event carries an optional event: name and a data: payload, and a blank line terminates each event. The browser’s EventSource API connects to the endpoint, receives events, and dispatches them by name. If the connection drops, the browser reconnects automatically.
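The wire format is simple enough to sketch by hand. The following serializer is illustrative (it is not axum's implementation): each line of the payload becomes its own data: field, and a blank line ends the event.

```rust
/// Minimal sketch of SSE wire-format serialization (illustrative,
/// not axum's implementation). Each payload line becomes its own
/// `data:` field; a blank line terminates the event.
fn serialize_sse_event(name: Option<&str>, data: &str) -> String {
    let mut out = String::new();
    if let Some(name) = name {
        out.push_str("event: ");
        out.push_str(name);
        out.push('\n');
    }
    for line in data.lines() {
        out.push_str("data: ");
        out.push_str(line);
        out.push('\n');
    }
    out.push('\n'); // blank line ends the event
    out
}
```

This is the same splitting behaviour Axum's Event::data performs for you, covered below.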
SSE vs WebSockets
SSE is unidirectional: server to client only. WebSockets are bidirectional. Pick based on the data flow:
| Use SSE when | Use WebSockets when |
|---|---|
| Server pushes updates to the browser | Client and server both send messages |
| Notifications, progress bars, live feeds | Chat, collaborative editing, gaming |
| HTML fragments for htmx to swap | Binary data or high-frequency bidirectional messaging |
| You want HTTP semantics (caching, auth, proxies) | You need a persistent bidirectional channel |
SSE works over standard HTTP, which means it passes through proxies, load balancers, and CDNs without special configuration. WebSockets require upgrade support at every layer. For HDA applications where the server renders HTML and pushes fragments to the browser, SSE is the natural fit.
SSE endpoints in Axum
Axum provides first-class SSE support through axum::response::sse. The key types are Sse (the response wrapper), Event (a single event), and KeepAlive (heartbeat configuration).
A minimal SSE endpoint
use axum::response::sse::{Event, KeepAlive, Sse};
use futures_util::stream::Stream;
use std::convert::Infallible;
use tokio_stream::StreamExt as _; // provides .map() and .throttle()
async fn events() -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
let stream = futures_util::stream::repeat_with(|| {
Event::default()
.event("heartbeat")
.data("alive")
})
.map(Ok)
.throttle(std::time::Duration::from_secs(5));
Sse::new(stream).keep_alive(KeepAlive::default())
}
The handler returns Sse<impl Stream<Item = Result<Event, E>>> where E: Into<BoxError>. Axum sets Content-Type: text/event-stream and Cache-Control: no-cache automatically.
Building events
Event uses a builder pattern:
// Named event with HTML data
Event::default()
.event("notification")
.data("<div class=\"alert\">New message</div>")
// Event with an ID (for reconnection tracking)
Event::default()
.event("update")
.id("42")
.data("<span>Updated value</span>")
// Retry interval hint (tells the browser how long to wait before reconnecting)
Event::default()
.retry(std::time::Duration::from_secs(5))
.data("connected")
Each setter (event, data, id, retry) can only be called once per Event. Calling it twice panics. The data method handles newlines in the payload correctly, splitting them across multiple data: lines per the SSE specification.
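The id field powers resumption: on reconnect, the browser sends the last id it received in a Last-Event-ID request header, which a handler can use to decide where to resume the stream. A minimal parsing sketch, assuming ids are monotonically increasing integers (header extraction itself is left to your framework):

```rust
/// Parse a Last-Event-ID header value into a numeric event id.
/// Sketch only: assumes ids are monotonically increasing integers.
/// Returns None for a missing or malformed header, in which case
/// the handler should fall back to streaming from "now".
fn resume_point(last_event_id: Option<&str>) -> Option<u64> {
    last_event_id?.trim().parse().ok()
}
```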
Keep-alive
Proxies and load balancers close idle HTTP connections. KeepAlive sends periodic SSE comment lines (lines starting with a colon, which browsers ignore) to keep the connection open:
Sse::new(stream).keep_alive(
KeepAlive::new()
.interval(std::time::Duration::from_secs(15))
.text("keepalive")
)
The default interval is 15 seconds. Always call .keep_alive() in production. Without it, connections through Nginx, Cloudflare, or other proxies will be silently dropped after their idle timeout.
Valkey pub/sub as the event bus
A single SSE endpoint connected to a single data source works for toy examples. In practice, events originate from many places (a background job finishes, another user edits a record, a workflow reaches a new stage) and each SSE client only cares about a subset of them. Valkey pub/sub provides the distribution layer, with per-resource channels as the organising principle.
Per-resource channels
Structure Valkey channels around the resources that generate events:
order:123 – status changes for order 123
project:456 – activity on project 456
user:789:notifications – notifications for user 789
task:abc:progress – progress updates for background task abc
Any part of your application publishes to the relevant channel. Each SSE connection subscribes only to the channels it needs. This maps naturally to how HDA pages work: a page showing order 123 opens an SSE connection that subscribes to order:123. A dashboard subscribes to several channels at once.
This design also plays to Valkey’s strengths. PUBLISH is O(N) where N is the number of subscribers on that specific channel. Many channels with a handful of subscribers each is fast; a single channel with thousands of subscribers makes every publish slow. The Valkey documentation is explicit on this point: prefer many fine-grained channels over a few broad ones.
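Centralising channel-name construction in small helpers keeps publishers and subscribers in agreement about the naming scheme. A sketch following the convention above (the helper names themselves are hypothetical):

```rust
/// Channel-name helpers so publishers and subscribers never drift.
/// Naming follows the per-resource convention described above.
fn order_channel(order_id: i64) -> String {
    format!("order:{order_id}")
}

fn user_notifications_channel(user_id: i64) -> String {
    format!("user:{user_id}:notifications")
}

fn task_progress_channel(task_id: &str) -> String {
    format!("task:{task_id}:progress")
}
```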
The architecture
[Handler A] ──publish──▶ Valkey channel: order:123 ──subscribe──▶ [SSE Client 1]
[Handler B] ──publish──▶ Valkey channel: project:456 ──subscribe──▶ [SSE Client 2]
[Restate] ──publish──▶ Valkey channel: task:abc ──subscribe──▶ [SSE Client 1]
Each SSE connection opens its own Valkey pub/sub subscriber and subscribes to the specific channels the authenticated user is authorised to see. When the SSE client disconnects, the Valkey connection drops and the subscriptions are cleaned up automatically.
Dependencies
[dependencies]
axum = "0.8"
redis = { version = "1.0", features = ["tokio-comp"] }
tokio = { version = "1", features = ["full"] }
tokio-stream = "0.1"
futures-util = "0.3"
async-stream = "0.3"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
The redis crate works with Valkey without modification. Valkey is API-compatible with Redis, so any Redis client library works as-is.
Application state
#[derive(Clone)]
pub struct AppState {
pub db: sqlx::PgPool,
pub valkey: redis::Client,
}
Store a redis::Client rather than a connection. The client is a lightweight handle that creates new connections on demand. Each SSE handler will create its own pub/sub connection from this client.
Publishing events
Any handler or background process publishes events to a resource-specific channel:
use redis::AsyncCommands;
pub async fn publish_event(
valkey: &redis::Client,
channel: &str,
event_type: &str,
html: &str,
) -> Result<(), redis::RedisError> {
let mut conn = valkey.get_multiplexed_async_connection().await?;
let payload = serde_json::json!({
"event_type": event_type,
"data": html
});
conn.publish(channel, payload.to_string()).await?;
Ok(())
}
// Example: order status changed
publish_event(
&state.valkey,
"order:123",
"status",
"<span class=\"badge\">Shipped</span>",
).await?;
The publisher uses a regular multiplexed connection. Multiplexed connections are shared across callers, so you do not need to manage a connection pool for publishing. The subscriber connection is separate because Valkey requires a dedicated connection for subscriptions (it cannot execute other commands while subscribed).
The SSE handler
Each SSE handler authenticates the user, checks authorisation for the requested resource, then opens a dedicated Valkey subscriber for that resource’s channel:
use async_stream::try_stream;
use axum::extract::{Path, State};
use axum::response::sse::{Event, KeepAlive, Sse};
use futures_util::stream::Stream;
use futures_util::StreamExt;
use std::convert::Infallible;
pub async fn order_events(
State(state): State<AppState>,
Path(order_id): Path<i64>,
user: AuthenticatedUser,
) -> Result<Sse<impl Stream<Item = Result<Event, Infallible>>>, AppError> {
// Verify the user has access to this order
let has_access = check_order_access(&state.db, user.id, order_id).await?;
if !has_access {
return Err(AppError::Forbidden);
}
let channel = format!("order:{order_id}");
let client = state.valkey.clone();
let stream = try_stream! {
// Open a dedicated pub/sub connection for this SSE client
let mut pubsub = client.get_async_pubsub().await.unwrap();
pubsub.subscribe(&channel).await.unwrap();
let mut messages = pubsub.into_on_message();
while let Some(msg) = messages.next().await {
let payload: String = msg.get_payload().unwrap();
if let Ok(event) = serde_json::from_str::<serde_json::Value>(&payload) {
let event_type = event["event_type"].as_str().unwrap_or("update");
let data = event["data"].as_str().unwrap_or("");
yield Event::default()
.event(event_type)
.data(data);
}
}
};
Ok(Sse::new(stream).keep_alive(KeepAlive::default()))
}
The authorisation check happens before the stream is created. If the user does not have access, the handler returns an error and no Valkey connection is opened. Once the SSE client disconnects (browser navigates away, tab closes, element removed from DOM), the stream is dropped, which drops the Valkey connection and automatically unsubscribes.
Subscribing to multiple channels
A page that needs events from several resources subscribes to all of them on a single connection:
pub async fn dashboard_events(
State(state): State<AppState>,
user: AuthenticatedUser,
) -> Result<Sse<impl Stream<Item = Result<Event, Infallible>>>, AppError> {
// Determine which resources this user should receive updates for
let channels = get_user_subscriptions(&state.db, user.id).await?;
let client = state.valkey.clone();
let stream = try_stream! {
let mut pubsub = client.get_async_pubsub().await.unwrap();
// Subscribe to all channels at once
for channel in &channels {
pubsub.subscribe(channel).await.unwrap();
}
let mut messages = pubsub.into_on_message();
while let Some(msg) = messages.next().await {
let channel_name = msg.get_channel_name().to_string();
let payload: String = msg.get_payload().unwrap();
if let Ok(event) = serde_json::from_str::<serde_json::Value>(&payload) {
let event_type = event["event_type"].as_str().unwrap_or("update");
let data = event["data"].as_str().unwrap_or("");
yield Event::default()
.event(event_type)
.data(data);
}
}
};
Ok(Sse::new(stream).keep_alive(KeepAlive::default()))
}
The get_user_subscriptions function queries your database for the resources the user has access to and returns channel names like ["project:12", "project:45", "user:789:notifications"]. A single Valkey pub/sub connection can subscribe to any number of channels.
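The shape of get_user_subscriptions depends on your schema. One plausible sketch maps rows of (resource kind, id) that the user can access into channel names, then appends the user's personal notification channel (the row shape here is hypothetical):

```rust
/// Sketch of mapping access-control rows to channel names.
/// The (kind, id) row shape is hypothetical; substitute the
/// results of your own authorisation query.
fn channels_for_user(rows: &[(&str, i64)], user_id: i64) -> Vec<String> {
    let mut channels: Vec<String> = rows
        .iter()
        .map(|(kind, id)| format!("{kind}:{id}"))
        .collect();
    // Every user also gets their personal notification channel.
    channels.push(format!("user:{user_id}:notifications"));
    channels
}
```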
Wiring it together
use axum::{routing::get, Router};
#[tokio::main]
async fn main() {
let valkey = redis::Client::open("redis://127.0.0.1:6379").unwrap();
let state = AppState {
db: pool,
valkey,
};
let app = Router::new()
.route("/events/orders/{id}", get(order_events))
.route("/events/dashboard", get(dashboard_events))
.with_state(state);
let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
axum::serve(listener, app).await.unwrap();
}
No global background tasks needed. Each SSE connection manages its own Valkey subscription lifecycle.
Security
SSE connections carry the same security requirements as any other authenticated endpoint, with additional considerations for long-lived connections.
Authentication
SSE uses a regular HTTP GET request. The browser’s EventSource API automatically sends cookies, so cookie-based session authentication works without any extra configuration. This is the recommended approach for HDA applications.
EventSource does not support custom HTTP headers. If your application uses Authorization: Bearer tokens, you cannot pass them through EventSource. Workarounds exist (tokens in query parameters, fetch-based SSE libraries), but they introduce their own risks. Stick with session cookies.
Authorisation at subscription time
Verify access before opening the Valkey subscription. The handler should:
- Authenticate the user from the session cookie.
- Extract the resource identifier from the request (path parameter, query parameter).
- Check that the user has permission to view the resource.
- Only then open the Valkey subscriber and begin streaming.
The order_events handler above demonstrates this pattern. The authorisation check is a standard database query, the same check you would run on a regular page load for that resource.
Per-resource channels provide security by architecture. Each SSE connection only receives messages from channels it explicitly subscribed to, and subscription is gated by a server-side authorisation check. There is no filtering step that could be bypassed or implemented incorrectly.
Do not use a single broadcast channel where all events flow to all connections and rely on server-side filtering. Every message passes through every connection’s filter logic, and any bug in that filter leaks data to unauthorised users.
Long-lived connection re-authorisation
SSE connections can persist for hours. Permissions change: a user’s role is downgraded, a project is archived, a session expires. The SSE connection established before the change continues streaming events unless you actively close it.
Strategies for handling this:
- Periodic session validation. Check the user’s session in the stream loop every N minutes. If the session is expired or revoked, close the stream.
- Revocation events. When a permission change occurs, publish a control event (e.g., to channel user:{id}:control) that the SSE handler listens for and uses to close the connection.
- Short-lived connections. Set sse-close on a timer event and have the browser reconnect periodically. Each reconnection runs the full authorisation check.
The simplest approach for most applications is periodic session validation. Add it to the stream loop:
let stream = try_stream! {
let mut pubsub = client.get_async_pubsub().await.unwrap();
pubsub.subscribe(&channel).await.unwrap();
let mut messages = pubsub.into_on_message();
let mut last_auth_check = std::time::Instant::now();
loop {
// Re-check authorisation every 5 minutes
if last_auth_check.elapsed() > std::time::Duration::from_secs(300) {
let still_valid = check_order_access(&db, user_id, order_id).await;
if !still_valid.unwrap_or(false) {
break;
}
last_auth_check = std::time::Instant::now();
}
match tokio::time::timeout(
std::time::Duration::from_secs(30),
messages.next()
).await {
Ok(Some(msg)) => {
// ... yield event as before
}
Ok(None) => break,
Err(_) => continue, // Timeout, loop back to re-check auth
}
}
};
The tokio::time::timeout ensures the loop does not block indefinitely waiting for a message, giving the authorisation check a chance to run even when the channel is quiet.
CSRF
An attacker’s page can create an EventSource pointing at your SSE endpoint. The victim’s browser sends cookies automatically, so the attacker’s page receives the victim’s event stream.
Mitigate this with SameSite=Lax or SameSite=Strict on session cookies. SameSite=Lax is the default in most modern browsers and prevents cookies from being sent on cross-origin subresource requests, which includes EventSource connections initiated from a different origin. If your session cookies already use SameSite=Lax (they should), this attack is blocked.
As a defence-in-depth measure, validate the Origin header on SSE endpoints and reject requests from unexpected origins.
Connection exhaustion
Each SSE connection consumes a TCP connection, a file descriptor, and a Tokio task. An attacker could open thousands of connections to exhaust server resources.
Rate-limit SSE connections per user and per IP at your reverse proxy layer. Caddy and Nginx both support connection limits. Also set a maximum connection duration server-side (close and let the browser reconnect after a reasonable period, e.g., 30 minutes).
Consuming events with htmx
The htmx SSE extension connects to an SSE endpoint and swaps event data into the DOM. The Interactivity with htmx section covers the basic setup. Here is the full pattern.
Connecting and swapping
Place hx-ext="sse" and sse-connect on a container element. Child elements with sse-swap receive the data from matching event names:
div hx-ext="sse" sse-connect="/events/orders/123" {
// Replaced when the server sends an event named "status"
div sse-swap="status" hx-swap="innerHTML" {
"Loading status..."
}
// Replaced on "activity" events
div sse-swap="activity" hx-swap="innerHTML" {
"No recent activity."
}
}
When the server sends event: status\ndata: <span>Shipped</span>\n\n, htmx takes the data payload and swaps it into the element with sse-swap="status". The event name in the SSE stream must exactly match the sse-swap attribute value (case-sensitive).
Using SSE events as triggers
Instead of swapping SSE data directly, use an event as a trigger for a standard htmx request. This is useful when the SSE event signals “something changed” but the actual content comes from a separate endpoint:
div hx-ext="sse" sse-connect="/events/orders/123" {
// When "updated" fires, fetch fresh order details
div hx-get="/orders/123/details"
hx-trigger="sse:updated"
hx-swap="innerHTML" {
"Loading order details..."
}
}
This pattern keeps the SSE payload minimal (just a signal) and lets the triggered request fetch exactly the content it needs.
Closing the connection
The sse-close attribute closes the EventSource when a specific event arrives:
div hx-ext="sse" sse-connect="/events/tasks/abc/progress" sse-close="complete" {
div sse-swap="progress" {
"Starting..."
}
}
When the server sends event: complete\ndata: done\n\n, the browser closes the SSE connection. Without sse-close, the connection stays open until the element is removed from the DOM or the page navigates away.
Reconnection behaviour
The htmx SSE extension reconnects automatically with exponential backoff. The default configuration starts at 500ms and backs off to a maximum of 60 seconds, with 30% jitter to avoid thundering herd reconnections. It attempts up to 50 reconnections before giving up.
The browser’s native EventSource also reconnects on its own, but the htmx extension’s backoff algorithm is more configurable and better suited to production use. Each reconnection is a new HTTP request, so it runs through authentication and authorisation again.
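The exact numbers above are the extension's defaults; the shape of the algorithm is easy to sketch. The following is illustrative backoff logic, not the extension's actual code: double from a base delay on each failed attempt, cap at a maximum, and (omitted here for determinism) add jitter on top.

```rust
use std::time::Duration;

/// Illustrative exponential backoff (not htmx's actual code):
/// doubles from `base` on each attempt, capped at `max`.
/// Real implementations add random jitter on top of this value
/// to avoid thundering-herd reconnections.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    let exp = base.saturating_mul(2u32.saturating_pow(attempt));
    exp.min(max)
}
```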
Patterns
Progress updates
A long-running operation (file upload, report generation, data import) publishes progress events to a task-specific channel. The browser shows a progress bar that updates in real time.
Server-side, publish progress from wherever the work happens:
pub async fn publish_progress(
valkey: &redis::Client,
task_id: &str,
percent: u32,
message: &str,
) -> Result<(), anyhow::Error> {
let html = format!(
r#"<div class="progress-bar" style="width: {percent}%">{percent}%</div>
<p>{message}</p>"#
);
publish_event(valkey, &format!("task:{task_id}"), "progress", &html).await?;
Ok(())
}
When the task finishes, publish a completion event:
publish_event(valkey, &format!("task:{task_id}"), "complete", "<p>Done.</p>").await?;
Client-side, connect to the task’s SSE endpoint:
fn progress_tracker(task_id: &str) -> Markup {
html! {
div hx-ext="sse"
sse-connect=(format!("/events/tasks/{task_id}/progress"))
sse-close="complete" {
div sse-swap="progress" {
div .progress-bar style="width: 0%" { "0%" }
p { "Starting..." }
}
}
}
}
The sse-close="complete" attribute closes the connection when the task finishes, leaving no lingering connections.
Notifications
A notification feed that updates across all open tabs for a user. The SSE connection subscribes to the user’s notification channel:
div #notifications hx-ext="sse" sse-connect="/events/notifications" {
div sse-swap="notification" hx-swap="afterbegin" {
// New notifications prepended here
}
}
The hx-swap="afterbegin" prepends each new notification at the top of the container rather than replacing the entire contents. Each notification event delivers a self-contained HTML fragment.
The handler for /events/notifications subscribes to user:{user_id}:notifications, determined from the authenticated session.
Live data feeds
A dashboard element that refreshes when underlying data changes. Rather than streaming the data itself, use SSE as a signal to re-fetch:
div hx-ext="sse" sse-connect="/events/dashboard" {
div hx-get="/dashboard/metrics"
hx-trigger="sse:metrics-updated"
hx-swap="innerHTML" {
(render_metrics(&current_metrics))
}
}
This pattern is simpler than pushing full HTML through the SSE stream, and it works well when the triggered endpoint already exists for initial page load.
Connection management
One Valkey connection per SSE client
Each SSE connection opens its own Valkey pub/sub connection. This is the simplest correct architecture. Valkey handles thousands of concurrent connections without issue. For most applications (hundreds of concurrent SSE clients), no further optimisation is needed.
If you reach tens of thousands of concurrent SSE connections on a single server process and Valkey connection count becomes a concern, introduce a subscription manager that deduplicates: when multiple SSE clients on the same server need the same channel, the manager subscribes once and fans out in-process via a tokio::broadcast channel. This is an optimisation, not a starting point.
Browser connection limits
Browsers limit the number of concurrent HTTP connections per domain. For HTTP/1.1, this limit is typically 6 connections. Each SSE connection consumes one of those slots. If a user opens multiple tabs, they can exhaust the limit quickly.
HTTP/2 multiplexes streams over a single TCP connection, so this limit does not apply. Most modern deployments use HTTP/2. If your reverse proxy (Caddy, Nginx) terminates TLS and serves over HTTP/2, this is a non-issue.
For pages that need events from many resources, prefer a single SSE connection that subscribes to multiple Valkey channels (as shown in the dashboard example) over multiple SSE connections from the same page.
Scaling beyond a single server
Per-resource channels work naturally across multiple application servers. When a handler on server A publishes to order:123, every server with a subscriber on that channel receives the message and delivers it to its local SSE clients. No coordination between servers is required because Valkey handles the distribution.
Integration with Restate
Restate workflows can publish progress events to Valkey as they execute. The SSE infrastructure picks them up and delivers them to the browser. A Restate workflow handler publishes status updates at each stage:
// Inside a Restate workflow step
publish_event(
&valkey_client,
&format!("task:{workflow_id}"),
"progress",
"<p>Step 2 of 4: Processing data...</p>",
).await?;
The browser connects to /events/tasks/{workflow_id}/progress with sse-close="complete" and sees each step’s status in real time. The Background Jobs and Durable Execution with Restate section covers Restate in detail.
When SSE is not enough
SSE is unidirectional. If your application requires the client to send messages back to the server over the same connection (chat with typing indicators, collaborative cursors, multiplayer game state), you need WebSockets.
Axum supports WebSocket upgrades directly. The tokio-tungstenite crate provides the underlying WebSocket implementation, and Axum’s axum::extract::ws module wraps it with an ergonomic API. For most HDA applications, SSE covers the real-time requirements. Reach for WebSockets only when you have a genuinely bidirectional communication need.
Gotchas
Valkey pub/sub is fire-and-forget. Messages are not persisted. If a subscriber is disconnected when a message is published, that message is lost. If you need guaranteed delivery, use Valkey Streams instead of pub/sub, or design your application so that a missed SSE event triggers a full refresh on reconnection.
The subscriber connection is dedicated. A Valkey connection in subscribe mode cannot execute other commands (GET, SET, PUBLISH). Each SSE client uses its own dedicated subscriber connection. Publishing happens on a separate multiplexed connection.
Avoid pattern subscriptions at scale. PSUBSCRIBE (e.g., user:42:*) is convenient but has a global performance cost. Every PUBLISH to any channel pays O(M) where M is the total number of active pattern subscriptions across all clients. With hundreds of concurrent SSE connections each using pattern subscriptions, every publish slows down. Prefer explicit SUBSCRIBE to the specific channels each client needs.
Proxy timeouts can close idle connections. Nginx defaults to a 60-second proxy read timeout. Caddy and Cloudflare have their own defaults. KeepAlive sends heartbeat comments to prevent this, but verify your proxy configuration allows long-lived connections. For Nginx, set proxy_read_timeout to a high value (3600s or more) on SSE endpoints.
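For Nginx, a location block along these lines is a reasonable starting point (illustrative; the upstream name and path are placeholders for your own configuration). Disabling proxy buffering matters as much as the timeout, since buffered responses delay event delivery:

```nginx
location /events/ {
    proxy_pass http://app;          # hypothetical upstream
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;            # deliver events immediately
    proxy_cache off;
    proxy_read_timeout 3600s;       # allow long-lived SSE connections
}
```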
SSE connections count against server resources. Each connected client holds an open TCP connection, a Valkey connection, and a Tokio task. For hundreds of concurrent connections this is negligible. At tens of thousands, monitor memory and file descriptor usage on both your application server and Valkey.
Event names cannot contain newlines. The Event::event() method panics if the name contains \n or \r. Event names should be simple identifiers like status, progress, or updated.