Cache Poisoning in Axum (Rust)
Cache Poisoning in Axum with Rust — how this specific combination creates or exposes the vulnerability
Cache poisoning in Axum with Rust occurs when an attacker manipulates cache keys or cacheable responses so that malicious or incorrect data is served to other users. Axum routes are typically composed of layers, and if caching layers or application logic do not strictly isolate data by request context, one user’s response can be stored under a key that another user’s request will inadvertently match.
Because Axum is strongly typed and relies on Rust’s ownership model, developers may assume memory-safety guarantees alone prevent data leakage. However, logical flaws—such as using only path parameters for cache keys while neglecting query parameters, headers, or user identifiers—can cause cross-user contamination. For example, if an endpoint that includes an authenticated user’s ID in the response is cached based solely on the public path, a poisoned cache entry can be served to unrelated users, exposing private information or causing privilege confusion.
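As a minimal sketch of this flaw (the function and path names here are illustrative, not from any real codebase), a cache key derived only from the request path collides across users, so one user's authenticated response is served to another:

```rust
use std::collections::HashMap;

// Vulnerable: the key ignores everything except the path.
fn naive_cache_key(path: &str) -> String {
    path.to_string()
}

fn main() {
    let mut cache: HashMap<String, String> = HashMap::new();

    // Alice's authenticated response is cached under the shared key.
    cache.insert(
        naive_cache_key("/api/profile"),
        "alice's private data".to_string(),
    );

    // Bob requests the same path and hits Alice's entry.
    let served = cache.get(&naive_cache_key("/api/profile")).unwrap();
    println!("served to bob: {}", served);
}
```

Because the key carries no user identifier, every request for `/api/profile` resolves to the same entry regardless of who is authenticated.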
OpenAPI/Swagger specifications analyzed by middleBrick can highlight whether cache-control directives and route parameters are aligned with runtime behavior. When specs define caching rules but the implementation does not enforce strict key segregation, the unauthenticated attack surface includes endpoints that should be private but are cached in a way that leaks data. middleBrick’s 12 security checks—including BOLA/IDOR, Property Authorization, and Data Exposure—run in parallel to detect these mismatches and map findings to frameworks such as the OWASP API Top 10.
In Rust, even with lifetimes and borrowing ensuring memory correctness, developers must ensure that cache keys incorporate all dimensions that affect response uniqueness. Axum extractors like Query, State, and typed headers must be considered when constructing cache keys. Without explicit isolation, an attacker may leverage predictable or shared keys to poison the cache, leading to data exposure or incorrect business logic being served to multiple users.
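A hedged sketch of the principle, assuming the handler varies its output by user and by a content-negotiation header (Accept-Language is an illustrative choice): every dimension that changes the response must appear in the key.

```rust
// Every dimension that changes the response must appear in the key:
// path, authenticated user, and any header the handler actually varies on.
fn cache_key(path: &str, user_id: &str, accept_language: Option<&str>) -> String {
    format!(
        "{}|user:{}|lang:{}",
        path,
        user_id,
        accept_language.unwrap_or("default")
    )
}

fn main() {
    // Same path, different users: distinct keys, no cross-user contamination.
    let alice = cache_key("/api/profile", "alice", Some("en"));
    let bob = cache_key("/api/profile", "bob", Some("en"));
    assert_ne!(alice, bob);
    println!("{}\n{}", alice, bob);
}
```

The inverse also matters: a dimension that does not affect the response should be left out of the key, or the cache fragments needlessly and hit rates drop.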
Rust-Specific Remediation in Axum — concrete code fixes
To remediate cache poisoning in Axum with Rust, explicitly include all request dimensions that affect the response in the cache key. This includes path parameters, selected query parameters, authenticated user identifiers, and any tenant or session context. Below are concrete, realistic code examples that demonstrate secure cache-key construction and response handling in Axum.
use axum::{
    extract::{Extension, Query, State},
    routing::get,
    Router,
};
use std::collections::HashMap;
use std::sync::Arc;

// Shared cache: key is a String, value is the cached response body.
// Cloning is cheap because the map lives behind an Arc.
#[derive(Clone)]
struct Cache(Arc<tokio::sync::Mutex<HashMap<String, String>>>);

// A request-specific context that must be part of the cache key.
#[derive(Clone)]
struct UserContext {
    user_id: String,
    tenant_id: String,
}

// Build a cache key that incorporates user and tenant context.
fn make_cache_key(path: &str, query: &HashMap<String, String>, ctx: &UserContext) -> String {
    let mut key = format!("{}|tenant:{}|user:{}", path, ctx.tenant_id, ctx.user_id);
    // Include query parameters in a deterministic (sorted) order so that
    // semantically identical requests map to the same key and distinct
    // requests never collide.
    let mut params: Vec<&str> = query.keys().map(|k| k.as_str()).collect();
    params.sort();
    for p in params {
        if let Some(val) = query.get(p) {
            key.push_str(&format!("|{}={}", p, val));
        }
    }
    key
}

async fn handler(
    Query(query): Query<HashMap<String, String>>,
    State(cache): State<Cache>,
    // In practice, authentication middleware inserts this extension per request.
    Extension(user_ctx): Extension<UserContext>,
) -> String {
    let key = make_cache_key("/api/data", &query, &user_ctx);
    {
        let guard = cache.0.lock().await;
        if let Some(cached) = guard.get(&key) {
            return cached.clone();
        }
    }
    // Simulate generating a response that is unique per user/tenant.
    let response = format!("data for {} in {}", user_ctx.user_id, user_ctx.tenant_id);
    // In a real app, insert into the cache with an appropriate TTL.
    let mut guard = cache.0.lock().await;
    guard.insert(key, response.clone());
    response
}

fn app(user_context: UserContext) -> Router {
    let cache = Cache(Arc::new(tokio::sync::Mutex::new(HashMap::new())));
    Router::new()
        .route("/api/data", get(handler))
        // Stand-in for auth middleware that attaches the caller's context
        // to each request as an extension.
        .layer(Extension(user_context))
        .with_state(cache)
}
This example ensures cache keys are unique per user and tenant, preventing one user from receiving another user’s cached response. middleBrick’s scans can validate that caching-related headers and route definitions in your OpenAPI spec align with these runtime safeguards.
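The isolation property can also be made testable. A standalone check (the key-construction logic is reproduced here so the snippet compiles on its own) confirms that different users, tenants, and query strings never share a key:

```rust
use std::collections::HashMap;

struct UserContext {
    user_id: String,
    tenant_id: String,
}

// Same key-construction logic as in the handler example above.
fn make_cache_key(path: &str, query: &HashMap<String, String>, ctx: &UserContext) -> String {
    let mut key = format!("{}|tenant:{}|user:{}", path, ctx.tenant_id, ctx.user_id);
    let mut params: Vec<&str> = query.keys().map(|k| k.as_str()).collect();
    params.sort();
    for p in params {
        if let Some(val) = query.get(p) {
            key.push_str(&format!("|{}={}", p, val));
        }
    }
    key
}

fn main() {
    let query = HashMap::from([("page".to_string(), "1".to_string())]);
    let alice = UserContext { user_id: "alice".into(), tenant_id: "t1".into() };
    let bob = UserContext { user_id: "bob".into(), tenant_id: "t1".into() };

    // Same path and query, different users: keys must differ.
    assert_ne!(
        make_cache_key("/api/data", &query, &alice),
        make_cache_key("/api/data", &query, &bob)
    );

    // Same user, different query: keys must also differ.
    let other = HashMap::from([("page".to_string(), "2".to_string())]);
    assert_ne!(
        make_cache_key("/api/data", &query, &alice),
        make_cache_key("/api/data", &other, &alice)
    );

    println!("cache keys are isolated per user and per query");
}
```

Checks like these belong in a unit-test suite so that a later refactor cannot silently drop a dimension from the key.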
Additionally, when using middleware or layers for caching, apply the same principle: include authenticated identity and tenant context in the cache key. Avoid caching responses that contain user-specific data without incorporating user identifiers into the key. middleBrick’s CLI (middlebrick scan <url>) and GitHub Action can help detect mismatches between declared caching behavior and actual endpoint implementations, integrating checks into CI/CD pipelines to prevent regressions.