Severity: HIGH · Tags: denial of service, axum, api keys

Denial of Service in Axum with API Keys

Denial of Service in Axum with API Keys — how this specific combination creates or exposes the vulnerability

Axum applications that use API keys for access control can still be exposed to Denial of Service (DoS) when key validation logic introduces resource contention or blocking behavior. In a typical Axum handler, keys are validated synchronously per request, often involving a database or cache lookup. If the lookup is unoptimized, uses a shared connection pool with limited capacity, or lacks short timeouts, an attacker can send many requests with invalid or missing keys, causing thread pool exhaustion or connection pool saturation. This shifts the DoS vector to the authentication layer: rather than overwhelming compute with heavy computation, the service becomes unavailable because every request waits on a slow or blocked key validation path.
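One way to keep a saturated validation path from taking down the whole service is to bound how many key validations may be in flight at once and fail fast when the bound is hit. The sketch below uses only the standard library; `ValidationGate` and its limits are illustrative names, not an Axum or tower API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical sketch: cap concurrent key validations so a flood of requests
// is rejected quickly (e.g. with 503) instead of queueing on the shared pool.
pub struct ValidationGate {
    in_flight: AtomicUsize,
    max: usize,
}

pub struct GateGuard<'a> {
    gate: &'a ValidationGate,
}

impl ValidationGate {
    pub fn new(max: usize) -> Self {
        Self { in_flight: AtomicUsize::new(0), max }
    }

    // Returns a guard while capacity remains; `None` means the caller should
    // reject immediately rather than wait on the validation path.
    pub fn try_enter(&self) -> Option<GateGuard<'_>> {
        let prev = self.in_flight.fetch_add(1, Ordering::SeqCst);
        if prev >= self.max {
            self.in_flight.fetch_sub(1, Ordering::SeqCst);
            None
        } else {
            Some(GateGuard { gate: self })
        }
    }
}

impl Drop for GateGuard<'_> {
    fn drop(&mut self) {
        // Dropping the guard frees a slot for the next validation.
        self.gate.in_flight.fetch_sub(1, Ordering::SeqCst);
    }
}
```

In an Axum service the same idea is usually expressed with `tokio::sync::Semaphore` or tower's concurrency-limit middleware; the point is that rejected requests cost almost nothing, while queued ones hold resources.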

Consider an Axum handler that performs a database query for each request to verify an API key. The handler holds a pooled database connection while it awaits the query, so if the database becomes slow or returns errors, connections are tied up for the full duration of every lookup. Under high concurrency the connection pool saturates and new requests queue or time out; and if the validation path performs blocking I/O directly on the async runtime, the runtime’s worker threads themselves can be exhausted. This is especially impactful when key validation lacks rate limiting or concurrency controls, effectively turning authentication into a bottleneck. Even with correct authentication, an attacker can probe many endpoints with valid keys but heavy downstream dependencies (e.g., external services called after key validation), exacerbating resource contention.

Another DoS scenario specific to the combination of Axum and API keys arises from per-request allocations and repeated expensive operations, such as cryptographic key verification or JWT parsing, without caching. If each request performs these operations independently, CPU usage can spike, increasing latency and reducing throughput. When such handlers are deployed behind a load balancer, uneven load across instances can trigger connection queueing or timeouts, manifesting as a service-wide DoS. The scanner’s Authentication and Rate Limiting checks can surface these risks by identifying missing rate controls and expensive auth-time operations, while the BFLA/Privilege Escalation checks ensure that key validation does not inadvertently grant unintended access paths that could be abused to sustain attacks.
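The caching the paragraph above calls for can be sketched with a short-TTL memo of verification verdicts, so repeated requests with the same key do not redo the CPU-heavy check. This is a pure-std illustration; `VerificationCache` and `verify_cached` are hypothetical names, and a production service would use a shared concurrent cache:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical sketch: memoize the outcome of an expensive verification
// (signature check, JWT parse) for a short TTL.
pub struct VerificationCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, bool)>,
}

impl VerificationCache {
    pub fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    // Returns the cached verdict only if it is still fresh.
    pub fn get(&self, key: &str) -> Option<bool> {
        self.entries
            .get(key)
            .filter(|(stored, _)| stored.elapsed() < self.ttl)
            .map(|(_, valid)| *valid)
    }

    pub fn put(&mut self, key: &str, valid: bool) {
        self.entries.insert(key.to_string(), (Instant::now(), valid));
    }
}

// Run the expensive check only on a cache miss.
pub fn verify_cached<F>(cache: &mut VerificationCache, key: &str, expensive: F) -> bool
where
    F: FnOnce(&str) -> bool,
{
    if let Some(valid) = cache.get(key) {
        return valid;
    }
    let valid = expensive(key);
    cache.put(key, valid);
    valid
}
```

A short TTL (seconds, not hours) bounds the window in which a revoked key is still accepted while still absorbing the repeated-work cost of a request flood.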

Middleware choices in Axum also matter. For example, attaching key validation as a tower layer that is not configured with timeouts or circuit-breaker patterns can amplify DoS impact. If the validation layer blocks indefinitely on an external store, requests back up quickly. The scanner’s Input Validation and Data Exposure checks help identify whether key validation responses leak sensitive information in error messages, which can aid attackers in crafting targeted load patterns. Overall, DoS in this context is not just about traffic volume; it is about how API key validation interacts with concurrency, resource limits, and downstream dependencies within an Axum service.
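The circuit-breaker pattern mentioned above can be reduced to a small state machine: after a run of consecutive failures talking to the key store, reject immediately for a cooldown period instead of letting requests pile up behind a hung backend. This is a pure-std sketch with illustrative names and thresholds, not a production design (crates such as tower's middleware ecosystem offer hardened versions):

```rust
use std::time::{Duration, Instant};

// Hypothetical minimal circuit breaker for the validation layer.
pub struct CircuitBreaker {
    consecutive_failures: u32,
    max_failures: u32,
    opened_at: Option<Instant>,
    cooldown: Duration,
}

impl CircuitBreaker {
    pub fn new(max_failures: u32, cooldown: Duration) -> Self {
        Self { consecutive_failures: 0, max_failures, opened_at: None, cooldown }
    }

    // Should we attempt the backend call at all?
    pub fn allow(&mut self) -> bool {
        match self.opened_at {
            // Open: fail fast until the cooldown elapses.
            Some(opened) if opened.elapsed() < self.cooldown => false,
            // Cooldown over: half-open, let one attempt probe the backend.
            Some(_) => {
                self.opened_at = None;
                self.consecutive_failures = 0;
                true
            }
            None => true,
        }
    }

    // Report the outcome of a backend call.
    pub fn record(&mut self, ok: bool) {
        if ok {
            self.consecutive_failures = 0;
        } else {
            self.consecutive_failures += 1;
            if self.consecutive_failures >= self.max_failures {
                self.opened_at = Some(Instant::now());
            }
        }
    }
}
```

Failing fast here trades availability of authenticated requests for survival of the service as a whole: requests get a quick 503 instead of holding connections against a dead store.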

API Key-Specific Remediation in Axum — concrete code fixes

To mitigate DoS risks when using API keys in Axum, reduce synchronous blocking and limit resource contention by moving expensive validation off the request path and adding concurrency controls. Use asynchronous, cached validation with short TTLs, and enforce rate limits to bound load. Below is a concrete Axum example (written against axum 0.7; adjust imports for other versions) showing a robust pattern: validate API keys in an extractor backed by an async, cached lookup shared via router state, and bound throughput with tower middleware.

use axum::{
    error_handling::HandleErrorLayer,
    extract::FromRequestParts,
    http::{request::Parts, StatusCode},
    routing::get,
    Router,
};
use async_trait::async_trait;
use std::collections::HashMap;
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::RwLock;
use tower::{buffer::BufferLayer, limit::RateLimitLayer, BoxError, ServiceBuilder};

// In-memory cache for key -> permissions (in practice use Redis or similar,
// with a TTL aligned to your key rotation policy)
struct KeyCache {
    store: RwLock<HashMap<String, Vec<String>>>,
}

impl KeyCache {
    fn new() -> Self {
        Self { store: RwLock::new(HashMap::new()) }
    }
    async fn get_permissions(&self, key: &str) -> Option<Vec<String>> {
        let store = self.store.read().await;
        store.get(key).cloned()
    }
    async fn insert(&self, key: &str, perms: Vec<String>) {
        let mut store = self.store.write().await;
        store.insert(key.to_string(), perms);
    }
}

// Extractor that validates the API key against the shared cache. It only
// needs headers, so it implements FromRequestParts and never buffers bodies.
struct ApiKey {
    permissions: Vec<String>,
}

#[async_trait]
impl FromRequestParts<Arc<KeyCache>> for ApiKey {
    type Rejection = (StatusCode, &'static str);

    async fn from_request_parts(
        parts: &mut Parts,
        cache: &Arc<KeyCache>,
    ) -> Result<Self, Self::Rejection> {
        let key = parts
            .headers
            .get("x-api-key")
            .and_then(|v| v.to_str().ok())
            .ok_or((StatusCode::UNAUTHORIZED, "missing api key"))?;

        // Async cached lookup; in production this hits Redis or a DB with a
        // short client timeout so a slow store cannot stall the request path.
        match cache.get_permissions(key).await {
            Some(permissions) => Ok(ApiKey { permissions }),
            None => Err((StatusCode::UNAUTHORIZED, "invalid api key")),
        }
    }
}

async fn handler(key: ApiKey) -> String {
    format!("ok, permissions: {:?}", key.permissions)
}

#[tokio::main]
async fn main() {
    let cache = Arc::new(KeyCache::new());
    // Preload a key for demo
    cache
        .insert("valid_key_123", vec!["read".to_string(), "write".to_string()])
        .await;

    let app = Router::new()
        .route("/secure", get(handler))
        // tower's RateLimitLayer is not Clone, so it is wrapped in a Buffer
        // and its errors are converted into a response. Note this limit is
        // global; per-client limits need a keyed limiter (e.g. tower_governor).
        .layer(
            ServiceBuilder::new()
                .layer(HandleErrorLayer::new(|_: BoxError| async {
                    StatusCode::TOO_MANY_REQUESTS
                }))
                .layer(BufferLayer::new(1024))
                .layer(RateLimitLayer::new(100, Duration::from_secs(1))), // 100 req/s
        )
        .with_state(cache);

    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    println!("Listening on {}", listener.local_addr().unwrap());
    axum::serve(listener, app).await.unwrap();
}

This pattern keeps key validation non-blocking and bounds overall throughput, reducing the chance that slow or excessive key checks lead to task or connection exhaustion. Note that a single RateLimitLayer enforces a global limit; per-client limits require a limiter keyed by API key or client address. For production, replace the in-memory cache with a distributed cache like Redis and set TTLs aligned with your key rotation policy.

Additionally, configure timeouts on external calls made during validation (e.g., Redis client timeouts) and use tower middleware for circuit-breaking and request buffering. These steps reduce the likelihood that a spike in authentication traffic translates into a service-wide Denial of Service. The scanner’s BFLA/Privilege Escalation and Rate Limiting checks can verify that per-client limits are present and that key validation does not introduce privilege confusion.

Related CWEs: resource consumption

CWE-400: Uncontrolled Resource Consumption (HIGH)
CWE-770: Allocation of Resources Without Limits or Throttling (MEDIUM)
CWE-799: Improper Control of Interaction Frequency (MEDIUM)
CWE-835: Loop with Unreachable Exit Condition ('Infinite Loop') (HIGH)
CWE-1050: Excessive Platform Resource Consumption within a Loop (MEDIUM)

Frequently Asked Questions

Can DoS attacks bypass authentication entirely in Axum APIs that use API keys?
Yes, in effect. If key validation is performed synchronously with expensive operations or shared limited resources (e.g., database connections), an attacker can saturate those resources without ever presenting a valid key; the attack does not defeat authentication so much as make it irrelevant by rendering the service unavailable. Mitigations include async non-blocking validation, caching, and per-client rate limits.
Do API keys alone prevent DoS in Axum services?
No. API keys control access but do not inherently protect against resource exhaustion. DoS can still occur if key validation introduces contention or lacks rate limiting. Combine API keys with concurrency controls, timeouts, caching, and rate limiting to reduce DoS risk.
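The per-client rate limiting recommended above is commonly implemented as a token bucket keyed by API key: each client's bucket refills at a steady rate, bounding how fast any one caller can hit the validation path. The following is a pure-std sketch with illustrative names and parameters; in a real Axum service each API key would map to its own bucket behind a shared map:

```rust
use std::time::{Duration, Instant};

// Hypothetical per-client token bucket; capacity and refill rate are
// illustrative values, tune them to your traffic profile.
pub struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    pub fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last_refill: Instant::now() }
    }

    // Take one token if available; `false` means the request should get 429.
    pub fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        // Refill proportionally to elapsed time, capped at the bucket capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

Because the check is a few arithmetic operations on in-memory state, it costs far less than the validation work it protects, which is exactly the property a DoS mitigation needs.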