Denial of Service in Actix with API Keys
How this specific combination creates or exposes the vulnerability
In Actix web applications, using API keys for authorization can introduce a Denial of Service (DoS) risk when key validation is performed synchronously or without rate limiting. Because middleBrick runs its Authentication and Rate Limiting checks in parallel, it can detect scenarios where an endpoint accepts API keys but does not enforce request caps or efficient key validation.
Consider an Actix handler that validates an API key on every request by performing a blocking database or cache lookup. An attacker can send many requests with invalid or valid keys, causing thread pool exhaustion or high latency. This is especially impactful when the key validation is done per request rather than via a fast in-memory filter. Even if the endpoint is otherwise lightweight, the overhead of key checks can become the bottleneck.
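One way to avoid the per-request lookup cost described above is a periodically refreshed in-memory allowlist, so the hot path does no I/O. The sketch below is illustrative (the function and key names are not an Actix API); when a database check is unavoidable, run it via `actix_web::web::block` or an async driver so it does not stall worker threads.

```rust
use std::collections::HashSet;

// Validate keys against an in-memory allowlist that a background task
// refreshes from the database, instead of querying per request.
fn is_valid_key(key: &str, allowlist: &HashSet<String>) -> bool {
    allowlist.contains(key) // O(1), no I/O on the request path
}

fn main() {
    // Hypothetical key value for illustration only.
    let allowlist: HashSet<String> = ["ak_live_example".to_string()].into_iter().collect();
    assert!(is_valid_key("ak_live_example", &allowlist));
    assert!(!is_valid_key("ak_forged", &allowlist));
    println!("ok");
}
```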
middleBrick’s Authentication check flags endpoints that accept API keys but lack evidence of rate controls. The Rate Limiting check verifies whether mechanisms such as token buckets or sliding windows limit request volume per key or client. Without these, an unauthenticated or low-cost attacker can saturate the service by flooding the endpoint with key-bearing requests, causing availability loss for legitimate users.
Moreover, if API keys are passed in headers and the server performs expensive parsing or transformation on each request, the added computation compounds the DoS risk. The combination of per-request key validation and missing rate limiting is a common pattern that middleBrick surfaces through its runtime probes, focusing on observable behavior rather than internal implementation.
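The sliding-window mechanism mentioned above can be sketched in plain Rust. This is a simplified illustration (not a production limiter), with the clock passed in explicitly so the behavior is deterministic and testable:

```rust
use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};

// Per-key sliding-window limiter: allow at most `limit` requests per key
// within the trailing `window`.
struct SlidingWindow {
    window: Duration,
    limit: usize,
    hits: HashMap<String, VecDeque<Instant>>,
}

impl SlidingWindow {
    fn new(window: Duration, limit: usize) -> Self {
        Self { window, limit, hits: HashMap::new() }
    }

    fn allow(&mut self, key: &str, now: Instant) -> bool {
        let q = self.hits.entry(key.to_string()).or_default();
        // Drop timestamps that have fallen out of the window.
        while q.front().map_or(false, |&t| now.duration_since(t) > self.window) {
            q.pop_front();
        }
        if q.len() < self.limit {
            q.push_back(now);
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut sw = SlidingWindow::new(Duration::from_secs(1), 2);
    let t0 = Instant::now();
    assert!(sw.allow("key-a", t0));
    assert!(sw.allow("key-a", t0));
    assert!(!sw.allow("key-a", t0)); // third request in the window is rejected
    assert!(sw.allow("key-b", t0)); // limits are tracked per key
    assert!(sw.allow("key-a", t0 + Duration::from_secs(2))); // window has passed
    println!("ok");
}
```

A real deployment would also evict idle keys so the map itself cannot be grown without bound by an attacker.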
API Key-Specific Remediation in Actix — concrete code fixes
To reduce DoS risk when using API keys in Actix, apply rate limiting close to the entrypoint and optimize key validation to be fast and non-blocking. Below are concrete, realistic examples you can adapt.
1. Lightweight key existence check with rate limiting
Use a fast in-memory store (e.g., a cache) to validate key presence and enforce per-key rate limits. Here is an example using actix-web and a simple token-bucket implementation backed by a std::collections::HashMap behind a Mutex. For production, prefer a crate such as actix-web-httpauth for extractor-based auth and a robust rate limiter like governor.
```rust
use actix_web::{web, HttpRequest, HttpResponse, Responder};
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::Instant;

struct TokenBucket {
    tokens: f64,
    last: Instant,
    rate: f64,     // tokens refilled per second
    capacity: f64, // maximum burst size
}

impl TokenBucket {
    fn new(rate: f64, capacity: f64) -> Self {
        Self { tokens: capacity, last: Instant::now(), rate, capacity }
    }

    fn allow(&mut self, tokens: f64) -> bool {
        let now = Instant::now();
        let delta = now.duration_since(self.last).as_secs_f64();
        self.tokens = self.capacity.min(self.tokens + delta * self.rate);
        self.last = now;
        if self.tokens >= tokens {
            self.tokens -= tokens;
            true
        } else {
            false
        }
    }
}

type SharedBuckets = Arc<Mutex<HashMap<String, TokenBucket>>>;

async fn handler(
    req: HttpRequest,
    buckets: web::Data<SharedBuckets>,
) -> impl Responder {
    let key = match req.headers().get("X-API-Key") {
        Some(v) => match v.to_str() {
            Ok(s) => s.to_string(),
            Err(_) => return HttpResponse::BadRequest().finish(),
        },
        None => return HttpResponse::Unauthorized().finish(),
    };
    let mut buckets = buckets.lock().unwrap();
    let bucket = buckets
        .entry(key)
        .or_insert_with(|| TokenBucket::new(10.0, 20.0));
    if bucket.allow(1.0) {
        HttpResponse::Ok().body("ok")
    } else {
        HttpResponse::TooManyRequests().finish()
    }
}
```

Register the shared state when building the app, e.g. `App::new().app_data(web::Data::new(buckets.clone()))`, and note that a real deployment should evict stale buckets so the map cannot grow without bound.
2. Using extractor-based auth with middleware rate limiting
A cleaner approach is to define an extractor for API keys and apply rate limiting in middleware. This keeps handlers focused and ensures limits are applied before business logic runs.
```rust
use actix_web::{dev::ServiceRequest, Error, HttpMessage};

// Replace with a fast lookup, e.g., an in-memory cache or allowlist.
fn validate_key(key: &str) -> bool {
    key.starts_with("ak_")
}

// Simplified manual extraction; use actix-web-httpauth extractors in practice.
async fn rate_limited_key_auth(
    req: ServiceRequest,
) -> Result<ServiceRequest, (Error, ServiceRequest)> {
    let auth_header = req.headers().get("Authorization");
    if let Some(header) = auth_header.and_then(|v| v.to_str().ok()) {
        if let Some(key) = header.strip_prefix("Bearer ") {
            if validate_key(key) {
                // Attach the parsed key to request extensions for downstream handlers.
                req.extensions_mut().insert(key.to_string());
                return Ok(req);
            }
        }
    }
    Err((actix_web::error::ErrorUnauthorized("invalid key"), req))
}
```

The `Result<ServiceRequest, (Error, ServiceRequest)>` shape mirrors the validator signature used by actix-web-httpauth's HttpAuthentication middleware, so the same logic can be plugged in there.
3. Middleware with governor for per-key rate limiting
Integrate the governor crate to enforce rate limits per API key with minimal overhead. This example shows how to wrap your Actix app with a rate-limiting middleware that checks a key before passing the request through.
```rust
use actix_web::{
    body::MessageBody,
    dev::{ServiceRequest, ServiceResponse},
    middleware::Next,
    Error,
};
use governor::{clock::DefaultClock, state::keyed::DefaultKeyedStateStore, Quota, RateLimiter};
use std::num::NonZeroU32;
use std::sync::Arc;

type KeyedLimiter = RateLimiter<String, DefaultKeyedStateStore<String>, DefaultClock>;

fn build_limiter() -> Arc<KeyedLimiter> {
    // 10 requests per second with a burst of 20, tracked per API key.
    let quota = Quota::per_second(NonZeroU32::new(10).unwrap())
        .allow_burst(NonZeroU32::new(20).unwrap());
    Arc::new(RateLimiter::keyed(quota))
}

async fn rate_middleware(
    req: ServiceRequest,
    next: Next<impl MessageBody>,
    limiter: Arc<KeyedLimiter>,
) -> Result<ServiceResponse<impl MessageBody>, Error> {
    if let Some(key) = req.headers().get("X-API-Key").and_then(|v| v.to_str().ok()) {
        if limiter.check_key(&key.to_string()).is_ok() {
            next.call(req).await
        } else {
            Err(actix_web::error::ErrorTooManyRequests("rate limit exceeded"))
        }
    } else {
        Err(actix_web::error::ErrorUnauthorized("missing key"))
    }
}

// Wiring sketch: clone the limiter into the closure passed to from_fn, e.g.
// let limiter = build_limiter();
// App::new().wrap(actix_web::middleware::from_fn(move |req, next| {
//     rate_middleware(req, next, limiter.clone())
// }))
```

This assumes actix-web 4.5+ (for `middleware::from_fn`) and governor's keyed rate limiter; adjust to your versions.
By combining fast key validation and per-key rate limiting, you reduce the DoS surface while preserving the security benefits of API keys.
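As a compact, self-contained illustration of that combination (the names here are illustrative, not an Actix API): authenticate first with an O(1) allowlist lookup, then enforce a per-key cap, so invalid keys never consume quota and valid keys cannot flood the service.

```rust
use std::collections::{HashMap, HashSet};

// Simplified stand-in for the full pipeline: the counter cap plays the role
// of a real token bucket or governor limiter.
#[derive(Debug, PartialEq)]
enum Decision {
    Unauthorized,
    TooManyRequests,
    Allowed,
}

fn check(
    key: &str,
    allowlist: &HashSet<String>,
    counts: &mut HashMap<String, u32>,
    cap: u32,
) -> Decision {
    if !allowlist.contains(key) {
        return Decision::Unauthorized; // reject before spending any quota
    }
    let used = counts.entry(key.to_string()).or_insert(0);
    if *used >= cap {
        Decision::TooManyRequests
    } else {
        *used += 1;
        Decision::Allowed
    }
}

fn main() {
    let allowlist: HashSet<String> = ["ak_demo".to_string()].into_iter().collect();
    let mut counts = HashMap::new();
    assert_eq!(check("ak_demo", &allowlist, &mut counts, 2), Decision::Allowed);
    assert_eq!(check("ak_demo", &allowlist, &mut counts, 2), Decision::Allowed);
    assert_eq!(check("ak_demo", &allowlist, &mut counts, 2), Decision::TooManyRequests);
    assert_eq!(check("ak_forged", &allowlist, &mut counts, 2), Decision::Unauthorized);
    println!("ok");
}
```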
Related CWEs (resource consumption):
| CWE ID | Name | Severity |
|---|---|---|
| CWE-400 | Uncontrolled Resource Consumption | HIGH |
| CWE-770 | Allocation of Resources Without Limits or Throttling | MEDIUM |
| CWE-799 | Improper Control of Interaction Frequency | MEDIUM |
| CWE-835 | Loop with Unreachable Exit Condition ('Infinite Loop') | HIGH |
| CWE-1050 | Excessive Platform Resource Consumption within a Loop | MEDIUM |