Race Condition in Actix with Bearer Tokens
Race Condition in Actix with Bearer Tokens — how this specific combination creates or exposes the vulnerability
A race condition in Actix when Bearer Tokens are involved typically arises when token validity checks and state-changing operations are not performed atomically under concurrent requests. Consider an API that first validates a Bearer Token (e.g., checks a revocation list or a rotating signing key) and then performs an action such as updating a user’s email or transferring funds. If the token’s state can change between validation and action—such as a token being revoked or rotated by another request or process—and the handler does not re-validate under a consistent synchronization point, an attacker can exploit the timing window.
For example, an authentication service might issue a short-lived Bearer Token and maintain a denylist in a shared cache. A handler in Actix might look up the token in the cache to verify it is not revoked, then proceed to modify account state. If a concurrent request revokes the token after the lookup but before the state change, the handler may still proceed because its local check was already completed. This TOCTOU (time-of-check to time-of-use) pattern becomes a race condition under Actix's asynchronous, multi-threaded runtime, where many requests interleave.
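The window is easiest to see in miniature. In the hedged sketch below, is_revoked and apply_transfer are hypothetical stand-ins for the denylist lookup and the state change; the await points between check and use are where a concurrent revocation can land:

// Hypothetical stand-ins for a denylist lookup and a state-changing action.
async fn is_revoked(_token: &str) -> bool {
    false // e.g., look up the token in the shared revocation cache
}

async fn apply_transfer(_token: &str) {
    // e.g., debit one account and credit another
}

// Vulnerable check-then-act: a classic TOCTOU window under concurrency.
async fn transfer(token: &str) -> Result<(), &'static str> {
    // 1. Check: the token is not revoked *at this instant*.
    if is_revoked(token).await {
        return Err("revoked");
    }
    // While this task is suspended at an await point, a concurrent request
    // can revoke the token; this handler never re-checks.
    // 2. Use: the state change proceeds on the now-stale check.
    apply_transfer(token).await;
    Ok(())
}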
In more complex flows, the race can involve token metadata stored in a database or session store. If Actix handlers read token scopes or roles to authorize a specific endpoint, and those permissions are updated concurrently (for instance, an admin downgrades a token’s scope), a handler that has already read the old permissions can continue with elevated privileges. This is especially risky when handlers perform multiple steps without holding a lock or a transactional boundary across validation and execution, or when they rely on in-memory caches that are not strongly consistent across Actix worker threads.
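The per-worker cache pitfall is Actix-specific and easy to hit: state constructed inside the HttpServer factory closure is built once per worker thread, so each worker gets a private copy. A minimal sketch of the safe pattern (the denylist name and port are illustrative):

use std::collections::HashSet;
use std::sync::Mutex;

use actix_web::{web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Built ONCE, outside the factory closure: every worker clones a handle
    // to the same set, so a revocation recorded by one worker is seen by all.
    let denylist = web::Data::new(Mutex::new(HashSet::<String>::new()));

    HttpServer::new(move || {
        App::new()
            // Anti-pattern to avoid: constructing web::Data::new(...) inside
            // this closure would run once per worker thread, giving each
            // worker a private, silently diverging denylist.
            .app_data(denylist.clone())
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}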
Real-world analogs include scenarios where token binding or per-request nonces are not enforced. Without a nonce or a request-id tied to the token validation, two parallel requests with the same Bearer Token can lead to duplicated actions or inconsistent state. For instance, one request might refresh a token while another uses the old token to perform a sensitive operation, and Actix’s async runtime may schedule these in an order that violates intended sequencing.
Because middleBrick tests unauthenticated attack surfaces and checks for authorization and concurrency-related issues across its 12 security checks, such race conditions can be surfaced as BOLA/IDOR or Privilege Escalation findings when token-state inconsistencies are detectable. The scanner does not fix these issues but provides findings with remediation guidance to help developers design atomic validation and state checks within their Actix services.
Bearer Token-Specific Remediation in Actix — concrete code fixes
Remediation focuses on making token validation and state changes atomic and consistent. In Actix, prefer server-side sessions or tightly scoped tokens with short lifetimes, and re-validate critical permissions immediately before performing state-changing operations. Avoid relying on cached validation results across await points unless the cache is synchronized and invalidated correctly.
Example 1: Handler that re-validates the Bearer Token inside a database transaction, taking a row-level lock (FOR UPDATE) so the token cannot be revoked or downgraded between the check and the action.
use actix_web::{web, HttpRequest, HttpResponse};
use serde::Deserialize;
use sqlx::PgPool;

// JSON request body, reconstructed from its usage below.
#[derive(Deserialize)]
struct ChangeEmail {
    email: String,
}

async fn change_email(
    req: HttpRequest,
    pool: web::Data<PgPool>,
    body: web::Json<ChangeEmail>,
) -> HttpResponse {
    let token = match req
        .headers()
        .get("authorization")
        .and_then(|v| v.to_str().ok())
        .and_then(|s| s.strip_prefix("Bearer "))
    {
        Some(t) => t,
        None => return HttpResponse::Unauthorized().finish(),
    };
    let user_id = match get_user_id_from_token(token, &pool).await {
        Some(id) => id,
        None => return HttpResponse::Unauthorized().finish(),
    };
    // Re-validate within the transaction so revocation or role changes are
    // visible; FOR UPDATE locks the scope rows until commit, so a concurrent
    // revocation cannot slip in between the check and the update below.
    let mut tx = match pool.begin().await {
        Ok(tx) => tx,
        Err(_) => return HttpResponse::InternalServerError().finish(),
    };
    let current_scopes: Vec<String> = sqlx::query_scalar(
        "SELECT scope FROM token_scopes WHERE token = $1 AND revoked = false FOR UPDATE",
    )
    .bind(token)
    .fetch_all(&mut *tx)
    .await
    .unwrap_or_default();
    if !current_scopes.contains(&"email:write".to_string()) {
        // Dropping `tx` here rolls the transaction back automatically.
        return HttpResponse::Forbidden().finish();
    }
    let updated = sqlx::query("UPDATE users SET email = $1 WHERE id = $2")
        .bind(&body.email)
        .bind(user_id)
        .execute(&mut *tx)
        .await;
    if updated.is_err() || tx.commit().await.is_err() {
        return HttpResponse::InternalServerError().finish();
    }
    HttpResponse::Ok().finish()
}

async fn get_user_id_from_token(token: &str, pool: &PgPool) -> Option<i32> {
    // None means the token is unknown; never fall back to a default user id.
    sqlx::query_scalar("SELECT user_id FROM tokens WHERE token = $1")
        .bind(token)
        .fetch_optional(pool)
        .await
        .ok()
        .flatten()
}
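The FOR UPDATE clause is the key design choice in this sketch: it holds a row-level lock on the matching scope rows until the transaction commits or rolls back, so a concurrent revocation must wait and cannot land inside the check-to-use window. The trade-off is brief row contention on hot tokens.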
Example 2: Using an Actix middleware function (suitable for middleware::from_fn) to bind token usage to a request context, ensuring that per-request validation is fresh and that a nonce or request-id is attached to prevent replay across concurrent operations.
use actix_web::body::MessageBody;
use actix_web::dev::{ServiceRequest, ServiceResponse};
use actix_web::middleware::Next;
use actix_web::{Error, HttpMessage};

pub async fn validate_bearer_token(
    req: ServiceRequest,
    next: Next<impl MessageBody>,
) -> Result<ServiceResponse<impl MessageBody>, Error> {
    let token = match req
        .headers()
        .get("authorization")
        .and_then(|v| v.to_str().ok())
        .and_then(|s| s.strip_prefix("Bearer "))
    {
        Some(t) => t.to_owned(),
        None => return Err(actix_web::error::ErrorUnauthorized("missing bearer")),
    };
    // Include a per-request nonce to bind this validation instance.
    let nonce = uuid::Uuid::new_v4().to_string();
    req.extensions_mut().insert(nonce.clone());
    // Perform fresh validation that includes revocation and scope checks.
    if !is_token_valid_for_request(&token, &nonce).await {
        return Err(actix_web::error::ErrorForbidden("invalid token"));
    }
    next.call(req).await
}

async fn is_token_valid_for_request(_token: &str, _nonce: &str) -> bool {
    // Stub: call the auth service or cache with the nonce so that this
    // validation instance is unique and cannot be replayed concurrently.
    true
}
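To register this function as middleware, wrap the app with actix_web::middleware::from_fn(validate_bearer_token) on recent actix-web 4.x releases; on older 4.x versions, an equivalent from_fn helper is available in the actix-web-lab crate.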
Example 3: Avoid long-held in-memory caches for token state. Instead, use short TTL caches or database-backed checks with row-level locking where necessary, and ensure that authorization decisions are made immediately before the action.
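As one concrete option for the short-TTL pattern, the sketch below assumes the moka crate for the cache and a hypothetical tokens(token, revoked) table; entries expire after two seconds, so a revocation becomes visible within that bound at worst, and unknown tokens fail closed:

use std::time::Duration;

use moka::future::Cache; // ASSUMPTION: moka with the "future" feature enabled
use sqlx::PgPool;

#[derive(Clone)]
pub struct RevocationCache {
    inner: Cache<String, bool>,
}

impl RevocationCache {
    pub fn new() -> Self {
        Self {
            // Entries live at most 2 seconds, bounding the staleness window.
            inner: Cache::builder()
                .time_to_live(Duration::from_secs(2))
                .max_capacity(10_000)
                .build(),
        }
    }

    // Returns true if the token is revoked; falls back to the database on a
    // cache miss. Hypothetical schema: tokens(token TEXT, revoked BOOL).
    pub async fn is_revoked(&self, pool: &PgPool, token: &str) -> bool {
        if let Some(revoked) = self.inner.get(token).await {
            return revoked;
        }
        let revoked = sqlx::query_scalar::<_, bool>("SELECT revoked FROM tokens WHERE token = $1")
            .bind(token)
            .fetch_optional(pool)
            .await
            .ok()
            .flatten()
            .unwrap_or(true); // fail closed: unknown tokens count as revoked
        self.inner.insert(token.to_owned(), revoked).await;
        revoked
    }
}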
When integrating with middleBrick, teams can use the CLI to scan endpoints for such issues: run middlebrick scan <url> from the terminal, or add the GitHub Action to fail builds if risk scores degrade. The dashboard can track these findings over time, and the Pro plan supports continuous monitoring to detect regressions that could reintroduce race conditions.