Race Condition in Actix with Firestore
Race Condition in Actix with Firestore — how this specific combination creates or exposes the vulnerability
A race condition in an Actix web service using Firestore occurs when multiple concurrent requests read and write overlapping document fields without coordination, resulting in lost updates or inconsistent state. Firestore offers optimistic concurrency via document versioning (read time / update time / transaction checks), but Actix handlers are asynchronous and may process requests in parallel. If handlers implement read–modify–write logic without leveraging Firestore transactions or precondition checks, two requests can read the same value, compute different updates, and write back, so that the later write silently discards the earlier update.
Consider an Actix handler that increments a numeric field (e.g., view_count) by reading the current value, adding one, and writing back. If two requests read the same initial value concurrently, both increment to the same new value, and both write back, the counter effectively increases by one instead of two. This is a classic lost update race condition. Firestore’s document-level concurrency control helps detect this when using update_time or a version number as a precondition, but only if the client explicitly includes it. Without using a transaction or a preconditioned update, the race is not mitigated.
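The lost-update interleaving can be replayed deterministically without any Firestore involvement. This pure-Rust sketch stands in for two concurrent handlers sharing one document field; `counter` is the document field and `read_a`/`read_b` are the stale snapshots each handler works from:

```rust
// Deterministic replay of the lost-update interleaving: both "requests"
// read the field before either writes, so one increment is lost.
fn main() {
    let mut counter: i64 = 0;

    // Both requests read the same snapshot before either writes.
    let read_a = counter;
    let read_b = counter;

    // Each computes its update from its own stale snapshot.
    counter = read_a + 1; // request A commits: counter == 1
    counter = read_b + 1; // request B commits: still 1, A's update is lost

    assert_eq!(counter, 1); // two increments, net effect of one
    println!("counter = {counter} (expected 2)");
}
```

In a live service the interleaving is nondeterministic, which is why the bug surfaces only under concurrent load.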
The combination of Actix’s parallel async handlers and non-transactional Firestore reads amplifies the risk. Firestore reads are strongly consistent snapshots, but a snapshot becomes stale the instant another request commits a write. An Actix handler may perform an initial read, compute a new value, and write back based on that stale read; if another request mutates the same document between the read and the write, the write still succeeds unless it carries a versioning check, leaving the application state inconsistent. Common triggers include high-traffic endpoints (e.g., voting, counters, inventory deduction) where requests overlap within the 5–15 second scan window used by middleBrick to test authentication and input validation checks.
For example, an endpoint that applies a discount code to a user’s cart may first read the cart, apply the code, and then write back the updated cart. Without a transaction, two simultaneous discount applications can conflict, potentially applying the discount twice or corrupting the cart state. middleBrick’s checks for BOLA/IDOR and Input Validation help highlight endpoints that perform sensitive read–modify–write patterns without adequate checks, flagging the insecure design.
To detect such issues, middleBrick’s OpenAPI/Swagger analysis resolves $ref definitions and cross-references spec definitions with runtime findings, identifying endpoints that accept mutable state without guidance on concurrency control. The scanner does not fix the logic, but its findings include remediation guidance—such as using Firestore transactions or preconditioned updates—to help developers address the race condition.
Firestore-Specific Remediation in Actix — concrete code fixes
Remediate race conditions in Actix with Firestore by using transactions and preconditioned updates to ensure atomic read–modify–write operations. Firestore provides run_transaction and precondition parameters (e.g., current_document via update_time or a version field) to make updates conditional on the document’s state at commit time.
Use a transaction when you must read and then write based on the read value. In an Actix handler, this keeps the operation atomic on Firestore’s backend. Below is a concrete example for incrementing a numeric field safely:
use actix_web::{web, HttpResponse, Result};
// NOTE: the `google_cloud_firestore` API below is illustrative; adapt the
// transaction calls to whichever Firestore client crate you use.
use google_cloud_firestore::client::Client;
use google_cloud_firestore::transaction::Transaction;
use google_cloud_firestore::DocumentReference;

// In a real service, doc_ref would typically be built from path
// parameters rather than passed in directly.
async fn increment_view_count(
    client: web::Data<Client>,
    doc_ref: DocumentReference,
) -> Result<HttpResponse> {
    // All reads and writes below are scoped to one transaction, so
    // Firestore verifies the document is unchanged at commit time.
    let mut transaction = Transaction::new(client.get_ref().clone());
    let snapshot = transaction.get(&doc_ref).await.map_err(|e| {
        actix_web::error::ErrorInternalServerError(e.to_string())
    })?;
    if let Some(fields) = snapshot.fields() {
        let current = fields
            .get("view_count")
            .and_then(|v| v.as_integer())
            .unwrap_or(0);
        transaction.update(&doc_ref, &[("view_count", (current + 1).into())]);
    } else {
        // Document does not exist yet: create it with an initial count.
        transaction.set(&doc_ref, &[("view_count", 1.into())]);
    }
    transaction.commit().await.map_err(|e| {
        actix_web::error::ErrorConflict(format!("Commit failed, possible conflict: {e}"))
    })?;
    Ok(HttpResponse::Ok().finish())
}
This pattern ensures that the increment is applied atomically: Firestore verifies that the document has not changed since the read inside the transaction before committing the write. If a conflict occurs, the transaction is aborted and can be retried, which the handler can implement with a retry loop.
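A bounded retry loop keeps the handler resilient when transactions abort under contention. The helper below is a pure-Rust sketch, independent of any Firestore crate (a production version would take an async closure and add backoff, e.g. via tokio::time::sleep; note that many Firestore client libraries also retry transactions internally):

```rust
/// Retry a fallible operation up to `max_attempts` times, returning the
/// first success or the last error if every attempt fails.
fn retry_on_conflict<T, E>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    for _ in 0..max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("max_attempts must be > 0"))
}

fn main() {
    // Simulated transaction that conflicts twice before succeeding.
    let mut attempts = 0;
    let result = retry_on_conflict(5, || {
        attempts += 1;
        if attempts < 3 { Err("aborted: conflict") } else { Ok(attempts) }
    });
    assert_eq!(result, Ok(3)); // succeeded on the third attempt
}
```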
For simple counter increments, you can also use a server-side atomic increment without reading first, which avoids the transaction overhead:
use actix_web::{web, HttpResponse, Result};
use google_cloud_firestore::client::Client;
use google_cloud_firestore::DocumentReference;

async fn increment_view_count_atomic(
    // Kept in scope for connection reuse; the update goes through
    // doc_ref directly in this illustrative API.
    _client: web::Data<Client>,
    doc_ref: DocumentReference,
) -> Result<HttpResponse> {
    // `Increment` maps to Firestore's server-side field transform: no read
    // is needed, so concurrent increments cannot be lost.
    doc_ref
        .update(&[("view_count", google_cloud_firestore::Increment::new(1))])
        .await
        .map_err(|e| actix_web::error::ErrorInternalServerError(e.to_string()))?;
    Ok(HttpResponse::Ok().finish())
}
When using preconditions, include a version field or update_time to ensure the document has not been modified since the client last read it:
use actix_web::{web, HttpResponse, Result};
use google_cloud_firestore::client::Client;
use google_cloud_firestore::DocumentReference;
use std::time::SystemTime;

async fn update_with_precondition(
    _client: web::Data<Client>,
    doc_ref: DocumentReference,
    known_update_time: SystemTime,
) -> Result<HttpResponse> {
    // The precondition pins the write to the update_time the client last
    // observed; convert to a Firestore timestamp as needed.
    let precondition = known_update_time;
    doc_ref
        .update_with_options(
            &[("discount_applied", true.into())],
            &google_cloud_firestore::UpdateOptions {
                current_document: Some(precondition.into()),
            },
        )
        .await
        .map_err(|e| {
            // is_precondition_failure is an application-defined helper that
            // inspects the Firestore error code (FAILED_PRECONDITION).
            if is_precondition_failure(&e) {
                actix_web::error::ErrorConflict("Document was modified concurrently")
            } else {
                actix_web::error::ErrorInternalServerError(e.to_string())
            }
        })?;
    Ok(HttpResponse::Ok().finish())
}
In all cases, ensure that the Actix runtime does not introduce additional parallelism that bypasses these safeguards—for example, avoid spawning independent tasks that each perform read–modify–write without coordination. middleBrick’s scans can identify endpoints that lack these patterns by checking for unprotected state changes and missing idempotency guidance in the API spec.
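Where purely in-process state is involved, serializing the read–modify–write through a single synchronization point removes the local race. This pure-Rust sketch (std::sync only, no Firestore) shows the effect of that coordination; note it protects only one process, so with multiple Actix workers or replicas the coordination must live in Firestore itself via transactions or preconditions:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Ten "handlers" each increment a shared counter. The Mutex forces each
// read-modify-write to complete before the next begins, so no update is
// lost -- the in-process analogue of a Firestore transaction.
fn main() {
    let counter = Arc::new(Mutex::new(0_i64));
    let handles: Vec<_> = (0..10)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                let mut guard = counter.lock().unwrap();
                let current = *guard; // read
                *guard = current + 1; // modify + write, under the lock
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 10); // no lost updates
}
```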