Out Of Bounds Write in Axum with Firestore
Out Of Bounds Write in Axum with Firestore — how this specific combination creates or exposes the vulnerability
An Out Of Bounds Write occurs when an application writes data beyond intended memory boundaries or, in a memory-safe language like Rust, beyond the intended data structure or API constraints. In an Axum application integrating with Google Cloud Firestore, this typically surfaces through unchecked user input used as document IDs, map keys, or array indices that Firestore operations then propagate to backend services. Although Firestore enforces its own schema and index constraints, Axum's routing and request parsing are the choke point where oversized or malformed input can trigger unexpected behavior before it reaches Firestore.
Consider an endpoint that accepts a user-supplied identifier to read or write a Firestore document. If the Axum handler does not validate the identifier's length or allowed characters, an attacker can craft a value that corrupts document paths (for example, by embedding path separators) or inflates batch writes beyond intended bounds. Firestore may reject the request, but the handler might then propagate incomplete errors or expose internal structures, creating a side channel. More critically, unbounded JSON or form fields mapped into structs can lead to oversized allocations that degrade performance, or to partial writes where some fields succeed while others violate Firestore limits, leaving data in an inconsistent state.
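As a minimal sketch of the risky pattern, the helper below (hypothetical names throughout, not from any real codebase) builds a Firestore document path straight from untrusted input. A crafted ID containing `/` silently addresses a different subcollection:

```rust
// Hypothetical helper illustrating the vulnerable pattern: untrusted input
// flows directly into a Firestore document path with no validation.
fn document_path(project: &str, collection: &str, raw_id: &str) -> String {
    // No length or character checks: raw_id is interpolated as-is, so a
    // value containing '/' changes which document the path refers to.
    format!("projects/{project}/databases/(default)/documents/{collection}/{raw_id}")
}
```

Calling `document_path("my-project", "profiles", "alice/private/secrets")` yields a path ending in `profiles/alice/private/secrets`, addressing a nested subcollection the caller never intended to expose.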
The risk is compounded when Firestore listeners or streaming reads are used in Axum handlers. An attacker can send a crafted payload that triggers excessive document creation or updates across collections, exploiting the lack of strict size or boundary checks. Since Firestore indexes are built from document data, malicious input can generate anomalous index entries or strain composite index configurations, producing write amplification. Firestore enforces its own limits, including a maximum document size of 1 MiB, but Axum must enforce these boundaries upstream to prevent unnecessary errors and potential data corruption.
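A handler can pre-flight the document-size boundary before issuing any write. The sketch below estimates the stored size of a flat string-field document following Firestore's published storage-size rules (a string value costs its UTF-8 byte length plus 1, a field name its byte length plus 1, plus a small fixed per-document overhead); treat the constants as an approximation for rejecting obviously oversized payloads, not as the authoritative calculation:

```rust
// Approximate Firestore storage-size pre-check for a flat document of
// string fields. Constants follow Firestore's documented size rules but
// are an estimate, not the server's exact accounting.
const MAX_DOC_BYTES: usize = 1_048_576; // Firestore's ~1 MiB document limit
const DOC_OVERHEAD: usize = 32;         // fixed per-document overhead

fn estimated_doc_size(fields: &[(&str, &str)]) -> usize {
    DOC_OVERHEAD
        + fields
            .iter()
            .map(|(name, value)| (name.len() + 1) + (value.len() + 1))
            .sum::<usize>()
}

fn fits_in_firestore(fields: &[(&str, &str)]) -> bool {
    estimated_doc_size(fields) <= MAX_DOC_BYTES
}
```

Running this check in the Axum handler turns a would-be Firestore `INVALID_ARGUMENT` error into a clean client-side rejection before any bytes leave the service.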
Real-world patterns include using path parameters directly as Firestore document IDs without normalization, or binding request bodies to Firestore document fields without validating array lengths or map key counts. For example, an Axum extractor that deserializes JSON into a struct mapped to a Firestore document may allow nested objects with unbounded arrays, leading to write attempts that exceed Firestore’s property limits. This combination of Axum’s flexible deserialization and Firestore’s strict operational boundaries creates a surface where out-of-bounds writes can manifest as failed operations, inconsistent writes, or information leakage through error messages.
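A recursive bounds check over the deserialized value can reject unbounded arrays and map key counts before any Firestore write is attempted. This sketch uses a hand-rolled `Value` enum standing in for whatever representation the handler deserializes into (an assumption), with limits chosen to sit inside Firestore's documented caps, such as the 20-level nesting limit for maps and arrays:

```rust
// JSON-like stand-in for a deserialized request body (hypothetical type).
enum Value {
    Str(String),
    Array(Vec<Value>),
    Map(Vec<(String, Value)>),
}

// Recursively enforce depth, array-length, and map-key bounds before the
// value is mapped onto a Firestore document. Limits are illustrative.
fn check_bounds(v: &Value, depth: usize) -> Result<(), String> {
    if depth > 20 {
        // Firestore caps map/array nesting at 20 levels
        return Err("nesting too deep".into());
    }
    match v {
        Value::Str(s) if s.len() > 1_000 => Err("string field too long".into()),
        Value::Str(_) => Ok(()),
        Value::Array(items) => {
            if items.len() > 100 {
                return Err("array too large".into());
            }
            items.iter().try_for_each(|i| check_bounds(i, depth + 1))
        }
        Value::Map(entries) => {
            if entries.len() > 50 {
                return Err("too many map keys".into());
            }
            entries.iter().try_for_each(|(_, i)| check_bounds(i, depth + 1))
        }
    }
}
```

Rejecting the payload here, before deserialization into the Firestore-mapped struct completes its journey to the client library, closes the gap between Axum's flexible deserialization and Firestore's operational boundaries.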
Detection typically involves monitoring Firestore error logs for write failures with invalid argument errors, paired with Axum logs showing unusually large request bodies or malformed IDs. Security testing should include sending oversized strings as document IDs, deeply nested objects, and high-volume batch writes to observe how the Axum service handles backpressure and validation. MiddleBrick’s LLM/AI Security checks can identify whether prompt injection or data exfiltration probes trigger unexpected Firestore interactions, while its BFLA/Privilege Escalation and Input Validation checks help uncover missing boundary controls in the Axum-Firestore integration.
Firestore-Specific Remediation in Axum — concrete code fixes
Remediation focuses on strict validation of all user-controlled data before it reaches Firestore operations within Axum handlers. Document IDs should be normalized, length-limited, and restricted to safe character sets. Request bodies must be validated against defined bounds, and batch operations should enforce per-document size limits to avoid partial writes.
Below are concrete Axum handler examples with Firestore integration that demonstrate secure practices.
use axum::{
    extract::{Json, Path, State},
    routing::post,
    Router,
};
use firestore::FirestoreDb;
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct UserProfile {
    #[serde(rename = "userName")]
    user_name: String,
    email: String,
    tags: Vec<String>,
}

async fn create_profile(
    db: FirestoreDb,
    user_id: String,
    profile: UserProfile,
) -> Result<String, String> {
    // Validate user_id length and characters before it becomes a document ID
    if user_id.len() < 3 || user_id.len() > 30 {
        return Err("Invalid user ID length".into());
    }
    if !user_id
        .chars()
        .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_')
    {
        return Err("User ID contains invalid characters".into());
    }
    // Bound tag count and tag length to stay within Firestore limits
    if profile.tags.len() > 20 {
        return Err("Too many tags".into());
    }
    if profile.tags.iter().any(|tag| tag.len() > 100) {
        return Err("Tag too long".into());
    }
    // Ensure email length is reasonable (RFC 5321 caps addresses at 254 bytes)
    if profile.email.len() > 254 {
        return Err("Email too long".into());
    }
    // Write the validated document via the firestore crate's fluent API
    db.fluent()
        .insert()
        .into("profiles")
        .document_id(&user_id)
        .object(&profile)
        .execute::<UserProfile>()
        .await
        .map_err(|e| e.to_string())?;
    Ok(user_id)
}

async fn create_profile_handler(
    State(db): State<FirestoreDb>,
    Path(user_id): Path<String>,
    Json(profile): Json<UserProfile>,
) -> String {
    match create_profile(db, user_id, profile).await {
        Ok(id) => format!("Created: {}", id),
        Err(e) => format!("Error: {}", e),
    }
}

#[tokio::main]
async fn main() {
    // FirestoreDb::new is async in the firestore crate
    let db = FirestoreDb::new("my-project-id")
        .await
        .expect("Firestore init");
    let app = Router::new()
        .route("/profiles/:user_id", post(create_profile_handler))
        .with_state(db);
    // axum 0.7 style: bind a TcpListener and serve
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}
Key remediation steps embedded in the code:
- Document ID validation: Length and character checks before calling Firestore, preventing oversized or malformed IDs that could trigger backend anomalies.
- Field-level validation: Explicit checks on email length and tag count/length to respect Firestore property and size limits.
- Error handling: Returning clear error messages without exposing internal paths or stack traces, reducing information leakage.
For batch operations, validate each document individually before submitting to Firestore’s batch_write, and enforce per-document limits to avoid partial writes that can leave data in an inconsistent state. MiddleBrick’s Pro plan continuous monitoring can alert you when Firestore returns repeated invalid argument errors, indicating potential boundary issues in your Axum handlers.
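One way to make the all-or-nothing pre-check concrete is to validate every (document ID, serialized body) pair before any write is issued, so a single invalid document cannot leave the batch partially applied. The limits below mirror Firestore's commonly documented ceilings (1,500-byte document IDs, ~1 MiB documents, 500 writes per batch) and should be treated as illustrative rather than exhaustive:

```rust
// All-or-nothing batch pre-check: reject the entire batch if any single
// document violates a limit, avoiding partial writes. Limits are the
// commonly documented Firestore ceilings and are illustrative.
const MAX_BATCH_WRITES: usize = 500;
const MAX_ID_BYTES: usize = 1_500;
const MAX_DOC_BYTES: usize = 1_048_576;

fn validate_batch(docs: &[(String, String)]) -> Result<(), String> {
    if docs.len() > MAX_BATCH_WRITES {
        return Err(format!(
            "batch of {} exceeds {} writes",
            docs.len(),
            MAX_BATCH_WRITES
        ));
    }
    for (i, (id, body)) in docs.iter().enumerate() {
        if id.is_empty() || id.len() > MAX_ID_BYTES {
            return Err(format!("document {} has invalid ID length", i));
        }
        if body.len() > MAX_DOC_BYTES {
            return Err(format!("document {} exceeds ~1 MiB limit", i));
        }
    }
    Ok(())
}
```

Only after `validate_batch` returns `Ok(())` should the handler submit the batch to Firestore, guaranteeing that a rejected batch leaves no residual state behind.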