

Token Leakage in Axum with Firestore — how this specific combination creates or exposes the vulnerability

Token leakage in an Axum service that uses Google Cloud Firestore occurs when authentication material or session tokens are inadvertently exposed through API responses, logs, or error messages. Because Firestore operations in Axum often require a service account identity and may return rich document data, developers can mistakenly include tokens or keys in serialized responses or debug output.

In this stack, a common pattern is to authenticate to Firestore using Application Default Credentials (ADC) on the server and then construct Firestore DocumentReference paths from user-supplied identifiers. If request handling code passes an authentication bearer token into the Firestore client configuration or embeds it in a struct that is later serialized as JSON, the token can leak through API endpoints, webhooks, or server-sent events. For example, returning a struct that contains both Firestore document data and an internal access_token field will expose that token to any client that can read the API response.
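One defensive pattern is to strip token-like fields from a document map before it is serialized into a response. The sketch below uses only the standard library, and the field names (`access_token`, `refresh_token`, `service_account_key`) are assumptions; adjust the list to your own schema:

```rust
use std::collections::BTreeMap;

/// Keys that must never appear in an API response body.
/// These names are assumptions; adapt them to your document schema.
const SENSITIVE_KEYS: &[&str] = &["access_token", "refresh_token", "service_account_key"];

/// Remove token-like fields from a document map before serialization.
fn sanitize_document(mut doc: BTreeMap<String, String>) -> BTreeMap<String, String> {
    for key in SENSITIVE_KEYS {
        doc.remove(*key);
    }
    doc
}

fn main() {
    let mut doc = BTreeMap::new();
    doc.insert("display_name".to_string(), "Ada".to_string());
    doc.insert("access_token".to_string(), "ya29.secret".to_string());
    let clean = sanitize_document(doc);
    assert!(!clean.contains_key("access_token"));
    assert!(clean.contains_key("display_name"));
    println!("{:?}", clean);
}
```

In practice, the stronger fix is a typed response struct (shown in the remediation section below) so that sensitive fields cannot be serialized at all; a key-based filter like this is a fallback when responses are built from dynamic document maps.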

Another leakage vector specific to Axum with Firestore is improper handling of Firestore metadata in error cases. When a Firestore read or write fails, Rust backtraces or custom error types might include token-like strings, service account identifiers, or full request payloads if developers accidentally include sensitive fields in debug formatting. Because Axum allows custom error layers that transform errors into HTTP responses, a poorly implemented error handler can serialize and return internal token information to the caller.
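A minimal sketch of the safe pattern, using only the standard library: internal errors may carry sensitive context for server-side logging, but the value returned to the caller is mapped to a generic message first. The error variant and its fields are hypothetical:

```rust
use std::fmt;

/// Internal error that may carry sensitive context (hypothetical fields).
#[derive(Debug)]
enum InternalError {
    FirestoreWrite { token: String, path: String },
}

/// Public-facing error carrying no sensitive payload.
struct PublicError(&'static str);

impl fmt::Display for PublicError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

/// Map internal errors to generic messages; the token and path never
/// reach the HTTP response body.
fn to_public(err: &InternalError) -> PublicError {
    match err {
        InternalError::FirestoreWrite { .. } => PublicError("internal error"),
    }
}

fn main() {
    let err = InternalError::FirestoreWrite {
        token: "ya29.secret".to_string(),
        path: "users/alice".to_string(),
    };
    let public = to_public(&err);
    assert!(!public.to_string().contains("ya29"));
    println!("{}", public);
}
```

The key point is that only `PublicError` ever implements the conversion into an HTTP response; the `Debug` representation of `InternalError` stays on the server.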

Middleware that logs requests for observability can also contribute to token leakage. If the logging layer captures full request or response bodies and those bodies contain authentication tokens used for Firestore authorization, the tokens are persisted in logs or monitoring systems. This is especially risky when Firestore security rules rely on token claims for authorization; leaking those claims can expose both identity and authorization context to unauthorized viewers.
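The usual mitigation is to redact credentials before a log line is written. A std-only sketch of the redaction step (in a real service this logic would live inside the logging middleware itself):

```rust
/// Mask the credential portion of an Authorization header before logging.
/// A std-only sketch; a real service would apply this inside its logging layer.
fn redact_authorization(line: &str) -> String {
    // Look for the "Bearer " scheme and mask everything after it.
    match line.find("Bearer ") {
        Some(idx) => format!("{}Bearer [REDACTED]", &line[..idx]),
        None => line.to_string(),
    }
}

fn main() {
    let logged = redact_authorization("Authorization: Bearer ya29.a0AfH6secret");
    assert_eq!(logged, "Authorization: Bearer [REDACTED]");
    println!("{}", logged);
}
```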

Token leakage in this combination undermines the security model where Firestore rules restrict document access based on authenticated identity. If a token describing a user’s permissions is exposed, an attacker can use it to escalate access or to craft requests that bypass intended Firestore security rules. Because Firestore enforces rules at the document level, leaked tokens can lead to unauthorized reads or writes across user boundaries, effectively bypassing the intended isolation that Firestore provides.

To detect these patterns, middleBrick runs an unauthenticated scan against the endpoint, checking for exposed tokens in responses and evaluating how Axum routes interact with Firestore document paths. The LLM/AI Security checks probe for system prompt leakage and scan outputs to identify whether API responses inadvertently include credentials, API keys, or executable code. Findings include severity ratings and remediation guidance mapped to frameworks such as the OWASP API Security Top 10 and compliance regimes like SOC 2 and GDPR.

Firestore-Specific Remediation in Axum — concrete code fixes

Remediation focuses on ensuring tokens never appear in HTTP responses, logs, or error payloads while still allowing Firestore access via secure server-side identity. The server should use Application Default Credentials or a service account key file stored securely outside the request lifecycle, and never pass tokens through route parameters or response structures.

First, configure Firestore client initialization outside request handlers so tokens are not tied to individual requests. In Axum, this is typically done at startup and shared via state:

use firestore::FirestoreDb;
use std::sync::Arc;

struct AppState {
    // FirestoreDb::new returns a FirestoreDb directly; the whole AppState
    // is wrapped in an Arc below, so the field does not need its own Arc.
    db: FirestoreDb,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = FirestoreDb::new("my-project-id").await?;
    let state = Arc::new(AppState { db });

    // build and run Axum router with state.clone()
    Ok(())
}

Second, ensure response serialization never includes authentication fields. Define domain models that exclude tokens and use explicit serialization rather than dumping raw Firestore documents:

use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct PublicProfile {
    pub user_id: String,
    pub display_name: String,
    pub email: String,
    // Do not include access_token, refresh_token, or service account keys
}

Third, sanitize errors before they reach Axum’s error layer. Map Firestore or internal errors to generic messages and avoid attaching request bodies or credentials to error logs:

use axum::{http::StatusCode, response::IntoResponse};
use thiserror::Error;

#[derive(Error, Debug)]
pub enum ApiError {
    #[error("request failed")]
    RequestFailed,
    #[error("not found")]
    NotFound,
}

impl IntoResponse for ApiError {
    fn into_response(self) -> axum::response::Response {
        let status = match self {
            ApiError::NotFound => StatusCode::NOT_FOUND,
            ApiError::RequestFailed => StatusCode::INTERNAL_SERVER_ERROR,
        };
        // Return only the generic thiserror message; never attach the
        // underlying error's Debug output or request payloads.
        (status, self.to_string()).into_response()
    }
}

Fourth, disable verbose debug formatting in production builds to prevent tokens from appearing in backtraces or log output. Use structured logging with explicit field selection instead of dumping entire structs:

use tracing::Level;
use tracing_subscriber::FmtSubscriber;

let subscriber = FmtSubscriber::builder()
    .with_max_level(Level::INFO)
    .finish();
tracing::subscriber::set_global_default(subscriber).expect("setting default subscriber failed");
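The "explicit field selection" idea can be illustrated with a std-only sketch: build log output from an allowlist of fields rather than Debug-formatting an entire request struct. In real code this corresponds to writing `tracing::info!(user_id = %id, ...)` with named fields instead of `info!("{:?}", request)`; the allowlist below is an assumption:

```rust
use std::collections::BTreeMap;

/// Fields that are safe to emit in log output (an assumed allowlist).
const LOGGABLE: &[&str] = &["user_id", "route", "status"];

/// Build a log line from allowlisted fields so that tokens carried in
/// other fields (e.g. an Authorization header) never reach the log sink.
fn log_line(fields: &BTreeMap<&str, &str>) -> String {
    LOGGABLE
        .iter()
        .filter_map(|k| fields.get(k).map(|v| format!("{}={}", k, v)))
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let mut fields = BTreeMap::new();
    fields.insert("user_id", "alice");
    fields.insert("authorization", "Bearer ya29.secret");
    fields.insert("status", "200");
    let line = log_line(&fields);
    assert!(!line.contains("ya29"));
    assert_eq!(line, "user_id=alice status=200");
    println!("{}", line);
}
```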

Finally, validate and restrict Firestore security rules on the server side and ensure that Axum routes do not rely on client-supplied tokens for Firestore authorization. Use Axum extractors to enforce authentication before Firestore calls, and keep token handling confined to server-side identity providers.
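The extraction step an authentication extractor would perform can be sketched with the standard library alone. This parses and minimally validates a bearer token from an Authorization header value; real validation would also verify the token's signature and claims before any Firestore call is made:

```rust
/// Parse and minimally validate a bearer token from an Authorization
/// header value. A std-only sketch of the first step an Axum extractor
/// would perform; signature and claim verification must follow.
fn bearer_token(header: &str) -> Option<&str> {
    let token = header.strip_prefix("Bearer ")?;
    // Reject empty values and values containing whitespace.
    if token.is_empty() || token.contains(char::is_whitespace) {
        return None;
    }
    Some(token)
}

fn main() {
    assert_eq!(bearer_token("Bearer abc.def"), Some("abc.def"));
    assert_eq!(bearer_token("Basic abc"), None);
    assert_eq!(bearer_token("Bearer "), None);
    println!("ok");
}
```

Keeping this parsing in a dedicated extractor means handlers receive an already-validated identity and never touch the raw header, which also keeps the token out of handler-level logging and serialization.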

Frequently Asked Questions

How can I verify that my Axum endpoints are not leaking tokens when integrated with Firestore?
Use middleBrick to scan your endpoints; it checks responses for exposed credentials, API keys, and executable code, and provides prioritized findings with remediation guidance.
Does middleBrick test for token leakage in the context of LLM-enabled endpoints?
Yes, the LLM/AI Security checks include output scanning for PII, API keys, and system prompt leakage, which helps identify token exposure through language model endpoints.