LLM Data Leakage in Axum with Basic Auth
LLM Data Leakage in Axum with Basic Auth — how this specific combination creates or exposes the vulnerability
When an Axum service protected only by Basic Authentication exposes an LLM endpoint, or routes requests containing sensitive user data into model calls, the combination can unintentionally leak credentials and application context into prompts, training traces, or model outputs. Basic Authentication embeds credentials in the Authorization header as a Base64-encoded string, which is encoding, not encryption: any component that can observe or log that header, such as proxies, logging middleware, or instrumentation, can trivially recover the plaintext credentials.

If the same request path or handler also forwards data to an LLM (for example via a tool call, function call, or streaming chat completion), the forwarded payload may include headers, cookies, or route parameters that contain secrets. An LLM security scanner that performs active prompt injection and output scanning may then detect system prompt leakage or exfiltrated credentials appearing in model responses, especially when crafted probes arrive with the same authentication token.

In Axum, this often occurs when developers wire generic HTTP clients or middleware to call external AI services without first stripping or sanitizing the incoming request context. Because Axum's extractor model gives handlers direct access to headers and request parts, it is easy to forward authorization data inadvertently if filters are not explicit. The risk is not that Axum or Basic Auth is broken, but that the integration pattern does not separate authentication from model input, allowing sensitive material to travel into the AI processing pipeline, where it may be surfaced through verbose error messages, retained in chat history, or exposed via unauthenticated LLM endpoints that do not validate the origin of the request.
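To make the "encoded, not encrypted" point concrete, the std-only sketch below recovers credentials from a captured header value. The `base64_decode` helper is hand-rolled purely for illustration; production code would use the `base64` crate.

```rust
// Demonstration: recovering Basic Auth credentials from a captured header.
// Base64 is a reversible encoding; no key is needed to undo it.
fn base64_decode(input: &str) -> Option<Vec<u8>> {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let (mut buf, mut bits, mut out) = (0u32, 0u32, Vec::new());
    for c in input.bytes() {
        if c == b'=' {
            break; // padding carries no data bits
        }
        let v = ALPHABET.iter().position(|&a| a == c)? as u32;
        buf = (buf << 6) | v;
        bits += 6;
        if bits >= 8 {
            bits -= 8;
            out.push((buf >> bits) as u8);
            buf &= (1u32 << bits) - 1; // keep only the leftover bits
        }
    }
    Some(out)
}

fn main() {
    // A header value that any proxy or log line might capture:
    let header = "Basic dXNlcjpwYXNzd29yZA==";
    let creds = base64_decode(&header["Basic ".len()..]).unwrap();
    println!("{}", String::from_utf8(creds).unwrap()); // prints "user:password"
}
```

Anything in the request path that can read this header value has the credentials in full.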
Basic Auth-Specific Remediation in Axum — concrete code fixes
To prevent LLM data leakage in Axum when using Basic Authentication, ensure credentials are never forwarded to AI endpoints and that request parts intended for models are explicitly cleaned. Use dedicated extractors for authentication and do not propagate headers into outgoing client requests. Below are two concrete patterns: a safe extractor that consumes credentials without passing them along, and a guarded client builder that omits authorization headers when calling LLM endpoints.
1. Authenticate without forwarding credentials to the LLM client
Use State<AuthConfig> to hold shared configuration and a custom extractor that validates credentials but returns a sanitized request part. This keeps authentication separate from the data you send to external models.
```rust
// axum 0.7, base64 0.21, reqwest with the `json` feature.
use axum::{
    async_trait,
    extract::{FromRequestParts, Json, State},
    http::{request::Parts, StatusCode},
    response::IntoResponse,
};
use base64::{engine::general_purpose, Engine as _};
use std::sync::Arc;

// Shared state for the outbound LLM call; holds the LLM API key,
// never the incoming Basic Auth credentials.
struct AuthConfig {
    api_key: String,
    llm_client: reqwest::Client,
}

#[derive(Debug)]
struct BasicUser(String);

#[async_trait]
impl<S> FromRequestParts<S> for BasicUser
where
    S: Send + Sync,
{
    type Rejection = (StatusCode, &'static str);

    async fn from_request_parts(parts: &mut Parts, _state: &S) -> Result<Self, Self::Rejection> {
        let auth_header = parts
            .headers
            .get("authorization")
            .ok_or((StatusCode::UNAUTHORIZED, "missing authorization"))?;
        let header_str = auth_header
            .to_str()
            .map_err(|_| (StatusCode::UNAUTHORIZED, "invalid header"))?;
        if !header_str.starts_with("Basic ") {
            return Err((StatusCode::UNAUTHORIZED, "invalid auth type"));
        }
        // Decode the credentials here only to validate them; do not store
        // or forward this header anywhere else.
        let _creds = general_purpose::STANDARD
            .decode(&header_str["Basic ".len()..])
            .map_err(|_| (StatusCode::UNAUTHORIZED, "decode error"))?;
        // In practice, verify username/password against a store.
        Ok(BasicUser("valid_user".to_string()))
    }
}

async fn handler(
    _user: BasicUser,
    State(state): State<Arc<AuthConfig>>,
    // Extract only the business data you need for the LLM call:
    Json(payload): Json<serde_json::Value>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
    // Build the LLM request without copying headers from the incoming request.
    let llm_req = serde_json::json!({
        "model": "example",
        "messages": [{"role": "user", "content": payload["content"].as_str().unwrap_or("")}]
    });
    let response = state
        .llm_client
        .post("https://api.example.com/v1/chat/completions")
        .bearer_auth(&state.api_key)
        .json(&llm_req)
        .send()
        .await
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
    // Handle response…
    let body = response
        .text()
        .await
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
    Ok(body)
}
```
2. Guarded reqwest client and request builder
When constructing the HTTP client used for LLM calls, ensure the outgoing request never copies the Authorization header from the incoming request. Enforce this by building each outbound request explicitly and setting only the headers the LLM API needs; if you run request-logging middleware, make sure it redacts sensitive headers so credentials cannot leak through logs either.
```rust
use axum::{routing::post, Router};
use reqwest::Client; // reqwest with the `json` feature
use std::sync::Arc;

struct AppState {
    llm_client: Client,
    api_key: String,
}

impl AppState {
    async fn call_llm(&self, user_content: &str) -> Result<reqwest::Response, reqwest::Error> {
        let llm_req = serde_json::json!({
            "model": "example",
            "messages": [{"role": "user", "content": user_content}]
        });
        // Build a fresh request without inheriting headers from the incoming request.
        self.llm_client
            .post("https://api.example.com/v1/chat/completions")
            .bearer_auth(&self.api_key)
            .json(&llm_req)
            .send()
            .await
    }
}

// In your route setup (`handler` takes `State<Arc<AppState>>`, as in pattern 1):
let client = Client::new();
let state = Arc::new(AppState {
    llm_client: client,
    api_key: std::env::var("LLM_API_KEY").expect("LLM_API_KEY must be set"),
});
let app = Router::new()
    .route("/chat", post(handler))
    .with_state(state);
```
Additional practices: avoid logging full request headers where credentials may appear, and configure your Axum middleware stack to strip or redact Authorization when proxying to LLM endpoints (tower-http's SetSensitiveRequestHeadersLayer can mark such headers as sensitive so tracing output redacts them). These steps reduce the chance that Basic Auth material reaches the model pipeline and appears in outputs flagged by LLM security scanning.
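As a minimal sketch of the "strip before proxying" idea, the helper below (hypothetical name `sanitize_for_llm`, using a plain `HashMap` as a stand-in for a real header map) drops credential-bearing headers before an incoming request's headers could ever be reused for an outbound LLM call:

```rust
use std::collections::HashMap;

// Hypothetical filter: remove credential-bearing headers before any
// incoming header map is reused for an outbound LLM request.
fn sanitize_for_llm(headers: &HashMap<String, String>) -> HashMap<String, String> {
    const SENSITIVE: [&str; 3] = ["authorization", "proxy-authorization", "cookie"];
    headers
        .iter()
        // Header names are case-insensitive, so compare lowercased.
        .filter(|(name, _)| !SENSITIVE.contains(&name.to_ascii_lowercase().as_str()))
        .map(|(name, value)| (name.clone(), value.clone()))
        .collect()
}

fn main() {
    let mut incoming = HashMap::new();
    incoming.insert(
        "Authorization".to_string(),
        "Basic dXNlcjpwYXNzd29yZA==".to_string(),
    );
    incoming.insert("Content-Type".to_string(), "application/json".to_string());

    let outbound = sanitize_for_llm(&incoming);
    assert!(!outbound.contains_key("Authorization"));
    assert!(outbound.contains_key("Content-Type"));
}
```

A deny-list like this is a last line of defense; the patterns above, which never copy incoming headers at all, are the primary control.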
Related CWEs:
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |