HIGH: LLM Data Leakage / Actix / HMAC Signatures

LLM Data Leakage in Actix with HMAC Signatures

LLM Data Leakage in Actix with HMAC Signatures — how this specific combination creates or exposes the vulnerability

When integrating HMAC signatures into an Actix web service that exposes an LLM endpoint, implementation choices can inadvertently leak sensitive data through the LLM output. HMAC is typically used to verify the integrity and origin of a request, but if the application logs or surfaces signature-validation errors to the LLM client or includes sensitive payload fragments in error messages, the LLM may reflect those details in its responses.

Consider an Actix endpoint that accepts a request body, computes an HMAC using a shared secret, and forwards data to an LLM service. If a signature mismatch triggers a verbose error that includes the received payload or parts of the computed hash, and that error is passed to the LLM as context or returned directly to the caller, confidential information can be extracted. This becomes an LLM data leakage when the LLM’s response reveals internal state, request content, or cryptographic material through crafted prompts or benign queries.

Real-world patterns include:

  • Returning HTTP 401 errors that echo the signed payload or the expected vs. actual HMAC values.
  • Including user-controlled data in debug logs that are accessible to the LLM via injected logging or trace endpoints.
  • Exposing an unauthenticated LLM endpoint where an attacker can probe with manipulated HMAC inputs to infer secret material or observe differential behavior based on signature validity.
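The first pattern above can be made concrete with a small sketch (illustrative names only, not tied to any framework): a leaky error constructor that echoes the payload and both HMAC values, contrasted with the uniform response a handler should return instead.

```rust
// ANTI-PATTERN: builds an error string that echoes the signed payload and
// both HMAC values. If this string reaches the client or an LLM prompt,
// it hands an attacker secret-derived material.
fn leaky_error(payload: &str, expected: &str, actual: &str) -> String {
    format!("signature mismatch for '{payload}': expected {expected}, got {actual}")
}

// SAFE: a uniform message that reveals nothing about the payload, the
// computed hash, or which check failed.
fn safe_error() -> &'static str {
    "unauthorized"
}

fn main() {
    // The leaky variant exposes everything an attacker needs to probe with.
    println!("{}", leaky_error("card=4111...", "ab12", "cd34"));
    // The safe variant is identical for every failure mode.
    println!("{}", safe_error());
}
```

The point is not the helper functions themselves but the contrast: every validation failure path should converge on the same opaque response.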

For example, if an Actix handler deserializes JSON containing a signature field and passes the raw body to an LLM client library for enrichment, the LLM might be prompted with error text containing truncated hashes or secret-derived tokens. This aligns with the LLM/AI Security checks in middleBrick, which detect system prompt leakage and unsafe consumption patterns, flagging scenarios where implementation details bleed into model interactions.

Using middleBrick’s LLM/AI Security probes, such a misconfiguration would be identified through active prompt injection tests and output scanning for API keys or PII. The scanner checks whether unauthenticated endpoints expose behaviors that could enable an attacker to coax the LLM into revealing sensitive context, making the combination of HMAC handling and LLM endpoints a high-risk surface when not carefully isolated.
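To illustrate the idea behind output scanning for leaked cryptographic material (a simplified sketch only, not middleBrick's actual implementation), a scanner can flag LLM responses containing tokens that look like long hex strings, such as reflected HMAC digests:

```rust
// Simplified sketch: flag whitespace-delimited tokens that look like long
// hex strings (e.g. a leaked HMAC or hash value) in an LLM response.
// Real scanners use far richer detectors (PII, API-key formats, etc.).
fn contains_hex_secret(output: &str, min_len: usize) -> bool {
    output.split_whitespace().any(|tok| {
        tok.len() >= min_len && tok.chars().all(|c| c.is_ascii_hexdigit())
    })
}

fn main() {
    // A 64-char hex token (SHA-256-sized) embedded in model output.
    let leaked = "the expected signature was \
        9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08";
    assert!(contains_hex_secret(leaked, 32));
    assert!(!contains_hex_secret("status: unauthorized", 32));
}
```

A check like this in CI catches the exact failure mode described above: signature material from a validation path bleeding into model output.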

HMAC Signature-Specific Remediation in Actix — concrete code fixes

To prevent LLM data leakage when using HMAC signatures in Actix, ensure that cryptographic validation errors are generic, that sensitive data never reaches the LLM context, and that endpoints are appropriately protected. Below are concrete, realistic code examples for secure HMAC handling in Actix with Rust.

1. Validating HMAC without leaking details

Use constant-time comparison and return a uniform error response that does not disclose whether the signature, payload, or any intermediate value was incorrect.

use actix_web::{post, web, Error, HttpResponse};
use hmac::{Hmac, Mac};
use serde::{Deserialize, Serialize};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

#[derive(Deserialize)]
struct SignedRequest {
    payload: String,
    signature: String,
}

#[derive(Serialize)]
struct ApiResponse {
    status: &'static str,
}

#[post("/process")]
async fn process_signed(req: web::Json<SignedRequest>) -> Result<HttpResponse, Error> {
    // Fail with a generic error if the secret is missing; never panic in a handler
    let secret = std::env::var("HMAC_SECRET")
        .map_err(|_| actix_web::error::ErrorInternalServerError("error"))?;
    let mut mac = HmacSha256::new_from_slice(secret.as_bytes())
        .map_err(|_| actix_web::error::ErrorInternalServerError("error"))?;
    mac.update(req.payload.as_bytes());
    // Use constant-time comparison to avoid timing leaks
    let computed = hex::encode(mac.finalize().into_bytes());
    let received = req.signature.trim_start_matches("0x");
    if bool::from(subtle::ConstantTimeEq::ct_eq(
        computed.as_bytes(),
        received.as_bytes(),
    )) {
        // Safe to forward sanitized data to the LLM; exclude the signature
        // and any other cryptographic material
        let _safe_data = serde_json::json!({ "data": req.payload });
        // llm_client.send(_safe_data).await; // example placeholder
        Ok(HttpResponse::Ok().json(ApiResponse { status: "ok" }))
    } else {
        // Generic error; no details about signature or payload
        Ok(HttpResponse::Unauthorized().json(ApiResponse { status: "unauthorized" }))
    }
}
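For readers unfamiliar with what the `subtle` crate's constant-time comparison actually does, here is a minimal pure-std sketch of the technique. In production code, prefer the audited `subtle` crate or `Mac::verify_slice` from the `hmac` crate rather than rolling your own.

```rust
// Minimal sketch of constant-time equality in plain Rust (std only).
// Illustrates the technique behind `subtle::ConstantTimeEq`; do NOT use
// this in production in place of an audited implementation.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        // Length is not secret here, so an early return is acceptable.
        return false;
    }
    // XOR-accumulate differences so the loop always runs to completion,
    // regardless of where the first mismatch occurs.
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"ab12cd34", b"ab12cd34"));
    assert!(!ct_eq(b"ab12cd34", b"ab12cd35"));
    assert!(!ct_eq(b"short", b"longer value"));
}
```

A naive `==` on byte slices can short-circuit at the first mismatch, letting an attacker recover a valid signature byte by byte from response timing; the XOR-accumulate loop removes that signal.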

2. Isolating LLM interactions from validation logic

Ensure that the LLM endpoint receives only necessary, non-sensitive data and that validation errors do not propagate into LLM prompts or responses.

async fn call_llm_with_safe_data(data: &str) -> Result<String, reqwest::Error> {
    let client = reqwest::Client::new();
    let llm_endpoint = "https://api.example.com/llm";
    // Send only non-sensitive, business-level data; exclude cryptographic material
    let body = serde_json::json!({ "input": data });
    let response = client
        .post(llm_endpoint)
        .json(&body)
        .send()
        .await?
        .json::<serde_json::Value>()
        .await?;
    Ok(response.to_string())
}

3. Secure configuration for middleBrick scanning

When using the middleBrick CLI to validate your Actix endpoints, run scans against a staging URL to detect unintended data exposure. The CLI provides JSON output that can be integrated into CI/CD to enforce security gates before deployment.

# Example CLI usage
middlebrick scan https://staging.example.com/api/process

By combining constant-time verification, strict error handling, and isolation of LLM calls, you reduce the risk of LLM data leakage through HMAC validation paths. The GitHub Action can enforce a minimum security score, and the MCP Server allows you to scan API contracts directly from your IDE to catch regressions early.

Related CWEs (category: llmSecurity)

  CWE ID    Name                                                   Severity
  CWE-754   Improper Check for Unusual or Exceptional Conditions   MEDIUM

Frequently Asked Questions

Why does returning HMAC validation errors in an Actix handler risk LLM data leakage?
Because detailed errors may include the request payload or computed HMAC values. If these errors are exposed to the LLM or returned to the caller, an attacker can use prompt injection or careful inspection of responses to infer secret material or sensitive data.
How does middleBrick help detect LLM data leakage in HMAC-protected Actix services?
middleBrick’s LLM/AI Security checks include active prompt injection probes and output scanning for PII, API keys, and code. Scanning your Actix endpoints with the middleBrick CLI or GitHub Action can identify unsafe error handling or exposure of cryptographic details that could lead to data leakage.