
LLM Data Leakage in Express with Basic Auth

LLM Data Leakage in Express with Basic Auth — how this specific combination creates or exposes the vulnerability

When an Express API uses HTTP Basic Authentication and also exposes endpoints that interact with LLM services, the combination can unintentionally expose credentials, prompts, or sensitive business logic in model inputs, outputs, or agent behaviors. Basic Auth transmits credentials on every request; if those requests are forwarded to an LLM endpoint or logged as part of application telemetry, credentials can leak into prompts, tool calls, or error messages.

Consider an Express route that authenticates a user with Basic Auth and then forwards the authenticated request body or headers to an LLM service for summarization or classification. If the application does not strip or redact the Authorization header before sending data to the LLM, the base64-encoded credentials may be included in the prompt content. Because LLM endpoints can be unauthenticated or weakly guarded in some configurations, an attacker who can influence prompt input might cause credential exfiltration via crafted outputs or tool calls.
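As a hedged sketch of this unsafe pattern (the `buildPromptUnsafe` helper and the field names are hypothetical, for illustration only):

```javascript
// ANTI-PATTERN (illustration only): building an LLM prompt from the raw request.
// Whatever is in req.headers, including `authorization: Basic <base64>`,
// becomes model input.
function buildPromptUnsafe(req) {
  return `Summarize this request: ${JSON.stringify({
    headers: req.headers,
    body: req.body
  })}`;
}

// With Basic Auth present, the base64 credentials leak into the prompt:
const fakeReq = {
  headers: { authorization: 'Basic dXNlcjpwYXNz', 'content-type': 'application/json' },
  body: { text: 'quarterly report' }
};
const prompt = buildPromptUnsafe(fakeReq);
// `prompt` now contains "Basic dXNlcjpwYXNz", exactly what must never reach the model
```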

LLM Data Leakage in this context includes several specific risks:

  • System prompt leakage: If error messages or debugging output from the LLM contain parts of the user’s request that include Authorization headers, the system prompt or inference details may reveal credentials or internal routing logic.
  • Prompt injection via Basic Auth values: An attacker may supply crafted input that causes the application to include the Authorization header in the LLM prompt, enabling injection or extraction attempts.
  • Output scanning gaps: Without explicit checks, LLM responses may echo back credentials or tokens that were present in the input, especially when the model reflects on prior conversation context.
  • Excessive agency patterns: If the Express backend delegates tool selection or function calling to an LLM and passes headers as tool parameters, credentials may be handed to the model as tool arguments, enabling unintended agency or data exfiltration via tool_call outputs.
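The tool-argument risk in the last bullet can be reduced by redacting credential-like values before they are passed as tool parameters. A minimal sketch, assuming a simple regex heuristic (the pattern and the `redactCredentials` helper name are illustrative, not a specific SDK's API):

```javascript
const CREDENTIAL_PATTERN = /Basic\s+[A-Za-z0-9+/=]{8,}/gi;

// Recursively replace credential-like substrings in tool arguments
// before they are handed to the model as tool parameters.
function redactCredentials(value) {
  if (typeof value === 'string') {
    return value.replace(CREDENTIAL_PATTERN, '[REDACTED]');
  }
  if (Array.isArray(value)) {
    return value.map(redactCredentials);
  }
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, redactCredentials(v)])
    );
  }
  return value;
}
```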

middleBrick detects this combination through unauthenticated black-box scanning and LLM-specific probes. It checks whether Authorization headers or credential-like values are reflected in prompts sent to LLM endpoints and whether outputs contain sensitive patterns such as API keys or base64-like strings. By correlating runtime behavior with OpenAPI specs and active prompt injection probes (including system prompt extraction and DAN jailbreak attempts), middleBrick surfaces findings that indicate potential leakage of credentials or sensitive data through LLM interactions.

For example, a scan might detect that an endpoint which accepts Basic Auth and forwards body content to an LLM returns responses that include portions of the Authorization header. This would trigger findings in multiple LLM Security categories, including System Prompt Leakage and Unsafe Consumption, with remediation guidance to sanitize inputs and remove credentials before LLM interaction.

Basic Auth-Specific Remediation in Express — concrete code fixes

To prevent LLM data leakage when using Basic Auth in Express, you must ensure credentials are never forwarded to LLM endpoints and that sensitive values are stripped before any external call. Below are concrete remediation patterns and code examples.

1. Strip Authorization before LLM calls

Create a sanitization layer that removes or hashes sensitive headers before forwarding data to LLM services. Do not send the Authorization header or raw credentials to the model.

function sanitizeForLLM(req, body) {
  // Express lowercases incoming header names, so `authorization` is reliable.
  // Strip other credential-bearing headers (cookies, proxy auth) as well.
  const { authorization, cookie, 'proxy-authorization': proxyAuth, ...safeHeaders } = req.headers;
  return {
    headers: safeHeaders,
    body: typeof body === 'string' ? body : JSON.stringify(body)
  };
}

app.post('/api/summarize', (req, res) => {
  const safe = sanitizeForLLM(req, req.body);
  // send `safe.body` to the LLM endpoint; do NOT include `authorization`
});

2. Validate and redact credentials in user input

Ensure user-supplied content does not contain credential-like patterns before inclusion in prompts. Use strict allowlists where possible and reject or redact suspicious values.

function containsCredentialLike(value) {
  // Unanchored so credential-like substrings embedded in longer text are caught.
  const basicAuthPattern = /Basic\s+[A-Za-z0-9+/=]{8,}/i;
  return typeof value === 'string' && basicAuthPattern.test(value);
}

app.use((req, res, next) => {
  // Scan user-supplied fields, not the Authorization header itself;
  // legitimate Basic Auth clients must still be able to authenticate.
  // Requires a body parser such as express.json() earlier in the chain.
  const values = Object.values(req.body ?? {});
  if (values.some(containsCredentialLike)) {
    return res.status(400).json({ error: 'Invalid input: credentials are not allowed' });
  }
  next();
});

3. Use environment variables for service-to-service auth instead of forwarding user Basic Auth

When calling LLM services from Express, rely on server-side API keys stored in environment variables rather than passing user credentials. This keeps user authentication separate from model access.

const { LLM_API_KEY } = process.env;

async function callLLM(messages) {
  const response = await fetch('https://api.example.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${LLM_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ model: 'gpt-4o', messages })
  });
  if (!response.ok) {
    // Avoid echoing upstream error bodies; they may contain request details.
    throw new Error(`LLM request failed with status ${response.status}`);
  }
  return response.json();
}

app.post('/api/chat', async (req, res) => {
  try {
    const result = await callLLM(req.body.messages);
    res.json(result);
  } catch (err) {
    res.status(502).json({ error: 'Upstream LLM request failed' });
  }
});

4. Enforce strict CORS and header policies

Ensure that browsers do not inadvertently expose Authorization headers to client-side code or LLM-origin requests by configuring CORS and header policies tightly.

const cors = require('cors');
app.use(cors({
  origin: 'https://your-trusted-frontend.com',
  exposedHeaders: [],
  credentials: false
}));

5. Audit and test with LLM security probes

Run active LLM security checks to verify that credentials are not surfaced in prompts or outputs. Use tools that perform system prompt extraction, DAN jailbreak attempts, and output scanning for secrets.
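Output scanning can also be enforced in the application itself, before any model response leaves the server. A minimal sketch; the `scanLLMOutput` helper and the patterns below are illustrative starting points and should be tuned to your environment:

```javascript
// Reject or redact LLM output containing secret-like patterns before it
// is returned to clients or passed on to tools.
const SECRET_PATTERNS = [
  /Basic\s+[A-Za-z0-9+/=]{8,}/i,     // Basic Auth header values
  /Bearer\s+[A-Za-z0-9._\-]{20,}/i,  // bearer tokens / JWTs
  /\b[A-Za-z0-9+/]{40,}={0,2}\b/     // long base64-like strings
];

function scanLLMOutput(text) {
  const hits = SECRET_PATTERNS.filter((p) => p.test(text));
  return { clean: hits.length === 0, matches: hits.length };
}
```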

| Check | Safe Outcome | Risk Outcome |
| --- | --- | --- |
| Authorization header in LLM request | Header stripped; request uses service identity | Header forwarded; credential exposure risk |
| LLM output contains base64-like strings | Output sanitized or blocked | Potential credential leakage in response |

Related CWEs

| CWE ID | Name | Severity |
| --- | --- | --- |
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |

Frequently Asked Questions

Does middleBrick fix the issues it finds in Express + Basic Auth + LLM setups?
middleBrick detects and reports findings with remediation guidance; it does not fix, patch, or block. You must implement code changes such as stripping Authorization headers and using server-side service identities to address reported issues.
Can I rely on Basic Auth alone to protect my Express API when using LLMs?
Basic Auth protects transport between client and Express, but it does not prevent credential leakage to LLMs if headers or user input containing credentials are forwarded to models. You must sanitize inputs and avoid sending Basic Auth values to LLM endpoints.