LLM Data Leakage in Fiber with API Keys
LLM Data Leakage in Fiber with API Keys — how this specific combination creates or exposes the vulnerability
In a Fiber-based API, embedding API keys directly in responses or logs can create a data leakage path that extends into LLM interactions. When an endpoint returns raw key material, whether accidentally via verbose error messages or intentionally for debugging, that content may be captured by an LLM-enabled client or logging pipeline. If the same endpoint is later probed by an active LLM security check, the scan can detect whether API keys, PII, or executable code appear in LLM outputs or side channels.
The risk is context-specific: a route that echoes a configuration object containing Authorization: Bearer {key} fragments or returns a header like x-api-key: {key} in plaintext can unintentionally expose keys. During automated probing, such outputs are scanned for patterns matching 27 regex signatures tailored to ChatML, Llama 2, Mistral, and Alpaca formats, as well as for generic API key entropy patterns. If a key is present in an LLM response that is retained or cached, the exposure becomes actionable because keys grant access to downstream services.
LLM-specific checks also examine whether endpoints are unauthenticated and whether outputs contain sensitive artifacts. For example, an unauthenticated route that returns JSON with embedded keys can be exercised by active prompt-injection test sequences, including system prompt extraction and data exfiltration probes. If the LLM-integrated client reuses these responses for context, keys can propagate into prompts, increasing the chance of unintended usage or leakage through AI tooling.
middleBrick scans for these conditions by correlating OpenAPI/Swagger definitions (2.0, 3.0, 3.1) with runtime behavior. Cross-referencing spec definitions against observed responses helps identify mismatches where key-bearing structures appear in fields not documented as credentials. This correlation is valuable because it highlights inconsistencies between declared schemas and actual data exposure, supporting compliance mapping to frameworks such as OWASP API Top 10 and SOC2.
API Key-Specific Remediation in Fiber — concrete code fixes
Remediation focuses on preventing keys from appearing in responses accessible to LLMs or logs. In Fiber, avoid placing raw keys in JSON bodies, headers, or error payloads. Instead, reference keys indirectly and enforce strict serialization rules. Below are concrete, working examples demonstrating secure handling in a Fiber service.
First, a vulnerable pattern that should be avoided:
```go
// Assumes: import "os" and "github.com/gofiber/fiber/v2"
app := fiber.New()

app.Get("/debug", func(c *fiber.Ctx) error {
	apiKey := os.Getenv("API_KEY")
	// Vulnerable: returns the key directly in the response
	return c.JSON(fiber.Map{"key": apiKey})
})
```
This pattern risks exposing the key in logs, error traces, or LLM-intercepted outputs. A secure alternative uses opaque references and server-side validation:
```go
// Assumes: import "os", "github.com/gofiber/fiber/v2", "github.com/google/uuid"
app.Post("/use-key", func(c *fiber.Ctx) error {
	apiKey := os.Getenv("API_KEY")
	requestID := uuid.NewString()
	// Use the key server-side without echoing it back to the client.
	// backend is an application-defined client (placeholder).
	if _, err := backend.Call(apiKey, requestID); err != nil {
		return fiber.ErrBadGateway
	}
	return c.JSON(fiber.Map{"requestId": requestID, "status": "processed"})
})
```
For configuration inspection endpoints, return only metadata that does not contain key material:
```go
app.Get("/config/meta", func(c *fiber.Ctx) error {
	// Do not include actual keys or key fingerprints
	return c.JSON(fiber.Map{
		"supportedAlgorithms": []string{"HS256", "RS256"},
		"keyRotationEnabled":  true,
	})
})
```
Additionally, configure a custom error handler so stack traces and internal values never reach error payloads (in Fiber, this is set at app creation):
```go
app := fiber.New(fiber.Config{
	ErrorHandler: func(c *fiber.Ctx, err error) error {
		// Avoid sending keys or internal values in error payloads
		return c.Status(500).JSON(fiber.Map{"error": "Internal server error"})
	},
})
```
When integrating with external services, use short-lived tokens scoped to specific operations rather than long-lived API keys, and rotate them regularly. These practices reduce the window and impact of any potential LLM-mediated leakage.
Related CWEs
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |