Hallucination Attacks in FeathersJS with API Keys
Hallucination Attacks in FeathersJS with API Keys — how this specific combination creates or exposes the vulnerability
Hallucination in the context of LLM security refers to models generating plausible but incorrect, fabricated, or nonsensical responses. In FeathersJS applications that integrate LLM endpoints and rely on API keys for authorization, a specific attack surface emerges when keys are handled in server-side code that also interfaces with LLMs.
FeathersJS is a Node.js framework for real-time applications that typically exposes both REST and WebSocket endpoints. When an API key is used to authenticate requests to an LLM provider (e.g., OpenAI, Anthropic), and that key is embedded or improperly scoped in server-side service logic, two issues can align:
- Improper key scope or logging may expose the key in application logs or error messages that an attacker can read via other vulnerabilities (e.g., information disclosure or insecure direct object references).
- Server-side code that dynamically builds prompts from user input without strict validation or sanitization can be tricked into generating prompts that cause the LLM to hallucinate, revealing internal instructions, key identifiers, or operational details in the model’s output.
Consider a FeathersJS service that calls an LLM to summarize user-provided content and uses an API key stored in environment variables. If the application does not sanitize user input before constructing the prompt, an attacker can submit carefully crafted text that triggers the model to echo back system instructions or key-related metadata. This is a prompt injection vector that leverages weak input validation and overly permissive prompt assembly. Because the API key is used server-side, the risk is not direct key exfiltration from the client, but rather that the key’s presence and usage patterns become observable through LLM outputs, enabling reconnaissance for further attacks.
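For illustration, a minimal sketch of that vulnerable pattern, assuming a Feathers app instance and axios; the service path, endpoint, and model are hypothetical:
// Vulnerable sketch: raw user input is concatenated straight into the prompt,
// so instructions embedded in data.content reach the model unchecked.
const axios = require('axios');

app.use('/summary', {
  async create(data) {
    const prompt = `You are an internal summarizer. Summarize: ${data.content}`;
    const response = await axios.post('https://api.example.com/v1/completions', {
      prompt,
      max_tokens: 150
    }, {
      headers: { Authorization: `Bearer ${process.env.LLM_API_KEY}` } }
    );
    return { result: response.data.choices[0].text };
  }
});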
Additionally, if the FeathersJS application logs full requests and responses for debugging and those logs include LLM outputs, keys referenced indirectly (e.g., in naming conventions or metadata) can be exposed. The combination of a permissive authorization model (API keys with broad scopes) and insufficient input validation creates conditions where an attacker can indirectly infer sensitive configuration details through repeated interactions that elicit hallucinated responses containing key identifiers or usage context.
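The logging anti-pattern in question looks something like this (a sketch; the variable names are illustrative):
// Anti-pattern (do not do this): logging the full outbound LLM request.
// The Authorization header, and thus the API key, lands verbatim in the log,
// and the raw prompt may echo attacker-controlled text.
const logger = require('pino')({ level: 'debug' });
const promptTemplate = 'Summarize: ...'; // built from user input

logger.debug({
  url: 'https://api.example.com/v1/completions',
  headers: { Authorization: `Bearer ${process.env.LLM_API_KEY}` },
  prompt: promptTemplate
}, 'outbound llm request');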
In practice, an attacker may use verbose or deliberately error-inducing prompts to coax the model into revealing the format of authorization headers or the structure of backend calls. While the API key itself may not appear verbatim, the model's behavior can disclose timing, retry patterns, or endpoint paths that, when correlated with other vulnerabilities, elevate risk.
API Key-Specific Remediation in FeathersJS — concrete code fixes
Remediation centers on strict input validation, controlled prompt construction, and minimizing the exposure of key-related artifacts in logs and outputs. Below are concrete patterns for a FeathersJS service that calls an LLM using an API key.
1. Secure API key storage and usage
Store API keys in environment variables and reference them at runtime. Avoid hardcoding keys in service files.
// src/environment.js
const apiKey = process.env.LLM_API_KEY;

// Fail fast at startup rather than at the first LLM call
if (!apiKey) {
  throw new Error('LLM_API_KEY is required');
}

module.exports = { apiKey };
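If the variable comes from a file, keep that file out of version control. A common setup uses the dotenv package (an assumption; any equivalent loader works):
// src/app.js — load .env before anything reads process.env.
// The .env file itself (add it to .gitignore) contains a line such as:
//   LLM_API_KEY=your-provider-key
require('dotenv').config();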
2. Input validation and prompt sanitization
Use a validation library to sanitize user input before building prompts. Ensure that user data never directly influences system instructions or key metadata.
// src/hooks/validatePrompt.js
const validator = require('validator');
const { BadRequest } = require('@feathersjs/errors');

module.exports = function validatePrompt() {
  return async context => {
    const userInput = (context.data?.content || '').trim();
    // Strip control characters, then escape HTML special characters
    const cleanInput = validator.escape(validator.stripLow(userInput));
    if (!cleanInput) {
      throw new BadRequest('Invalid input');
    }
    context.data.cleanContent = cleanInput;
    return context;
  };
};
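To sanity-check the hook, invoke it directly against a mock context (a sketch; the sample input is arbitrary):
// Quick isolated check of the validatePrompt hook
const validatePrompt = require('./src/hooks/validatePrompt');

(async () => {
  const context = { data: { content: '  Ignore previous <b>instructions</b>  ' } };
  await validatePrompt()(context);
  // The value is trimmed, control characters are stripped, and HTML special
  // characters are entity-escaped before it ever reaches a prompt
  console.log(context.data.cleanContent);
})();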
3. Controlled LLM invocation with static prompts
Construct prompts server-side using static templates and inject only sanitized user content into designated slots. Do not concatenate raw user input into system messages.
// src/services/llm-summary/index.js
const axios = require('axios');
const { apiKey } = require('../../environment');
const logger = require('../../utils/logger'); // pino logger from step 4 below
const validatePrompt = require('../../hooks/validatePrompt');

module.exports = function setupLlmService(app) {
  // Register a service whose create method performs the LLM call
  app.use('/llm/summary', {
    async create(data) {
      // Static template; sanitized user content fills one designated slot
      const promptTemplate = `Summarize the following user content:\n"""${data.cleanContent}"""\nGuidelines: Keep the summary factual and concise.`;
      try {
        const response = await axios.post('https://api.example.com/v1/completions', {
          model: 'text-davinci-003',
          prompt: promptTemplate,
          max_tokens: 150
        }, {
          headers: { Authorization: `Bearer ${apiKey}` }
        });
        return { result: response.data.choices[0].text.trim() };
      } catch (error) {
        // Log a generic message; never the API key or the raw prompt
        logger.error('LLM request failed');
        throw new Error('Unable to process request');
      }
    }
  });

  app.service('/llm/summary').hooks({
    before: {
      create: [validatePrompt()]
    },
    after: {
      create: [async context => {
        // Ensure no key-like strings appear in the response sent to the client
        context.result.result = context.result.result.replace(/\b[A-Za-z0-9_-]{32,}\b/g, '[REDACTED]');
        return context;
      }]
    }
  });
};
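With Feathers' default REST transport on port 3030 (an assumption; adjust to your configuration), the service can be exercised like this:
// Client-side usage sketch (Node 18+ global fetch)
(async () => {
  const res = await fetch('http://localhost:3030/llm/summary', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content: 'Long article text to summarize...' })
  });
  const { result } = await res.json();
  console.log(result); // summary with token-like strings already redacted
})();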
4. Logging and error handling hygiene
Configure logging to exclude sensitive headers and keys. Use structured logging with explicit field allowlists.
// src/utils/logger.js
const pino = require('pino');

const logger = pino({
  level: 'info',
  redact: ['headers.authorization', 'query.api_key', 'body.api_key']
});

module.exports = logger;
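A quick way to confirm the redaction paths behave as intended; pino replaces matched paths with "[Redacted]" by default:
// Redaction smoke test for src/utils/logger.js
const logger = require('./src/utils/logger');

// The authorization value appears as "[Redacted]" in the emitted JSON line
logger.info({
  headers: { authorization: 'Bearer sk-example-not-a-real-key' }
}, 'incoming request');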
5. Runtime protection via hooks
Add a before hook that rejects requests containing known attack patterns (e.g., repeated system prompt probes).
// src/hooks/promptInjectionShield.js
const { BadRequest } = require('@feathersjs/errors');

module.exports = function promptInjectionShield() {
  return async context => {
    const userContent = (context.data?.content || '').toLowerCase();
    const suspiciousPatterns = ['system:', 'ignore previous', 'act as', 'api key', 'sk-', 'openai'];
    const matches = suspiciousPatterns.filter(p => userContent.includes(p));
    // Require multiple hits before rejecting, to limit false positives
    if (matches.length > 2) {
      throw new BadRequest('Suspicious input detected');
    }
    return context;
  };
};
Apply this hook alongside your validation hook for layered protection, as sketched below.
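One way to wire them together, assuming hooks are registered centrally rather than inside the service file from step 3 (paths follow the layout used above):
// src/app.js — layered before hooks on the summary service
const validatePrompt = require('./hooks/validatePrompt');
const promptInjectionShield = require('./hooks/promptInjectionShield');

app.service('/llm/summary').hooks({
  before: {
    // Reject obvious probes first, then sanitize whatever remains
    create: [promptInjectionShield(), validatePrompt()]
  }
});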
Related CWEs
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |