LLM Data Leakage in AdonisJS with Basic Auth
LLM Data Leakage in AdonisJS with Basic Auth — how this specific combination creates or exposes the vulnerability
When an AdonisJS application uses HTTP Basic Authentication and exposes endpoints that return or process large language model (LLM) data, sensitive information can leak to an external LLM service or surface in LLM outputs. This happens when requests carry credentials in headers while responses contain prompts, system instructions, or generated text that an external LLM endpoint may capture, or that is logged in a way that exposes sensitive content.
In the context of middleBrick’s LLM/AI Security checks, an unauthenticated scan can detect scenarios where an endpoint returns data that resembles prompts intended for an LLM, such as system messages or user instructions that include credentials, tokens, or other sensitive context. Even when Basic Auth protects the endpoint, if the application logic embeds authentication details or sensitive business logic into request bodies, query parameters, or response payloads, those artifacts may be exposed to LLM inference paths or training data collection.
For example, an endpoint that accepts user input to generate a system prompt for an LLM might inadvertently echo the Authorization header or session-derived values into the prompt. If the response is forwarded to an LLM service, the credentials or internal logic could be included in the LLM’s context, creating a leakage path. middleBrick’s system prompt leakage detection uses 27 regex patterns tailored to ChatML, Llama 2, Mistral, and Alpaca formats to identify such exposures in responses, while active prompt injection tests probe for system prompt extraction and data exfiltration through LLM endpoints.
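To make that leakage path concrete, here is a minimal sketch of the anti-pattern just described; the controller and the callLlmService client are hypothetical illustrations, not code from any real application:

// ANTI-PATTERN (hypothetical sketch): the raw Authorization header is
// interpolated into the prompt, so the upstream LLM service and any of
// its logs receive the Basic Auth credentials verbatim.
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class LeakyLlmController {
  public async generate({ request, response }: HttpContextContract) {
    const userInput = request.input('prompt')

    // BAD: credentials become part of the LLM context.
    const prompt = `System: acting on behalf of ${request.header('authorization')}\nUser: ${userInput}`

    return response.ok({ response: await callLlmService(prompt) })
  }
}

declare function callLlmService(prompt: string): Promise<string>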
Additionally, Basic Auth credentials are base64-encoded but not encrypted; if the application logs raw request headers or if LLM tooling captures output containing these values, the credentials can be exfiltrated. The tool checks for PII, API keys, and executable code in LLM responses, and flags outputs that contain authentication tokens or internal identifiers. This is especially relevant when AdonisJS routes pass user-controlled data into LLM generation pipelines without sanitizing sensitive context.
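Because Base64 is an encoding rather than encryption, anyone who can read a logged header recovers the credentials in two lines. The header value below is the well-known RFC 7617 example ("Aladdin:open sesame"):

// Decoding a captured Basic Auth header — no key or secret required.
const loggedHeader = 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=='
const [username, password] = Buffer.from(loggedHeader.split(' ')[1], 'base64')
  .toString('utf-8')
  .split(':')
console.log(username, password) // "Aladdin" "open sesame"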
Another vector involves excessive agency patterns where an AdonisJS service integrates with LangChain-like agent frameworks or tool-calling workflows. If an LLM endpoint is reachable without authentication or if the application exposes internal function schemas, the LLM may attempt to invoke unintended operations, escalating the impact of a data leak. middleBrick’s LLM/AI Security checks include excessive agency detection and unauthenticated LLM endpoint detection to surface these risks.
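One concrete guard against excessive agency is an explicit allowlist for tool invocations. The sketch below is hypothetical; the tool names and the executeTool dispatcher are illustrative rather than part of any specific agent framework:

// Hypothetical sketch: refuse any tool call the application did not
// explicitly allowlist, so a prompt-injected model cannot reach internal
// functions whose schemas were never meant to be exposed.
const ALLOWED_TOOLS = new Set(['searchDocs', 'summarizeText'])

type ToolCall = { name: string; args: Record<string, unknown> }

async function dispatchToolCall(call: ToolCall): Promise<unknown> {
  if (!ALLOWED_TOOLS.has(call.name)) {
    throw new Error(`Tool "${call.name}" is not permitted`)
  }
  return executeTool(call) // placeholder dispatcher
}

declare function executeTool(call: ToolCall): Promise<unknown>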
Because middleBrick performs black-box scanning without credentials, it can identify whether an AdonisJS endpoint that uses Basic Auth still leaks sensitive constructs in LLM-facing outputs, enabling developers to understand exposure paths that are not visible through traditional API testing alone.
Basic Auth-Specific Remediation in AdonisJS — concrete code fixes
To reduce LLM data leakage risk when using Basic Authentication in AdonisJS, ensure credentials and sensitive context are never embedded in responses, logs, or LLM prompts. Use middleware to strip or sanitize headers before they reach LLM-related logic, and avoid passing raw Authorization values into prompt templates.
Below are concrete code examples for secure Basic Auth handling in AdonisJS.
1. Secure Basic Auth middleware that validates credentials without exposing them
// app/Middleware/BasicAuth.ts
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'
import Hash from '@ioc:Adonis/Core/Hash'
import User from 'App/Models/User'

// Declare the custom `authUser` property so TypeScript accepts it on the request.
declare module '@ioc:Adonis/Core/Request' {
  interface RequestContract {
    authUser?: { id: number; username: string }
  }
}

export default class BasicAuth {
  public async handle({ request, response }: HttpContextContract, next: () => Promise<void>) {
    const authHeader = request.header('authorization')
    if (!authHeader || !authHeader.startsWith('Basic ')) {
      response.header('WWW-Authenticate', 'Basic realm="api"')
      return response.unauthorized({ message: 'Missing or invalid authorization header' })
    }

    const decoded = Buffer.from(authHeader.split(' ')[1], 'base64').toString('utf-8')
    const separatorIndex = decoded.indexOf(':')
    if (separatorIndex === -1) {
      return response.unauthorized({ message: 'Malformed credentials' })
    }
    const username = decoded.slice(0, separatorIndex)
    const password = decoded.slice(separatorIndex + 1) // passwords may legally contain ':'

    // Hash.verify performs a timing-safe comparison against the stored hash;
    // never compare password strings directly.
    const user = await User.findBy('username', username)
    if (!user || !(await Hash.verify(user.password, password))) {
      return response.unauthorized({ message: 'Invalid credentials' })
    }

    // Attach minimal user context only; never attach the raw credentials.
    request.authUser = { id: user.id, username: user.username }
    await next()
  }
}
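To put the middleware in play, register it under a name of your choosing (basicAuth here) using AdonisJS 5's standard named-middleware registration:

// start/kernel.ts
import Server from '@ioc:Adonis/Core/Server'

// Named middleware can then be attached per-route via .middleware('basicAuth')
Server.middleware.registerNamed({
  basicAuth: () => import('App/Middleware/BasicAuth'),
})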
2. Sanitizing data before LLM generation to prevent leakage
// app/Controllers/Http/LlmController.ts
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'
import { sanitizeForLlm } from 'App/Utils/LlmSanitizer'

export default class LlmController {
  public async generate({ request, response }: HttpContextContract) {
    const userInput = request.input('prompt')
    const user = request.authUser

    // Remove or mask sensitive values before anything reaches the LLM.
    const safeInput = sanitizeForLlm({
      prompt: userInput,
      userId: user?.id,
      username: user?.username,
    })

    // Call the LLM service with the sanitized prompt only.
    // (callLlmService stands in for your actual LLM client.)
    const llmResponse = await callLlmService(safeInput.prompt)
    return response.ok({ response: llmResponse })
  }
}

declare function callLlmService(prompt: string): Promise<string>
// app/Utils/LlmSanitizer.ts
export function sanitizeForLlm(data: { prompt: string; userId?: number; username?: string }) {
  const { prompt, username } = data

  // User identifiers are deliberately dropped: nothing but the redacted
  // prompt is ever forwarded to the LLM.
  if (!username) {
    // Guard: new RegExp('', 'g') matches at every position and would
    // insert '[REDACTED]' between every character of the prompt.
    return { prompt }
  }

  // Escape regex metacharacters so usernames like "a.b+c" match literally.
  const escaped = username.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
  return { prompt: prompt.replace(new RegExp(escaped, 'g'), '[REDACTED]') }
}
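A short route-wiring sketch ties the middleware and controller together; the path and middleware name simply follow the examples above:

// start/routes.ts
import Route from '@ioc:Adonis/Core/Route'

// Only authenticated callers reach the sanitizing controller.
Route.post('/api/llm/generate', 'LlmController.generate').middleware('basicAuth')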
3. Keeping sensitive headers out of logs and LLM pipelines on LLM routes
// app/Middleware/StripAuthHeaderForLlm.ts
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class StripAuthHeaderForLlm {
  public async handle({ request }: HttpContextContract, next: () => Promise<void>) {
    // Register this AFTER authentication has run. Deleting the header from
    // the underlying Node.js request means downstream code that logs or
    // forwards raw headers (including anything feeding an LLM pipeline)
    // never sees the Basic Auth credentials.
    if (request.url().startsWith('/api/llm')) {
      delete request.request.headers['authorization']
    }
    await next()
  }
}
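For defense in depth, header redaction can also be enforced at the logging layer. AdonisJS's logger is built on Pino, which supports path-based redaction; the excerpt below is a sketch that assumes your app's logger config (typically in config/app.ts in an AdonisJS 5 project) forwards the redact option through to Pino:

// config/app.ts (logger excerpt) — assumes `redact` is passed through to
// Pino, which censors the listed paths in every log line.
export const logger = {
  name: 'adonis-app',
  enabled: true,
  level: 'info',
  redact: {
    paths: ['req.headers.authorization'], // never persist the Basic Auth header
    censor: '[REDACTED]',
  },
}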
These patterns ensure that Basic Auth credentials are validated server-side, are never echoed into LLM prompts, and are excluded from logs or outputs that an LLM service might inspect. They align with middleBrick’s findings by addressing system prompt leakage, PII in LLM outputs, and unsafe consumption patterns specific to authenticated API integrations.
Related CWEs (LLM Security)
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |