LLM Data Leakage in AdonisJS with JWT Tokens
LLM Data Leakage in AdonisJS with JWT Tokens — how this specific combination creates or exposes the vulnerability
When AdonisJS applications expose JWT handling endpoints or debugging routes to unauthenticated LLM interfaces, sensitive data can leak through model outputs. This happens when route handlers, error messages, or introspection endpoints return token contents, signing keys, or user identifiers in a form an LLM can extract or infer. middleBrick’s LLM/AI Security checks test for system prompt leakage and scan model outputs, detecting whether JWT-related data appears in responses that are reachable without authentication.
In AdonisJS, developers often use packages like adonisjs/jwt to generate and verify tokens for authentication. If a route such as /debug/token echoes the decoded payload in verbose error messages or stack traces, an LLM-powered attacker can coax the endpoint into revealing claims like user IDs or roles via prompt injection techniques. For example, crafted prompts can trigger verbose error responses that include token payloads, especially when debug mode is enabled or when custom exception handlers propagate raw token data into JSON responses.
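The core of this leak can be reproduced outside the framework: a compact JWT's payload is just base64url-encoded JSON, so any handler that echoes the decoded segment reveals every claim without needing the signing key. A minimal sketch of the anti-pattern (the handler shape and token are hypothetical, not taken from any real application):

```ts
// Anti-pattern sketch: a "diagnostic" handler that reflects the decoded
// JWT payload in its error response, leaking every claim to the caller.
function decodeJwtPayload(token: string): Record<string, unknown> {
  // A compact JWT is header.payload.signature; the payload is just
  // base64url-encoded JSON, so no secret is needed to read it
  const payloadSegment = token.split('.')[1]
  return JSON.parse(Buffer.from(payloadSegment, 'base64url').toString('utf8'))
}

function leakyDebugHandler(token: string) {
  // Signature verification is skipped; the payload is echoed verbatim
  return {
    error: 'token verification failed',
    decoded: decodeJwtPayload(token), // <-- claims leak to any caller
  }
}

// Build an example token to show what leaks (unsigned, for illustration)
const b64 = (o: object) => Buffer.from(JSON.stringify(o)).toString('base64url')
const token = `${b64({ alg: 'HS256', typ: 'JWT' })}.${b64({ sub: '42', role: 'admin' })}.sig`
console.log(leakyDebugHandler(token).decoded) // { sub: '42', role: 'admin' }
```

Any caller — human or LLM-driven — that can reach such a handler reads the user ID and role directly from the response body.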
The risk is compounded when OpenAPI specs or runtime endpoints expose token metadata, because middleBrick’s OpenAPI/Swagger analysis correlates spec definitions with runtime findings. If the spec documents a token introspection route without proper authentication requirements, and the runtime returns JWT payloads in clear text, this becomes a high-severity data exposure finding. The scanner’s active prompt injection probes—such as system prompt extraction and data exfiltration—can trigger these leaks by requesting token details through adversarial inputs that bypass intended access controls.
Additionally, improper logging practices in AdonisJS can cause JWT contents to be written to application logs, which may be exposed through log aggregation interfaces or error reporting tools accessible to LLM-based tooling. middleBrick’s output scanning checks for PII and API keys in LLM responses, identifying whether token values, signing secrets, or user identifiers appear in chat completions or tool outputs. Real-world patterns include routes that return full token payloads for troubleshooting, or middleware that attaches decoded tokens to request objects in a way that becomes visible through AI-assisted debugging sessions.
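The logging risk described above can be reduced with a redaction pass before messages reach any sink. A framework-free sketch (the regex and function name are our own, not an AdonisJS or middleBrick API):

```ts
// Redact anything shaped like a compact JWT (three dot-separated
// base64url segments) before a string is handed to a logger or reporter
const JWT_PATTERN = /\b[\w-]{10,}\.[\w-]{10,}\.[\w-]+\b/g

function redactJwts(message: string): string {
  return message.replace(JWT_PATTERN, '[REDACTED_JWT]')
}

// Example: a token embedded in a log line is masked, the rest survives
const line = 'auth failed for token aaaaaaaaaaaa.bbbbbbbbbbbb.cccc from 10.0.0.5'
console.log(redactJwts(line)) // auth failed for token [REDACTED_JWT] from 10.0.0.5
```

Applying this in a single logging wrapper keeps token values out of aggregation interfaces and error reporters regardless of which route produced the message.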
To illustrate, consider an AdonisJS route that decodes a JWT and returns claims for diagnostic purposes. If this route is reachable without authentication and returns structured JSON containing the payload, an LLM can extract sensitive information through simple conversational probing. middleBrick’s LLM/AI Security checks run sequential probes including instruction override and DAN jailbreak attempts to determine whether token data can be exfiltrated. Findings are mapped to frameworks such as OWASP API Top 10 and GDPR, highlighting the need to restrict debug endpoints and ensure tokens are never reflected in responses accessible to untrusted parties.
JWT-Specific Remediation in AdonisJS — concrete code fixes
Remediation focuses on ensuring JWT tokens and their payloads are never exposed in API responses or logs accessible to LLMs. In AdonisJS, use built-in guards and response filters to prevent accidental leakage. Always require authentication for routes that handle or introspect tokens, and avoid returning decoded payloads in debug or error responses.
First, configure protected routes using AdonisJS authentication guards so that token introspection or debug endpoints are inaccessible without a valid session. For example, apply the auth middleware to ensure only authenticated requests can access sensitive routes:
```ts
// start/routes.ts
import Route from '@ioc:Adonis/Core/Route'

Route.get('/debug/token', async ({ auth }) => {
  const user = await auth.authenticate()
  // Only return non-sensitive metadata, never the full token
  return { userId: user.id, role: user.role }
}).middleware(['auth'])
```
Second, sanitize error messages and avoid echoing token contents in exception handlers. Customize the exception handler to strip sensitive fields before logging or returning errors:
```ts
// app/Exceptions/Handler.ts
import Logger from '@ioc:Adonis/Core/Logger'
import HttpExceptionHandler from '@ioc:Adonis/Core/HttpExceptionHandler'
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class ExceptionHandler extends HttpExceptionHandler {
  constructor() {
    super(Logger)
  }

  public async handle(error: any, ctx: HttpContextContract) {
    // Never echo token details back to the client
    if (typeof error.message === 'string' && error.message.toLowerCase().includes('jwt')) {
      return ctx.response.status(error.status ?? 401).send({ error: 'Authentication error' })
    }
    return super.handle(error, ctx)
  }
}
```
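An allow-list over error fields is a stricter variant of the message check above, and it is easy to unit-test outside the framework. The field names below are a hypothetical choice, not an AdonisJS convention:

```ts
// Allow-list serialization: copy only known-safe fields into the response
// body, so decoded payloads, raw tokens, or stack traces cannot leak
const SAFE_ERROR_FIELDS = ['status', 'code'] as const

function toSafeErrorBody(error: Record<string, unknown>) {
  const body: Record<string, unknown> = { error: 'Authentication error' }
  for (const field of SAFE_ERROR_FIELDS) {
    if (field in error) {
      body[field] = error[field]
    }
  }
  return body
}
```

Given `{ status: 401, decodedToken: {...}, stack: '...' }`, this yields only `{ error, status }` — the allow-list fails closed, unlike a deny-list that must anticipate every sensitive field.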
Third, ensure JWT payloads are not included in logs that could be accessed by LLM tooling. Use structured logging that excludes sensitive claims:
```ts
// app/Controllers/Http/TokenController.ts
import Logger from '@ioc:Adonis/Core/Logger'
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class TokenController {
  public async introspect({ request }: HttpContextContract) {
    const token = request.header('authorization')?.replace('Bearer ', '')
    // Log only metadata, never the token or payload
    // (pino-style logger: merging object first, message second)
    Logger.info({ hasToken: !!token }, 'Token introspection attempted')
    return { valid: !!token }
  }
}
```
Finally, validate and restrict input to prevent prompt injection attempts that could coerce token leakage. Use strict schema validation on incoming requests and avoid dynamic code execution that might expose token handling logic to LLMs:
```ts
// app/Validators/TokenValidator.ts
import { schema, rules } from '@ioc:Adonis/Core/Validator'

// There is no built-in 'format:jwt' rule; use a regex for the three
// base64url segments of a compact JWT instead (strings are required
// by default, so no explicit 'required' rule is needed)
export const tokenSchema = schema.create({
  token: schema.string({ trim: true }, [
    rules.regex(/^[\w-]+\.[\w-]+\.[\w-]+$/),
  ]),
})
```
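The structural part of that validation can be exercised on its own. A minimal framework-free shape check (the regex and name are ours, and it deliberately does not verify the signature):

```ts
// Structural check only: three dot-separated base64url segments.
// It rejects malformed input early but proves nothing about authenticity;
// signature verification must still happen in the auth layer.
const COMPACT_JWT = /^[\w-]+\.[\w-]+\.[\w-]+$/

function looksLikeJwt(value: string): boolean {
  return COMPACT_JWT.test(value)
}
```

Rejecting malformed input before it reaches token-handling code shrinks the surface that adversarial prompts can probe for verbose error output.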
By combining authentication guards, sanitized error handling, careful logging, and input validation, AdonisJS applications can mitigate LLM data leakage risks associated with JWT tokens. These practices align with the findings and remediation guidance provided by tools like middleBrick Pro, which supports continuous monitoring and integration with CI/CD pipelines to enforce secure token handling across deployments.
Related CWEs:
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |