Prompt Injection in Adonisjs with Mutual Tls
Prompt Injection in Adonisjs with Mutual Tls
AdonisJS is a Node.js web framework commonly used to build API and web applications. When an AdonisJS application exposes an endpoint that forwards user input to an LLM, and also uses Mutual TLS (mTLS) for client authentication, the combination can expose prompt injection risks that are specific to the application’s routing and authentication layer.
In this scenario, mTLS ensures that only clients possessing a valid certificate are allowed to reach the route handler. The server validates the client certificate during the TLS handshake and typically maps the certificate’s subject or serial number to an identity (e.g., a user or service). If the developer then directly uses trusted request metadata—such as the certificate-derived identity or headers set by the mTLS layer—as context for the LLM prompt, an attacker who can present a valid certificate may still inject instructions through controllable input fields (e.g., query parameters, JSON body, or headers that are not covered by mTLS).
For example, an mTLS-protected route in AdonisJS might read the client certificate information from the request socket and embed it into the system prompt. A user-controlled query parameter could then be concatenated into the user message without proper sanititization or separation, enabling an attacker with a valid certificate to shift the model behavior via crafted input. This is a classic prompt injection vector that leverages trusted metadata to appear authorized while still manipulating the LLM through the application’s own logic. The risk is not in breaking mTLS, but in the application’s handling of combined trusted and untrusted signals when constructing prompts.
Consider an endpoint that builds a prompt like: system prompt includes the authenticated client’s organization (from the mTLS certificate), and the user message is taken directly from request input. If the user message is not isolated from the system instructions, an injection attack can cause the model to ignore or rewrite the organization context, potentially leading to unauthorized data access or incorrect processing logic. Because mTLS operates at the transport layer, developers may mistakenly assume that all downstream inputs are safe, which increases the likelihood of insecure prompt design.
The LLM/AI Security checks provided by middleBrick detect such patterns by analyzing the unauthenticated attack surface and identifying places where user input may influence LLM instructions, even when strong transport-layer authentication like mTLS is in place. These checks include active prompt injection testing—such as system prompt extraction and instruction override probes—as well as output scanning for sensitive data and detection of excessive agency patterns. By correlating these findings with an OpenAPI/Swagger spec analysis, the scanner can highlight risky integrations between authenticated routes and LLM calls, helping developers understand how prompt injection can manifest in an mTLS-enabled AdonisJS application.
Mutual Tls-Specific Remediation in Adonisjs
To reduce prompt injection risk in an AdonisJS application using mTLS, design the prompt architecture so that trusted metadata and user-controlled input are strictly separated and never composed into a single prompt stream without clear boundaries and sanitization. Apply the principle of least privilege to certificate-based identity usage, and avoid directly embedding certificate attributes into system instructions.
Below are concrete code examples showing how to implement mTLS in AdonisJS and structure prompts safely.
Mutual TLS setup in AdonisJS
Configure the HTTPS server to request and validate client certificates. The example below uses the built-in HTTPS server with an AdonisJS provider.
// start/hooks.ts
import { HttpServerHook } from '@adonisjs/core/types'
export const serverHooks: HttpServerHook = {
async beforeRequest({ request, response }) {
const tlsSocket = request.socket as any
if (tlsSocket.authorized) {
const cert = tlsSocket.getPeerCertificate()
request.ctx.clientCert = {
subject: cert.subject,
serialNumber: cert.serialNumber,
}
} else {
response.status(400).send('Client certificate required')
}
},
}
In server.ts, enable client certificate verification:
// server.ts
import { defineConfig } from '@adonisjs/core/app'
import { HttpServer } from '@adonisjs/core/server'
import { serverHooks } from './start/hooks'
export default defineConfig({
https: {
key: '/path/to/server.key',
cert: '/path/to/server.crt',
ca: '/path/to/ca.crt',
requestCert: true,
rejectUnauthorized: true,
},
hooks: {
beforeRequest: serverHooks.beforeRequest,
},
})
Prompt construction best practices
Keep the system prompt deterministic and avoid concatenating raw user input into instructions. Use user input only in the user message segment and enforce strict role separation.
// Example controller method
import { HttpContext } from '@adonisjs/core'
export default class ChatController {
async generate({ request, ctx }: HttpContext) {
const userMessage = request.input('message')
const clientIdentity = ctx.clientCert?.subject
// Safe: system prompt uses trusted metadata, user message is isolated
const systemPrompt = `You are assisting client ${clientIdentity}. Follow policies and do not reveal internal details.`
const userPrompt = `Answer the following: ${userMessage}`
// Call LLM with separated prompts (pseudo-API)
const result = await llm.complete({
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: userPrompt },
],
})
return { reply: result.content }
}
}
Additionally, apply input validation and output scanning to detect any signs of prompt manipulation or data leakage. middleBrick’s LLM/AI Security checks can surface these issues early by testing prompt injection vectors and scanning LLM responses for sensitive information.
Related CWEs: llmSecurity
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |