HIGH llm data leakageadonisjsmutual tls

Llm Data Leakage in Adonisjs with Mutual Tls

Llm Data Leakage in Adonisjs with Mutual Tls — how this specific combination creates or exposes the vulnerability

AdonisJS applications that terminate TLS at the framework level while also exposing LLM-related endpoints can unintentionally leak sensitive data when mutual TLS (mTLS) authentication is not fully enforced or is misconfigured. Even though AdonisJS does not perform certificate validation by default, mTLS requires both the client and server to present valid certificates. If an LLM endpoint is accessible without proper client certificate verification, an authenticated but unauthorized client may send crafted prompts or data intended for the LLM, and the server’s response may include sensitive information such as system prompts, PII, or even API keys embedded in model outputs.

LLM data leakage in this context occurs when the application does not validate the client certificate chain before processing the request, or when the LLM integration reuses a shared runtime context across requests. For example, if an AdonisJS route that calls an LLM does not explicitly verify the presented client certificate, an attacker with a valid but low-privilege certificate might exploit overly verbose error messages or debug endpoints to infer details about the system prompt or extract training data remnants from model responses. This risk is compounded if the LLM endpoint also exposes features like function calling or tool use, where structured output may inadvertently reveal internal schema or authorization logic.

The interaction between mTLS and LLM security is nuanced. Mutual TLS ensures transport-layer identity, but it does not automatically enforce application-level authorization for LLM actions. An AdonisJS route that checks for a client certificate but does not validate scopes, roles, or certificate metadata (such as CN or O fields) may still process malicious prompts designed to trigger jailbreaks, prompt injection, or cost exploitation. If the LLM response includes sensitive data and the server does not sanitize or restrict output based on the client’s authorization context, data leakage occurs. This is especially relevant when using middleware that passes certificate details to the controller without enforcing least-privilege access per LLM operation.

Real-world attack patterns include probing unauthenticated LLM endpoints in AdonisJS apps that mistakenly assume mTLS alone is sufficient. Using the five sequential LLM security probes — system prompt extraction, instruction override, DAN jailbreak, data exfiltration, and cost exploitation — an attacker can test whether the server returns model internals or private data when client certificates are present but not properly scoped. Findings from such scans often highlight missing property authorization and unsafe consumption of LLM outputs, where the application trusts model responses without validating whether the data should be exposed to the given client identity.

To detect these issues in practice, scanning tools that support LLM security testing can exercise these attack paths against an AdonisJS endpoint protected by mTLS but lacking fine-grained authorization. Such scans verify whether client certificates are strictly validated, whether LLM responses are inspected for PII or credentials, and whether the system prompt remains confidential. Remediation requires combining strict mTLS configuration with explicit authorization checks around LLM inputs and outputs, ensuring that certificate metadata is used to enforce context-aware access controls rather than relying on transport security alone.

Mutual Tls-Specific Remediation in Adonisjs — concrete code fixes

Securing an AdonisJS application with mutual TLS while preventing LLM data leakage requires explicit certificate validation, strict request-scoped authorization, and careful handling of LLM outputs. Below are concrete, syntactically correct examples demonstrating how to configure mTLS and integrate authorization checks for LLM endpoints.

First, configure the AdonisJS HTTPS server to request and verify client certificates. Use the built-in HTTPS module with requestCert and rejectUnauthorized set to true. This ensures that only clients with a trusted certificate chain can reach any route, including those that invoke LLM services.

// server.ts
import { httpsServer } from '@adonisjs/core'
import { join } from 'path'

export const server = httpsServer.create({
  key: join(__dirname, '../cert/server.key'),
  cert: join(__dirname, '../cert/server.crt'),
  ca: join(__dirname, '../cert/ca.crt'),
  requestCert: true,
  rejectUnauthorized: true,
})

Next, implement a route-specific authorization check that reads the client certificate from the request socket and validates required fields before allowing the LLM call to proceed. This prevents unauthorized clients with valid but insufficient certificates from accessing sensitive LLM functionality.

// routes/llm.ts
import Route from '@ioc:Adonis/Core/Route'
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

Route.post('/llm/query', async ({ request, response }: HttpContextContract) => {
  const socket = request.socket
  if (!socket.authorized) {
    return response.unauthorized({ error: 'Client certificate required' })
  }

  const cert = socket.getPeerCertificate()
  if (!cert.subject || !cert.issuer) {
    return response.forbidden({ error: 'Invalid client certificate' })
  }

  // Enforce least privilege: require specific CN or extended key usage
  const allowedCommonNames = ['llm-client-prod', 'llm-client-staging']
  if (!allowedCommonNames.includes(cert.subject.CN)) {
    return response.forbidden({ error: 'Unauthorized client identity' })
  }

  // Proceed with LLM call only after authorization checks
  const prompt = request.input('prompt')
  // Call your LLM service here, ensuring output is sanitized
  const result = await callLlmService(prompt)
  return response.ok({ result })
})

async function callLlmService(prompt: string): Promise {
  // Placeholder: integrate with your LLM provider
  return '[sanitized response]'
}

Finally, apply output-level controls when consuming LLM responses in AdonisJS. Even when mTLS and route authorization are correctly implemented, you must sanitize model outputs to prevent leakage of system prompts, PII, or API keys. Use strict schema validation and avoid exposing raw model responses directly to the client.

// services/llm.ts
export class LlmService {
  async generate(messages: Array<{ role: string; content: string }>): Promise {
    const raw = await this.callProvider(messages)
    // Apply output scanning: remove potential secrets, PII, or internal instructions
    return this.sanitize(raw)
  }

  private sanitize(text: string): string {
    // Simple redaction pattern for API keys and internal markers
    return text
      .replace(/\b[A-Z0-9]{32,}\b/g, '[REDACTED]')
      .replace(/<system>.*?<\/system>/gs, '[SYSTEM PROMPT HIDDEN]')
  }

  private async callProvider(messages: Array<{ role: string; content: string }>): Promise {
    // Integration code here
    return 'model output'
  }
}

These examples demonstrate how to combine transport-layer mTLS with application-level authorization and output handling to reduce the risk of LLM data leakage in AdonisJS. Remember that middleBrick scans can help verify whether your endpoints require client certificates and whether LLM responses expose sensitive information, providing prioritized findings and remediation guidance aligned with frameworks such as OWASP API Top 10.

Related CWEs: llmSecurity

CWE IDNameSeverity
CWE-754Improper Check for Unusual or Exceptional Conditions MEDIUM

Frequently Asked Questions

Does mutual TLS alone prevent LLM data leakage in AdonisJS?
No. Mutual TLS ensures client authentication at the transport layer but does not enforce application-level authorization for LLM operations. Without explicit certificate validation and scope checks, an authorized client can still trigger data leakage through overly verbose responses or unsafe consumption of model outputs.
How can I verify my AdonisJS LLM endpoints are not leaking system prompts or PII?
Use scanning tools that include LLM security probes, such as active prompt injection tests and output scanning for PII and API keys. Combine these with mTLS configuration reviews in AdonisJS to ensure client certificates are validated and that route-level authorization is enforced before invoking LLM services.