LLM Data Leakage in ASP.NET with Mutual TLS
LLM data leakage in an ASP.NET context with mutual TLS (mTLS) occurs when an application that uses client certificates to authenticate and encrypt traffic inadvertently exposes sensitive data to an LLM endpoint or receives unsafe LLM output. Even though mTLS protects transport integrity, the application layer can still leak information if it forwards user input, system prompts, or API responses to an LLM without proper safeguards. For example, an ASP.NET Core endpoint that accepts user queries, appends a system prompt, and calls an unauthenticated LLM endpoint may leak internal instructions or private data through the prompt or through the LLM’s response.
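That vulnerable shape can be sketched as a minimal endpoint. The route, gateway URL, client name, and embedded key below are illustrative placeholders, not from any real deployment:

```csharp
// Anti-pattern sketch: system prompt, internal data, and untrusted input share
// one string, and the model's reply is returned to the caller verbatim.
app.MapPost("/ask", async (HttpRequest request, IHttpClientFactory factory) =>
{
    var userQuery = await new StreamReader(request.Body).ReadToEndAsync();

    // Injection in userQuery can override these instructions or coax the
    // model into echoing them (and the embedded key) back.
    var prompt = "You are AcmeCorp support. Use internal key EXAMPLE_KEY when needed.\n"
               + userQuery;

    var client = factory.CreateClient("llm"); // assumed mTLS-configured client
    var llmReply = await client.PostAsJsonAsync("https://ai-gateway.internal/complete",
                                                new { prompt });

    // No scanning, no encoding: whatever the model says goes straight to the user.
    return Results.Text(await llmReply.Content.ReadAsStringAsync());
});
```

Note that nothing in this handler is wrong at the transport layer; the leak happens entirely in how the prompt is assembled and the response is relayed.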
With mTLS, the server verifies the client certificate and the client verifies the server certificate, reducing the risk of on-path attackers reading traffic. However, mTLS does not prevent the application from constructing malicious or overly permissive prompts, nor does it prevent the LLM from returning credentials, PII, or executable code. A typical scenario: an ASP.NET service uses mTLS to call an internal AI gateway, but the gateway forwards the request to an external LLM. If the gateway does not sanitize inputs or restrict the LLM’s capabilities, the LLM may reveal the system prompt via injection or produce sensitive data in its output. This is especially risky when the LLM endpoint is unauthenticated or when the application uses features like tool calls and function calling that increase the LLM’s agency.
An attack chain might involve an authenticated user submitting input that includes prompt injection payloads aimed at extracting the system prompt, overriding instructions, or exfiltrating data; a related abuse is cost exploitation, where crafted inputs inflate token usage and API spend. Because the application trusts the LLM response, it may render the LLM’s output directly to the user, exposing API keys, PII, or code snippets. This risk is amplified in ASP.NET apps that integrate LLMs for chat completions or code suggestions without validating or scanning LLM responses before they reach users.
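One narrow but effective guard against the rendering half of this chain is to HTML-encode model output before it reaches a page, so any markup the model emits displays as text instead of executing. A minimal sketch using the framework's built-in encoder (the helper name is illustrative):

```csharp
using System.Text.Encodings.Web;

// Encode before rendering: injected <script> tags or attributes display as
// inert text rather than executing in the user's browser.
static string SafeRender(string llmOutput) => HtmlEncoder.Default.Encode(llmOutput);

// SafeRender("<script>alert(1)</script>") yields "&lt;script&gt;alert(1)&lt;/script&gt;"
```

Encoding at the rendering boundary complements, but does not replace, scanning the response for secrets before it is returned at all.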
To illustrate, consider an ASP.NET Core controller that builds a completion request and sends it to an LLM. If the controller does not validate user input, it may allow an attacker to inject a series of probes that trick the LLM into revealing its instructions. Even with mTLS securing the channel between the ASP.NET app and the LLM host, the application must still implement input validation, output scanning, and strict prompt design to prevent leakage. The presence of mTLS should not create a false sense of security; it secures the pipe, not the content or the AI behavior.
Defenses include strict input validation, prompt sandboxing, disabling unnecessary LLM agency features like tool calls when not required, and scanning LLM outputs for sensitive patterns before returning them to users. For ASP.NET developers, this means treating LLM integration as an untrusted downstream service, applying the same rigor to prompt and response handling as to any other external API, and using mTLS to protect transport while implementing application-layer controls to prevent data leakage.
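The defenses above can be funneled through a single guarded wrapper so that no endpoint talks to the model directly. The sketch below is one way to structure this under assumed names (`GuardedLlmClient`, the injected transport delegate, and the 2000-character limit are all illustrative choices):

```csharp
using System;
using System.Threading.Tasks;
using System.Text.RegularExpressions;

// Every LLM round trip passes through input validation and output scanning.
public sealed class GuardedLlmClient
{
    private readonly Func<string, Task<string>> _complete; // mTLS transport injected here

    public GuardedLlmClient(Func<string, Task<string>> complete) => _complete = complete;

    public async Task<string> AskAsync(string userInput)
    {
        // 1. Bound and sanitize untrusted input before it goes near a prompt.
        if (string.IsNullOrWhiteSpace(userInput) || userInput.Length > 2000)
            throw new ArgumentException("Rejected: empty or oversized input.");
        var sanitized = Regex.Replace(userInput, @"[\x00-\x1F\x7F]", "");

        // 2. Call the model through the injected (mTLS-secured) transport.
        var response = await _complete(sanitized);

        // 3. Scan the reply before any caller sees it (email check as a stand-in
        //    for a fuller PII/secret scanner).
        return Regex.IsMatch(response, @"\b\w+@\w+\.\w+\b")
            ? "[response withheld: possible sensitive content]"
            : response;
    }
}
```

Centralizing the guard also gives you one place to log refusals and tune patterns, instead of re-implementing checks per endpoint.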
Mutual TLS-Specific Remediation in ASP.NET
Remediation focuses on configuring mTLS correctly in ASP.NET, hardening how the application interacts with LLMs, and ensuring that prompt and output handling do not leak data. Below are concrete steps and code examples for enabling mTLS and securing LLM integration.
Configure mTLS in ASP.NET Core
In Program.cs, configure Kestrel to require client certificates and validate them:
using System.Net.Security;
using System.Security.Authentication;
using System.Security.Cryptography.X509Certificates;
using Microsoft.AspNetCore.Server.Kestrel.Https;

var builder = WebApplication.CreateBuilder(args);
builder.WebHost.ConfigureKestrel(serverOptions =>
{
    serverOptions.ListenAnyIP(5001, listenOptions =>
    {
        listenOptions.UseHttps(httpsOptions =>
        {
            httpsOptions.ServerCertificate = new X509Certificate2("server.pfx", "password");
            httpsOptions.ClientCertificateMode = ClientCertificateMode.RequireCertificate;
            httpsOptions.SslProtocols = SslProtocols.Tls12 | SslProtocols.Tls13;
            // HttpsConnectionAdapterOptions has no AllowedCipherSuites property;
            // restrict cipher suites via OnAuthenticate instead. CipherSuitesPolicy
            // is supported on Linux/macOS but not on Windows (which uses OS policy).
            httpsOptions.OnAuthenticate = (connectionContext, sslOptions) =>
            {
                if (!OperatingSystem.IsWindows())
                {
                    sslOptions.CipherSuitesPolicy = new CipherSuitesPolicy(new[]
                    {
                        TlsCipherSuite.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
                        TlsCipherSuite.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
                    });
                }
            };
        });
    });
});
var app = builder.Build();
app.Use(async (context, next) =>
{
    var cert = context.Connection.ClientCertificate;
    if (cert == null)
    {
        context.Response.StatusCode = 400;
        await context.Response.WriteAsync("Client certificate required.");
        return;
    }
    // Optionally pin the client certificate by thumbprint or validate extended properties
    if (!cert.Thumbprint.Equals("EXPECTED_THUMBPRINT", StringComparison.OrdinalIgnoreCase))
    {
        context.Response.StatusCode = 403;
        await context.Response.WriteAsync("Invalid client certificate.");
        return;
    }
    await next();
});
app.MapGet("/", () => "Hello with mTLS.");
app.Run();
On the client side, provide the client certificate when making requests:
using System.Security.Cryptography.X509Certificates;

var handler = new HttpClientHandler();
handler.ClientCertificates.Add(new X509Certificate2("client.pfx", "password"));
handler.ClientCertificateOptions = ClientCertificateOption.Manual;
// WARNING: accepts any server certificate; acceptable only in development.
// Replace with a proper validation callback in production.
handler.ServerCertificateCustomValidationCallback =
    HttpClientHandler.DangerousAcceptAnyServerCertificateValidator;
using var client = new HttpClient(handler);
var response = await client.GetAsync("https://localhost:5001/");
Console.WriteLine(await response.Content.ReadAsStringAsync());
For Linux environments, ensure certificate stores are configured and files have appropriate permissions. In production, use a proper certificate validation callback instead of accepting any certificate.
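For instance, the callback can require that default chain validation passed and that the server certificate matches a pinned thumbprint. The sketch below is one way to do this; the thumbprint value is a placeholder:

```csharp
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

var handler = new HttpClientHandler
{
    ClientCertificateOptions = ClientCertificateOption.Manual
};
handler.ClientCertificates.Add(new X509Certificate2("client.pfx", "password"));

// Accept the server only if normal chain/hostname validation succeeded AND
// the certificate matches a pinned thumbprint.
handler.ServerCertificateCustomValidationCallback = (request, cert, chain, errors) =>
    errors == SslPolicyErrors.None
    && cert is not null
    && cert.Thumbprint.Equals("EXPECTED_SERVER_THUMBPRINT",
                              StringComparison.OrdinalIgnoreCase);

using var client = new HttpClient(handler);
```

Pinning adds operational overhead (the pin must be rotated with the certificate), so some teams pin the issuing CA instead of the leaf certificate.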
Harden LLM Interactions
When calling LLMs from ASP.NET, avoid forwarding raw user input as system prompts. Use parameterized prompts and validate/sanitize all inputs:
string BuildPrompt(string userInput)
{
    // Strip control characters and DEL so injected newlines cannot fake new chat turns
    var sanitized = System.Text.RegularExpressions.Regex.Replace(userInput, @"[\x00-\x1F\x7F]", "");
    return $"You are a helpful assistant. Answer the question concisely. User: {sanitized}";
}
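Beyond character sanitization, keeping the system prompt and user input in separate chat messages, rather than one concatenated string, limits how far an injection can reach. A hedged sketch of the request shape many chat-completion APIs accept (the model name is a placeholder; adapt the fields to your provider's actual schema):

```csharp
// Role separation: the system prompt never shares a string with user input.
object BuildChatRequest(string sanitizedUserInput) => new
{
    model = "example-model", // placeholder model name
    messages = new object[]
    {
        new { role = "system", content = "You are a helpful assistant. Answer concisely." },
        new { role = "user", content = sanitizedUserInput }
    },
    max_tokens = 256 // bound output size to limit exfiltration bandwidth per request
};
```

Role separation is not a complete defense against injection, but it removes the trivial case where user text is indistinguishable from instructions.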
Do not enable tool calls or function calling unless necessary, and if used, restrict the functions to safe, well-scoped operations. Scan LLM outputs for PII, API keys, and code before returning them to the client. For example:
bool ContainsSensitiveData(string content)
{
    // Simple pattern checks -- extend with libraries or regexes for API keys, emails, etc.
    return System.Text.RegularExpressions.Regex.IsMatch(content, @"\b[A-Za-z0-9+/=]{40}\b") // API key-like
        || System.Text.RegularExpressions.Regex.IsMatch(content, @"\b\w+@\w+\.\w+\b");      // Email
}

// llmResponse is the string returned by the model
if (ContainsSensitiveData(llmResponse))
{
    // Handle safely: log, redact, or return a generic message
}
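When a match is found, redaction is often preferable to withholding the whole response. A minimal sketch that masks matches in place, reusing the same deliberately simple patterns (extend them for real use; the function name is illustrative):

```csharp
using System.Text.RegularExpressions;

// Replace likely secrets with placeholders instead of dropping the whole response.
string RedactSensitiveData(string content)
{
    var redacted = Regex.Replace(content, @"\b[A-Za-z0-9+/=]{40}\b", "[REDACTED_KEY]");
    redacted = Regex.Replace(redacted, @"\b\w+@\w+\.\w+\b", "[REDACTED_EMAIL]");
    return redacted;
}
```

Redaction keeps the assistant useful while still denying the attacker the payload; log the original match server-side so incidents remain auditable.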
Use the middleBrick CLI to scan your API endpoints for common misconfigurations and to verify that your application’s attack surface is properly constrained. For teams integrating into CI/CD, the GitHub Action can enforce security score thresholds before deployment. In development, the MCP Server allows you to scan APIs directly from your AI coding assistant, helping you maintain secure patterns while building integrations.
Related CWEs
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |