HIGH prompt injectionaspnetbasic auth

Prompt Injection in Aspnet with Basic Auth

Prompt Injection in Aspnet with Basic Auth — how this specific combination creates or exposes the vulnerability

Prompt injection in an ASP.NET API context refers to an attacker influencing the behavior of an LLM endpoint by injecting crafted text into inputs that are later included in prompts. When an ASP.NET API uses Basic Authentication, credentials are typically sent in the Authorization header as a base64-encoded string (e.g., Authorization: Basic base64(username:password)). While Basic Auth itself does not directly interact with LLM prompts, its usage patterns can inadvertently contribute to an expanded attack surface that facilitates prompt injection in two ways.

First, if the ASP.NET application logs or echoes the Authorization header for debugging or telemetry, an attacker may leverage a prompt injection probe to exfiltrate those credentials by embedding them into LLM outputs. For example, a crafted request might include a payload designed to trick the LLM into repeating sensitive headers in its response, thereby exposing the base64-encoded credentials or derived information. This aligns with middleBrick’s active prompt injection testing, which includes system prompt extraction and data exfiltration probes.

Second, Basic Auth is commonly used for unauthenticated or weakly authenticated endpoints in development or legacy services. If such endpoints also expose an LLM interface without proper isolation, an attacker can submit malicious inputs that reach the LLM processing layer. Because the endpoint does not enforce strong authentication, the LLM may operate with elevated assumptions about trust, making it more susceptible to jailbreak attempts and instruction overrides. middleBrick’s LLM/AI security checks detect unauthenticated LLM endpoints and test for system prompt leakage, instruction override, DAN jailbreak, and data exfiltration, highlighting how weak authentication can amplify prompt injection risks.

In an ASP.NET implementation that integrates LLM capabilities, failing to sanitize user-supplied input before constructing prompts can allow an attacker to manipulate the prompt structure. For instance, user-controlled data concatenated directly into a prompt template might include delimiters or instructions that alter the intended behavior. When combined with Basic Auth, where identity is static and easily decoded, the impact of a successful prompt injection can be more severe, as attackers may explore paths to escalate privileges or infer internal system behavior through the LLM’s responses.

Basic Auth-Specific Remediation in Aspnet — concrete code fixes

To mitigate prompt injection risks in ASP.NET when using Basic Authentication, focus on strict input validation, avoiding credential leakage, and isolating LLM interactions from authentication logic. The following code examples demonstrate secure practices that align with the remediation guidance provided by middleBrick’s findings.

Secure Basic Authentication Header Parsing

Always validate and sanitize any data derived from headers before including it in prompts. Do not log or echo the Authorization header. Instead, extract the credentials securely and discard them after authentication.

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text.Encodings.Web;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

public class BasicAuthOptions : AuthenticationSchemeOptions { }

public class BasicAuthHandler : AuthenticationHandler<BasicAuthOptions>
{
    protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        if (!Request.Headers.ContainsKey("Authorization"))
            return AuthenticateResult.Fail("Missing Authorization Header");

        var authHeader = Request.Headers["Authorization"].ToString();
        if (!authHeader.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
            return AuthenticateResult.Fail("Invalid Authorization Header");

        var token = authHeader.Substring("Basic ".Length).Trim();
        var credentialBytes = Convert.FromBase64String(token);
        var credentials = System.Text.Encoding.UTF8.GetString(credentialBytes).Split(':', 2);
        var username = credentials[0];
        var password = credentials.Length > 1 ? credentials[1] : string.Empty;

        // Validate credentials against your user store (example omitted)
        if (!IsValidUser(username, password))
            return AuthenticateResult.Fail("Invalid Credentials");

        var claims = new[] { new Claim(ClaimTypes.Name, username) };
        var identity = new ClaimsIdentity(claims, Scheme.Name);
        var principal = new ClaimsPrincipal(identity);
        var ticket = new AuthenticationTicket(principal, Scheme.Name);

        // Important: Do not log authHeader or token
        return AuthenticateResult.Success(ticket);
    }

    private bool IsValidUser(string username, string password)
    {
        // Implement secure credential validation
        return username == "validUser" && password == "validPass";
    }
}

Isolating LLM Prompt Construction from Authentication

Ensure that user input provided to LLM endpoints is sanitized and never directly concatenated with authentication-derived data. Use parameterized prompts and treat all external input as untrusted.

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class LlmService
{
    private readonly HttpClient _httpClient;

    public LlmService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<string> GetLlmResponseAsync(string userInput)
    {
        // Sanitize and validate userInput before using it in a prompt
        var safeInput = SanitizeInput(userInput);

        var prompt = $"Analyze the following input and provide a summary: {safeInput}";
        var payload = new { prompt };
        var json = JsonSerializer.Serialize(payload);
        var content = new StringContent(json, Encoding.UTF8, "application/json");

        // Call LLM endpoint securely
        var response = await _httpClient.PostAsync("https://api.example.com/llm", content);
        response.EnsureSuccessStatusCode();

        var responseBody = await response.Content.ReadAsStringAsync();
        // Process response, ensuring no PII or secrets are exposed
        return responseBody;
    }

    private string SanitizeInput(string input)
    {
        // Implement input validation and stripping of dangerous characters
        return input.Replace("\n", " ").Replace("\r", " ").Trim();
    }
}

Operational Practices

  • Do not include Authorization headers or Basic Auth tokens in logs or LLM prompts.
  • Apply middleBrick’s LLM/AI security checks during development to detect system prompt leakage and jailbreak attempts.
  • Use middleware to reject requests with malformed Authorization headers before they reach LLM processing code.

Related CWEs: llmSecurity

CWE IDNameSeverity
CWE-754Improper Check for Unusual or Exceptional Conditions MEDIUM

Frequently Asked Questions

Can Basic Auth headers be exposed through LLM outputs?
Yes, if the application logs or echoes headers into prompts. Always sanitize and avoid including Authorization data in LLM inputs or outputs; use middleBrick’s data exfiltration tests to detect this risk.
Does middleBrick test for prompt injection when Basic Auth is present?
middleBrick’s active prompt injection testing includes system prompt extraction and data exfiltration probes. It checks whether authentication context influences LLM behavior and reports findings with remediation guidance.