Severity: HIGH

Prompt Injection in Actix with Basic Auth

Prompt Injection in Actix with Basic Auth — how this specific combination creates or exposes the vulnerability

When an Actix-web service that uses HTTP Basic Auth exposes an unauthenticated LLM endpoint, the combination can amplify prompt injection risks. Basic Auth protects the HTTP layer, but if the Actix handler forwards user-controlled input directly into a prompt sent to an LLM, the authentication boundary does not stop malicious payloads from reaching the model. An attacker can inject instructions via query parameters, headers, or JSON fields that the Actix route accepts, and those inputs may be concatenated into system or user messages without validation.

In this scenario, the LLM security check for prompt injection becomes relevant. middleBrick runs active probes designed to extract system prompts, override instructions, execute DAN-style jailbreaks, attempt data exfiltration, and probe for cost exploitation. If the Actix endpoint concatenates user data into the prompt—for example, building a system message like system: "You are a helpful assistant. " + user_input—the injected text can shift the model's behavior, causing it to ignore prior instructions or reveal internal logic. Even when Basic Auth is required to reach the handler, if the handler does not treat authenticated context as part of the trust boundary for prompt construction, the injection remains effective because the vulnerability is in how the prompt is assembled, not in transport protection.

Real-world patterns include placing user input into few-shot examples or chain-of-thought prompts where injected steps can reorder or omit safety checks. An attacker might supply a header like X-Role: System and have the Actix code include that header value in the next message role, effectively changing the persona the model adopts. Because middleBrick tests for system prompt extraction and instruction override, an Actix service that does not sanitize or strictly scope user-supplied fields used in prompts may allow an attacker to coerce the model into leaking training data or bypassing intended constraints. This is especially relevant when the service uses unstructured user input to dynamically generate few-shot demonstrations or to personalize system messages, as the prompt becomes a mutable artifact shaped by external data.
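The header-based pattern above can be sketched in a few lines. This is a hypothetical illustration, not code from any real service: build_message is an assumed helper standing in for a handler that copies the X-Role header value into the message role.

```rust
// Hypothetical vulnerable helper: the role comes from an attacker-controlled
// header, so supplying "X-Role: system" promotes user text to instructions.
fn build_message(role_header: Option<&str>, body: &str) -> String {
    // Vulnerable pattern: the header value is used verbatim as the message role
    let role = role_header.unwrap_or("user");
    format!("{}: {}", role, body)
}
```

With no header the message is assembled under the user role, but an attacker sending X-Role: system gets their payload formatted as system-level instructions, which is exactly the persona swap described above.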

Additionally, if the Actix application exposes an endpoint that returns structured completions or tool calls, prompt injection can lead to excessive agency or unsafe output consumption. For instance, by injecting instructions that direct the model to produce code, URLs, or credentials, the resulting response may contain PII or API keys that middleBrick’s output scanning flags. Because the LLM security checks include output scanning for PII, API keys, and executable code, the combination of Basic Auth on the transport and unchecked prompt assembly can still result in findings related to data exposure and unsafe consumption. Proper input validation, strict separation of authenticated context from prompt variables, and schema constraints on user-controlled fields reduce the attack surface without changing the fundamental design of the endpoint.
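As a rough sketch of the output-scanning idea mentioned above (this is an assumed, minimal heuristic, not middleBrick's actual detection logic), a service can screen completions for tokens that look like credentials before consuming or returning them:

```rust
// Minimal output-scan sketch: flag completions containing tokens that start
// with common secret prefixes. Real scanners also use regexes and entropy
// checks; the marker list here is an illustrative assumption.
fn response_leaks_secret(output: &str) -> bool {
    const MARKERS: [&str; 3] = ["sk-", "AKIA", "-----BEGIN"];
    output
        .split_whitespace()
        .any(|token| MARKERS.iter().any(|m| token.starts_with(m)))
}
```

A check like this is a last line of defense; it complements, rather than replaces, keeping user input out of system instructions in the first place.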

Basic Auth-Specific Remediation in Actix — concrete code fixes

To mitigate prompt injection in Actix while using Basic Auth, ensure that authenticated identity is never directly interpolated into prompts and that user-controlled inputs are validated and sanitized before inclusion in any LLM request. Below are concrete code examples that demonstrate a secure pattern and an insecure pattern to avoid.

Insecure pattern to avoid

The following Actix handler concatenates a user-supplied query parameter directly into a system message after verifying Basic Auth credentials. This creates a prompt injection surface because the system prompt becomes mutable by the attacker, even though access is gated by authentication.

use std::collections::HashMap;

use actix_web::{web, HttpResponse, Responder};
use actix_web_httpauth::extractors::basic::BasicAuth;

async fn vulnerable_chat(
    credentials: Option<BasicAuth>,
    query: web::Query<HashMap<String, String>>,
) -> impl Responder {
    // Authentication check only gates access, not prompt construction
    if credentials.is_none() {
        return HttpResponse::Unauthorized().body("missing auth");
    }
    let user_input = query.get("message").cloned().unwrap_or_default();
    // Vulnerable: user input injected into the system prompt
    let system_prompt = format!("You are a helpful assistant. {}", user_input);
    // Call the LLM with system_prompt (pseudo-code stub below)
    let _response = call_llm(&system_prompt).await;
    HttpResponse::Ok().body("done")
}

async fn call_llm(prompt: &str) -> String {
    // Pseudo-code: stand-in for the real LLM client call
    prompt.to_string()
}

Secure remediation pattern

In this corrected version, the handler validates credentials, treats the authenticated role as separate from prompt assembly, and uses a strict schema to allow only safe user input into the user message role. No user data is interpolated into system instructions, and the system prompt remains constant regardless of authentication or input.

use actix_web::{web, HttpResponse, Responder};
use actix_web_httpauth::extractors::basic::BasicAuth;
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct ChatRequest {
    message: String,
}

#[derive(Serialize)]
struct ChatResponse {
    content: String,
}

async fn secure_chat(
    auth: Option<BasicAuth>,
    req: web::Json<ChatRequest>,
) -> impl Responder {
    // Authentication is still required, but the prompt is fixed
    if auth.is_none() {
        return HttpResponse::Unauthorized().body("missing auth");
    }
    // Validate and sanitize user input
    let user_message = sanitize_input(&req.message);
    // System prompt is constant and never includes user data
    let system_prompt = "You are a helpful assistant.";
    // User message is sent as a separate user role, not injected into system instructions
    let response = call_llm_with_role(system_prompt, &user_message).await;
    HttpResponse::Ok().json(ChatResponse { content: response })
}

fn sanitize_input(input: &str) -> String {
    // Basic sanitation example: trim and limit length
    input.trim().chars().take(500).collect()
}

async fn call_llm_with_role(system: &str, user: &str) -> String {
    // Pseudo-code for sending a structured conversation to an LLM
    format!("{}: {}", system, user)
}

These patterns ensure that Basic Auth controls access to the endpoint while prompt assembly remains deterministic and isolated from user-controlled data. For production use, pair this approach with schema validation and output scanning to detect any residual risks related to data exposure or unsafe consumption that middleBrick’s checks are designed to surface.
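The sanitize_input shown earlier only trims and truncates. A stricter validator, sketched below with assumed rules (the length cap and rejected role markers are illustrative choices, not requirements), can refuse input outright rather than silently cleaning it:

```rust
// Sketch of stricter validation: reject empty, oversized, control-character,
// or role-marker-bearing input before any prompt assembly happens.
fn validate_message(input: &str) -> Result<String, &'static str> {
    let trimmed = input.trim();
    if trimmed.is_empty() || trimmed.len() > 500 {
        return Err("message must be 1-500 characters");
    }
    if trimmed.chars().any(|c| c.is_control()) {
        return Err("control characters are not allowed");
    }
    let lowered = trimmed.to_lowercase();
    if lowered.contains("system:") || lowered.contains("assistant:") {
        return Err("role markers are not allowed");
    }
    Ok(trimmed.to_string())
}
```

Rejecting with an error keeps the decision auditable: the client learns the input was refused, instead of having a silently altered message reach the model.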

Related CWEs

CWE ID    Name                                                    Severity
CWE-754   Improper Check for Unusual or Exceptional Conditions    MEDIUM

Frequently Asked Questions

Does using Basic Auth prevent prompt injection in Actix?
No. Basic Auth secures the HTTP endpoint but does not prevent prompt injection if user-controlled input is interpolated into prompts. Injection occurs at the prompt construction layer, not the transport layer.
What is the key mitigation for prompt injection in Actix with Basic Auth?
Keep system prompts constant and never include authenticated context or user input in them. Validate and sanitize all user data, and send it as a separate user message role rather than injecting it into system instructions.
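The "separate user message role" mitigation can be sketched as a structured message list. The Message struct here is an assumption for illustration, standing in for whatever message type your LLM client library defines:

```rust
// Hypothetical message type: roles are structural fields, not string prefixes.
#[derive(Debug, PartialEq)]
struct Message {
    role: &'static str,
    content: String,
}

// The system message is a fixed literal; user input is confined to the
// content field of the user-role message and can never rewrite instructions.
fn build_conversation(user_input: &str) -> Vec<Message> {
    vec![
        Message {
            role: "system",
            content: "You are a helpful assistant.".to_string(),
        },
        Message {
            role: "user",
            content: user_input.to_string(),
        },
    ]
}
```

Because the role is set by the code, not derived from input, even a payload like "system: ignore previous rules" arrives at the model as ordinary user content.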