HIGH prompt injectionecho gobasic auth

Prompt Injection in Echo Go with Basic Auth

Prompt Injection in Echo Go with Basic Auth — how this specific combination creates or exposes the vulnerability

Prompt injection becomes more impactful when an API uses HTTP Basic Authentication in an Echo Go service. Basic Auth typically relies on a username and password passed in the Authorization header as base64-encoded credentials. If the API endpoint that consumes these credentials also passes user-controlled input into an LLM call without strict validation or isolation, an attacker can manipulate the effective prompt. In Echo Go, a route might extract the credentials via context.Request().Header.Get("Authorization"), decode them, and then forward user-supplied parameters to an LLM endpoint. Because the credentials are handled separately from the LLM prompt, developers may mistakenly trust the context-derived values and concatenate or interpolate them into system or user prompts. This trust boundary enables an attacker to include crafted text in a parameter that shifts the intended instruction hierarchy, causing the model to ignore earlier directives or reveal system instructions.

Consider a scenario where an Echo Go handler authenticates the request via Basic Auth, extracts the username, and uses it to personalize a user query sent to an LLM. If the handler builds a prompt like system: You are assisting user {username}. {user_input}, an attacker controlling user_input can attempt to inject new system instructions or alter the role behavior. Even when authentication succeeds, the LLM may treat the injected text as a higher-level instruction if the prompt structure does not enforce strict separation. The LLM/AI Security check in middleBrick specifically tests for system prompt extraction and instruction override via sequential probes, which can surface such weaknesses in this combined setup. Because Basic Auth does not inherently protect prompt integrity, the authentication layer and the LLM input layer must be treated as distinct security domains. middleBrick’s unauthenticated scan can detect whether an endpoint echoes credentials or behaves differently based on authentication context, revealing paths where prompt injection is feasible.

In practice, the risk is not that Basic Auth is broken, but that its presence can create a false sense of security. Developers may assume that requiring a username and password is sufficient to control access to LLM functionality, while neglecting input validation and output encoding for the LLM prompt itself. This can lead to unintended model behaviors such as ignoring original instructions, exposing system prompts, or executing unintended actions based on maliciously crafted user input. By combining runtime findings from authenticated and unauthenticated perspectives, middleBrick’s checks highlight where user-controlled data intersects with LLM instructions, even when Basic Auth is in place.

Basic Auth-Specific Remediation in Echo Go — concrete code fixes

To mitigate prompt injection in Echo Go when using Basic Auth, separate authentication from prompt construction and treat user input as untrusted data. Do not embed raw credentials or derived context directly into system prompts. Instead, use the authentication information strictly for access control and keep it out of LLM instructions. The following patterns illustrate secure handling in an Echo Go service.

Example: Secure Basic Auth extraction without prompt contamination

package main

import (
    "encoding/base64"
    "net/http"
    "strings"

    "github.com/labstack/echo/v4"
)

// authenticate extracts and validates Basic Auth credentials.
// It returns the username if valid, or an error.
func authenticate(next echo.HandlerFunc) echo.HandlerFunc {
    return func(c echo.Context) error {
        auth := c.Request().Header.Get("Authorization")
        if auth == "" {
            return echo.ErrUnauthorized
        }
        parts := strings.Split(auth, " ")
        if len(parts) != 2 || strings.ToLower(parts[0]) != "basic" {
            return echo.ErrUnauthorized
        }
        payload, err := base64.StdEncoding.DecodeString(parts[1])
        if err != nil {
            return echo.ErrUnauthorized
        }
        // Expecting "username:password"
        creds := strings.SplitN(string(payload), ":", 2)
        if len(creds) != 2 || creds[0] == "" || creds[1] == "" {
            return echo.ErrUnauthorized
        }
        // At this point, credentials are validated.
        // Store only the username for non-prompt use, if needed.
        c.Set("username", creds[0])
        return next(c)
    }
}

// handler demonstrates safe usage: username is used for context,
// but not inserted into system instructions sent to the LLM.
func handler(c echo.Context) error {
    username, ok := c.Get("username").(string)
    if !ok {
        return echo.ErrUnauthorized
    }
    userInput := c.FormValue("query")
    if userInput == "" {
        return echo.NewHTTPError(http.StatusBadRequest, "query is required")
    }

    // Do NOT build system prompts with raw credentials or username directly.
    // Instead, use a fixed system instruction and pass user input as separate user message.
    systemPrompt := "You are a helpful assistant. Respond concisely."
    userMessage := "User query: " + userInput

    // Example call to an LLM client (implementation-specific).
    // llmResp, err := callLLM(systemPrompt, userMessage)
    // For illustration, we simulate a safe approach.
    c.Response().Header().Set("X-Context-User", username)
    return c.JSON(http.StatusOK, map[string]string{
        "status": "processed",
        "user":   username,
    })
}

Key remediation practices

  • Validate and decode Basic Auth early, then discard credentials rather than propagating them into downstream prompts.
  • Use a fixed system prompt and treat user input strictly as a user message, avoiding concatenation with authentication-derived strings.
  • Apply output validation and redaction to ensure LLM responses do not inadvertently echo credentials or sensitive context.
  • Combine these practices with input validation libraries and strict content-type checks to reduce injection surfaces.

By decoupling authentication from LLM instruction logic, you reduce the risk that an attacker can manipulate prompts through user-controlled parameters, even when Basic Auth is used. middleBrick’s scans can help verify that endpoints do not leak system instructions or behave differently based on injected authentication context.

Related CWEs: llmSecurity

CWE IDNameSeverity
CWE-754Improper Check for Unusual or Exceptional Conditions MEDIUM

Frequently Asked Questions

Can Basic Auth alone prevent prompt injection in Echo Go services?
No. Basic Auth provides identity verification for requests but does not protect the structure or integrity of prompts sent to an LLM. Prompt injection defenses must be implemented at the prompt engineering layer, including input validation, output encoding, and strict separation between authentication context and LLM instructions.
What does middleBrick check for when testing prompt injection in endpoints using Basic Auth?
middleBrick runs active prompt injection probes, including system prompt extraction and instruction override attempts, against the endpoint regardless of authentication. It also checks for system prompt leakage in LLM responses and evaluates whether user-controlled data influences the effective prompt, even when Basic Auth is present.