LLM Data Leakage in Echo Go with Basic Auth
LLM Data Leakage in Echo Go with Basic Auth — how this specific combination creates or exposes the vulnerability
LLM Data Leakage occurs when an application unintentionally exposes sensitive information, such as API keys, PII, or internal logic, in responses generated by language models. When combined with Basic Authentication in Echo Go, the risk profile changes in subtle but important ways.
Echo Go is a lightweight HTTP framework commonly used to build APIs. Basic Auth in Echo Go is typically implemented by validating credentials on each request using middleware. While this protects the endpoint from unauthenticated access, it does not prevent authenticated users from interacting with an LLM-enabled handler that may leak data. If a handler invokes an LLM—such as for code generation, natural language responses, or function calling—and that LLM echoes back user input or system prompts, credentials or other sensitive data can be exposed in the output.
For example, if a developer embeds an API key in a system prompt to guide LLM behavior, and a jailbreak or data exfiltration probe induces the LLM to reflect on its instructions, the key can be extracted. middleBrick’s LLM/AI Security checks detect this by testing for system prompt leakage using 27 regex patterns tailored to formats like ChatML, Llama 2, Mistral, and Alpaca. In an Echo Go service using Basic Auth, an authenticated user who has passed middleware checks can still trigger these leaks if input validation is weak or if the LLM is allowed to reflect on privileged instructions.
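As a concrete illustration of the anti-pattern, consider a handler like the following sketch, where a secret is interpolated into the system prompt (a fragment with imports omitted; llmClient stands in for whatever client the service uses, and the handler and key names are hypothetical):
func vulnerableHandler(c echo.Context) error {
	// ANTI-PATTERN: the secret travels to the model as prompt text, so
	// any prompt-reflection or jailbreak probe can recover it verbatim
	systemPrompt := "You are a billing assistant. Use API key " +
		os.Getenv("BILLING_API_KEY") + " for upstream calls."
	resp, err := llmClient.ChatCompletion(systemPrompt, c.QueryParam("text"))
	if err != nil {
		return c.NoContent(http.StatusInternalServerError)
	}
	return c.String(http.StatusOK, resp) // may echo the key back to the user
}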
Additionally, output scanning for PII, API keys, and executable code is essential. Even with Basic Auth enforcing identity, an Echo Go handler might pass user-controlled data into an LLM and return a response that includes secrets. For instance, a user could supply a payload designed to coax the LLM into repeating a stored API key embedded in a tool call or function schema. middleBrick’s active prompt injection testing—covering system prompt extraction, instruction override, DAN jailbreak, data exfiltration, and cost exploitation—simulates these attacks against authenticated endpoints.
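A minimal output filter along these lines can run before any LLM response leaves the handler (the patterns shown are illustrative, not middleBrick’s detection set; requires import "regexp"):
var secretPatterns = []*regexp.Regexp{
	regexp.MustCompile(`sk-[A-Za-z0-9]{20,}`),   // OpenAI-style API keys
	regexp.MustCompile(`AKIA[0-9A-Z]{16}`),      // AWS access key IDs
	regexp.MustCompile(`\b\d{3}-\d{2}-\d{4}\b`), // US SSN-shaped PII
}

// redactSecrets replaces anything matching a known secret shape
// before the LLM output is returned to the client
func redactSecrets(output string) string {
	for _, p := range secretPatterns {
		output = p.ReplaceAllString(output, "[REDACTED]")
	}
	return output
}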
Excessive agency detection is another critical layer. If the Echo Go service exposes an LLM endpoint that supports tool calls or function_call patterns—common in LangChain-style workflows—an attacker may leverage Basic Auth–verified access to drive the LLM into performing unauthorized actions. middleBrick identifies these patterns and flags them even when authentication is in place, because the vulnerability lies in how the LLM is used, not in credential handling.
Unauthenticated LLM endpoint detection also applies. In some configurations, an Echo Go route might expose an LLM interface without enforcing auth at the handler level, assuming Basic Auth middleware covers all routes. If a route is accidentally omitted from middleware application, the LLM becomes accessible without credentials, compounding the risk of data leakage. middleBrick flags such endpoints to ensure every LLM interaction is authenticated and monitored.
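In practice the pitfall looks like this: Basic Auth is attached to a route group, and an LLM route registered outside the group ships with no authentication at all (a sketch; the validator and handler names are placeholders for those defined later in this article):
// Auth applies only to routes registered on the /api group...
g := e.Group("/api", middleware.BasicAuth(validator))
g.GET("/llm/respond", llmHandler)

// ...so a route added directly to the Echo instance bypasses it entirely
e.GET("/debug/llm", llmHandler) // UNAUTHENTICATED LLM endpoint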
Basic Auth-Specific Remediation in Echo Go — concrete code fixes
Securing an Echo Go API that uses Basic Auth and interacts with LLMs requires precise handling of credentials, input, and LLM behavior. Below are concrete remediation steps and code examples.
1. Enforce Basic Auth on all routes, including LLM handlers
Ensure that middleware is applied globally or to specific routes that handle LLM interactions. Do not skip middleware on any endpoint that accepts user input or invokes an LLM.
package main

import (
	"crypto/subtle"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	e := echo.New()
	// Apply Basic Auth globally. In echo/v4 the validator also receives
	// the request context and returns (bool, error).
	e.Use(middleware.BasicAuth(func(username, password string, c echo.Context) (bool, error) {
		// Validate against securely stored credentials (hashed in a real
		// deployment), comparing in constant time to avoid timing leaks
		userOK := subtle.ConstantTimeCompare([]byte(username), []byte("admin")) == 1
		passOK := subtle.ConstantTimeCompare([]byte(password), []byte("securePass123")) == 1
		return userOK && passOK, nil
	}))
	// LLM handler with auth enforced (defined in step 2)
	e.GET("/llm/respond", llmHandler)
	e.Logger.Fatal(e.Start(":8080"))
}
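A quick smoke test from a Go client confirms the middleware is actually enforced (assumes the server above is running locally; requires imports "fmt" and "net/http"):
func checkAuthEnforced() {
	// Without credentials: expect 401 Unauthorized
	if resp, err := http.Get("http://localhost:8080/llm/respond?text=hello"); err == nil {
		fmt.Println("no credentials:", resp.StatusCode)
	}
	// With credentials matching the example validator: expect 200 OK
	req, _ := http.NewRequest(http.MethodGet, "http://localhost:8080/llm/respond?text=hello", nil)
	req.SetBasicAuth("admin", "securePass123")
	if resp, err := http.DefaultClient.Do(req); err == nil {
		fmt.Println("with credentials:", resp.StatusCode)
	}
}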
2. Sanitize and validate input before sending to LLM
Never forward raw user input to an LLM. Strip or encode patterns that could trigger prompt injection or echo behavior. Use strict allowlists where possible.
// Compiled once at package level (requires: import "regexp").
// Denylists like this are easy to bypass; prefer an allowlist where
// the input space permits (see the sketch after this step).
var injectionPattern = regexp.MustCompile(`(?i)(system|prompt|instruction|jailbreak|DAN)`)

func sanitizeInput(userInput string) string {
	// Remove sequences commonly used in prompt-injection probes
	return injectionPattern.ReplaceAllString(userInput, "")
}

func llmHandler(c echo.Context) error {
	userInput := c.QueryParam("text")
	cleanInput := sanitizeInput(userInput)
	// Call LLM with clean input only
	response, err := callLLM(cleanInput)
	if err != nil {
		// Return a generic message; err.Error() could itself leak internals
		return c.JSON(http.StatusInternalServerError, map[string]string{"error": "llm request failed"})
	}
	return c.JSON(http.StatusOK, map[string]string{"response": response})
}
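Where the endpoint’s input space is constrained, an allowlist is considerably more robust than the denylist above. A minimal sketch, assuming short free-text queries are all the endpoint needs (the character set and length cap are illustrative; requires imports "fmt" and "regexp"):
// Accept only characters the endpoint actually needs; reject rather
// than strip anything outside the allowlist
var allowedInput = regexp.MustCompile(`^[a-zA-Z0-9 .,?!'-]{1,500}$`)

func validateInput(userInput string) error {
	if !allowedInput.MatchString(userInput) {
		return fmt.Errorf("input rejected: disallowed characters or too long")
	}
	return nil
}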
3. Avoid embedding secrets in system prompts or tool schemas
Do not include API keys or credentials in prompts that may be reflected by the LLM. If configuration is needed, use environment variables and reference them indirectly without exposing content to the model.
func callLLM(userInput string) (string, error) {
	// Read the key from the environment (requires: import "os") and hand
	// it to the client configuration only; it must never appear in prompt
	// text or in tool/function schemas the model can see
	apiKey := os.Getenv("LLM_API_KEY")
	client := newLLMClient(apiKey) // constructor sketched in step 4
	prompt := "Respond to the user query safely."
	resp, err := client.ChatCompletion(prompt, userInput)
	if err != nil {
		return "", err
	}
	return resp, nil
}
4. Limit LLM capabilities to reduce agency
Disable or restrict tool calls and function_call features unless strictly required. If enabled, validate and sandbox each tool’s usage to prevent unauthorized actions driven by the LLM. The sketch below extends the client from step 3; the two completion helpers are implementation-specific placeholders, and an allowlist check for tool invocations follows the block.
type LLMClient struct {
	APIKey       string
	DisableTools bool
}

// newLLMClient defaults to least privilege: tool calling stays off
func newLLMClient(apiKey string) *LLMClient {
	return &LLMClient{APIKey: apiKey, DisableTools: true}
}

func (c *LLMClient) ChatCompletion(prompt, userInput string) (string, error) {
	if c.DisableTools {
		// Simplified path: the request carries no tool or function
		// definitions, so the model cannot emit tool_calls
		return c.completeTextOnly(prompt, userInput)
	}
	// Tools enabled: validate every invocation before execution
	return c.completeWithTools(prompt, userInput)
}
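When tools must remain enabled, every invocation the model requests should be checked against an explicit allowlist before anything executes. A minimal sketch, assuming a simplified ToolCall shape and a hypothetical read-only tool name (requires: import "fmt"):
// ToolCall is a simplified stand-in for whatever structure the LLM
// client returns when the model requests a tool invocation
type ToolCall struct {
	Name string
	Args map[string]string
}

// Only read-only, explicitly vetted tools; nothing that mutates state
var allowedTools = map[string]bool{
	"lookup_docs": true, // hypothetical documentation-search tool
}

func validateToolCall(tc ToolCall) error {
	if !allowedTools[tc.Name] {
		return fmt.Errorf("tool %q is not allowlisted", tc.Name)
	}
	// Argument-level validation and sandboxed execution would follow here
	return nil
}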
Related CWEs: LLM Security
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |