Prompt Injection in Gorilla Mux with CockroachDB
How This Specific Combination Creates or Exposes the Vulnerability
Prompt injection becomes particularly relevant when an API built with Gorilla Mux routes requests to backend services that use CockroachDB as a data store and also expose or integrate with LLM endpoints. In this stack, user-controlled input that reaches both the routing layer and database queries can be inadvertently passed into prompts sent to an LLM, enabling injection via crafted inputs.
Gorilla Mux provides flexible pattern matching and variable extraction from URLs and headers. If route variables or query parameters that originate from user input are used to construct prompts—such as dynamic table names, filter values, or query context—those values can alter the intended behavior of the LLM prompt. For example, a route like /api/query/{table} might inject the {table} value directly into a system or user prompt that instructs the LLM how to interact with CockroachDB. An attacker could supply a table name such as "users; --" or a carefully crafted string designed to change the instruction flow.
Because CockroachDB is often used in distributed, high-concurrency scenarios, APIs frequently execute multiple SQL statements or dynamic queries constructed from request inputs. If those inputs are also reflected in prompts, an attacker may attempt to extract schema information, manipulate prompt behavior through injected instructions, or cause the LLM to produce unintended outputs that leak metadata about the CockroachDB instance (such as table structures or error messages). This risk is amplified when the API exposes an unauthenticated LLM endpoint or returns verbose error details that reveal prompt content or execution context.
The LLM/AI Security checks in middleBrick specifically target these scenarios by probing for system prompt extraction, instruction override, and data exfiltration attempts across routes that may interface with databases like CockroachDB. The scanner evaluates whether user-controlled data influences prompts and whether outputs expose sensitive information, providing findings with severity and remediation guidance tailored to this architecture.
CockroachDB-Specific Remediation in Gorilla Mux: Concrete Code Fixes
Remediation focuses on strict separation between data layer inputs and prompt content, ensuring that user-controlled values never directly alter LLM instructions. Use explicit parameterization for database operations and keep prompts static or derived from trusted sources only.
1. Avoid injecting route variables into prompts
Do not use Gorilla Mux variables in prompt text. Instead, treat them strictly as identifiers for selecting the correct database logic.
// Unsafe: injecting a user-controlled variable into the prompt
r.HandleFunc("/api/query/{table}", func(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	table := vars["table"]
	prompt := fmt.Sprintf("Run a SELECT on the %s table and return results.", table)
	llmResp, _ := callLLM(prompt)
	fmt.Fprint(w, llmResp)
})
// Safe: use the variable only for routing logic, keep the prompt static
r.HandleFunc("/api/query/{table}", func(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	table := vars["table"]
	if !isValidTable(table) {
		http.Error(w, "invalid table", http.StatusBadRequest)
		return
	}
	rows, err := crdbQuery(table)
	if err != nil {
		http.Error(w, "database error", http.StatusInternalServerError)
		return
	}
	defer rows.Close()
	// Use a fixed prompt that does not incorporate the raw table name;
	// query results can be attached separately as clearly delimited data
	llmResp, _ := callLLM("Summarize the results from the database query.")
	fmt.Fprint(w, llmResp)
})
func isValidTable(name string) bool {
	allowed := map[string]bool{"users": true, "orders": true, "products": true}
	return allowed[name]
}
// db should be a shared *sql.DB opened once at startup, e.g.
// db, err := sql.Open("postgres", dsn) (CockroachDB speaks the PostgreSQL wire protocol)
func crdbQuery(table string) (*sql.Rows, error) {
	// table has already been allowlisted; pq.QuoteIdentifier adds defense in depth
	stmt := fmt.Sprintf("SELECT id, name FROM %s WHERE deleted = false", pq.QuoteIdentifier(table))
	return db.Query(stmt)
}
2. Use parameterized queries and prepared statements with CockroachDB
Always bind values with numbered placeholders ($1, $2, ...) when querying CockroachDB to prevent SQL injection, and avoid reflecting raw user input into prompts.
// Safe: parameterized query, no prompt pollution
r.HandleFunc("/api/user/{id}", func(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	id := vars["id"]
	var name string
	// Use $1-style placeholders; CockroachDB drivers (lib/pq, pgx) handle parameterization
	err := db.QueryRow("SELECT name FROM users WHERE id = $1", id).Scan(&name)
	if err != nil {
		http.Error(w, "not found", http.StatusNotFound)
		return
	}
	// The prompt no longer reflects the raw request; note that name still comes
	// from the database, so treat it as untrusted data if users can set it
	llmResp, _ := callLLM(fmt.Sprintf("User name retrieved: %s. Confirm account status.", name))
	fmt.Fprint(w, llmResp)
})
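For hot paths, the same query can also be prepared once and reused across requests, as the heading above suggests. A minimal sketch using database/sql, assuming the shared db handle from the earlier snippets:
// Prepare once, for example at startup, and reuse across requests
stmt, err := db.Prepare("SELECT name FROM users WHERE id = $1")
if err != nil {
	log.Fatal(err)
}
defer stmt.Close()

var name string
if err := stmt.QueryRow(id).Scan(&name); err != nil {
	log.Printf("lookup failed: %v", err)
}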
3. Validate and sanitize inputs before any LLM interaction
Apply strict allowlists and sanitize data before it reaches either the database or the LLM layer. Log suspicious patterns for review but avoid exposing internal details in responses.
// Example validation helper: allowlist alphanumerics and underscores only
var identPattern = regexp.MustCompile(`^[a-zA-Z0-9_]+$`)

func sanitizeInput(input string) (string, bool) {
	if !identPattern.MatchString(input) {
		return "", false
	}
	return input, true
}
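As a usage sketch, the helper can gate a route variable before it reaches either layer. The /api/search route and products query below are illustrative assumptions, not part of the original API:
r.HandleFunc("/api/search/{term}", func(w http.ResponseWriter, r *http.Request) {
	term, ok := sanitizeInput(mux.Vars(r)["term"])
	if !ok {
		// Log the rejection for later review, but return only a generic error
		log.Printf("rejected suspicious path variable on /api/search")
		http.Error(w, "invalid input", http.StatusBadRequest)
		return
	}
	// Only the validated value reaches CockroachDB, and only as a bound parameter
	var count int
	if err := db.QueryRow("SELECT count(*) FROM products WHERE name = $1", term).Scan(&count); err != nil {
		http.Error(w, "database error", http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "%d matches", count)
})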
4. Separate LLM endpoints from database endpoints
Consider hosting LLM-facing routes on a distinct path with additional security controls, ensuring that database-related variables are never propagated into prompt templates. middleBrick can validate this separation by scanning both API surfaces and LLM endpoints when applicable.
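A minimal sketch of that separation using Gorilla Mux subrouters; the handler names and llmAuthMiddleware are assumptions standing in for your own handlers and controls:
// Database-facing routes live under /api with the usual controls
api := r.PathPrefix("/api").Subrouter()
api.HandleFunc("/user/{id}", userHandler)

// LLM-facing routes live under /llm with their own middleware stack;
// no route variables from /api handlers are propagated into these prompts
llm := r.PathPrefix("/llm").Subrouter()
llm.Use(llmAuthMiddleware) // hypothetical authentication/rate-limit middleware
llm.HandleFunc("/summarize", summarizeHandler)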
5. Monitor outputs for PII and secrets
Ensure LLM responses are scanned for accidental exposure of database credentials, schema details, or sensitive data. Use output filters and strict content policies to prevent leakage back to the client.
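As a starting point, a simple regex-based gate can block obvious leakage before a response reaches the client. The patterns below are illustrative assumptions, not a substitute for a dedicated secret/PII scanner:
// Illustrative leak patterns; extend for your environment
var leakPatterns = []*regexp.Regexp{
	regexp.MustCompile(`(?i)postgres(ql)?://\S+`),      // connection strings / DSNs
	regexp.MustCompile(`(?i)(password|secret)\s*[:=]`), // credential fragments
	regexp.MustCompile(`(?i)\bCREATE TABLE\b`),         // schema details
}

// filterLLMOutput returns false when a response should be blocked and logged
func filterLLMOutput(resp string) (string, bool) {
	for _, p := range leakPatterns {
		if p.MatchString(resp) {
			return "", false
		}
	}
	return resp, true
}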
Related CWEs (category: llmSecurity)
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |