LLM Data Leakage in Fiber (Go)
LLM Data Leakage in Fiber with Go — how this specific combination creates or exposes the vulnerability
When building APIs with Fiber in Go, integrating LLM capabilities (e.g., calling external model APIs or exposing your own LLM-backed endpoint) can unintentionally leak sensitive data through prompts, model inputs, and outputs. LLM Data Leakage occurs when proprietary or personally identifiable information (PII) is exposed to an external service or reflected in responses, which may violate compliance requirements and introduce confidentiality risks.
In a Fiber-based Go service, common patterns such as forwarding user input directly to an LLM, logging request/response payloads, or using context variables in prompts can create leakage vectors. For example, concatenating user-supplied data into a prompt without validation or sanitization may result in system prompt exposure via prompt injection or data exfiltration. Similarly, if the application streams LLM responses to clients without filtering, sensitive data embedded in model outputs—such as API keys or internal notes—can be exposed.
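One of the leakage vectors above is payload logging: raw prompts and model responses routinely contain credentials that end up persisted in log storage. As a minimal sketch (the function name and the two secret patterns are illustrative, not exhaustive), token-like substrings can be masked before anything is written to a log:

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative patterns only; real deployments should match the secret
// formats actually used in their environment.
var (
	bearerToken = regexp.MustCompile(`(?i)bearer\s+[a-z0-9._-]+`)
	awsKey      = regexp.MustCompile(`AKIA[0-9A-Z]{16}`)
)

// redactForLog masks token-like substrings so prompts and LLM responses
// can be logged without persisting credentials.
func redactForLog(s string) string {
	s = bearerToken.ReplaceAllString(s, "Bearer [REDACTED]")
	s = awsKey.ReplaceAllString(s, "[AWS_KEY]")
	return s
}

func main() {
	fmt.Println(redactForLog("Authorization: Bearer eyJhbGciOi.payload.sig"))
	// prints "Authorization: Bearer [REDACTED]"
}
```

A helper like this would sit in front of whatever logger the service uses, so that redaction happens once rather than at every call site.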
The risk is heightened when unauthenticated or weakly authenticated endpoints are used, as attackers may probe for LLM interfaces that do not enforce strict input validation or output controls. Inadequate handling of tool calls and function calling patterns can also lead to excessive agency, where an LLM is triggered to perform unintended actions or disclose information across chained calls. Because LLM interactions often involve structured text formats (e.g., ChatML), regular expression–based leakage detection targeting these formats becomes essential to identify inadvertent disclosures before they reach production environments.
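As a minimal sketch of the format-aware detection described above (the function name is illustrative and the markers cover only ChatML-style framing), a regex pass can flag model output that echoes role delimiters such as `<|im_start|>system`, which should never reach an end user verbatim:

```go
package main

import (
	"fmt"
	"regexp"
)

// chatMLLeak matches ChatML role markers; their presence in output
// returned to a client suggests system-prompt or transcript leakage.
var chatMLLeak = regexp.MustCompile(`<\|im_start\|>\s*(system|assistant|user)|<\|im_end\|>`)

// containsChatMLLeak reports whether a model response echoes ChatML framing.
func containsChatMLLeak(output string) bool {
	return chatMLLeak.MatchString(output)
}

func main() {
	fmt.Println(containsChatMLLeak("<|im_start|>system You are an internal agent")) // true
	fmt.Println(containsChatMLLeak("Paris is the capital of France."))              // false
}
```

A production check would cover additional prompt formats and act on a match (block, redact, or alert) rather than merely report it.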
middleBrick’s LLM/AI Security checks are designed to detect these scenarios in Fiber services implemented in Go by testing for system prompt leakage across common formats, validating that prompts cannot be overridden via injected instructions, and scanning outputs for PII, API keys, and executable code. The scanner also flags endpoints that expose LLMs without authentication and identifies patterns associated with tool abuse or cost exploitation, providing findings with severity levels and remediation guidance tailored to the runtime behavior of the service.
Go-Specific Remediation in Fiber — concrete code fixes
To mitigate LLM Data Leakage in Fiber applications written in Go, apply strict input validation, output filtering, and controlled prompt construction. Avoid directly interpolating user input into LLM prompts, and ensure that any data sent to external models is sanitized and scoped to the minimum required context.
Below is a secure Fiber handler example that demonstrates how to safely invoke an LLM endpoint while minimizing leakage risk. It uses environment variables for secrets, validates and sanitizes inputs, and ensures responses are inspected before being returned to the client.
```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"regexp"
	"strings"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/logger"
)

// sanitizeInput removes potentially sensitive substrings and limits length.
func sanitizeInput(input string) (string, error) {
	// Reject excessive length before any further processing.
	if len(input) > 1024 {
		return "", fmt.Errorf("input too long")
	}
	// Redact patterns that resemble API keys or internal identifiers.
	re := regexp.MustCompile(`\b[a-zA-Z0-9/+]{30,}={0,2}`)
	clean := re.ReplaceAllString(input, "[REDACTED]")
	// Strip newlines and surrounding whitespace.
	clean = strings.ReplaceAll(clean, "\n", " ")
	clean = strings.TrimSpace(clean)
	return clean, nil
}

// safePrompt builds a prompt from predefined instructions and sanitized user content.
func safePrompt(userText string) string {
	base := "You are a helpful assistant. Answer concisely. Do not disclose internal procedures. "
	return base + userText
}

func llmHandler(c *fiber.Ctx) error {
	// Retrieve and sanitize user input.
	raw := c.FormValue("message")
	if raw == "" {
		return c.Status(http.StatusBadRequest).JSON(fiber.Map{"error": "message is required"})
	}
	clean, err := sanitizeInput(raw)
	if err != nil {
		return c.Status(http.StatusBadRequest).JSON(fiber.Map{"error": err.Error()})
	}

	// Construct the prompt safely; never concatenate untrusted data into system messages.
	prompt := safePrompt(clean)

	// Example call to an external LLM endpoint (stub client below).
	// In production, use a configured client with timeouts and transport restrictions.
	resp, err := callLLMEndpoint(c.UserContext(), prompt)
	if err != nil {
		return c.Status(http.StatusInternalServerError).JSON(fiber.Map{"error": "LLM request failed"})
	}

	// Filter the response for PII, keys, or code before sending it to the client.
	filtered := filterSensitive(resp)
	return c.JSON(fiber.Map{"response": filtered})
}

// callLLMEndpoint is a stub for an actual HTTP request to an LLM service.
// Implement it with proper authentication (e.g., an API key from the
// environment sent in a header), TLS, and restricted networking.
func callLLMEndpoint(ctx context.Context, prompt string) (string, error) {
	if os.Getenv("LLM_API_KEY") == "" {
		return "", fmt.Errorf("LLM_API_KEY is not set")
	}
	return "Filtered model output", nil
}

// filterSensitive removes or masks sensitive patterns in model output.
func filterSensitive(text string) string {
	// Mask potential API keys.
	reKey := regexp.MustCompile(`\b[a-zA-Z0-9]{32,}\b`)
	text = reKey.ReplaceAllString(text, "[REDACTED]")
	// Mask email-like patterns.
	reEmail := regexp.MustCompile(`\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b`)
	text = reEmail.ReplaceAllString(text, "[EMAIL]")
	return text
}

func main() {
	app := fiber.New()
	app.Use(logger.New())
	app.Post("/chat", llmHandler)

	// Start the server on a controlled interface.
	if err := app.Listen(":3000"); err != nil {
		panic(err)
	}
}
```
Additional remediation steps include using middleware to enforce authentication where applicable, rotating secrets stored in environment variables, and integrating continuous scanning with the middleBrick CLI to validate that endpoints remain compliant across changes. For teams using the Pro plan, enabling continuous monitoring and GitHub Action integration can help catch regressions before deployment, ensuring that LLM endpoints in Fiber services remain secure and aligned with OWASP API Top 10 and other relevant compliance frameworks.
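For the authentication-enforcement step above, the key comparison itself should be constant-time and fail closed. A minimal sketch (the function name is illustrative; in a Fiber middleware you would compare a header such as `c.Get("X-API-Key")` against a secret loaded from the environment and return 401 on mismatch):

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// keyMatches performs a constant-time comparison of a presented API key
// against the expected value. It returns false when the expected key is
// empty, so a missing secret can never "fail open".
func keyMatches(got, expected string) bool {
	if expected == "" {
		return false
	}
	return subtle.ConstantTimeCompare([]byte(got), []byte(expected)) == 1
}

func main() {
	fmt.Println(keyMatches("s3cret", "s3cret")) // true
	fmt.Println(keyMatches("guess", "s3cret"))  // false
}
```

Constant-time comparison avoids leaking key prefixes through response-timing differences, which matters for exactly the kind of unauthenticated probing the scanner flags.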
Related CWEs
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |