
SSRF (Server-Side Request Forgery) in Echo Go with CockroachDB

SSRF in Echo Go with CockroachDB: How This Combination Creates or Exposes the Vulnerability

Server-Side Request Forgery (SSRF) occurs when an application is tricked into making unintended internal or external network requests from the server side. In an Echo Go service that uses CockroachDB, SSRF can arise when user-supplied URLs or host parameters are used to construct database connection strings, HTTP client requests, or internal service calls without adequate validation.

Consider an endpoint that accepts a database hostname or a proxy URL from a request to dynamically form a CockroachDB connection. If that input is passed directly to sql.Open or used to seed an HTTP client without restricting the target, an attacker can supply an internal address such as http://169.254.169.254 (AWS metadata service) or a CockroachDB node meant only for internal traffic. Because the scan tests unauthenticated attack surfaces, middleBrick will flag this as an SSRF-related finding during its parallel checks, highlighting risks like internal metadata exposure or unintended database interactions.

Echo Go applications that expose database or HTTP client configuration via user input increase the attack surface. For example, a handler that builds a CockroachDB URI from query parameters and then calls db.Ping() can allow an attacker to probe internal endpoints. The scanner’s checks for SSRF, Input Validation, and Unsafe Consumption are designed to detect patterns where external inputs influence network destinations without proper allowlisting or sanitization.
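The risky pattern described above can be sketched in a few lines. The `buildDSN` helper and the `userHost` parameter are illustrative names, not from any real service; the point is that user input flows verbatim into the connection string:

```go
package main

import "fmt"

// buildDSN shows the vulnerable pattern: a CockroachDB host taken verbatim
// from user input is spliced into the connection string. Passing this DSN to
// sql.Open and then calling db.Ping() makes the server dial whatever host
// the caller chose, including internal-only addresses.
func buildDSN(userHost string) string {
	return fmt.Sprintf("postgresql://app@%s:26257/defaultdb?sslmode=verify-full", userHost)
}

func main() {
	// An attacker supplies an internal or metadata address; the server
	// would open a connection toward it on its own network position.
	fmt.Println(buildDSN("169.254.169.254"))
}
```

Because connection errors and timeouts differ between reachable and unreachable hosts, even a failed `Ping()` leaks information, turning the endpoint into an internal network probe.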

middleBrick’s LLM/AI Security checks are also relevant if the API interacts with AI services that may forward user-supplied URLs. Even in non-AI contexts, the detection of SSRF is valuable because it surfaces misconfigurations that could lead to data exposure or lateral movement within a cluster. The tool cross-references the OpenAPI spec with runtime behavior, so if your spec describes a parameter as a free-form URL but runtime tests show it reaches internal services, the finding will include severity, impact, and remediation guidance.

CockroachDB-Specific Remediation in Echo Go: Concrete Code Fixes

Remediation focuses on strict input validation, allowlisting, and avoiding dynamic construction of sensitive endpoints. When working with CockroachDB in Echo Go, prefer fixed connection parameters and avoid passing raw user input into database or HTTP configurations.

Example: Safe CockroachDB Connection in Echo Go

package main

import (
	"context"
	"database/sql"
	"net/http"
	"os"

	"github.com/labstack/echo/v4"
	_ "github.com/lib/pq"
)

func main() {
	e := echo.New()

	// Use a fixed, environment-provided DSN; do not build it from user input.
	dsn := os.Getenv("COCKROACH_DSN")
	if dsn == "" {
		panic("COCKROACH_DSN environment variable is required")
	}

	db, err := sql.Open("postgres", dsn)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Validate user input against an allowlist if it must influence behavior.
	e.GET("/query/:table", func(c echo.Context) error {
		table := c.Param("table")
		allowed := map[string]bool{
			"users":  true,
			"items":  true,
			"logs":   true,
		}
		if !allowed[table] {
			return echo.NewHTTPError(http.StatusBadRequest, "invalid table")
		}

		var count int
		// The table name has been allowlisted above, so interpolating it here
		// is safe. Never interpolate raw user input; note also that CockroachDB
		// (like PostgreSQL) quotes identifiers with double quotes, not backticks.
		// User-supplied values must always go through query placeholders.
		query := `SELECT count(*) FROM ` + table
		if err := db.QueryRowContext(context.Background(), query).Scan(&count); err != nil {
			return echo.NewHTTPError(http.StatusInternalServerError, err.Error())
		}
		return c.JSON(http.StatusOK, map[string]int{"count": count})
	})

	// Do not expose a parameter that allows arbitrary URL or host specification.
	// If you must accept a target, validate it strictly.
	e.GET("/external", func(c echo.Context) error {
		url := c.QueryParam("url")
		if !isAllowedHost(url) {
			return echo.NewHTTPError(http.StatusBadRequest, "url not allowed")
		}
		// Note: http.Get uses http.DefaultClient, which has no timeout and
		// honors proxy environment variables. In production, use a dedicated
		// http.Client with an explicit Timeout and Transport.Proxy set to nil.
		resp, err := http.Get(url)
		if err != nil {
			return echo.NewHTTPError(http.StatusBadRequest, err.Error())
		}
		defer resp.Body.Close()
		return c.JSON(http.StatusOK, map[string]string{"status": resp.Status})
	})

	// Log and exit on startup failure instead of discarding the error.
	e.Logger.Fatal(e.Start(":8080"))
}

func isAllowedHost(url string) bool {
	// Allow only specific, safe endpoints; reject internal IPs and metadata
	// addresses. This is a simplified example; use a robust URL parser and
	// IP-range checks in production.
	if url == "" {
		return false
	}
	// Strip the scheme first: the raw URL begins with "http://" or "https://",
	// so prefix checks against bare addresses would otherwise never match.
	host := url
	for i := 0; i+2 < len(host); i++ {
		if host[i] == ':' && host[i+1] == '/' && host[i+2] == '/' {
			host = host[i+3:]
			break
		}
	}
	// Reject common SSRF targets: loopback, link-local (cloud metadata),
	// and the RFC 1918 private ranges.
	internalPatterns := []string{"127.", "169.254.", "localhost", "10.", "192.168.",
		"172.16.", "172.17.", "172.18.", "172.19.", "172.20.", "172.21.",
		"172.22.", "172.23.", "172.24.", "172.25.", "172.26.", "172.27.",
		"172.28.", "172.29.", "172.30.", "172.31."}
	for _, p := range internalPatterns {
		if len(host) >= len(p) && host[:len(p)] == p {
			return false
		}
	}
	return true
}
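String-prefix checks like the above are easy to bypass (decimal or hex IP encodings, DNS names that resolve to internal addresses). A sturdier variant, sketched here as an illustration and assuming Go 1.17+ for net.IP.IsPrivate, parses the URL properly and checks the resolved addresses instead:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// isSafeTarget parses the URL, resolves the hostname, and rejects any
// address in loopback, link-local, private, or unspecified ranges.
// DNS rebinding remains possible between this check and the actual
// request; for full protection, also pin the resolved IP inside the
// HTTP transport's DialContext.
func isSafeTarget(rawURL string) bool {
	u, err := url.Parse(rawURL)
	if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
		return false
	}
	host := u.Hostname()
	if host == "" {
		return false
	}
	ips, err := net.LookupIP(host)
	if err != nil {
		return false
	}
	for _, ip := range ips {
		if ip.IsLoopback() || ip.IsLinkLocalUnicast() ||
			ip.IsLinkLocalMulticast() || ip.IsPrivate() || ip.IsUnspecified() {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isSafeTarget("http://169.254.169.254/latest/meta-data/")) // false
	fmt.Println(isSafeTarget("http://127.0.0.1:8080/"))                   // false
}
```

Checking resolved IPs rather than string prefixes also catches hostnames that point at internal infrastructure, which no prefix list can enumerate.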

Key remediation practices:

  • Do not construct DSN or connection strings from user input; use environment variables or fixed configuration.
  • Validate and allowlist any user-controlled strings that influence database object names or target hosts.
  • When calling external services, reject internal IPs and metadata endpoints to prevent SSRF, as detected by middleBrick’s checks for Data Exposure and Input Validation.
  • Use context with timeouts and avoid automatic proxy environment variables that could redirect traffic internally.

By combining these practices with middleBrick’s scans, you can identify risky parameter usage and ensure that CockroachDB connectivity remains restricted to intended, validated endpoints.

Frequently Asked Questions

How does middleBrick detect SSRF risks in an Echo Go service using CockroachDB?
middleBrick runs unauthenticated checks that include Input Validation, Data Exposure, and Unsafe Consumption. It tests whether user-influenced inputs can direct network requests to internal endpoints, such as metadata services or internal CockroachDB nodes, and reports findings with severity and remediation guidance.
Can middleBrick’s LLM/AI Security checks help if my API interacts with external AI services?
Yes. middleBrick’s LLM/AI Security checks include system prompt leakage detection, active prompt injection testing, output scanning for PII or API keys, and detection of excessive agency patterns. These are useful if your API forwards user input to LLM endpoints or integrates AI tooling.