SSRF to Cloud Metadata in Axum
How SSRF Cloud Metadata Manifests in Axum
In Axum web applications, Server-Side Request Forgery (SSRF) vulnerabilities that target cloud metadata endpoints often arise when user-controlled input is used to construct outgoing HTTP requests without adequate validation. Axum’s async, extractor-based architecture can inadvertently expose these risks in handlers that proxy requests or integrate with external services. For example, consider an Axum handler that accepts a url query parameter to fetch remote resources:
use std::collections::HashMap;

use axum::extract::Query;
use axum::response::IntoResponse;
use reqwest::Client;

// Vulnerable: the user-supplied `url` flows straight into the HTTP client.
async fn fetch_resource(Query(params): Query<HashMap<String, String>>) -> impl IntoResponse {
    let url = params.get("url").cloned().unwrap_or_default();
    let client = Client::new();
    match client.get(&url).send().await {
        Ok(resp) => resp.text().await.unwrap_or_else(|_| "Error".to_string()),
        Err(e) => format!("Request failed: {}", e),
    }
}
If deployed in a cloud environment (e.g., AWS EC2, GCP Compute Engine, Azure VM), an attacker could supply http://169.254.169.254/latest/meta-data/ (AWS) or http://metadata.google.internal/computeMetadata/v1/ (GCP) to retrieve sensitive instance metadata, including IAM roles, service account tokens, or project identifiers. Axum’s default behavior does not restrict outbound connections, making such SSRF trivial when user input flows directly into reqwest::Client.
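To make the attack concrete, here is a minimal stdlib-only sketch of how an attacker would build the malicious request against the handler above (the /fetch route name and the hand-rolled encoder are illustrative assumptions, not part of Axum):

```rust
// Sketch: constructing the exploit query string for the vulnerable handler.
// Only ':' and '/' are encoded here, which is enough for this payload.
fn percent_encode(input: &str) -> String {
    input
        .chars()
        .map(|c| match c {
            ':' => "%3A".to_string(),
            '/' => "%2F".to_string(),
            c => c.to_string(),
        })
        .collect()
}

fn main() {
    let target = "http://169.254.169.254/latest/meta-data/";
    let request_line = format!("GET /fetch?url={}", percent_encode(target));
    println!("{}", request_line);
    // GET /fetch?url=http%3A%2F%2F169.254.169.254%2Flatest%2Fmeta-data%2F
}
```

The server decodes the parameter, and `fetch_resource` dutifully issues the request to the metadata service on the attacker's behalf.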
Another common pattern involves middleware or tower layers that rewrite or forward requests based on headers like X-Forwarded-For or custom routing logic. If these headers are trusted without validation, they can be manipulated to point to internal metadata services. For instance, a reverse proxy built with Axum might use the Host header to determine backend targets:
use axum::body::Body;
use axum::http::Request;
use axum::middleware::Next;
use axum::response::Response;

// Vulnerable: the client-controlled Host header chooses the backend target.
async fn proxy_middleware(mut req: Request<Body>, next: Next) -> Response {
    let host = req
        .headers()
        .get("host")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("")
        .to_string();
    if host.contains("internal") {
        if let Ok(uri) = format!("http://{}", host).parse() {
            *req.uri_mut() = uri;
        }
    }
    next.run(req).await
}
An attacker could set Host: 169.254.169.254 to bypass internal network restrictions and access AWS metadata. These patterns are particularly dangerous in Axum applications that act as API gateways, webhooks, or service meshes where outbound requests are common.
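The Host-header trick above hinges on recognizing which hosts are metadata endpoints, including alternate encodings of the same address. A minimal stdlib-only sketch of such a check (the helper name and the IP list are illustrative assumptions, not an Axum API):

```rust
use std::net::IpAddr;

/// Illustrative helper: decide whether a host points at a cloud metadata
/// service. Checking resolved IPs catches alternate encodings such as
/// `http://2852039166/`, the decimal form of 169.254.169.254.
fn is_metadata_host(host: &str, resolved: &[IpAddr]) -> bool {
    if host == "169.254.169.254" || host == "metadata.google.internal" {
        return true;
    }
    resolved.iter().any(|ip| match ip {
        // 169.254.0.0/16 link-local covers the AWS and Azure metadata addresses.
        IpAddr::V4(v4) => v4.is_link_local(),
        // AWS IMDS also listens on this IPv6 address.
        IpAddr::V6(v6) => v6.to_string() == "fd00:ec2::254",
    })
}

fn main() {
    let aws: IpAddr = "169.254.169.254".parse().unwrap();
    assert!(is_metadata_host("169.254.169.254", &[aws]));
    assert!(is_metadata_host("2852039166", &[aws])); // decimal-IP bypass, post-resolution
    assert!(!is_metadata_host("api.example.com", &["93.184.216.34".parse().unwrap()]));
    println!("ok");
}
```

Comparing resolved addresses rather than raw strings is what defeats the encoding games; string matching alone is trivially bypassed.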
Axum-Specific Detection
Detecting SSRF to cloud metadata in Axum requires analyzing both code paths and runtime behavior. Since middleBrick performs unauthenticated, black-box scanning, it identifies potential SSRF by injecting payloads targeting known cloud metadata endpoints and monitoring for successful retrieval of sensitive data. For Axum applications, middleBrick’s scanner checks for responses containing metadata-specific strings (e.g., ami-id, instance-id, project-id) after probing parameters like url, callback, redirect, or headers such as X-Forwarded-Host and Host.
During a scan, middleBrick sends sequential probes:
- Parameter fuzzing: Replaces values in query strings and JSON bodies with metadata URLs (AWS, GCP, Azure).
- Header manipulation: tests Host, X-Forwarded-Host, and X-Host headers with metadata IPs.
- Path traversal via URL encoding: attempts bypasses using %2E%2E%2F or %0D%0A injections where Axum's routing might misinterpret paths.
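The endpoints these probes target are the publicly documented cloud metadata routes; a simple sketch of such a payload list (illustrative only, not middleBrick's actual internal payload set):

```rust
/// Publicly documented cloud metadata endpoints that SSRF probes target.
/// (Illustrative subset; a real scanner's payload set is larger and also
/// varies headers, e.g. Azure IMDS additionally requires `Metadata: true`.)
fn metadata_probes() -> Vec<(&'static str, &'static str)> {
    vec![
        ("aws", "http://169.254.169.254/latest/meta-data/"),
        ("gcp", "http://metadata.google.internal/computeMetadata/v1/"),
        ("azure", "http://169.254.169.254/metadata/instance?api-version=2021-02-01"),
    ]
}

fn main() {
    for (provider, url) in metadata_probes() {
        println!("{}: {}", provider, url);
    }
}
```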
When a probe elicits a response containing metadata-specific content (e.g., roleArn or serviceAccount), middleBrick flags the finding under the SSRF check with high severity. The resulting report includes the exact vector (e.g., ?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/) and the snippet of metadata returned, enabling developers to confirm the issue without guesswork.
Importantly, middleBrick does not require source code access or configuration. It works by observing how the Axum application responds to external stimuli, making it ideal for detecting SSRF in staging or production APIs where internal tooling may be limited. For Axum developers, integrating middleBrick into CI via the GitHub Action ensures that SSRF risks to cloud metadata are caught early, especially when deploying services that accept user-defined URLs or headers.
Axum-Specific Remediation
Fixing SSRF vulnerabilities in Axum applications involves validating and sanitizing user input before it influences outbound requests, leveraging Axum’s type system and middleware capabilities. The most effective strategy is to avoid passing raw user input directly to HTTP clients. Instead, use an allow-list of permitted domains or paths. For example, rewrite the earlier fetch_resource handler to validate the URL against a safe list:
use std::collections::HashMap;

use axum::extract::Query;
use axum::response::{IntoResponse, Response};
use reqwest::Client;
use url::Url;

async fn fetch_resource_safe(Query(params): Query<HashMap<String, String>>) -> Response {
    let allowed_hosts = ["api.example.com", "cdn.trusted.net"];
    let url_str = params.get("url").cloned().unwrap_or_default();
    let url = match Url::parse(&url_str) {
        Ok(u) => u,
        Err(_) => return "Invalid URL".into_response(),
    };
    if !allowed_hosts.contains(&url.host_str().unwrap_or("")) {
        return "Host not allowed".into_response();
    }
    let client = Client::new();
    match client.get(url.as_str()).send().await {
        Ok(resp) => resp
            .text()
            .await
            .unwrap_or_else(|_| "Error".to_string())
            .into_response(),
        Err(e) => format!("Request failed: {}", e).into_response(),
    }
}
This approach uses the url crate to parse and validate the host, ensuring only predefined domains are accessible. For Axum applications that need to proxy requests (e.g., API gateways), consider using middleware to enforce policies centrally. Axum's middleware::from_fn (built on tower) can layer validation logic:
use axum::body::Body;
use axum::http::{Request, StatusCode};
use axum::middleware::{self, Next};
use axum::response::Response;
use axum::routing::get;
use axum::Router;

async fn block_metadata_hosts(req: Request<Body>, next: Next) -> Response {
    // Check both the request-line URI (absolute-form, as sent to proxies)
    // and the Host header for known metadata endpoints.
    let host = req
        .uri()
        .host()
        .map(str::to_owned)
        .or_else(|| {
            req.headers()
                .get("host")
                .and_then(|v| v.to_str().ok())
                .map(str::to_owned)
        })
        .unwrap_or_default();
    if host == "169.254.169.254" || host.contains("metadata.google.internal") {
        return Response::builder()
            .status(StatusCode::FORBIDDEN)
            .body(Body::from("Forbidden"))
            .unwrap();
    }
    next.run(req).await
}

let app = Router::new()
    .route("/fetch", get(fetch_resource))
    .layer(middleware::from_fn(block_metadata_hosts));
This middleware blocks requests to known metadata IP addresses and hostnames at the edge. Additionally, Axum’s extractors can be used to create custom validation types. Define a SafeUrl extractor that only accepts URLs from an allow-list:
use std::collections::HashMap;

use axum::async_trait;
use axum::extract::{FromRequestParts, Query};
use axum::http::{request::Parts, StatusCode};

#[derive(Debug)]
struct SafeUrl(url::Url);

#[async_trait]
impl<S> FromRequestParts<S> for SafeUrl
where
    S: Send + Sync,
{
    type Rejection = (StatusCode, String);

    async fn from_request_parts(parts: &mut Parts, state: &S) -> Result<Self, Self::Rejection> {
        let Query(query) = Query::<HashMap<String, String>>::from_request_parts(parts, state)
            .await
            .map_err(|_| (StatusCode::BAD_REQUEST, "Invalid query".to_string()))?;
        let url_str = query
            .get("url")
            .ok_or_else(|| (StatusCode::BAD_REQUEST, "Missing url".to_string()))?;
        let url = url::Url::parse(url_str)
            .map_err(|_| (StatusCode::BAD_REQUEST, "Invalid URL".to_string()))?;
        let host = url
            .host_str()
            .ok_or_else(|| (StatusCode::BAD_REQUEST, "No host".to_string()))?;
        let allowed = ["api.example.com"];
        if !allowed.contains(&host) {
            return Err((StatusCode::FORBIDDEN, "Host not allowed".to_string()));
        }
        Ok(SafeUrl(url))
    }
}
// Usage in handler:
async fn fetch_with_extractor(SafeUrl(url): SafeUrl) -> impl IntoResponse {
    let client = Client::new();
    match client.get(url.as_str()).send().await {
        Ok(resp) => resp.text().await.unwrap_or_else(|_| "Error".to_string()),
        Err(e) => format!("Failed: {}", e),
    }
}
By encapsulating validation in extractors or middleware, Axum applications maintain clean handler logic while enforcing SSRF protections consistently. Always combine input validation with outbound network controls (e.g., security groups, egress firewalls) for defense in depth, but remember that middleBrick’s role is to detect and report these issues so remediation can be applied at the code level.
Frequently Asked Questions
Can middleBrick detect SSRF to cloud metadata in Axum applications that use TLS or self-signed certificates for outbound requests?
Does using Axum’s built-in Json or Form extractors prevent SSRF to cloud metadata, or is additional validation still needed?
No. Using Json or Form extractors alone does not prevent SSRF. These extractors only ensure the incoming payload is correctly formatted (e.g., valid JSON or URL-encoded form data). They do not validate the semantic content of fields like url or callback. If such fields are later used to construct outbound HTTP requests without validation, SSRF to cloud metadata remains possible. Additional validation—such as allow-listing domains or blocking known metadata IPs—is required regardless of the input extraction method.
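To make that distinction concrete, here is a minimal sketch of the semantic check a Json or Form extractor will not do for you (the host parser below is deliberately naive and for illustration only; real code should use the url crate as shown in the remediation section):

```rust
// Naive host extraction for illustration only; use the `url` crate in real code.
fn host_of(url: &str) -> Option<&str> {
    let rest = url
        .strip_prefix("http://")
        .or_else(|| url.strip_prefix("https://"))?;
    // The host ends at the first path, port, query, or fragment delimiter.
    rest.split(&['/', ':', '?', '#'][..]).next()
}

// Semantic validation of a `url` field that Json/Form extraction never performs.
fn is_allowed(url: &str) -> bool {
    matches!(host_of(url), Some(h) if ["api.example.com"].contains(&h))
}

fn main() {
    assert!(is_allowed("https://api.example.com/v1/items"));
    assert!(!is_allowed("http://169.254.169.254/latest/meta-data/"));
    println!("ok");
}
```

Schema validation says the field is a string; only a check like is_allowed says it points somewhere safe.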