Path Traversal in Axum with MongoDB
How This Specific Combination Creates or Exposes the Vulnerability
Path Traversal occurs when an attacker manipulates a file path or identifier to access resources outside the intended directory or scope. In an Axum application using MongoDB as the backend, the risk typically arises from unsafe handling of user-supplied input that influences file paths, object IDs, or lookup keys before data is stored or retrieved from MongoDB. Even though MongoDB itself does not operate with a traditional filesystem path model, Axum routes and request handling can introduce traversal-like behaviors if input is not validated or sanitized.
Consider an endpoint that accepts a document identifier or a logical grouping name as a URL parameter, for example /api/data/{group}/{document_id}. If the application directly concatenates or uses these values to construct MongoDB document keys or collection names without validation, an attacker may supply sequences like ../../etc/passwd or crafted ObjectId patterns to probe for unintended access. While MongoDB queries rely on BSON structures rather than raw filesystem paths, unsafe string handling in Axum route extraction can still lead to unauthorized data access, information leakage, or unexpected query targeting.
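Even before any database call, the shape of a route segment can be checked with the standard library alone. Below is a minimal std-only sketch; the helper name is illustrative and not part of Axum or MongoDB:

```rust
use std::path::{Component, Path};

/// Returns true if `segment` contains no parent-directory or root
/// components, i.e. it cannot escape a joined base path or scope.
/// (Illustrative helper; backslashes are rejected explicitly because
/// Unix treats them as ordinary filename characters.)
fn is_safe_segment(segment: &str) -> bool {
    !segment.contains('\\')
        && Path::new(segment)
            .components()
            .all(|c| matches!(c, Component::Normal(_)))
}

fn main() {
    // A benign document id passes the check.
    assert!(is_safe_segment("64a1f0c2e4b0"));
    // Traversal sequences are rejected before they reach a query
    // or a filesystem join.
    assert!(!is_safe_segment("../../etc/passwd"));
    assert!(!is_safe_segment("/etc/passwd"));
    println!("segment checks passed");
}
```

Running this check in the handler, before the value participates in any lookup or path construction, keeps the rejection logic independent of how the value is later used.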
Additionally, if Axum serves static assets or constructs file-system paths for logs or attachments based on user input, unchecked traversal characters can escape the intended base directory. This is particularly relevant when integrating Axum with storage abstractions or when generating dynamic references that ultimately resolve to filesystem operations downstream. The combination of Axum’s flexible routing and MongoDB’s document-oriented model increases the importance of strict input validation to avoid indirect path traversal through logically related identifiers.
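When a handler does resolve filesystem paths (for static assets, logs, or attachments), canonicalizing the joined path and verifying it remains under the base directory catches traversal after symlinks and `..` segments are resolved. A sketch using only the standard library, assuming the target file exists so that `canonicalize` can resolve it:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Resolve `requested` under `base` and verify the canonical result
/// still lives inside `base`. (Sketch; `canonicalize` requires the
/// path to exist on disk.)
fn resolve_under_base(base: &Path, requested: &str) -> io::Result<PathBuf> {
    let base = base.canonicalize()?;
    let candidate = base.join(requested).canonicalize()?;
    if candidate.starts_with(&base) {
        Ok(candidate)
    } else {
        Err(io::Error::new(
            io::ErrorKind::PermissionDenied,
            "path escapes base directory",
        ))
    }
}

fn main() -> io::Result<()> {
    // Demo directory and file; names are illustrative.
    let base = std::env::temp_dir().join("axum_static_demo");
    fs::create_dir_all(&base)?;
    fs::write(base.join("app.css"), "body{}")?;

    // A normal asset resolves inside the base directory.
    assert!(resolve_under_base(&base, "app.css").is_ok());
    // A traversal attempt resolving outside the base is rejected.
    assert!(resolve_under_base(&base, "../../etc/passwd").is_err());
    println!("canonicalization checks passed");
    Ok(())
}
```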
Another scenario involves the use of user-controlled fields within MongoDB documents that are later interpreted as paths by auxiliary tooling or export utilities. For instance, storing user-supplied metadata with embedded sequences like ../ may not affect the initial query in Axum + MongoDB, but could be exploited later during data export, backup scripts, or log processing. Therefore, treating all user input as potentially malicious is essential, even when the immediate backend is a structured database like MongoDB.
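For the downstream-tooling case, one defensive pattern is to keep only the final path component of any stored, user-supplied name before an export or backup step uses it as a filename. A std-only sketch (the function name and flow are assumptions, not a real export utility):

```rust
use std::path::Path;

/// Strip directory components from a stored, user-supplied name so
/// that an export step cannot be steered outside its output folder.
fn export_file_name(stored: &str) -> Option<String> {
    let name = Path::new(stored).file_name()?.to_str()?;
    // `file_name` already drops `..` and `/` prefixes; reject
    // backslashes, which Unix treats as ordinary characters.
    if name.contains('\\') {
        return None;
    }
    Some(name.to_string())
}

fn main() {
    // Directory components in stored metadata are discarded.
    assert_eq!(
        export_file_name("../../etc/passwd").as_deref(),
        Some("passwd")
    );
    // A plain filename passes through unchanged.
    assert_eq!(export_file_name("report.csv").as_deref(), Some("report.csv"));
    // A bare `..` has no final component and is rejected outright.
    assert_eq!(export_file_name(".."), None);
    println!("export name checks passed");
}
```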
In security scans performed by middleBrick, such misconfigurations are flagged under checks like Input Validation and Property Authorization. The scanner analyzes unauthenticated attack surfaces and cross-references OpenAPI specifications with observed runtime behavior to highlight risky patterns. For Axum services integrated with MongoDB, ensuring strict schema validation, using typed extractors, and avoiding raw string concatenation are foundational controls that reduce the likelihood of traversal-related issues.
MongoDB-Specific Remediation in Axum: Concrete Code Fixes
To mitigate path traversal risks in an Axum application using MongoDB, apply strict validation and canonicalization at the boundaries where user input enters the system. Prefer typed extractors and schema-driven parsing instead of raw string manipulation. Below are concrete patterns and code examples for secure handling.
1. Validate and sanitize route parameters
Use strongly typed extractors and reject unexpected characters. For identifiers, prefer MongoDB’s ObjectId type to avoid string-based traversal risks.
```rust
use axum::{
    extract::{Path, State},
    http::StatusCode,
    response::IntoResponse,
    routing::get,
    Json, Router,
};
use mongodb::{
    bson::{doc, oid::ObjectId, Document},
    Client,
};
use std::net::SocketAddr;

async fn get_document(
    Path(id): Path<String>,
    State(client): State<Client>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
    // Reject identifiers containing traversal-like sequences.
    if id.contains("..") || id.contains('/') || id.contains('\\') {
        return Err((StatusCode::BAD_REQUEST, "Invalid identifier".into()));
    }
    // Parsing into ObjectId rejects anything that is not a valid id.
    let oid = ObjectId::parse_str(&id)
        .map_err(|_| (StatusCode::BAD_REQUEST, "Invalid ObjectId".into()))?;
    let db = client.database("mydb");
    let collection = db.collection::<Document>("docs");
    let doc = collection
        .find_one(doc! { "_id": oid }, None)
        .await
        .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
        .ok_or((StatusCode::NOT_FOUND, "Not found".to_string()))?;
    Ok(Json(doc))
}

#[tokio::main]
async fn main() {
    let client = Client::with_uri_str("mongodb://localhost:27017")
        .await
        .unwrap();
    let app = Router::new()
        .route("/api/data/:id", get(get_document))
        .with_state(client);
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}
```
2. Control collection and database names
Avoid deriving collection names directly from user input. If dynamic collection references are necessary, map them through a strict allowlist.
```rust
use axum::{
    extract::{Path, State},
    http::StatusCode,
    response::IntoResponse,
    Json,
};
use mongodb::Client;

/// Map a user-supplied name onto a fixed allowlist of collections.
fn safe_collection_name(input: &str) -> Option<&str> {
    match input {
        "logs" | "users" | "events" => Some(input),
        _ => None,
    }
}

async fn write_log(
    Path(requested): Path<String>,
    State(client): State<Client>,
    Json(payload): Json<serde_json::Value>,
) -> impl IntoResponse {
    // Anything outside the allowlist is rejected before it can name
    // a collection.
    let collection_name = match safe_collection_name(&requested) {
        Some(name) => name,
        None => return (StatusCode::BAD_REQUEST, "Invalid collection").into_response(),
    };
    let db = client.database("audit");
    let collection = db.collection::<serde_json::Value>(collection_name);
    match collection.insert_one(payload, None).await {
        Ok(_) => (StatusCode::CREATED, "Logged").into_response(),
        Err(_) => (StatusCode::INTERNAL_SERVER_ERROR, "Insert failed").into_response(),
    }
}
```
3. Sanitize metadata and document fields
If documents contain user-controlled fields that might be used in downstream processing, normalize or escape traversal sequences before storage or export.
```rust
use axum::{extract::State, http::StatusCode, response::IntoResponse, Json};
use mongodb::{
    bson::{Bson, Document},
    Client,
};

/// Replace traversal sequences with inert markers before storage.
fn sanitize_field(value: &str) -> String {
    value
        .replace("..", "_DOT_")
        .replace('/', "_SLASH_")
        .replace('\\', "_BSLASH_")
}

async fn store_sanitized(
    State(client): State<Client>,
    Json(mut payload): Json<Document>,
) -> impl IntoResponse {
    // Normalize user-controlled string fields that downstream tooling
    // might later interpret as paths.
    if let Some(Bson::String(s)) = payload.get("note").cloned() {
        payload.insert("note", Bson::String(sanitize_field(&s)));
    }
    let db = client.database("mydb");
    let collection = db.collection::<Document>("entries");
    match collection.insert_one(payload, None).await {
        Ok(_) => StatusCode::CREATED.into_response(),
        Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
    }
}
```
4. Apply schema validation in MongoDB
Use MongoDB JSON Schema validation to restrict the structure and content of stored documents, reducing the impact of malicious input that reaches the database.
```rust
use mongodb::{bson::doc, Client};

/// Attach a JSON Schema validator to the `entries` collection.
async fn apply_schema_validation(client: &Client) -> mongodb::error::Result<()> {
    let validator = doc! {
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["username"],
            "properties": {
                "username": {
                    "bsonType": "string",
                    "pattern": "^[a-zA-Z0-9_-]+$"
                },
                "path": {
                    "bsonType": "string",
                    // Reject slashes, backslashes, and dots entirely.
                    "pattern": "^[^/\\\\.]+$"
                }
            }
        }
    };
    let cmd = doc! {
        "collMod": "entries",
        "validator": validator,
        "validationLevel": "moderate",
        "validationAction": "error"
    };
    client.database("mydb").run_command(cmd, None).await?;
    Ok(())
}
```
These practices align with middleBrick’s checks for Input Validation and Property Authorization. By combining typed routing, allowlists, field sanitization, and schema rules, you reduce the attack surface for path traversal and related issues in an Axum + MongoDB stack.
Related CWEs (Input Validation)
| CWE ID | Name | Severity |
|---|---|---|
| CWE-20 | Improper Input Validation | HIGH |
| CWE-22 | Path Traversal | HIGH |
| CWE-74 | Injection | CRITICAL |
| CWE-77 | Command Injection | CRITICAL |
| CWE-78 | OS Command Injection | CRITICAL |
| CWE-79 | Cross-site Scripting (XSS) | HIGH |
| CWE-89 | SQL Injection | CRITICAL |
| CWE-90 | LDAP Injection | HIGH |
| CWE-91 | XML Injection | HIGH |
| CWE-94 | Code Injection | CRITICAL |