GraphQL Batching in Actix
Actix-Specific Remediation
To remediate unsafe GraphQL batching in Actix, developers must enforce a maximum batch size at the request extraction layer. Actix provides built-in tools for this via its extractor system and middleware. The solution involves replacing a naive web::Json<Vec<T>> with a custom extractor that rejects oversized batches as soon as the payload is deserialized, before any operation is executed.
First, define a wrapper type with a size limit:
use actix_web::{web, Error, FromRequest, HttpRequest};
use async_graphql::Request as GraphQLRequest;
use std::future::Future;
use std::pin::Pin;

pub struct LimitedBatch(pub Vec<GraphQLRequest>);

impl LimitedBatch {
    pub const MAX_SIZE: usize = 10; // Configurable limit
}

impl FromRequest for LimitedBatch {
    type Error = Error;
    type Future = Pin<Box<dyn Future<Output = Result<Self, Self::Error>>>>;

    fn from_request(req: &HttpRequest, payload: &mut actix_web::dev::Payload) -> Self::Future {
        // Delegate JSON deserialization to the built-in extractor...
        let fut = web::Json::<Vec<GraphQLRequest>>::from_request(req, payload);
        Box::pin(async move {
            let web::Json(batch) = fut.await?;
            // ...then reject oversized batches before any operation runs.
            if batch.len() > LimitedBatch::MAX_SIZE {
                return Err(actix_web::error::ErrorBadRequest(format!(
                    "Batch size exceeds limit of {}",
                    LimitedBatch::MAX_SIZE
                )));
            }
            Ok(LimitedBatch(batch))
        })
    }
}
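The guard at the heart of the extractor reduces to a pure length check, so it can be exercised without a running server. A minimal stand-alone sketch (the helper name check_batch_size is hypothetical, introduced here only for illustration):

```rust
const MAX_SIZE: usize = 10; // mirrors LimitedBatch::MAX_SIZE

// Hypothetical helper isolating the extractor's guard condition.
fn check_batch_size(len: usize) -> Result<(), String> {
    if len > MAX_SIZE {
        return Err(format!("Batch size exceeds limit of {}", MAX_SIZE));
    }
    Ok(())
}

fn main() {
    // A full batch passes; one element more is rejected.
    assert!(check_batch_size(MAX_SIZE).is_ok());
    assert!(check_batch_size(MAX_SIZE + 1).is_err());
    println!("batch guard ok");
}
```

Keeping the comparison strict (> rather than >=) means a batch of exactly MAX_SIZE operations is still accepted.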
Then use it in your Actix route:
async fn graphql_handler(
    schema: web::Data<Schema>,
    batch: LimitedBatch, // the custom extractor replaces web::Json<Vec<_>>
) -> impl Responder {
    let mut responses = Vec::new();
    for request in batch.0 {
        let resp = schema.execute(request).await;
        responses.push(resp);
    }
    web::Json(responses)
}
This ensures that any batch exceeding 10 operations is rejected with a 400 Bad Request before processing begins. The limit can be tuned based on your API’s expected usage and backend capacity. Additionally, consider applying rate limiting via Actix middleware (e.g., the actix-governor crate) to further mitigate abuse. This approach aligns with OWASP API4:2023 (Unrestricted Resource Consumption) guidance on restricting resource consumption through input validation.
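Wiring both defenses together might look like the following sketch, assuming the actix-governor crate for rate limiting and the graphql_handler above mounted at a /graphql path (the path and the per-second/burst numbers are illustrative assumptions, not recommendations):

```rust
use actix_governor::{Governor, GovernorConfigBuilder};
use actix_web::{web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Illustrative limits: refill ~2 requests/second, allow bursts of 5.
    let governor_conf = GovernorConfigBuilder::default()
        .per_second(2)
        .burst_size(5)
        .finish()
        .unwrap();

    HttpServer::new(move || {
        App::new()
            // Per-client rate limiting runs before the handler...
            .wrap(Governor::new(&governor_conf))
            // ...and the LimitedBatch extractor caps batch size per request.
            .route("/graphql", web::post().to(graphql_handler))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```

The two controls are complementary: rate limiting caps how often a client can send requests, while the batch-size check caps how much work a single request can demand.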