Severity: HIGH

Memory Leak in Axum

How Memory Leaks Manifest in Axum

Memory leaks in Axum applications typically stem from improper resource management in async request handlers. Rust has no garbage collector: memory is reclaimed when its owner is dropped, so a spawned task or shared value that outlives the request keeps its allocations alive and gradually consumes more memory over time.

A common pattern involves Tokio tasks that spawn but never complete. For example, a handler that starts a background task to process data but fails to await or cancel it properly:

use axum::response::IntoResponse;
use std::time::Duration;

async fn leaky_handler() -> impl IntoResponse {
    tokio::spawn(async {
        // Background processing that never completes
        loop {
            // Do work
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
    });

    // Handler returns immediately, but the spawned task keeps
    // running forever and holding whatever memory it captured
    "Processing started"
}

Another frequent issue occurs with streaming responses that don't properly terminate. Streaming bodies (for example, those built with axum's Body::from_stream) require careful management:

use axum::{body::Body, response::IntoResponse};
use std::time::Duration;
use tokio_stream::wrappers::ReceiverStream;

async fn streaming_leak() -> impl IntoResponse {
    let (tx, rx) = tokio::sync::mpsc::channel(10);

    // Spawn a task that never stops sending
    tokio::spawn(async move {
        loop {
            // Ignoring the send error means the task keeps running
            // (and holding memory) after the client disconnects
            let _ = tx.send(Ok::<_, std::io::Error>("data")).await;
            tokio::time::sleep(Duration::from_millis(10)).await;
        }
    });

    Body::from_stream(ReceiverStream::new(rx))
}

Handlers and middleware that capture shared state in detached tasks also cause leaks. Cloning an Arc out of State into a task that is never awaited keeps the state alive, and an unbounded collection behind it grows without limit:

use axum::extract::{Json, State};
use axum::response::IntoResponse;
use std::sync::Arc;
use tokio::sync::Mutex;

async fn middleware_leak(
    State(state): State<Arc<Mutex<Vec<String>>>>,
    Json(payload): Json<Request>,
) -> impl IntoResponse {
    // The Arc clone moves into the task, keeping the state alive
    let _handle = tokio::spawn(async move {
        state.lock().await.push(payload.data);
    });

    // The JoinHandle is dropped without being awaited, and the Vec
    // grows on every request with nothing ever removing entries
    "Processing"
}

Axum-Specific Detection

Detecting memory leaks in Axum requires monitoring both application behavior and runtime metrics. The most effective approach combines automated scanning with runtime observation.

middleBrick's API security scanner can identify memory leak patterns through its Property Authorization checks. When scanning Axum endpoints, it examines:

  • Response size limits and potential unbounded growth
  • Streaming endpoint configurations that might not terminate properly
  • Background task spawning patterns that lack proper cleanup
  • State management across async boundaries

For runtime detection, read Tokio's RuntimeMetrics (obtained from a runtime Handle) to watch the alive-task count:

#[tokio::main]
async fn main() {
    // RuntimeMetrics is obtained from the running runtime's handle
    let metrics = tokio::runtime::Handle::current().metrics();

    // An alive-task count that climbs steadily under constant load
    // suggests tasks are being spawned but never complete
    let task_count = metrics.num_alive_tasks();
    log::info!("alive tasks: {}", task_count);

    // Tokio does not report memory usage; pair the task count with
    // process RSS from the OS or an instrumented global allocator
}

Another detection strategy uses Axum's Extension extractor to track in-flight requests. A total request count always grows, so count entries and exits instead:

use axum::{extract::Extension, response::IntoResponse};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Extension requires Clone, so share the counter behind an Arc
#[derive(Clone)]
struct InFlightCounter(Arc<AtomicU64>);

async fn monitor_requests(
    Extension(counter): Extension<InFlightCounter>,
) -> impl IntoResponse {
    // Increment on entry...
    let in_flight = counter.0.fetch_add(1, Ordering::SeqCst) + 1;
    if in_flight > 1_000 {
        log::warn!("{} requests in flight; possible leak", in_flight);
    }

    // ... handle the request ...

    // ...and decrement on exit; a count that never comes back down
    // means requests (or the resources they hold) are not completing
    counter.0.fetch_sub(1, Ordering::SeqCst);
    "OK"
}

Axum-Specific Remediation

Fixing memory leaks in Axum requires proper async resource management and cleanup patterns. The most critical remediation is to await spawned tasks, or to keep a handle (a JoinHandle, its AbortHandle, or a tokio_util CancellationToken) so background work can be cancelled.

Instead of fire-and-forget spawning, pass a cancellation token into the task so it can be stopped and can clean up:

use axum::response::IntoResponse;
use tokio_util::sync::CancellationToken;

async fn safe_handler() -> impl IntoResponse {
    let token = CancellationToken::new();
    let child = token.child_token();

    // Spawn with cooperative cancellation
    tokio::spawn(async move {
        tokio::select! {
            _ = child.cancelled() => {
                // Task was cancelled; clean up resources
                log::info!("Background task cancelled");
            }
            _ = process_data() => {
                // Task completed normally
            }
        }
    });

    // Store `token` in shared state (or return an id for it) so
    // `token.cancel()` can be called later, e.g. on shutdown
    "Processing started with cancellation support"
}

For streaming responses, always implement proper termination:

use std::time::Duration;
use tokio_stream::{wrappers::ReceiverStream, Stream};

async fn safe_streaming() -> impl Stream<Item = &'static str> + Send {
    let (tx, rx) = tokio::sync::mpsc::channel(10);

    tokio::spawn(async move {
        // Bounded to 100 iterations instead of looping forever
        for _ in 0..100 {
            // A send error means the receiver (and the client) is
            // gone; stop producing instead of running on
            if tx.send("data").await.is_err() {
                break;
            }
            tokio::time::sleep(Duration::from_millis(10)).await;
        }
        // `tx` is dropped here, closing the channel and ending the stream
    });

    ReceiverStream::new(rx)
}

Handlers and middleware should use axum::extract::State and finish their work within the request lifetime:

use axum::extract::{Json, State};
use axum::http::StatusCode;
use axum::response::IntoResponse;
use std::sync::Arc;
use tokio::sync::Mutex;

async fn safe_middleware(
    State(state): State<Arc<Mutex<Vec<String>>>>,
    Json(payload): Json<Request>,
) -> Result<impl IntoResponse, StatusCode> {
    // Run the work on the blocking pool and await it, so the task
    // completes within the request lifetime
    let result = tokio::task::spawn_blocking(move || process_payload(payload.data))
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    // Update shared state only after the task has finished
    state.lock().await.push(result);
    Ok("Processing complete")
}

Frequently Asked Questions

How can I tell if my Axum application has a memory leak?
Monitor your Tokio runtime metrics for a growing alive-task count, and watch process memory (RSS) over time. Use middleBrick's Property Authorization scan to check for unbounded response sizes and improper streaming configurations. Look for patterns where background tasks spawn without proper cancellation or where streaming responses don't terminate.
What's the best way to handle background tasks in Axum without causing leaks?
Keep a JoinHandle or AbortHandle (or a CancellationToken) for cancellable tasks, await spawned tasks when possible, or implement proper cleanup on drop. Avoid fire-and-forget spawning patterns. For long-running operations, consider tokio::task::spawn_blocking with proper result handling, or store cancellation handles in request-scoped state that gets cleaned up when the request completes.