Out Of Bounds Read in Actix with Bearer Tokens
Out Of Bounds Read in Actix with Bearer Tokens — how this specific combination creates or exposes the vulnerability
An Out Of Bounds Read occurs when an application reads memory beyond the intended buffer, often returning adjacent data or causing crashes. In Actix web applications, this can arise when byte or string operations rely on unchecked offsets or lengths derived from request inputs. When combined with Bearer Token handling, the risk is compounded if token parsing, validation, or storage uses unsafe slicing or indexing on headers and payloads.
Consider an Actix service that extracts a Bearer token from the Authorization header and performs manual byte-level slicing to decode a substring (for example, isolating the token value after the Bearer prefix). If the service trusts a length value derived from user input or an incomplete header check, it may attempt to read beyond the actual buffer size. This can expose adjacent memory, leading to information disclosure or instability. Even when the framework manages the header as a high-level structure, custom middleware or handlers that bypass safe abstractions reintroduce the possibility.
During a black-box scan, middleBrick runs checks that include Input Validation and Unsafe Consumption, which test how your API handles malformed or oversized Authorization headers. For instance, a probe may send an Authorization header with an extremely long Bearer token or one with embedded null bytes. If the application uses unchecked indexing or unsafe conversions, these probes can trigger out-of-bounds behavior that middleBrick detects as a finding. Because the scanner tests unauthenticated endpoints, it can surface this class of issue without requiring credentials, highlighting places where token handling intersects with memory safety in Actix routes.
Real-world patterns matter: an out-of-bounds read does not always cause a crash. It can quietly leak stack contents or internal tokens when debug or logging logic also depends on the same unsafe buffers. MiddleBrick’s LLM/AI Security checks do not apply here, but the scanner’s Inventory Management and Data Exposure checks look for risky data flows that might expose sensitive information through token-related operations. The interplay of header parsing, byte manipulation, and token usage is subtle and often hidden in custom extractor implementations.
Example of unsafe code that can lead to out-of-bounds reads:
use actix_web::{HttpRequest, HttpResponse};

async fn unsafe_token_slice(req: HttpRequest) -> HttpResponse {
    // Naive extraction: assumes header format and length are safe
    if let Some(auth) = req.headers().get("Authorization") {
        if let Ok(auth_str) = auth.to_str() {
            if auth_str.starts_with("Bearer ") {
                // Dangerous: fixed slice without bounds verification. If the
                // header is shorter than 15 bytes this is an out-of-range read
                // that panics in safe Rust; with unchecked indexing the same
                // pattern reads adjacent memory.
                let token = &auth_str[7..15];
                return HttpResponse::Ok().body(token.to_string());
            }
        }
    }
    HttpResponse::Unauthorized().body("No token")
}
In this snippet, the fixed slice 7..15 assumes the token length and header format are guaranteed. A short or malformed Authorization header makes the slice out of range: in safe Rust this panics, crashing the request (a denial-of-service vector), and the same pattern written with unchecked indexing (such as get_unchecked) becomes a true out-of-bounds read. Even dynamic slicing based on string indices can be unsafe if the indices are derived from unchecked user input. MiddleBrick’s checks for Input Validation and Property Authorization are designed to surface these issues by probing with varied header values and inspecting how the application responds.
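As a standalone sketch of the failure mode, the snippet below slices a deliberately short Authorization value with the same fixed range 7..15 and catches the resulting panic (the header value here is illustrative, not from the article):

```rust
use std::panic;

fn main() {
    // A short Authorization value: 8 bytes, but the fixed slice needs 15.
    let auth = "Bearer x";
    let result = panic::catch_unwind(|| {
        // Out-of-range slice: panics in safe Rust instead of reading memory.
        let token = &auth[7..15];
        token.to_string()
    });
    // In a real handler this panic would abort the request.
    assert!(result.is_err());
    println!("fixed-index slice panicked as expected");
}
```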
Bearer Tokens-Specific Remediation in Actix — concrete code fixes
Remediation focuses on safe parsing, explicit bounds checks, and leveraging Actix’s built-in extractor patterns. Avoid manual slicing; instead, use structured extraction and validation to ensure indices and lengths stay within the actual buffer.
Safe approach using split and validation:
use actix_web::{HttpRequest, HttpResponse};

fn extract_bearer_token(auth_header: &str) -> Option<&str> {
    const PREFIX: &str = "Bearer ";
    if auth_header.starts_with(PREFIX) {
        // Safe: starts_with guarantees the header is at least PREFIX.len() bytes
        let token = &auth_header[PREFIX.len()..];
        // Ensure token is non-empty and does not rely on unchecked indices
        if !token.is_empty() {
            return Some(token);
        }
    }
    None
}

async fn safe_token_handler(req: HttpRequest) -> HttpResponse {
    match req.headers().get("Authorization") {
        Some(hdr) => {
            if let Ok(hdr_str) = hdr.to_str() {
                if let Some(token) = extract_bearer_token(hdr_str) {
                    // token is safely bounded within hdr_str
                    return HttpResponse::Ok().body(token.to_string());
                }
            }
            HttpResponse::Unauthorized().body("Invalid Authorization header")
        }
        None => HttpResponse::Unauthorized().body("Missing Authorization header"),
    }
}
This pattern avoids fixed indexing by deriving the slice start from the prefix length and checking that the resulting token is non-empty. It also cleanly separates parsing logic from the handler, making it easier for middleBrick’s checks during the scan to validate data flows and authorization handling.
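The bounds behavior of extract_bearer_token can be checked in isolation. The snippet below repeats the helper from the handler above so it runs standalone and exercises the edge cases a scanner would probe:

```rust
// Copy of extract_bearer_token from the handler above, so this runs standalone.
fn extract_bearer_token(auth_header: &str) -> Option<&str> {
    const PREFIX: &str = "Bearer ";
    if auth_header.starts_with(PREFIX) {
        let token = &auth_header[PREFIX.len()..];
        if !token.is_empty() {
            return Some(token);
        }
    }
    None
}

fn main() {
    assert_eq!(extract_bearer_token("Bearer abc123"), Some("abc123"));
    assert_eq!(extract_bearer_token("Bearer "), None); // empty token rejected
    assert_eq!(extract_bearer_token("Basic abc"), None); // wrong scheme
    assert_eq!(extract_bearer_token("Bear"), None); // shorter than the prefix: no panic
    println!("all edge cases return None safely");
}
```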
For more complex scenarios, use dedicated extractors that enforce format and length constraints:
use actix_web::{dev::Payload, error::ErrorUnauthorized, Error, FromRequest, HttpRequest, HttpResponse};
use futures_util::future::{err, ok, Ready};

struct BearerToken(String);

impl FromRequest for BearerToken {
    type Error = Error;
    type Future = Ready<Result<Self, Self::Error>>;

    fn from_request(req: &HttpRequest, _: &mut Payload) -> Self::Future {
        const PREFIX: &str = "Bearer ";
        let token = req
            .headers()
            .get("Authorization")
            .and_then(|v| v.to_str().ok())
            // strip_prefix returns None when the prefix is absent, so no
            // index arithmetic can go out of range
            .and_then(|s| s.strip_prefix(PREFIX))
            .filter(|s| !s.is_empty())
            .map(str::to_owned);
        match token {
            Some(t) => ok(BearerToken(t)),
            None => err(ErrorUnauthorized("invalid bearer")),
        }
    }
}
async fn extractor_handler(token: BearerToken) -> HttpResponse {
    HttpResponse::Ok().body(token.0)
}
This extractor centralizes validation and ensures that any use of the token value operates on a properly bounded string. By relying on Option combinators and explicit filters, you eliminate off-by-one and out-of-bounds risks that manual slicing introduces. MiddleBrick’s scans, including checks for Authentication and Property Authorization, can more reliably verify these patterns because token extraction follows safe, idiomatic practices.
Additional remediation guidance:
- Always validate header presence, format, and length before slicing.
- Prefer high-level string methods (e.g., strip_prefix) over numeric indices.
- Log and handle parse failures without exposing internal buffer details.
- In CI/CD, use middleBrick’s GitHub Action to fail builds if scans detect input validation or authentication weaknesses related to token handling.
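The strip_prefix recommendation above can be illustrated in a few lines: unlike a fixed numeric slice, it returns None when the prefix is absent or the input is too short, so there is nothing to go out of range:

```rust
fn main() {
    // strip_prefix does the length and prefix check in one call.
    let good = "Bearer abc123".strip_prefix("Bearer ");
    assert_eq!(good, Some("abc123"));

    // Shorter than the prefix: no panic, no out-of-range read, just None.
    let short = "Bear".strip_prefix("Bearer ");
    assert_eq!(short, None);

    println!("strip_prefix handles both cases safely");
}
```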