Out-of-Bounds Write in AdonisJS with API Keys
Out-of-Bounds Write in AdonisJS with API Keys — how this combination creates or exposes the vulnerability
An out-of-bounds write occurs when an application writes data beyond the boundary of an allocated memory region, which can corrupt adjacent memory and lead to crashes or, in native code, arbitrary code execution. In AdonisJS, this risk can emerge when API keys are handled in request-processing pipelines, especially if input validation or buffer management is incomplete. AdonisJS is a Node.js framework that encourages structured routing and validation; however, if API key values are accepted from untrusted sources and used to index arrays, buffers, or typed arrays without bounds checks, an attacker can supply a key that maps to an invalid index.
Consider a scenario where an API key is expected to reference a per-request context stored in an in-memory array. If the key is parsed as an integer and used directly as an array index, a key such as 999999 triggers an out-of-bounds assignment when the code executes context[999999] = payload. In pure JavaScript the engine handles this safely by growing a sparse array, but the consequences are still serious: unbounded growth consumes memory, and attacker-chosen indices can overwrite legitimate entries and corrupt application state. If the same unchecked index later flows into a typed array, a Buffer, or a native addon, the write can become a true out-of-bounds memory access. This is particularly dangerous when the same process holds sensitive state, such as authentication or rate-limiting data, because corruption can bypass expected guards.
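The anti-pattern above can be sketched in a few lines. This is an illustrative, framework-agnostic example; storeMetadata and context are hypothetical names, not AdonisJS APIs.

```typescript
// Hypothetical vulnerable pattern: an attacker-controlled "key" is
// coerced to a number and used directly as an array index.
const context: Array<{ payload: string }> = [];

function storeMetadata(apiKey: string, payload: string): number {
  const index = parseInt(apiKey, 10); // untrusted numeric conversion
  context[index] = { payload };       // no bounds check: sparse array growth
  return context.length;
}

// A key like "999999" silently grows the array to one million slots,
// consuming memory and letting the attacker choose which slot is written.
const length = storeMetadata('999999', 'attacker-data');
```

The write itself is memory-safe in V8, but the attacker now controls both the index and the stored value, which is exactly the state corruption the surrounding text warns about.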
Another vector involves buffer handling. AdonisJS applications that interface with native addons or perform binary data processing (e.g., for token encoding) might allocate fixed-size buffers for API key material. If an API key longer than expected is supplied and the native code does not check bounds, the write can overflow the buffer and overwrite adjacent memory, including return addresses or function pointers. Even without native code, careless use of Buffer.allocUnsafe can create related problems: when writes based on untrusted key lengths leave part of the buffer untouched, the remaining bytes contain stale, uninitialized process memory. The framework does not automatically sanitize key length or enforce boundaries, so developers must treat API keys as untrusted input.
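A minimal sketch of the Buffer.allocUnsafe concern, assuming a hypothetical key-encoding helper. Note that Node's own Buffer methods are bounds-checked (writes are truncated, not overflowed); the pure-JS risk shown here is uninitialized memory in the unwritten tail, while true overflows require native code.

```typescript
import { Buffer } from 'buffer';

// allocUnsafe returns uninitialized memory: if the key is shorter than the
// buffer, the tail may contain stale bytes from earlier allocations.
function encodeKeyUnsafe(apiKey: string): Buffer {
  const buf = Buffer.allocUnsafe(32); // contents are whatever was in memory
  buf.write(apiKey, 'utf8');          // only the key's bytes are overwritten
  return buf;
}

// alloc zero-fills the buffer first, so unwritten bytes are always 0.
function encodeKeySafe(apiKey: string): Buffer {
  const buf = Buffer.alloc(32);
  buf.write(apiKey, 'utf8');
  return buf;
}

const safe = encodeKeySafe('short-key'); // 9 bytes written, 23 zero bytes
```

Preferring Buffer.alloc (or explicitly zeroing the tail) removes the disclosure risk at a small allocation cost.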
In the context of the 12 security checks run by middleBrick, an unauthenticated scan can detect patterns that suggest out-of-bounds write risks, such as missing length validation on API key fields, use of unchecked integer conversions, or unsafe buffer operations. The scanner cross-references the OpenAPI/Swagger specification, resolving $ref definitions to identify where API key schemas lack maximum length constraints or where runtime data diverges from the spec. Findings are reported with severity and remediation guidance, noting that the scanner detects and reports but does not fix or block execution.
Notably, this combination does not imply that AdonisJS itself is flawed; it highlights how framework features can be misused when input handling is incomplete. The risk is elevated when API keys are used both for routing decisions and as array indices or buffer offsets. Developers should ensure that all API key usage is validated for type, length, and range before being used in memory-sensitive operations.
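The type/length/range validation described above can be centralized in small guard helpers. These are illustrative, hypothetical functions (the limits 32–64 and the hex charset are assumptions matching the remediation examples below), not part of AdonisJS.

```typescript
// Guard 1: validate type, length, and character set before an API key
// is used anywhere in the pipeline.
function assertSafeKey(value: unknown): string {
  if (typeof value !== 'string') throw new Error('api_key_not_string');
  if (value.length < 32 || value.length > 64) throw new Error('api_key_bad_length');
  if (!/^[0-9a-f]+$/i.test(value)) throw new Error('api_key_bad_charset');
  return value;
}

// Guard 2: validate any derived integer before it is used as an
// array index or buffer offset.
function assertSafeIndex(n: number, upperBound: number): number {
  if (!Number.isInteger(n) || n < 0 || n >= upperBound) {
    throw new Error('index_out_of_range');
  }
  return n;
}

const key = assertSafeKey('a'.repeat(32)); // passes: 32 hex chars
const idx = assertSafeIndex(5, 10);        // passes: 0 <= 5 < 10
```

Routing every memory-sensitive use through guards like these makes the bounds policy auditable in one place.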
API Key-Specific Remediation in AdonisJS — concrete code fixes
Remediation focuses on strict validation, bounded data structures, and safe handling of API key material. Always treat API keys as opaque strings and avoid using them as numeric indices. If you must map keys to structured data, use a Map or object with controlled key sets and validate presence before access.
Example 1: Validate length and type before using API key metadata.
import { schema, rules, validator } from '@ioc:Adonis/Core/Validator'
import type { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

const apiKeySchema = schema.create({
  api_key: schema.string({ trim: true }, [
    rules.minLength(32),
    rules.maxLength(64),
    // AdonisJS has no built-in hex rule; restrict the charset via regex
    rules.regex(/^[0-9a-f]+$/i),
  ]),
})

export async function validateApiKey(ctx: HttpContextContract) {
  // schema.create() returns a schema, not a validator; run it via validator.validate
  const payload = await validator.validate({
    schema: apiKeySchema,
    data: ctx.request.body(),
  })
  // Safe to use payload.api_key as a string; no numeric conversion
  return payload.api_key
}
This ensures the key conforms to expected length and character set, preventing excessively long values that could trigger buffer-like behavior in downstream processing.
Example 2: Avoid array indexing with API keys; use Map for lookups.
import type { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

const keyStore = new Map<string, { scope: string; userId: number }>()
// Populate the map from a trusted source at startup, e.g.:
// keyStore.set('valid-hex-key-123', { scope: 'read', userId: 1 })

export function getByKey(ctx: HttpContextContract): void {
  const provided = ctx.request.input('api_key')
  if (!provided) {
    ctx.response.badRequest({ error: 'missing_key' })
    return
  }
  const entry = keyStore.get(provided)
  if (!entry) {
    ctx.response.unauthorized({ error: 'invalid_key' })
    return
  }
  ctx.response.json(entry)
}
Using a Map avoids numeric conversion and enforces key-based retrieval without risky index arithmetic. It also makes it explicit which keys are valid, simplifying audits.
Example 3: Safe buffer handling when interfacing with binary formats.
import { Buffer } from 'buffer'

export function processToken(apiKey: string): Buffer {
  const normalized = apiKey.replace(/-/g, '').toLowerCase()
  // Reject oversized or malformed input instead of silently truncating;
  // Buffer.from(..., 'hex') would otherwise drop invalid or odd trailing characters
  if (normalized.length > 32 || !/^([0-9a-f]{2})*$/.test(normalized)) {
    throw new Error('api_key_invalid')
  }
  // Fixed-size, zero-filled buffer of 32 bytes
  const buf = Buffer.alloc(32)
  const source = Buffer.from(normalized, 'hex')
  source.copy(buf, 0, 0, source.length)
  return buf
}
Here, length is checked before allocation, and copy is bounded by explicit offsets, preventing overflow. The buffer size is fixed, and input that exceeds limits is rejected rather than truncated silently.
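A quick usage sketch of this pattern, with the function restated (including an explicit hex-format check) so the snippet is self-contained and runnable outside AdonisJS:

```typescript
import { Buffer } from 'buffer';

// Restated from Example 3 so this snippet stands alone.
function processToken(apiKey: string): Buffer {
  const normalized = apiKey.replace(/-/g, '').toLowerCase();
  // Reject oversized or non-hex input up front
  if (normalized.length > 32 || !/^([0-9a-f]{2})*$/.test(normalized)) {
    throw new Error('api_key_invalid');
  }
  const buf = Buffer.alloc(32);                 // fixed-size, zero-filled
  const source = Buffer.from(normalized, 'hex');
  source.copy(buf, 0, 0, source.length);        // bounded copy
  return buf;
}

// A well-formed key is accepted and zero-padded into the 32-byte buffer…
const ok = processToken('deadbeef-deadbeef');
// …while an oversized key is rejected before any buffer is touched.
let rejected = false;
try {
  processToken('f'.repeat(34));
} catch {
  rejected = true;
}
```

The key property is that validation happens before allocation, so no code path ever writes attacker-sized data into the fixed buffer.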
These patterns align with the checks provided by middleBrick, which can identify missing validations and unsafe operations in your API definitions. The platform supports continuous monitoring and CI/CD integration (e.g., GitHub Action) to enforce thresholds and fail builds when risky patterns are detected. For teams requiring deeper coverage, the Pro plan offers 100 APIs, continuous scanning, and compliance mapping to frameworks such as OWASP API Top 10 and SOC2.