Heap Overflow with JWT Tokens
How Heap Overflow Manifests in JWT Tokens
A JSON Web Token consists of three base64url‑encoded parts separated by dots: header.payload.signature. When a service parses a JWT it typically decodes each part, allocates a buffer based on the decoded length, and then copies the data into that buffer. If the length supplied by the attacker‑controlled token is not validated before the allocation or copy, an excessively long part can cause a heap overflow.
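For reference, the decoded size of an unpadded base64url segment is fully determined by its encoded length, so a service can bound its allocations before touching the data at all. A minimal sketch (the helper and its name are ours, not from any particular library):

```c
#include <stddef.h>

/* Exact decoded byte count for an unpadded base64url string of length n.
 * A length with n % 4 == 1 cannot occur in valid base64url, so return 0. */
static size_t b64url_decoded_len(size_t n) {
    if (n % 4 == 1) return 0;
    return (n / 4) * 3 + (n % 4 ? n % 4 - 1 : 0);
}
```

Because this needs only the string length, an upper bound can be enforced before any buffer exists.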
In native JWT libraries (e.g., libjwt or custom OpenSSL‑based verification code) the flow often looks like:
size_t part_len = base64url_decode_len(part_str);
unsigned char *buf = malloc(part_len); // allocation based on attacker length
base64url_decode(part_str, buf, part_len); // copy without further bounds check
If part_str is crafted to be megabytes long, malloc may succeed but the subsequent decode routine may write past the end of buf due to an internal length miscalculation, overwriting heap metadata. This can lead to arbitrary code execution or denial‑of‑service when the corrupted heap is later used.
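A hardened counterpart to the snippet above caps the attacker-controlled length before malloc and never writes past the computed size. The following is a sketch, not a specific library's implementation; the function names are ours:

```c
#include <stdlib.h>
#include <string.h>

#define MAX_PART_BYTES 4096  /* generous upper bound for one decoded JWT segment */

static int b64url_val(char c) {
    if (c >= 'A' && c <= 'Z') return c - 'A';
    if (c >= 'a' && c <= 'z') return c - 'a' + 26;
    if (c >= '0' && c <= '9') return c - '0' + 52;
    return c == '-' ? 62 : c == '_' ? 63 : -1;
}

/* Decode one unpadded base64url JWT segment. The decoded size is computed
 * and capped BEFORE malloc, and every write stays inside that size. */
unsigned char *decode_part_checked(const char *part, size_t *out_len) {
    size_t enc = strlen(part);
    if (enc == 0 || enc % 4 == 1) return NULL;   /* impossible base64url length */
    size_t dec = (enc / 4) * 3 + (enc % 4 ? enc % 4 - 1 : 0);
    if (dec > MAX_PART_BYTES) return NULL;       /* reject oversized segment */
    unsigned char *buf = malloc(dec);
    if (buf == NULL) return NULL;
    unsigned acc = 0;
    int bits = 0;
    size_t o = 0;
    for (size_t i = 0; i < enc; i++) {
        int v = b64url_val(part[i]);
        if (v < 0) { free(buf); return NULL; }   /* invalid character */
        acc = (acc << 6) | (unsigned)v;
        bits += 6;
        if (bits >= 8) { bits -= 8; buf[o++] = (unsigned char)(acc >> bits); }
    }
    *out_len = o;  /* o always equals dec here */
    return buf;
}
```

The key property is that the output index can never exceed the size passed to malloc, so an oversized or malformed segment is rejected rather than overrunning the heap.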
Even in higher‑level languages, unsafe native extensions can introduce the same risk. For example, a Node.js addon that calls OpenSSL's EVP_DecodeBlock without sizing the caller‑supplied output buffer for the full decoded length can overflow that heap buffer.
JWT Token-Specific Detection
Detecting a heap‑overflow‑prone JWT parser requires sending malformed tokens that expose the unsafe length handling. A scanner such as middleBrick can automate this by:
- Generating a JWT with an abnormally large header, payload, or signature segment (e.g., 10 MB of base64url characters).
- Submitting the token to unauthenticated endpoints that accept JWTs (authentication, authorization, or API‑gateway routes).
- Monitoring for abnormal responses: crashes, 502/504 errors, sudden latency spikes, or memory‑exhaustion signals in the service logs.
- Checking for error messages that reveal internal buffer sizes (e.g., "input too large" or "invalid length") which indicate the service attempted to process the oversized segment.
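The first step, producing a token with an oversized segment, can be scripted with a short helper. A sketch (the function name and filler values are ours):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build a structurally valid JWT whose payload segment is payload_len bytes
 * of base64url filler ('A'). Caller is responsible for free(). */
char *make_oversized_jwt(size_t payload_len) {
    const char *header = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"; /* {"alg":"HS256","typ":"JWT"} */
    const char *sig = "AAAA";                                     /* dummy signature */
    size_t total = strlen(header) + 1 + payload_len + 1 + strlen(sig) + 1;
    char *tok = malloc(total);
    if (tok == NULL) return NULL;
    char *p = tok + sprintf(tok, "%s.", header);
    memset(p, 'A', payload_len);   /* any base64url alphabet character works */
    p += payload_len;
    sprintf(p, ".%s", sig);
    return tok;
}
```

Passing the result of, say, make_oversized_jwt(10 * 1024 * 1024) in an Authorization header reproduces the oversized-segment probe described above.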
Because middleBrick performs unauthenticated, black‑box testing, it does not need source code or credentials; it simply observes the external behavior. A typical CLI command looks like:
middlebrick scan https://api.example.com/auth --header "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.${large_payload}.${signature}"
If the service returns a 500 error or the connection drops after a few seconds, middleBrick flags the finding under the "Input Validation" category with a severity of "high", providing the offending token length and the observed symptom.
JWT Token-Specific Remediation
The fix is to validate and limit the size of each JWT segment before any decoding or buffer allocation occurs. Use the native safety features of well‑maintained JWT libraries and avoid manual base64 handling.
Node.js (jsonwebtoken)
const jwt = require('jsonwebtoken');
const MAX_SEGMENT_CHARS = 6 * 1024; // ~4 KB decoded; far more than any legitimate token needs

function safeVerify(token, secret) {
  const parts = token.split('.');
  if (parts.length !== 3) {
    throw new Error('Invalid JWT format');
  }
  for (const p of parts) {
    // Reject by encoded length BEFORE any decode, so no oversized buffer is ever allocated
    if (p.length > MAX_SEGMENT_CHARS) {
      throw new Error('JWT segment too large');
    }
  }
  return jwt.verify(token, secret);
}

// usage
try {
  const payload = safeVerify(req.headers.authorization.split(' ')[1], process.env.JWT_SECRET);
  // proceed
} catch (err) {
  res.status(401).send({ error: 'Invalid token' });
}
Python (PyJWT)
import jwt
MAX_SEGMENT_CHARS = 6 * 1024  # ~4 KB decoded; far more than any legitimate token needs

def safe_decode(token, key, algorithms=['HS256']):
    parts = token.split('.')
    if len(parts) != 3:
        raise jwt.InvalidTokenError('Invalid JWT structure')
    for part in parts:
        # Reject by encoded length BEFORE any decode, so no oversized buffer is allocated
        if len(part) > MAX_SEGMENT_CHARS:
            raise jwt.InvalidTokenError('JWT segment exceeds allowed size')
    return jwt.decode(token, key, algorithms=algorithms)

# in a Flask view
try:
    payload = safe_decode(request.headers.get('Authorization').split()[1], SECRET_KEY)
except jwt.PyJWTError as e:
    return jsonify({'error': str(e)}), 401
C (using libjwt safely)
#include <jwt.h>
#include <string.h>
#define MAX_SEGMENT 4096

/* Bound every segment's encoded length before handing the token to libjwt. */
int verify_jwt(const char *token, const char *key) {
    const char *start = token;
    int segments = 0;
    for (;;) {
        const char *dot = strchr(start, '.');
        size_t len = dot ? (size_t)(dot - start) : strlen(start);
        if (len > MAX_SEGMENT) return -2;   /* reject oversized segment */
        segments++;
        if (dot == NULL) break;
        start = dot + 1;
    }
    if (segments != 3) return -1;           /* malformed token */

    jwt_t *jwt = NULL;
    int rc = jwt_decode(&jwt, token, (const unsigned char *)key, (int)strlen(key));
    if (rc == 0) jwt_free(jwt);
    return rc;
}
By enforcing a reasonable upper bound on each segment before any decoding, the application eliminates the path where an attacker‑supplied length drives a heap allocation that is later overrun. This mitigation works regardless of whether the JWT processing happens in a managed language runtime or a native library, and it aligns with the input‑validation guidance of the OWASP API Security Top 10, most closely API4:2023 – Unrestricted Resource Consumption.