Integer Overflow in Cassandra
How Integer Overflow Manifests in Cassandra
Apache Cassandra stores numeric values in columns typed as int, bigint, counter, or user‑defined types that wrap these primitives. When an application accepts user‑supplied numbers without bounds checking and passes them directly to a CQL INSERT or UPDATE, the value can exceed the native type’s range. For a signed 32‑bit int the limits are –2,147,483,648 to 2,147,483,647; for a signed 64‑bit bigint (used by counter columns) the limits are –9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
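The wraparound itself is ordinary two's‑complement arithmetic and is easy to reproduce without a Cassandra cluster. A minimal standalone Java sketch:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // Adding 1 to Integer.MAX_VALUE wraps to Integer.MIN_VALUE.
        int wrapped = Integer.MAX_VALUE + 1;
        System.out.println(wrapped); // -2147483648

        // Narrowing a 64-bit value just past the int range wraps the same way.
        long tooBig = 2_147_483_648L; // Integer.MAX_VALUE + 1
        System.out.println((int) tooBig); // -2147483648

        // Math.toIntExact throws instead of wrapping, making the bug visible.
        try {
            Math.toIntExact(tooBig);
        } catch (ArithmeticException e) {
            System.out.println("out of int range");
        }
    }
}
```

This is why drivers that accept a wider Java type and narrow it to the column type can silently store a negative value.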
An attacker can trigger an overflow by providing a value just beyond the maximum, causing the driver to wrap the number to a negative or unexpectedly small positive value. In Cassandra this can lead to:
- Incorrect counter increments – a counter column meant to track request counts may wrap to a negative value, breaking metrics and billing logic.
- Bypassing business logic – e.g., a price field stored as int cents could wrap from a large legitimate price to a tiny number, enabling price‑tampering attacks.
- Denial‑of‑service – extremely large values may cause the coordinator node to allocate oversized buffers when serializing the mutation, increasing memory pressure and slowing down the cluster.
These issues appear in code paths where raw request parameters are bound to prepared statements without validation, such as a REST endpoint that extracts a JSON field quantity and directly passes it to a DAO method that executes:
session.execute(
preparedStatement.bind(quantity, userId));
If quantity is user‑controlled and not checked, an overflow can occur.
Cassandra‑Specific Detection
Detecting integer‑overflow risk in a Cassandra‑backed API relies on observing where numeric inputs flow into CQL statements without explicit range checks. middleBrick’s Input Validation check performs black‑box probing of each endpoint, sending values that exceed the typical limits for int and bigint (e.g., 2,147,483,648 for int and 9,223,372,036,854,775,808 for bigint) and examines the responses for signs of unexpected behavior.
The scanner looks for:
- Changes in returned numeric fields that suggest wrapping (e.g., a submitted large positive value returning a negative number).
- Error messages or stack traces that indicate driver‑level overflow exceptions (though many drivers silently wrap).
- Performance anomalies such as increased latency when huge numbers are sent, hinting at buffer‑allocation stress.
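A simple way to reproduce this style of boundary probing locally (an illustrative sketch only; it is not middleBrick's actual probe set) is to generate values just inside and just beyond each type's signed range with BigInteger:

```java
import java.math.BigInteger;
import java.util.List;

public class BoundaryProbes {
    // Values just inside and just beyond a signed type's range.
    static List<BigInteger> probes(BigInteger min, BigInteger max) {
        return List.of(
                max,                           // largest legal value
                max.add(BigInteger.ONE),       // first overflowing value
                min,                           // smallest legal value
                min.subtract(BigInteger.ONE)); // first underflowing value
    }

    public static void main(String[] args) {
        // int boundaries: 2147483647, 2147483648, -2147483648, -2147483649
        probes(BigInteger.valueOf(Integer.MIN_VALUE),
               BigInteger.valueOf(Integer.MAX_VALUE))
                .forEach(System.out::println);

        // bigint boundaries around ±9223372036854775807/8
        probes(BigInteger.valueOf(Long.MIN_VALUE),
               BigInteger.valueOf(Long.MAX_VALUE))
                .forEach(System.out::println);
    }
}
```

Sending each probe value to a numeric parameter and comparing the stored or echoed result against the submitted value reveals whether the endpoint wraps, rejects, or truncates out‑of‑range input.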
Because middleBrick works unauthenticated and only needs the base URL, it can be run against a staging or production endpoint to surface these issues before they are exploited. The findings are reported with severity, the affected parameter name, and a short remediation hint, allowing developers to prioritize fixes.
Example of a finding in the middleBrick dashboard:
| Parameter | Sent Value | Observed Effect | Severity |
|---|---|---|---|
| quantity | 2147483648 | Returned count shows –2147483648 | High |
| price_cents | 9223372036854775808 | Price stored as –9223372036854775808 | High |
These results give concrete evidence that the API lacks proper bounds checking on numeric inputs bound to Cassandra columns.
Cassandra‑Specific Remediation
The fix is to validate numeric inputs on the application side before they are sent to Cassandra, ensuring they fit within the target column’s type. Because Cassandra does not provide built‑in check constraints for integer ranges (prior to version 5.0), the validation must happen in the service layer.
Below is a realistic Java example using the DataStax Java Driver 4.x. The method receives a JSON payload, extracts the quantity field, validates it against the int range, and only then executes a prepared statement.
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
public class OrderService {
private final CqlSession session;
private final PreparedStatement insertOrder;
public OrderService(CqlSession session) {
this.session = session;
this.insertOrder = session.prepare(
"INSERT INTO orders (order_id, user_id, quantity) VALUES (?, ?, ?)");
}
public void placeOrder(UUID orderId, UUID userId, long quantity) {
// ---- Validation start ----
if (quantity < Integer.MIN_VALUE || quantity > Integer.MAX_VALUE) {
throw new IllegalArgumentException(
"Quantity must be within 32‑bit signed integer range: " + quantity);
}
// ---- Validation end ----
session.execute(
insertOrder.bind(orderId, userId, (int) quantity));
}
}
For columns defined as bigint (or counter), note that a Java long already spans the exact 64‑bit range, so a check such as value < Long.MIN_VALUE || value > Long.MAX_VALUE can never be true. Out‑of‑range input must be caught before it is narrowed to long, for example by parsing the raw string into a BigInteger first:
BigInteger value = new BigInteger(rawInput);
if (value.compareTo(BigInteger.valueOf(Long.MIN_VALUE)) < 0
    || value.compareTo(BigInteger.valueOf(Long.MAX_VALUE)) > 0) {
  throw new IllegalArgumentException("Value out of bigint range");
}
long safeValue = value.longValueExact();
When using the counter column type, keep in mind that counter updates are applied server‑side, are not idempotent, and wrap silently on 64‑bit overflow. Cassandra itself accepts negative deltas, so validate the increment amount against your business rules before issuing the UPDATE:
PreparedStatement incrementCounter = session.prepare(
"UPDATE user_metrics SET login_count = login_count + ? WHERE user_id = ?");
public void incrementLogin(UUID userId, long delta) {
if (delta < 0) {
// Cassandra allows negative deltas; rejecting them here is a business rule
throw new IllegalArgumentException("Delta must be non‑negative");
}
session.execute(incrementCounter.bind(delta, userId));
}
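Because the server wraps silently if a counter crosses the 64‑bit boundary, an application that tracks a last‑known counter value can reject a would‑be wrap before sending the UPDATE. A sketch (lastKnownCount is a hypothetical application‑side value, not something the driver provides):

```java
public class CounterGuard {
    // Math.addExact throws ArithmeticException instead of wrapping,
    // so a 64-bit overflow is detected before the mutation is issued.
    static long guardedIncrement(long lastKnownCount, long delta) {
        try {
            return Math.addExact(lastKnownCount, delta);
        } catch (ArithmeticException e) {
            throw new IllegalArgumentException(
                    "Increment would overflow bigint counter: " + delta);
        }
    }

    public static void main(String[] args) {
        System.out.println(guardedIncrement(100, 5)); // 105
        try {
            guardedIncrement(Long.MAX_VALUE, 1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

This is a best‑effort guard: the application‑side value can lag behind the real counter under concurrent writes, so it narrows the window rather than eliminating it.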
Additional defensive tactics include:
- Using schema‑level static columns with user‑defined types that encapsulate validation logic (available in Cassandra 5.0+).
- Enabling driver‑side logging or metrics to catch unexpected large values during testing.
- Integrating middleBrick scans into CI/CD (via the GitHub Action) so that any regression that re‑introduces unchecked numeric inputs fails the build before deployment.
By applying these checks, you eliminate the integer‑overflow attack surface while preserving Cassandra’s high‑performance write path.