Severity: HIGH | Tags: rate limiting bypass, express, cockroachdb

Rate Limiting Bypass in Express with CockroachDB

Rate Limiting Bypass in Express with CockroachDB — how this specific combination creates or exposes the vulnerability

Rate limiting in Express is typically implemented in application code or via a reverse proxy or API gateway. When combined with CockroachDB as the backend store for rate-limiting state, misconfigurations or implementation gaps can enable bypasses that allow an attacker to exceed intended request caps. A common pattern is to store request counts per key (e.g., IP or API key) in CockroachDB using a row with a timestamp window. If the check and the increment are not performed atomically, an attacker can exploit race conditions or inconsistent reads to avoid detection.

Consider an Express route that checks a CockroachDB table rate_limits before processing a request. If the read to check the current count and the subsequent write to increment the count are separate operations, an attacker can issue concurrent requests that each read the same count (e.g., 4), increment locally, and write back 5, effectively bypassing a limit of 5 because each request independently sees the pre-increment state. This is a classic time-of-check-to-time-of-use (TOCTOU) race condition. CockroachDB's serializable isolation prevents this only when the read and the write run in the same transaction; if the Express code issues them as separate auto-commit statements, or gives up on serialization failures instead of retrying, the protection weakens.
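
The vulnerable shape looks roughly like the sketch below (the table layout and helper name are illustrative, and window handling is omitted for brevity): the count is read, checked, and written back in separate statements, so concurrent requests can all observe the same pre-increment value.

// VULNERABLE sketch: read, check, and write happen in separate statements.
async function checkRateLimitRacy(client, key, limit) {
  // Read the current count for this key.
  const { rows } = await client.query(
    'SELECT count FROM rate_limits WHERE key = $1',
    [key]
  );
  const current = rows.length ? Number(rows[0].count) : 0;

  // Check against the limit using a value that may already be stale.
  if (current >= limit) return false;

  // Write back current + 1. Concurrent requests that read the same value
  // all pass the check and overwrite one another here.
  await client.query(
    'UPSERT INTO rate_limits (key, count) VALUES ($1, $2)',
    [key, current + 1]
  );
  return true;
}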

Another bypass vector involves identifier selection. If rate limiting is keyed per IP but the Express code extracts the address from headers like X-Forwarded-For instead of using req.ip (which resolves the client address correctly when trust proxy is configured), an attacker can spoof the source identifier. Combined with CockroachDB, this means a single attacker can fragment requests across many fake identifiers, each staying under the threshold, effectively bypassing per-client limits. Additionally, if the window is implemented with a simple timestamp column and the server clock drifts or requests span a window boundary, an attacker can time requests to slip into a fresh window prematurely.
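
For illustration, the problematic key extraction can be as simple as the sketch below (rateLimitKey is a hypothetical helper, not part of Express):

// VULNERABLE sketch: the rate-limit key is taken from a client-controlled header.
// An attacker can send a different X-Forwarded-For value on every request,
// spreading traffic across many keys that each stay under the per-key limit.
function rateLimitKey(req) {
  return req.headers['x-forwarded-for'] || req.socket.remoteAddress;
}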

Insecure transaction handling in Express further exacerbates the issue. For example, using auto-commit mode without explicit retries for serialization failures can lead to lost increments or stale reads. If the rate-limiting query does not explicitly lock or use upserts (e.g., INSERT ... ON CONFLICT DO UPDATE), concurrent requests can interleave in ways that violate intended limits. Since CockroachDB is strongly consistent, these pitfalls are often about application-level transaction design rather than database consistency guarantees.
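
As a sketch of the explicit-retry pattern this paragraph calls for (withRetry is a hypothetical helper built on the pg client, not a library function), a serialization failure rolls the transaction back and reruns the whole callback instead of silently dropping the write:

// Sketch: run a transactional callback, retrying on CockroachDB serialization
// failures (SQLSTATE 40001) instead of surfacing them or losing the write.
async function withRetry(client, fn, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await client.query('BEGIN');
      const result = await fn(client);
      await client.query('COMMIT');
      return result;
    } catch (err) {
      await client.query('ROLLBACK').catch(() => {});
      if (err.code !== '40001' || attempt === maxAttempts) throw err;
      // Serialization conflict: back off briefly, then rerun the transaction.
      await new Promise((resolve) => setTimeout(resolve, 10 * attempt));
    }
  }
}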

Finally, missing or misconfigured middleware allows requests to skip the rate limiter entirely. If the rate-limiting middleware is not applied to all relevant routes, or if conditional logic excludes certain paths or methods, an attacker can target unprotected endpoints. With CockroachDB storing per-key state, the absence of a consistent enforcement point in Express means some requests never check or update the counter, enabling a practical bypass even when the database logic is sound.
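
The ordering pitfall is concrete: in the sketch below (rateLimiter and handler are placeholders for the real limiter middleware and route handlers), the route registered before the limiter is never counted.

const express = require('express');
const app = express();

// Placeholder limiter and handler for illustration.
const rateLimiter = (req, res, next) => { /* increment-and-check here */ next(); };
const handler = (req, res) => res.json({ ok: true });

// Express runs middleware in registration order: anything registered before
// the limiter, or on a router that never mounts it, skips rate limiting entirely.
app.get('/export', handler); // BYPASS: registered before the limiter
app.use(rateLimiter);        // applies only to routes registered after this line
app.get('/api', handler);    // protected

app.listen(3000);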

CockroachDB-Specific Remediation in Express — concrete code fixes

To prevent rate-limiting bypasses, perform the increment-and-check as a single atomic upsert and retry on serialization failures when using CockroachDB from Express. Use the pg client (or an ORM that supports raw queries) so the read and the write happen in one statement inside one transaction. Below is an example that uses async-retry to handle serialization errors, which CockroachDB reports with SQLSTATE '40001'.

const express = require('express');
const { Pool } = require('pg');
const retry = require('async-retry');

const app = express();
// Trust a single proxy hop so req.ip reflects the real client address.
app.set('trust proxy', 1);

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

async function incrementAndGet(client, key, windowMs) {
  const cutoff = new Date(Date.now() - windowMs).toISOString();
  return retry(
    async (bail) => {
      try {
        await client.query('BEGIN');
        // Atomic increment-and-check: start a fresh window if the previous one
        // has expired, otherwise increment the counter for the current window.
        const res = await client.query(
          `INSERT INTO rate_limits (key, count, window_start)
           VALUES ($1, 1, now())
           ON CONFLICT (key) DO UPDATE SET
             count = CASE WHEN rate_limits.window_start < $2::TIMESTAMPTZ
                          THEN 1 ELSE rate_limits.count + 1 END,
             window_start = CASE WHEN rate_limits.window_start < $2::TIMESTAMPTZ
                                 THEN now() ELSE rate_limits.window_start END
           RETURNING count`,
          [key, cutoff]
        );
        await client.query('COMMIT');
        return Number(res.rows[0].count);
      } catch (err) {
        await client.query('ROLLBACK').catch(() => {});
        if (err.code !== '40001') {
          bail(err); // non-retryable error: stop immediately
          return;
        }
        throw err; // serialization failure (40001): let async-retry run it again
      }
    },
    { retries: 5, minTimeout: 10 }
  );
}

app.use(async (req, res, next) => {
  const client = await pool.connect();
  try {
    const count = await incrementAndGet(client, req.ip, 60_000); // 1 minute window
    if (count > 10) { // limit of 10 requests per minute
      return res.status(429).send('Too Many Requests');
    }
    next();
  } catch (err) {
    next(err);
  } finally {
    client.release();
  }
});

app.get('/api', (req, res) => {
  res.json({ ok: true });
});

app.listen(3000);

Define the rate_limits table so that each key maps to a single row the upsert can target; an index on window_start lets periodic cleanup of stale keys avoid full scans:

CREATE TABLE IF NOT EXISTS rate_limits (
  key STRING NOT NULL,
  count INT NOT NULL,
  window_start TIMESTAMPTZ NOT NULL,
  PRIMARY KEY (key)
);
CREATE INDEX IF NOT EXISTS idx_rate_limits_window_start ON rate_limits (window_start);
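
Stale keys can then be purged in the background using that index; below is a minimal sketch that reuses the pool from the example above, with an arbitrary one-hour retention and ten-minute interval:

// Periodically delete keys whose window expired long ago. The secondary index
// on window_start lets the optimizer avoid scanning live rows.
setInterval(async () => {
  try {
    await pool.query(
      "DELETE FROM rate_limits WHERE window_start < now() - INTERVAL '1 hour'"
    );
  } catch (err) {
    console.error('rate_limits purge failed', err);
  }
}, 10 * 60 * 1000); // every 10 minutes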

To address identifier spoofing, configure Express to trust the proxy and use req.ip consistently. Set app.set('trust proxy', 1) when behind a load balancer, and avoid relying on X-Forwarded-For without validation. Combine this with short, fixed time windows and monotonic counters to reduce boundary issues. For continuous monitoring, use the middleBrick dashboard or the CLI (middlebrick scan <url>) to detect inconsistencies in rate-limiting behavior and validate that enforcement aligns with your policy.
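
A minimal sketch of that proxy configuration, assuming exactly one load balancer sits in front of the application:

const express = require('express');
const app = express();

// Trust exactly one proxy hop so req.ip is derived from the address the
// load balancer reports rather than a header the client can forge outright.
app.set('trust proxy', 1);

app.use((req, res, next) => {
  // Key the limiter on the resolved address, not on req.headers['x-forwarded-for'].
  const key = req.ip;
  // ... increment-and-check the counter for `key` here ...
  next();
});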

Related CWEs: Resource Consumption

CWE ID    Name                                        Severity
CWE-400   Uncontrolled Resource Consumption           HIGH
CWE-770   Allocation of Resources Without Limits      MEDIUM
CWE-799   Improper Control of Interaction Frequency   MEDIUM
CWE-835   Infinite Loop                               HIGH
CWE-1050  Excessive Platform Resource Consumption     MEDIUM

Frequently Asked Questions

Why do I need to retry transactions on serialization errors with CockroachDB in Express?
CockroachDB runs transactions at SERIALIZABLE isolation by default; when concurrent transactions conflict, one of them is aborted with SQLSTATE 40001. Retrying ensures the increment-and-check operation succeeds despite interleaving, preventing lost updates that could lead to rate-limit bypass.
Can relying on the database alone prevent rate-limiting bypasses without Express-side enforcement?
No. The database enforces correctness only when the application uses atomic operations and retries. If Express routes skip the rate-limiting check or call it inconsistently, attackers can bypass limits regardless of CockroachDB's capabilities. Consistent middleware and transaction design are essential.