Distributed Denial of Service in FastAPI with CockroachDB
Distributed Denial of Service in FastAPI with CockroachDB — how this specific combination creates or exposes the vulnerability
A DDoS risk in a FastAPI service backed by CockroachDB arises from the interaction between unbounded request rates, synchronous or poorly controlled database access, and CockroachDB's distributed consensus behavior under contention. When FastAPI endpoints execute long-running or unbounded SQL queries (e.g., missing filters, missing pagination, or unthrottled aggregation), they can tie up database connections and consume node resources across the CockroachDB cluster. Because CockroachDB relies on strong consistency and distributed transactions, high contention on hot rows or ranges can amplify latency, causing request queues to build up in FastAPI and increasing tail latencies. Without rate limiting, an attacker can send a high volume of legitimate-looking queries that stress the SQL layer, drive up CPU and I/O across nodes, and ultimately degrade availability for legitimate users. The 12 security checks in middleBrick include Rate Limiting and Input Validation, which highlight whether query patterns and request rates are controlled, and whether safeguards exist to prevent resource exhaustion specific to this stack.
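The unbounded-query pattern described above can be sketched in isolation (the table, column, and function names here are hypothetical): a client-controlled parameter flows straight into a query with no server-side cap, so a single request can force an arbitrarily large scan and cluster-wide aggregation, while a clamped variant bounds the work per request.

```python
def build_report_query(days: int) -> str:
    # Unbounded: `days` comes straight from the request, so days=36500
    # forces a scan over the whole table plus a cluster-wide aggregation.
    # (Interpolating values into SQL is shown only to make the risk visible;
    # real code should use bound parameters even for integers.)
    return (
        "SELECT region, count(*) FROM orders "
        f"WHERE created_at > now() - INTERVAL '{days} days' "
        "GROUP BY region"
    )

def build_report_query_bounded(days: int, max_days: int = 31) -> str:
    # Bounded: clamp the window server-side before it reaches CockroachDB,
    # so no single request can demand more than `max_days` of data.
    days = max(1, min(days, max_days))
    return build_report_query(days)
```

The remediation bullets below apply the same idea (caps, timeouts, rate limits) at each layer of the stack.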
CockroachDB-Specific Remediation in FastAPI — concrete code fixes
Apply targeted query design, connection management, and runtime controls to reduce the DDoS surface when FastAPI talks to CockroachDB.
- Use context timeouts and request-level cancellation to avoid long-running queries that block connection pools:
from fastapi import FastAPI, HTTPException, Depends
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine
import asyncio
import os

DATABASE_URL = os.getenv("COCKROACHDB_URL", "postgresql+asyncpg://root@localhost:26257/defaultdb?sslmode=require")
engine: AsyncEngine = create_async_engine(DATABASE_URL, pool_pre_ping=True, pool_size=20, max_overflow=10)
app = FastAPI()

async def get_db():
    # The async context manager returns the connection to the pool on exit,
    # so no explicit close is needed.
    async with engine.connect() as conn:
        yield conn

@app.get("/widgets/{widget_id}")
async def read_widget(widget_id: int, db=Depends(get_db)):
    try:
        # Bound the query client-side so a slow statement cannot hold a
        # pooled connection indefinitely.
        result = await asyncio.wait_for(
            db.execute(
                text("SELECT id, name, quantity FROM widgets WHERE id = :id").bindparams(id=widget_id)
            ),
            timeout=5.0,
        )
    except asyncio.TimeoutError:
        raise HTTPException(status_code=504, detail="Database timeout")
    row = result.fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="Widget not found")
    return {"id": row[0], "name": row[1], "quantity": row[2]}
- Enforce pagination and limit result sizes to avoid heavy scans and memory pressure:
from fastapi import Query

@app.get("/widgets")
async def list_widgets(
    db=Depends(get_db),
    skip: int = Query(0, ge=0),
    limit: int = Query(20, ge=1, le=100),  # validated cap prevents excessive scans
):
    result = await db.execute(
        text("SELECT id, name, quantity FROM widgets ORDER BY id LIMIT :limit OFFSET :skip").bindparams(limit=limit, skip=skip)
    )
    rows = result.fetchall()
    return [{"id": r[0], "name": r[1], "quantity": r[2]} for r in rows]
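Note that OFFSET still makes CockroachDB read and discard every skipped row, so deep pages remain expensive. Keyset (cursor) pagination bounds the scan to `limit` rows regardless of page depth; a minimal sketch of the query construction, assuming the `widgets.id` primary key from the example above, with `keyset_page_query` as an illustrative helper name:

```python
from typing import Optional, Tuple

def keyset_page_query(last_id: Optional[int], limit: int, max_limit: int = 100) -> Tuple[str, dict]:
    # Keyset pagination: WHERE id > :last_id seeks into the primary index,
    # so the work per request is bounded by `limit` no matter how deep the
    # client pages, unlike OFFSET which scans all skipped rows.
    limit = max(1, min(limit, max_limit))
    if last_id is None:
        sql = "SELECT id, name, quantity FROM widgets ORDER BY id LIMIT :limit"
        params = {"limit": limit}
    else:
        sql = (
            "SELECT id, name, quantity FROM widgets "
            "WHERE id > :last_id ORDER BY id LIMIT :limit"
        )
        params = {"last_id": last_id, "limit": limit}
    return sql, params
```

The handler would pass the returned SQL to `db.execute(text(sql).bindparams(**params))` and hand the last row's `id` back to the client as the cursor for the next page.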
- Use CockroachDB-specific settings to reduce contention, such as setting max SQL memory and statement timeouts:
# Example connection arguments to bound heavy queries server-side.
# asyncpg passes session variables via `server_settings`; the libpq-style
# "options" string used by psycopg does not apply to this driver.
engine = create_async_engine(
    DATABASE_URL,
    connect_args={
        "server_settings": {
            "statement_timeout": "5s",          # CockroachDB session setting
            "application_name": "fastapi-api",  # visible in SHOW SESSIONS / DB Console
        }
    },
    pool_pre_ping=True,
    pool_size=20,
    max_overflow=10,
)
- Apply rate limiting at the FastAPI layer (e.g., with slowapi or fastapi-limiter) to control request bursts before they hit CockroachDB:
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address
from fastapi import Request

limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# slowapi requires the endpoint to accept a `request: Request` argument.
# In a real app, apply the decorator to the existing endpoint rather than
# registering the same path a second time.
@app.get("/widgets/{widget_id}")
@limiter.limit("10/minute")
async def read_widget_limited(request: Request, widget_id: int, db=Depends(get_db)):
    ...
- Instrument queries and monitor for long-running or high-concurrency patterns; use EXPLAIN ANALYZE (with the DISTSQL option for a physical plan diagram) to identify full scans or contention on hot ranges in CockroachDB, and add or adjust indexes accordingly.
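As a lightweight starting point for that instrumentation, a stdlib-only decorator (the names below are illustrative, not from any library) can flag query helpers that blow past a latency budget before full tracing or APM tooling is in place:

```python
import functools
import logging
import time

logger = logging.getLogger("slow-query")

def log_if_slow(threshold_s: float = 0.5):
    # Wraps a synchronous query helper and emits a warning whenever the
    # call's wall-clock time exceeds `threshold_s`. Sustained warnings are
    # a cue to run EXPLAIN ANALYZE on the statement and check for scans.
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                if elapsed >= threshold_s:
                    logger.warning("slow query %s took %.3fs", fn.__name__, elapsed)
        return inner
    return wrap

@log_if_slow(threshold_s=0.001)
def fetch_report():
    time.sleep(0.01)  # simulate a slow statement
    return "report"
```

An async variant would wrap a coroutine the same way with `async def inner` and `await fn(...)`; either form keeps the latency budget explicit in code rather than buried in dashboards.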