Stack Overflow in AdonisJS with CockroachDB
Stack Overflow in AdonisJS with CockroachDB — how this specific combination creates or exposes the vulnerability
AdonisJS encourages database transactions and query-builder patterns that can interact poorly with CockroachDB's distributed SQL behavior under high concurrency or with large result sets. Despite the name, the failure mode described here is resource exhaustion rather than a literal call-stack overflow: it arises when an application opens many unmanaged or long-lived database connections, or when a single transaction holds locks and result sets for too long, exhausting connection pools or memory on the CockroachDB nodes. In AdonisJS this typically surfaces in controllers or scheduled tasks that run heavy queries without streaming, pagination, or explicit transaction lifecycle management.
When using the Database service from @ioc:Adonis/Lucid/Database, developers may write queries that return large result sets without closing the cursor or releasing the transaction. CockroachDB, while compatible with the PostgreSQL wire protocol, enforces strict (serializable) isolation and consistency guarantees across replicas. Long-running queries or uncommitted transactions cause session buildup, driving up memory usage and increasing the likelihood of hitting connection limits. This combination, AdonisJS application code that does not explicitly manage transaction scope plus CockroachDB's strict concurrency model, amplifies the risk of resource exhaustion and observable denial-of-service behavior.
Additionally, implicit transactions in AdonisJS (relying on Lucid's default transaction handling) can leave connections open if errors are not handled correctly, and CockroachDB's distributed, multi-node architecture can exacerbate this when client-side connection pools are misconfigured, leaving many idle sessions. The OWASP API Security Top 10 category 'Broken Object Level Authorization' can intersect here if overly broad queries return more rows than intended, increasing load. Input validation failures that produce poor query plans stress the database further, creating conditions where Stack Overflow–like symptoms appear without an explicit exploit.
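One concrete consequence of CockroachDB's serializable concurrency model is that transactions can fail with retryable serialization errors (SQLSTATE 40001); application code that neither retries nor cleans up after these errors is exactly the kind of code that leaves sessions lingering. A minimal sketch of a client-side retry wrapper follows; the helper name, backoff constants, and error-shape check are illustrative assumptions (node-postgres exposes SQLSTATE as `error.code`), not part of Lucid's API:

```typescript
// Illustrative client-side retry loop for CockroachDB's retryable
// serialization errors (SQLSTATE 40001). In a real AdonisJS app the
// `work` callback would wrap a Lucid transaction (Database.transaction).
type TxWork<T> = () => Promise<T>

function isRetryable(error: unknown): boolean {
  // node-postgres surfaces SQLSTATE on error.code (an assumption of
  // this sketch; adjust for your driver's error shape)
  return typeof error === 'object' && error !== null && (error as any).code === '40001'
}

async function withRetries<T>(work: TxWork<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await work()
    } catch (error) {
      if (!isRetryable(error) || attempt >= maxAttempts) {
        throw error
      }
      // Exponential backoff, capped so retries don't hammer the cluster
      const delayMs = Math.min(100 * 2 ** (attempt - 1), 1000)
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
}
```

Because the work callback is re-invoked from scratch on each attempt, each retry opens a fresh transaction instead of reusing a poisoned one.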
CockroachDB-Specific Remediation in AdonisJS — concrete code fixes
Mitigate these Stack Overflow (resource-exhaustion) risks by controlling transaction lifetime, using cursor-based batching or pagination, and configuring the connection pool deliberately. Prefer Lucid's managed transaction blocks, which commit or roll back automatically, and release any manually acquired resources in finally blocks.
Example 1: Managed transaction with automatic commit/rollback

import Database from '@ioc:Adonis/Lucid/Database'

export default class ReportsController {
  public async index() {
    // Database.transaction commits on success and rolls back on any
    // thrown error, so the connection always returns to the pool.
    // (Database.connection() is synchronous in Lucid and its clients
    // have no release() method, so the manual acquire/release pattern
    // is unnecessary here.)
    await Database.transaction(async (trx) => {
      const rows = await trx
        .from('events')
        .where('created_at', '>', new Date(Date.now() - 7 * 24 * 60 * 60 * 1000))
        .limit(500)
        .select('id', 'name')

      for (const row of rows) {
        // process row
      }
    })
  }
}
Example 2: Cursor-based batching to avoid holding large result sets in memory

import Database from '@ioc:Adonis/Lucid/Database'

export default class DataController {
  public async export() {
    // PostgreSQL-style cursors (supported by CockroachDB since v22.1)
    // must live inside a transaction. Fetching in fixed-size batches
    // keeps memory usage flat instead of buffering every row, and the
    // cursor is closed explicitly before the transaction ends.
    await Database.transaction(async (trx) => {
      await trx.rawQuery(
        'DECLARE c CURSOR FOR SELECT id, email FROM users WHERE status = ?',
        ['active']
      )
      while (true) {
        const batch = await trx.rawQuery('FETCH 100 FROM c')
        if (batch.rows.length === 0) {
          break
        }
        for (const row of batch.rows) {
          // yield or write to a stream to avoid accumulation
          console.log(row)
        }
      }
      await trx.rawQuery('CLOSE c')
    })
  }
}
Example 3: Paginated query with explicit limit/offset

import Database from '@ioc:Adonis/Lucid/Database'

export default class UsersController {
  public async index({ request }) {
    const page = Number(request.qs().page) || 1
    const limit = 50

    // Lucid checks a pooled connection out for the duration of the
    // query and releases it automatically; no manual release is needed.
    // An explicit ORDER BY makes the page boundaries deterministic.
    return Database
      .from('users')
      .select('id', 'username', 'email')
      .orderBy('id')
      .limit(limit)
      .offset((page - 1) * limit)
  }
}
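OFFSET pagination forces the database to scan and discard every skipped row, so cost grows with the page number; CockroachDB's own guidance favors keyset (cursor) pagination on an indexed column for large tables. To make the semantics concrete without a live database, here is a pure in-memory model of keyset paging; the `keysetPage` helper and `User` shape are illustrative, and the Lucid equivalent appears in the leading comment:

```typescript
// In-memory model of keyset pagination. Against CockroachDB the same
// page would be fetched roughly as (Lucid sketch, mirrors Example 3):
//   const q = Database.from('users').select('id', 'username').orderBy('id').limit(limit)
//   if (lastId !== null) q.where('id', '>', lastId)
// Unlike OFFSET, the database can seek straight to `lastId` via the
// primary-key index instead of scanning and discarding skipped rows.
interface User {
  id: number
  username: string
}

function keysetPage(rows: User[], lastId: number | null, limit: number): User[] {
  return rows
    .filter((r) => lastId === null || r.id > lastId) // resume after the last-seen key
    .sort((a, b) => a.id - b.id)                     // stable, index-like ordering
    .slice(0, limit)
}
```

The caller threads the last row's `id` of one page into the next request, so no page ever depends on how many rows precede it.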
Configuration guidance
- Set the connection pool size (pool.min / pool.max) in config/database.ts to values aligned with your CockroachDB cluster's capacity; avoid overly large pools, since every application instance multiplies the total connection count.
- Use query timeouts and statement cancellation where supported (for example, CockroachDB's statement_timeout session setting) to prevent long-running queries from holding sessions.
- Validate and sanitize all input that affects WHERE clauses or JOIN conditions to avoid unintentionally expensive query plans.
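The pool settings above live in the pool block of config/database.ts, which Lucid forwards to knex's pool (tarn.js). A hedged fragment follows; the numbers are placeholders rather than recommendations for any particular cluster size, and the Env keys are illustrative:

```
// config/database.ts (fragment) — values are illustrative placeholders.
{
  pg: {
    client: 'pg',
    connection: {
      host: Env.get('PG_HOST'),
      port: Env.get('PG_PORT'),
      // ...
    },
    pool: {
      min: 2,                      // warm connections kept open
      max: 10,                     // hard cap per app instance
      acquireTimeoutMillis: 30000, // fail fast instead of queueing forever
      idleTimeoutMillis: 30000,    // return idle sessions to the cluster
    },
    healthCheck: true,
  },
}
```

Keep app-instances × pool.max within the connection budget your CockroachDB nodes can comfortably serve, per the pool-sizing bullet above.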