# Hallucination Attacks in AdonisJS with CockroachDB

## How This Specific Combination Creates or Exposes the Vulnerability
A hallucination attack in the context of AdonisJS with CockroachDB occurs when an LLM-integrated endpoint produces fabricated or misleading database facts that appear authoritative. Because AdonisJS applications sometimes use LLM services to generate SQL or interpret query results, an attacker can supply crafted inputs that steer the model into inventing table states, row counts, or constraint violations that do not exist in the actual CockroachDB cluster.
With CockroachDB, this risk is compounded by its distributed SQL semantics and serializable isolation: errors surfaced by speculative queries can reveal more about cluster state than the developer intends. An AdonisJS controller that passes user-supplied identifiers directly into an LLM prompt, such as "What is the balance for account_id 123?", may cause the model to hallucinate a row that satisfies the prompt but violates uniqueness or foreign-key expectations. If the application then issues a CockroachDB query based on that hallucinated fact (e.g., a JOIN that references a non-existent row), runtime behavior diverges from what the application expects, leaking schema details or execution paths through error messages or inconsistent pagination.
The vulnerability chain typically involves: (1) an unauthenticated or weakly authenticated AdonisJS route that accepts free-form text; (2) integration with an LLM endpoint whose prompt context includes raw user input and a description of the CockroachDB schema; (3) the model generating plausible but incorrect SQL fragments or row identifiers; (4) the application executing those fragments against CockroachDB, which may return empty results or unexpected subsets, letting an attacker infer schema details or trigger side channels. Because framework middleware does not automatically block these patterns, developers must validate and sanitize all inputs that influence LLM prompts and verify model claims against actual CockroachDB query results.
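The last link in that chain can be broken by grounding every model claim in real query results. The sketch below is hypothetical (the `LlmClaim` shape and `claimIsGrounded` helper are illustrative, not part of any real API); it shows the core check of refusing to act on a row identifier the database never returned.

```typescript
// Hypothetical shape for a fact the LLM asserted about the database.
interface LlmClaim {
  table: string
  rowId: number
  claim: string // e.g. "account 123 has balance 500"
}

// Returns true only when the id the model referenced actually exists in the
// result set fetched from CockroachDB; false flags a likely hallucination.
function claimIsGrounded(llmClaim: LlmClaim, dbRows: Array<{ id: number }>): boolean {
  return dbRows.some((row) => row.id === llmClaim.rowId)
}

// Rows actually returned by a parameterized query:
const rows = [{ id: 101 }, { id: 102 }]
const grounded = claimIsGrounded({ table: 'accounts', rowId: 101, claim: '...' }, rows)
const hallucinated = claimIsGrounded({ table: 'accounts', rowId: 999, claim: '...' }, rows)
```

Anything the model says about a row that fails this check should be discarded before it influences a response or a follow-up query.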
## CockroachDB-Specific Remediation in AdonisJS: Concrete Code Fixes
Remediation focuses on strict input validation, deterministic SQL generation, and avoiding direct concatenation of user data into prompts or queries. When using CockroachDB with AdonisJS, prefer parameterized queries and schema-aware prompt engineering to shrink the hallucination surface.
### 1. Parameterized queries with Lucid (built on Knex)
Always use bound parameters instead of string interpolation. This ensures user input is never interpreted as executable SQL, which curbs opportunities for hallucination-driven injection or misinterpretation.
```typescript
// app/Controllers/Http/AccountsController.ts
import Database from '@ioc:Adonis/Lucid/Database'
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class AccountsController {
  public async show({ request, response }: HttpContextContract) {
    // Query-string values arrive as strings; coerce before validating.
    const accountId = Number(request.qs().id)
    if (!Number.isInteger(accountId) || accountId <= 0) {
      return response.badRequest({ error: 'invalid_account_id' })
    }
    // Safe: parameterized query to CockroachDB via Lucid/Knex
    const account = await Database.from('accounts').where('id', accountId).first()
    return response.ok(account)
  }
}
```
### 2. Schema-constrained prompt generation
When constructing prompts for an LLM, embed the exact table and column names from CockroachDB and enforce allowed values. This reduces the model’s freedom to invent column names or constraints that do not exist.
```typescript
// Example prompt template for an LLM query assistant
const buildPrompt = (userId: number, allowedTables: string[]) => {
  // Intersect the caller's tables with a hard-coded allowlist.
  const safeTables = allowedTables.filter((t) =>
    ['users', 'accounts', 'transactions'].includes(t)
  )
  return `You have read-only access to CockroachDB tables: ${safeTables.join(', ')}.
Generate a SQL query to fetch user_id=${userId} from the users table. Return only the SQL string.`
}
```
### 3. Post-execution validation against actual CockroachDB metadata
After an LLM produces a query or row suggestion, validate the objects it references against CockroachDB's catalogs (e.g., information_schema.tables, which CockroachDB supports) before execution. This detects hallucinated objects that do not exist in the cluster.
```typescript
// Validate table existence in CockroachDB before using LLM-suggested table names
import Database from '@ioc:Adonis/Lucid/Database'

export async function validateTable(tableName: string): Promise<boolean> {
  // Bind the name as a parameter; Lucid/Knex raw queries use `?` placeholders.
  const result = await Database.rawQuery(
    `SELECT table_name FROM information_schema.tables
     WHERE table_schema = 'public' AND table_name = ?`,
    [tableName]
  )
  return result.rows.length > 0
}
```
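Before paying for a catalog round trip, it is worth rejecting names that could never be valid identifiers in the first place. This hypothetical helper (the `isPlainIdentifier` name and the exact pattern are assumptions, not an AdonisJS or CockroachDB API) applies a cheap syntactic gate; the catalog lookup above remains the source of truth.

```typescript
// Accept only plain unquoted identifiers: a letter or underscore followed by
// letters, digits, or underscores, capped at 63 characters (PostgreSQL's
// default identifier limit, which CockroachDB-compatible schemas stay within).
function isPlainIdentifier(name: string): boolean {
  return /^[a-z_][a-z0-9_]{0,62}$/.test(name)
}

const fine = isPlainIdentifier('accounts')
const injection = isPlainIdentifier('accounts; DROP TABLE users')
```

Names that fail this check never reach the database at all, which keeps obviously hallucinated or injected strings out of even the metadata query.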
### 4. Rate limiting and anomaly detection on LLM outputs
Apply strict rate limits on LLM calls and monitor for abnormal patterns such as repeated requests with similar prompts that could indicate probing for hallucination weaknesses. Complement this with schema-aware assertions in your application logic.
```typescript
// Example middleware sketch for an AdonisJS route
import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class RateLimiter {
  private recent: Map<string, number[]> = new Map()

  public async handle(ctx: HttpContextContract, next: () => Promise<void>) {
    const key = ctx.request.url()
    const now = Date.now()
    const windowMs = 60_000 // 1 minute
    // Keep only timestamps that still fall inside the window.
    const entries = this.recent.get(key)?.filter((t) => now - t < windowMs) ?? []
    if (entries.length >= 10) {
      // At most 10 requests per key per window.
      ctx.response.status(429).send({ error: 'rate_limit_exceeded' })
      return
    }
    entries.push(now)
    this.recent.set(key, entries)
    await next()
  }
}
```
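The sliding-window decision inside that middleware can be extracted as a pure function, which makes the limit behavior easy to test without an HTTP context. This is a sketch of the same logic under the same assumptions (10 requests per 60-second window); the function name is illustrative.

```typescript
// Drop timestamps older than `windowMs`, then allow the call only while the
// window holds fewer than `limit` hits. Returns the updated window state.
function slidingWindowAllow(
  timestamps: number[],
  now: number,
  windowMs: number,
  limit: number,
): { allowed: boolean; timestamps: number[] } {
  const recent = timestamps.filter((t) => now - t < windowMs)
  if (recent.length >= limit) {
    return { allowed: false, timestamps: recent }
  }
  recent.push(now)
  return { allowed: true, timestamps: recent }
}

// Simulate 12 requests within the same minute against a limit of 10:
let state: number[] = []
let lastAllowed = true
for (let i = 0; i < 12; i++) {
  const result = slidingWindowAllow(state, 1_000 + i, 60_000, 10)
  state = result.timestamps
  lastAllowed = result.allowed
}
```

The first 10 calls pass and the remaining 2 are refused, so repeated probing of an LLM-backed route is throttled before it can map out hallucination weaknesses.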
## Related CWEs
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |